The Need for an Independent AI Grid

Original link: https://amppublic.com/

## The Bitter Lesson and the Future of AI Innovation

The "bitter lesson" of AI progress is that scaling compute is what unlocks the frontier. The recent success of talent-dense teams such as Anthropic, Luma, and ElevenLabs demonstrates this: by using compute efficiently, they have produced state-of-the-art results. Independent teams, however, face a critical challenge: acquiring and *efficiently* using compute at scale is expensive and often wasteful (30-40% of FLOPs go unused). This forces a hard choice - accept inefficient use of compute, or join a larger organization, which can stifle innovation. The proposed solution is an "AI grid" - pooled compute infrastructure that lets independent teams retain control while maximizing utilization through shared infrastructure and smoothed demand. The grid addresses the distinctive needs of frontier workloads (data corruption, scheduling) that generic cloud providers do not solve, and fosters collective intelligence and security. Although some teams may eventually scale to self-sufficiency, the grid's advantages - particularly demand smoothing and collective problem-solving - will remain valuable. AMP PBC is building this grid, aiming to maximize frontier output and sustain a healthy, independent AI ecosystem.


Original Text

The bitter lesson tells us to scale compute to unlock frontier AI progress. The empirical record confirms this.

Focused, independent teams have demonstrated extraordinary output per unit of compute in recent years. Anthropic/Claude in code, Black Forest Labs/Flux in image generation, Luma in video generation, ElevenLabs and Sesame in speech and conversation - all are teams that have produced state-of-the-art output in remarkably little time relative to approaches that ignore the bitter lesson.

The optimal unit of frontier progress is a focused, talent-dense team with access to enormous compute.

This dynamic is accelerating. AI tooling is making focused teams radically more capable. What took a large team in 2022 can now be done by a five-person lab in 2026. Code generation, data pipeline automation, and open-weight models all compound the returns to talent density - and the penalty for organizational bloat. This also means the number of teams capable of frontier work is exploding.

The problem is that independence comes with a heavy structural cost - low compute utilization and uncertain access.

Frontier workloads are often unpredictable - massive training runs followed by periods of cyclical inference, interspersed with idle capacity, and many shades in between. It is difficult for individual teams to efficiently provision for this. They often have to overprovision for peaks and waste during troughs. They lack the dedicated multi-tenancy and scale to optimize job orchestration.

The result is that the field's most productive teams are also frequently its least efficient consumers of its most expensive input. Empirically, it is not uncommon for 30-40% of an independent team's FLOPs to go unused, and yet these teams feel perpetually under-resourced on compute.
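The overprovisioning arithmetic behind that figure is easy to sketch. The demand profile below is hypothetical and purely illustrative, not data from the article: a team must provision for its peak day, so every trough becomes idle capacity.

```python
# Illustrative only: a team provisions for peak demand, so idle
# capacity accumulates during troughs.

# Hypothetical weekly demand in GPU-hours per day: a training burst,
# then tapering cyclical inference.
daily_demand = [960, 960, 800, 600, 500, 350, 200]

peak = max(daily_demand)                 # capacity must cover this
provisioned = peak * len(daily_demand)   # GPU-hours bought for the week
used = sum(daily_demand)

utilization = used / provisioned
wasted = 1 - utilization
print(f"utilization: {utilization:.0%}, wasted FLOPs: {wasted:.0%}")
# → utilization: 65%, wasted FLOPs: 35%
```

With this made-up profile, about a third of the provisioned FLOPs are never used - squarely in the 30-40% range the article cites - even though the team is genuinely capacity-constrained during its training bursts.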

This creates a brutal choice. To access compute at scale and use it efficiently, independent teams often have to accept that a significant fraction of their most critical resource is being burned, or they must reluctantly join larger, unaligned organizations that have secured compute access at scale.

In that timeline, humanity is worse off, since the number of teams that can develop at the frontier shrinks. A healthy independent frontier technology ecosystem is good for innovation.

The Grid

Frontier innovation, however, consumes scarce physical resources - energy, land, rare earths - at increasing scale. Those resources need to flow to organizations that can effectively convert watts, acres, and dollars into breakthroughs. Maximizing frontier output per unit of scarce physical resources is a humanity-scale problem.

An independent AI grid - a system that pools compute across independent teams - resolves this problem.

This is done by decoupling the two scaling problems. Innovation scales through independence. Compute scales through pooled infrastructure.

The utilization economics can be powerful. When grid members are in different phases at any given time - one team is in research mode while another is in deployment mode - the aggregate demand curve is smooth even when each individual workload is spiky.

When independent teams pool their compute needs, they create an infrastructure layer whose sole function is to maximize utilization for each other, without compromising individual freedoms. Each member stays independent, retains full control over its own baseload, and gets access to automated infrastructure at a scale that would otherwise require becoming the kind of organization that produces fewer breakthroughs per unit of compute.

Frontier workloads impose hard infrastructure problems - silent data corruption, topology-aware scheduling, power management, fast checkpointing - that generic cloud providers do not solve. Today, each lab builds its own team to solve them, duplicating effort across the field. A grid solves them once, as shared automation that every member can leverage.
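As one concrete instance of that shared automation, here is a minimal sketch of checksummed checkpointing, which lets restored training state fail loudly instead of silently when bytes are corrupted at rest or in transit. The function names and on-disk format are assumptions for illustration, not AMP's actual implementation:

```python
import hashlib
import pickle

def save_checkpoint(state: dict, path: str) -> None:
    """Serialize state and prepend a SHA-256 digest of the payload."""
    payload = pickle.dumps(state)
    digest = hashlib.sha256(payload).digest()  # 32 bytes
    with open(path, "wb") as f:
        f.write(digest + payload)

def load_checkpoint(path: str) -> dict:
    """Reload state, raising if the payload was silently corrupted."""
    with open(path, "rb") as f:
        blob = f.read()
    digest, payload = blob[:32], blob[32:]
    if hashlib.sha256(payload).digest() != digest:
        raise IOError(f"silent data corruption detected in {path}")
    return pickle.loads(payload)
```

A grid-scale version would additionally shard checkpoints across nodes and verify them asynchronously, but the core defense is the same end-to-end checksum over the stored bytes.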

One objection might be that an AI grid is just a cloud with more steps.

The distinction is that a grid pools compute across providers in a way that makes compute access as flexible as possible for individual teams.

A second objection might be that grid members will graduate - that as an individual grid member scales, it can replicate the grid's value internally and no longer needs it. This may be at least partially true: the things a single member can replicate - a scheduler, an ops team, procurement leverage, token discounts - do get less valuable at scale. But the things only the grid collectively provides get more valuable at scale: the smoothed demand curve across all members, the collective intelligence across the ecosystem, and the security at scale in times of crisis that an aligned grid can deliver to each individual member.

The bitter lesson told us to scale compute. Reality tells us this is difficult without compromising independence at scale. An AI grid is a system that can, if operated correctly, fix this.

AMP PBC

AMP is a public benefit corporation. Our vision is a healthy, independent frontier technology ecosystem. Our mission is to maximize the world's frontier output.

Our full time founding team includes alums from A16Z's AI Infrastructure fund, Google's global AI infrastructure engineering team, and partners from Orrick and Edelman.

Our founding grid partners include some of the leading research labs in the world.

Our immediate goal is to ensure the same compute consumed via AMP buys more training runs, more experiments, more frontier research output for our members, and for humanity.

We are now inviting mission-aligned organizations to join the grid as early members. You can reach out to us at [email protected] if you'd like to learn more.
