The Closing of the Frontier

Original link: https://tanyaverma.sh/2026/04/10/closing-of-the-frontier.html

The release of Anthropic's "Mythos" model marks a worrying shift: a new frontier is closing, much like the end of free land in American history. The internet once offered anyone leverage and a chance to innovate, but a growing gap is opening between the AI models available to the public and those reserved for the wealthy and well-connected.

This concentration of intelligence creates a potentially dangerous neofeudal dynamic. Capital now converts easily into superhuman capability, making it nearly impossible for newcomers to compete. Unlike nuclear weapons, intelligence is a creative force, and a monopoly on it could create a permanent underclass.

The author argues against this privatization of AI and for broader access: not necessarily an open API for everyone, but a system with due process, transparency, and an appeals mechanism. Restricting access hampers safety research and innovation while incubating a "zero-day generator" in the hands of a few organizations with questionable security records. There is still hope that, as in the shift from mainframes to personal computers, hardware will become more accessible and democratize AI. For now, though, the trend risks snuffing out that potential and repeating historical patterns of extraction and concentrated power.

## The Closing of the AI Frontier - Summary

A recent Hacker News discussion centered on Anthropic's decision to restrict access to its new "Mythos" AI model to enterprise partners such as CrowdStrike and Microsoft, sparking debate about the future of AI accessibility. Some commenters saw it as a marketing tactic, predicting a broader release once the hype has built. Others worried that restricting powerful AI to established players creates a dangerous power imbalance, echoing fears of a "two-tier" system in which access is determined by wealth and status.

A key point of contention is the internet's original promise of permissionless innovation, now threatened by increasingly closed AI technology. The move was compared to the "closing of the frontier" in American history, when the free land that offered opportunity disappeared.

While some acknowledged Anthropic's current compute constraints, many argued that competitive pressure will eventually force a wider release. Others pointed out that open-source models may catch up, offering a more accessible alternative. The core question is whether AI will remain a democratizing force or become another tool that entrenches existing hierarchies.

## Original Text

The Anthropic Mythos announcement is the first time in my life I’ve felt truly poor. Maybe because I grew up on the internet and it was the one permissionless place where you could have leverage and a shot at uncapped exploration and ambition. That is now changing with the gap between models that are publicly available vs those reserved for the already wealthy and pre-established.

In 1893, Frederick Jackson Turner argued that much that is distinctive about America was shaped by the existence of free land to the West where anyone could start over, and that this condition infused America with its characteristic liberty, egalitarianism, rejection of feudalistic hierarchy, self-sufficiency, and ambition.

Since the days when the fleet of Columbus sailed into the waters of the New World, America has been another name for opportunity... But never again will such gifts of free land offer themselves... each frontier did indeed furnish a new field of opportunity, a gate of escape from the bondage of the past... And now, four centuries from the discovery of America, at the end of a hundred years of life under the Constitution, the frontier has gone, and with its going has closed the first period of American history. – Frederick Jackson Turner, The Significance of the Frontier in American History, 1893

We are witnessing the closing of yet another frontier in history. Even though the American dream is nearly dead, the one somewhat accessible escape hatch that offered economic mobility and cherished individual agency was the wired. Perhaps you would never own a house, but when it came to technology, a poor person and the wealthiest person in the world had access to the same internet, the same phone, the same encryption protocols (my TLS connection wasn’t using AES-ECB-quant-8 vs your AES-GCM-512).

A 16-year-old with no credentials and no capital could just do things. The world of bits offered the freedom to build without being drowned in arbitrary constraints, in a way that didn’t require assembling vast capital or prestige or connections, where your creativity and work could speak for itself, and you had agency. This is a precious thing and we should seek to preserve it for as long as it is possible, because there is still much possibility left. We’ve only just begun scratching the surface for what is possible to build and how best to harness the intelligence of powerful models.

I feel this most acutely in the cordoning off of frontier models from public access, though the logic also applies to the general replacement of labor and intelligence with capital. Rudolf Laine articulates this well in his essay, Capital, AGI and Ambition.

Those with significant capital when labour-replacing AI started have a permanent advantage. Upstarts will not defeat them, since capital now trivially converts into superhuman labour in any field. – Rudolf Laine, 2024

George Hotz more bluntly calls it neofeudalism.

This isn’t like nuclear weapons, this is intelligence itself. A nuclear weapon can only destroy; intelligence is the greatest creative force in the world. If a small group of people have a monopoly on it, you are the permanent underclass in the same way animals are. – George Hotz, 2026

The Manhattan Project comparison that the labs reach for again and again has long been a pet peeve of mine. Nuclear non-proliferation worked, to the extent it did, because nukes are instruments of mass destruction and laws are written in blood. Intelligence is economically valuable in a wholly different way. Every country will pursue it as far as it can, and given the multipolar world we are back in, and our recent record with treaties and commitments, I do not believe there will be global alignment on risk reduction. Not before there is blood, at the very least.


Anthropic has said that it does not plan to make Mythos generally available. It's one thing not to release the model at all and keep it under full containment; it would also be defensible to set an embargo period after which the model is released for public use with some vetting.

Today we’re announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. – Anthropic

But it is another thing entirely to share access only with enterprise partners such as CrowdStrike, Cisco, and Microsoft, companies known to suffer massive security incidents regularly. How dangerous, from a safety standpoint, would it be if the private capability gap grows exponentially (already happening with recursive self-improvement) before the world has had any time to price it in, and there were a security breach at one of these labs or their partners? Or if a foreign lab drops a near-equivalent model with minimal access restrictions? Though the limited availability of compute has a sizable hand in the restriction calculus here as well.

Those are not the only organizations with security concerns. I am not arguing that the model should be made publicly available to anyone via API. But structurally speaking, a private company has built the most capable AI model in the world and has decided unilaterally who gets access and who is worth protecting. They and their established partners are now sitting on a zero-day generator, accumulating private knowledge of exploits in everyone else's infrastructure: capabilities that once belonged to nation states and are now being privatized to a handful of well-connected organizations. These are state-scale capabilities without state-scale accountability. If you believe in democracy, we built three branches of government for a reason. Anthropic is simultaneously the manufacturer, the regulator, and the appeals court, with no on-ramp even for someone willing to pay and undergo strong KYC.


API access may not be full ownership, but at least it is a programmable surface that doesn't foreclose possibility. Locking that down for safety and "unapproved" use certainly helps prevent abuse, but it also stifles innovation. Public access also forces latent capabilities into the open, which, given how eval-aware models are (the Mythos alignment report calls eval awareness "a key challenge") and the constraints of artificial red-teaming, is better from a safety standpoint. Fail fast and fix, as opposed to accumulating a capabilities overhang that has never been tested in the real world. It's hard enough as it is for the world to adjust to and make sense of AI capabilities, when half the American population thinks AI is worthless because they are forced to use Copilot at work.

The reaction to AIs finding security vulnerabilities also feels overstated. Security is always an arms race. A decade ago fuzzers like American Fuzzy Lop looked like a gift to attackers, but many security-first projects instead built fuzzing into their CI pipelines and now catch most bugs before release. I wrote about this symmetry in my post on the death of security through obscurity. Here again, frontier model access will allow more people to build security systems that will help the world upskill its security. For too long, organizations have been cavalier about security and risked their customers’ data with poor security practices. The transition will be rough, but this is a period of great upheaval in many dimensions, so why would we expect security to get by unscathed?
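The arms-race symmetry is easy to see in miniature. Below is a toy sketch of the fuzz-in-CI idea: `parse_header` is a hypothetical target function (not from the post), and the random loop stands in for what a coverage-guided fuzzer like American Fuzzy Lop does far more cleverly. The point is only that the same crash-finding capability, run by the defender on every commit, surfaces bugs before release instead of after.

```python
import random


def parse_header(data: bytes) -> dict:
    """Hypothetical target: parse 'KEY:VALUE' lines, tolerating junk input."""
    headers = {}
    for line in data.split(b"\n"):
        if b":" in line:
            key, _, value = line.partition(b":")
            headers[key.strip()] = value.strip()
    return headers


def fuzz(target, iterations=1000, seed=0):
    """Throw random byte strings at `target`; any uncaught exception is a finding.

    A real fuzzer mutates a corpus using coverage feedback; this blind loop
    is only an illustration of the CI gate: fail the build if findings exist.
    """
    rng = random.Random(seed)
    findings = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            target(data)
        except Exception as exc:  # a crash the CI job would report
            findings.append((data, exc))
    return findings


if __name__ == "__main__":
    crashes = fuzz(parse_header)
    print(f"{len(crashes)} crashing inputs found")
```

In a real pipeline this loop is replaced by a coverage-guided fuzzer (AFL, libFuzzer, OSS-Fuzz) running continuously, so crashing inputs surface in the defender's CI rather than in an attacker's hands.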

And the people who would actually do rigorous safety research on these models can’t get access to them. A couple weekends ago I was at the MATS research symposium. MATS is one of the most serious AI safety programs out there, and about two-thirds of the posters involved a Chinese open source model. Many experiments require white-box access, and these researchers can’t get it anywhere else. Meanwhile, the mainstream AI safety position is that open source models are dangerous. Most projects were also restricted to tiny models due to compute limitations, leaving open whether their results would survive at frontier scale. Thank god for open source models, because if meaningful safety research depends on the benevolence of the labs, or on being hired by one, that would be disappointing.


You can generate your own electricity with a solar panel (think local models), but most people would rather pay a utility bill. And the power company doesn't decide, on the basis of pedigree, who is worthy of electricity. Intelligence should work similarly: the capabilities you can access may scale with vetting and due process, but the presumption should be access. Add safety guardrails to restrict dangerous use; start by making them overly trigger-happy if you must, and calibrate over time. But the default should be to allow entry.

If you have government-level capabilities, it's time to start acting like a government. There should be due process, publicly disclosed criteria for who gets access and why, and a clear appeals mechanism that isn't "email the trust and safety team and pray." And when you cut someone off, you should be required to say why, because getting your frontier model access revoked is akin to being unbanked. From an audit perspective, there should be FOIA-style obligations to show your work in safety-critical areas.


There is something special about training a model on all of humanity's data and then locking it up for the benefit of a few well-connected organizations you have relationships with. Maybe you'll notice another historical pattern here. Extract value from a population that can't meaningfully consent, concentrate the returns within a small inner circle, and then offer some version of charity to the people you extracted from as moral cover for the arrangement. The pattern repeats itself with labs promising post-AGI UBI or encouraging EA philanthropy while continuing to concentrate frontier capability. I'm not saying the intent is malicious; I think many are trying to do the best they can. I'm simply noticing.

If we are lucky, none of this will matter. This might just be the mainframe era of AI, a waypoint on the way to personal computing. When the Apple II came out it was woefully underpowered compared to mainframes, and most adoption was driven by hobbyists and aesthetics. Compared to that gap, open source models already pack quite a punch, running 3-12 months behind the frontier depending on the dimension. So perhaps hardware supply chains will scale, a glut of chips and energy will become available, and intelligence will be too cheap to meter.

The city is cutting down twenty-year-old ficus trees in my neighborhood because they could fall on someone during a hurricane and the city doesn’t want to get sued. San Francisco gets about one thunderstorm a year at best. I hope we don’t snuff out the wired in a similar way.
