Anthropic Made a Big Mistake

Original link: https://archaeologist.dev/artifacts/anthropic

## Anthropic's Risky Move and OpenAI's Response

In early 2025, "vibe coding" -- using terminal-based AI agents such as Anthropic's Claude Code, OpenAI's Codex CLI, and Google's Gemini CLI -- transformed developer workflows. These agents drive LLMs via API keys, and Anthropic's subscription plans (Pro and Max) became especially popular for their cost-effectiveness. Third-party agents such as OpenCode grew by letting users log in with their Anthropic subscriptions, effectively taking advantage of those lower prices.

Recently, however, Anthropic closed this loophole, blocking access from non-Anthropic clients and citing debugging problems and unusual traffic. This triggered a strong backlash, with users threatening to cancel their subscriptions. Notably, Anthropic made the change without any formal announcement.

The move shows that Anthropic wants to control the entire user experience and avoid becoming merely a model provider, particularly after its recent $350 billion valuation. But it backfired. While Claude's models are popular among developers, the chatbot itself holds a small market share, and commoditization is a real threat.

OpenAI quickly seized the opportunity, officially supporting OpenCode and other open-source tools with *its* subscriptions -- effectively "defecting" in a kind of prisoner's dilemma. This has led many to question Anthropic's customer relationships and to predict long-term consequences for the company.

## Anthropic's Claude Code Problems Spark Discussion

A recent Hacker News discussion centred on Anthropic's attempt to restrict the use of its Claude Code subscriptions with alternative tools such as OpenCode. While some argued that Anthropic has every right to control its service, many felt the move highlighted a deeper issue: **Claude Code's quality problems.**

Users reported frequent bugs -- including terminal rendering issues and memory leaks -- leading them to prefer OpenCode despite the extra effort it requires. OpenCode was praised for its responsiveness to bug fixes, its greater control options, and its overall higher quality.

The core argument is not about exploiting a loophole but about users seeking a *better product*. Critics called Anthropic's attempt to lock in users a weak strategy, especially with the company approaching an IPO, arguing that focusing on product improvement is what wins market dominance. The discussion also briefly touched on the surprisingly large "vibe coding" niche.

## Original Article

Anthropic may have just committed the biggest business blunder of 2026 -- and we're less than two weeks in. To understand why, let's briefly rewind to 2025, the year when agentic AI went mainstream.

On 3 February 2025, Andrej Karpathy coined the term "vibe coding" to describe the new paradigm.

Less than three weeks later, Anthropic released the first research preview of Claude Code, bringing large language models directly into developers' native habitat: the terminal.

OpenAI followed with Codex CLI in April, and Google released Gemini CLI in June.

All of these terminal-based coding agents follow the same principle:

  1. you type a prompt
  2. the agent sends it to a large language model
  3. the LLM responds and may instruct the agent to carry out actions like editing files or running commands
  4. the agent carries out the actions and appends the results to the prompt

These steps are repeated in a loop, but with a twist: the agent can continue working through the loop until the LLM decides that it requires user input.
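The loop above can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not any particular agent's implementation: `call_llm`, `run_tool`, and the reply/message shapes are all hypothetical stand-ins, and a scripted fake "model" replaces a real provider API.

```python
def run_tool(name, args):
    # Toy tool executor standing in for real actions (file edits, shell commands).
    if name == "echo":
        return f"echo: {args}"
    return f"unknown tool: {name}"

def make_fake_llm(script):
    # Plays back a scripted sequence of replies, mimicking a model that
    # first requests a tool call and then produces a final answer.
    replies = iter(script)
    def call_llm(messages):
        return next(replies)
    return call_llm

def agent_loop(prompt, call_llm):
    messages = [{"role": "user", "content": prompt}]   # step 1: the user's prompt
    while True:
        reply = call_llm(messages)                     # step 2: send it to the LLM
        if reply.get("tool") is None:                  # model wants user input: stop
            return reply["content"], messages
        result = run_tool(reply["tool"], reply["args"])          # step 3: act
        messages.append({"role": "tool", "content": result})     # step 4: append result

answer, transcript = agent_loop(
    "list files",
    make_fake_llm([
        {"tool": "echo", "args": "ls"},        # first reply: a tool request
        {"tool": None, "content": "done"},     # second reply: final answer
    ]),
)
```

The key property is that control stays inside the `while` loop: the model, not the user, decides when the agent stops acting and hands back control.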

The principle is so simple that it immediately gave rise to a bunch of alternative coding agents, including OpenCode, Roo, and Amp Code (to name but a few). Each brought its own unique philosophy and approach to the table, but what they all have in common is that they ultimately rely on large language models for intelligence. Their job is purely to collect user input, execute the model's tool calls, and pass the results back to the model, over and over again. Therefore, they tend to provide a way to select from a predefined set of models and a means of authenticating with the relevant providers (such as Anthropic or OpenAI), generally using an API key.

When Claude Code launched for real in June 2025, usage of the Anthropic models was included in the Pro and Max plans, for a flat monthly or annual subscription. These plans quickly became very popular when users realised that the effective cost per token was much lower compared to Anthropic's API pricing. So popular, in fact, that it reached $1 billion in annualised revenue after only six months.

Meanwhile, OpenCode rapidly gained popularity, amassing over 50,000 GitHub stars and more than 650,000 monthly active users in the same short timeframe. One of its key selling points was the ability to "Log in with Anthropic to use your Claude Pro or Max account", thus enabling developers to benefit from the attractive Claude subscription pricing. In contrast, other coding agents such as Amp only provided the ability to connect to Claude models via the much more expensive pay-per-token API.

It turns out that logging into third-party coding agents with an Anthropic OAuth token was a bit of a loophole. This was evident from the fact that it would only work if the client-supplied system prompt contained a specific phrase identifying itself as Claude Code. Nevertheless, many (presumably) unsuspecting Anthropic customers used OpenCode in this way; from their perspective, they were simply using the same service that they were already paying for, just in the comfort of their preferred coding harness.
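The kind of gate described above is easy to picture. The following is a purely hypothetical sketch of a server-side check that admits an OAuth-authenticated request only when the system prompt identifies the client as Claude Code; the marker string, field names, and function are assumptions for illustration, not Anthropic's actual implementation.

```python
# Assumed marker phrase -- not the real string Anthropic checks for.
REQUIRED_MARKER = "You are Claude Code"

def is_allowed(request):
    # Hypothetical check: reject OAuth-token requests whose system prompt
    # does not identify the client as Claude Code.
    system_prompt = request.get("system", "")
    return REQUIRED_MARKER in system_prompt
```

Third-party harnesses could pass such a check simply by including the expected phrase in their system prompt, which is exactly why the article calls this a loophole rather than a robust access control.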

However, Anthropic clearly didn't see it this way. On 9 January 2026, Anthropic unceremoniously closed the loophole, changing their API to detect and reject requests from third-party clients. The renowned vibe-coder Peter Steinberger soon posted about it on the website formerly known as Twitter, and disgruntled Anthropic customers expressed their discontent in a GitHub issue, requesting the decision to be reversed, many threatening to cancel their Claude subscription otherwise.

It's notable that Anthropic has not formally announced this change in ToS enforcement, neither ahead of time nor after the fact. The only quasi-announcement of this change was this thread, posted by an Anthropic employee on their personal account the day after the changes took effect, presumably in response to customer complaints. The stated motivation for the change was the allegation that "third-party harnesses using Claude subscriptions create problems for users and generate unusual traffic patterns [...] making it really hard for us to help debug when they have questions about rate limit usage or account bans and they don’t have any other avenue for this support."

I will leave it to the reader to decide for themselves whether they consider this a credible explanation or not; frankly, it doesn't matter. The truth is that Anthropic is free to put whatever they want into their ToS, and customers have to abide by it or leave. It appears quite a few have opted for the latter. However, what does matter is what Anthropic has implicitly revealed through its actions last Friday:

  1. Anthropic is willing to go to war with their paying customers over a trivial ToS violation, and
  2. they really, really want to own the entire value chain rather than being relegated to becoming just another "model provider", and
  3. they utterly failed to consider the second-order effects of this business decision.

The first point has received ample discussion already, so I want to focus on the second and third points.

It was reported just a few days earlier that Anthropic has signed a term sheet to raise $10bn at a humongous $350bn valuation. Related or not, the incentives are clear. Model-agnostic harnesses such as OpenCode present a real threat to Anthropic. Whilst its models are incredibly popular in the software developer community and it has made big inroads in enterprise LLM usage, the Claude chatbot itself reportedly commands a market share of... wait for it... 1.07%. So it's no surprise that they are trying to avoid being commoditized in their core market.

Which brings us to the final point: without anticipating it, Anthropic just found itself in a classic prisoner's dilemma with OpenAI -- and OpenAI just defected. Not only are they officially supporting OpenCode users to use their Codex subscriptions and usage limits in OpenCode, they are extending the same support to other open-source coding harnesses such as OpenHands, RooCode, and Pi. And it's not just a theoretical announcement either: support for connecting ChatGPT Pro/Plus subscriptions with OpenCode has already shipped.

What are we to take away from all this?

Personally, I have decided I will never be an Anthropic customer, because I refuse to do business with a company that takes its customers for granted. Beyond my personal choices, though, I predict that the folks at Anthropic will come to regret their actions last week. By cracking down on their own customers in a vain attempt to quash healthy competition, they have destroyed a lot of goodwill and given their main rival an opening that was ripe for the picking. Whilst they have plenty of cash in the bank for now, they will eventually need to learn to treat their customers with respect if they are to survive in the ever-more-competitive LLM provider landscape.

The views expressed here are my own. While the analysis is based on publicly available information, I welcome any factual corrections -- please feel free to reach out.
