Anthropic CEO Apologizes For 'Dictator Trump' Meltdown Memo, Downplays 'Supply Chain Risk' Designation, And Is Going To Sue

Original link: https://www.zerohedge.com/ai/anthropic-ceo-apologizes-dictator-trump-meltdown-memo-downplays-supply-chain-risk-designation

Anthropic, an artificial intelligence company, is attempting to repair its relationship with the Trump administration after the Department of War (DoW) designated the company a national-security supply chain risk. The unprecedented move stems from stalled negotiations over military applications of Anthropic's AI model Claude, which the company refused to allow for uses such as mass surveillance or autonomous weapons. CEO Dario Amodei publicly apologized for a leaked internal memo criticizing the White House, attributing its heated tone to "disorientation" and a panicked reaction to events including OpenAI's deal with the Pentagon. While offering Claude to the military at a reduced price, Anthropic continues to hold its ethical red lines. Despite the apology, Anthropic plans to challenge the designation in court, arguing it is legally unsound and could stifle AI innovation. The DoW restrictions appear limited to direct contracts, so partners like Microsoft can continue non-defense-related work. The situation highlights the tension between ethical AI development and the defense sector's demand for unrestricted access, and positions OpenAI as the more compliant alternative. Anthropic's future, and its $380 billion valuation, now hinge on how it navigates this complicated landscape.


Original Article

As Anthropic attempts to salvage its relationship with the Trump administration, CEO Dario Amodei publicly apologized Thursday for the inflammatory tone of his leaked internal memo, which accused the White House of targeting his company because it hadn't offered "dictator-style praise" to President Trump. The apology came in his first major interview since the Pentagon's Department of War (DoW) formally designated Anthropic a supply chain risk to national security - effective immediately - marking the first time such a label has been applied to a U.S. company.

Anthropic CEO Dario Amodei. Photos: Getty Images

The March 5 designation, confirmed in a letter to Anthropic leadership, stems from weeks of failed negotiations over Claude AI's military applications. Anthropic refused to drop strict red lines prohibiting the model's use for mass domestic surveillance of Americans or fully autonomous lethal weapons, insisting on meaningful safeguards rather than what Amodei previously called "safety theater" in rival deals like OpenAI's. Defense Secretary Pete Hegseth had threatened broad restrictions, including barring defense contractors from any commercial activity with Anthropic, but the company clarified the scope appears narrower: it primarily affects direct DoW-related work, with partners like Microsoft confirming continued availability for non-defense uses.

Last Friday, the Trump administration 'fired' the company after a bruising dispute with the Pentagon came to a head over ethical concerns surrounding Claude's military use. The Pentagon demanded to use Claude AI for "any lawful purpose" with no guardrails, objecting to allegedly having to ask Anthropic for permission in a life-or-death scenario.

In the interview with The Economist, Amodei described the crisis as one of the most "disorienting" in Anthropic's history. He attributed the leaked memo - written hastily on Slack amid rapid-fire events, including Trump's announcements and OpenAI snaking Anthropic's contract - to confusion and panic from a "difficult day."

"It does not reflect my careful or considered views," he said, downplaying it as a casual internal message rather than a formal memo. He said he'd apologized to DoW personnel and signaled openness to further dialogue with administration figures, though he sidestepped a direct personal apology to Trump.

Amodei's Thursday mea culpa was accompanied by a blog post titled "Where things stand with the Department of War," in which he emphasized shared interests with the military, offered Claude at nominal cost plus engineer support for warfighters, and highlighted ongoing "productive conversations" despite the label.

I also want to apologize directly for a post internal to the company that was leaked to the press yesterday. Anthropic did not leak this post nor direct anyone else to do so—it is not in our interest to escalate this situation. That particular post was written within a few hours of the President’s Truth Social post announcing Anthropic would be removed from all federal systems, the Secretary of War’s X post announcing the supply chain risk designation, and the announcement of a deal between the Pentagon and OpenAI, which even OpenAI later characterized as confusing. It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation.

He also refuted a claim, attributed to an anonymous Pentagon official cited in the Washington Post, that the Pentagon would have to call Anthropic before making life-or-death decisions in the field, writing:

As we stated last Friday, we do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making—that is the role of the military. Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making.

The post also reiterates a Friday statement that Anthropic will sue to challenge the supply-chain risk designation, calling it "not legally sound" and warning of a "chilling" effect on AI innovation and business if it stands. "We see no choice but to challenge it in court," Amodei wrote, while reiterating Anthropic's desire to continue equipping U.S. forces amid operations (such as those against Iran, via tools including Palantir's Maven). The dual track - apology plus litigation - reflects heavy investor pressure from backers like Amazon and Nvidia to salvage the $380 billion valuation amid revenue momentum nearing a $20 billion annual run rate.

The company also suggested that the supply chain risk designation isn't that damaging anyway, and won't affect customers who deal with the government outside the Department of War:

The language used by the Department of War in the letter (even supposing it was legally sound) matches our statement on Friday that the vast majority of our customers are unaffected by a supply chain risk designation. With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.

The Department’s letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too. It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain. Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.

The situation underscores major issues that will persist with Silicon Valley's AI-defense nexus: ethical red lines versus unrestricted "any lawful purpose" access, with OpenAI positioned as the compliant alternative. As talks continue and a courtroom battle looms, Anthropic's future hangs on whether Amodei's contrition buys enough goodwill to avert broader fallout—or if the "supply chain risk" label becomes a permanent scar on one of AI's most principled players.

