President Trump bans Anthropic from use in government systems

Original link: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban

President Trump ordered the U.S. government to stop using products from the AI company Anthropic, and the Pentagon moved to designate the company a national security risk, escalating a dispute over the military's use of AI. The conflict stems from Anthropic's refusal to allow its AI model, Claude, to be used for domestic mass surveillance or fully autonomous weapon systems, putting a contract worth up to $200 million at stake. Trump accused Anthropic of trying to "strong-arm" the military, while the Pentagon argued it needs unrestricted access to AI for legitimate defense purposes. Anthropic maintains its restrictions are necessary, citing the current unreliability of AI in these applications and concerns about violations of fundamental rights. Anthropic plans to challenge the designation in court, arguing the Pentagon's action sets a dangerous precedent. OpenAI, another AI company with Defense Department contracts, has voiced similar concerns. The situation is unusual, since contractors do not typically dictate how their products may be used, but experts note that the novelty of AI makes this case different. The ban comes as Anthropic prepares for an initial public offering (IPO), raising questions about how investors will react.

## Anthropic banned from U.S. government systems

President Trump has ordered all federal agencies to stop using Anthropic's AI technology, calling the company a "radical left, woke" entity that tried to "strong-arm" the Department of Defense. The move stems from Anthropic's refusal to allow its AI to be used for domestic mass surveillance or fully autonomous weapon systems. The decision has sparked debate: some praise Anthropic for upholding ethical standards, while others criticize the company for potentially hindering national security. Commentators note the irony of a president known for controversial conduct framing the issue as a defense of the Constitution. Many believe the ban could backfire, boosting Anthropic's popularity among opponents of the current administration and attracting interest from international partners, including in Europe. There is also discussion of whether OpenAI will face similar restrictions, along with concern that the U.S. may come to rely on less reliable AI alternatives, such as those from China. Some speculate this is a power play, with the administration trying to force Anthropic's compliance or even take over the company.

Original article

The Pentagon is seen from an airplane, Monday, Feb. 2, 2026, in Washington. Julia Demaree Nikhinson/Associated Press

President Trump ordered the U.S. government to stop using the artificial intelligence company Anthropic's products and the Pentagon moved to designate the company a national security risk on Friday, in a sharp escalation of a high-stakes fight over the military's use of AI.

The twin decisions cap an acrimonious dispute between Anthropic and the Pentagon over whether the company could prohibit its tools from being used in mass surveillance of American citizens or to power autonomous weapon systems, as part of a military contract worth up to $200 million.

"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump wrote in a Truth Social post. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!"

He said there would be a six-month phaseout of Anthropic's products.

Trump's announcement came about an hour before a deadline set by the Pentagon, which had called on Anthropic to back down. Shortly after the deadline passed, Defense Secretary Pete Hegseth said he was labeling Anthropic a supply chain risk to national security, blacklisting it from working with the U.S. military or contractors.

"In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Hegseth posted on X, using the Pentagon's "Department of War" rebranding. "Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service."

Anthropic said it would challenge the supply chain risk designation in court.

"We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government," the company said in a statement on Friday evening.

Anthropic also challenged Hegseth's comments that anyone who does business with the U.S. military would have to cut off all business with Anthropic. "The Secretary does not have the statutory authority to back up this statement," the company said. Under federal law, it said, designating Anthropic a supply chain risk would only apply to "the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers."

The company said it had "tried in good faith" to reach an agreement with the Pentagon over months of negotiations, "making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions" being disputed. "To the best of our knowledge, these exceptions have not affected a single government mission to date," Anthropic said.

It said its objections to those uses were rooted in two reasons: "First, we do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."

Ban comes as Anthropic plans an IPO

Defense Department officials had given Anthropic a deadline of 5:01 p.m. ET on Friday to drop restrictions on its AI model, Claude, from being used for domestic mass surveillance or entirely autonomous weapons, or face losing its contract. The Pentagon has said it doesn't intend to use AI in those ways, but requires AI companies to allow their models to be used "for all lawful purposes."

The government had also threatened to invoke the Korean War-era Defense Production Act to compel Anthropic to allow use of its tools and, at the same time, warned it would label Anthropic a supply chain risk.

In his post carrying out the latter threat, Hegseth said Anthropic had "delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." He accused the company of trying to "seize veto power over the operational decisions of the United States military."

He said the department would not waver from its position: "the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic."

"America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," Hegseth concluded.

The government ban comes at a time when Anthropic is under heightened scrutiny, since the company, which is valued at $380 billion, is planning to go public this year.

While the Pentagon contract worth as much as $200 million is a relatively small portion of Anthropic's $14 billion in revenue, it's unclear how the friction with the administration will sit with investors or affect other deals the company has to license its AI model to non-government partners.

Anthropic CEO Dario Amodei has pointed out that the company's valuation and revenue have only grown since it took a stand against Trump officials over how AI can be deployed on the battlefield.

Whether AI companies can set restrictions on how the government uses their technology has emerged as a major sticking point in recent months between Anthropic and the Trump administration.

On Thursday, Amodei said the company would not budge in the face of the Pentagon's threats. "We cannot in good conscience accede to their request," he wrote in a lengthy statement.

A 2024 file photo of Dario Amodei, CEO and cofounder of Anthropic. Jeff Chiu/Associated Press

"Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," he said. But, he added, domestic mass surveillance and fully autonomous weapons are uses that are "simply outside the bounds of what today's technology can safely and reliably do."

Emil Michael, the Pentagon's undersecretary for research and engineering, shot back in a post on X on Thursday, accusing Amodei of lying and having a "God-complex."

"He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," Michael wrote. "The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company," he wrote.

In a late Thursday interview with CBS News, Michael said federal law and Pentagon policies already bar the use of AI for domestic mass surveillance and autonomous weapons.

"At some level, you have to trust your military to do the right thing," he said.

OpenAI expressed similar concerns

Any effort by the Pentagon to replace Anthropic could be complicated. Earlier on Friday, OpenAI CEO Sam Altman said he shares Anthropic's "red lines" restricting military use of AI.

OpenAI, Google, and Elon Musk's xAI also have Defense Department contracts and have agreed to allow their AI tools to be used in any "lawful" scenarios. This week, xAI became the second company after Anthropic to be approved for use in classified settings.

Altman told CNBC on Friday morning that it's important for companies to work with the military "as long as it is going to comply with legal protections" and "the few red lines" that "we share with Anthropic and that other companies also independently agree with."

Sam Altman, co-founder and CEO of OpenAI, testifying before a Senate committee in 2025. Jose Luis Magana/Associated Press

In an internal note sent to staff on Thursday evening, Altman said OpenAI was seeking to negotiate a deal with the Pentagon to deploy its models in classified systems with exclusions preventing use for surveillance in the U.S. or to power autonomous weapons without human approval, according to a person familiar with the message who was not authorized to speak publicly. The Wall Street Journal first reported Altman's note to staff.

The Defense Department didn't respond to a request for comment on Altman's statements.

Independent experts say the standoff is highly unusual in the world of Pentagon contracting.

"This is different for sure," said Jerry McGinn, director of the Center for the Industrial Base at the Center for Strategic and International Studies, a Washington, D.C., think tank. Pentagon contractors don't usually get to tell the Defense Department how their products and services can be used, he notes, "because otherwise you'd be negotiating use cases for every contract, and that's not reasonable to expect."

At the same time, McGinn noted, artificial intelligence is a new and largely untested technology. "This is a very unusual, very public fight," he said. "I think it's reflective of the nature of AI."

NPR's Bobby Allyn contributed to this report.
