Pentagon threatens to make Anthropic a pariah

Original link: https://www.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-military-amodei

The Pentagon is pressuring AI company Anthropic to remove safety restrictions from its Claude AI model or risk losing a $200 million contract and possible blacklisting. Defense Secretary Pete Hegseth set a Friday compliance deadline, demanding military access for "all lawful use." Anthropic, founded by former OpenAI employees who prioritize AI safety, has refused to lift its restrictions on AI-controlled weapons and mass domestic surveillance, citing reliability concerns and a lack of regulation. The Pentagon has threatened to invoke the Defense Production Act or to designate Anthropic a supply chain risk, a label usually reserved for companies tied to foreign adversaries, which could damage Anthropic's business with government customers. Despite "cordial" talks, Anthropic CEO Dario Amodei reportedly held firm on the company's ethical redlines. While committed to supporting national security, the company maintains its stance on "responsible" AI deployment and has no plans to yield to the Pentagon's demands. The negotiations have been ongoing for months, underscoring the growing tension between military applications and AI safety concerns.

## Anthropic and the Pentagon Clash Over AI Ethics

A dispute has emerged between the Pentagon and AI company Anthropic, centered on the ethics of AI use. The Pentagon has reportedly threatened to terminate its contract with Anthropic unless the company meets its demands, and may invoke the Defense Production Act or designate Anthropic a supply chain risk, a label usually reserved for entities tied to foreign adversaries. Anthropic has refused to continue the partnership without assurances that its AI will not be used for two key applications: developing AI-controlled weapons and conducting mass domestic surveillance of US citizens. This stance has sparked debate, with some viewing it as principled and others as impractical given the Pentagon's interests. Commentators suggest Anthropic's refusal could position it favorably in the market, attracting customers wary of unethical AI applications. Others argue the company faces an existential threat from potential government retaliation. The situation highlights the growing tension among AI development, national security, and ethical considerations, raising questions about the future of AI governance and responsible innovation.

Original article

Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to comply with demands to peel back safeguards on its AI model or risk losing a Pentagon contract.

He also threatened to put the AI company on what could amount to a government blacklist.

At issue are the guardrails Anthropic placed on its AI model Claude. The Pentagon, which has a $200 million contract with Anthropic, wants the company to lift its restrictions so the military can use the model for "all lawful use," according to a source familiar with the discussions.

But Anthropic has concerns over two issues that it isn't willing to drop, the source said: AI-controlled weapons and mass domestic surveillance of American citizens. According to the source, Anthropic believes AI is not reliable enough to operate weapons, and there are not yet laws or regulations covering how AI could be used in mass surveillance.

A source familiar with the Tuesday meeting says the Pentagon said it would terminate Anthropic’s contract by Friday if the company does not agree to its terms. Pentagon officials also warned they would either use the Defense Production Act against Anthropic, or designate Anthropic a supply chain risk if the company didn’t comply with their demands.

The DPA is a law that gives the government the ability to influence businesses in the interest of national defense, recently invoked by the Trump administration during the COVID pandemic. The supply chain risk designation is usually reserved for companies seen as extensions of foreign adversaries like Russia or China. It could severely impact Anthropic’s business because enterprise customers with government contracts would have to make sure their government work doesn’t touch Anthropic’s tools.

During the meeting, the tone was cordial and respectful and there were no raised voices, the source said, adding that Hegseth praised Anthropic's products and said the Pentagon wants to work with the company.

But Amodei reiterated Anthropic's redlines on autonomous weapons and mass surveillance, and a source familiar with the matter said Anthropic has no plans to budge or accede to the Pentagon's demands.

An Anthropic spokesperson described the meeting to CNN as a “good-faith” conversation about usage of the company’s technology.

“Anthropic CEO Dario Amodei met with Secretary Hegseth at the Pentagon this morning. During the conversation, Dario expressed appreciation for the Department’s work and thanked the Secretary for his service,” Anthropic said in a statement to CNN. “We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”

The negotiations have been ongoing for a couple months, the source said, but in recent weeks reports began surfacing about the tensions between the two sides.

Then last week, Axios reported Hegseth was close to cutting the Pentagon’s contract with Anthropic and designating the company a “supply chain risk.”

“Anthropic is committed to using frontier AI in support of US national security,” the Anthropic spokesperson said. “That’s why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers.”

A Pentagon official confirmed to CNN the meeting with Anthropic was taking place, but did not comment further.

Anthropic has long positioned itself as the AI company most concerned with AI safety. Its founders were all former OpenAI employees who left the company over disagreements about the ChatGPT maker’s direction, approach to safety and pace of AI development. Anthropic also recently announced it is giving $20 million to a political group campaigning for more regulation of AI.

CNN’s Kaanita Iyer contributed to this report.
