Anthropic says company 'cannot in good conscience accede' to Pentagon's demands

Original link: https://apnews.com/article/anthropic-ai-pentagon-hegseth-dario-amodei-9b28dda41bdb52b6a378fa9fc80b8fda

Anthropic, the company behind the Claude chatbot, has rejected the Pentagon's request for broader access to its technology, citing concerns about potential misuse. CEO Dario Amodei said the company "cannot in good conscience" allow its AI to be used for mass surveillance of Americans or in autonomous weapons systems, uses that already violate Anthropic's policies. The Pentagon insists it intends to use the technology lawfully and will not be dictated to by a vendor, and has threatened to cancel the contract, designate Anthropic a supply chain risk, or even invoke the Defense Production Act to compel cooperation. Anthropic argues these threats are contradictory: one treats the company as a security risk while the other deems it essential to national security. The dispute has drawn criticism from senators, some of whom accuse the Pentagon of unprofessional tactics and of ignoring AI governance. Anthropic must agree by Friday, or the partnership may be terminated and the military will seek other vendors. The situation underscores the growing tension over the ethical and legal implications of deploying AI in defense.

## Anthropic Rejects Pentagon Demands

Anthropic, an AI company that positions itself as prioritizing human welfare, has refused some of the Pentagon's demands for access to its AI models. The decision sparked discussion on Hacker News, where some users questioned why Anthropic offered access in the first place. Some speculated that Anthropic believed it would be better for an ethics-minded company to be the one working with the military-industrial complex, while others saw the refusal as a principled stand. A key factor appears to be that Anthropic is pursuing contracts with the governments of the UK, India, and Japan, and working with the U.S. Department of Defense could jeopardize those deals. The situation reflects a growing trend of countries investing in their own "sovereign" AI models to maintain control and avoid dependence on foreign vendors, echoing earlier concerns about the dominance of foreign cloud infrastructure. The refusal is a significant test of Anthropic's stated AI safety principles and could shape its future government partnerships.

Original article

WASHINGTON (AP) — Anthropic CEO Dario Amodei said Thursday the artificial intelligence company “cannot in good conscience accede” to the Pentagon’s demands to allow wider use of its technology.

The maker of the AI chatbot Claude said in a statement that it’s not walking away from negotiations, but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”

The Pentagon’s top spokesman has reiterated that the military wants to use Anthropic’s artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands.

Sean Parnell said Thursday on social media that the Pentagon “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”

Anthropic’s policies prevent its models, such as its chatbot Claude, from being used for those purposes. It’s the last of its peers — the Pentagon also has contracts with Google, OpenAI and Elon Musk’s xAI — to not supply its technology to a new U.S. military internal network.

Parnell said the Pentagon wants to “use Anthropic’s model for all lawful purposes” but didn’t offer details on what that entailed. He said opening up use of the technology would prevent the company from “jeopardizing critical military operations.”

“We will not let ANY company dictate the terms regarding how we make operational decisions,” he said.

During a meeting on Tuesday between Defense Secretary Pete Hegseth and Amodei, military officials warned that they could cancel Anthropic’s contract, designate the company as a supply chain risk, or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.

Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

Parnell left out the threatened use of the Defense Production Act in the Thursday post on X and said Anthropic has “until 5:01 PM ET on Friday to decide.”

“Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk,” he wrote.

The talks that escalated this week began months ago. Amodei said that given “the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.” But if they don’t, he said Anthropic “will work to enable a smooth transition to another provider.”

Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said Thursday that the Pentagon has been handling the matter unprofessionally while Anthropic is “trying to do their best to help us from ourselves.”

“Why in the hell are we having this discussion in public?” Tillis told reporters. “This is not the way you deal with a strategic vendor that has contracts.”

He added, “When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they’re really trying to solve.”

Sen. Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was “deeply disturbed” by reports that the Pentagon is “working to bully a leading U.S. company.”

“Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance,” Warner said in a statement. It “further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”

As Pentagon officials say they always will follow the law with their use of AI models, Hegseth told Fox News last February, weeks after becoming defense secretary, that “ultimately, we want lawyers who give sound constitutional advice and don’t exist to attempt to be roadblocks to anything.”

___

Associated Press writer Ben Finley contributed to this report.
