Anthropic–Pentagon Talks Stall Over AI Guardrails

Original link: https://www.zerohedge.com/markets/anthropic-pentagon-talks-stall-over-ai-guardrails

Contract renewal talks between Anthropic and the Pentagon have stalled over disagreements about how Anthropic's Claude AI system may be used. Anthropic, which positions itself as a "safety-first" developer, is seeking stricter limits to prevent its AI from being used for mass surveillance or the development of autonomous weapons. The Pentagon wants broader flexibility to deploy the model, so long as its use remains within the law. The dispute reflects a larger debate over AI's role in military applications. While Anthropic aims to support national security through its specialized "Claude Gov" model, the Defense Department prioritizes equipping "warfighters to win" and is even weighing limits on its reliance on Anthropic, citing a potential supply-chain vulnerability. The outcome of these negotiations is significant: it could shape future contracts with other AI providers such as OpenAI and Google, and set a precedent for the ethical boundaries of AI in military applications.


Original Article

Contract renewal talks between Anthropic and the Pentagon have stalled over how its Claude system can be used. The AI firm is seeking stricter limits before extending its agreement, according to Bloomberg, citing a person familiar with the private negotiations.

At the heart of the dispute is control.

Anthropic wants firm guardrails to prevent Claude from being used for mass surveillance of Americans or to build weapons that operate without human oversight.

The Defense Department’s position is broader: it wants flexibility to deploy the model so long as its use complies with the law.

The tension reflects a larger debate over how far advanced AI should go in military settings.

Bloomberg writes that Anthropic has tried to distinguish itself as a safety-first AI developer. It created a specialized version, Claude Gov, tailored to U.S. national security work, designed to analyze classified information, interpret intelligence and process cybersecurity data. The company says it aims to serve government clients while staying within its own ethical red lines.

“Anthropic is committed to using frontier AI in support of US national security,” a spokesperson said, describing ongoing discussions with the Defense Department as “productive conversations, in good faith.” The Pentagon, however, struck a firmer tone. “Our nation requires that our partners be willing to help our warfighters win in any fight,” spokesman Sean Parnell said, adding that the relationship is under review and emphasizing troop safety.

Some defense officials have grown wary, viewing reliance on Anthropic as a potential supply-chain vulnerability. According to a senior official, the department could ask contractors to certify that they are not using Anthropic's models, an indication that the disagreement could ripple beyond a single contract.

Rival AI developers are watching closely. Tools from OpenAI, Google and xAI are also being discussed for Pentagon use, with companies working to ensure their systems can operate within legal boundaries. Anthropic secured a two-year Pentagon deal last year involving Claude Gov and enterprise products, and the outcome of its current negotiations could influence how future agreements with other AI providers are structured.

