Statement from Dario Amodei on Our Discussions with the Department of War

Original link: https://www.anthropic.com/news/statement-department-of-war

Anthropic, a leading AI company, has been a key provider of AI models to the US Department of Defense and intelligence community, prioritizing national security and the democratic defense against autocratic adversaries. It was the first to deploy frontier AI on classified networks and at the National Laboratories, and its Claude models are used for mission-critical tasks such as intelligence analysis and cyber operations. Despite sacrificing substantial revenue (forgoing several hundred million dollars by cutting off access for entities linked to the Chinese Communist Party), Anthropic has worked actively to preserve American AI leadership. However, it has drawn the line at two applications: mass domestic surveillance, which it views as a threat to democratic values, and fully autonomous weapons, which it considers beyond what today's technology can reliably support. The Department of War has now demanded that Anthropic remove these safeguards, threatening to designate the company a security risk and potentially invoke the Defense Production Act. Anthropic has refused, prioritizing ethical considerations and the safety of warfighters and civilians. It remains committed to supporting US national security and is prepared to ensure a smooth transition should it ultimately be replaced as a contractor.

## Anthropic's Dispute with the US Department of War: Summary

Dario Amodei of Anthropic has issued a statement detailing a disagreement with the US Department of Defense (which the government now calls the "Department of War") over how its AI models may be used. Anthropic refuses to allow its technology to be used for mass domestic surveillance or to power fully autonomous weapons, citing safety and ethical concerns. At the core of the dispute are differing views on the acceptable applications of AI. Anthropic is willing to work with the Department on projects related to national defense and countering autocratic adversaries, but it firmly opposes infringing on the privacy of American citizens or building weapons systems without human control. Commentators have debated the distinction between domestic and foreign surveillance, questioning US surveillance practices and the reliance on allies such as the UK's GCHQ. Some see Anthropic's stance as principled; others are skeptical, viewing it as a PR move or expecting the company to eventually yield to government pressure. There is also concern that if Anthropic holds its ground, the Department may turn to less ethics-focused AI vendors such as xAI. Ultimately, the situation highlights the growing tension among AI development, national security, and ethical responsibility.

## Original Text

I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage.

Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:

  • Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.
  • Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.

To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.

The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

We remain ready to continue our work to support the national security of the United States.
