Where things stand with the Department of War

Original link: https://www.anthropic.com/news/where-stand-department-war

The US Department of War has designated Anthropic a "supply chain risk" to America's national security, and the company plans to challenge the decision in court, arguing it lacks a legal basis. The designation applies only where Anthropic's services are used *directly* as part of Department of War contracts; it does not extend broadly to all government contractors. Anthropic stresses that the statute's intent is to protect the government, not to punish a supplier, and that it has had productive discussions with the Department about continuing to provide services under its established safety guidelines (avoiding autonomous weapons and mass surveillance). The company apologized for an internal post that was leaked, written in reaction to recent events (including the President's statement and a deal with OpenAI), describing its assessment as hasty and now out of date. Anthropic remains committed to supporting US servicemembers and national security, offering continued, affordable access to its AI models during any transition, and reaffirms the shared goal of strengthening national security through AI.

## Anthropic and the Department of War: A Shift in Tech Ethics

A recently surfaced internal Anthropic memo indicating the company's willingness to work with the US Department of War (DoW) has sparked discussion about how ethics in the tech industry have evolved. In the past, many tech companies and graduates refused, on moral grounds, to work on projects with military applications. Now Anthropic proposes to cooperate with the DoW subject to only two narrow exceptions: avoiding autonomous weapons systems for the time being, and declining to support certain undisclosed activities. Commenters on Hacker News observe that the "Overton window" has shifted significantly, with Anthropic appearing to prioritize pragmatic considerations over strict moral positions. Some express disappointment, invoking past ideals and even fictional depictions of ethical compromise in tech. Others point to the industry's growth, its changing values, and the influence of key figures in its history. The discussion highlights concerns about corporate ethics, the normalization of military collaboration, and the potential for AI to be put to harmful uses. While some consider Anthropic's approach "slightly better" than the alternatives, others see it as a capitulation to power and a troubling signal for the future.

Original article

A statement from Dario Amodei

Yesterday (March 4) Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America’s national security.

As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court.

The language used by the Department of War in the letter (even supposing it was legally sound) matches our statement on Friday that the vast majority of our customers are unaffected by a supply chain risk designation. With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.

The Department’s letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too. It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain. Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.

I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible. As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

As we stated last Friday, we do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making—that is the role of the military. Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making.

I also want to apologize directly for a post internal to the company that was leaked to the press yesterday. Anthropic did not leak this post nor direct anyone else to do so—it is not in our interest to escalate this situation. That particular post was written within a few hours of the President’s Truth Social post announcing Anthropic would be removed from all federal systems, the Secretary of War’s X post announcing the supply chain risk designation, and the announcement of a deal between the Pentagon and OpenAI, which even OpenAI later characterized as confusing. It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation.

Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations. Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so.

Anthropic has much more in common with the Department of War than we have differences. We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise.
