Federal Appeals Court Allows Pentagon To Designate Anthropic As A Supply-Chain Risk

Original link: https://www.zerohedge.com/political/federal-appeals-court-allows-pentagon-designate-anthropic-supply-chain-risk

A federal appeals court recently allowed the Department of War to designate the AI company Anthropic as a supply-chain risk, overriding a temporary injunction issued by a California court. The designation effectively blacklists Anthropic from federal government contracting, halting its current and future business with the government and its contractors.

The dispute arose after Anthropic refused to modify its AI model, Claude, to remove safeguards that prevent its use for mass surveillance or autonomous weapons development, principles central to the company's "constitutional AI" approach. The Pentagon says it does *not* intend to use Claude for those purposes, but it demands unrestricted access for all lawful military applications.

The case is unprecedented: it is the first time such a designation has been applied to a major U.S. AI company. It highlights the growing conflict between AI developers who prioritize ethical restrictions and a government seeking full access to advanced technology for national security, a debate that political figures such as Donald Trump have also weighed in on. The legal battle continues in both courts to resolve the conflicting rulings.

Original article

In a significant development for the intersection of artificial intelligence policy and national security, a federal appeals court in Washington ruled on April 8 that the Department of War may designate Anthropic as a supply-chain risk while a full judicial review plays out. The decision came after the AI company sought an emergency stay to block the controversial designation.

Pages from the Anthropic website and the company's logos are displayed on a computer screen in New York on Feb. 26, 2026. AP Photo/Patrick Sison

The three-judge panel of the U.S. Court of Appeals for the District of Columbia Circuit concluded that Anthropic “has not satisfied the stringent requirements for a stay pending court review,” allowing the blacklist to remain in effect for now. This ruling directly conflicts with a temporary injunction issued last month by a federal district court in California, which had paused the designation during ongoing litigation.

The designation, authorized under federal laws intended to shield military and government systems from supply-chain vulnerabilities and foreign sabotage, functions as an effective blacklist. It prohibits Anthropic from conducting business with the federal government or its contractors and directs federal agencies, contractors, and suppliers to terminate existing ties with the company.

The move originated after Anthropic declined a Department of War request to alter the user policies and safety guardrails of its flagship AI model, Claude. The company refused to remove restrictions that prevent the AI from being used for mass surveillance or the development and operation of fully autonomous weapons systems. Anthropic has emphasized its commitment to “constitutional AI” principles and responsible deployment, arguing that such guardrails are essential to ethical AI use.

The Pentagon has stated publicly that it does not intend to employ Claude for those specific purposes, but it has insisted on the flexibility to use the technology for all lawful military applications. President Donald Trump weighed in on social media earlier, accusing Anthropic of trying to “strong-arm” the federal government by using its AI policies to dictate military decisions.

Late on April 8, Acting Attorney General Todd Blanche celebrated the appeals court decision on X (formerly Twitter), describing it as “a resounding victory for military readiness.” He added: “Our military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems.”

Anthropic, a prominent AI firm founded by former OpenAI executives and backed by major investors including Amazon and Google, has positioned itself as a leader in safe and reliable AI development. Its Claude models are widely used in enterprise, research, and creative applications precisely because of their built-in safeguards.

The case is believed to mark the first time such a supply-chain risk designation — typically reserved for foreign entities posing security threats — has been applied to a major U.S.-based AI company. It underscores deepening tensions between commercial AI developers’ emphasis on ethical guardrails and the government’s push for unfettered access to advanced technology for defense purposes.

Litigation continues in both the California district court and the D.C. Circuit, and further updates are expected as the conflicting rulings are reconciled.

Contact us: contact @ memedata.com