Judge Blocks Trump Admin's 'Orwellian' Supply Chain Risk Label On Anthropic - For Now

原始链接: https://www.zerohedge.com/political/judge-shreds-anthropic-supply-chain-risk-label-slams-trump-admins-orwellian-blacklisting

A U.S. district judge has issued a preliminary injunction blocking the Trump administration's punitive measures against the AI company Anthropic. The dispute stems from Anthropic's refusal to remove safety restrictions from its Claude AI models, objecting to their use for "mass surveillance" and "autonomous weapons." The government subsequently designated Anthropic a "supply chain risk," effectively blocking its work with defense contractors. Anthropic sued, arguing the actions were unlawful and intended to punish the company for standing its ground. Judge Lin agreed, finding no basis for branding an American company a threat merely for disagreeing with the government. The injunction blocks enforcement of the directive barring government use of Anthropic's technology, as well as the rule prohibiting contractors from working with the company. While the Pentagon is *not required* to use Anthropic's technology, the ruling halts the broader bans designed to weaken the company. The case continues, but the judge's ruling signals strong opposition to punishing companies that advocate responsible AI development.


Original Article

It looks like Anthropic isn't as radioactive to other defense contractors - for now, and on paper. 

In a sharply worded 43-page order issued Thursday, U.S. District Judge Rita F. Lin (a Biden appointee) of the Northern District of California granted Anthropic PBC's motion for a preliminary injunction, blocking key punitive measures imposed by the Trump administration after the AI company publicly refused to lift safety restrictions on its Claude model.

Photo-Illustration: WIRED Staff; Photograph: FABRICE COFFRINI/Getty Images

Lin minced no words on the supply-chain label - the core of the dispute:

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”

Recall that Anthropic refused to change the user policy for its AI tool Claude to allow the government to use it for what Anthropic described as "mass surveillance" and "fully autonomous weapons." After it was branded a Supply Chain Risk, Anthropic sued on March 9, calling the government's actions "unprecedented and unlawful."

Lin ruled that the broad measures “do not appear to be directed at the government’s stated national security interests” and instead “appear designed to punish Anthropic.” One amicus brief called the actions “attempted corporate murder”; the judge noted they “might not be murder, but the evidence shows that they would cripple Anthropic.”

The Injunction

Lin's preliminary injunction bars enforcement of three Challenged Actions: Trump's order directing the government to immediately stop using Anthropic tech; Hegseth's directive prohibiting government contractors from 'commercial activity' with Anthropic; and the DoW's formal designation of Anthropic as a "supply chain risk" to national security.

The order does not force the Pentagon to start using Anthropic again, nor does it interfere with a planned six-month phase-out for existing systems if done without the broader bans. A separate parallel challenge to one DoW letter (under 41 U.S.C. § 4713) is pending in the D.C. Circuit; that case remains unaffected.

At the March 24 hearing, DoW counsel conceded that portions of the Hegseth Directive had "absolutely no legal effect" on their own and that DoW did not intend to terminate unrelated commercial relationships - yet declined to stipulate to an injunction, citing ongoing "assessment."

“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI," Anthropic said in a statement. 

The order is a preliminary injunction only; the case will proceed to full merits. But Judge Lin’s thorough factual record and legal analysis make clear that branding a domestic AI firm a national-security threat for publicly advocating safety guardrails crosses a bright constitutional line. The government retains full authority to choose its tools—just not to punish a company for speaking out about their limitations.
