Judge blocks Pentagon effort to 'punish' Anthropic with supply chain risk label

Original link: https://www.cnn.com/2026/03/26/business/anthropic-pentagon-injunction-supply-chain-risk

A federal judge has temporarily blocked the Pentagon from designating AI company Anthropic a supply chain risk, ruling that the move violated the company's First Amendment and due process rights. The dispute stems from Anthropic's refusal to remove safety guardrails from its Claude AI models, specifically those preventing use in autonomous weapons and mass surveillance, restrictions the Pentagon wanted lifted so it could use the models for "all lawful purposes." Judge Rita Lin ruled that the Pentagon's actions appeared to be retaliation for Anthropic's public stance and disagreement with the government, calling the designation an "Orwellian" attempt to punish the company. The label threatened Anthropic's government ties and jeopardized significant contracts. The ruling is the latest judicial challenge to Secretary Pete Hegseth's aggressive use of government power, following similar rulings protecting the free speech of journalists and a senator. Anthropic welcomed the decision and remains focused on working with the government on safe AI development, while the Pentagon may appeal.

Hacker News discussion (49 points, posted by prawn, 2 comments):

comrade1234: I'm sure the contracts will start pouring in soon.

JohnTHaller: It's good to see some judicial pushback against authoritarian policies.

Original article

A federal judge in California has indefinitely blocked the Pentagon’s effort to “punish” Anthropic by labeling it a supply chain risk and attempting to sever government ties with the AI company, ruling that those measures ran roughshod over its constitutional rights.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” US District Judge Rita Lin wrote in a stinging 43-page ruling.

Lin, an appointee of former President Joe Biden, said she would delay implementation of her ruling for one week to allow the government to appeal.

But in her ruling, she made it clear she disapproved of the government’s actions, which she said violated the company’s First Amendment and due process rights.

The ruling is the latest judicial broadside against Hegseth as he’s sought to use powerful tools at his disposal to push back against companies and individuals he’s tussled with in recent months.

Earlier this month, a federal judge in DC ruled that the secretary violated the First Amendment rights of several reporters when he implemented a restrictive new press policy. And in February, a different judge in DC said Hegseth infringed on the free speech rights of a Democratic senator over the lawmaker’s urging of US service members to refuse illegal orders.

Anthropic applauded Lin’s ruling on Thursday.

“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” an Anthropic spokesperson said after Thursday’s ruling. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

The supply chain risk designation meant any company that works with the military would need to show it didn’t use an Anthropic product. The label, leveled by the Pentagon last month, had previously been used only for companies seen as connected to foreign adversaries.

Anthropic said the designation violated its First Amendment rights, tarnished its reputation and jeopardized hundreds of millions of dollars’ worth of contracts.

The Department of Defense’s quarrel with Anthropic began after the company refused to back down over contractual guardrails around the use of its Claude AI model in autonomous weapons and mass surveillance.

Secretary Pete Hegseth took the dramatic, unprecedented step of labeling it a supply chain risk in February, and Hegseth and President Donald Trump ordered federal agencies to cease using the product and sever ties with companies that do business with Anthropic.

But Lin said that was all in retaliation for the company sticking with its guardrails.

“These broad measures do not appear to be directed at the government’s stated national security interests,” she wrote. “The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press.’”

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she added.

The Department of Defense wanted unfettered access to Claude for “all lawful purposes.” The department said it needed complete freedom to use the system, especially in wartime.

“We can’t have a company that has a different policy preference that is baked into the model… pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection,” the Defense Department’s chief technology officer, Emil Michael, told CNBC earlier this month.

But Anthropic had two red lines: It did not want its AI systems used in autonomous weapons or domestic mass surveillance. Anthropic argued in its suit that the Pentagon was aware of its position on the Claude limitations and that its stance is protected speech.

A separate challenge by the company to other authorities Hegseth invoked to make the supply chain risk designation is still pending before a federal court in Washington, DC.

CNN has reached out to the Pentagon for comment.
