San Francisco Judge Voices Concerns Over War Department's Ban Of Anthropic

Original link: https://www.zerohedge.com/political/democrat-judge-voices-concerns-over-war-departments-ban-anthropic

The AI company Anthropic is suing the U.S. Department of War over its designation as a supply-chain risk, effectively a blacklisting that bars it from federal contracts, issued on national security grounds. The dispute stems from Anthropic's refusal to remove safety guardrails from its AI product, Claude, that prevent its use for mass surveillance and autonomous weapons, even though the Pentagon says it has no intention of using it that way. President Trump publicly criticized Anthropic, accusing the company of trying to dictate military policy. Anthropic contends the designation is unlawful retaliation that could cost it billions of dollars, while the government worries the company might sabotage military systems, possibly through future updates, if it disagrees with how they are used. A federal judge recently heard arguments and voiced concern that the designation appears overly broad and may be punishing the company for its public criticism, suggesting a possible First Amendment violation. She questioned why the government could not simply stop using Claude rather than impose a total ban. A ruling on whether to temporarily block the designation is expected soon.


Original article

Authored by Matthew Vadum via The Epoch Times (emphasis ours),

A federal judge on March 24 seemed receptive to Anthropic’s request to temporarily block the Department of War from continuing to designate the AI company as a supply-chain risk.

The designation, under a federal law designed to protect military systems from foreign sabotage, functions as a blacklist, preventing the company from doing business with the federal government and its contractors.

If the block is granted, the company would be able to continue doing business with federal agencies and their contractors while the lawsuit moves forward in court.

The AI developer sued the department in the U.S. District Court for the Northern District of California on March 9, after the federal government issued the designation, citing national security concerns about the company. Anthropic also filed a separate challenge to the designation the same day in the U.S. Court of Appeals for the District of Columbia Circuit.

President Donald Trump and War Secretary Pete Hegseth previously announced a federal boycott of Anthropic, directing federal agencies, contractors, and suppliers to end ties to the company.

On social media, Trump said Anthropic was attempting to “strong-arm” the federal government and officials elected by the American people by dictating its military policy.

“WE will decide the fate of our Country—NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump said in a Feb. 27 Truth Social post.

The lawsuits came after Anthropic said it declined to change the user policy for its AI product, Claude, to remove safety guardrails preventing its use for mass surveillance and fully autonomous weapons.

The Department of War has said that it has no intent to use Claude for those purposes.

The department said on March 5 that the issue comes down to ensuring the military can use technology for “all lawful purposes.”

“The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk,” the Pentagon said in a statement provided to multiple news outlets.

Anthropic’s technology is embedded in classified military platforms, including those that are already in use in current U.S. operations against the Islamic Republic of Iran in Operation Epic Fury.

The company alleged that the supply-chain risk designation constitutes unlawful retaliation and that it could lose billions of dollars if the federal boycott of its products continues. The designation requires that defense contractors certify that they are not using Claude when working with the department.

At a preliminary injunction hearing in San Francisco on March 24, U.S. District Judge Rita F. Lin (D) said the department’s actions against the company were “troubling” because they did not seem “tailored to the stated national security concerns,” and looked like “an attempt to cripple Anthropic.”

If the department were worried “about the integrity of the operational chain of command, [it] could just stop using Claude,” without banning Anthropic “from ever having another government contract,” she said.

She said the supply-chain risk designation applies to “adversaries of the U.S. government who may sabotage its technology systems.” Such designations typically apply to foreign intelligence, terrorists, and other hostile actors, the judge said.

Lin expressed concern that the government might be trying to punish Anthropic for “criticizing the government’s contracting position in the press,” adding that such punishment would violate the Constitution’s First Amendment.

Also at the hearing, Anthropic’s attorney, Mike Mongan, said the designation was the first time a U.S. company has been designated as a supply-chain risk.

“Everything we stand accused of in this proceeding is entirely above board,” Mongan said, adding the company has sincere concerns about the “profound implications” of using its AI tools for mass surveillance and fully autonomous weapons.

Deputy Assistant U.S. Attorney General Eric Hamilton said that the government’s national security concerns outweighed any alleged violations of Anthropic’s rights.

Given the company’s concerns about how the government will use its technology, the Department of War is worried that Anthropic may sabotage or subvert systems, possibly through technology updates, Hamilton said.

Mongan said the government has not identified any legal authority for the federal boycott of Anthropic. He urged the court to grant a preliminary injunction against the supply-chain risk designation, arguing the company was likely to succeed at trial on First Amendment and other grounds.

Hamilton told the court it should deny the injunction.

Failing to do so could allow a technology company to “gain leverage over the Department of War and decide which missions it can conduct,” the government lawyer said.

Anthropic asked Lin to rule on the company’s motion to stay the designation by March 26, but she is not required to meet that suggested deadline.

“I anticipate issuing an order in the next few days,” the judge said as she concluded the hearing.

Stacy Robinson, Aldgra Fredly, and The Associated Press contributed to this report.
