Tech companies shouldn't be bullied into doing surveillance

Original link: https://www.eff.org/deeplinks/2026/02/tech-companies-shouldnt-be-bullied-doing-surveillance

Anthropic, an AI company cleared for classified operations, is facing pressure from the U.S. Department of Defense to lift restrictions on its technology. The DoD has reportedly threatened to label Anthropic a "supply chain risk" (effectively barring it from government contracts) if it does not allow the military unrestricted use of its technology, particularly for autonomous weapons and surveillance. The conflict stems from Anthropic's suspicion that its AI was used in a strike on Venezuela, after which the company reiterated its commitment to preventing its technology from being used for those specific applications, calling them "bright red lines." Despite the earlier clearance, the government now seeks to bypass these ethical safeguards. The situation is a critical test for Anthropic: will it prioritize principles over profit and resist government coercion? Many are urging the company to stand by its stated values and refuse to become a tool of surveillance, setting an important precedent for the responsible development and deployment of AI.

The Hacker News discussion centers on an article by the Electronic Frontier Foundation (EFF) arguing that tech companies should not be pressured into cooperating with government surveillance. One key observation is that the tech industry's stance on privacy appears to have shifted: companies showed more resistance in the past (especially during the Iraq War era) than they do today. Users express disappointment in companies like Apple, seeing them as having abandoned principles for political expediency. Anthropic is currently praised as one of the rare large tech companies showing backbone, though some caution against praising it prematurely, citing a lack of transparency in its dealings with government officials. Commenters also raise concerns about potential compromises by Anthropic itself (specifically, walking back a safety commitment, with a link provided in the comments). Overall, the discussion reflects growing distrust of tech giants and a belief that surveillance is inevitable regardless of initial promises. Some users are actively switching to alternatives such as Ubuntu.

Original article

The Secretary of Defense has given an ultimatum to the artificial intelligence company Anthropic in an attempt to bully it into making its technology available to the U.S. military without any restrictions on its use. Anthropic should stick by its principles and refuse to allow its technology to be used in the two ways it has publicly stated it would not support: autonomous weapons systems and surveillance. The Department of Defense has reportedly threatened to label Anthropic a "supply chain risk" in retribution for not lifting restrictions on how its technology is used. According to WIRED, that label would be "a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic's AI in their defense work."

Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance.

In 2025, Anthropic reportedly became the first AI company cleared for use in relation to classified operations and to handle classified information. The current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect its AI had been used during the January 3 attack on Venezuela. That month, Anthropic CEO Dario Amodei wrote to reiterate that surveillance of US persons and autonomous weapons systems were two "bright red lines" not to be crossed, or at least topics that needed to be handled with "extreme care and scrutiny combined with guardrails to prevent abuses." You can also read Anthropic's self-proclaimed core views on AI safety here, as well as the constitution of its LLM, Claude, here.

Now, the U.S. government is threatening to terminate the government’s contract with the company if it doesn’t switch gears and voluntarily jump right across those lines.  

Companies, especially technology companies, often fail to live up to their public statements and internal policies related to human rights and civil liberties for all sorts of reasons, including profit. Government pressure shouldn’t be one of those reasons. 

Whatever the U.S. government does to threaten Anthropic, the AI company should know that their corporate customers, the public, and the engineers who make their products are expecting them not to cave. They, and all other technology companies, would do best to refuse to become yet another tool of surveillance.

Contact us: contact @ memedata.com