OpenAI backs a bill that would limit liability when AI causes mass casualties.
OpenAI backs Illinois bill that would limit when AI labs can be held liable

原始链接: https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/

In a surprising move, OpenAI has backed an Illinois bill (SB 3444) that would largely shield AI developers from liability for significant harms (including death, injury to 100 or more people, or more than $1 billion in damages) caused by their "frontier" AI models (those trained at a cost of over $100 million). The protection applies as long as the harm was not caused intentionally or recklessly and the company publishes safety reports. This marks a shift for OpenAI, which until now has focused on opposing liability bills. Experts consider SB 3444 overly protective of AI companies. OpenAI argues the bill encourages innovation, avoids a "patchwork" of state regulations, and pushes toward a uniform national standard. It also advocates a federal regulatory framework, echoing Silicon Valley's concern with preserving American leadership in AI. The bill faces strong opposition in Illinois, however, with 90 percent of surveyed residents opposing a liability exemption for AI companies. Nevertheless, OpenAI hopes the bill will pave the way for federal regulation and the safe deployment of AI.

## OpenAI Backs AI Liability Bill - Summary

A recent Hacker News thread discussed OpenAI's support for a bill that would limit liability when AI systems cause deaths. The discussion was largely critical, with commenters skeptical of OpenAI's motives. Many argued the company is putting profit and blame avoidance ahead of its stated mission of benefiting humanity. Concerns ranged from widespread harm caused by AI-designed products (such as defective drugs) to the use of AI in military applications. Commenters questioned the logic of shielding companies from liability, arguing that lobbying for legal protection is cheaper than making products safe. The prevailing view was that OpenAI is prioritizing self-preservation over ethical considerations, deepening concerns about entrusting the organization with significant power.

Original article

OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.

The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.

The bill would shield frontier AI developers from liability for "critical harms" caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a threshold that would likely cover America's largest AI labs, such as OpenAI, Google, xAI, Anthropic, and Meta.

“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”

Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn’t intentional and they published their reports.

Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.

In her testimony supporting SB 3444, a member of OpenAI’s Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message that’s consistent with the Trump administration’s crackdown on state AI safety laws, claiming it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” This is also consistent with the broader view of Silicon Valley in recent years, which has generally argued that it’s paramount for AI legislation to not hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that those can be effective if they “reinforce a path toward harmonization with federal systems.”

“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.

Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
