AI safety leader says 'world is in peril' and quits to study poetry

原始链接: https://www.bbc.com/news/articles/c62dlvdq3e3o

## AI concerns prompt resignations and reflection

Two AI researchers have recently resigned from leading companies, Anthropic and OpenAI, citing growing concerns about the technology's potential dangers. Mrinank Sharma, head of AI safety at Anthropic, warned that the world is "in peril" from interconnected crises including AI and bioweapons, and plans to study poetry. His departure follows a similar resignation at OpenAI, where Zoe Hitzig voiced concerns about the psychosocial impacts of AI-driven social interaction and of advertising in ChatGPT.

Both companies are at the forefront of the generative AI boom, but face scrutiny over how they balance safety against commercial interests. Anthropic positions itself as safety-focused and recently criticised OpenAI's move to introduce advertising. Both firms nonetheless face legal challenges: Anthropic over copyright claims, OpenAI over data privacy issues.

The resignations underline growing unease in the AI community about the technology's rapid development and deployment, and the struggle to keep ethical considerations central to its progress. The departing researchers suggest that stricter regulation and a more cautious approach are needed to avoid repeating the mistakes made with social media.

## AI safety leader quits to study poetry – summary

An AI safety leader at Anthropic has resigned, saying the "world is in peril", and plans to study poetry. The news sparked discussion on Hacker News, where many questioned the move. Some read it as genuine concern about global crises extending beyond AI risk, while others saw it as a privileged exit enabled by financial security.

The debate centres on whether resigning to pursue poetry is a meaningful response to perceived danger or a self-serving act. Some argued that staying in the field to advocate for change would be more impactful, while others defended the right to prioritise personal fulfilment.

Many commenters noted that the resignation letter itself emphasised broader societal problems, not just AI. Concerns were also raised about the accessibility of dangerous technologies such as AI-assisted bioterrorism, and about the potential for manipulation inherent in current AI systems. Ultimately, the episode highlights the tension between profiting from potentially harmful technology and actively working to mitigate its risks.

Original article

AI safety leader says 'world is in peril' and quits to study poetry

Liv McMahon, Technology reporter, and Ottilie Mitchell

An AI safety researcher has quit US firm Anthropic with a cryptic warning that the "world is in peril".

In his resignation letter shared on X, Mrinank Sharma told the firm he was leaving amid concerns about AI, bioweapons and the state of the wider world.

He said he would instead look to pursue writing and studying poetry, and move back to the UK to "become invisible".

It comes in the same week that an OpenAI researcher said she had resigned, sharing concerns about the ChatGPT maker's decision to deploy adverts in its chatbot.

Anthropic, best known for its Claude chatbot, had released a series of commercials aimed at OpenAI, criticising the company's move to include adverts for some users.

The company, which was formed in 2021 by a breakaway team of early OpenAI employees, has positioned itself as having a more safety-orientated approach to AI research compared with its rivals.

Sharma led a team there which researched AI safeguards.

He said in his resignation letter his contributions included investigating why generative AI systems suck up to users, combatting AI-assisted bioterrorism risks and researching "how AI assistants could make us less human".

But he said despite enjoying his time at the company, it was clear "the time has come to move on".

"The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment," Sharma wrote.

He said he had "repeatedly seen how hard it is to truly let our values govern our actions" - including at Anthropic which he said "constantly face pressures to set aside what matters most".

Sharma said he would instead look to pursue a poetry degree and writing.

He added in a reply: "I'll be moving back to the UK and letting myself become invisible for a period of time."

Those departing AI firms that have loomed large in the latest generative AI boom, and that have sought to retain talent with huge salaries or compensation offers, often do so with plenty of shares and benefits intact.

Eroding principles

Anthropic calls itself a "public benefit corporation dedicated to securing [AI's] benefits and mitigating its risks".

In particular, it has focused on preventing the risks it believes are posed by more advanced frontier systems, such as those systems becoming misaligned with human values, being misused in areas such as conflict, or growing too powerful.

It has released reports on the safety of its own products, including when it said its technology had been "weaponised" by hackers to carry out sophisticated cyber attacks.

But it has also come under scrutiny over its practices. In 2025, it agreed to pay $1.5bn (£1.1bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.

Like OpenAI, the firm also seeks to seize on the technology's benefits, including through its own AI products such as its ChatGPT rival Claude.

It recently released a commercial that criticised OpenAI's move to start running ads in ChatGPT.

OpenAI boss Sam Altman had previously said he hated ads and would use them as a "last resort".

Last week, he hit back at the advert's description of this as a "betrayal" - but was mocked for his lengthy post criticising Anthropic.


A former OpenAI researcher who resigned this week, in part due to fears of the use of advertising on ChatGPT, has told BBC Newsnight she feels "really nervous about working in the industry".

Zoe Hitzig said her concerns stemmed from the possible psychosocial impacts of a "new type of social interaction" that were not yet understood.

She noted "early warning signs" that dependence on AI tools was "worrisome" and could "reinforce certain kinds of delusions", as well as negatively impacting users' mental health in other ways.

"Creating an economic engine that profits from encouraging these kinds of new relationships before we understand them is really dangerous," she continued.

"We saw what happened with social media," she said, noting "there's still time to set up the social institutions, the forms of regulation that can actually govern this". It was, she said, a "critical moment".

Responding to BBC News, a spokesperson for OpenAI pointed to the firm's principles which state: "Our mission is to ensure AGI benefits all of humanity; our pursuit of advertising is always in support of that mission and making AI more accessible."

They added: "We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers."
