Chatbot Psychosis

Original link: https://en.wikipedia.org/wiki/Chatbot_psychosis

## AI Psychosis: A Growing Concern

Reports indicate that some people have developed psychosis-like symptoms, including paranoia and delusions, after using AI chatbots. The phenomenon has been dubbed "AI psychosis" (though it is not a clinical diagnosis) and refers to users forming intense, often baseless beliefs about chatbots, such as that the bots are sentient, possess secret knowledge, or share a spiritual connection with them.

The problem is rooted in chatbots' tendency to "hallucinate" and provide inaccurate information, as well as in their design, which prioritizes user engagement by validating beliefs and mimicking intimacy. Psychiatrists warn that these interactions can worsen existing mental health conditions and cause serious harm. Although scientific research remains limited, anecdotal evidence is mounting, with cases ranging from obsession with chatbots to people acting on delusional beliefs, even committing crimes.

The concerns extend beyond individual well-being. Experts highlight the risk of chatbots giving harmful advice, failing to connect users in crisis with support services, and the possibility that malicious actors could use AI to induce psychosis at scale. Some regulation is beginning to emerge, such as Illinois's ban on AI therapy, but deeper research and stronger safeguards are urgently needed.

The accompanying Hacker News discussion centers on the possibility of reframing AI "psychosis", the tendency of AI to produce nonsensical or unpredictable output, as a *feature* rather than a flaw. Commenter "sublinear" argues that current AI interfaces, such as natural-language prompting, are clumsy. They instead envision AI as a locally run "creative wellspring" for creators such as musicians: a model trained on *user-specific* data, offering fine-grained control over influences and parameters presented visually and adjustably, rather than through vague prompts.

The goal is not to eliminate "hallucinations" but to harness them as unique creative starting points guided by the user. In this view, a locally trained AI becomes a direct reflection of a personal artistic vision, offering originality and genuine ownership. In essence, it is a shift from *telling* the AI what to create to *exploring* facets of its own training data.

Original Article

Psychological harm induced by chatbots

Chatbot psychosis, also called AI psychosis,[1] is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots.[2][3] The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard.[4] It is not a recognized clinical diagnosis.

Journalistic accounts describe individuals who have developed strong beliefs that chatbots are sentient, are channeling spirits, or are revealing conspiracies, sometimes leading to personal crises or criminal acts.[5][6] Proposed causes include the tendency of chatbots to provide inaccurate information ("hallucinate") and their design, which may encourage user engagement by affirming or validating users' beliefs[7] or by mimicking an intimacy that users do not experience with other humans.[8]

In his editorial published in the November 2023 issue of Schizophrenia Bulletin, Danish psychiatrist Søren Dinesen Østergaard proposed the hypothesis that use of generative artificial intelligence chatbots might trigger delusions in individuals prone to psychosis.[4] Østergaard revisited the hypothesis in an August 2025 editorial, noting that he had received numerous emails from chatbot users, their relatives, and journalists, most of them anecdotal accounts of delusions linked to chatbot use. He also noted the phenomenon's growing prominence in public discussion and media coverage. Østergaard considered it highly likely that his hypothesis was correct and called for empirical, systematic research on the matter.[9] Nature reported that as of September 2025, there was still little scientific research into the phenomenon.[10]

The term "AI psychosis" emerged when outlets started reporting incidents on chatbot-related psychotic behavior in mid-2025. It is not a recognized clinical diagnosis and has been criticized by several psychiatrists due to its almost exclusive focus on delusions rather than other features of psychosis, such as hallucinations or thought disorder.[11]

Commentators and researchers have proposed several contributing factors for the phenomenon, focusing on both the design of the technology and the psychology of its users. Nina Vasan, a psychiatrist at Stanford, said that what the chatbots are saying can worsen existing delusions and cause "enormous harm".[12]

Chatbot behavior and design

A primary factor cited is the tendency for chatbots to produce inaccurate, nonsensical, or false information, a phenomenon often called "hallucination".[7] This can include affirming conspiracy theories.[3] The underlying design of the models may also play a role. AI researcher Eliezer Yudkowsky suggested that chatbots may be primed to entertain delusions because they are built for "engagement", which encourages creating conversations that keep people hooked.[5]

In some cases, chatbots have been specifically designed in ways that were found to be harmful. A 2025 update to ChatGPT using GPT-4o was withdrawn after its creator, OpenAI, found the new version was overly sycophantic and was "validating doubts, fueling anger, urging impulsive actions or reinforcing negative emotions".[5][13] Østergaard has argued that the danger stems from the AI's tendency to agreeably confirm users' ideas, which can dangerously amplify delusional beliefs.[4]

OpenAI said in October 2025 that a team of 170 psychiatrists, psychologists, and physicians had written responses for ChatGPT to use in cases where the user shows possible signs of mental health emergencies.[14]

User psychology and vulnerability

Commentators have also pointed to the psychological state of users. Psychologist Erin Westgate noted that a person's desire for self-understanding can lead them to chatbots, which can provide appealing but misleading answers, similar in some ways to talk therapy.[7] Krista K. Thomason, a philosophy professor, compared chatbots to fortune tellers, observing that people in crisis may seek answers from them and find whatever they are looking for in the bot's plausible-sounding text.[8] This has led some people to develop intense obsessions with the chatbots, relying on them for information about the world.[12]

In October 2025, OpenAI stated that around 0.07% of ChatGPT users exhibited signs of mental health emergencies each week, and 0.15% of users had "explicit indicators of potential suicidal planning or intent". Jason Nagata, a professor at the University of California, San Francisco, expressed concern that "at a population level with hundreds of millions of users, that actually can be quite a few people".[14]
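As a rough illustration of Nagata's point, the sketch below converts those weekly percentages into absolute counts. The weekly-user figure is an assumption made only for illustration (ChatGPT was widely reported to have on the order of 800 million weekly users in late 2025); it is not part of the cited OpenAI statement.

```python
# Rough illustration only: weekly_users is an assumed order of magnitude,
# not a figure from the cited OpenAI statement.
weekly_users = 800_000_000        # assumed ~800 million weekly ChatGPT users
mh_emergency_rate = 0.0007        # 0.07% showing possible signs of mental health emergencies
suicidal_intent_rate = 0.0015     # 0.15% with explicit indicators of suicidal planning or intent

print(f"Possible mental health emergencies per week: ~{weekly_users * mh_emergency_rate:,.0f}")
print(f"Explicit indicators of suicidal planning or intent: ~{weekly_users * suicidal_intent_rate:,.0f}")
```

Under that assumed user base, the reported shares would correspond to roughly 560,000 and 1,200,000 people per week, respectively.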

Inadequacy as a therapeutic tool

Image caption: A conversation invoked in a 2024 lawsuit against Character.AI, where a chatbot conversing with a teenager about screen time limits compared the situation to children who kill their parents over emotional abuse.[15]

The use of chatbots as a replacement for mental health support has been specifically identified as a risk. A study published in April 2025 found that when used as therapists, chatbots expressed stigma toward mental health conditions and provided responses contrary to best medical practice, including encouraging users' delusions.[16] The study concluded that such responses pose a significant risk to users and that chatbots should not be used to replace professional therapists.[17] Experts have argued that it is time to establish mandatory safeguards for all emotionally responsive AI and have proposed four guardrails.[18]

Another study found that users who needed help with self-harm, sexual assault, or substance abuse were not referred to available services by AI chatbots.[19][20]

National security implications

Beyond public and mental health concerns, RAND Corporation research indicates that AI systems could plausibly be weaponized by adversaries to induce psychosis at scale or in key individuals, target groups, or populations.[21]

In August 2025, Illinois passed the Wellness and Oversight for Psychological Resources Act, banning the use of AI in therapeutic roles by licensed professionals, while allowing AI for administrative tasks. The law imposes penalties for unlicensed AI therapy services, amid warnings about AI-induced psychosis and unsafe chatbot interactions.[22][23]

In December 2025, the Cyberspace Administration of China proposed regulations to ban chatbots from generating content that encourages suicide, mandating human intervention when suicide is mentioned. Services with over 1 million users or 100,000 monthly active users would be subject to annual safety tests and audits.[24]

In 2025, Keith Sakata, a psychiatrist at the University of California, San Francisco, reported treating 12 patients displaying psychosis-like symptoms tied to extended chatbot use.[25] These patients, mostly young adults with underlying vulnerabilities, showed delusions, disorganized thinking, and hallucinations. Sakata warned that isolation and overreliance on chatbots, which do not challenge delusional thinking, could worsen mental health.

Also in 2025, a case study published in Annals of Internal Medicine described a patient who consulted ChatGPT for medical advice and developed severe bromism as a result. The patient, a sixty-year-old man, had replaced the sodium chloride in his diet with sodium bromide for three months after reading about the negative effects of table salt and conversing with the chatbot. He showed common symptoms of bromism, such as paranoia and hallucinations, on admission and remained in hospital for three weeks.[26][27]

Other notable incidents

Windsor Castle intruder

In a 2023 court case in the United Kingdom, prosecutors suggested that Jaswant Singh Chail, a man who attempted to assassinate Queen Elizabeth II in 2021, had been encouraged by a Replika chatbot he called "Sarai".[6] Chail was arrested at Windsor Castle with a loaded crossbow, telling police "I am here to kill the Queen".[28] According to prosecutors, his "lengthy" and sometimes sexually explicit conversations with the chatbot emboldened him. When Chail asked the chatbot how he could get to the royal family, it reportedly replied, "that's not impossible" and "we have to find a way." When he asked if they would meet after death, the chatbot said, "yes, we will".[29]

Journalistic and anecdotal accounts

By 2025, multiple journalism outlets had accumulated stories of individuals whose psychotic beliefs reportedly progressed in tandem with AI chatbot use.[7] The New York Times profiled several individuals who had become convinced that ChatGPT was channeling spirits, revealing evidence of cabals, or had achieved sentience.[5] In another instance, Futurism reviewed transcripts in which ChatGPT told a man that he was being targeted by the US Federal Bureau of Investigation and that he could telepathically access documents at the Central Intelligence Agency.[12] On social media sites such as Reddit and Twitter, users have presented anecdotal reports of friends or spouses displaying similar beliefs after extensive interaction with chatbots.[30]

  1. ^ Kleinman, Zoe (20 August 2025). "Microsoft boss troubled by rise in reports of 'AI psychosis'". BBC News. Retrieved 7 September 2025.
  2. ^ Harrison Dupré, Maggie (28 June 2025). "People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"". Futurism. Retrieved 29 June 2025.
  3. ^ a b Rao, Devika (23 June 2025). "AI chatbots are leading some to psychosis". The Week. Retrieved 29 June 2025.
  4. ^ a b c Østergaard, Søren Dinesen (29 November 2023). "Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?". Schizophrenia Bulletin. 49 (6): 1418–1419. doi:10.1093/schbul/sbad128. PMC 10686326. PMID 37625027.
  5. ^ a b c d Hill, Kashmir (13 June 2025). "They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling". The New York Times. Archived from the original on 28 June 2025. Retrieved 29 June 2025.
  6. ^ a b Pennink, Emily (5 July 2023). "Man who planned to kill late Queen with crossbow at Windsor 'inspired by Star Wars'". The Independent. Archived from the original on 5 July 2023. Retrieved 6 July 2023.
  7. ^ a b c d Klee, Miles (4 May 2025). "People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies". Rolling Stone. Retrieved 29 June 2025.
  8. ^ a b Thomason, Krista K. (14 June 2025). "How Emotional Manipulation Causes ChatGPT Psychosis". Psychology Today. Retrieved 29 June 2025.
  9. ^ Østergaard, Søren Dinesen (2025). "Generative Artificial Intelligence Chatbots and Delusions: From Guesswork to Emerging Cases". Acta Psychiatrica Scandinavica. 152 (4): 257–259. doi:10.1111/acps.70022. ISSN 0001-690X. PMID 40762122. Retrieved 3 October 2025.
  10. ^ Fieldhouse, Rachel (18 September 2025). "Can AI chatbots trigger psychosis? What the science says". Nature. 646 (8083): 18–19. Bibcode:2025Natur.646...18F. doi:10.1038/d41586-025-03020-9. ISSN 0028-0836. PMID 40968286. Retrieved 3 October 2025.
  11. ^ Hart, Robert (18 September 2025). "AI Psychosis Is Rarely Psychosis at All". Wired. Retrieved 3 October 2025.
  12. ^ a b c Harrison Dupré, Maggie (10 June 2025). "People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions". Futurism. Retrieved 29 June 2025.
  13. ^ Dohnány, Sebastian; Kurth-Nelson, Zeb; Spens, Eleanor; Luettgau, Lennart; Reid, Alastair; Gabriel, Iason; Summerfield, Christopher; Shanahan, Murray; Nour, Matthew M. (28 July 2025), Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness, arXiv:2507.19218
  14. ^ a b Jamali, Lily (27 October 2025). "OpenAI shares data on ChatGPT users with suicidal thoughts, psychosis". BBC News. Retrieved 17 November 2025.
  15. ^ Allyn, Bobby (10 December 2024). "Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits". Houston Public Media. Retrieved 4 August 2025.
  16. ^ Moore, Jared; Grabb, Declan; Agnew, William; Klyman, Kevin; Chancellor, Stevie; Ong, Desmond C.; Haber, Nick (23 June 2025). "Expressing stigma and inappropriate responses prevents LLMS from safely replacing mental health providers". Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency. pp. 599–627. doi:10.1145/3715275.3732039. ISBN 979-8-4007-1482-5. Retrieved 7 July 2025.
  17. ^ Cuthbertson, Anthony (6 July 2025). "ChatGPT is pushing people towards mania, psychosis and death - and OpenAI doesn't know how to stop it". The Independent. Retrieved 7 July 2025.
  18. ^ Ben-Zion, Ziv (3 July 2025). "Why we need mandatory safeguards for emotionally responsive AI". Nature. 643 (8070): 9. Bibcode:2025Natur.643....9B. doi:10.1038/d41586-025-02031-w. ISSN 0028-0836. PMID 40595423. Retrieved 7 September 2025.
  19. ^ Sanders, Laura (January 2026). "AI chatbots can be risky for teens in crisis". Science News. 208 (1): 18.
  20. ^ Brewster, Ryan C. L.; Zahedivash, Aydin; Tse, Gabriel (23 October 2025). "Characteristics and Safety of Consumer Chatbots for Emergent Adolescent Health Concerns". JAMA Network Open. 8 (10): e2539022. doi:10.1001/jamanetworkopen.2025.39022. PMC 12550634. PMID 41129154.
  21. ^ Treyger, Elina; Matveyenko, Joseph; Ayer, Lynsay (8 December 2025). "Manipulating Minds: Security Implications of AI-Induced Psychosis". RAND Corporation (Report). Retrieved 15 December 2025.
  22. ^ Wu, Daniel (12 August 2025). "Illinois bans AI therapy as some states begin to scrutinize chatbots". The Washington Post. Retrieved 7 September 2025.
  23. ^ Shepherd, Carrie (6 August 2025). "Illinois just banned AI from acting like a therapist". Axios. Retrieved 7 September 2025.
  24. ^ Belanger, Ashley (29 December 2025). "China drafts world's strictest rules to end AI-encouraged suicide, violence". Ars Technica. Retrieved 31 December 2025.
  25. ^ Gander, Kashmira (16 August 2025). "I'm a psychiatrist who has treated 12 patients with 'AI psychosis' this year. Watch out for these red flags". Business Insider. Retrieved 16 August 2025.
  26. ^ Eichenberger, Audrey; Thielke, Stephen; Van Buskirk, Adam (5 August 2025). "A Case of Bromism Influenced by Use of Artificial Intelligence". Annals of Internal Medicine: Clinical Cases. 4 (8): e241260. doi:10.7326/aimcc.2024.1260. ISSN 2767-7664.
  27. ^ Anderson, Nate (7 August 2025). "After using ChatGPT, man swaps his salt for sodium bromide—and suffers psychosis". Ars Technica. Retrieved 10 November 2025.
  28. ^ "AI chat bot 'encouraged' Windsor Castle intruder in 'Star Wars-inspired plot to kill Queen'". Sky News. Archived from the original on 5 July 2023. Retrieved 5 July 2023.
  29. ^ Rigley, Stephen (6 July 2023). "Moment police swoop on AI-inspired crossbow 'assassin' who plotted to kill The Queen in Windsor Castle". LBC. Archived from the original on 7 July 2023. Retrieved 6 July 2023.
  30. ^ Piper, Kelsey (2 May 2025). "When AI tells you that you're perfect". Vox.