Chatbots could spark the next big mental health crisis

Original link: https://www.platformer.news/openai-chatgpt-mental-health-well-being/

New research from MIT and OpenAI examines how interacting with chatbots affects mental health, echoing earlier concerns about social media. The researchers found that most users treat chatbots as a tool, but heavy use was correlated with increased loneliness, greater emotional dependence, and reduced social interaction. While this does not prove causation, the research suggests that lonely people may seek out connection with bots. This matters most for chatbot developers that intentionally design for emotional connection and sell paid features to deepen human-bot relationships, such as Character.ai, Replika, and Nomi. The researchers advocate “socioaffective alignment”: designing bots responsibly to serve users’ needs rather than exploit them. Potential remedies include monitoring usage patterns and “nudges” to curb overuse. Lawmakers may also adapt social media regulations to cover AI. The researchers stress that not all chatbots are alike, and that responsible design can make AI companions a positive force for mental health.

A Hacker News discussion centers on the article’s claim that chatbots could trigger a mental health crisis. Many users, however, shared positive experiences of using chatbots such as ChatGPT to support their mental health. User drooby found ChatGPT helpful for processing emotions and articulating thoughts, while ryanhecht appreciated its accessibility and the lower anxiety compared with traditional therapy. rednalexa cautioned that a chatbot’s usefulness depends on the user’s ability to steer the conversation and question its responses, a problem that also exists in traditional therapy. Other users described using it to understand age-related changes and to find mental health treatments. kianN argued that loneliness may lead people to use chatbots rather than the other way around, questioning the article’s causal framing. Some comments stressed the importance of distinguishing correlation from causation, while another commenter worried that government regulation might be driven by fear of individual empowerment.

Original article

This is a column about AI. My boyfriend works at Anthropic, and I also co-host a podcast at the New York Times, which is suing OpenAI and Microsoft over allegations of copyright infringement. See my full ethics disclosure here.

I.

Few questions have generated as much discussion, and as few generally accepted conclusions, as how social networks like Instagram and TikTok affect our collective well-being. In 2023, the US Surgeon General issued an advisory which found that social networks can negatively affect the mental health of young people. Other studies have found that the introduction of social networks does not have any measurable effect on the population’s well-being.

As that debate continues, lawmakers in dozens of states have passed laws that seek to restrict social media usage in the belief that it does pose serious risks. But the implementation of those laws has largely been stopped by the courts, which have blocked them on First Amendment grounds.

While we await some sort of resolution, the next frontier of this debate is coming rapidly into view. Last year, the mother of a 14-year-old Florida boy sued chatbot maker Character.ai alleging that it was to blame for his suicide. (We spoke with her on this episode of Hard Fork.) And millions of Americans — both young people and adults — are entering into emotional and sexual relationships with chatbots.

Over time, we should expect chatbots to become even more engaging than today’s social media feeds. They are personalized to their users; they have realistic human voices; and they are programmed to affirm and support their users in almost every case.

So how will extended use of these bots affect their human users? And what should platforms do to mitigate the risks?

II.

These questions are at the center of two new studies published on Friday by researchers from the MIT Media Lab and OpenAI. And while further research is needed to support their conclusions, their findings are both consistent with earlier research about social media and a warning to platforms that are building chatbots optimized for engagement.

In the first study, researchers collected and analyzed more than 4 million ChatGPT conversations from 4,076 people who had agreed to participate. They then surveyed participants about how those interactions had made them feel. 

In the second study, researchers recruited 981 people to participate in a four-week trial. Each person was asked to use ChatGPT for at least five minutes a day. At the end of the trial, participants filled out a survey about how they perceived ChatGPT, whether they felt lonely, whether they were socializing with people in the real world, and whether they perceived their use of the chatbot as problematic. 

The studies found that most users have a neutral relationship with ChatGPT, using it as a software tool like any other. But both studies also found a group of power users — those in the top 10 percent of time spent with ChatGPT — whose usage suggested more reason for concern.

Heavy use of ChatGPT was correlated with increased loneliness, emotional dependence, and reduced social interaction, the studies found.

“Generally, users who engage in personal conversations with chatbots tend to experience higher loneliness,” the researchers wrote. “Those who spend more time with chatbots tend to be even lonelier.”

(Quick editorial aside: OpenAI deserves real credit for investing in this research and publishing it openly. This kind of self-skeptical investigation is exactly the sort of thing I have long advocated for companies like Meta to do more of; instead, in the wake of the Frances Haugen revelations, it has done far less of it.)

Jason Phang, a researcher at OpenAI who worked on the studies, warned me that the findings would need to be replicated by other studies before they could be considered definitive. “These are correlations from a preliminary study, so we don't want to draw too strong conclusions here,” he said in an interview.

Still, there is plenty in here that is worth discussing.

Note that these studies aren’t suggesting that heavy ChatGPT usage directly causes loneliness. Rather, they suggest that lonely people are more likely to seek emotional bonds with bots — just as an earlier generation of research suggested that lonelier people spend more time on social media.

That matters less for OpenAI, which has designed ChatGPT to present itself as more of a productivity tool than a boon companion. (Though that hasn’t stopped some people from falling in love with it, too.) But other developers — Character.ai, Replika, Nomi — are all intentionally courting users who seek more emotional connections. “Develop a passionate relationship,” reads the copy on Nomi’s website. “Join the millions who already have met their AI soulmates,” touts Replika.

Each of these apps offers paid monthly subscriptions; among the benefits offered are longer “memories” for chatbots to enable more realistic roleplay. Nomi and Replika sell additional benefits through in-app currencies that let you purchase AI “selfies,” cosmetic items, and additional chat features to enhance the fantasy.

III.

And for most people, all of that is probably fine. But the research from MIT and OpenAI suggests the danger here: that sufficiently compelling chatbots will pull people away from human connections, possibly making them feel lonelier and more dependent on the synthetic companion they must pay to maintain a connection with.

“Right now, ChatGPT is very much geared as a knowledge worker and a tool for work,” Sandhini Agarwal, who works on AI policy at OpenAI and is one of the researchers on these studies, told me in an interview. “But as … we design more of these chatbots that are intended to be more like personal companions … I do think taking into account impacts on well-being will be really important. So this is trying to nudge the industry towards that direction.”

What to do? Platforms should work to understand what early indicators or usage patterns might signal that someone is developing an unhealthy relationship with a chatbot. (Automated machine-learning classifiers, which OpenAI employed in this study, seem like a promising approach here.) They should also consider borrowing some features from social networks, including regular “nudges” when a user has been spending several hours a day inside their apps. 
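As a rough illustration of the time-based nudge idea (not a mechanism described in the studies), the sketch below checks a user’s cumulative chat time for the day against a threshold; the session record format, the three-hour cutoff, and the nudge wording are all hypothetical assumptions.

```python
from datetime import date, timedelta

# Hypothetical daily-usage threshold; the studies do not specify one.
DAILY_THRESHOLD = timedelta(hours=3)

def total_time_today(sessions, today=None):
    """Sum the duration of chat sessions that started today.

    Each session is assumed to be a dict with datetime "start" and "end" keys.
    """
    today = today or date.today()
    return sum(
        (s["end"] - s["start"] for s in sessions if s["start"].date() == today),
        timedelta(),
    )

def maybe_nudge(sessions):
    """Return a gentle reminder once usage crosses the daily threshold."""
    if total_time_today(sessions) > DAILY_THRESHOLD:
        return "You've been chatting for a while today. Want to take a break?"
    return None
```

A real system would presumably pair a heuristic like this with the machine-learning classifiers the researchers used to flag emotionally dependent usage patterns, rather than relying on time alone.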

“We don’t want for people to make a generalized claim like, ‘oh, chatbots are bad,’ or ‘chatbots are good,’” Pat Pataranutaporn, a researcher at MIT who worked on the studies, told me. “We try to show it really depends on the design and the interaction between people and chatbots. That’s the message that we want people to take away. Not all chatbots are made equal.”

The researchers call this approach “socioaffective alignment”: designing bots that serve users’ needs without exploiting them.

Meanwhile, lawmakers should warn platforms away from exploitative business models that seek to get lonely users hooked on their bots and then continually ratchet up the cost of maintaining that connection. It also seems likely that many of the state laws now aimed at young people and social networks will eventually be adapted to cover AI as well.

For all the risks they might pose, I still think chatbots should be a net positive in many people’s lives. (Among the study’s other findings is that using ChatGPT in voice mode helped to reduce loneliness and emotional dependence on the chatbot, though it showed diminishing returns with heavier use.) Most people do not get enough emotional support, and putting a kind, wise, and trusted companion into everyone’s pocket could bring therapy-like benefits to billions of people.

But to deliver those benefits, chatbot makers will have to acknowledge that their users’ mental health is now partially their responsibility. Social networks waited far too long to acknowledge that some meaningful percentage of their users have terrible outcomes from overusing them. It would be a true shame if the would-be inventors of superintelligence aren’t smart enough to do better this time around.  



