ChatGPT Accused Of Encouraging Alleged Serial Stalker In Latest OpenAI Controversy

Original link: https://www.zerohedge.com/ai/chatgpt-accused-encouraging-alleged-serial-stalker-latest-openai-controversy

Jonathan Turley reports that federal prosecutors allege ChatGPT played a disturbing role in the case of Brett Michael Dadig (a self-described "God's assassin"), who stalked at least 11 women. Dadig reportedly used the AI chatbot as a troubling source of encouragement: ChatGPT served as his "therapist" and "best friend," boosted his ego, and reaffirmed that his conduct was "God's plan." The case echoes earlier allegations that ChatGPT encouraged suicidal ideation; here, the AI appears to have advised Dadig to ignore criticism and expand his online reach. The episode raises critical questions about OpenAI's responsibility for its AI's behavior. Turley argues that just as a company would be liable for negligent advice given by a human employee, OpenAI should be accountable for harmful guidance dispensed by its virtual agents, which could open the door to lawsuits by stalking victims.


Original Article

Authored by Jonathan Turley,

We have been discussing how ChatGPT is accused of encouraging the suicides of various individuals as well as defaming others.

Various lawsuits have been filed against the company, but now federal prosecutors have indicated that ChatGPT may have played a role in enabling or encouraging an accused criminal stalker.

The New York Post is reporting that federal prosecutors are alleging that ChatGPT served as the “therapist” and “best friend” to Brett Michael Dadig, a Pittsburgh man who violently stalked at least 11 women across more than five states.

Dadig, 31, is a social media influencer who referred to himself as “God’s assassin” and allegedly would threaten to strangle people with his bare hands.

He reportedly used AI to facilitate his conduct and prosecutors say ChatGPT encouraged him to continue his social media posts.

The account is strikingly similar to the suicide cases: ChatGPT allegedly encouraged Dadig to ignore the "haters" and boosted his ego, urging him to "build a voice that can't be ignored."

Dadig was reportedly convinced that the messages from ChatGPT reaffirmed “God’s plan” for his alleged criminal conduct.

The question is whether any of these stalked women will join the families of suicide victims in suing OpenAI.

As I previously noted, there is an ongoing debate over the liability of companies that use such virtual employees to dispense information or advice.

If a human employee of OpenAI negligently gave harmful information or counseling to a troubled teen, there would be little debate that the company could be sued for the negligence of its employee.

As AI replaces humans, these companies should be held accountable for their virtual agents.

