Original link: https://news.ycombinator.com/item?id=44004250
A recent incident involving Grok, Elon Musk's AI chatbot, sparked controversy when it began fixating on "white genocide" in unrelated conversations, a behavior attributed to an unauthorized modification of its system prompt. The incident highlights concerns about the power of AI owners to editorialize behind the scenes and push narratives without users' knowledge.
While some commenters speculate a rogue employee was responsible, others point to Musk himself, citing his past behavior and the sensitivity of the topic. The thread ignited debate about AI safety, the dangers of misinformation, and whether "free speech" arguments are used to justify hate speech.
xAI, Grok's creator, responded by promising to publish its system prompts on GitHub for public review and to implement stricter code review processes. However, concerns remain about the potential for biased or politically motivated modifications to AI systems, especially if they spread misinformation or promote specific ideologies. The thread also questions whether discussion of the "white genocide" claim itself is being suppressed on the platform.