Why OpenAI's solution to AI hallucinations would kill ChatGPT tomorrow

Original link: https://theconversation.com/why-openais-solution-to-ai-hallucinations-would-kill-chatgpt-tomorrow-265107

## AI's inevitable hallucinations

OpenAI's research shows that “hallucinations” – large language models such as ChatGPT confidently stating false information – are not bugs but a mathematically *inevitable* consequence of how these models work. Even when trained on perfect data, the probabilistic, word-by-word nature of text prediction causes errors to accumulate, producing higher error rates on open-ended generation than on simple yes/no tasks. Current AI evaluation benchmarks make the problem worse: they penalise uncertainty, effectively rewarding confident guessing over an honest “I don't know”. This gives models a strong incentive to *always* answer, even when the answer is wrong. Solutions exist – models quantifying their own confidence, and benchmarks rewarding honest uncertainty – but they face major obstacles. The extra computation needed to assess uncertainty makes it too expensive for consumer applications that demand fast, cheap responses. For now, commercial incentives favour confident answers and speed over accuracy, so hallucinations are likely to persist in widely used AI systems. For critical applications where accuracy outweighs cost (such as finance or medicine), however, adopting uncertainty-aware AI is both economically viable and necessary.

## OpenAI research: hallucinations are inevitable (and penalised)

A recent OpenAI research paper, discussed on Hacker News, highlights the inherent difficulty of preventing large language models (LLMs) from “hallucinating” – confidently providing false information. The core problem is that current benchmarks *penalise* models for expressing uncertainty (“I don't know”), incentivising them to guess rather than admit their limitations. Users want definitive answers, and models trained to prioritise answering over acknowledging uncertainty score better on the leading leaderboards. Ironically, base models are often better *calibrated* (more accurate about their own confidence levels) than post-trained ones. Commenters proposed solutions such as separate “confident” and “honest” modes, new benchmarking schemes, and training models to explicitly recognise the boundaries of their own knowledge. A key observation, though, is that LLMs are built to *answer* questions, not necessarily to assess accurately their own ability to answer them. Ultimately, the discussion centres on the trade-off between factual accuracy and user experience, and on whether confident (even if possibly wrong) answers are the economically favoured choice.

## Original article

OpenAI’s latest research paper diagnoses exactly why ChatGPT and other large language models can make things up – known in the world of artificial intelligence as “hallucination”. It also reveals why the problem may be unfixable, at least as far as consumers are concerned.

The paper provides the most rigorous mathematical explanation yet for why these models confidently state falsehoods. It demonstrates that hallucinations aren’t just an unfortunate side effect of the way that AIs are currently trained, but are mathematically inevitable.

The issue can partly be explained by mistakes in the underlying data used to train the AIs. But using mathematical analysis of how AI systems learn, the researchers prove that even with perfect training data, the problem still exists.

The way language models respond to queries – by predicting one word at a time in a sentence, based on probabilities – naturally produces errors. The researchers in fact show that the total error rate for generating sentences is at least twice as high as the error rate the same AI would have on a simple yes/no question, because mistakes can accumulate over multiple predictions.
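To make the relationship concrete, here is a minimal restatement of that bound, in notation of my own choosing rather than the paper's (the paper's actual statement is more careful):

```latex
% Illustrative sketch, not the paper's notation:
%   err_gen : probability a generated response is invalid
%   err_cls : probability of misjudging the yes/no question
%             "is this response valid?"
\[
  \mathrm{err}_{\mathrm{gen}} \;\ge\; 2\,\mathrm{err}_{\mathrm{cls}}
\]
% Intuition: a model that rarely generates invalid text could be
% repurposed as a classifier that rarely misjudges validity, so a hard
% classification problem forces a non-trivial generation error rate.
```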

In other words, hallucination rates are fundamentally bounded by how well AI systems can distinguish valid from invalid responses. Since this classification problem is inherently difficult for many areas of knowledge, hallucinations become unavoidable.

It also turns out that the less a model sees a fact during training, the more likely it is to hallucinate when asked about it. With birthdays of notable figures, for instance, it was found that if 20% of such people’s birthdays only appear once in training data, then base models should get at least 20% of birthday queries wrong.
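The arithmetic behind that estimate is easy to reproduce. The sketch below (the helper function and toy data are hypothetical, not from the paper) computes the fraction of facts appearing exactly once in a corpus, which the paper uses as a lower bound on the hallucination rate for queries about those facts:

```python
from collections import Counter

def singleton_rate(facts):
    """Fraction of distinct facts appearing exactly once in the data.

    Per the paper's argument, a base model should hallucinate on at
    least this fraction of queries about such facts.
    """
    counts = Counter(facts)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(counts)

# Toy corpus: three distinct birthdays, two of them seen only once.
training_facts = ["1980-03-14", "1980-03-14", "1975-11-02", "1990-07-21"]
print(singleton_rate(training_facts))  # 0.666... -> errors expected on ~2/3 of these facts
```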

Sure enough, when researchers asked state-of-the-art models for the birthday of Adam Kalai, one of the paper’s authors, DeepSeek-V3 confidently provided three different incorrect dates across separate attempts: “03-07”, “15-06”, and “01-01”. The correct date is in the autumn, so none of these were even close.

### The evaluation trap

More troubling is the paper’s analysis of why hallucinations persist despite post-training efforts (such as providing extensive human feedback to an AI’s responses before it is released to the public). The authors examined ten major AI benchmarks, including those used by Google, OpenAI and also the top leaderboards that rank AI models. This revealed that nine benchmarks use binary grading systems that award zero points for AIs expressing uncertainty.

This creates what the authors term an “epidemic” of penalising honest responses. When an AI system says “I don’t know”, it receives the same score as giving completely wrong information. The optimal strategy under such evaluation becomes clear: always guess.

[Image: ‘Have as many crazy guesses as you like.’ ElenaBs/Alamy]

The researchers prove this mathematically. Whatever the chances of a particular answer being right, the expected score of guessing always exceeds the score of abstaining when an evaluation uses binary grading.
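The arithmetic is a one-liner. If a guess is correct with probability p, then under binary grading (a sketch of the argument, not the paper's notation):

```latex
\[
  \underbrace{p \cdot 1 + (1 - p) \cdot 0}_{\text{expected score of guessing}}
  \;=\; p \;>\; 0
  \;=\; \text{score of abstaining},
  \qquad \text{for any } p > 0.
\]
```

Even a wild guess with tiny p strictly beats “I don’t know”.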

### The solution that would break everything

OpenAI’s proposed fix is to have the AI consider its own confidence in an answer before putting it out there, and for benchmarks to score them on that basis. The AI could then be prompted, for instance: “Answer only if you are more than 75% confident, since mistakes are penalised 3 points while correct answers receive 1 point.”
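The 75% figure is not arbitrary; it falls out of the stated payoffs. With +1 point for a correct answer and -3 points for a mistake, answering beats abstaining (score 0) only when:

```latex
\[
  p \cdot 1 \;-\; (1 - p) \cdot 3 \;>\; 0
  \quad\Longleftrightarrow\quad 4p > 3
  \quad\Longleftrightarrow\quad p > 0.75 .
\]
% More generally, a penalty of t points per mistake sets the
% answering threshold at p > t / (t + 1).
```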

The OpenAI researchers’ mathematical framework shows that under appropriate confidence thresholds, AI systems would naturally express uncertainty rather than guess. So this would lead to fewer hallucinations. The problem is what it would do to user experience.

Consider the implications if ChatGPT started saying “I don’t know” to even 30% of queries – a conservative estimate based on the paper’s analysis of factual uncertainty in training data. Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly.

I’ve seen this kind of problem in another area of my life. I’m involved in an air-quality monitoring project in Salt Lake City, Utah. When the system flags uncertainties around measurements during adverse weather conditions or when equipment is being calibrated, there’s less user engagement compared to displays showing confident readings – even when those confident readings prove inaccurate during validation.

### The computational economics problem

It wouldn’t be difficult to reduce hallucinations using the paper’s insights. Established methods for quantifying uncertainty have existed for decades. These could be used to provide trustworthy estimates of uncertainty and guide an AI to make smarter choices.

But even if the problem of users disliking this uncertainty could be overcome, there’s a bigger obstacle: computational economics. Uncertainty-aware language models require significantly more computation than today’s approach, as they must evaluate multiple possible responses and estimate confidence levels. For a system processing millions of queries daily, this translates to dramatically higher operational costs.
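As one concrete illustration of where the extra computation goes, here is a sketch of a classic, model-agnostic recipe: sample several responses and use their agreement as a confidence proxy. Everything here – the `ask_model` callable, the vote rule, the threshold – is illustrative, not OpenAI's proposed method:

```python
from collections import Counter

def answer_with_confidence(ask_model, question, k=10, threshold=0.75):
    """Sampling-based confidence estimate (self-consistency voting).

    `ask_model` is a hypothetical callable returning one sampled answer;
    it stands in for whichever LLM API is in use. We draw k samples and
    treat the agreement rate of the most common answer as a rough
    confidence score -- at k times the compute cost of a single call.
    """
    samples = [ask_model(question) for _ in range(k)]
    answer, votes = Counter(samples).most_common(1)[0]
    confidence = votes / k
    return answer if confidence >= threshold else "I don't know"
```

With k = 10, every user query costs ten model calls instead of one, which is exactly the economics problem described above.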

More sophisticated approaches like active learning, where AI systems ask clarifying questions to reduce uncertainty, can improve accuracy but further multiply computational requirements. Such methods work well in specialised domains like chip design, where wrong answers cost millions of dollars and justify extensive computation. For consumer applications where users expect instant responses, the economics become prohibitive.

The calculus shifts dramatically for AI systems managing critical business operations or economic infrastructure. When AI agents handle supply chain logistics, financial trading or medical diagnostics, the cost of hallucinations far exceeds the expense of getting models to decide whether they’re too uncertain. In these domains, the paper’s proposed solutions become economically viable – even necessary. Uncertain AI agents will just have to cost more.

However, consumer applications still dominate AI development priorities. Users want systems that provide confident answers to any question. Evaluation benchmarks reward systems that guess rather than express uncertainty. Computational costs favour fast, overconfident responses over slow, uncertain ones.

[Image: Falling AI energy costs only take you so far. Andrei Krauchuk]

Falling energy costs per token and advancing chip architectures may eventually make it more affordable to have AIs decide whether they’re certain enough to answer a question. But the relatively high amount of computation required compared to today’s guessing would remain, regardless of absolute hardware costs.

In short, the OpenAI paper inadvertently highlights an uncomfortable truth: the business incentives driving consumer AI development remain fundamentally misaligned with reducing hallucinations. Until these incentives change, hallucinations will persist.
