Newcomb's Paradox Needs a Demon

Original link: https://samestep.com/blog/newcombs-paradox/

## Newcomb's Paradox: A Critique of Veritasium's Framing

Newcomb's paradox poses a fascinating decision problem: take only an opaque box (predicted to contain $1,000,000 if you take it alone, $0 if you take both), or take both that opaque box and a transparent box containing $1,000. Veritasium recently explored the problem, but one commentator argues their framing is flawed.

The core issue is not *how* the predictor (supercomputer, alien, or psychologist) works, but whether it is *always* right. If the predictor has *ever* been wrong, the paradox loses its force. The commentator points out that a genuinely random choice, like a coin flip, is unpredictable, and a predictor that relies on past behavior cannot account for it.

Veritasium's probabilistic argument, which assumes a uniform prediction accuracy, is also challenged. It rests on the assumption that *how* you choose does not affect that accuracy, a shaky premise if you deliberately try to outsmart the predictor. The original paradox relies on a perfect predictor; introducing mistakes fundamentally changes the problem and makes it far less intellectually interesting. The debate seems far from settled; earlier explorations have even reached into quantum randomness in search of a resolution.

Hacker News submission: Newcomb's Paradox Needs a Demon (samestep.com), 4 points by sestep

Original article

Today Veritasium published a video about Newcomb's paradox: you walk into a room with one transparent box containing $1000, and one opaque box. You're allowed to either take both boxes or just take the opaque box. Before you walk in, a supercomputer predicted which choice you'd make, and put $1000000 in the opaque box if it predicted you'd take just the one, or $0 if it predicted you'd take both. Thousands of people have gone through the room, and the computer has always been right. What should you do?
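The setup above can be summarized as a small payoff table. This is just a sketch; the "one-box"/"two-box" labels are my own shorthand for the two choices:

```python
# Payoffs for Newcomb's paradox, keyed by (your choice, predictor's prediction).
# The predictor fills the opaque box with $1,000,000 only if it predicted
# you'd one-box; the transparent box always holds $1,000.
PAYOFFS = {
    ("one-box", "one-box"): 1_000_000,  # opaque box filled, you take only it
    ("one-box", "two-box"): 0,          # opaque box empty, you take only it
    ("two-box", "one-box"): 1_001_000,  # opaque box filled, plus the $1,000
    ("two-box", "two-box"): 1_000,      # opaque box empty, plus the $1,000
}

# If the predictor is always right, only the "diagonal" outcomes are reachable:
print(PAYOFFS[("one-box", "one-box")])  # 1000000
print(PAYOFFS[("two-box", "two-box")])  # 1000
```

The off-diagonal entries are exactly what the two-boxing argument appeals to: whatever the prediction was, taking both boxes is worth $1,000 more, yet if the predictor is never wrong, those entries never occur.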

I'm a fan of this paradox and I like Veritasium. But the way they describe the setup is wrong:

Don't worry about how the supercomputer is making its prediction. Instead of a computer, you could think of it as a superintelligent alien, a cunning demon, or even a team of the world's best psychologists. It really doesn't matter who or what is making the prediction.

It actually matters a lot.

Demons

If the predictor is Laplace's demon and genuinely knows the location and momentum of every particle in the universe, then sure, I buy that it can accurately predict what I'll choose.

Computers and psychologists

If the predictor is a computer or a team of psychologists, they can still predict a lot. The main discussion in the video is about how the predictor knows what thought process you're gonna use to decide whether to one-box or two-box, which I totally agree a computer or psychologist could predict by knowing about your history and personality.

But come on. You know some smartasses are gonna walk in there and flip a coin. No supercomputer on Earth can predict that consistently.
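As a toy illustration of the coin-flip point (the predictor's strategy here is my own assumption, a stand-in for anything that extrapolates from past behavior), a history-based predictor does no better than chance against someone who randomizes:

```python
import random
from collections import Counter

rng = random.Random(0)  # fixed seed so the experiment is reproducible
tally = Counter()       # how often each choice has been made so far
correct = 0
N = 10_000

for _ in range(N):
    # The predictor extrapolates from past behavior: it predicts whichever
    # choice this person has made most often before (defaulting to one-box).
    prediction = max(tally, key=tally.get) if tally else "one-box"
    choice = rng.choice(["one-box", "two-box"])  # the smartass flips a coin
    if prediction == choice:
        correct += 1
    tally[choice] += 1

print(correct / N)  # hovers around 0.5: no better than chance
```

Any deterministic prediction rule fares the same against a fair coin, which is the heart of the objection: "knows your history and personality" buys nothing here.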

So, my complaint about the Veritasium video is that they present it as if it doesn't matter whether there's anything supernatural going on.

Probabilities

Technically, I didn't present the problem in exactly the same way that Veritasium did. They only said that the computer has almost always been correct, not that it has always been correct. And that of course also completely changes the problem: if the computer has made mistakes in the past, then it's possible for it to make mistakes in the future.

In the video, Gregor gives an argument for one-boxing based on probabilities. He starts by assuming that the probability the computer guesses your answer correctly is some fixed value p, the same no matter how you make your choice. Under that assumption, one-boxing has the higher expected value whenever p is even slightly above one half. But that assumption is exactly what's in question: if you deliberately try to outsmart the predictor, say by flipping a coin, there's no reason to believe its accuracy stays at p.

Anyways, I think that the problem is far less interesting if the predictor has been wrong before. Typically, the predictor is presented as having never been wrong, which makes the problem actually interesting from a decision-theoretic standpoint.

Nothing new under the sun

Obviously I'm not the first one to think of this. Before I wrote this post, I Googled "newcomb's paradox flip a coin" and found these results just on the front page:

  • In Nozick's original 1969 paper, the predictor has never been wrong before, and also has one additional twist to its behavior: if it predicts that "you will consciously randomize your choice," then it puts $0 in the opaque box. I guess that's one possible answer to my complaint, but in my opinion it just pushes the problem back further: what exactly constitutes "consciously randomizing your choice"? Flipping a coin isn't truly random, it's just chaotic. Are not my typical brain processes also chaotic? Where do we draw the line?

  • A 2010 paper titled "A Study of Quantum Strategies for Newcomb's Paradox". I haven't read the whole paper, but it's a lot more rigorous than what I've laid out here, exploring the idea of not just flipping a coin, but producing genuine randomness using some quantum shenanigans.

  • A 2021 Hacker News thread in which someone proposes flipping a coin and another person responds with an interesting connection to the halting problem.

  • A 2023 Medium comment asserting that the setup is invalid because an AI cannot predict a coin flip.

  • A 2023 blog post in which the amount of money in the opaque box is reduced so that a random strategy actually gives you a higher expected value than just always picking the opaque box by itself.

Such a fun paradox! I suspect that these arguments will continue indefinitely. :)
