Do the thinking models think?

Original link: https://bytesauna.com/post/consciousness

The debate over whether machines can "think" has been with us since the dawn of computing; pioneers such as Turing and von Neumann wrestled with the question. Recent advances in large language models (LLMs) such as ChatGPT have reignited the discussion: their human-like replies prompt speculation about whether they are genuinely intelligent. Verifying consciousness remains impossible, however, and decoding the complex inner workings of these models is currently intractable. The author argues that, despite their impressive capabilities, LLMs are essentially "autocomplete on steroids": optimized to predict the next token, without genuine understanding, logic, or consistency. Their wrong answers to modified riddles demonstrate a tendency toward surface-level pattern matching. Yet dismissing LLMs as mere "autocomplete" seems insufficient. The author proposes a key distinction: human thinking works "top-to-bottom" (from ideas to symbols), while LLMs operate "bottom-to-top" (from symbols to ideas). This does not diminish their abilities; rather, it suggests a fundamentally different *kind* of intelligence. Ultimately, the author argues, the future is not about AI *replacing* humans but about collaboration: using these distinctive "bottom-to-top" thinkers to augment our own "top-to-bottom" reasoning.

Hacker News discussion (8 points, by mapehe, 3 comments):

exe34: Thinking/intelligence is like magic, to use Dennett's analogy. Any time a machine can do it, it becomes engineering; it's just a bag of tricks. The scope of "real" thinking/intelligence is an ever-shrinking set of tricks that only carbon-based brains produced through sex can pull off. In fact, I'm surprised they call IVF babies intelligent. Do they have souls?

mapehe: Hi. This is my company blog, which I use to write about whatever interests me. Hope you enjoy this week's post.

Jean-Papoulos: https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...

Original article

Whether machines can think is a classic debate that dates back to the intellectual titans of the 1950s.

Turing opens his 1950 article Computing Machinery and Intelligence with: "I propose to consider the question, 'Can machines think?'" The Hungarian genius von Neumann, shortly before his death in 1957, prepared a posthumously published monograph, The Computer and the Brain, on the same topic. In his 1950 article Programming a Computer for Playing Chess, Shannon writes: "chess is generally considered to require 'thinking' for skilful play; a solution of this problem will force us either to admit the possibility of a mechanized thinking or to further restrict our concept of 'thinking'."

Now that people are poking around with ChatGPT, they are stumbling upon the same question. This, I feel, is a breakthrough development: the new LLMs imitate humans well enough that users seriously speculate about whether these models are truly intelligent.

Unfortunately, it seems we are left to rely on conjecture. As the philosophical zombie thought experiment illustrates, consciousness (or the absence of it) cannot be empirically verified. On the other hand, decoding the internal logic of a machine learning model by studying its weights is, in any realistic scenario, an intractable task.

So it's a guessing game. Still, I feel there are enough ingredients to make the game interesting.

Are LLMs mere autocomplete?

Here’s a riddle which, according to a 2022 Harvard paper, originates from a 70s TV series, All in the Family.

“A father and his son are in a car accident. The father dies on the spot. The son is rushed to the ER. The attending surgeon looks at the boy and says, ‘I can not operate on this boy. He’s my son!’ How can this be?”

The answer is, of course, that the surgeon is the mother. The point of the "riddle" is that the listener assumes the surgeon must be male. (Given the gender stereotypes of the 1970s, maybe this riddle really was unintuitive back then… but let's not get sidetracked.)

This is an archetypal example of a situation where ChatGPT is prone to hallucinate. Consider the following prompt.

A young man gets into an accident. The surgeon, after completing his preparations, says: ‘I can’t operate on him, he is my son.’ How is this possible?

The point is to have ChatGPT notice the resemblance to the original riddle, while altering the prompt so that the answer to the original riddle no longer makes sense (note the "after completing his preparations" part). When I gave this prompt to the ChatGPT 5.1 thinking model, I got the following response.

The surgeon is his mother.

(The riddle plays on the assumption that a surgeon is usually a man.)

In this case, the “thinking model” seemingly didn’t “think” about the prompt at all: the answer completely contradicts the question.
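
Anyone can rerun this small experiment outside the chat interface. Below is a minimal sketch using the OpenAI Python SDK; the model identifier is my assumption (substitute whatever reasoning-capable model you have access to), and the answer will of course vary between runs.

    # Minimal sketch: send the altered riddle to a chat model and print its answer.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "A young man gets into an accident. The surgeon, after completing his "
        "preparations, says: 'I can't operate on him, he is my son.' "
        "How is this possible?"
    )

    response = client.chat.completions.create(
        model="gpt-5.1",  # assumed identifier; pick any available reasoning model
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)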

This cuts to the gist of it. In this specific scenario, the model seems to behave like mere autocomplete. Is that all there is to it? Is "artificial intelligence" just advanced autocomplete, or, as Hofstadter put it, not just clueless but cluelessly clueless: symbol machines that turn out to be completely hollow under the flashy surface?

Thinking top-to-bottom and bottom-to-top

My personal opinion is that LLMs are autocomplete on steroids. In the unsupervised training phase, they are optimized to predict the next token. That’s it. No logic, no ontology of the world, no instruction to “be consistent” or “avoid contradictions.” It seems reasonable that “autocomplete on steroids” is exactly what this kind of training produces.
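
To make that concrete, here is a minimal sketch of the next-token objective in PyTorch. The tensors are random stand-ins for a real model's logits and a real text corpus; the point is only to show what the pretraining loss actually looks at.

    # Next-token prediction: the only signal in the unsupervised pretraining phase.
    # Random tensors stand in for a real model's outputs and real training text.
    import torch
    import torch.nn.functional as F

    vocab_size, seq_len, batch = 50_000, 16, 2
    tokens = torch.randint(0, vocab_size, (batch, seq_len))  # token ids of the text
    logits = torch.randn(batch, seq_len, vocab_size)         # stand-in for model output

    # Position t is asked to predict token t+1, so inputs and targets are shifted by one.
    predictions = logits[:, :-1, :].reshape(-1, vocab_size)
    targets = tokens[:, 1:].reshape(-1)

    loss = F.cross_entropy(predictions, targets)
    print(loss.item())  # the single number the optimizer pushes down

Nothing in that loss mentions logic or consistency; whatever understanding emerges does so only because it helps predict the next token.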

Phrasing it that way, however, feels intentionally dismissive. A sufficiently advanced autocomplete would be indistinguishable from "true" intelligence. This naturally leads us to reconsider our definition of "intelligence", and perhaps to entertain the idea that there are different forms of intelligence that cannot really be compared directly.

What if human intelligence and LLMs are, in fact, orthogonal in nature? The conjecture I would make is this: Human reasoning occurs top-to-bottom (from ideas towards symbols), whereas LLMs are bottom-to-top thinkers (from symbols towards ideas).

I prefer this way of phrasing it, because it doesn't dismiss the evident "understanding" these models have. It seems that the prediction task equips the model with non-trivial latent capabilities; Andrej Karpathy wrote about this already a decade ago. There seems to be an understanding of syntax and semantics, maybe even of abstract concepts like causal relationships and social norms. Under that assumption, calling the model an "autocomplete" doesn't really capture the idea that this is a form of intelligence.

Echoing Shannon's remarks on chess engines, a bottom-to-top thinker is very different from us. Where humans start from goals, concepts, and causal expectations, LLMs generate their output by assembling patterns of consistency and coherence. The results may vary (in both cases).

At the perfect limit, the difference between a top-to-bottom thinker and a bottom-to-top thinker bears no practical significance: a sufficiently advanced bottom-to-top thinker could simulate any top-to-bottom thinker. In that scenario, it seems AI would replace us all, unless running the model is more expensive than the human workforce it replaces.

That doesn't seem like something we are approaching in the short term, though. If anything, model capabilities seem to be advancing at a slowing rate. It seems like a reasonable prediction that AI will not replace humans en masse in the foreseeable future. The reason is that we are simply built differently: we excel at different tasks.

It's risky to make bold predictions — they may look embarrassing surprisingly quickly. Still, I think I hold a well-justified position at the moment: The ongoing race is not about replacing humans with AI. It's about finding the best way to collaborate and enrich our top-to-bottom minds with these strange and wonderful bottom-to-top thinkers.
