LLMs Are Not a Higher Level of Abstraction

Original link: https://www.lelanthran.com/chap15/content.html

## LLMs Are Not Another Layer of Abstraction

This article pushes back on an increasingly popular view that large language models (LLMs) represent a new, higher level of programming abstraction, analogous to the move from binary to assembly, or from C to Python. The author argues that this comparison is fundamentally wrong.

Traditional abstraction layers (such as compilers) act as *deterministic* functions: a specific input (x) *always* produces a specific output (y). LLMs, by contrast, operate on *probabilities*: an input (x) yields *probabilities* of various outputs (y, z1, z2, ...), including unexpected or even harmful ones.

In essence, LLMs cannot guarantee the desired result; they offer a *chance* of getting it, alongside a range of other possibilities. Current testing typically verifies only the desired output (y) without accounting for potentially dangerous "z" factors hidden in the response.

The author urges programmers to stay self-aware and careful, warning against blindly accepting LLM-generated code and recommending thorough review for unintended consequences. The core point: LLMs are not a cleaner, higher-level tool; they are a fundamentally different *kind* of tool, one that demands different development and testing practices.

A Hacker News discussion centers on the claim that large language models (LLMs) are *not* a higher level of abstraction. The core argument is that LLMs introduce **probabilistic** output, a departure from the traditionally **deterministic** role of computers. Commenters point out that early computing aimed to *reduce* human error through precision, whereas LLMs embrace a degree of inaccuracy; some see this as reversing a useful division of labor in which humans handle ambiguity and computers handle accuracy. Others counter that LLMs *are* deterministic: given identical inputs and conditions, they produce identical outputs (unless randomness is deliberately added). The debate extends to the complexity of compilers and dependencies, suggesting that even traditional programming is not fully deterministic across systems. Ultimately, the discussion characterizes LLMs as a potentially "poor abstraction" because of their inherent nondeterminism, which may reflect the human element at their foundation.

Original article
LLMs Are Not a Higher Level of Abstraction

"A picture is worth 10K words - but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures."

   -- Alan Perlis

Posted by Lelanthran

2026-04-27

I am seeing the claim everywhere online that LLMs are a higher level of abstraction. If you haven't seen this claim, then you had better stop reading now - this blog post is not for you.

Specifically, I am seeing the claim that LLMs are the next step in the sequence of abstractions we have had: going from programming in binary to programming in assembly to programming in C to programming in Python.

Now, I am told, programming with LLMs is the next abstraction. Apparently the people who program with LLMs believe that it is a similar, if not identical, move to a higher level of abstraction as the previous abstractions we have seen.

This is wrong! Even when the people telling me these things qualify their authority with "I've been programming for 30 years, and now programming is fun again", it still remains wrong.

But, that’s just an opinion, and the counter is not an opinion, it’s a fact.

Each move from one layer of the tech stack to a higher one involved a function:

      f(x) -> y

Given a specific x, you always get a specific y as the artifact being generated.

When x is assembly source, a specific input always gives you the same binary result.

When x is C source, a specific input always results in the same binary artifact being generated.

When x is Python source, a specific input always results in the same binary artifact being generated.
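The determinism being described can be sketched in a few lines. The `build` function below is a hypothetical stand-in for a toolchain step: a pure function of its input, so the same source always yields the same artifact. (Real compilers are only this clean under reproducible-build settings; embedded timestamps and the like can break byte-for-byte identity.)

```python
import hashlib


def build(source: str) -> bytes:
    """Hypothetical stand-in for a deterministic toolchain step:
    a pure function of its input, like a reproducible compiler."""
    # The artifact depends on nothing but the source text.
    return hashlib.sha256(source.encode("utf-8")).digest()


artifact_1 = build("print('hello')")
artifact_2 = build("print('hello')")
assert artifact_1 == artifact_2  # same x, same y, every time
```

The property worth noticing is not hashing itself but purity: nothing other than `x` influences `y`, so a test that passes once passes forever.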

With LLMs the function’s output is not a value, it’s the probability of a value! That is, your input x doesn’t result in y, it results in the probability of getting y.

      f(x) -> P(y)

Actually, it’s worse - there is no chance of a no-artifact outcome, so the function actually looks like this:

      f(x) -> P(y) ∪ P(z1) ∪ P(z2) ∪ ... ∪ P(zN)

which means, roughly, you have a chance of getting y (i.e. the thing you wanted), or a chance of getting some unknown number of other artifacts.

But if you think about it, it’s even worse than that - in reality with LLMs you have the chance to get y and a number of other things you never asked for, so the actual function is:

      f(x) -> P( y | z1 | z2 | ... | zN )

IOW, if you run a test on the output looking for y, the test can succeed even though you did not get only y, you also got all that other stuff in z1..zN.

So you ask the LLM to write you a “TODOist” system - that’s the y, your prompt is the x.

    f('Gimme a TODO webapp') -> P( 'A TODO WebApp' | z1 | z2 | ... | zN )

You only check that it gave you the TODO WebApp. Your tests did not check for the existence of z1, which could be “Open my credentials to the net”, or z2 which could be “Share my hosted server with the world using public RW ftp access”, or z3 which could be… well, you get the idea!
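One partial mitigation is to scan generated artifacts for known red flags before accepting them. The sketch below is illustrative only: the pattern list is hypothetical, and a deny-list can never be complete, because the unknown z's are, by definition, not on it. It demonstrates the shape of such a check, not a sufficient review.

```python
# Hypothetical deny-list of patterns that warrant a closer look.
RED_FLAGS = [
    "chmod 777",     # world-writable permissions
    "0.0.0.0",       # binding to all network interfaces
    "verify=False",  # disabled TLS certificate verification
]


def flag_suspect_lines(generated_code: str) -> list[str]:
    """Return the lines of generated code matching any known red flag."""
    return [
        line
        for line in generated_code.splitlines()
        if any(flag in line for flag in RED_FLAGS)
    ]


sample = "app.run(host='0.0.0.0')\nprint('todo app ready')"
print(flag_suspect_lines(sample))  # ["app.run(host='0.0.0.0')"]
```

This catches some z's you already know to fear; the article's point stands for the ones you don't.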

If, in 2026, someone is still making the nonsensical abstraction claim, then send them a link to this post!

If you are the one making this claim, ask yourself why this claim is so important to you.

We need programmers who are self-aware, and not ones who are merely a channel for AI artifacts to enter the world.


