Essence and accident in language model-assisted coding

Original link: https://www.sicpers.info/2025/11/essence-and-accident-in-language-model-assisted-coding/

Fred Brooks's 1986 assertion that there is no "silver bullet" capable of dramatically improving software productivity remains relevant today, even amid advances in AI coding assistants. Brooks distinguished *essential* complexity (inherent to the problem itself) from *accidental* complexity (arising from *how* the problem is solved). He anticipated tools that exploit patterns in existing code, much like today's assistants, but argued they would not reduce essential complexity. Current assistants largely bear this out: they generate code much as programmers do, and may even *increase* accidental complexity through prompt engineering. They do, however, offer a valuable benefit: democratizing expertise by letting anyone draw on collective programming knowledge. While the assistants themselves do not simplify the problem, they *accelerate* the discovery of essential complexity. The rapid feedback loop (prompt, code, wrong result) forces requirements to be clarified faster than traditional approaches do. Ultimately, real gains will come when tools move beyond code generation to deliver solutions directly, bypassing programming, or when validation tools can reliably evaluate non-code outputs. Brooks predicted a shift toward "higher-level languages", and today that shift shows up in spec-driven development.


Original text

In 1986, Fred Brooks posited that there was “no silver bullet” in software engineering—no tool or process that would yield an order-of-magnitude improvement in productivity. He based this assertion on the division of complexity into that which is essential to the problem being solved, and that which is an accident of the way in which we solve the problem.

In fact, he considered artificial intelligence of two types: AI-1 is “the use of computers to solve problems that previously could only be solved by applying human intelligence” (here Brooks quotes David Parnas), to Brooks that is things like speech and image recognition; and AI-2 is “The use of a specific set of programming techniques [known] as heuristic or rule-based programming” (Parnas again), which to Brooks means expert systems.

He considers that AI-1 isn’t a useful definition and isn’t a source of tackling complexity because results typically don’t transfer between domains. AI-2 contains some of the features we would recognize from today’s programming assistants—finding patterns in large databases of how software has been made, and drawing inferences about how software should be made. The specific implementation technology is very different, but while Brooks sees that such a system can empower an inexperienced programmer with the experience of multiple expert programmers—“no small contribution”—it doesn’t itself tackle the complexity in the programming problem.

He also writes about “automatic programming” systems, which he defines as “the generation of a program for solving a problem from a statement of the problem specifications” and which sounds very much like the vibe coding application of language model-based coding tools. He (writing in 1986, remember) couldn’t see how a generalization of automatic programming could occur, but now we can! So how do they fare?

Accidental complexity

Coding assistants generate the same code that programmers generate, and from that perspective they don’t reduce accidental complexity in the solution. In fact, a cynical take would be to say that they increase accidental complexity, by adding prompt/context engineering to the collection of challenges in specifying a program. That perspective assumes that the prompt is part of the program source, but the generated output is still inspectable and modifiable, so it’s not clearly a valid argument. However, these tools do supply the “no small contribution” of letting any one of us lean on the expertise of all of us.
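To make that concrete, here is a minimal sketch; the prompt wording and the resulting function are invented for illustration, not the output of any particular assistant. The prompt becomes one more artifact to track, but what it yields is ordinary source that anyone can read, test, and edit.

```python
# Hypothetical prompt kept alongside the code it supposedly produced.
# PROMPT: "Write a function that returns the n most common words in a text,
#          ignoring case and punctuation."

import re
from collections import Counter


def most_common_words(text: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most frequent words in `text`, lowercased, punctuation stripped."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)


# The generated output is still inspectable and modifiable: a reviewer can
# read, test, and edit this function with no reference to the prompt at all.
print(most_common_words("The cat sat on the mat. The mat sat still.", 3))
```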

In general, a programming assistant won’t address accidental complexity until it doesn’t generate source code and just generates an output binary instead. Then someone can fairly compare the complexity of generating a solution by prompting with generating a solution by coding; but they also have to ask whether they have validation tools that are up to the task of evaluating a program using only the executable.
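As a sketch of what "evaluating a program using only the executable" might look like, the harness below exercises a hypothetical generated binary (`./wordcount`, with invented flags) purely through its command-line behaviour. There is no source to review and no internal unit to test; every check is observational.

```python
# A minimal sketch of executable-only validation. The binary name and its
# flags are hypothetical; the point is that every check goes through the
# program's observable input/output behaviour, not its source.

import subprocess


def check(stdin_text: str, args: list[str], expected_stdout: str) -> bool:
    """Run the binary with the given input and compare its output to the expectation."""
    result = subprocess.run(
        ["./wordcount", *args],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=5,
    )
    return result.returncode == 0 and result.stdout.strip() == expected_stdout


if __name__ == "__main__":
    # Behavioural checks are all we have once there is no source code to inspect.
    assert check("the cat sat on the mat\n", ["--top", "1"], "the 2")
```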

Or the tools can skip the program altogether, and just get the model to do whatever tasks people were previously specifying programs for. Then the accidental complexity has nothing to do with programming at all, and everything to do with language models.

Essential complexity

Considering any problem that we might want to write software for, unless the problem statement itself involves a language model then the language model is entirely unrelated to the problem’s essential complexity. For example, “predict the weather for the next week” hides a lot of assumptions and questions, none of which include language models.
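One way to see those hidden assumptions is to try to write down the interface such a forecast would need. The sketch below uses invented names purely to surface the questions, not to propose a real forecasting API.

```python
# A sketch of the questions hidden inside "predict the weather for the next week".
# Every field below is a decision the one-line problem statement never made, and
# none of them involve a language model.

from dataclasses import dataclass
from datetime import date


@dataclass
class ForecastRequest:
    location: tuple[float, float]   # which place? a point, a city, a region?
    start: date                     # "next week" from when, in whose timezone?
    days: int                       # 7 calendar days, or 7 forecast periods?
    variables: tuple[str, ...]      # temperature? precipitation? wind? all of them?
    resolution_hours: int           # one value per day, or hour by hour?
    with_uncertainty: bool          # a single number, or a range with confidence?


# Writing the type forces the conversation the problem statement skipped; that
# conversation is the essential complexity, whoever (or whatever) writes the code.
```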

That said, these tools do make it very easy and fast to uncover essential complexity, and typically in the cursed-monkey-paw “that’s not what I meant” way that’s been the bane of software engineering since its inception. This is a good thing.

You type in your prompt, the machine tells you how absolutely right you are, generates some code, you run it—and it does entirely the wrong thing. You realize that you needed to explain that things work in this way, not that way, write some instructions, generate other code…and it does mostly the wrong thing. Progress!

Faster progress than the old thing of specifying all the requirements, designing to the spec, implementing to the design, then discovering that the requirements were ambiguous and going back to the drawing board. Faster, probably, even than getting the first idea of the requirements from the customer, building a prototype, and coming back in two weeks to find out what they think. Whether it’s writing one to throw away, or iteratively collaborating on a design[*], that at least can be much faster now.
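A toy reconstruction of that loop, with prompts and snippets invented purely for illustration, shows how each round surfaces another unstated requirement:

```python
# Prompts and "generated" code here are made up for illustration; they are not
# the output of any particular assistant.

customers = [
    {"name": "Ada", "signed_up": "2024-06-01"},
    {"name": "Zed", "signed_up": "2023-01-15"},
]

# Round 1 -- prompt: "sort the customer list"
# The generated code sorts alphabetically; you meant newest signup first.
round_one = sorted(customers, key=lambda c: c["name"])

# Round 2 -- prompt: "sort the customer list by signup date, newest first"
# Closer, but now you discover you never said what to do with missing dates.
round_two = sorted(customers, key=lambda c: c["signed_up"], reverse=True)

# Each wrong answer arrives in seconds and exposes another hidden requirement:
# the essential complexity was always there, it just surfaces faster.
print(round_one, round_two, sep="\n")
```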

[*] Though note that the Spec-Driven Development school is following the path that Brooks did predict for automatic programming (via Parnas again): “a euphemism for programming with a higher-level language than was presently available to the programmer”.
