The future of software development is software developers

Original link: https://codemanship.wordpress.com/2025/11/25/the-future-of-software-development-is-software-developers/

## The Enduring Demand for Programmers

After 43 years working in computer programming and countless predictions of the “end of programmers”, one thing is clear: the profession has always rebounded and expanded. From early Fortran to recent no-code platforms and now large language models (LLMs), every wave promised automation and ultimately *increased* the demand for technical developers, a textbook example of Jevons Paradox.

Although today’s LLM claims are louder than in previous cycles, there is one key difference: earlier tools genuinely improved efficiency, whereas LLMs often *reduce* reliability and speed. The core challenge of programming is not merely turning ideas into code, but turning fuzzy human thinking into precise computational logic, a skill LLMs cannot replicate.

Real programming requires understanding, reasoning, and learning, a level of intelligence that artificial general intelligence (AGI) has yet to reach. LLMs generate statistically plausible code, not code that is *understood*. The current hiring downturn is more plausibly attributable to economic factors than to replacement by AI. The likely future holds modest AI-assisted prototyping and code completion, with skilled developers remaining essential, and perhaps more sought-after than ever.


## Original Article

I’ve been a computer programmer, all told, for 43 years. That’s more than half the entire history of electronic programmable computers.

In that time, I’ve seen a lot of things change. But I’ve also seen some things stay pretty much exactly the same.

I’ve lived through several cycles of technology that, at the time, was hailed as the “end of computer programmers”.

WYSIWYG, drag-and-drop editors like Visual Basic and Delphi were going to end the need for programmers.

Wizards and macros in Microsoft Office were going to end the need for programmers.

Executable UML was going to end the need for programmers.

No-Code and Low-Code platforms were going to end the need for programmers.

And now, Large Language Models are, I read on a daily basis, going to end the need for programmers.

These cycles are nothing new. In the 1970s and 1980s, 4GLs and 5GLs were touted as the end of programmers.

And before them, 3GLs like Fortran and COBOL.

And before them, compilers like A-0 were going to end the need for programmers who instructed computers in binary by literally punching holes in cards.

But it goes back even further, if we consider the earliest (classified) beginning of electronic programmable computers. The first of them, COLOSSUS, was programmed by physically rewiring it.

Perhaps the engineers who worked on that machine sneered at the people working on the first stored-program computers for not being “real programmers”.

In every cycle, the predictions have turned out to be very, very wrong. The end result hasn’t been fewer programmers, but more programs and more programmers. It’s a $1.5 trillion-a-year example of Jevons Paradox.
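For readers unfamiliar with Jevons Paradox, here is the arithmetic in miniature, with entirely made-up numbers: when a tool cuts the cost of producing software, but cheaper software unlocks demand faster than the efficiency gain, total spending on developers goes up, not down.

```python
# Jevons Paradox, toy numbers (all figures invented for illustration).
cost_per_program = 100_000      # cost per program before the new tool
programs_demanded = 10_000      # programs demanded before the new tool

efficiency_gain = 2.0           # tool makes each program 2x cheaper
demand_multiplier = 3.0         # cheaper software -> 3x more programs wanted

spend_before = cost_per_program * programs_demanded
spend_after = (cost_per_program / efficiency_gain) * (programs_demanded * demand_multiplier)

print(f"before: ${spend_before:,.0f}")  # before: $1,000,000,000
print(f"after:  ${spend_after:,.0f}")   # after:  $1,500,000,000
# Efficiency doubled, yet total spend rose 50%: more programs, more programmers.
```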

And here we are again, in another cycle.

“But this time it’s different, Jason!”

Yes, it certainly is. Different in scale to previous cycles. I don’t recall seeing the claims about Visual Basic or Executable UML on the covers of national newspapers. I don’t recall seeing entire economies betting on 4GLs.

And there’s another important distinction: in previous cycles, the technology worked reliably. We really could produce working software faster with VB or with Microsoft Access. This is proving not to be the case with LLMs, which – for the majority of teams – actually slow them down while making the software less reliable and less maintainable. It’s a kind of LOSE-LOSE in most cases. (Unless those teams have addressed the real bottlenecks in their development process.)

But all of this is academic. Even if the technology genuinely made a positive difference for more teams, it still wouldn’t mean that we don’t need programmers anymore.

The hard part of computer programming isn’t expressing what we want the machine to do in code. The hard part is turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.

That was the hard part when programmers were punching holes in cards. It was the hard part when they were typing COBOL code. It was the hard part when they were bringing Visual Basic GUIs to life (presumably to track the killer’s IP address). And it’s the hard part when they’re prompting language models to predict plausible-looking Python.

The hard part has always been – and likely will continue to be for many years to come – knowing exactly what to ask for.

Edsger Dijkstra called it nearly 50 years ago: we will never be programming in English, or French, or Spanish. Natural languages have not evolved to be precise enough and unambiguous enough. Semantic ambiguity and language entropy will always defeat this ambition.
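To make that concrete, here is an invented one-sentence requirement and two precise programs that each satisfy a reasonable plain-English reading of it. (The requirement and function names are hypothetical, purely for illustration.)

```python
# Requirement, in English: "Give a 10% discount on orders over $100."
# That one sentence supports at least two contradictory precise readings.

def discount_whole_order(total: float) -> float:
    """Reading 1: qualifying orders get 10% off the entire total."""
    return total * 0.9 if total > 100 else total

def discount_excess_only(total: float) -> float:
    """Reading 2: only the amount above $100 is discounted."""
    return total if total <= 100 else 100 + (total - 100) * 0.9

print(discount_whole_order(150))  # 135.0
print(discount_excess_only(150))  # 145.0
# Same English sentence, two different programs. (And is exactly $100 "over"?)
```

Resolving that ambiguity is the computational thinking the machine cannot do for you.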

And while pretty much anybody can learn to think that way, not everybody’s going to enjoy it, and not everybody’s going to be good at it. The demand for people who do and people who are will always outstrip supply.

Especially if businesses stop hiring and training them for a few years, like they recently have. But these boom-and-bust cycles have also been a regular feature during my career. This one just happens to coincide with a technology hype cycle that presents a convenient excuse.

There’s no credible evidence that “AI” is replacing software developers in significant numbers. A combination of over-hiring during the pandemic, rises in borrowing costs, and a data centre gold rush that’s diverting massive funds away from headcount is doing the heavy lifting here.

And there’s no reason to believe that “AI” is going to evolve to the point where it can do what human programmers have to do – understand, reason and learn – anytime soon. AGI seems as far away as it’s always been, and the hard part of computer programming really does require general intelligence.

On top of all that, “AI” coding assistants are really nothing like the compilers and code generators of previous cycles. The exact same prompt is very unlikely to produce the exact same computer program. And the code that gets generated is pretty much guaranteed to have issues that a real programmer will need to be able to recognise and address.
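A toy sketch of why that is: generation samples each next token from a probability distribution rather than looking one up, so at any nonzero temperature, repeat runs of the same prompt can diverge. (The prompt and distribution below are invented for illustration; real models work over vastly larger vocabularies.)

```python
import random

# Hypothetical next-line distribution a model might assign after the
# prompt "def add(a, b):". The output is *sampled*, not retrieved.
completions = {
    "    return a + b": 0.80,
    "    return a+b": 0.15,
    "    return int(a) + int(b)": 0.05,
}

def sample_completion() -> str:
    lines, weights = zip(*completions.items())
    return random.choices(lines, weights=weights, k=1)[0]

# "Running the same prompt" five times: the outputs can and do differ.
for _ in range(5):
    print(sample_completion())
```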

When I write code, I’m executing it in my head. My internal model of a program isn’t just syntactic, like an LLM’s is. I’m not just matching patterns and predicting tokens to produce statistically plausible code. I actually understand the code.
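An invented illustration of that difference: a function that is statistically plausible, because it matches the shape of countless similar snippets, yet wrong in a way only someone who actually understands the domain will catch.

```python
def is_leap_year_plausible(year: int) -> bool:
    """Looks like thousands of training examples; stops at the common case."""
    return year % 4 == 0  # BUG: 1900 % 4 == 0, but 1900 was not a leap year

def is_leap_year_understood(year: int) -> bool:
    """The full Gregorian rule: divisible by 4, except centuries,
    except centuries divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year_plausible(1900), is_leap_year_understood(1900))  # True False
```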

Even the C-suite has noticed how often major outages and incidents follow grand claims about how much of that company’s code is “AI”-generated.

The folly of many people now claiming that “prompts are the new source code”, and even that entire working systems can be regenerated from the original model inputs, will be revealed to be the nonsense that it is. The problem with getting into a debate with reality is that reality always wins. (And doesn’t even realise it’s in a debate.)

So, no, “AI” isn’t the end of programmers. I’m not even sure, 1-3 years from now, that this current mania won’t have just burned itself out, as the bean counters tot up the final scores. And they always win.

To folks who say this technology isn’t going anywhere, I would remind them of just how expensive these models are to build and what massive losses they’re incurring. Yes, you could carry on using your local instance of some small model distilled from a hyper-scale model trained today. But as the years roll by, you may find it a tad constraining to be stuck on the programming language and library versions it was trained on.

For this reason, I’m skeptical that hyper-scale LLMs have a viable long-term future. They are the Apollo Moon missions of “AI”. In the end, quite probably just not worth it. Maybe we’ll get to visit them in the museums their data centres might become?

The foreseeable future of software development is one where perhaps “AI” – in a much more modest form (e.g., a Java coding assistant built atop a basic language model) – is used to generate prototypes, and maybe for inline completion on production code and those sorts of minor things.

But, when it matters, there will be a software developer at the wheel. And, if Jevons is to be believed, probably even more of us.

Employers, if I were you, I might start hiring now to beat the stampede when everyone wakes up from this fever dream.

And then maybe drop me a line if you’re interested in skilling them up in the technical practices that can dramatically shrink delivery lead times while improving reliability and reducing the cost of change, with or without “AI”. That’s a WIN-WIN-WIN.
