The beginning of programming as we'll know it

Original link: https://bitsplitting.org/2026/04/01/the-beginning-of-programming-as-well-know-it/

## The Future of Programming: Humans Still Steer

Despite the rapid progress of AI coding assistants like Claude and Codex, the role of the human programmer has not disappeared; it is *evolving*. AI can now generate code quickly and even build functional applications from basic prompts, but for the moment it is most valuable as a *tool* for programmers, not a replacement.

The hype tends to focus on AI's successes while overlooking its frequent failures and the substantial human oversight it requires. AI-generated code typically needs correction, refinement, and adherence to established coding standards before it is truly useful. A key point: an AI's output cannot be counted as "work" until a human has reviewed and fixed it.

Programmers who embrace AI while maintaining a critical eye are uniquely positioned to harness its power. They can sharpen their skills, boost their productivity, and ultimately produce higher-quality results. This principle applies beyond programming to any creative field: those who skillfully integrate AI into their workflow are likely to thrive, while resisting it risks obsolescence. The future is not humans *versus* AI, but humans *with* AI.


Original article

In the wake of AI coding assistants like Claude and Codex, which can seemingly perform the equivalent of a day’s work in a matter of minutes, many of us are wondering if the human role of “computer programmer” is coming to an end. Will the AI bots one day do all the programming for us?

Maybe so, but not yet. At this particular moment, human developers are especially valuable, because of the transitional period we’re living through. Just a few years ago, AI essentially could not program at all. In the future, a given AI instance may “program better” than any single human in history. But for now, real programmers will always win. Why? Because we are uniquely positioned to harness most of the power of AI while augmenting it with human taste, wisdom, and caution, among other qualities that an AI is thus far incapable of possessing.

There are many examples of stunned programmers who describe how they asked an AI to create an app from scratch and it “just did it.” They wrote a few paragraphs clearly defining the functionality and user interface, and let the AI run with it. A few minutes, hours, or days later, and tada! The app is complete. It runs, it performs the tasks required, and the interface “isn’t even that bad.”

If you interpret these examples to mean that any person can write down any list of requirements along with any user interface specs, and the AI will consistently produce a satisfactory product, then I’d agree programmers are toast. But in my experience that is not what’s happening.

There is a confirmation bias at work here: every developer who has experienced such a remarkable outcome is delighted to share it. It helps to contribute to a mass (human) hallucination that computers really are capable of anything, and really are taking over the world. It’s exciting! But people are less likely to share all the times the AI failed in some ridiculous way. When it produced thousands of lines of inscrutable code, betrayed a complete lack of knowledge in some field, or spiraled into a loop of deeper and deeper “stupidity.” In the same way social networks are filled with photographs that portray a false reality of endlessly joyful vacations, flawless families, and universal good cheer, the AI victory stories we read are not a trustworthy reflection of reality.

Why am I so confident about this? Because I work with AI every day. I patiently hold its hand, and pull it back when it follows the wrong impulses. I correct its mistakes. I rewrite its code. I sometimes speak to it sternly. I play one AI off another, asking ChatGPT to criticize Claude’s work, and vice-versa. In my opinion, the majority of code generated by AI systems is not great, but it’s the great quantity it can create in such a short period of time that makes it so powerful. And that’s why I go to the trouble to work with it at all. Because it’s so good at what it’s good at.

Speaking of goodness, I share the majority opinion that AI is generally good. That is to say that I believe it will prove to have a positive impact on humanity. It will accelerate productivity in virtually every field, lead to insights in science and medicine, and offer accessibility advantages to millions of people. And yes, it will inevitably “take the jobs” of many unsuspecting victims. But as I hinted earlier, the suspecting victims all stand to gain. So be … suspectful? That doesn’t sound right. But be wary.

A mantra I’ve been repeating to myself lately is that an AI’s code cannot be counted as “work” until a human has reviewed it and fixed any problems. If we’re going to talk about computers replacing humans, then the “work” that is done has to meet or exceed the standard that humans have set. We have these standards not just because we’re fussy, but because they lead to less buggy, higher-performance, and more maintainable code. They’re not going to take our jobs by writing unreadable functions that are four times as long and defy platform conventions. Once they’ve completely taken over, they can write the code however they like. But for now, they need to abide by human standards.

And so I repeat that mantra, because I don’t want to fall into the same trap that I’m sure many programmers already have: committing AI-generated code without review. And when I say I don’t want to fall into that trap, I mean I don’t want to fall into that trap again. Or at least not too many more times. Or not too often.

The truth is, it’s hard to avoid falling into that trap because of the illusion of perfection that AI so often projects. People used to talk all the time about Steve Jobs’s “reality distortion field.” It seemed that when he asserted some truth about a technology or product, people would eat it up in the moment, perceiving it all to be both inevitable and true. Only later, after taking a breath and pondering what was claimed, would they determine he might have been completely bullshitting. He had a real knack for doing that, and AI has it too.

When I catch myself falling for one of an AI’s bullshit ideas, I have to pull myself out of that reality distortion zone, apply my own wisdom to the task at hand, and set it back on course. Many technologies that seem like magic are, in fact, only useful or practical when a human plays a pivotal role. If, in horse-drawn buggy days, you had loaded a car full of people, pointed it in the direction of a destination, and cued the horse to start moving, there’s a chance they would end up where they wanted to go. In that case, they would rejoice at the miracle. The self-driving car is here! Alas, it turns out that as amazing as horses are, they cannot be relied upon without the attentive management of a human.

The time may come, perhaps even soon, when AI takes over programming completely. But in the meantime, a programmer who embraces AI, yet is skeptical about everything it creates, is better equipped than any comparably skilled human in programming history. I’ve written specifically about programmers, but I think this also applies to writers, artists, musicians, and people in every other profession whose products can be described by any stretch as “creative work.” Anybody who maintains strict control over the final product may find that AI enhances, rather than replaces, their creativity. The computers will come for all of our jobs eventually, but those of us who refuse or decline to embrace the most powerful creative tools we’ve ever been given will be the first to fall.
