Engineers who dismiss AI

Original link: https://terriblesoftware.org/2025/12/19/the-strange-case-of-engineers-who-dismiss-ai/

The author initially wrote off AI coding tools after a negative experience with ChatGPT in 2022, insisting on the value of "artisanal," bug-ridden, hand-written code and the principle of doing things the hard way. Yet writing an article arguing that AI would not *replace* programmers triggered an unexpected backlash: not fear of job loss, but vehement denial of AI's coding *ability*. The author realized this resistance stems from outdated impressions. AI coding tools have improved dramatically since 2022; tools like Claude Code can now handle complex tasks such as codebase-wide refactoring. They remain imperfect and still require review, but they are far from "garbage." Refusing to explore these tools, the author argues, does not protect expertise; it holds people back. A clear divide is forming among engineers: those who embrace AI assistance ship faster and take on bigger projects, while those who refuse do not. The ask is simple: if you haven't recently tried a *modern* AI coding tool, try one with an open mind. Opinions based on old experience no longer hold.

## AI in Software Engineering: A Shifting Landscape

A recent Hacker News discussion focused on engineers' reluctance to adopt AI tools, with many arguing that the holdouts are falling behind and that the skill gap is widening. While AI is improving rapidly, handling complex refactors and even diagnosing subtle bugs, concerns remain about its effect on fundamental programming skills. Many worry that over-reliance on AI erodes problem-solving ability and creates a dependency that hampers future development. Some suggest AI should *augment* skills rather than replace them; others point to parallels with earlier technology shifts (such as using libraries instead of hand-writing sorting algorithms) and stress the importance of treating AI as a tool and critically evaluating its output.

A key point is that current AI tools often require heavy "steering" and can produce verbose, flawed code that adds technical debt. Others, however, report significant productivity gains, especially in automating tedious tasks and quickly understanding unfamiliar codebases. Ultimately, the debate highlights a shift toward higher levels of abstraction in software development, where shipping a product, not just writing code, is the primary goal.

## Original Article

I refuse to use AI coding tools. I don’t need to try them to know they’re garbage. I have instincts.

I tested ChatGPT in 2022, and asked it to write something. It (obviously) got it wrong; I don’t remember what exactly, but it was definitely wrong. That was three years ago and I haven’t looked back. Why would I? It’s not like there was anything meaningful that changed about AI tools, right?

My coworker uses Claude Code now. She finished a project last week that would’ve taken me a month. I told her she’s “cheating.” She doesn’t understand that some of us have principles. Some of us believe in doing things the hard way (even when the easy way produces identical results).

“The code AI writes is full of bugs,” I always say. Unlike my code, which is also full of bugs, but those are artisanal bugs.

I copy from Stack Overflow constantly. That’s different from AI, though. The distinction is clear (well, at least in my head).

Will AI ever be good enough for me to try? Maybe. When it can read my mind, never make a single mistake, and personally apologize to me for threatening my sense of identity. Until then, I’ll pass.


Okay, let me drop the act.

Last week I wrote *AI Can Write Your Code. It Can't Do Your Job.* The thesis: AI is changing some things, but the job isn't going anywhere. Programming is a task; software engineering is a role.

To be honest, I was expecting some controversy. I figured people would push back on the optimistic part. “You’re naive, AI will replace us all!” That kind of thing.

That’s not what happened; the backlash came from the other direction. Engineers got mad at the “AI can write your code” part, which was shocking to me.

Here’s what I got:

“Hardly. It can assist a bit under constant supervision.”

“(…) it can’t write my code either.”

“AI can write my bugs for me.”

“Only if my code is supposed to be a hallucinated 5 unnecessary layers of abstraction that have 10 security holes.”

“Not really… most of what AI writes is trash…”

“It can maybe write code that I would have written 20 years ago when I just started.”

These are real engineers, the ones who should be most curious about new tools. The ones whose entire career has been about learning and adapting. And yet the dismissiveness was, again, shocking. Not “it works for some things”, but just: “can’t”, “trash”, “garbage”.

I think I get why this happens. Maybe they tried AI two or three years ago and it was genuinely bad for that one case. Maybe they’ve seen colleagues misuse it to ship garbage. Maybe it feels threatening to their identity; after all, your expertise is wrapped up in being someone who can write code. Some tool threatens that? Of course you want to dismiss it.

Here’s the thing, though: what was true in 2022 isn’t true now. The gap between AI coding tools then and now is like IE11 vs Chrome. Tools like Claude Code and Cursor have changed the game dramatically. They can now work across entire codebases, understand project context, refactor multiple files at once, and iterate until the job is really done. If your last serious attempt was more than six months ago, your opinion is (I’m sorry to say) outdated.

And look, I’m not saying AI tools are perfect. They’re not: they still produce bugs, sometimes over-abstract, and still hallucinate APIs that don’t exist (far less than before, though). You still need to review everything they generate. But “imperfect” and “useless” are very different claims.

The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t. The first group is shipping faster, taking on bigger challenges. The second group is… not.

So here’s my ask: if you haven’t tried modern AI coding tools recently, try one this week. Not to prove it doesn’t work, but to genuinely find out what it can do.

If you’ve actually tried modern tools and they didn’t work for you, that’s a conversation worth having. But “I tried ChatGPT in 2022” isn’t that conversation.
