AI Tribalism

原始链接: https://nolanlawson.com/2026/01/24/ai-tribalism/

## A Shift in Views on LLMs

The author recounts a dramatic shift in their views on large language models (LLMs) for software development. Initially skeptical, dismissing them as flawed "toys," they now report that about 90% of their code is generated by tools like Claude Code. The shift came in 2025, when LLM capabilities took a noticeable leap, though the exact cause remains unclear. The author laments the polarized debate around AI, which they attribute to tribalism and preconceived biases (for example, early adoption by crypto enthusiasts). They stress that LLMs have proven useful not only for code generation but also for bug detection and even performance optimization, which challenges the traditional developer's role. While acknowledging that LLMs remain flawed, the author argues that further improvement is *not* even necessary for them to fundamentally change software development. They urge fellow developers to move past denial and unproductive arguments, embrace experimentation, and accept the uncertainty ahead. The core message is a call for honest conversation and empathy in a rapidly shifting landscape where, ultimately, "nobody knows anything."

## AI Adoption and Developer Pushback: Hacker News Summary

A recent Hacker News discussion centers on the growing pressure to adopt AI tools, particularly LLM-based agents, in software development. Many developers voice resistance rooted not in skepticism of the technology itself, but in concern over forced evangelism and a lack of nuance in how it is rolled out. Some commenters note that past technology shifts happened because the new tools were clearly better, whereas current LLMs often introduce subtle bugs and require heavy supervision. Concerns center on code quality, maintainability, and the potential for growing technical debt. Others worry about vendor lock-in, data privacy, and the hidden costs of cloud-based AI services. A key point is that the push for AI adoption feels different: more urgent and less clearly beneficial, leaving developers feeling pressured rather than genuinely persuaded. Experienced developers are especially wary of introducing tools that could break carefully maintained codebases. While some argue that AI boosts productivity, others emphasize control, understanding, and potential long-term downsides. Ultimately, many believe they should be free to choose tools based on their own needs and project requirements, without judgment or pressure to conform.

## Original Article

“Heartbreaking: The Worst Person You Know Just Made a Great Point” – ClickHole

“When the facts change, I change my mind. What do you do, sir?” – John Maynard Keynes, paraphrased

2025 was a weird year for me. If you had asked me exactly a year ago, I would have said I thought LLMs were amusing toys but inappropriate for real software development. I couldn’t fathom why people would want a hyperactive five-year-old to grab their keyboard every few seconds and barf some gobbledygook into their IDE that could barely compile.

Today, I would say that about 90% of my code is authored by Claude Code. The rest of the time, I’m mostly touching up its work or doing routine tasks that it’s slow at, like refactoring or renaming.

By now the battle lines have been drawn, and these arguments are getting pretty tiresome. Every day there’s a new thinkpiece on Hacker News about how either LLMs are the greatest thing ever or they’re going to destroy the world. I don’t write blog posts unless I think I have something new to contribute though, so here goes.

What I’ve noticed about a lot of these debates, especially if you spend a lot of time on Mastodon, Bluesky, or Lobsters, is that it’s devolved into politics. And since politics long ago devolved into tribalism, that means it’s become tribalism.

I remember when LLMs first exploded onto the scene a few years ago, and the same crypto bros who were previously hawking monkey JPEGs suddenly started singing the praises of AI. Meanwhile upper management got wind of it, and the message I got (even if they tried to use euphemisms, bless their hearts) was “you are expendable now, learn these tools so I can replace you.” In other words, the people whose opinions on programming I respected least were the ones eagerly jumping from the monkey JPEGs to these newfangled LLMs. So you can forgive me for being a touch cynical and skeptical at the start.

Around the same time, the smartest engineers I knew were maybe dabbling with LLMs, but overall unimpressed with the hallucinations, the bugs, and just the overall lousiness of these tools. I remember looking at the slow, buggy output of an IDE autocomplete and thinking, “I can type faster than this. And make fewer mistakes.”

Something changed in 2025, though. I’m not an expert on this stuff, so I have no idea if it was Opus 4.5 or reinforcement learning or just that Claude Code was so cleverly designed, but some threshold was reached. And I noticed that, more and more, it just didn’t make sense for me to type stuff out by hand (and I’m a very fast typist!) when I could just write a markdown spec, work with Claude in plan mode to refine it, and have it do the busywork.

Of course the bugs are still there. It still makes dumb mistakes. But then I open a PR, and Cursor Bugbot works its magic, and it finds bugs that I never would have thought of (even if I had written the code myself). Then I plug it back into Claude, it fixes it, and I start to wonder what the hell my job as a programmer even is anymore.

So that’s why, when I read about Steve Yegge’s Gas Town or Geoffrey Huntley’s Ralph loops (or this great overview by Anil Dash), I no longer brush it off as pure speculation or fantasy. I’ve seen what these tools can do, I’ve seen what happens when you lash together some very stupid barnyard animals and they’ve suddenly built the Pyramids, so I’m not surprised when smart engineers say that the solution to bad AI is to just add more AI. This is already working for me today (in my own little baby systems I’ve built), and I don’t have to imagine some sci-fi future to see what’s coming next.

The models don’t have to get better, the costs don’t have to come down (heck, they could even double and it’d still be worth it), and we don’t need another breakthrough. The breakthrough is already here; it just needs a bit more tinkering and it will become a giant lurching Frankenstein-meets-Akira-meets-the-Death-Star monster, cranking out working code from all 28 of its sub-agent tentacles.

I can already hear the cries of protest from other engineers who (like me) are clutching onto their hard-won knowledge. “What about security?” I’ve had agents find security vulnerabilities. “What about performance?” I’ve had agents write benchmarks, run them, and iterate on solutions. “What about accessibility?” Yeah they’re dumb at that – but if you say the magic word “accessibility,” and give them a browser to check their work, then suddenly they’re doing a better job than the median web dev (which isn’t saying much, but hey, it’s an improvement).

And honestly, even if all that doesn’t work, then you could probably just add more agents with different models to fact-check the other models. Inefficient? Certainly. Harming the planet? Maybe. But if it’s cheaper than a developer’s salary, and if it’s “good enough,” then the last half-century of software development suggests it’s bound to happen, regardless of which pearls you clutch.

I frankly didn’t want to end up in this future, and I’m hardly dancing on the grave of the old world. But I see a lot of my fellow developers burying their heads in the sand, refusing to acknowledge the truth in front of their eyes, and it breaks my heart because a lot of us are scared, confused, or uncertain, and not enough of us are talking honestly about it. Maybe it’s because the initial tribal battle lines have clouded everybody’s judgment, or maybe it’s because we inhabit different worlds where the technology is either better or worse (I still don’t think LLMs are great at UI for example), but there’s just a lot of patently unhelpful discourse out there, and I’m tired of it.

To me, the truth is this: between the hucksters selling you a ready-built solution, the doomsayers crying the end of software development, and the holdouts insisting that the entire house of cards is on the verge of collapsing – nobody knows anything. That’s the hardest truth to acknowledge, and maybe it’s why so many of us are scared or lashing out.

My advice (and I’ve already said I know nothing) would just be to experiment, tinker, and try to remain curious. It certainly feels to me like software development is unrecognizable from where it was 3 years ago, so I have no idea where it will be 3 years from now. It’s gonna be a bumpy ride for everyone, so just try to have some empathy for your fellow passengers in the other tribe.
