The Human in the Loop

Original link: https://adventures.nodeland.dev/archive/the-human-in-the-loop/

## The Shift in Software Development: From Coding to Review

While acknowledging that AI is having a transformative impact on software development, this piece challenges Mike Arnaldi's view that the field is "dead." The author agrees that AI has dramatically accelerated *implementation*, but argues that the critical bottleneck has shifted from coding speed to the *skill of review*. Rather than writing code, developers increasingly validate AI-generated solutions, assessing security, architectural fit, and overall correctness. This is not a new skill but an intensified version of the responsibility senior engineers and open source maintainers have long carried. Passing tests alone is not enough; deep understanding and judgment are essential. The author worries about a future in which "I didn't review it, the AI wrote it" becomes an acceptable excuse, potentially leading to vulnerabilities and security risks on an unprecedented scale. Like the Industrial Revolution, AI-driven development demands new safety practices and a cultural emphasis on responsible oversight. Ultimately, the value lies not in generating code faster but in preserving accountability and quality through rigorous human review. The core skill of 2026 is not prompting AI, but the judgment to decide what *should* be built and whether an AI's implementation is truly "good enough."


Original Article

Mike Arnaldi wrote a thought-provoking piece titled "The Death of Software Development." I respect Mike a lot. Effect is brilliant work, and his analysis of the current AI moment is sharper than most. But I think he's missing something critical.

My Workflow Has Changed

Let me be clear: I'm not here to argue that AI isn't transforming our industry. It is. My own workflow has changed dramatically.

When an issue lands in my queue today, my first instinct is to throw it at AI. Security vulnerabilities in Node.js or Undici. Bugs in Fastify. New features for Platformatic. AI handles the implementation. I've shipped dozens of fixes this way in the past few months.

But here's the thing Mike glosses over: I review every single change. Every behavior modification. Every line that ships.

The Bottleneck Has Shifted

Mike writes that he built a Polymarket analysis tool in 2 hours, writing zero lines of code, reviewing zero lines of code. He presents this as a triumph.

I see it differently.

My ability to ship is no longer limited by how fast I can code. It's limited by how well I can review. And I think that's exactly how it should be.

When I fix a security vulnerability, I'm not just checking if the tests pass. I'm asking: does this actually close the attack vector? Are there edge cases the AI missed? Is this the right fix, or just a fix? When I ship a new feature, I need to understand if it fits the architecture, if it maintains backward compatibility, if it's something I can stand behind.
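To make that concrete, here is a hypothetical sketch (not taken from any real Node.js, Undici, or Fastify patch) of the kind of change a reviewer has to catch: a path-traversal "fix" whose generated test passes while the attack vector stays open.

```typescript
import * as path from "node:path";
import { strict as assert } from "node:assert";

const ROOT = "/srv/uploads";

// The proposed fix: strip literal "../" sequences before resolving the path.
function resolveUpload(requested: string): string {
  const cleaned = requested.replace(/\.\.\//g, "");
  return path.resolve(ROOT, cleaned);
}

// The generated test covers the obvious payload, so CI goes green.
assert.ok(resolveUpload("../../etc/passwd").startsWith(ROOT)); // passes

// The reviewer's question: is the attack vector actually closed? It isn't.
// Doubled-up sequences survive a single, non-recursive strip and escape ROOT.
console.log(resolveUpload("....//....//etc/passwd")); // "/etc/passwd" on POSIX
```

A test written alongside a fix rarely probes the bypass the fix forgot; that gap is exactly what human review is for.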

The moment I stop reviewing is the moment I stop being responsible for what I ship.

The Bloomberg Terminal Question

Mike asks: "If an idiot like me can clone a product that costs $30k per month in two hours, what even is software development?"

I'd ask a different question: who's responsible when that clone has a bug that causes someone to make a bad trade? Who understands the edge cases? Who can debug it when it breaks in production at 3 AM?

The Bloomberg Terminal isn't expensive because the code is hard to write. It's expensive because there are people who understand financial markets, regulatory requirements, data integrity, and system reliability standing behind it. People who have spent years building mental models about what can go wrong.

Forty Years of Practices Need Rethinking

Mike is absolutely right about one thing: forty years of best practices are now outdated. The patterns we relied on, the team structures we built, the processes we followed. All of it needs to be reconsidered.

Code review processes designed for human-written code? Need rethinking. Sprint planning based on human typing speed? Obsolete. The assumption that more developers means more output? Questionable.

I've been in the Node.js ecosystem long enough to see "best practices" come and go. But this is different. This isn't a new framework or a new paradigm. This is a fundamental shift in how code gets produced. Anyone pretending we can keep doing things the old way is in denial.

Software Engineers and Architects Are More Crucial Than Ever

Mike argues that while software development is dead, software engineering is alive. I completely agree. Engineers are now "designing higher-order systems" and "building techniques." The role of software engineers and architects is more crucial than ever.

What's gone is the role of the programmer who takes a task from Jira, does it, and clocks off for the day. That job is gone. AI can do that now, faster and cheaper.

But I think there's more to it. Reviewing and assessing code done by others is what we have done in open source since forever. As a maintainer of Node.js, Fastify, Pino, and Undici, and as Chair of the Node.js Technical Steering Committee, I spend most of my time reviewing pull requests from contributors I've never met. I don't write most of the code that ships. I review it, I assess it, I decide if it's good enough. This isn't new to me. AI is just another contributor now.

I've also shipped contributions that I did not fully understand. I regret them fondly. Every maintainer has done this at some point. And every time, it comes back to bite you. The bugs are harder to fix. The behavior is harder to explain. The technical debt compounds. This is why review matters. This is why understanding matters.

Yes, I design systems. But more importantly, I provide judgment. I decide what should be built, how it should behave, and whether the implementation matches the intent. I catch the cases where the AI confidently produces something that looks right but isn't. I understand the context that no prompt can fully capture.

This isn't a new skill. It's the same skill senior engineers have always had. The difference is that now it's the primary skill, not one of many.

The Real Question

Mike says that "the average software developer is not even close to understanding the extent of this change." I agree. But I think the misunderstanding cuts both ways.

Some developers underestimate AI. They think their job is safe because AI makes mistakes. They're wrong. AI is already good enough to handle a huge portion of routine coding work.

But some AI enthusiasts overestimate the transformation. They think the human in the loop is a temporary limitation, a bottleneck to be optimized away. I think they're wrong too.

The human in the loop isn't a limitation. It's the point.

When I ship code, my name is on it. When there's a security vulnerability in Undici or a bug in Fastify, it's my responsibility. I can use AI to help me move faster, but I cannot outsource my judgment. I cannot outsource my accountability.

What I'm Actually Worried About

My worry isn't that software development is dying. It's that we'll build a culture where "I didn't review it, the AI wrote it" becomes an acceptable excuse.

I've been maintaining open source projects for over a decade. I've seen what happens when people ship code they don't understand. It's not pretty. And the scale of damage possible when you can generate code at AI speed is much larger than when you're limited by typing speed.

The Industrial Revolution comparison is apt, but not in the way Mike suggests. The Industrial Revolution didn't just make goods abundant. It also created new categories of industrial accidents, new forms of pollution, new ways for things to go wrong at scale. It took decades to develop the safety practices, regulations, and cultural norms to handle industrial-scale production responsibly.

We're at the beginning of that process for AI-generated software. And the answer isn't to remove humans from the loop. It's to get much better at the review part.

The Path Forward

I'm not arguing against using AI. I use it every day. I'm more productive than I've ever been.

But I've accepted that my bottleneck is now review, not coding. And I'm working on getting better at it. Faster pattern recognition. Better mental models for common failure modes. More efficient ways to verify behavior.
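One concrete shape that last point can take is differential testing. This is a minimal sketch, assuming a hypothetical refactor of a small helper rather than any real Fastify or Undici change: instead of trusting the generated unit tests, run the old and new implementations against many random inputs and treat any disagreement as a behavior change to understand before merging.

```typescript
import { strict as assert } from "node:assert";
import { randomBytes } from "node:crypto";

// Trusted original behavior (hypothetical helper).
function oldNormalize(name: string): string {
  return name.trim().toLowerCase();
}

// AI-proposed replacement under review (hypothetical).
function newNormalize(name: string): string {
  return name.toLowerCase().trim();
}

// Fuzz both implementations with arbitrary byte strings and compare outputs.
for (let i = 0; i < 10_000; i++) {
  const input = randomBytes(8).toString("latin1");
  assert.equal(
    newNormalize(input),
    oldNormalize(input),
    `behavior diverged on ${JSON.stringify(input)}`
  );
}
console.log("old and new implementations agree on 10,000 random inputs");
```

It doesn't replace reading the diff, but it narrows where judgment has to be spent.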

This is the skill that matters in 2026. Not prompting. Not "agentic infrastructure." Judgment.

Mike's right that things are changing fast. But the human in the loop isn't a bug to be fixed. It's a feature to be protected.
