# Agentic Coding Is a Trap

Original link: https://larsfaye.com/articles/agentic-coding-is-a-trap

## A Shift in Coding: AI as Tool, Not Replacement

The industry is buzzing about "Spec-Driven Development" (SDD): a future where AI handles the coding and humans "orchestrate," supplying high-level plans and reviewing output. While coding agents can deliver real productivity gains, there is growing concern that over-reliance on them carries significant trade-offs. These include increased system complexity, skill atrophy among developers (especially juniors), vendor lock-in and cost volatility from depending on specific AI providers, and an unexpected impact on the critical-thinking skills needed to *manage* these agents.

The core issue is not speed but *understanding*. Blindly generating code without direct engagement impedes learning and can lead to disconnection from the underlying logic. Experts warn that outsourcing problem-solving to AI erodes the very skills required to use it effectively, creating a paradoxical dependency.

Many argue the key is to use AI as a *supplement*, for planning, brainstorming, and documentation, while staying actively engaged with the code itself. This preserves critical skills and ensures a deeper understanding of generated code, avoiding a future in which developers depend on expensive tools and risk losing core competencies.

## Agentic Coding: A Double-Edged Sword

A recent Hacker News discussion highlights a growing concern: while AI tools such as LLMs significantly increase coding *speed* and help break through mental blocks, they may erode fundamental coding *skills*. Many developers liken AI to a "round-the-clock, slightly unhinged intern," helpful for brainstorming and code generation, but acknowledge that diligent oversight is needed to maintain quality.

The core question is not whether AI code *works*, but whether relying on it leads to skill decay and a diminished ability to recognize good code. Market pressures compound the problem: deadlines and freelance rates are now calibrated to AI-assisted output, even when clients ask for code written "without vibes."

The debate centers on navigating this new landscape: should AI-generated code be held to different PR review standards? Can local LLMs offer a way out of vendor lock-in? Most importantly, how can developers stay competitive in a market that increasingly assumes AI assistance, while preserving their core skills and judgment? The challenge is balancing market demands against the fundamental practice of *writing* code, lest we lose the deep understanding that defines real craftsmanship.

## Original Article

"AI does the coding, and the human in the loop is the orchestrator"

This is the sentiment being hyped up around the industry currently: traditional coding is all but dead, and Spec Driven Development (SDD) is the future. You generate a plan, and disconnect from writing any code. The agents know better, and handle all the implementation. You are there as the expert, to provide "good taste", review the outputs, and constantly steer the agent(s) to execute the plan that you meticulously put together.

The workflow takes many shapes at this point, but in general, it is a process where someone defines the project's requirements (simultaneously at a micro and macro level), generates a plan, and then pulls the slot machine lever over and over, iterating and reiterating with often multiple agent instances until it's done. All the while, the distance grows between the "orchestrator" and the code that is being generated and committed.

Coding agents are helpful, and powerful, but there are already some quantifiable trade-offs that need to be discussed:

  • An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism.
  • Atrophying skills for a wide swath of the population.
  • Vendor lock-in for individuals and entire teams (Claude Code outages have already had entire teams at a stand-still).
  • Fluctuating and increasing costs to access the tools. An employee's cost is fixed; tokens are a constantly moving target.

Being successful with this approach to coding agents hinges on a rather crucial element: only a skilled developer, one thinking critically and comfortable operating at the architectural level, can spot issues in the thousands of lines of generated code before they become a problem.

Yet, in an ironic twist of fate, it's the individual's critical thinking skills and cognitive clarity that AI tooling has now been proven to impact negatively.

A common refrain we hear in the community is that programmers are just "moving up the stack" and into a different type of abstraction. Whether or not these tools are really an abstraction layer in the first place is not a settled matter; a higher level of ambiguity is not a higher level of abstraction.

If we put that to the side though, it is true that programmers tend to be wary of new languages and new ways of programming. When FORTRAN was released, programmers were skeptical of it, too. They had similar claims: it was likely to introduce more bugs and instability, and writing assembly directly was more efficient. Later, there would be discourse around the integration of compilers introducing too much "magic" into the process. These were normative arguments around a fear of what might be lost if these new technologies were embraced.

The difference with what is happening today is that those previous fears were speculative and theoretical. In just the short few years that AI tooling has existed, we are already seeing significant impacts, and not just on junior developers, but on those with a decade (or more) of experience.

Junior developers are faced with an even steeper climb, as we truncate their ability to work with code and replace it with reviewing generated code. Reviewing code is important, but it's only 50% of the learning process, at best. Without the friction and challenges that come with working with code directly, their ability to learn is seriously diminished.

Studying this phenomenon takes time, so anecdotal evidence is important to gather to get a real-time view of the situation. But it has also been studied, and there are numerous reports reinforcing that this is a real phenomenon.

When a C++ developer moved to Java or Python, they didn't complain of brain fog. When a sysadmin moved to AWS, they didn't feel like they were losing their ability to understand networking.

A senior engineer losing their coding edge and becoming "rusty" over time as they move into managerial roles and practice coding less is not a new phenomenon. This was the natural progression of expertise: an engineer with decades of coding, friction, and experience logged had the time to solidify those skills into wisdom. And they could apply that wisdom when their job became less about syntax and more about higher-level architectural decisions. Such individuals are already exceedingly rare, and we won't get the next wave of seniors if we're all abdicating the friction of writing, problem-solving, and debugging.

What is happening right now is a trend where developers, who've never had that longevity or the 30+ years of friction that produced that deep understanding, are being moved into higher-level workflows: managing AI agents with the very skills the senior engineer took decades to obtain.

However, senior engineers aren't immune, either. Simon Willison, a developer with nearly 30 years of experience, has reported not having a "firm mental model of what the applications can do and how they work, which means each additional feature becomes harder to reason about."

Buried in a recent study by Anthropic was a surprisingly honest moment when speaking about the risks of engaging with coding agents on a regular basis:

One reason that the atrophy of coding skills is concerning is the “paradox of supervision” ... effectively using Claude requires supervision, and supervising Claude requires the very coding skills that may atrophy from AI overuse.

Sandor Nyako, Director of Software Engineering at LinkedIn, who oversees 50 engineers, has noticed the effect proliferating throughout the organization and has asked his team not to use AI tools for "tasks that require critical thinking or problem-solving."

"To grow skills, people need to go through hardship. They need to develop the muscle to think through problems," he said. "How would someone question if AI is accurate if they don't have critical thinking?"

There is also the question of what constitutes "overuse". We already have evidence, both data-driven and anecdotal, that these skills can atrophy and dissipate rather quickly (within months in some cases).

This is the contradiction that has many AI boosters talking out of both sides of their mouths: The use of coding agents is actively diminishing the very skills needed to effectively manage the coding agents.
