# I am a programmer, not a rubber stamp that approves Copilot-generated code

Original link: https://prahladyeri.github.io/blog/2025/10/i-am-a-programmer.html

A growing concern among programmers is the shift from *voluntarily* using AI tools to being *required* to use them, with performance now tied to AI-usage metrics. A Reddit post highlighting this trend has led developers to question their career path. The worry is not about AI's potential, but about being forced to rely on tools like Copilot and ChatGPT even for trivial tasks. Programmers fear this will erode professional skills and reduce their role to merely reviewing and approving AI-generated code, essentially becoming "rubber stamps" while still bearing full responsibility for mistakes. The core question is why companies mandate AI adoption if they truly believe in its effectiveness; the results should speak for themselves. This forced dependence raises fears of job losses, obscured by the "AI is taking jobs" narrative, and of the loss of the creative craft at the heart of programming.

## Hacker News Discussion: Pushback and Concerns about AI in Development

A recent Hacker News thread sparked discussion about the aggressive push for AI tools like Copilot in software development. Many commenters expressed frustration with these tools' tendency to generate incorrect or disruptive code, and a desire for a less intrusive user experience. A major concern is companies tracking and *mandating* AI tool usage, tying it to performance reviews and even potential layoffs. This prompted debate over whether developers are becoming less focused on understanding the fundamentals of their code and are instead just "getting things to run". Others argued that AI is helpful for quick side projects and that resistance to new tools has been common throughout history. Several users highlighted the absurdity of evaluating developers on AI usage *rather than* traditional metrics such as code quality and bug rates. Some advocated pushing back against such policies, even if that means finding another job, while others saw them as a sign of inevitable industry change. The discussion also touched on the broader automation of traditionally skilled craft work.

## Original Article

Posted on 15 Oct 2025

This morning, I came across a Reddit post titled *Completely losing interest in the career due to AI and AI-pilled people*. The author describes how, in the span of just two months, their corporate job went from "I'll be here for life" to "Time to switch careers?". And this post isn't alone; there is a deep and dark pattern to it.

When CTOs or project managers suggest that programmers on their team use AI assistance from Copilot, ChatGPT, or other LLMs to improve productivity, that's totally understandable. But once it's no longer voluntary and is instead enforced as policy, you start entering sinister territory. Worse, that usage is actually being monitored, and performance appraisals have now started depending on AI usage instead of (or at best in addition to) traditional metrics like the number of priority bugs raised, code reviews, Function Point Analysis, and so on.
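
To make that worry concrete, here is a minimal, purely hypothetical sketch in Python of what such an appraisal formula could look like once "AI usage" is weighted above traditional quality signals. Every metric name and weight below is an illustrative assumption of mine, not something taken from any real tool or policy:

```python
# Purely hypothetical sketch: an appraisal score where "AI usage" metrics
# dominate traditional quality signals. All names and weights are invented
# for illustration only.
from dataclasses import dataclass


@dataclass
class QuarterlyStats:
    ai_suggestion_acceptance: float  # fraction of AI suggestions accepted (0..1)
    ai_assisted_commits: float       # fraction of commits flagged as AI-assisted (0..1)
    priority_bugs_raised: int        # priority bugs traced to the developer's shipped code
    reviews_completed: int           # code reviews done for teammates this quarter


def appraisal_score(s: QuarterlyStats) -> float:
    """Toy scoring function: 70% of the score is 'did you use the tool'."""
    ai_component = 0.7 * s.ai_suggestion_acceptance + 0.3 * s.ai_assisted_commits
    quality_component = max(0.0, 1.0 - 0.1 * s.priority_bugs_raised)
    review_component = min(1.0, s.reviews_completed / 20)
    return 0.7 * ai_component + 0.2 * quality_component + 0.1 * review_component


if __name__ == "__main__":
    careful_reviewer = QuarterlyStats(0.2, 0.1, 1, 25)
    rubber_stamper = QuarterlyStats(0.95, 0.9, 6, 5)
    print(f"careful reviewer: {appraisal_score(careful_reviewer):.2f}")  # ~0.40
    print(f"rubber stamper:   {appraisal_score(rubber_stamper):.2f}")    # ~0.76
```

Under a formula like this, the developer who ships fewer bugs and reviews more code scores lower than the one who simply accepts the most suggestions, which is exactly the inversion being objected to.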

If they're really so confident in the LLMs' effectiveness, why not just keep it voluntary? Why force it on people? The results will be there in the shipped product for all to see. By forcing LLM usage on programmers for the implementation of every tiny little thing, are they trying to make us so dependent on LLMs that programmers will be reduced to mere approvers of LLM-generated code in the new scheme of things; mere rubber stamps, if you will, who just label the commits and annotate the tags as a formality?

Needless to say, they'd still want you to take responsibility. If bugs or tickets get raised against the shipped code, it's you who gets fired, not Copilot or ChatGPT, even though the next day's headlines and the larger narrative would still read, "AI is eating jobs"!

If the essence of programming shifts from creating to merely approving, we risk losing not just a profession but a craft. What do you think is going on here? Let me know your thoughts in the comments.

programming