Why I'm declining your AI generated MR

原始链接: https://blog.stuartspence.ca/2025-08-declining-ai-slop-mr.html

## AI-Generated Code and Code Review: A 2025 Perspective

This article outlines concerns about the misuse of AI-generated code in merge requests (MRs). The author, a senior AI/cloud scientist and educator, acknowledges AI's potential benefits while detailing scenarios in which AI-assisted code *harms* a project and its team, often warranting immediate rejection *without* a detailed review.

These problem cases include submitting code that could trivially be deleted, lacking a basic understanding of the language used, generating redundant documentation ("documentation spam"), introducing inconsistencies, over-engineering for unnecessary edge cases, and adding pointless dependencies. Such problems undermine the core goals of code review: learning, sanity-checking, consistency, and minimizing mental load.

The author notes that simply declining such an MR and sharing this guidance is enough to signal where the problem lies. Acceptable uses of AI are limited to temporary solutions or edge features, *provided* the author explains AI's role and validates the result.

Ultimately, this is an evolving problem. The author is still working out how to address these issues with junior developers, balancing constructive feedback against protecting project quality and preventing the accumulation of "AI slop" and technical debt.

## AI-Generated Code and Open-Source Contributions: A Growing Concern

A recent Hacker News discussion centered on the growing number of low-quality, AI-generated merge requests (MRs), also known as pull requests, being submitted to open-source projects. A software developer described the frustration of receiving code that is often superficially correct but riddled with junior-level mistakes, requiring substantial review effort.

The core problem is not AI itself but contributors submitting code without adequate validation or understanding. Many commenters agreed, noting that AI tools now *accelerate* the production of unhelpful contributions. Some argued that automated review tools could backfire, while others proposed stricter contribution guidelines or outright rejection of such submissions.

A key point is the cost in maintainers' time: instead of spending it on constructive feedback, they spend it dissecting code whose authors barely understand the AI's output. The discussion also touched on how AI-generated code can *look* professional, masking its underlying flaws. Ultimately, the conversation underscored the need for new approaches to maintaining project quality in an era when AI code generation is readily available.

Sometimes a merge request (MR) doesn't merit a code review (CR) because AI was used in a bad way that harms the team or the project. For example:

  1. Code deletion would improve the MR a lot.
  2. You don't know the basics of the language you submitted.
  3. Documentation spam.
  4. Blatantly inconsistent.
  5. Edge case overload.
  6. Adding pointless or deprecated dependencies without knowing why.

If I decline your AI code MR with no further comment and send you this page, then I suspect some of these conditions were met.

Despite some recent research and discussion on the topic, I know that AI can be helpful in writing code. However, AI misuse is also a new phenomenon, and we need guidelines to help identify it. This page was written in 2025, and I expect both the tools and the guidelines to evolve.

  • Merge Request (MR): when a programmer submits proposed changes to a project in a structured way. This makes it easy for anyone to see the differences and review the changes. Sometimes called a Pull Request.
  • Code Review (CR): when another programmer reviews an MR, provides feedback or improvements, and approves or rejects the changes.

I feel like I'm in a good position to write about this because:

  1. I'm a senior computer scientist for AI and cloud. I've been a technical supervisor to about 20 students and juniors.
  2. I have degrees and work experience in both computer science and education.
  3. I know code and AI. My AI project has a million installs and monthly income.
  4. I spend a lot of personal time enjoying, exploring, and discussing AI news and breakthroughs.
  5. I don't need a job and I'm not selling anything. I'm not an investor or CEO shilling AI slop and I don't get ad revenue for flaming AI.

There are thousands of opinions and articles; here's a good one from Google. Instead of rehashing those, let's focus on what AI misuse threatens. With good code reviews:

  1. Authors learn and improve.
  2. Reviewers learn and improve.
  3. We sanity check important changes.
  4. We minimize mental load for both humans and AI.
  5. We get consistent and simple code.
  6. Every MR makes the project better.
  7. Authors take responsibility for their code and can justify it.

1) Code deletion would improve the MR a lot.

Can code be trivially deleted?

This violates the CR goals "sanity check" and "mental load". For example, a setup script handling operating systems that we don't even have in our org. Not only should the author do this basic cleanup, but now they're placing an added burden on the reviewer to do it for them. In 2025, AI is not in a state where I'm comfortable running it in production with zero human review.
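As a hypothetical sketch of that setup-script example (the function names and package commands here are my invention, not from any real MR), this is the kind of code where deletion *is* the review:

```python
def install_command(system: str) -> list[str]:
    """Hypothetical AI-generated setup helper: it carefully branches on
    platforms that nobody in the org actually runs."""
    if system == "Linux":
        return ["apt-get", "install", "-y", "build-essential"]
    elif system == "Darwin":    # no macOS machines in the org
        return ["brew", "install", "gcc"]
    elif system == "Windows":   # no Windows machines in the org
        return ["choco", "install", "mingw"]
    raise RuntimeError(f"Unsupported OS: {system}")


def install_command_trimmed(system: str) -> list[str]:
    """What survives trivial deletion: the one platform the org deploys on."""
    if system != "Linux":
        raise RuntimeError(f"Unsupported OS: {system}")
    return ["apt-get", "install", "-y", "build-essential"]
```

The trimmed version behaves identically on every machine the team owns, and the reviewer no longer has to sanity-check `brew` and `choco` incantations that will never run.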

2) You don't know the basics of the language you submitted.

This violates the CR goal "authors learn". How can my feedback improve you as a software developer if you don't understand your own code? Going through you is not the best way for me to give feedback to your AI.

3) Documentation spam.

One example I've seen is two nearly identical copies of documentation in two different formats.

This violates "reviewers learn" and "make the project better".

If an author didn't trim or even read the AI generated documentation I think "they don't value my time or the time of my team". It's not the responsibility of a reviewer to edit 300 words of AI slop because the author didn't write the 3 words "keep it short" in their prompt.

4) Blatantly inconsistent.

Common examples I see are using new frameworks or styles for logging and unit tests.

This violates the CR goals "consistency" and "mental load". To understand a software project humans and AI may need to understand 50 concepts at once. Do we want them to have to consider 200 instead? Failing to manage complexity and consistency paralyzes a project once no human or AI is smart enough to improve it further.
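A minimal illustration of that kind of drift (the function names are invented for this sketch): the project already logs one way, and an AI-generated addition silently introduces a second style the whole team now has to keep in their heads:

```python
import logging

# Established project convention: the standard logging module.
logger = logging.getLogger(__name__)

def fetch_user(user_id: int) -> dict:
    logger.info("fetching user %s", user_id)
    return {"id": user_id}

# Hypothetical AI-generated addition: same job, but it ignores the
# convention and invents its own ad-hoc print-based log format.
def fetch_order(order_id: int) -> dict:
    print(f"[LOG][INFO][fetch_order] order={order_id}")
    return {"id": order_id}
```

Neither function is wrong in isolation; the damage is the second logging concept the project now carries forever.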

5) Edge case overload.

This violates the CR goals "sanity check" and "make the project better". Handling many new unusual edge cases likely means the author didn't test all the code.

If we implement a feature at the cost of introducing twenty bugs with untested edge cases, that does not make the project better. It's like taking one step forward (progress) but falling into a mud pit.

Similarly, AI slop may desperately catch an exhaustive list of all exceptions to "handle all cases". But the AI isn't handling the cases. It's just suppressing the valid exception or writing a non-standard error message.
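A sketch of what "handling all cases" often looks like in practice (function names are mine, not from any cited MR):

```python
def parse_port_slop(value: str) -> int:
    """AI-slop pattern: 'handles' bad input by swallowing the real error."""
    try:
        return int(value)
    except Exception:
        return -1   # non-standard sentinel the caller must magically know about

def parse_port(value: str) -> int:
    """Letting the standard exception propagate tells the caller what went wrong."""
    port = int(value)   # raises ValueError with a clear message on bad input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```

The first version never fails loudly, which reads as robustness in an MR diff but just moves the bug downstream to whoever receives `-1`.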

6) Adding pointless or deprecated dependencies without knowing why.

This violates the CR goals "reviewers learn" and "take responsibility for your code". A reviewer might ask "why are we using this new thing here?" The author shouldn't respond "I have no idea, the AI did it." This may teach the team to use a deprecated tool, or the wrong tool for the job.
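One real instance of this pattern, wrapped in a hypothetical sketch: `datetime.utcnow()` has been deprecated since Python 3.12, yet assistants trained on older code still emit it:

```python
from datetime import datetime, timezone

def created_at_slop() -> datetime:
    # Hypothetical AI-generated line: deprecated since Python 3.12, and it
    # returns a naive (tzinfo-less) datetime that is easy to misuse.
    return datetime.utcnow()

def created_at() -> datetime:
    # The maintained equivalent the team should learn instead.
    return datetime.now(timezone.utc)
```

If the author can't explain why `utcnow()` was chosen, merging the MR quietly teaches the team a deprecated API.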


None of these are hard rules. I'm more inclined to accept an AI-generated MR, or give a full CR to one, if:

  • The code is temporary or a one-shot analysis with no long term maintenance requirements. If it works it works!
  • The MR includes an explanation of why AI was used, how much, why, and what extra steps the author took to validate it.
  • This is an edge feature and not a core component.

As a team lead, teacher, and (I think) nice guy, I'm struggling with how to confront juniors when I feel their MR harms them, the team, or the project. Why did the junior submit AI code? Was it a smart decision or just laziness? Do I harshly confront them and call it AI slop, or do something else?

It's not always clear to me when it's a good use of AI that I should support with a full CR, or when it's a bad use of AI that I need to confront by rejecting it entirely. For me, just writing this page has helped.

Nobody has years of experience understanding how AI slop impacts technical debt or learning. If software development is changing in a good way then team leads need to change with it. If software development is changing in a bad way then we need to resist.
