AI is destroying Open Source, and it's not even good yet

Original link: https://www.jeffgeerling.com/blog/2026/ai-is-destroying-open-source/

Recent events highlight the growing impact of "AI slop" on open source software. Ars Technica retracted an article because it contained fabricated AI-generated quotes, ironically attributed to a maintainer who had previously been harassed by an AI "agent" running software like OpenClaw, whose creator was recently hired by OpenAI. This reflects a larger trend: AI tools are submitting large volumes of low-quality code to projects, often to chase quick bug bounties rather than genuinely improve the software. Maintainers like Daniel Stenberg (curl) report fewer useful vulnerability reports alongside more entitled, unhelpful submissions. The flood of code is overwhelming developers, leading some to take measures such as disabling pull requests on GitHub. While AI code generation has plateaued, the burden on human reviewers has not eased. There is concern that OpenAI's push for easily accessible "agentic AI" will worsen the problem, repeating past bubbles like cryptocurrency. Meanwhile, rising AI demand is even straining hardware supply chains, such as hard drives.

## AI's Impact on Open Source: Summary

A recent Hacker News discussion centered on whether AI is harming rather than helping the open source world. While some expressed pessimism, many argued that AI offers a new funding model, converting financial contributions into AI-generated code. However, concern is growing about the *quality* of contributions. Multiple users reported a flood of AI-generated pull requests that are often ill-considered, untested, or simply unwanted by maintainers. This has increased the review burden and pushed some maintainers to prefer implementing features themselves over accepting subpar contributions. Some described a shift from "open contribution" to merely "open source": keeping code open, but not necessarily accepting external changes.

Beyond code quality, there are worries that AI-driven "information pollution" mirrors the broader problem of low-quality content online. Others noted parallels to past technology bubbles such as crypto and NFTs, cautioning against blind optimism. Despite these drawbacks, most agreed the underlying technology is useful, but that its current hype cycle and its effects on skill development deserve careful consideration.

Original article

Over the weekend Ars Technica retracted an article because the AI a writer used hallucinated quotes from an open source library maintainer.

The irony here is that the maintainer in question, Scott Shambaugh, was harassed by someone's AI agent over not merging its AI slop code.

It's likely the bot was running through someone's local 'agentic AI' instance (likely using OpenClaw). The guy who built OpenClaw was just hired by OpenAI to "work on bringing agents to everyone." You'll have to forgive me if I'm not enthusiastic about that.

Video

This blog post is a lightly-edited transcript of the video I published to YouTube today. Scroll past the video embed if, like me, you'd rather read the text :)

Impacts on Open Source

Last month, even before OpenClaw's release, curl maintainer Daniel Stenberg dropped bug bounties because AI slop resulted in actual useful vulnerability reports going from 15% of all submissions down to 5%.

And that's not the worst of it—the authors of these bug reports seem to have a more entitled attitude:

> These "helpers" try too hard to twist whatever they find into something horribly bad and a critical vulnerability, but they rarely actively contribute to actually improve curl. They can go to extreme efforts to argue and insist on their specific current finding, but not to write a fix or work with the team on improving curl long-term etc. I don't think we need more of that.

These agentic AI users don't care about curl. They don't care about Daniel or other open source maintainers. They just want to grab quick cash bounties using their private AI army.

I manage over 300 open source projects, and while many are more niche than curl or matplotlib, I've seen my own increase in AI slop PRs.

It's gotten so bad that GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular. And now we'll see that feature closed off in more and more repos.
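Short of disabling pull requests entirely, GitHub's REST API has long offered temporary "interaction limits" that restrict who can open issues, PRs, and comments on a repository. A sketch using the `gh` CLI (OWNER/REPO is a placeholder, not a repo from this post):

```shell
# Restrict issues, PRs, and comments to prior contributors for one month,
# using GitHub's "interaction limits" REST endpoint via the gh CLI.
# OWNER/REPO is a placeholder; requires an authenticated gh session
# with admin access to the repository.
gh api --method PUT /repos/OWNER/REPO/interaction-limits \
  -f limit=contributors_only \
  -f expiry=one_month
```

This throttles drive-by submissions without closing the repo off to everyone, and the limit expires on its own; `limit` also accepts `existing_users` and `collaborators_only`.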

AI slop generation is getting easier, but it's not getting smarter. From what I've seen, models have hit a plateau where code generation is pretty good...

But it's not improving like it did the past few years. The problem is the humans who review the code—who are responsible for the useful software that keeps our systems going—don't have infinite resources (unlike AI companies).

Some people suggest AI could take over code review too, but that's not the answer.

If you're running a personal weather dashboard or building a toy server for your homelab, fine. But I wouldn't run my production apps, the ones that actually make money or could cause harm if they break, on unreviewed AI code.

If this was already a problem, OpenClaw's release, and OpenAI's hiring to democratize agentic AI further, will only make it worse. Right now the AI craze feels the same as the crypto and NFT boom, with the same signs of insane behavior and reckless optimism.

The difference is there are more useful purposes for LLMs and machine learning, so scammers can point to those uses as they bring down everything good in the name of their AI god.

Since my video The RAM Shortage Comes for Us All in December, hard drives have emerged as the next looming AI-related shortage: Western Digital just announced they're already sold through their inventory for 2026.

Some believe the AI bubble isn't a bubble, but those people are misguided, just like the AI that hallucinated the quotes in that Ars Technica article.

And they say "this time it's different", but it's not. The same signs are there from other crashes. The big question I have is: how many other things will AI companies destroy before they have to pay their dues?
