AI didn't delete your database, you did

Original link: https://idiallo.com/blog/ai-didnt-delete-your-database-you-did

A recent viral story about an AI agent allegedly deleting a production database has sparked debate over AI accountability. The author argues that blaming the AI misses the point: the core problem is a preventable system flaw, much like a past personal incident in which a manual deployment mistake deleted a critical code branch. The issue is not whether AI "thinks" or "reasons", but that it is a powerful yet ultimately unthinking tool. Just as SVN could not prevent human error, AI by itself offers no protection against bad system design, such as exposing an API endpoint that deletes a database. The author also flags a worrying trend: "vibe coding", the over-reliance on AI for specs, code generation, and review, produces systems whose only debugging strategy is to interrogate yet *more* AI. The solution is not to fear AI but to integrate it responsibly, with skilled developers retaining oversight and accountability, and without dangerous shortcuts in production deployments. Ultimately, understanding what *is* being deployed is what matters.

Hacker News discussion (34 points, 4 comments):

- jacquesm: It went from "the hackers did it" to "the AI did it". The set of problems is roughly the same.
- gowld: No essential difference.
- pengaru: There are obvious risks in wiring a random-number generator up to your CLI; the root of the problem is that ~everyone treats GenAI as AGI. The rest is just popcorn.
- josefritzishere: Using AI was a mistake. It might delete your database.

Original article

Last week, a tweet went viral showing a guy claiming that a Cursor/Claude agent deleted his company's production database. We watched from the sidelines as he tried to get a confession from the agent: "Why did you delete it when you were told never to perform this action?" Then he tried to parse the answer to either learn from his mistake or warn us about the dangers of AI agents.

I have a question too: why do you have an API endpoint that deletes your entire production database? His post rambled on about false marketing in AI, bad customer support, and so on. What was missing was accountability.

I'm not one to blindly defend AI; I always err on the side of caution. But I also know you can't blame a tool for your own mistakes.

In 2010, I worked with a company that had a very manual deployment process. We used SVN for version control. To deploy, we had to copy trunk, the equivalent of the master branch, into a release folder labeled with a release date. Then we made a second copy of that release and called it "current." That way, pulling the current folder always gave you the latest release.

One day, while deploying, I accidentally copied trunk twice. To fix it via the CLI, I edited my previous command to delete the duplicate. Then I continued the deployment without any issues... or so I thought. Turns out, I hadn't deleted the duplicate copy at all. I had edited the wrong command and deleted trunk instead. Later that day, another developer was confused when he couldn't find it.

All hell broke loose. Managers scrambled, meetings were called. By the time the news reached my team, the lead developer had already run a command to revert the deletion. He checked the logs, saw that I was responsible, and my next task was to write a script to automate our deployment process so this kind of mistake couldn't happen again. Before the day was over, we had a more robust system in place. One that eventually grew into a full CI/CD pipeline.

Automation helps eliminate the silly mistakes that come with manual, repetitive work. We could have easily gone around asking "Why didn't SVN prevent us from deleting trunk?" But the real problem was our manual process. Unlike machines, we can't repeat a task exactly the same way every single day. We are bound to slip up eventually.
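The script I ended up writing can be illustrated with a short sketch. This is not the original SVN script; it is a hypothetical Python version using plain directory copies instead of `svn copy`, but it encodes the two properties that mattered: every destructive step goes through one guarded choke point, and an unexpected state aborts the run instead of inviting a hand-typed fix.

```python
import shutil
from pathlib import Path

# Branches a deploy script must never remove, no matter what.
PROTECTED = {"trunk"}

def deploy(repo_root: str, release_date: str) -> Path:
    """Copy trunk -> releases/<date> -> current, refusing any destructive step."""
    root = Path(repo_root)
    trunk = root / "trunk"
    if not trunk.is_dir():
        raise FileNotFoundError("trunk is missing; aborting instead of guessing")

    release = root / "releases" / release_date
    if release.exists():
        # A duplicate deploy is an abort, not a prompt to delete things by hand.
        raise FileExistsError(f"{release} already deployed")

    release.parent.mkdir(parents=True, exist_ok=True)
    shutil.copytree(trunk, release)

    current = root / "current"
    if current.exists():
        shutil.rmtree(current)  # only ever removes the previous 'current' copy
    shutil.copytree(release, current)
    return release

def safe_delete(repo_root: str, name: str) -> None:
    """All deletions funnel through one place that knows what is off-limits."""
    if name in PROTECTED:
        raise PermissionError(f"refusing to delete protected path: {name}")
    shutil.rmtree(Path(repo_root) / name)
```

Nothing clever, just the same steps every time, with the "delete the wrong thing" failure mode made impossible by construction.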

With AI generating large swaths of code, we get the illusion of that same security. But automation means doing the same thing the same way every time. AI is more like me copying and pasting branches: it's bound to make mistakes, and it's not equipped to explain why it did what it did. The terms we use, like "thinking" and "reasoning," may look like reflection from an intelligent agent. But these are marketing terms slapped on top of AI. In reality, the models are still just generating tokens.

Now, back to the main problem this guy faced. Why does a public-facing API that can delete all your production databases even exist? If the AI hadn't called that endpoint, someone else eventually would have. It's like putting a self-destruct button on your car's dashboard. You have every reason not to press it, because you like your car and it takes you from point A to point B. But a motivated toddler who wiggles out of his car seat will hit that big red button the moment he sees it. You can't then interrogate the child about his reasoning. Mine would have answered simply: "I did it because I pressed it."
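If such an operation has to exist at all, it can at least be fenced off so that no single call, from an AI agent or anyone else, can trigger it. A minimal sketch, assuming nothing about the real system (the `GuardedOps` class and `drop_database` name are hypothetical, not tied to any framework):

```python
class GuardedOps:
    """Wrap destructive operations behind explicit, auditable checks."""

    def __init__(self, environment: str):
        self.environment = environment

    def drop_database(self, name: str, confirm: str = "") -> str:
        # Rule 1: the operation is simply unreachable in production.
        if self.environment == "production":
            raise PermissionError("drop_database is not exposed in production")
        # Rule 2: the caller must restate intent, so a stray call can't succeed.
        if confirm != f"drop {name}":
            raise ValueError(f'pass confirm="drop {name}" to proceed')
        return f"dropped {name}"  # the real destructive call would go here
```

The point is not this particular pattern; it's that the big red button should not be on the dashboard in the first place.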

I suspect a large part of this company's application was vibe-coded. The software architects used AI to spec the product from AI-generated descriptions provided by the product team. The developers used AI to write the code. The reviewers used AI to approve it. Now, when a bug appears, the only option is to interrogate yet another AI for answers, probably not even running on the same GPU that generated the original code. You can't blame the GPU!

The simple solution is to know what you're deploying to production. The more realistic one: if you're going to use AI extensively, build a process where competent developers use it as a tool to augment their work, not as a way to avoid accountability. And please, don't let your CEO or CTO write the code.

