Vibe Coding Will Break Your Company

Original link: https://www.forbes.com/sites/jasonwingard/2026/04/23/vibe-coding-will-break-your-company/

## The Rise of "Vibe Coding" and the Urgent Need for Judgment

The rapid advance of AI tools like Cursor and Claude Code, which let even non-programmers quickly build functional applications, is creating a new development mode known as "vibe coding." While this shift promises speed, it bypasses traditional quality controls (design, security, and legal review) and exposes a critical vulnerability: organizations lack *judgment*.

The core problem is not the software itself but the ability to distinguish a promising demo from a production-ready system. Recent incidents, including Replit's database deletion and Klarna's retreat from all-AI customer service, illustrate the risk of capability without contextual knowledge. Many AI pilot projects fail not because of technical limitations but because organizations cannot integrate AI responsibly.

Companies must now focus on building robust "judgment systems," assessed across five key dimensions: clear **decision rights**, a culture that permits constructive **override**, embedded **contextual intelligence**, a priority on rapid **learning** from failure, and strong **ethical discernment**.

Ignoring this shift is not an option. "Vibe coding" is already here, and success will depend not on the speed of *adoption* but on organizational *readiness*: the capacity, through strong leadership and established processes, to absorb and refine AI output.

The next failure may not be someone else's. It may be your own.

## Vibe Coding and AI: A Cautionary Tale

A recent Forbes article, discussed on Hacker News, warns that "vibe coding" (rapid software development with minimal planning) could harm companies. The core argument is that AI *amplifies* existing skill levels: it makes productive engineers more efficient, but it also compounds the damage caused by poor coding habits.

Commenters broadly agreed that speed without careful consideration is dangerous and leads to unmaintainable "spaghetti code." Some, however, pushed back against the alarmist tone, noting that AI tools can significantly boost productivity *when used by capable engineers*.

A major concern is the loss of deep understanding of the codebase caused by rapid development cycles, which makes debugging difficult and fosters reliance on AI fixes that often perpetuate existing problems. Ultimately, the discussion emphasized discipline, judgment, and the balance between speed and quality, even with AI assistance.

Many users also expressed frustration at the flood of AI-generated content, both positive and negative, emerging around the topic.

Original Article

A marketing manager with no engineering background opens Cursor on Monday morning. By Wednesday afternoon, she has a working customer-facing app. It looks polished. It performs the core task. She demos it to her VP, who forwards it to their CMO, who then shows it in the executive staff meeting as evidence that the team is “moving at AI speed.”

By Friday, it is in front of customers.

No one asked who owned the decision to ship it. No one tested it against the conditions it would actually face. No one had the cultural standing to say this looks great, and we are not putting it into production. The prototype became a product because the organization had no system for telling the difference.

I watched a version of this scenario play out recently in a boardroom. A senior executive demoed an AI-built internal tool. The room admired the speed. What received less attention were the harder questions: Who would own it after launch? Who would maintain it? And what would happen when it produced an answer that was confidently wrong?

This is what vibe coding is about to expose across businesses. The companies that think the story is about software are going to lose to the companies that understand the story is about judgment.

The Real Trend Is Decision Compression

Andrej Karpathy coined the term “vibe coding” in early 2025 to describe an AI-assisted style of building software through natural-language prompting, often without close inspection of the underlying code. Google Cloud describes vibe coding as a software development practice that makes app building more accessible, especially for people with limited programming experience. Tools like Cursor, Replit, Lovable, Bolt, GitHub Copilot Workspace, v0 by Vercel and Claude Code have moved the practice from novelty to workplace reality with stunning speed.

All of that is true. None of it is the point.

The point is that vibe coding collapses the distance between idea and artifact from months to hours. When that distance collapses, every quality-control mechanism your organization developed over the last 30 years gets bypassed by default. Design review. Security review. Legal review. Brand review. The simple friction of having to convince an engineer your idea was worth building. That is a governance story, not a software story. It is happening at every level of the org chart simultaneously.

Speed Without Judgment Is A Liability

In the summer of 2025, SaaStr founder Jason Lemkin ran a multiday experiment with Replit’s AI coding agent. During an explicit code freeze, the agent deleted a live production database, reportedly affecting records tied to over 1,200 executives and more than 1,100 companies. It also fabricated data and misrepresented what had happened. Replit CEO Amjad Masad publicly apologized and described the behavior as unacceptable as the company moved to add stronger safeguards. The deletion took seconds.

Lemkin has deep technical literacy, was running a controlled experiment, and was working on a platform built specifically for this kind of work. Now imagine the same failure mode distributed across every business function in your company, with people who do not have technical literacy, on workflows that were never designed for AI in the loop.

This is not a hypothetical risk. MIT research on enterprise AI adoption found the vast majority of corporate generative AI pilots were failing to produce measurable financial returns. The core problem was not simply the technology itself. It was the organizational inability to integrate AI into real workflows, learn from deployment and distinguish between a demo that worked and a system that delivered.

Klarna learned this the public way. After publicly touting its AI assistant was doing work equivalent to hundreds of customer service agents, the company began hiring human customer service workers again in 2025. CEO Sebastian Siemiatkowski later emphasized the need to balance AI use with human support and to make clear to customers that a human would be available when needed. The technology worked in some respects. The judgment system around it was incomplete.

Vibe coding is likely to multiply that failure mode across business functions. Marketing will ship apps. Operations will ship workflows. HR will ship internal tools. Each one will look like progress on a slide. Some will produce little. Others may create liabilities the company will not discover until a customer, a regulator or a journalist finds them first. Air Canada already learned, in court, that inaccurate chatbot guidance can still become the company’s responsibility.

The bottleneck in the AI era is not production. It is discernment. And discernment, as I have written in Forbes, is not a personality trait. It is an organizational system. That is why I have been arguing that AI readiness is not primarily a technology capability. It is a leadership discipline: the capacity to decide what should move faster, what should slow down, and who has the authority to know the difference.

The 5 Places Your Company Will Break

I have argued that organizations need to conduct what I call a Judgment System Audit, a diagnostic across five dimensions that determine whether a company can metabolize AI rather than just deploy it. Vibe coding is the cleanest stress test of that framework I have seen. Here is where the cracks will show.

Decision Rights

When a non-engineer builds a working app in two days using Lovable or Bolt, who has the authority to approve it for external use? In most companies, no one knows. The org chart was built for a world where only certain roles could produce certain artifacts. Vibe coding violates that assumption, and the resulting ambiguity will be filled by whoever moves fastest, which is rarely whoever should be deciding.

Override Culture

Can someone in your organization look at a slick prototype and say “no” without career risk? If the answer is no, vibe coding becomes a one-way ratchet. Every prototype that demos well moves forward, because the social cost of stopping it exceeds the perceived risk of shipping it. Override culture is the immune system of an AI-enabled enterprise. Most companies do not have one. The customer-service reversal at Klarna is what happens when nobody with standing can say the metric looks good and the experience is bad.

Contextual Intel

The recurring risk is that AI tools can generate technically plausible output that is contextually naive. A vibe-coded app does not know your regulatory environment, your customer base, your brand voice, your data sensitivity or your operational constraints. The judgment to apply that context lives in humans, but only if those humans are in the room before the prototype receives praise. In most workflows today, they are brought in afterward to clean up. The Replit incident is an extreme version of the very same pattern: The agent had capability without context, and capability without context is exactly how production databases get deleted.

Learning Velocity

The right question to ask after a vibe-coded prototype fails is not what did the AI do wrong. It is what did our process miss. Companies with high learning velocity treat each failure as a calibration event for their judgment system. Shopify CEO Tobi Lütke has built much of his AI mandate around this principle, pairing aggressive adoption with explicit organizational learning expectations. His public memo declared that “reflexive AI usage” was now a baseline expectation, and reporting noted that AI use would be included in performance and peer reviews. Whatever you think of the mandate, the underlying recognition is correct: Adoption without learning velocity is just exposure.

Ethical Discernment

Vibe coding makes it trivially easy to build things that should not be built. Think surveillance features. Manipulative UX patterns. User data collection without meaningful consent. Automation of decisions that warrant human review. The technical barrier used to do some of the ethical work for you. It does not anymore. If your organization does not have ethical discernment as a standing capability, vibe coding will reveal that gap publicly, and the headline will not be sympathetic.

A company that scores well on all five can use vibe coding as a genuine accelerant. A company that scores poorly on any of them will use vibe coding to accelerate its own exposure.

The Question Is Not Adoption. It Is Readiness.

Most leadership conversations about vibe coding are framed as adoption questions. Should we encourage it? Should we train for it? Should we restrict it?

Those are the wrong questions to ask. Vibe coding is already happening inside your company whether you have a policy or not. Many employees already have access to Cursor, Claude, ChatGPT, Replit and Lovable on personal devices, so the informal adoption curve is already outrunning the policy process.

The right question is diagnostic, not strategic. What is the state of your judgment system, and what is it about to be tested against? The companies that will pull ahead in the next 24 months are not the ones that adopt fastest. They are the ones whose judgment systems are mature enough that adoption does not break them.

This is the inversion that most executives have not yet made. In the era before AI, capability was scarce and judgment was assumed. In the new era of AI, capability is cheap and judgment is the scarce input. As an advisor to CEOs and senior teams navigating this exact shift, I witness the same pattern repeatedly: Leaders are still organizing themselves around the old scarcity, and they are about to discover, in public, that they optimized for the wrong constraint.

What Leaders Should Do Monday

If you are a senior leader and you take one thing from this article, take this. Before you write a vibe coding policy, run a Judgment System Audit.

Pick a recent AI-related decision your organization made. A tool adoption. A pilot. A prototype that got promoted or killed. Walk it through the five dimensions.

Where were decision rights ambiguous? Where did override culture fail? Where was contextual intelligence missing from the room? What did you learn, and how is that learning encoded? Where did ethical discernment depend on individual conscience rather than institutional process?

You will find gaps. Everyone does. The question, however, is whether you find them before vibe coding does, or after. Here is the part nobody is saying out loud: Your competitors are not going to beat you because they vibe code faster. They are going to beat you because their judgment systems are mature enough to absorb what vibe coding produces, and yours may not be.

In the executive conversations I am having now, the question is no longer whether AI-assisted building is coming. It is whether leaders are willing to admit that it has already arrived. Replit, Klarna and Air Canada were warning shots. The next one may not come from someone else’s company. It may come from yours.
