The Rise of the One-Pizza Engineering Team
Product and design are the new bottlenecks

Original link: https://www.jampa.dev/p/the-rise-of-one-pizza-engineering

## How Engineering Teams Are Changing in the AI Era

AI coding tools like Claude Code are dramatically accelerating development, but that speed creates new bottlenecks outside of coding. Traditional engineering team structures are being upended, forcing a reassessment of roles and team sizes. With coding no longer the main constraint on engineers' time, demand grows for high-quality product specs and designs, areas where current AI still struggles. This imbalance (more engineers than PMs and designers) is driving the rise of the "product engineer," a hybrid role that blends engineering, product, and design responsibilities.

AI-generated code is fast but often lacks nuance and can introduce subtle bugs that require expert review, which calls for dedicated engineers to safeguard code quality and keep bad patterns from being replicated. As a result, we may see a shift away from full-stack roles toward more specialized back-end and front-end expertise. Because AI works best when given broad context, smaller, more focused teams (2-3 engineers) are becoming the ideal, and large, independently executed tasks are becoming less efficient. Engineering managers will increasingly take part in coding, applying their skill at breaking down problems and reviewing AI-generated solutions, though their management duties will still come first. This is only the first wave of change; the future of QA and the potential of AI-assisted tooling for designers and PMs remain open questions.

A Hacker News discussion centers on the idea of the "one-pizza team": a small engineering team, ideally sized so a single pizza can feed it (roughly 2-3 engineers). The original article (jampa.dev) argues this is becoming the preferred structure for new projects, especially ones not weighed down by existing complex systems. Commenters broadly agree that small teams have always worked well for greenfield development. Some suggest the shift could mean fewer "full-stack" openings and more demand for specialized back-end and front-end expertise. One commenter argues that full-stack roles have historically undervalued specialized skills, depressing compensation, and that a move toward specialization could benefit engineers financially. Overall, the thread underscores a preference for focused, small teams in software development.

Original article

It is undeniable that using AI tools like Claude Code lets us write code faster now.

But the question is, how does that impact everything else?

In most teams, coding - reading, writing, and debugging code - used to be the part that took engineers the most time, but that is no longer the bottleneck.

The Theory of Constraints states that every system has a bottleneck, since without one, it would operate infinitely fast, which is impossible.

Let's look at the new bottlenecks, their effect on the size and roles of engineering teams, and why Amazon's two-pizza-team rule - “teams should be small enough to be fed by two large pizzas, ideally comprising 5-8 people” - is being phased out.

Currently, LLMs are less useful for product managers and designers than for engineers.

On the designer side, LLMs struggle to create great prototypes. Notice how every AI product homepage looks the same. I believe this is a limitation of LLMs: they tend to generate ideas near the middle of their training data’s bell curve, which prevents bad design but also limits truly innovative concepts.

On the Product Manager (PM) side, LLMs can gather data and insights, but the most time-consuming part for the PM is communicating and talking with clients, which can't be automated as effectively.

This creates a new bottleneck: The project output starts to depend on the delivery speed of product specs and wireframes.

And it gets worse: generally, you have 4-7 engineers but a single (sometimes shared) PM and Designer in a team, creating an imbalance.

Some companies recognized this imbalance and asked, “What if the engineers were involved in Product and Design instead of just receiving the product specs?” Then they started hiring Product Engineers. 

This is not a new concept; the role has existed for over 16 years. But I am highlighting this because they are now more relevant than ever.

Okay, so what are product engineers? They are software engineers empowered to handle some responsibilities of PMs and Designers, balancing the roles.

Product engineers take on traditional PM responsibilities, including owning the roadmap, engaging with users, analyzing data, framing opportunities, and determining what to build. However, they do not replace the PM. The PM still provides context but is no longer the main driver of implementation.

On the designer side, they also assemble the building blocks of a design system. The designer still creates those blocks and collaborates on the UX flow, but they are no longer responsible for producing “pixel-perfect prototypes” that engineers must follow.

Not everyone should or will be a product engineer.

But the traditional software engineer who is a “jack of all trades, master of none” will not surpass an engineer who is a “master of one”.

AI is good at producing code of acceptable quality, but it is rarely excellent. You can't simply prompt and merge it into a mature codebase without in-depth human review.

There are many problems with AI coding:

  • AI often approaches coding carelessly, neglecting second-order effects: it modifies or removes essential code without considering the consequences.

  • For an LLM to fix a bug, the bug needs to make sense (and bugs that pierce through the abstraction layers rarely do), so you can't have a team rely entirely on AI without a solid understanding of the tools it uses.

  • AI will replicate destructive patterns in your codebase, causing a decline in code quality over time because no one will detect and fix them.

So, we need specialists to manage the platform code. It’s not that they won’t use AI, but they will act as gatekeepers during reviews and prevent bad patterns from being merged into the codebase.

We will probably see fewer full-stack engineer openings and more roles for back-end and front-end engineers. This doesn't mean they will do only one or the other, but they will be expected to be an expert in one area.

Besides the previously mentioned imbalance, large teams also face two main challenges: communication overhead and the need to divide work so team members can work independently toward a shared goal.

The second problem used to be manageable: we could divide the work into epics that would take an engineer two weeks to finish, and people could work on them independently. But now two weeks feels like a long time.

Another issue is that AI performs best when given a wider context for a problem. This, in turn, makes dividing work even more difficult.

The ideal team size now appears to be 2-3 engineers per project. Even with a larger team, you can divide it into smaller groups of 2 engineers for a set period and observe how quickly they progress. 

However, avoid assigning large projects to a single person. Working alone for an extended period without anyone to brainstorm, review code, or collaborate can be challenging.

From my experience, an individual contributor's frustration tends to increase over time when they are on their own. Also, developers need to maintain and improve their communication skills, which are essential for growth.

I get many DMs on Reddit and LinkedIn, pitching AI manager tools that track “productivity” and use AI to “evaluate” an engineer’s performance.

Most fail at the concept stage. Manager performance tools aren’t new, but they all fail by trying to outsmart the manager. AI will never have even 40% of the context a manager needs. It can only evaluate “quantitative” metrics.

But managers who don’t code will be rare: with smaller teams, their responsibility for the “People” pillar decreases, freeing up time for the “Programming” pillar.

I already explained that the “well-defined engineering manager role” is a myth and that engineering management always involves adapting to the team; that part will not change.

AI makes it easier for engineering managers to participate in coding. They already know how to break down a larger problem into smaller, reviewable goals, just like any senior engineer would.

So you prompt Claude, attend a meeting, and then review the code when you return!

But the role of a manager will differ from that of an engineer. They will not be assigned the same tasks because situations that demand a manager’s attention will always take priority. So it’s crucial to avoid work that could potentially block the team if halted.

I believe this is just the first wave of changes.

My post assumes that AI won't improve much beyond its current state, which seems like a safe prediction. The progress isn't primarily from better AI models but from how we use them (tools, thinking capacity).

We're also seeing significant investment in designer and PM tooling where AI is not a centerpiece but a complement. I wonder how much more they can still improve.

There are many unknowns I haven’t addressed because they’re still uncertain, such as QA: How much can AI take over QA, and what will the roles of the QA Engineers be?

What else do you think is going to change? If you know, drop a comment.
