A Step Behind the Bleeding Edge: A Philosophy on AI in Dev

原始链接: https://somehowmanage.com/2026/01/22/a-step-behind-the-bleeding-edge-monarchs-philosophy-on-ai-in-dev/

## Monarch Engineering: A Philosophy on AI Integration

AI is rapidly changing software engineering. At Monarch, we are embracing it deliberately, guided by our core engineering values. Our approach is not to chase the *newest* AI tools, which leads to instability and security risk. Instead, we aim to stay "a step behind the bleeding edge," understanding emerging technology while prioritizing maturity and safety. Central to this is **accountability**: you are responsible for any work that carries your name, whether or not AI was involved. That means rigorous review, quality control, and resisting the temptation to prioritize volume over substance. AI is a tool for reducing toil and generating ideas, but **deep thinking, judgment, and inspiration must remain human-driven.** We encourage exploration through dedicated time, safe experiments (prototypes, internal tools), and knowledge sharing. We also emphasize sound validation loops: design systems in which AI can validate its own work *and* remain subject to human oversight. Ultimately, AI will not replace skilled engineers; it will *change* their role. Focus on solving problems and building products, not just writing code. Don't feel pressured to adopt every new tool as it appears; collective exploration and a measured approach are what matter.

This Hacker News discussion centers on a blog post (somehowmanage.com) that advocates a pragmatic approach to using AI in software development: staying "a step behind the bleeding edge." One key theme is satisfaction with current chat-based AI interfaces. Commenters appreciate the consistent experience across different AI models, the ease of switching providers (avoiding vendor lock-in), and the recognition that the risks of relying on these tools are similar to those of depending on established big-tech platforms. In essence, commenters see little need for complex, integrated AI tooling and instead find value in the simplicity of chat. Another comment observes that writing about the *process of writing*, rather than the content itself, comes across as somewhat pretentious.

Original Article

This is a memo I published internally to my team at Monarch. I’m sharing it more publicly in case it helps other software engineering teams that are managing the crazy times we’re experiencing.

There’s no question: AI is changing how we work as Software Engineers. There’s a lot of hype, excitement, anxiety, and uncertainty around these changes.

As an Engineering org, we’ve had a strong set of Engineering Values (How We Work Together) that have served us really well as we’ve grown. I wanted to drop a few thoughts on our philosophy on AI in Engineering, grounded in these values. For more details, you can see our AI in Engineering@Monarch [internal, redacted link] doc.

Here is my ask of our team as we explore and implement AI in Engineering:

Understand and explore the bleeding edge, but adopt a dampened one

We definitely believe in and want to leverage AI in our work to increase productivity and quality. That said, if we try to always be on the bleeding edge, we will suffer from:

  • Thrash. The bleeding edge is constantly changing: new tools come out, existing tools leapfrog each other, and so on. Setting up, learning, and utilizing new tools and workflows takes time, and we don’t want that to take away from our momentum and focus on shipping.
  • Security exposure. There is a gold rush in AI. Companies are cutting corners to ship (or adopt) new tools. As evidence: every couple of days there is a new, high-profile AI-related vulnerability. We’ve built our product around trust, security and privacy. We cannot compromise here.

So as an org, we may feel one step behind the bleeding edge, only adopting things once they are a bit more mature and battle-tested (“a step behind the blood”).

That said, to know we are (only) a step behind, we must still understand the frontier. To do this, we will:

  • dedicate time and resources to exploration (collectively, as an org)
  • empower team members to explore in certain, safe circumstances (e.g., prototypes, hackathons, or other individual initiatives).
  • expect people to share what they learn: tools, workflows, prompts, tips, failure modes.

We need to understand the bleeding edge, but work at a step behind it.

Continue to own your work

Whether you use AI or not, if work has your name on it, you are accountable for it.

That means that you are responsible for the quality of the written documents or code that you put out. You should review everything before you ask others to take a look.

Likewise, work we put out collectively to our users has our company’s name on it and we are collectively accountable for it (its functionality, its quality, its security, etc). AI has no accountability, no pride in its craft, no shame if it gets things wrong. The human (that’s you) provides the accountability.

It’s much easier to generate code or documents, but if you generate a lot and don’t control for quality, you are shifting the burden onto your peers (who will review your work), or worse, our users (if it doesn’t get properly reviewed and tested).

As a side note, even teams at frontier AI labs don’t blindly trust their AI. When we’ve asked friends there about how they use their own tech, they have said there is always human review. Claims to the contrary are probably just one-offs (i.e., prototypes or non-critical systems) or just plain hype.

Do the deep thinking yourself (don’t get l-ai-zy)

Andy Grove argued that often, writing a deep report is more important than reading it: “Their [i.e., the document’s] value stems from the discipline and the thinking the writer is forced to impose upon [herself] as [she] identifies and deals with trouble spots”.

If you ask AI to write a document for you, you might get 80% of the deep quality you’d get if you wrote it yourself for 5% of the effort. But, now you’ve also only done 5% of the thinking. Delegate things that require time and toil to AI, but keep things that require thought, judgment, and rigor for yourself.

You can still use AI as a thought-partner, idea generator, editor, or synthesizer. You can (should) also use AI for toil (things that are time-consuming, repetitive, and menial). But you need to do the deep thinking yourself.

Continue to leave room for inspiration

When we wrote our Engineering values and included “leave room for inspiration”, one thing we were guarding against was working so hard, with so little slack, that there is no room for inspiration, creativity, and brilliance. AI changes that risk profile. With AI and increased productivity, you might have more time and slack, but if you’re delegating too much to AI, you may not have the deep thought, context, and connectedness to the code and product that inspiration requires.

People often worry about AI slop, but if you’re owning your work and reviewing it (as requested above), you will catch bad output and bad ideas that look like bad ideas. You’ll need to be more careful about catching bad ideas that look like great ideas (since generative AI is notorious for producing those), but again, if you’re owning your work, you should catch those, too.

I’m most worried about missing good ideas that sound like bad ideas (at first)—in other words, sins of omission. Those will never occur unless you own your work, do the deep thought—and create space for inspiration.

Carefully design validation/verification loops

We strongly believe in systems thinking, and one of the most important parts of systems thinking is feedback loops. When using AI, think about feedback and validation loops:

  • Creating ways for AI to validate its own work allows it to run more autonomously with less input from you. You can get much higher leverage if AI has a way to test the functionality and quality of its own work.
  • That said, that doesn’t absolve you of owning your work, and so, you should also be thinking about human validation loops. Where should you be involved? Often, the template will look like: asking AI to develop a plan and reviewing that, then asking it to do some work and reviewing/refining that.

In other words, design that system (you + AI) while figuring out your role in it, since you will ultimately own the output.
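
To make the first point concrete, here is a minimal sketch of what an AI self-validation loop might look like. It assumes a hypothetical generate_patch() hook into whatever agent or editor integration you use, and a test suite runnable via pytest; none of this is prescribed tooling, and the human review step still happens once the loop exits.

```python
# Minimal sketch of an "AI validates its own work" loop.
# Assumptions: generate_patch() is a hypothetical hook into your AI tool of
# choice, and the project's tests can be run with `pytest -q`.
import subprocess

MAX_ATTEMPTS = 3  # bound the AI's autonomy; escalate to a human after this


def run_tests() -> tuple[bool, str]:
    """Run the test suite and report (passed, combined output)."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def generate_patch(task: str, feedback: str | None = None) -> None:
    """Hypothetical: ask your AI tool to write or revise code for `task`,
    feeding back the previous attempt's test failures if there were any."""
    raise NotImplementedError("wire this up to your agent or editor integration")


def ai_with_validation_loop(task: str) -> bool:
    """Let the AI iterate against the tests, then hand off to human review."""
    feedback = None
    for _ in range(MAX_ATTEMPTS):
        generate_patch(task, feedback)  # the AI does the work
        passed, output = run_tests()    # the AI's output is checked automatically
        if passed:
            return True                 # a human still reviews the change
        feedback = output               # failures flow back into the next attempt
    return False                        # out of attempts: a human takes over
```

The important design choice is the cap on attempts: the AI gets a bounded amount of autonomy against an objective check, and anything it can’t get past goes to a human.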

Use AI more liberally in safe settings

We’ve found that there are a couple areas where using AI more liberally (that is, more autonomous agents, less human-in-the-loop, etc) makes a lot of sense, and we recommend you use these in your workflow:

  1. Conceptual prototypes. It can often be faster to build a concept (whether in the Monarch codebase or in some 0-1 tool like Replit) than to get designs into Figma or a PRD into Notion. These concepts can help showcase things internally, to users in surveys/interviews, etc.
  2. Internal tooling. Since these tools won’t be user-facing, the bar for polish is lower, and they can be built more liberally.
  3. 0-1 builds. New code that is less reliant on existing code is easier to build with AI.

Each of these may require more thought, polish, or verification later, but in the early stages, they can be great areas to “build-then-think” (rather than “think-then-build”).


Will AI replace my job?

If you consider your job to be “typing code into an editor”, AI will replace it (in some senses, it already has). On the other hand, if you consider your job to be “to use software to build products and/or solve problems”, your job is just going to change and get more interesting.

There is a lot that goes into building great software that AI isn’t going to replace (at least, any time soon). How we work will change, and we should be able to build faster and with better quality.

Am I falling behind if I’m not using AI constantly?

We know it can be stressful to feel like you’re not keeping up, but on the other hand, if we don’t change how we work at all, we will eventually fall behind. This has always been the case in software development, but things are moving a lot faster now.

That said, constantly worrying about falling behind only creates anxiety. Our philosophy (as described above) is to collectively explore the bleeding edge, but work an inch or two behind it. We will also walk that path together, so that no one feels like they are being left behind. You are expected to contribute to exploration and to sharing learnings, but you aren’t expected to figure out our full strategy on how we use AI on your own.

Is the code AI writes actually good?

You should be the judge. With the right context and the right prompting, we’ve found that AI can write good code (at minimum, consistent with the code base it’s operating in). But since you’ll also be reviewing the code, you can and should decide when it has written good code or not.

Am I losing skills by relying on AI?

It depends on how you use it. If you abdicate your responsibility as a developer to AI, then yes, your skills may atrophy. But if you do the deep work and review/validate AI’s work, your skills shouldn’t atrophy. In fact, they should improve, since you’ll constantly and instantly have access to a somewhat knowledgeable resource that you can consult.
