Artisanal Code

Original link: https://sunnyamrat.com/posts/2026-01-17-artisanal-code/

## The Rise of "Artisanal Code" and the Role of AI

The line between hand-crafted software and automated tooling is blurring, giving rise to a growing "artisanal" view of coding, in which a codebase's value lies in its understandability and maintainability. While AI tools such as Claude and Codex offer significant time savings, especially for boilerplate and familiar tasks, their potential to fully replace traditional development is questionable.

The author argues that genuine software craftsmanship rests on a deep mental model of the code, which is what makes effective debugging, modification and long-term maintenance possible. Current AI, even with "agentic" capabilities, often generates convoluted code that lacks this clarity, producing a new form of "cheating" that risks unsustainable, opaque systems.

No-code tools are viewed negatively, as inherently less flexible than code. Ultimately, successful AI integration requires engineers to *understand* and *approve* the generated code, treating the AI as a powerful assistant rather than an autonomous replacement. The key is to use AI for efficiency while retaining ownership and understanding of the final product, ensuring it remains "artisanal code" rather than a burnt instant.


Original text

Our single-origin, small-batch code is traceable through its commit history to artisanal programmers and traditional foundries. Incorporating a confluence of different cultures and languages, our code is hand-crafted, wrought through pull requests, processed on baremetal and roasted to perfection in a cast-iron CI/CD pipeline - giving you a bold, full-bodied, rich shot of open-source goodness. The perfect start to your day.

It may be wishful thinking, and it still feels a bit premature to talk about software engineering as artisanal, but it increasingly feels that way. With project managers frothing at the mouth about the screen they whipped up using AI, what could once be dismissed with an eye-roll is now starting to feel more serious.

This has happened before. I used to work at a company where the technical expert who had management's ear was, themselves, non-technical. They evangelised no-code tools, and whenever there was a limitation in the tooling, it was simply a matter of "just write a little bit of code to do this part, for now", in the hope that the tool would gain a new feature later and the unsightly code could be removed.

AI is different, though. There are different types of coding task, and AI does have a place. Firstly, and most importantly, it is not replacing code as the engine. Arguably, the no-code tools that preceded it were trying to replace that engine. They would interact with APIs and provide some functionality to the user, but it never felt like the right thing to do. The promise of automating business processes without engineers is seductive to a technophobic executive, but it brings vendor lock-in, integration hell and additional complexity, all in one bundle.

Philosophically, as a software engineer, it always felt like cheating to use a no-code tool. Not cheating because it was easy or simple; it was often the opposite. It felt like cheating on code. The puzzle of engineering was lost, replaced by the far less satisfying process of connecting boxes with arrows, with conditionals everywhere.

The fact that AI generates real code that you can inspect is perhaps why it has seen so much adoption by engineers. It still feels like cheating, but a different kind: cheating on a test, which somehow doesn't feel as bad.

The main places where I think that AI has the biggest benefit are:

  • Writing boilerplate code that you already know how to write
  • Autocompleting functions that you already know how to write
  • Implementing logic that you already understand

The key is that you should already know how to do it. If I’m writing a new React component, a Django View, a D3 chart or something I’ve already done before, AI is fantastic at generating the skeleton. It might have a decent go at writing the thing I want too. This is extremely valuable and is genuinely a massive time saver.
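As a concrete illustration of the kind of boilerplate meant here, the sketch below uses a plain-Python CLI skeleton as a stand-in (the post itself mentions React components, Django views and D3 charts; this generic example and all its names are my own, hypothetical, choices). It is exactly the sort of code an experienced developer has written many times and can verify at a glance, which is what makes AI generation of it low-risk:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Boilerplate argument parsing: familiar, repetitive, easy to review."""
    parser = argparse.ArgumentParser(description="Example CLI skeleton")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="enable verbose output")
    parser.add_argument("-o", "--output", default="out.txt",
                        help="path to write results to")
    return parser


def main(argv=None) -> int:
    # Parse arguments and (in a real tool) dispatch to the actual logic.
    args = build_parser().parse_args(argv)
    if args.verbose:
        print(f"reading {args.input}, writing {args.output}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```

Because the author of such a skeleton already knows every line it should contain, checking an AI-generated version against their mental model is quick, which fits the post's criterion that you should "already know how to do it".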

My issue, however, is with “agentic” coding. I enjoy using tools like Claude Code and Codex, but I’ve only found real benefit when the task is simple or when I’m working on a toy project, where the implementation doesn’t really matter.

I expect people to attack this and say that I'm not using the tools properly, and while that may be true to an extent, the issue comes with taking this to production. I need to read and understand all the code I deploy; after all, I'll be on the hook for maintaining it, fixing bugs and tweaking functionality. AI coding, as it is proselytised, will generate code, generate tests and go off and fix its own bugs. So what's the problem?

Well, it's quite simple really. If you aren't able to construct a mental model of how your code works, then you can't look after it, and therefore you can't use it. If you're wading in waters containing Factory classes, or you've got inheritance trees, or you're calling code from external libraries or utilities, you very quickly reach a point where you need a good understanding, and you yourself need to be focused enough to see the entire picture. Not all of this lives in code, either; some of the understanding you need comes from historical decisions, business requirements and more.

Sure, you could create skills to go and search your knowledge base or project management tool for requirements, but this isn't a seamless process. In fact, trying to give an AI the same context you have as a human is, precisely, integration hell.

When I use AI to write code, I might ask it to generate some files, but to be successful (choose your own definition of success), I have to be absolutely clear about what it has written and approve the changes. This might be me being a luddite, but I've had sessions where I dangerously-allow-edits in Claude Code and end up with something that does 90% of what I asked, but where it has done it in a way that is very hard to unpick or tweak. I end up having to keep prompting it as the result asymptotes towards the target.

Although I didn't set out with a definition of "artisan code", I'm not so militant that I think using any AI at all counts as not having written the code yourself. To me, artisan code is code that you can explain, defend and fix. If you've strayed beyond that path, I'm sorry, but you've just got yourself a burnt instant. Just add boiling water and gulp it down; that's all you're worth.

Preemptive Rebuttals

No-code Tools

I hold the strong opinion that these are evil. They try to be an interface for something that is too flexible and too manipulable. One might argue that I'm really referring to code that is undisciplined or unstructured, but trying to replicate the flexibility of code visually only works if the medium has inherent limitations.

Scratch is a great example of visual programming. There are lots of other really cool visual programming languages, but you can’t do in them what you can do in Python, or JavaScript even. And don’t even think about what you can do with Go or Rust.

As a medium, it’s limited and rigid. Unfortunately, business requirements are usually neither of those things.

Human Context

There is an opposing argument to be made here. If you've got really good documentation, training materials, a well-enforced style guide and all that good stuff, you will get better AI code, but you'll also find it easier to train newcomers to your team. It's similar to how designing with accessibility in mind results in a better product for everyone.

And integration hell might not even be such a big issue for many organisations. Increasingly, meeting transcripts are available too, so even offline context can be fed into AI. The inherent "validity" of a piece of information is quite easy for a human to determine, but it needs to be quite firmly defined before it becomes a useful input to an LLM.

Agent Loops

There’s the core agent loop, where the LLM calls tools and then gives a response once it’s done doing the thing. But I’m talking about the loop where you ask it to do something and then it turns out that you’ve not given it enough context or your language wasn’t precise enough. Then you try again with different words and more context and it still doesn’t work. Then you try to start writing it yourself and then hope that it can pick up where you left off. Then it still gets it wrong. So then you try and make it do a smaller part of the task. And then you give up and tell your mates that AI is overhyped.

Preaching to the choir

Fair enough.
