Don't fall into the anti-AI hype

Original link: https://antirez.com/news/158

## The Future of Programming Is Here

Software developer Antirez reflects on the rapid, transformative impact of AI, and large language models (LLMs) in particular, on programming. He acknowledges a long-standing desire to write good, accessible software and to see a fairer society, and while he is skeptical of AI's economic viability, he recognizes its inevitability. He describes recent experiences in which LLMs completed significant coding tasks in hours rather than weeks, from bug fixes and library modifications to creating an entire C library. This convinced him that, for many projects, writing code by hand is becoming unnecessary, and that the focus will shift to *understanding* what needs to be built and *communicating* it effectively to the AI. Antirez sees this as a continuation of his life's work: democratizing code. He is optimistic about AI's potential to empower small teams and accelerate innovation, echoing the impact of open source software, but he worries about the centralization of AI technology and potential job losses, and advocates government support for those affected. Ultimately, he urges programmers to embrace these tools, experiment thoughtfully, and find ways to use AI to build more and better software while preserving the joy of creation.

## Hacker News Discussion: Don't Fall into the Anti-AI Hype

A recent Hacker News post sparked debate over reactions to AI in software development. The core argument, made by antirez, is to avoid dismissing AI tools outright. While acknowledging legitimate concerns, the discussion emphasizes that AI is not inherently bad: it is a tool that can augment programming, letting developers focus on higher-level problem solving and building rather than on tedious boilerplate. The conversation reveals a divide, however. Some worry that AI devalues skills, causes job losses, or exploits open source contributions without proper attribution or license compliance. Others argue that AI empowers developers, accelerates innovation, and democratizes access to coding. A key point is the need to use AI responsibly: actively reviewing its output, providing detailed instructions, and learning continuously alongside the technology. Concerns were also raised about the sustainability of current AI business models and the potential centralization of power. Ultimately, the discussion stresses the importance of adapting to a changing landscape rather than resisting it, while acknowledging legitimate anxiety about the future of the profession.

## Original Article
antirez 1 hour ago. 11278 views.
I love writing software, line by line. It could be said that my career was a continuous effort to create well-written, minimal software, where the human touch was the fundamental feature. I also hope for a society where the last are not forgotten. Moreover, I don't want AI to succeed economically, and I don't care if the current economic system is subverted (I could be very happy, honestly, if it goes in the direction of a massive redistribution of wealth). But I would not respect myself and my intelligence if my ideas about software and society impaired my vision: facts are facts, and AI is going to change programming forever.

In 2020 I left my job in order to write a novel about AI, universal basic income, and a society adapting, through many challenges, to the automation of work. At the very end of 2024 I opened a YouTube channel focused on AI, its use in coding tasks, and its potential social and economic effects. But while I recognized very early what was going to happen, I thought we had more time, at least a few years, before programming would be completely reshaped. I no longer believe this is the case. Recently, state-of-the-art LLMs have become able to complete large subtasks, or medium-sized projects, alone and almost unassisted, given a good set of hints about what the end result should be. The degree of success you'll get depends on the kind of programming you do (the more isolated, and the more textually representable, the better: system programming is particularly apt) and on your ability to build a mental representation of the problem to communicate to the LLM. But, in general, it is now clear that for most projects, writing the code yourself is no longer sensible, except to have fun.

In the past week, just by prompting, and inspecting the code to provide guidance from time to time, I completed the following four tasks in hours instead of weeks:

1. I modified my linenoise library to support UTF-8, and created a framework for testing line editing that uses an emulated terminal able to report what is displayed in each character cell (a minimal sketch of the cell-grid idea follows this list). This is something I had always wanted to do, but it was hard to justify the work needed just to test a side project of mine. If you can just describe your idea and it materializes in code, things are very different.

2. I fixed transient failures in the Redis test suite. This is very annoying work: timing-related issues, TCP deadlock conditions, and so forth. Claude Code iterated for as long as needed to reproduce them, inspected the state of the processes to understand what was happening, and fixed the bugs.

3. Yesterday I wanted a pure C library able to run inference for BERT-like embedding models. Claude Code created it in 5 minutes: the same output as PyTorch, at nearly the same speed (15% slower), in 700 lines of code, plus a Python tool to convert the GTE-small model (a sketch of one pooling step such a library needs also follows the list).

4. In the past weeks I made changes to the Redis Streams internals. I had a design document for the work I had done. I gave it to Claude Code, and it reproduced my work in 20 minutes or less (mostly because I'm slow at checking and authorizing the commands it needs to run).
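
Task 1 hints at a neat testing trick: instead of asserting on raw escape sequences, render the editor's output into an emulated terminal and assert on what each character cell displays. The post doesn't show antirez's actual framework; what follows is only a minimal C sketch of the cell-grid idea (the `FakeTerm` name and the clamp-at-edge behavior are illustrative, and a real harness would also parse escape sequences and track UTF-8 character widths):

```c
/* Minimal sketch of an emulated-terminal test harness: render output into
 * a grid of character cells, so tests can assert on what is displayed.
 * Illustrative only, not the framework described in the post. */
#include <assert.h>
#include <string.h>

#define ROWS 4
#define COLS 80

typedef struct {
    char cells[ROWS][COLS]; /* what is visible in each character cell */
    int row, col;           /* cursor position */
} FakeTerm;

static void term_init(FakeTerm *t) {
    memset(t->cells, ' ', sizeof(t->cells));
    t->row = t->col = 0;
}

/* Feed one output character, emulating a tiny subset of terminal behavior. */
static void term_feed(FakeTerm *t, char c) {
    if (c == '\r') { t->col = 0; return; }
    if (c == '\n') { if (t->row < ROWS-1) t->row++; t->col = 0; return; }
    t->cells[t->row][t->col] = c;
    if (t->col < COLS-1) t->col++; /* clamp at the edge, for brevity */
}

static void term_feed_str(FakeTerm *t, const char *s) {
    while (*s) term_feed(t, *s++);
}

int main(void) {
    FakeTerm t;
    term_init(&t);
    term_feed_str(&t, "prompt> hello");
    /* The test can now assert on individual cells. */
    assert(t.cells[0][0] == 'p');
    assert(t.cells[0][8] == 'h');
    return 0;
}
```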
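
Task 3 is also concrete enough to illustrate. The generated library itself isn't shown in the post, but one step any BERT-style embedding library needs is turning per-token hidden states into a single sentence vector; models in the GTE family typically mean-pool the last hidden layer over non-padding tokens and then L2-normalize. Here is a minimal C sketch of that step (the function name and memory-layout assumptions are mine, not the library's):

```c
/* Mean-pool per-token hidden states into one sentence embedding, then
 * L2-normalize. A sketch of one stage of BERT-style embedding inference;
 * the actual library's interface is not shown in the post. */
#include <math.h>
#include <stddef.h>

/* hidden: n_tokens x dim, row-major. mask[i] is 1 for real tokens,
 * 0 for padding. out must hold dim floats. */
void mean_pool_normalize(const float *hidden, const int *mask,
                         size_t n_tokens, size_t dim, float *out) {
    size_t count = 0;
    for (size_t j = 0; j < dim; j++) out[j] = 0.0f;
    for (size_t i = 0; i < n_tokens; i++) {
        if (!mask[i]) continue;
        count++;
        for (size_t j = 0; j < dim; j++) out[j] += hidden[i*dim + j];
    }
    if (count == 0) return;
    float norm = 0.0f;
    for (size_t j = 0; j < dim; j++) {
        out[j] /= (float)count;
        norm += out[j] * out[j];
    }
    norm = sqrtf(norm);
    if (norm > 0.0f) for (size_t j = 0; j < dim; j++) out[j] /= norm;
}
```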

It is simply impossible not to see the reality of what is happening. Writing code is, for the most part, no longer needed. It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too). It does not matter if AI companies will not be able to get their money back and the stock market crashes. All of that is irrelevant in the long run. It does not matter if this or that CEO of some unicorn is telling you something off-putting, or absurd. Programming has changed forever, anyway.

How do I feel about all the code I wrote that was ingested by LLMs? I feel great to be part of it, because I see this as a continuation of what I have tried to do all my life: democratizing code, systems, and knowledge. LLMs are going to help us write better software, faster, and will give small teams a chance to compete with bigger companies, the same thing open source software did in the 90s.

However, this technology is far too important to be in the hands of a few companies. For now, one lab may do pre-training better, or reinforcement learning much more effectively than the others, but the open models, especially the ones produced in China, continue to compete (even if they are behind) with the frontier models of the closed labs. So far there is a sufficient democratization of AI, even if imperfect. But it is absolutely not obvious that it will stay that way forever, and the prospect of centralization scares me. At the same time, I believe neural networks, at scale, are simply able to do incredible things, and that there is not enough "magic" inside current frontier AI for the other labs and teams not to catch up (otherwise it would be very hard to explain, for instance, why OpenAI, Anthropic, and Google have been so close in their results for years now).

As a programmer, I want to write more open source than ever now. I want to improve certain repositories of mine abandoned for lack of time. I want to apply AI to my Redis workflow: improve the Vector Sets implementation, and then other data structures, like I'm doing with Streams now.

But I'm worried about the folks who will get fired. It is not clear what the dynamic at play will be: will companies try to employ more people and build more? Or will they try to cut salary costs, keeping fewer programmers who are better at prompting? And there are other sectors, I fear, where humans will become completely replaceable.

What is the social solution, then? Innovation can't be taken back, after all. I believe we should vote for governments that recognize what is happening and are willing to support those who will remain jobless. And the more people get fired, the more political pressure there will be to vote for those who will guarantee a certain degree of protection. But I also look forward to the good AI could bring: new progress in science that could help lower the suffering of the human condition, which is not always a happy one.

Anyway, back to programming. I have a single suggestion for you, my friend. Whatever you believe the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools with care, over weeks of work, not in a five-minute test where you can just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.

Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you when you coded into the night to see your project working? It was building. And now you can build more, and better, if you find your way to use AI effectively. The fun is still there, untouched.