(comments)

Original link: https://news.ycombinator.com/item?id=43452421

The Hacker News discussion concerns IBM CEO Arvind Krishna's prediction that AI will eventually write 20-30% of code, far more conservative than Anthropic CEO Dario Amodei's prediction of 90%. Commenters debated the extent and nature of AI's impact on coding. Some see AI as merely a "smart templater" or typing assistant that automates boilerplate and saves time, especially when updating existing projects. Others emphasized AI's current limitations: it requires human oversight and expertise to handle complex tasks, novel problems, and larger codebases. Several users shared impressive AI coding achievements, such as generating complex graph layout algorithms or configuring VPN setups, though they acknowledged the need for detailed prompts and occasional corrections. Some expressed skepticism about claims that AI will fully replace programmers, noting that understanding and innovation matter more than merely writing code. Others argued that the developer's role will shift, with the focus moving to innovation rather than the mechanics of coding, while AI still produces some usable code.


Original text
IBM's CEO doesn't think AI will replace programmers anytime soon (techcrunch.com)
29 points by PaulHoule 1 hour ago | hide | past | favorite | 39 comments

Here's a thought: Does it count as AI written code if you're basically just telling it to fill in a template?

My common use case is basically just writing some code in the style I use. Then I'll tell the LLM, "Do X based on Y using the rest of the methods in Z". The code I would write would look exactly the same. I just use the LLM as a shortcut to getting the code written.

Like, I'm not depending on any sort of knowledge within the model. I just use its natural ability to be a smart templater.

If using AI this way counts as AI-written code, then sure, I reckon you could easily get to 90% AI-written code in loads of applications. But in terms of developing a product or feature from plain English prompts alone? Definitely lower, and definitely babysat by someone with software development knowledge.



I call that AI-assisted programming and I think it's a very appropriate way to use this tech - it's effectively a typing assistant, saving you on time spent entering code into a computer.


This is what he says:

> He disagreed with a recent prediction from Dario Amodei, the CEO of Anthropic, that 90% of code may be written by AI in the next three to six months.

> "I think the number is going to be more like 20-30% of the code could get written by AI — not 90%"

I kind of agree with him. Definitely not in six months :)



I can kinda see it doing 90% of the code by volume, but it would be the easy 90%. And the remaining 10% will need to be done by someone who actually knows what they’re doing.

Still amazing for time saving but so far I haven’t been able to get even a SwiftUI class written to the standard I’d use it as-is.



I agree with you. The same way compiling code meant that 99% of the code was machine-generated, but you still wrote tons of code.

Lots of code is boilerplate or large switch statements (especially in OOP), or other such things. All of which AI code completion makes a breeze. The actual hard parts of code (often the various ways of connecting two disparate systems), AI doesn't really help with unless you tell it what to do and how, so you're still the ultimate decision-maker in this scenario and can't be "replaced".



> I kind of agree with him. Definitely not in six months :)

I certainly hope not. But three to six years from now? I’m not so confident.



I would be very interested to know from everyone here:

What is the most impressive thing you have managed to get an AI to code - WITHOUT having to babysit it or give it any tips, hints, or corrections that a non-coder would have been unable to provide.



So far the most impressive AI achievements for me were both by ChatGPT o1 pro:

Case one. I configured an IPsec VPN on a host machine which runs Docker containers. Everything worked from the host itself; however, containers were not able to reach the IPsec subnet. I spent quite a bit of time untangling Docker iptables rules, figuring out how iptables interacts with IPsec, and running tcpdumps everywhere. However, my skills were not enough. I probably would have resolved the issue given more time, but I decided to try ChatGPT. I wrote a very thorough question, including everything I had tried, related logs and such. Actually I had wanted to ask the question on some Linux forum, so I was already preparing the question. ChatGPT thought for a few minutes and then spat out one iptables command which just resolved the issue. I was truly impressed.

Case two. I was writing firmware for some device in C. One module was particularly complex, involving management of two RAM buffers and one external SPI buffer. I spent two weeks writing this module and then asked ChatGPT to review my code for major bugs and issues. ChatGPT was able to figure out that I used SPI to talk to an FRAM chip, understood that my command usage was subtly wrong (I sent the WREN and WRITE commands in one SPI transaction), and highlighted this issue. I tried other models, including Claude, but so far only o1 pro was able to find that issue. This was impressive because it required truly understanding the workflow of the code, along with extensive knowledge of the protocols and their usage.
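The bug described is a classic one: FRAM/EEPROM-style chips typically latch the write-enable state only when chip select is released, so WREN has to be its own SPI transaction. A minimal sketch with a mocked transfer layer, in Python for illustration (the opcode values are typical for such chips but assumed here, not taken from the commenter's firmware):

```python
# Mocked SPI layer: each spi_transfer call models ONE transaction,
# i.e. chip select asserted, bytes clocked out, chip select released.
transactions = []

def spi_transfer(data):
    transactions.append(list(data))

WREN, WRITE = 0x06, 0x02  # typical FRAM/EEPROM opcodes (assumed values)

def fram_write(addr, payload):
    # WREN must be a separate transaction: the chip latches the
    # write-enable flag only when chip select deasserts. Folding it
    # into the WRITE transfer silently leaves writes disabled.
    spi_transfer([WREN])
    spi_transfer([WRITE, (addr >> 8) & 0xFF, addr & 0xFF] + list(payload))

fram_write(0x0100, b"\xAB")
# transactions → [[0x06], [0x02, 0x01, 0x00, 0xAB]]
```

Combining both opcodes into one `spi_transfer` call is exactly the subtle misuse the model caught.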

Other than that, I don't think I was impressed by AI. Of course I'm generally impressed by its progress; it's marvellous that AI exists at all and can write some code that makes sense. But so far I haven't fully integrated AI into my workflows. I'm using AI as a Google replacement for some queries, I use AI as a code reviewer, and I'm using the Copilot plugin as a glorified autocomplete. I don't generate any complex code with it and I rarely generate any meaningful code at all.



Python scripts for managing and automatically editing photos. Anything bigger, and once the code reaches a certain point it starts actively breaking things and, in some cases, makes them unrecoverably worse. I have gotten benefit as a programmer from being able to break down projects and spot when it's going down the wrong path, but I think you need quite a bit of experience to use AI on large codebases or with types of code it has few training examples of. I've also caught it 'fixing' code by essentially silencing errors without addressing the underlying problems.


The conditions you’ve set here are very strict, but I have something that may almost fit. I’ve used Grok 3 to create a decompiler and then a compiler for a made-up scripting language (for scripts extracted from Final Fantasy VII data files). I fed it an example of what the low-level “assembly/opcodes” look like and then how I would like the higher-level scripting language to look. With minimal guiding (I was telling it what I want, but not how I want it), it implemented a working decompiler and compiler that worked exactly to my specifications, written in TypeScript. It created a custom tokenizer, a parser to a custom AST structure, and everything else that was needed. It took a few prompts to get right (and to add features I didn’t think about initially), but I found the resulting 1400+ lines of code very impressive for what a “fancy autocomplete,” as many people call it, could generate.

Here’s the initial conversation if you’re interested: https://grok.com/share/bGVnYWN5_9ce1bed4-7136-4761-b45e-0ab0...



> I fed it an example of what the low-level “assembly/opcodes” look like and then how I would like the higher-level scripting language to look. With minimal guiding (I was telling it what I want, but not how I want it), it implemented a working decompiler and compiler that worked exactly to my specifications, written in TypeScript.

I think this already falls out of OP's guidelines, which you pointed out are quite strict. They also happen to be the guidelines an AI would need to meet to "replace" competent engineers.



Built out an entire web accessibility monitoring / scanning tool. The backend/scanning system required my knowledge as a programmer but the UI was entirely vibe coded. I said "build a UI for this scanning system on Cloudflare workers and use Hono.dev", and then just described in plain English how I wanted the UI to work.

Did this over a few weeks in my free time and now it has all of the features of accessibility monitoring SaaS that I was previously paying $600/mo for.



I can take my blog or website, fire up aider, and ask it to completely reformat or restyle it or add effects. It does a fantastic job at that, and that's something I'd have to pay $20 on Fiverr to get done before because I cannot coordinate colors or style things to save my life.


This is a good example. It's really good at updating old, small projects. Whereas most humans need to build context about the project, and do so slowly.

I wonder over time how those small, infrequent updates might hamper the ability to perform the next, small infrequent update (as your code begins to resemble less and less any examples the AI might have seen and more and more a kludge of differing styles, libraries, etc.), but that's really not any different than how most projects like a personal website operate today.



Nothing that I've cared much about. Little python utility scripts like "merge the rows of these CSV files in such and such a way" are about the only thing that has worked first time without me adding more of the "how" to do it.
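That kind of one-off utility might look like the following minimal sketch, which merges the rows of two CSVs on a shared key column (the `id` column name and sample data are made up; the commenter's actual merge rule is unspecified):

```python
import csv
import io

# Hypothetical inputs: two CSV exports that share an "id" column.
people_csv = "id,name\n1,Ada\n2,Grace\n"
scores_csv = "id,score\n1,95\n2,88\n"

def merge_csv_on_key(a_text, b_text, key):
    """Merge rows of two CSV texts on a shared key column."""
    a_rows = {r[key]: r for r in csv.DictReader(io.StringIO(a_text))}
    merged = []
    for row in csv.DictReader(io.StringIO(b_text)):
        combined = dict(a_rows.get(row[key], {}))  # fields from file A
        combined.update(row)                        # fields from file B win
        merged.append(combined)
    return merged

rows = merge_csv_on_key(people_csv, scores_csv, "id")
# rows → [{'id': '1', 'name': 'Ada', 'score': '95'},
#         {'id': '2', 'name': 'Grace', 'score': '88'}]
```

Scripts at this scale, with the "what" fully specified and the "how" left to the model, are exactly the sweet spot the commenter describes.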


Make a bookmarklet to do X

Make a vbscript to toggle scroll-lock every 15 seconds ... to prevent system from auto locking

Ad hoc text processing ... like strip the HTML from this snippet (drop-down list copied from a web page DOM)



Multi-axis matplotlib charts on the first try
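A minimal sketch of the kind of chart meant here, pairing a line and a bar series on two y-axes via `twinx()` (the data and labels are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical data: temperature and rainfall over twelve months.
months = list(range(1, 13))
temp = [3, 4, 8, 13, 18, 22, 25, 24, 20, 14, 8, 4]
rain = [60, 50, 55, 45, 50, 40, 35, 40, 55, 70, 75, 65]

fig, ax_temp = plt.subplots()
ax_temp.plot(months, temp, color="tab:red")
ax_temp.set_xlabel("Month")
ax_temp.set_ylabel("Temperature (°C)", color="tab:red")

# Second y-axis sharing the same x-axis.
ax_rain = ax_temp.twinx()
ax_rain.bar(months, rain, alpha=0.3, color="tab:blue")
ax_rain.set_ylabel("Rainfall (mm)", color="tab:blue")

fig.tight_layout()
fig.savefig("chart.png")
```

Getting the axis pairing, colors, and layout right on the first try is the part that tends to impress.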


I've gotten it to write complex graph layout algorithms, including converting them to all work in the same orientation (about half of graph layout papers are written to produce vertical graphs, and about half horizontal graphs).


Oooh fascinating, out of curiosity, what are you working on?


Physics system for soap bubbles in a gamejam game


This is close to being an appeal to authority. These companies - and IBM is the classic case of this - are the last to pick up on new trends and are usually blindsided by new technologies. If AI were perfectly superhuman, IBM's CEO would still be one of the last people in the tech world to pick up on it. He'd hopefully be ahead of other industries.

Startup CTOs are the people deciding how much code is going to get written by AI. They're going to be building products in new, untested ways - mostly failing, some succeeding. Technical growth doesn't come from established majors. The entire AI boom left Google doing a cartoonish impression of being run over by a gaggle of bystanders heading to the next big thing, even while it thought it was actively researching and leading the field of AI.

As a source of news on what IBM is doing he's a leading authority, but following IBM's exploits means skating to where the puck was last year, on a good day.



It's not an "appeal to authority" to report on the opinions of tech leaders. The only way it would make sense to call this an appeal to authority is if you think TechCrunch is making an argument for a side. This just looks like ordinary reporting, however.


This man is incredibly prescient. No wonder he is CEO of IBM!


People love to throw jabs at one of the companies that happens to be a champion of patents per year, and has also been responsible for keeping the lights on for many critical projects in the Linux ecosystem since 1998.


The 21st century adage is no one gets fired for dunking on IBM


We like to shit on IBM, but they dominated many industries when they tried.

IBM pushed Linux very hard and contributed significantly to its adoption; it would be revisionist to think otherwise.

They also created quite literally legendary laptops, before deciding that they didn't want to and then moved on.

I mean, their stock is not hurting as much as we think of them as a relic of the past: https://finance.yahoo.com/quote/IBM/?guccounter=1&guce_refer...



Some will take your comment as irony, but he has a PhD in Electrical Engineering from UIUC, and I would trust him more than an MBA.

"In a leaked recording, Amazon cloud chief tells employees that most developers could stop coding soon as AI takes over" - https://www.businessinsider.com/aws-ceo-developers-stop-codi...

""If you go forward 24 months from now, or some amount of time — I can't exactly predict where it is — it's possible that most developers are not coding," said Garman, who became AWS's CEO in June.

"Coding is just kind of like the language that we talk to computers. It's not necessarily the skill in and of itself," the executive said. "The skill in and of itself is like, how do I innovate? How do I go build something that's interesting for my end users to use?"

This means the job of a software developer will change, Garman said."



That last quote hurt me so much.

If that’s my job, why do I have no influence on the product?

What’s the point of product managers?



Lol, I'm a junior PM and I have the same question - in the end, developers tend to do whatever the fuck they want anyway...


> What’s the point of product managers?

Many of us have been asking this for years. Including those of us who have been product managers.



Can you imagine the CEO of the lead cloud provider telling his Developers:

"Coding is just kind of like the language that we talk to computers. It's not necessarily the skill in and of itself"



Krishna sucks a ton, I see IBM Granite is not performing financially up to expectations.


I will admit, he seems to be a very smart person and probably right.

But my feeling about AI is that it's a scam to wrest money from stupid investors :)



If all code is written by AI or by "vibe coders" then what happens when companies are selling products they don't themselves understand? - sounds like a recipe for disaster to me.

I think it's pretty obvious that generative AI isn't replacing the need for engineering - ever.



This happens regardless, as people leave companies along with whatever they produced. You will find so many examples of code out there that is in production and works through hopes and dreams.

And you know what? If it works it works - regardless how it was produced.



Companies already sell products they don't understand all the time; it's practically in the job description of a salesperson to make shit up.


"The statements are also a bit of a reversal for Krishna, who said in 2023 that IBM planned to pause hiring on back-office functions that the company anticipated it could replace with AI tech."

I suspect his enthusiasm for AI is going to keep cooling down.



Those are different things though -- you can replace functions with AI tech without needing AI to write all the code, and that is happening in a big way right now -- Krishna was talking about reducing customer-facing roles by around 30% over 5 years in that earlier statement.

As of right now, the code assistants mostly just make existing coders more productive -- predicting that 20 or 30% of new code will be AI-generated in the near term is not unreasonable -- 90% is a stretch, as has been discussed here many times.



"CEO of a (human) consulting company says AI won't replace consultants any time soon".

I mean, he's right, but come on.






