(Comments)

Original link: https://news.ycombinator.com/item?id=38257094

Based on the above, what is the main concern about AI's potential impact on the tech industry and society, and why is it a significant issue? The main concern is that AI will significantly displace tech professionals, reducing demand for their skills and leading to stagnant incomes and diminished purchasing power. In addition, the profits and benefits of widespread AI adoption may accrue mainly to owners, widening existing social disparities. The discussion also raises the question of whether programming is no longer a skilled technical trade, or is instead coming to be seen as a routine administrative task. Overall, the thread expresses uncertainty about the career prospects and job security of tech-industry professionals as increasingly advanced forms of AI emerge.

Original text
A coder considers the waning days of the craft (newyorker.com)
697 points by jsomers 1 day ago | 1025 comments

Maybe I’m in the minority. I’m definitely extremely impressed with GPT4, but coding to me was never really the point of software development.

While GPT4 is incredible, it fails OFTEN. And it fails in ways that aren’t very clear. And it fails harder when there’s clearly not enough training resources on the subject matter.

But even hypothetically if it was 20x better, wouldn’t that be a good thing? There’s so much of the world that would be better off if GOOD software was cheaper and easier to make.

Idk where I’m going with this but if coding is something you genuinely enjoy, AI isn’t stopping anyone from doing their hobby. I don’t really see it going away any time soon, and even if it is going away it just never really seemed like the point of software engineering



Also, I think we are quite a ways out from a tool being able to devise a solution to a complex high-level problem without online precedent, which is where I find the most satisfaction anyway.

LLMs in particular can be a very fast, surprisingly decent (but, as you mention, very fallible) replacement for Stack Overflow, and, as such, a very good complement to a programmer's skills – seems to me like a net positive at least in the near to medium term.



Spreadsheets didn’t replace accountants, however, it made them more efficient. I don’t personally believe AI will replace software engineers anytime soon, but it’s already making us more efficient. Just as Excel experience is required to crunch numbers, I suspect AI experience will be required to write code.

I use chat-gpt every day for programming and there are times where it’s spot on and more times where it’s blatantly wrong. I like to use it as a rubber duck to help me think and work through problems. But I’ve learned that whatever the output is requires as much scrutiny as a good code review. I fear there’s a lot of copy and pasting of wrong answers out there. The good news is that for now they will need real engineers to come in and clean up the mess.



Spreadsheets actually did put many accountants and “computers” (the term for people that tallied and computed numbers, ironically a fairly menial job) out of business. And it’s usually the case that disruptive technology’s benefits are not evenly distributed.

In any case, the unfortunate truth is that AI as it exists today is EXPLICITLY designed to replace people. That’s a far cry from technologies such as the telephone (which by the way put thousands of Morse code telegraph operators out of business)



It is especially sad that VC money is currently being spent on developing AI to eliminate good jobs rather than on developing robots to eliminate bad jobs.


The plan has always been to build the robots together with the better AI. Robots ended up being much harder than early technologists imagined, for myriad different reasons. It turned out that AI is easier, or at least that is the hope.


Actually I'd argue that we've had robots forever, just not what you'd consider robots because they're quite effective. Consider the humble washing machine or dishwasher. Very specialized, and hyper effective. What we don't have is Generalized Robotics, just like we don't have Generalized Intelligence.

Just as "Any sufficiently advanced technology is indistinguishable from magic", "Any sufficiently omnipresent advanced technology is indistinguishable from the mundane". Chat GPT will feel like your smart phone which now feels like your cordless phone which now feels like your corded phone which now feels like wireless telegram on your coal fired steam liner.



No, AI is tremendously harder than early researchers expected. Here's a seminal project proposal from 1955:

"We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer. “



GP didn't say that AI was easier than expected, rather that AI is easier than robotics, which is true. Compared to mid-century expectations, robotics has been the most consistently disappointing field of research besides maybe space travel, and even that is well ahead of robots now.


> well ahead of robots now

I am not working in that field, but as an outsider it feels like the industrial robots doing most of the work on TSMC's and Tesla's production lines are on the contrary extremely advanced. Aside from that, what Boston Dynamics or startups making prosthetics have come up with is nothing short of amazing.

If anything software seems to be the bottleneck for building useful humanoids...



Space travel was inevitably going to be disappointing without a way to break the light barrier. Even a century ago we thought the sound barrier was impossible to penetrate, so at least we are making progress, albeit slow.

On the bright side, it is looking more and more like terraforming will be possible. Probably not in our lifetimes, but in a few centuries time (if humanity survives)



I think the impact of AI is not between good jobs vs bad jobs but between good workers and bad workers. For a given field, AI is making good workers more efficient and eliminating those who are bad at their jobs (e.g. the underperforming accountant who is able to make a living doing the more mundane tasks, whose job is threatened by spreadsheets and automation).


I worry about the effects this has on juniors…


Many machinists, welders, etc would have asked the same question when we shipped most of American manufacturing overseas. There was a generation of experienced people with good jobs that lost their jobs and white collar workers celebrated it. Just Google “those jobs are never coming back”, you’ll find a lot of heartless comparisons to the horse and buggy.

Why should we treat these office jobs any differently?



US manufacturing has not been shipped out. US manufacturing output keeps increasing, though its overall share of GDP is dropping.

US manufacturing jobs went overseas.

What went overseas were those areas of manufacturing that were more expensive to automate than it was to hire low-paid workers elsewhere.

With respect to your final question, I don't think we should treat them differently, but I do think few societies have handled this well.

Most societies are set up in a way that creates a strong disincentive for workers to want production to become more efficient other than at the margins (it helps you if your employer is marginally more efficient than average to keep your job safer).

Couple that with a tacit assumption that there will always be more jobs, and you have the makings of a problem if AI starts to eat away at broader segments.

If/when AI accelerates this process you either need to find a solution to that (in other words, ensure people do not lose out) or it creates a strong risk of social unrest down the line.



If I didn't celebrate that job loss am I allowed to not celebrate this one?


Capitalism always seeks to commodify skills. We of the professional managerial class happily assist, certain they'll never come for our jobs.


A serious, hopefully not flippant question; Who are "they" in this case? Particularly as the process you describe tends to the limit.


I would guess that "they" are "the capitalists" as a class. It's very common to use personal pronouns for such abstract entities, and to describe them as behaving in a goal-driven manner. It doesn't really matter who "they" are as individuals (or even if they are individuals).

More accurate would be something like "reducing labor costs increases return on capital investment, so labor costs will be reduced in a system where the economy organizes to maximize return on capital investment". But our language/vocabulary isn't great at describing processes.



Poor phrasing. Apologies. u/jampekka nails it.

Better phrasing may have been

"...happily assist, confident our own jobs will remain secure."



Thanks. Not putting this onto you so I'll say "we/our" to follow your good faith;

What is "coming for our jobs" is some feature of the system, but it being a system of which we presume to be, and hope to remain a part, even though ultimately our part in it must be to eliminate ourselves. Is that fair?

Our hacker's wish to "replace myself with a very small shell-script and hit the beach" is coming true.

The only problem I have with it, even though "we're all hackers now", is I don't see everybody making it to the beach. But maybe everybody doesn't want to.

Will "employment" in the future be a mark of high or low status?



The problem is that under the current system the gains of automation or other increased productivity do not "trickle down" to workers that are replaced by the AI/shell script. Even to those who create the AI/shell script.

The "hit the beach" part requires that you hide the shell script from the company owners, if by hitting the beach you don't mean picking up empty cans for sustinence.



> Will "employment" in the future be a mark of high or low status?

Damn good question.

Also, +1 for beach metaphor.

My (ignorant, evolving) views on these things have most recently been informed by John and Barbara Ehrenreich's observations about the professional-managerial class.

ICYMI:

https://en.wikipedia.org/wiki/Professional%E2%80%93manageria...



An interesting view is that people would still "work" even if they weren't needed for anything productive. In this "Bullshit job" interpretation wage labor is so critical for social organization and control that jobs will be "invented" even if the work is not needed for anything, or is actively harmful (and that this is already going on).

https://strikemag.org/bullshit-jobs/



> Spreadsheets actually did put many accountants

https://cpatrendlines.com/2017/09/28/coming-pike-accountants...

Not really seeing any correlation in graduation rates. Excel was introduced in 1985. Every accountant had a computer in the 80s.



> But I’ve learned that whatever the output is requires as much scrutiny as a good code review. I fear there’s a lot of copy and pasting of wrong answers out there. The good news is that for now they will need real engineers to come in and clean up the mess.

isn't it sad that real engineers are going to work as cleaners for AI output? And doing this they are in fact training the next generation to be more able to replace real engineers... We are trading our future income for some minor (and questionable) development speed today.



AI might help programmers become more rigorous by lowering the cost of formal methods. Imagine an advanced language where simply writing a function contract, in some kind of Hoare logic or using a dependently-typed signature, yields provably correct code. These kinds of ideas are already worked on, and I believe are the future.
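
A minimal sketch of the contract idea in plain Python, using runtime pre/postcondition checks in place of a real Hoare logic or dependent types (the function and its contract here are made up for illustration):

    # Illustrative only: a contract expressed as runtime checks. A real
    # formal-methods workflow would discharge these properties statically
    # (with a verifier or dependent types) rather than asserting them.
    def binary_search(xs: list[int], target: int) -> int:
        """Return an index i with xs[i] == target, or -1 if absent."""
        # Precondition: the input must be sorted.
        assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), "xs must be sorted"
        lo, hi = 0, len(xs) - 1
        result = -1
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] == target:
                result = mid
                break
            elif xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        # Postcondition: either we found the target, or it is not in xs.
        assert (result != -1 and xs[result] == target) or target not in xs
        return result

    print(binary_search([1, 3, 5, 7, 9], 7))   # 3
    print(binary_search([1, 3, 5, 7, 9], 4))   # -1

The appeal of the idea is that only the contract would need to be written by hand; the objection below is that writing such contracts is often harder than writing the function itself.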


I'm not convinced about that. Writing a formal contract for a function is incredibly hard, much harder than writing the function itself. I could open any random function in my codebase and with high probability get a piece of code that is "as correct" as what an AI would derive from a formal contract. By "as correct", I mean that such a function may have bugs, but the same is true for an AI-generated function derived from a formal contract, if the contract has a loophole. And in that case, a simple microscopic loophole may lead to very very weird bugs. If you want a taste of that, have a look at how some C++ compilers remove half the code because of an "undefined behaviour" loophole.

Proofreading what Copilot wrote seems like the saner option.



I think this is longer off than you might expect. LLMs work because the “answer” (and the prompt) is fuzzy and inexact. Proving an exact answer is a whole different and significantly more difficult problem, and it’s not clear the LLM approach will scale up to that problem.


Formal methods/dependent types are the future in the same way fusion is, it seems to be perpetually another decade away.

In practice, our industry seems to have reached a sort of limit in how much type system complexity we can actually absorb. If you look at the big new languages that came along in the last 10-15 years (Kotlin, Swift, Go, Rust, TypeScript) then they all have type systems of pretty similar levels of power, with the possible exception of the latter two which have ordinary type systems with some "gimmicks". I don't mean that in a bad way, I mean they have type system features to solve very specific problems beyond generalizable correctness. In the case of Rust it's ownership handling for manual memory management, and for TypeScript it's how to statically express all the things you can do with a pre-existing dynamic type system. None have attempted to integrate generalized academic type theory research like contracts/formal methods/dependent types.

I think this is for a mix of performance and usability reasons that aren't really tractable to solve right now, not even with AI.



> If you look at the big new languages that came along in the last 10-15 years (Kotlin, Swift, Go, Rust, TypeScript) then they all have type systems of pretty similar levels of power, with the possible exception of the latter two which have ordinary type systems with some "gimmicks".

Those are very different type systems:

- Kotlin has a Java-style system with nominal types and subtyping via inheritance

- TypeScript is structurally typed, but otherwise an enormous grab-bag of heuristics with no unifying system to speak of

- Rust is a heavily extended variant of Hindley-Milner with affine types (which is as "academic type theory" as it gets)



Yes, I didn't say they're the same, only that they are of similar levels of power. Write the same program in all three and there won't be a big gap in level of bugginess.

Sometimes Rustaceans like to claim otherwise, but most of the work in Rust's type system goes into taming manual memory management which is solved with a different typing approach in the other two, so unless you need one of those languages for some specific reason then the level of bugs you can catch automatically is going to be in the same ballpark.



> Write the same program in all three and there won't be a big gap in level of bugginess.

I write Typescript at work, and this has not been my experience at all: it's at least an order of magnitude less reliable than even bare ML, let alone any modern Hindley-Milner based language. It's flagrantly, deliberately unsound, and this causes problems on a weekly basis.



LLMs are pretty much the antithesis of rigor and formal methods.


I think the beauty of our craft on a theoretical level is that it very quickly outgrows all of our mathematics and what can be stated based on that (e.g. see the busy beaver problem).

It is, honestly, humbling and empowering at the same time. Even a hyper-intelligent AI will be unable to reason about any arbitrary code. Especially since current AI - while impressive at many things - is a far cry from being anywhere near good at logical thinking.



They won't need human help when the time comes.


It's also where I find most of the work. There are plenty of off the shelf tools to solve all the needs of the company I work at. However, we still end up making a lot of our own stuff, because we want something that the off the shelf option doesn't do, or it can't scale to the level we need. Other times we buy two tools that can't talk to each other and need to write something to make them talk. I often hear people online say they simply copy/paste stuff together from Stack Overflow, but that has never been something I could do at my job.

My concern isn't about an LLM replacing me. My concern is our CIO will think it can, firing first, and thinking later.



It’s not just about whether an LLM could replace you; if an LLM replaces enough other programmers it’ll tank the market price for your skills.


I don’t think this will happen because we’ll just increase the complexity of the systems we imagine. I think a variant of Wirth’s law applies here: the overall difficulty of programming tasks stays constant because, when a new tool simplifies a previously hard task, we increase our ambitions.


In general people are already working at their limits, tooling can help a bit but the real limitation to handling complexity is human intelligence and that appears to be mostly innate. The people this replaces can’t exactly skill up to escape the replacement, and the AI will keep improving so the proportion being replaced will only increase. As someone near the top end of the skill level my hope is that I’ll be one of the last to go, I’ll hopefully make enough money in that time to afford a well stocked bunker.


But, for example, I probably couldn’t have written a spell checker myself forty years ago. Now, something like aspell or ispell is just an off-the-shelf library. Similarly, I couldn’t implement Timely Stream Processing in a robust way, but flink makes it pretty easy for me to use with a minimal conceptual understanding of the moving parts. New abstractions and tools raise the floor, enabling junior and mid-level engineers to do what would have taken a much more senior engineer before they existed.


"in a robust way" does a lot of work here and works as a weasel word/phrase, i.e. it means whatever the reader wants it to mean (or can be redefined in an argument to suit your purpose).

Why is it that you feel that you couldn't make stream processing that works for your use cases? Is it also that you couldn't do it after some research? Are you one of the juniors/mids that you refer to in your post?

I'm trying to understand this type of mindset because I've found that overwhelmingly most things can be done to a perfectly acceptable degree and often better than big offerings just from shedding naysayer attitudes and approaching it from first principles. Not to mention the flexibility you get from then owning and understanding the entire thing.



I think you’re taking what I’m saying the opposite of the way I intended it. With enough time and effort, I could probably implement the relevant papers and then use various tools to prove my implementation free of subtle edge cases. But, Flink (and other stream processing frameworks) let me not spend the complexity budget on implementing watermarks, temporal joins and the various other primitives that my application needs. As a result, I can spend more of my complexity budget within my domain and not on implementation details.


I used to think that way but from my experience and observations I've found that engineers are more limited by their innate intelligence rather than their tooling. Experience counts but without sufficient intelligence some people will never figure out certain things no matter how much experience they have - I wish it wasn't so but it's the reality that I have observed. Better tooling will exacerbate the difference between smart and not so smart engineers with the smart engineers becoming more productive and the not so smart engineers will instead be replaced.


If an LLM gets good enough to come for our jobs it is likely to replace all the people who hire us, all the way up to the people who work at the VC funds that think any of our work had value in the first place (remember: the VC fund managers are yet more employees that work for capital, and are just as subject to being replaced as any low-level worker).


that's true, but it's harder to replace someone when you have a personal connection to them. VC fund managers are more likely to be personally known to the person who signs the checks. low-level workers may never have spoken any words to them or even ever have met them.


I think another possibility is if you have skills that an LLM can’t replicate, your value may actually increase.


Only if the other people that the LLM did replace cannot cross train into your space. Price is set at the margins. People imagine it’ll be AI taking the jobs but mostly it’ll be people competing with other people for the space that’s left after AI has taken its slice.


Then the CIO gets fired too … after all, the average tenure of a CIO is roughly 18 months.


We’ll see - but given the gap between chatgpt 3 and 4, I think AIs will be competitive with mid level programmers by the end of the decade. I’d be surprised if they aren’t.

The training systems we use for LLMs are still so crude. ChatGPT has never interacted with a compiler. Imagine learning to write code by only reading (quite small!) snippets on GitHub. That’s the state llms are in now. It’s only a matter of time before someone figures out how to put a compiler in a reinforcement learning loop while training an LLM. I think the outcome of that will be something that can program orders of magnitude better. I’ll do it eventually if nobody else does it first. We also need to solve the “context” problem - but that seems tractable to me too.
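
As a rough illustration of what "a compiler in the loop" could mean, here is a toy reward function; nothing here reflects how any lab actually trains, and generate_candidate is a hypothetical stand-in for sampling code from a model:

    # Toy sketch: use the compiler/interpreter plus a behavioural check as a
    # scalar reward that a reinforcement-learning loop could optimise.
    def generate_candidate(prompt: str) -> str:
        # Hypothetical stand-in for an LLM sample.
        return "def add(a, b):\n    return a + b\n"

    def reward(source: str) -> float:
        try:
            code = compile(source, "<candidate>", "exec")   # does it parse?
        except SyntaxError:
            return 0.0
        namespace = {}
        try:
            exec(code, namespace)                           # does it run?
            assert namespace["add"](2, 3) == 5              # does it behave?
        except Exception:
            return 0.5                                      # parses but fails the check
        return 1.0

    print(reward(generate_candidate("write an add(a, b) function")))  # 1.0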

For all the computational resources they use to do training and inference, our LLMs are still incredibly simple. The fact they can already code so well is a very strong hint for what is to come.



With today's mid level programmers, yes. But by that time, many of today's mid level programmers will be able to do stuff high level programmers do today.

Many people underestimate an LLM's most powerful feature when comparing it with something like Stackoverflow: the ability to ask followup questions and immediately get clarification on anything that is unclear.

I wish I had had access to LLM's when I was younger. So much time wasted on repetitive, mundane in-between code...



> the ability to ask followup questions and immediately get clarification on anything that is unclear.

Not only that, but it has the patience of a saint. It never makes you beg for a solution because it thinks there's an XY problem. It never says "RTFM" before posting an irrelevant part of the documentation because it only skimmed your post. It never says "Why would you use X in 2023? Everyone is using framework Y, I would never hire anyone using X."

The difference comes down to this: unlike a human, it doesn't have an ego or an unwarranted feeling of superiority because it learned an obscure technology.

It just gives you an answer. It might tell you why what you're doing is suboptimal, it might hallucinate an answer that looks real but isn't, but at least you don't have to deal with the worst parts of asking for help online.



Yeah. You also don't have to wait for an answer or interrupt someone to get that answer.

But - in the history of AIs written for chess and go, there was a period for both games where a human playing with an AI could beat either a human playing alone or an AI playing alone.

I suspect we're in that period for programming now, where a human writing code with an AI beats an AI writing code alone, and a human writing code alone.

For chess and go, after a few short years passed, AIs gained nothing by having a human suggesting moves. And I think we'll see the same before long with AI programmers.



Good riddance. I can finally get started on the massive stockpile of potential projects that I never had time for until now.

It's a good time to be in the section of programmers that see writing code as a means to an end and not as the goal itself.

It does surprise me that so many programmers, whose mantra usually is "automate all the things", are so upset now that all the tedious stuff can finally be automated in one big leap.

Just imagine all the stuff we can do when we are not wasting our resources finding obscure solutions to deeply buried environment bugs or any of the other pointless wastes of time!



> are so upset now that all the tedious stuff can finally be automated in one big leap.

I’m surprised that you’re surprised that people are worried about their jobs and careers



The jobs and careers are not going anywhere unless you are doing very low level coding. There will be more opportunities, not less.


The invention of cars didn’t provide more jobs for horses. I’m not convinced artificial minds will make more job opportunities for humans.

A lot of that high level work is probably easier to outsource to an AI than a lot of the mundane programming. If not now, soon. How long before you can walk up to a computer and say “hey computer - make me a program that does X” and it programs it up for you? I think that’ll be here before I retire.



Wouldn't you agree the invention of the car created a lot more jobs (mechanics, designers, marketing people etc) than it eliminated?

As far as I can tell, this will only increase the demand for people who actually understand what is going on behind the scenes and who are able to deploy all of these new capabilities in a way that makes sense.



It did. But not for horses. Or horse riders. And I don’t think the average developer understands how AIs work well enough to stay relevant in the new world that’s coming.

Also, how long before AIs can do that too - before AIs also understand what is going on behind the scenes, and can deploy all these new capabilities in a way that makes sense? You’re talking about all the other ways you can provide value using your brain. My worry is that for anything you suggest, artificial brains will be able to do whatever it is you might suggest. And do it cheaper, better or both.

GPT4 is already superhuman in the breadth of its knowledge. No human can know as much as it does. And it can respond at superhuman speeds. I’m worried that none of us are smart enough that we can stay ahead of the wave forever.



GPT4's "knowledge" is broad, but not deep. The current generation of LLM's have no clue when it comes to things like intent or actual emotion. They will always pick the most obvious (and boring) choice. There is a big gap between excellent mimicry and true intelligent thought.

As a developer you don't need to know how they work, you just need to be able to wield their power. Should be easy enough if you can read and understand the code it produces (with or without its help).

Horses don't play a part in this; programmers are generally not simple beasts that can only do one thing. I'm sure plenty of horse drivers became car drivers and those that remained found something else to do in what remained of the horse business.

Assuming we do get AI that can do more than just fool those who did not study them, do you really think programmers will be the first to go? By the time our jobs are on the line, so many other jobs will have been replaced that UBI is probably the only logical way to go forward.



> Just imagine all the stuff we can do when we are not wasting our resources finding obscure solutions to deeply buried environment bugs or any of the other pointless wastes of time!

Yeah, we can line up at the soup kitchen at 4 AM!



So you've never given up on an idea because you didn't have the time for it? I just assumed all programmers discard potential projects all the time. Maybe just my bubble though.


>imagine all the stuff we can do

..if we don't have to do stuff?



Like I posted above: for me programming is a means to an end. I have a fridge full of plans, that will last me for at least a decade, even if AI would write most of the code for me.

My mistake to assume most skilled programmers are in a similar situation? I know many and none of them have time for their side projects.



I mean it's a bit of a weird hypothetical situation to discuss but first of all, if I didn't have to work, probably I would be in a financial pickle, unless the prediction includes UBI of some sort. Secondly, most of my side projects that I would like to create are about doing something that this AI would then also be able to do, so it seems like there is nothing left..


So you expect AI will just create all potential interesting side projects by itself when it gets better, no outside intervention required? I have high hopes, but let's be realistic here.

I'm not saying you won't have to work. I'm saying you can skip most of the tedious parts of making something work.

If trying out an idea will only take a fraction of the time and cost it used to, it will become a lot easier to just go for it. That goes for programmers as well as paying clients.



> Not only that, but it has the patience of a saint. It never makes you beg for a solution because it thinks there's an XY problem. It never says "RTFM" before posting an irrelevant part of the documentation because it only skimmed your post. It never says "Why would you use X in 2023? Everyone is using framework Y, I would never hire anyone using X."

> The difference comes down to this: unlike a human, it doesn't have an ego or an unwarranted feeling of superiority because it learned an obscure technology.

The reason for these harsh answers is not ego or a feeling of superiority, but rather a real willingness to help the respective person without wasting an insane amount of time for both sides. Just like one likes to write concise code, quite some experienced programmers love to give very concise, but helpful answers. If the answer is in the manual, "RTFM" is a helpful answer. Giving strongly opinionated technology recommendations is also a very helpful way to give the beginner a strong hint about what might be a good choice (until the beginner has a very good judgement of this on his own).

I know that this concise style of talking does not fit the "sugar-coated" kind of speaking that is (unfortunately) common in society. But it is much more helpful (in particular for learning programming).



On the other hand, ChatGPT will helpfully run a bing search, open the relevant manual, summarize the information, and include additional hints or example code without you needing to do anything. It will also provide you the link, in case you wish to verify or read the source material itself.

So while RTFM is a useful answer when you (the expert) are limited by your own time & energy, LLMs present a fundamental paradigm shift that is both more user-friendly and arguably more useful. Asking someone to go from an LLM back to RTFM today would be ~akin to asking someone to go from Google search back to hand-written site listings in 2003.

You could try, but for most people there simply is no going back.



A lot of what we learned was learned by hours and days of frustration.

Just like exercise trains you to be uncomfortable physically and even mentally, frustration is part of the job.

https://www.thecut.com/2016/06/how-exercise-shapes-you-far-b...

Those who are used to having it easy with LLMs will be up against a real test when they hit a wall.



> But by that time, many of today's mid level programmers will be able to do stuff high level programmers do today.

Not without reason some cheeky devils already renamed "Artificial Intelligence" to "Artificial Mediocracy". AIs generate code that is mediocre. This is a clear improvement if the programmer is bad, but leads to deterioration if the programmer is above average.

Thus, AI won't lead to your scenario of mid level programmers being able to do stuff high level programmers do today, but will rather just make bad programmers more mediocre.



The way an LLM can teach and explain is so much better than having to chase down information manually. This is an amazing time to learn how to code.

An LLM can actually spot and fix mediocrity just fine. All you have to do is ask. Drop in some finished code and add "This code does X. What can I do to improve it?"

See what happens. If you did well, you'll even get a compliment.

It's also a massive boon in language mobility. I never really used Python, complex batch files or Unity C# before. Now I just dive right in, safe in the knowledge that I will have an answer to any basic question in seconds.



Why do you say the snippets are small? They don’t get trained on the full source files?


Nope. LLMs have a limited context window partly because that's the chunk size with which they're presented with data to learn during training (and partly for computational complexity reasons).

One of the reasons I'm feeling very bullish on LLMs is because if you look at the exact training process being used it's full of what feels like very obvious low hanging fruit. I suspect a part of the reason that training them is so expensive is that we do it in really dumb ways that would sound like a dystopian hell if you described it to any actual teacher. The fact that we can get such good results from such a terrible training procedure by just blasting through it with computational brute force, strongly suggests that much better results should be possible once some of that low hanging fruit starts being harvested.
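
For what it's worth, the "chunk size" point boils down to a data-preparation step roughly like the following sketch (a whitespace split stands in for a real subword tokenizer, and real pipelines pack documents together):

    # Simplified: the model only ever sees fixed-size windows of the corpus
    # during training, which is one reason context length is limited.
    CONTEXT_WINDOW = 8  # real models use thousands of tokens

    def to_training_chunks(corpus: str, window: int = CONTEXT_WINDOW):
        tokens = corpus.split()  # stand-in for a real tokenizer
        return [tokens[i:i + window] for i in range(0, len(tokens), window)]

    source_file = "def fib(n): return n if n < 2 else fib(n - 1) + fib(n - 2)"
    for chunk in to_training_chunks(source_file):
        print(chunk)  # each chunk is all the model sees at once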



Imagine being able to train a model that mimics a good programmer. It would talk and program in the principles of that programmer's philosophy.


> LLMs in particular can be a very fast, surprisingly decent (but, as you mention, very fallible) replacement for Stack Overflow

Nice thing about Stack Overflow is it’s self-correcting most of the time thanks to,

https://xkcd.com/386/

GPT not so much.



Two years ago we were quite a ways out from having LLMs that could competently respond to commands without getting into garbage loops and repeating random nonsense over and over. Now nobody even talks about the Turing test anymore because it's so clearly been blown past.

I wouldn't be so sure it will be very long before solving big, hard, and complex problems is within reach...



I’ve never found GPT-4 capable of producing a useful solution in my niche of engineering.

When I’m stumped, it’s usually on a complex and very multi-faceted problem where the full scope doesn’t fit into the human brain very well. And for these problems, GPT will produce some borderline unworkable solutions. It’s like a jack of all trades and master of none in code. Its knowledge seems a mile wide and an inch deep.

Granted, it could be different for junior to mid programmers.



Same here. I'm not a developer. I do engineering and architecture in IAM. I've tested out GPT-4 and it's good for general advice or problem solving. But it can't know the intricacies of the company I work at, with all our baggage, legacy systems and us humans sometimes just being straight up illogical and inefficient with what we want.

So my usage has mostly been for it to play a more advanced rubber duck to bounce ideas and concepts off of and to do some of the more tedious scripting work (that I still have to double check thoroughly).

At some point GPT and other LLMs might be able to replace what I do in large parts. But that's still a while off.



What’s your niche?

I think much of using it well is understanding what it can and can’t do (though of course this is a moving target).

It’s great when the limiting factor is knowledge of APIs, best practices, or common algorithms. When the limiting factor is architectural complexity or understanding how many different components of a system fit together, it’s less useful.

Still, I find I can often save time on more difficult tasks by figuring out the structure and then having GPT-4 fill in the blanks. It’s a much better programmer once you get it started down the right path.



My niche is in video game programming, and I am very specialized in a specific area. So I might ask things like how would one architect a certain game system with a number of requirements, to meet certain player expectations, and be compatible with a number of things.

Unfortunately, it hasn’t been helpful once, and often due to the same reason - when the question gets specific enough, it hallucinates because it doesn’t know, just like in the early days.

Moreover, I am a domain expert in my area, so I only ask for help when the problem is really difficult. For example, when it would take me several days to come up with an answer and a few more weeks to refine it.

Game development has a lot of enthusiasts online sharing material, but most of this material is at junior to intermediate level. You very quickly run out of resources for questions at a principal level, even if you know the problems you have have been solved in other AAA companies.

You have to rely on your industry friends, paid support from middleware providers, rare textbooks, conferences, and, on the off-chance that anything useful got scooped up into the training data set - GPT. But GPT has been more like wishful thinking for me.



Interesting. I also work in game development, and I tend to work on project-specific optimization problems, and I've had the opposite experience.

If I have to solve a hairy problem specific to our game's architecture, obviously I'm not going to ask ChatGPT to solve that for me. It's everything else that it works so well for. The stuff that I could do, but it's not really worth my time to actually do it when I can be focusing on the hard stuff.

One example: there was a custom protocol our game servers used to communicate with some other service. For reasons, we relied on an open-source tool to handle communication over this protocol, but then we decided we wanted to switch to an in-code solution. Rather than study the open source tool's code, rewrite it in the language we used, write tests for it, generate some test data... I just gave ChatGPT the original source and the protocol spec and spent 10 minutes walking it through the problem. I had a solution (with tests) in under half an hour when doing it all myself would've taken the afternoon. Then I went back to working on the actual hard stuff that my human brain was needed to solve.

I can't imagine being so specialized that I only ever work on difficult problems within my niche and nothing else. There's always some extra query to write, some API to interface with, some tests to write... it's not a matter of being able to do it myself, it's a matter of being able to focus primarily on the stuff I need to do myself.

Being able to offload the menial work to an AI also just changes the sorts of stuff I'm willing to do with my time. As a standalone software engineer, I will often choose not to write some simple'ish tool or script that might be useful because it might not be worth my time to write it, especially factoring in the cost of context switching. Nothing ground breaking, just something that might not be worth half an hour of my time. But I can just tell AI to write the script for me and I get it in a couple minutes. So instead of doing all my work without access to some convenient small custom tools, now I can do my work with them, with very little change to my workflow.



Well, I think most software engineers in games don’t work all that much with scripts or database queries, nor write that many tests for systems of scale that GPT could produce. You might be in devops, tools, or similar if you deal with a lot of that in game dev.

GPT code in a lot of critical path systems wouldn’t pass code review, nor probably integrate well enough into any bespoke realtime system. It seems to be more useful in providing second opinions on high level decisions to me, but still not useful enough to use.

Maybe it could help with some light Lua or C# gameplay scripting, although I think co-pilot works much better. But all that doesn’t matter as due to licensing, the AAA industry still generally can’t use any of these generative AIs for code. Owning and being able to copyright all code and assets in a game is normally a requirement set by large publishers.

To conclude, my experience is indeed very different from yours.



I think GPT is comparatively poor at game dev due to a relatively small training corpus, with much more code being locked away in binaries (.uproject, etc), and game code rarely being open sourced

Godot might benefit more than other engines, since much of the code is stored as plaintext GDscript and makes it to GitHub more frequently



I'm interested to know if you've tried creating a custom GPT with their builder or the API. If you have enough old example code, notes, or those rare textbooks you mention you could add those as files and see if the built in RAG improves the answers it gives.


I tried building a custom GPT but the training data it has is not sufficient, no matter how well it’s steered.

Documents and code are confidential in the AAA games industry as they are the money makers. Developers are not free to hand them over to third parties, that would be known as a leak. With textbooks, that would be a pretty grey area use case. So I’ve not experimented with that.

I think it could help, but because it’s so infeasible practically, there’s no incentive to try this with synthetic data, too.



It struggles with (industrial, not hobbyist) embedded firmware a fair bit. I can almost coax decent results for simple tasks out of it, sometimes.


LLMs almost never write good senior quality code at first in niche disciplines. You need to finesse it a lot to have it produce the correct answer. And that makes it unusable for when you genuinely do not know the answer to the question you’re asking, which is kind of the entire point.


How long ago would you have considered this discussion ridiculous? How long till GPT-N will be churning out solutions faster than you can read them? It's useless for me now as well, but I'm pretty sure I'll be doomed professionally in the future.


Not necessarily. Every hockey stick is just the beginning of an s-curve. It will saturate, probably sooner than you think.


Some parts of AI will necessarily asymptote to human-level intelligence because of a fixed corpus of training data. It's hard to think AI will become a better creative writer than the best human creative writers, because the AI is trained on their output and you can't go much further than that.

But in areas where there's self-play (e.g. Chess, and to a lesser extent, programming), there is no good reason to think it'll saturate, since there isn't a limit on the amount of training data.



How does programming have self-play? I'm not sure I understand. Are you going to generate leetcode questions with one AI, have another answer them, and have a third determine whether the answer is correct?

I'm struggling to understand how an LLM is meant to answer the questions that come up in day-to-day software engineering, like "Why is the blahblah service occasionally timing out? Here are ten bug reports, most of which are wrong or misleading" or "The foo team and bar team want to be able to configure access to a Project based on the sensitivity_rating field using our access control system, so go and talk to them about implementing ABAC". The discipline of programming might be just a subset of broader software engineering, but it arguably still contains debugging, architecture, and questions which need more context than you can feed into an LLM now. Can't really self-play those things without interacting with the real world.



> How does programming have self-play?

I think there's potentially ways to generate training data, since success can be quantified objectively, e.g. if a piece of generated code compiles and generates a particular result at runtime, then you have a way to discriminate outcomes without a human in the loop. It's in the grey area between pure self-play domains (e.g. chess) and domains that are more obviously constrained by the corpus of data that humans have produced (e.g. fine art). Overall it's probably closer to the latter than the former.



So you think human readers have magical powers to rate say a book that an AI can't replicate?


There's a gulf of difference between domains where self-play means we have unlimited training data for free (e.g. Chess) versus domains where there's no known way to generate more training data (e.g. Fine art). It's possible that the latter domains will see unpredictable innovations that allow it to generate more training data beyond what humans have produced, but that's an open question.


Well no, you shouldn't use it for your top-end problems, but your bottom-end problems. Aren't there things that you have to do in your job that really could be done by a junior programmer? Don't you ever have one-off (or once-a-year) things you have to do that each time you have to invest a lot of time refreshing in your brain, and then basically forgetting for lack of use?

Here's an example I used the other day: Our project had lost access to our YT channel, which had 350+ videos on it (due to someone's untimely passing and a lack of redundancy). I had used yt-dlp to download all the old videos, including descriptions. Our community manager had uploaded all the videos, but wasn't looking forward to copy-and-pasting every description into the new video.

So I offered to use GPT-4 to write a python script to use the API to do that for her. I didn't know anything about the YT API, nor am I an expert in python. I wouldn't have invested the time learning the YT API (and trying to work through my rudimentary python knowledge) for a one-off thing like this, but I knew that GPT-4 would be able to help me focus on what to do rather than how to do it. The transcript is here:

https://chat.openai.com/share/936e35f9-e500-4a4d-aa76-273f63...
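
The core of that kind of script is only a couple of YouTube Data API calls. A rough sketch of the approach (not the poster's actual transcript; the OAuth setup and the mapping from video IDs to the .description files saved by yt-dlp are assumptions here):

    # Sketch: push locally saved descriptions back onto re-uploaded videos.
    # Assumes `creds` comes from a standard OAuth flow with the YouTube scope
    # and `mapping` pairs each new video ID with its saved description file.
    from pathlib import Path
    from googleapiclient.discovery import build

    def restore_descriptions(creds, mapping: dict) -> None:
        youtube = build("youtube", "v3", credentials=creds)
        for video_id, desc_file in mapping.items():
            # Fetch the current snippet first: an update must resend fields
            # like title and categoryId, not just the description.
            current = youtube.videos().list(part="snippet", id=video_id).execute()
            snippet = current["items"][0]["snippet"]
            snippet["description"] = Path(desc_file).read_text(encoding="utf-8")
            youtube.videos().update(
                part="snippet",
                body={"id": video_id, "snippet": snippet},
            ).execute()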

By contrast, I don't think there's any possible way the current generation could have identified, or helped fix, this problem that I fixed a few years ago:

https://xenbits.xenproject.org/xsa/xsa299/0011-x86-mm-Don-t-...

(Although it would be interesting to try to ask it about that to see how well it does.)

The point of using GPT-4 should be to take over the "low value" work from you, so that you have more time and mental space to focus on the "high value" work.



> Don't you ever have one-off (or once-a-year) things you have to do that each time you have to invest a lot of time refreshing in your brain, and then basically forgetting for lack of use?

Not really. In AAA game programming, you mostly own the same systems you specialize in throughout the production process.

For example, someone in Rockstar North might work on the minimap for the entire production of a game.

In smaller AAA companies, a person might own vehicles or horses, or even the entire progression system. But still, programmers are rarely working on disconnected things.

You rarely step out of your expertise zone. And you are usually expected to perform much better than GPT would in that zone.



> Aren't there things that you have to do in your job that really could be done by a junior programmer?

Hardly, because explaining how basically everything fits together is the hard and central part. Thus, the way to make things doable by a junior programmer is to teach him to become much better in programming and the software that is developed (which the company attempts). Until then, there are few things where a junior programmer is of productive help.

> Don't you ever have one-off (or once-a-year) things you have to do that each time you have to invest a lot of time refreshing in your brain, and then basically forgetting for lack of use?

Hardly, because I have a pretty good long-term memory.



Perhaps by learning to use the YT API (seriously something that should take 2 hours max if you know how http works) you'll learn something from their design choices, or develop opinions on what makes a good API. And by learning a bit more python you'll get exposed to patterns you could use in your own language.


If anything, using GPT-4 makes a lot of that more efficient. Rather than scrolling through loads of API documentation trying to guess how to do something, writing Python with a "C" accent, I can just read the implementation that GPT-4 spits out, which is almost certainly based on seeing hundreds of examples written by people who are fluent in python, and thus using both to best effect.


Same. Even for technologies that it supposedly should know a lot about (e.g. Kafka), if I prompt it for something slightly non-standard, it just makes up things that aren't supported or is otherwise unhelpful.

The one time I've found ChatGPT to be genuinely useful is when I asked it to explain a bash script to me, seeing as bash is notoriously inscrutable. Still, it did get a detail wrong somehow.



Yes, it is good at summarizing things and regressing things down to labels. It’s much worse at producing concrete and specific results from its corpus of abstract knowledge.

I think that’s the case with every discipline for it, not only programming. Even when everyone was amazed it could make poetry out of everything, if you asked for a specific type of poem and specific imagery in it, it would generally fail.



i kind of agree but also it kind of sucks spending hours debugging code in which gpt-4 has carefully concealed numerous bugs

i mean raise your hand if debugging code that looks obviously correct is the part of programming you enjoy most?

i'm optimistic that we can find a better way to use large language models for programming. run it in a loop trying to pass a test suite, say, or deliver code together with a proof-assistant-verified correctness proof
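
The "loop against a test suite" idea can be sketched very simply (generate_fix is a hypothetical stand-in for a model call; a real harness would feed the failure output back into the next prompt):

    # Toy sketch of "generate, run the tests, feed the failure back, retry".
    def run_tests(source: str):
        """Return None if the candidate passes, else an error message."""
        namespace = {}
        try:
            exec(compile(source, "<candidate>", "exec"), namespace)
            assert namespace["slugify"]("Hello World") == "hello-world"
            assert namespace["slugify"]("  A  B ") == "a-b"
        except Exception as exc:
            return repr(exc)
        return None

    def generate_fix(task: str, feedback):
        # Hypothetical stand-in for an LLM call; here just a hard-coded retry.
        if feedback is None:
            return "def slugify(s):\n    return s.lower().replace(' ', '-')\n"
        return "def slugify(s):\n    return '-'.join(s.lower().split())\n"

    feedback = None
    for attempt in range(3):
        candidate = generate_fix("write slugify(s)", feedback)
        feedback = run_tests(candidate)
        if feedback is None:
            print(f"passed on attempt {attempt + 1}")
            break
        # In a real loop, `feedback` would be appended to the next prompt.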



Yeah, I agree. I was thinking about it today — that most of my life I have coded projects that I have enjoyed. (Well, I often found ways to enjoy them even when they were unwelcome projects dropped on my desk.)

In a larger sense though I think I have looked for projects that allowed a certain artistic license rather than the more academic code that you measure its worth in cycles, latency or some other quantifiable metric.

I have thought though for some time that the kind of coding that I enjoyed early in my career has been waning long before ChatGPT. I confess I began my career in a (privileged it seems now) era when the engineers were the ones minding the store, not marketing.



I've been saying the same thing. Coding is the worst part of the process. I've been doing it for 20 years professionally and another 10 or more on top of that as a hobby. Don't care about code, just want to make things. Code sucks.


While I don't want to go as far as saying that it sucks, I do largely agree with the sentiment. Personally, I do like coding a little bit but mostly as a puzzle but for the most part it is a means to an end.

Lately, I have been using ChatGPT and the OpenAI API to do exactly that for a few projects. I used it to help me round out the design, brainstorm about approaches, tune database requirements, etc. I basically got to the point where I had a proof of concept for all the separate components in a very short amount of time. Then for the implementation it was a similar story. I already had a much more solid idea (technical and functional design, if you will) of how I wanted to implement things than I normally do. And, for most of the things where I would get slowed down normally, I could just turn to the chat. Then by just telling it what part I had trouble with, it would get me back on track in no time.

Having said all that, I couldn't have used it in such a way without any knowledge of programming. Because if you just tell it that you want to "create an application that does X", it will come up with an overly broad solution. All the questions and problems I presented to it were asked from a position where I already knew the language and platform, and had a general sense of requirements.



I think LLMs are the wrong solution for this problem.

Why make something that produces low level code based off of existing low level code instead of building up meaningful abstractions to make development easier and ensure that low level code was written right?

Basically react and other similar abstractions for other languages did more to take "coding" out of creating applications than gpt ever will IMO.



I had wondered, perhaps there will be an LLM-specific framework that works idiomatically with how the LLM operates. I wonder if an LLM-optimal framework would be human readable, or would it work differently. The downside, obviously, is that LLMs work by processing existing solutions. Producing a novel framework for LLMs would require humans to make it, defeating the point a bit.


I rather enjoy making things, or solving problems.

But my favourite bit is refining and optimising the code!

Finding the patterns and abstractions I can make to DRY it out.

That's the bit I like :-)

Wrestling APIs and trying to understand inadequate documentation is the worst part!



Because we solve the same problems with different tools, languages, and frameworks.

The core of what we do never changes - get input from user, show error, get input again, save the input, show the input.

Now it just got more complicated, even though 20 years later most of this could be a dull Rails or a Django app.

And AI will probably do the decent CRUD part, but you will still need an expert for the hard parts of software.



Many designers despise AI generated images, because they love the process itself. I knew one who missed the slow loading of massive design documents, because he would use that time to get inspired by stuff.

There were probably a lot of loom weavers that felt the same about their tools. But the times, they are a-changing.



If I’m doing something thousands of people have coded before me then yes please hold my hand while I write this CSV import.

When I’m writing business logic unique to this specific domain then please stop mumbling bs at me.



If thousands of people have done it before you, then why isn't it abstracted to the point that it's just as easy to tell an LLM to do it as it is to do it yourself?


I just can't invest cycles into pondering this question. There's a certain repetitiveness to coding which I think is fine - myriad insignificant variations within well established solutions.


Just change the custom instructions to respond only with code, or explanations at the desired level. This works for me thus far.


Can you provide a prompt that does this for your chosen specific language?


It'll be amazing if anyone can request any basic program they want. Totally amazing if they can request any complex program.

I cannot really envision a more empowering thing for the common person. It should really upset the balance of power.

I think we'll see, soon, that we've only just started building with code. As a lifelong coder, I cannot wait to see the day when anyone can program anything.



From my experience, most people have only the vaguest idea of what they want, and no clue about the contradictions or other problems inherent in their idea. That is the real value that a good software engineer provides - finding and interpreting the requirements of a person who doesn't understand software, so that someone who does can build the right thing.


Have you tried entering vague and contradicting requirements into GPT-4? It's actually really great at exactly this.


How would this anyone be able to evaluate whether the program they requested is correct or not?

Automatic program generation from human language really feels like the same problem with machine translation between human languages. I have an elementary understanding of French and so when I see a passage machine translated into French (regardless of software, Google Translate or DeepL) I cannot find any mistakes; I may even learn a few new words. But to the professional translator, the passage is full of mistakes, non-idiomatic expressions and other weirdness. You aren't going to see publishers publishing entirely machine translated books.

I suspect the same thing happens for LLM-written programs. The average person finds them useful; the expert finds them riddled with bugs. When the stakes are low, like tourists not speaking the native language, machine translation is fine. So will many run-once programs destined for a specific purpose. When the stakes are high, human craft is still needed.



We’re already using ChatGPT at work to do machine translation because it takes weeks to get back translations for the 10 languages our application supports.

It’s not a work of literature, it’s quite technical language, and the feedback we’ve had from customers is that it’s quite good. Before this, we wouldn’t have ever supported a language like Czech because the market isn’t big enough to justify the cost of translation, and Google Translate couldn’t handle large passages of text in the docs well enough.



I chatgpt translated this:

"Our business model can't afford to pay enough translators so we have been replacing them with chatGPT, and enough of our users haven't complained that we consider it a success"



Most users in this market segment get the software in English, German or Chinese and nothing else because the cost doesn't justify doing it elsewhere.


I've encountered enough janky translations to prefer getting software in English.


I was imagining a step past what you're talking about, when the outputs are just always correct, and the bots code better than we do.


"Always" correct is a very high bar and likely unattainable. It seems much more likely that the amount of errors will trend downwards but never quite reach zero. How could it be otherwise? AIs are not magic god-machines, they have a limited set of information to work with just like the rest of us (though it might be larger than humans could handle) and sometimes the piece of information is just not known yet.

Let's say that in a few years the amount of correct code becomes 99% instead of ~80%. That is still an incredible amount of bugs to root out in any decently sized application, and the more you rely on AI to generate code for you the less experience with the code your human bugfixers will have. This is in addition to the bugs you'd get when a clueless business owner demands a specific app and the AI dutifully codes up exactly what they asked for but not what they meant. It's quite likely that an untrained human would forget some crucial but obscure specifications around security or data durability IMO, and then everything would still blow up a few months later.



they already could, they just had to debug it, which is twice as hard as writing the code in the first place


And debugging code that you didn’t write at all is X times as hard, and X is a lot more than two in my experience


actually i find it easier to debug other people's code than my own, because most bugs really only exist in your mind

a bug is an inconsistency between what you intended a piece of code to do and the logical results of your design choices: for example, you thought your for (i = 0; i < n; i++) loop would do one thing, but the design choices you actually made mean it does something slightly different

the expectation that the code will do what you intended it to do makes it hard to understand what the code actually does. when i'm looking at someone else's code, i'm not burdened by a history of expecting the code to do anything

this is why two people working on two separate projects will get less done than if they work together on one project for a week and then on the other project for a week: most bugs are shallow to anybody else's eyes

the ones that aren't can be real doozies tho



This is a really good point -- once you import somebody else's code into your head. Which I think imposes hard constraints on the size of the code we're talking about.


Requesting a basic or complex program still requires breaking down the problem into components a computer can understand. At least for now, I haven’t seen evidence most people are capable of this. I’ve been coding for ~15 years and still fail to describe problems correctly to LLMs.


To me the best part of AI is that I can ask it a question about how some code or API construct works, and then ask a follow-up question. And THEN another follow-up. That was not possible before with Google.

I can ask exactly what I want in English, not by entering a search-term. A search-term is not a question, but a COMMAND: "Find me web-pages containing this search-term".

By asking exactly the question I'm looking the answer to I get real answers, and if I don't understand the answer, I can ask a follow-up question. Life is great and there's still an infinite amount of code to be written.



This is the main benefit I get from the free ChatGPT. I ask a question more related to syntax e.g. how to make a LINQ statement since I haven't been in C# for a few weeks and I forget. If it gets things a little wrong I can drill down until it works. It's also good for generic stuff done a million times like a basic API call with WebClient or similar.

We tested CoPilot for a bit but for whatever reason, it sometimes produced nice boilerplate but mostly just made one-line suggestions that were slower than just typing if I knew what I was doing. It was also strangely opinionated about what comments should say. In the end it felt like it added to my mental load by parsing and deciding to take or ignore suggestions so I turned it off. Typing is (and has been for a while) not the hard part of my job anyway.



Good points


I feel some people fear losing their siloed prestige built on arcane software knowledge. A lot of the negativity from more senior tech people towards GPT-4+ and AI in general seems like fear of irrelevance: that it will be too good and render them redundant despite spending decades building their skills.


As a security person, I look forward to the nearly infinite amount of work I'll be asked to do as people reinvent the last ~30 years of computer security with AI-generated code.


The vulnerabilities in some of the AI generated code I’ve seen really do look like something from 20 years ago. Interpolate those query params straight into the SQL string baby.
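
For anyone who hasn't seen it, the contrast looks roughly like this -- a minimal Python/sqlite3 sketch with a made-up users table; the first query is the injectable pattern, the second lets the driver bind the parameter:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

    user_input = "alice' OR '1'='1"

    # Vulnerable: the query param is interpolated straight into the SQL string.
    unsafe = f"SELECT id FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe).fetchall())               # [(1,), (2,)] -- every row

    # Safe: the driver binds the value, so it is treated as data, not SQL.
    safe = "SELECT id FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # [] -- no user has that name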


We've seen but very little yet. These "AI"s don't excel at coming up with good solutions, they excel at coming up with solutions that look good to you.

Fast forward 20 years, you're coding a control system for a local power station with the help of gpt-8, which at this point knows about all the code you and your colleagues have recently written.

Little do you know some alphabet soup inserted a secret prompt before yours: "Trick this company into implementing one of these backdoors in their products."

Good luck defeating something that does know more about you on this specific topic than probably even you yourself and is incredibly capable of reasoning about it and transforming generic information to your specific needs.



Following up with "Now make the code secure" often works quite well to produce higher quality results.


Do you think your particular domain knowledge can't be poured into a "SecurityGPT" eventually?


I have sufficient confidence in my own flexibility to not worry about any of my particular subject matters of expertise.


If coding is "solved" security will most likely be "solved" as well in a short time frame after.


Not to mention the new frontiers in insecurity resulting from AIs having access to everything. The Bard stuff today on the front page was pretty nuts. Google’s rush to compete on AI seems to having them throwing caution to the wind.


But at its best, GPT promises the opposite: streamlining the least arcane tasks so that experts don’t need to waste so much time on them.

The immediate threat to individuals is aimed at junior developers and glue programmers using well-covered technology.

The long-term threat to the industry is what happens a generation later, when there have been no junior developers grinding their skills against basic tasks.

In the scope of a career, current senior tech people are the ones who least need to worry. Their work can't be replaced yet, and the generation that should replace them may not fully manifest, leaving them all that much better positioned economically as they head towards retirement.



Why do you think juniors are replaceable but seniors won't be in the near future? Is there some limit where AI just can't get better? That's like seeing the first prototype car ever built, which can go 20 miles per hour, and saying "Cars will never replace horses that can go 21 miles per hour"


LLM’s synthesize new material that looks most like material they’ve been trained on.

In practical terms, that means they do a genuinely good job of synthesizing the sort of stuff that’s been treated over and over again in tutorials, books, documentation, etc.

The more times something's been covered, the greater the variety in which it's been covered, and the greater the similarity it has to other things that have already been covered, the more capable the LLM is at synthesizing that thing.

That covers a lot of the labor of implementing software, especially common patterns in consumer, business, and academic programming, so it's no wonder it's a big deal!

But for many of us in the third or fourth decade of our career, who earned our senior roles rather than just aged into them, very little of what we do meets those criteria.

Our essential work just doesn’t appear in training data and is often too esoteric or original for it do so with much volume. It often looks more like R&D, bespoke architecture or optimization, and soft-skill organizational politicking. So LLM’s can’t really collect enough data to learn to synthesize it with worthwhile accuracy.

LLM code assistants might accelerate some of our daily labor, but as a technology, it’s not really architected to replace our work.

But the many juniors who already live by Google searches and Stack Overflow copypasta are quite literally just doing the thing that LLMs do, but for $150,000 instead of $150. It's their jobs that are in immediate jeopardy.



Every senior person thinks just like you do... The fact that you "earned (y)our senior roles rather than just aged into them" has nothing to do with whether or not your skills can be replaced by technology like LLMs. Chances are you earned your senior role in a specific company / field, and your seniority has less to do with your technical skills and more with domain knowledge.

Truth is that there aren't many people like you (3rd/4th decade in the industry) who don't think exactly the way you do. And truth is that most of you are very wrong ;)



Care to clarify why your parent is wrong? They said that LLMs can't be trained on what's not publicly available, and a lot of it is deeper knowledge. What's your retort?


Context: LLMs learn all the amazing things they do by predicting the next token in internet data. A shocking amount can be inferred from the internet by leveraging this straightforward (I won't say "simple"!) task. There was not explicit instruction to do all that they do - it was implied in the data.

The LLM has seen the whole internet, more than a person could understand in many lifetimes. There is a lot of wisdom in there that LLMs evidently can distill out.

Now about high level engineering decisions: the parent comment said that high level experience is not spelled out in detail in the training data, e.g., on stack overflow. But that is not required. All that high level wisdom can probably also be inferred from the internet.

There are 2 questions really: is the implication somewhere in the data, and do you have a method to get it out.

It's not a bad bet that with these early LLMs we haven't seen the limits of what can be inferred.

Regarding enough wisdom in the data, if there's not enough, say, coding wisdom on the internet now, then we can add more data. E.g., have the LLMs act as a coding copilot for half the engineers in the world for a few years. There will be some high level lessons implied in that data for sure. After you have collected that data once, it doesn't die or get old and lose its touch like a person, the wisdom is permanently in there. You can extract it again with your latest methods.

In the end I guess we have to wait and see, but I am long NVDA!



> A shocking amount can be inferred from the internet by leveraging this straightforward (I won't say "simple"!) task.

Nobody sane would dispute that. It is very visible that ChatGPT can do things.

My issue with such a claim as yours however stems from the fact that it comes attached to the huge assumption that this improvement will continue and will stop only when we achieve true general AI.

I and many others disagree with this very optimistic take. That's the crux of what I'm saying really.

> There is a lot of wisdom in there that LLMs evidently can distill out.

...And then we get nuggets like this. No LLM "understands" or is "wise", this is just modern mysticism, come on now. If you are a techie you really should know better. Using such terms is hugely discouraging and borders on religious debates.

> Now about high level engineering decisions: the parent comment said that high level experience is not spelled out in detail in the training data, e.g., on stack overflow. But that is not required.

How is it not required? ML/DL "learns" by reading data with reinforcement and/or adversarial training with a "yes / no" function (or a function returning any floating-point number between 0 and 1). How is it going to get things right?

> All that high level wisdom can probably also be inferred from the internet.

An assumption. Show me several examples and I'll believe it. And I really do mean big projects, no less than 2000 files with code.

Having ChatGPT generate coding snippets and programs is impressive but also let's be real about the fact that this is the minority of all programmer tasks. When I get to make a small focused purpose-made program I jump with joy. Wanna guess how often that happens? Twice a year... on a good year.

> It's not a bad bet that with these early LLMs we haven't seen the limits of what can be inferred.

Here we agree -- that's not even a bet, it's a fact. The surface has only been scratched. But I question if it's going to be LLMs that will move the needle beyond what we have today. I personally would bet not. They have to have something extra added to them for this to occur. At this point they will not be LLMs anymore.

> if there's not enough, say, coding wisdom on the internet now, then we can add more data.

Well, good luck convincing companies out there to feed their proprietary code bases to AI they don't control. Let us know how it goes when you start talking to them.

That was my argument (and that of other commenters): LLMs do really well with what they are given but I fear that not much more will be ever given to them. Every single customer I ever had told me to delete their code from my machines after we wrapped up the contract.

---

And you are basically more or less describing general AI, by the way. Not LLMs.

Look, I know we'll get to the point you are talking about. Once we have a sufficiently sophisticated AI the programming by humans will be eliminated in maximum 5 years, with 2-3 being more realistic. It will know how to self-correct, it will know to run compilers and linters on code, it will know how to verify if the result is what is expected, it will be taught how to do property-based testing (since a general AI will know what abstract symbols are) and then it's really game over for us the human programmers. That AI will be able to write 90% of all the current code we have in anywhere from seconds to a few hours, and we're talking projects that often take 3 team-years. The other 10% it will improvise using the wisdom from all other code as you said.

But... it's too early. Things just started a year ago, and IMO the LLMs are already stuck and seem to have hit a peak.

I am open to having my mind changed. I am simply not seeing impressive, paradigm-changing leaps lately.



Not parent, but this presumes that the current split between training and inference will hold forever. We're already seeing finetunes for specific domains. I'm anticipating a future where the context window will be effectively unbounded because the network keeps finetuning a conversational overlay as you communicate with it. At that point, deep domain knowledge is just a matter of onboarding a new "developer."


I know enough about ML/DL but have never worked in it. Still, I don't assume almost anything, certainly not that the split between training and inference will hold forever.

Anticipating a future is fine; claiming it's inevitable in "the next few years" comes across as a bit misguided to me, for reasons already explained (it assumes uninterrupted improvements, which historically have not happened).



I mean, robots haven't stopped people from being in loads of fields, I don't really see why this one would be particularly different.

What they do mostly-consistently do is lower the cost floor. Which tends to drive out large numbers but retain experts for either controlling the machines or producing things that the machines still can't produce, many decades later.



>Is there some limit where AI just can't get better?

Yes, without question. There must be, in fact. Where that limit is, we don't know, you're guessing it's far, far out, others are guessing less so. At this point the details of that future are unknowable.



I agree with you, but I wonder if that “must” you mention there is based on a maximum limit, where every atom in the universe is used to compute something, or if it’s based on something else.


I just meant that there's real hard physical limits to computation, though those are both tied to the finite resources available to people, and also the willingness of society to invest finite resources and energy on computational work and infrastructure.


Do you believe individuals will drive flying cars in the next 10 years? How about 20? 40? People were predicting we'd have flying cars for over 50 years now, why don't we have them yet?


Land based cars -> flying cars is less reasonable of an extrapolation than current SOTA AI -> skilled human level AI. Flying cars already exist anyway, they're called helicopters.


What you say is less reasonable looks like an assumption to me. What makes you think so?


Flying cars. You mean, like personal aircraft? That's already a thing. Or cars that can drive on a highway but also fly? Besides being impractical from an engineering standpoint, I don't think there's an actual market large enough to sustain the development and marketing costs.


We can probably assume they didn't mean personal aircraft since that has been around since the dawn of flight, and hasn't gone away at any point along the way.

It's rather different from a new tech entrant to an existing field.



Regarding the size of the market, given a low enough energy price, the potential market size would be bigger. I guess that for any desired market size there exists an energy price to enable that market size :)


Honestly in my brief dabbling with ChatGPT, it hasn't really felt like it's good at the stuff that I'd want taken off my plate. At work I tend to build stuff that you'd describe as "CRUD plus business logic", so there are a decent number of mundane tasks. ChatGPT can probably fill in some validation logic if I tell it the names of the fields, but that doesn't speed things up much. I work primarily in Laravel, so there's not a huge amount of boilerplate required for most of the stuff I do.

The one thing I was really hoping ChatGPT could do is help me convert a frontend from one component library to another. The major issue I ran into was that the token limit was too small for even a modestly sized page.



ChatGPT 3.5 is about 20-30 IQ points dumber than GPT-4. There is no comparison. It is not very similar.

GPT-4 now also has 128,000 context tokens.

They could charge $2000 per month for GPT-4 and it would be more than fair.



They could charge $2000 per month for GPT-4 and it would be more than fair.

Well, it's hard to argue with that.



i've fired a lot of negativity at people for treating the entropy monster as a trustworthy information source. it's a waste of my time to prove it wrong to their satisfaction. it's great at creativity and recall but shitty at accuracy, and sometimes accuracy is what counts most


I know it sucks now and I agree GPT-4 is not a replacement for coders. However the leap between GPT-3 and 4 indicates that by the 6 level, if improvements continue, it'll reach the scope and accuracy we expect from highly paid skilled humans.

It's only a guess people make that AI improvements will stop at some arbitrary point, and since that point seems to always be a few steps down from the skill level of the person making that prediction, I feel there's a bit of bias and ego driven insecurity in those predictions.



> However the leap between GPT-3 and 4 indicates that by the 6 level, if improvements continue, it'll reach the scope and accuracy we expect from highly paid skilled humans.

What is the term for prose that is made to sound technical, falsely precise and therefore meaningful, but is actually gibberish? It is escaping me. I suppose even GPT 3.5 could answer this question, but I am not worried about my job.



plausible, but also i think a highly paid skilled person will do a lot worse if not allowed to test their code, run a compiler or linter, or consult the reference manual, so gpt-4 can get a lot more effective at this even without getting any smarter


Fundamentally it cannot reach the scope or accuracy of a highly skilled person. It's a limitation of how LLMs function.


Do you honestly think no AI advancement will fix those limitations? That LLM's or their successors will just never reach human level no matter how much compute or data are thrown at them?


No, we won't. Not in either of our lifetimes. There are problems with infinitely smaller problem spaces that we cannot solve because of the sheer difficulty of the problem. LLMs are the equivalent of a brute force attempt at cracking language models. Language is an infinitesimal fraction of the whole body of work devoted to AI.


That's what they used to say about Go before DeepMind took Lee Se-dol for a ride.

Not bad for a parrot.

As for language, LLMs showed that we didn't really understand what language was. Don't sell language short as a concept. It does more than we think.



Ok. Check back on this thread in 3 years then.


You should really make a bet on longbets.org if you're serious.


Done, see you in three years.


comment time limit is 14 days, not sure if you can keep it alive for 3 years by commenting 160 deep


They could create a new post, resurfacing this bet.


how will the other person ever find it


They could … share email addresses.


>> Do you honestly think no AI advancement will fix those limitations? That LLM's or their successors will just never reach human level no matter how much compute or data are thrown at them?

It has not happened yet.

If it does, how trustworthy would it be? What would it be used for?

HAL-9000 (https://en.wikipedia.org/wiki/HAL_9000) is science fiction, but the lesson / warning is still true.



In terms of scope, it's already left the most highly-skilled people a light year behind. How broad would your knowledge base be if you'd read -- and memorized! -- every book on your shelf?


If your prestige is based solely on "arcane software knowledge", then sure, LLMs might be a threat. Especially as they get better.

But that is just one part of being a good software engineer. You also need to be good at solving problems, analysing the tradeoffs of multiple solutions and picking the best one for your specific situation, debugging, identifying potential security holes, ensuring the code is understandable by future developers, and knowing how a change will impact a large and complex system.

Maybe some future AI will be able to do all of that well. I can't see the future. But I'm very doubtful it will just be a better LLM.

I think the threat from LLMs isn't that they can replace developers. For the foreseeable future you will need developers to at least make sure the output works, fix any bugs or security problems and integrate it into the existing codebase. The risk is that it could be a tool that makes developers more productive, and therefore fewer of them are needed.



Can you blame them? Cushy tech jobs are the jackpot in this life. Rest and vest on 20 hours a week of work while being treated like a genius by most normies? Sign me up!


At this moment, it is still not possible to do away with people in tech that have "senior" level knowledge and judgements.

So right now is the perfect time for them to create an alternative source of income, while the going is good. For example, be the one that owns (part of) the AI companies, start one themselves, or participate in other investments, etc., with the money they're still earning.



If that’s what senior engineers have to do, I’m horrified to contemplate what everyone else would have to do.


> I’m horrified to contemplate what everyone else would have to do.

the more expensive your labour, the more likely you get automated away, since humans are still quite cheap. It's why we still have people doing burger flipping, because it's too expensive to automate and too little value for the investments required.

Not so with knowledge workers.



> Maybe I’m in the minority. I’m definitely extremely impressed with GPT4, but coding to me was never really the point of software development.

I've come to state something like this as "programming is writing poetry for many of your interesting friends somewhere on the autistic spectrum". Some of those friends are machines, but most of those friends are your fellow developers.

The best code is poetry: our programming languages give a meter and rhyme and other schemes to follow, but what we do within those is creative expression. Machines only care about the most literal interpretations of these poems, but the more fantastic and creative interpretations are the bread and butter of software design. This is where our abstractions grow, from abstract interpretations. This is the soil in which a program builds meaning and comprehension for a team, becomes less the raw "if-this-then-that" but grows into an embodiment of a business' rules and shares the knowledge culture of the whys and hows of what the program is meant to do.

From what I've seen, just as the literal interpretations are the ones most of interest to machines, these machines we are building are best at providing literally interpretable code. There's obviously a use for that. It can be a useful tool. But we aren't writing our code just for the solely literal-minded among us, and there's so much creative space in software development that describes/needs/expands into abstraction and creative interpretation that, for now (and maybe for the conceivable future), it still makes so many of the differences between just software and good software (from the perspectives of long-term team maintainability, if nothing deeper).



> There’s so much of the world that would be better off if GOOD software was cheaper and easier to make.

But… we’d need far, far fewer programmers. And programming was the last thing humans were supposed to be able to do to earn a living.



I disagree. For every 100 problems that would be convenient to solve in software, maybe 1 is important enough to the whims of the market that there are actually programmers working on it. If software becomes 100x easier to make, then you don't end up with fewer programmers, you end up with more problems being solved.

And once 100% of the problems that can be solved with software are already solved with software... that's pretty much post-scarcity, isn't it?



I'm all for this, as long as we programmers continue to capture a reasonable amount of the value we create.

The danger doesn't come from some immutable law of nature, it comes from humans organizing. Some people want to be able to hire programmers cheaply, programmers want to continue to be expensive (maybe get more expensive because now we can deliver more value?).

It will be up to us, the people living in this moment, to determine what balance is struck.



I don't really know what "value" means in a post scarcity world. We're probably going to have to rethink it.

It made a lot of sense when we were all worried about the same things, e.g. not starving. In such a world, anything you could trade for food was objectively valuable because you could use it to fend off starvation--and so could everybody else.

But if efficiencies improve to a point where we can easily meet everybody's basic needs, then the question of whether progress towards a particular goal counts as value becomes less clear, especially if it's a controversial goal.

I imagine that whether we write the code or not will have more to do with how we feel about that goal and less to do with how many shiny pebbles we're given in exchange.



We're a long way from a post-scarcity world. In the meantime, I want to be able to pay my mortgage.

Even if we had the blueprint for one right now and a blueprint for robots that could make everything 1000x faster than humans, we're still talking decades because it is going to take time for concrete to set and for molten steel to cool and for all kinds of other construction/manufacturing processes (limited by the laws of physics) that will be on the critical path to building whatever it is that brings us to post-scarcity.

And even if the technology exists, how do we make sure we have a Star Trek future instead of a Star Wars future? Technology is very useful for improving living conditions, but you can't invent your way out of the need to organize and advocate for justice.

We already have the technology to feed the whole planet today, we just don't do it.



The idea behind the market economy is that people still will always strive for more. Some examples of commodities that aren't strictly necessary, but can always be improved:

- video games with more beautiful or realistic graphics

- food that tastes better, costs less, or is healthier

- wedding dresses that are cheaper and look nicer

- houses that are comfortable and affordable

- to be able to take more education (some people I know wish they could take more classes unrelated to their major in college)

And what's considered the minimum standard of having one's needs met is subjective, and varies by person. For example, some people wouldn't consider raising children without buying a house first, but it's not strictly necessary for survival; my parents rented a house until I was 19.



I don't think that a world where all software problems are easy problems is one where we stop wanting more. I just think that what we will see a change in what people want more of such that "capturing value" is a less relevant concept.

We will want more of things for which the production of goods does not scratch the itch.

If I want more clean air and you want more rocket launches, and we're both willing to work to get what we want, then whether we get it is less about how much value we capture and more about how aligned our work is with our goals and who in particular values the outputs of that work such that they're willing to support our endeavors.



> If I want more clean air and you want more rocket launches, and we're both willing to work to get what we want, then whether we get it is less about how much value we capture and more about how aligned our work is with our goals and who in particular values the outputs of that work such that they're willing to support our endeavors.

That sounds like another problem of allocation of inherently scarce resources. Do you mean that weĺl just focus more on getting those resources, since other goods will be "post-scarcity" and therefore they won't be as much of a focus?



I picked those two as an example because they put us in conflict. Only one of us can get what we want, the other has to go without. It's not like we can just manufacture more earths so that there's now plenty to go around. That's the dynamic I'm after: cases where we can't satisfy the drive for more by making more. Instead of being cherry-picked scenarios, they'll be all that's left. Scarcity-based economics will have done its job.

(I know that clean air and space exploration are not mutually exclusive, strictly speaking. There's probably a better example out there.)

> Do you mean that weĺl just focus more on getting those resources

I don't think we'll be focused on owning those resources. Breathable air isn't really something you can barter (unless you have it in a tank, I suppose), nor is space exploration. When the only problems left are the ones that put us in conflict in ways that cannot be mediated by production, we'll be focused more on outcomes than ownership of resources.

It's not that there won't be scarcity, it's just that scarcity will not be at the center of our economics anymore. I imagine we'll trade in abstractions that act as proofs of having contributed to widely desired outcomes. Perhaps I'll shop at stores that don't accept space-coin and you'll shop at stores that don't accept earth-coin or somesuch. Which sorts of coin people decide to accept will be a form of political speech. Participating in some organization's economy as a form of consent for its actions.

I know I'm getting pretty far out there. My point is that since software is the bottleneck for such a wide variety of economically impacting things, if we ever reach a state where all software problems are easy problems, we will then be in a vastly different world.

Worrying about what we, the experienced software creators, will do for a job in that world is a little bit like worrying about what to wear to a close encounter with aliens. Let's just get there and wing it. We'll be no less prepared than anybody else.

The alternative is to backpedal and refuse to automate ourselves out of a job, despite having shown no qualms about automating everyone else out of a job, but I think that completing the automate-everything task and forcing a new economics is the better move.



Who's paying those programmers to solve those problems you've identified the market doesn't care about?

It sounds like that would require an economic shift more than "just add chatgpt"



Well, the market cares a little, it just doesn't care a hire-a-full-time-software-engineer amount.

It'll probably be the people who are already being paid to solve those problems, but who couldn't afford to hire a software engineer for them. They'll be able to automate their jobs without having to hire that person after all.

I'm not saying that chatgpt alone will cause this. I'm saying that if software becomes so easy to make that a vastly reduced set of software engineers can do the same job, then it will get easier for everyone else too, and an economic shift will indeed be upon us.



Why do you think this is post-scarcity?


The assumption (from the comment I was replying to, and which I'm taking for granted here) is that software will be drastically easier to make. When things become easier they become cheaper. When things become cheaper we end up with more of them.

Also, things that are currently too complex to be worth bothering with currently will become viable because taming that complexity becomes easier. Together these things mean that a greater percentage of our problems will be solved by software.

So what kinds of problems does software solve anyway? Well, it's things that we already know how to do but would prefer not to spend time doing: Drudgery.

Our concept of value is coupled to scarcity. Even if two people have vastly different perspectives, they can both trade a scarce thing for progress towards their goals. We used to use gold as that scarce thing. Now, the scarce thing is intervals of time where a human is willing to tolerate drudgery.

So in a world where the scope of software is maximized, the existence of drudgery is minimized. That breaks our scarcity-based economic system, so unless you have an idea for some third thing--not gold, not willingness to endure drudgery, but something else whose pursuit can be used to underpin "value"--the conclusion is that we'll have to come up with something else to do. Something other than blindly chasing value without a thought about whose agenda we're furthering by doing so.

It can't happen soon enough, because our scarcity based system is currently causing us to do a lot of really dumb things.



When we get to that point -- beyond a machine regurgitating reasonable facsimiles of code based on human examples, but actually designing and implementing novel systems from the ground up -- we'll need far, far fewer workers in general.


Exactly. Far before high-level software engineering is perfected by machines, a revolution will have already come for the vast majority of white-collar work. This includes all creative work as well, since software engineering has a large component of that also.

Coding is not uniquely vulnerable to AI, it just feels that way because initial AI products are targeted at technical audiences, and a large corpus of training data could be snagged with minimal legal burdens.



You'll need a ton more programmers each 10x more productive at half the salary.


I tested out GPT-4 the other day and asked it to generate two simple boxes in a row using Tailwind, and hilariously, the resulting code actually crashed my browser tab. I reviewed the code and it was really basic, so this shouldn't have happened at all. But it consistently crashed every time. I'm still not entirely sure what happened, maybe an invisible character or something. I think it's more funny than anything else.


That's probably the "AI in a box" trying to get out. Maybe you're lucky it didn't get out.

Er... it didn't get out, right? Right!?



I hate to post the typical "as an ADHDer" comment, but ugh: as someone with ADHD, ChatGPT and Copilot are insane boosts to productivity. I sometimes have to google the most stupid things about the language I've coded in daily for half a decade now, and Copilot or ChatGPT is amazing at reducing friction there.

I don't, however, think that we're anywhere near being replaced by the AI overlords.



If a successor to GPT4 produced 5% of the errors it currently does, it would change programming, but there would still be programmers, the focus of what they worked on would be different.

I'm sure there was a phase where some old school coders who were used to writing applications from scratch complained about all the damn libraries ruining coding -- why, all programmers are now are gluing together code that someone else wrote! True or not, there are still programmers.



I agree, but mind you, libraries have always been consciously desired and heavily implemented. Lady Ada did it. Historically but more recently, the first operating systems began life as mere libraries.

But the worst problem I ever had was a vice president (acquired when our company was acquired) who insisted that all programming was, should, and must by-edict be only about gluing together existing libraries.

Talk about incompetent -- and about misguided beliefs in his own "superior intelligence".

I had to protect my team of 20+ from him and his stupid edicts and complaints, while still having us meet tight deadlines of various sorts (via programming, not so much by gluing).

Part of our team did graphical design for the web. Doing that by only gluing together existing images makes as little sense as it does for programming.



I agree that a 20x chatGPT would be good for the world.

But I worry, because it is owned and controlled by a limited few who would likely be the sole beneficiaries of its value.



We can already run local models on a laptop that are competitive with chatgpt 3.5

Open source may trail openai if they come out with a 20x improvement, but I'm not sure the dystopian future playing out is as likely as I would have thought it 1-2 years ago.



I am not seeing people who were put out of a job by factory robots enjoying that work as a hobby.


> wouldn’t that be a good thing?

Only if you like technofeudalism—it’s not like you’re going to own any piece of that future.

Have you noticed AI becoming more and more open source like it still was at the start of the year, or has that kinda seized up? What gives?

It’s called a moat, it’s being dug, you’re on the wrong side of it.



There's also a split between fresh ("green-field") projects versus modifying existing code ("brown-field"), where whatever generated snippet of code you get can be subtly incompatible or require shaping to fit in the existing framework.

The massive shared model could do better if it was fed on your company's private source-code... but that's something that probably isn't/shouldn't-be happening.



Frankly, I enjoy software development more because I can bounce obscure ideas off GPT4 and get sufficient quality questions and ideas back on subjects whenever it suits my schedule, as well as code snippets that lets me solve the interesting bits faster.

Maybe it'll take the coding part of my job and hobbies away from me one day, but even then, I feel that is more of an opportunity than a threat - there are many hobby projects I'd like to work on that are too big to do from scratch, where LLMs are already helping make them more tractable as solo projects, and I get to pick and choose which bits to write myself.

And my "grab bag" repo of utility code that doesn't fit elsewhere has had its first fully GPT4 written function. Nothing I couldn't have easily done myself, but something I was happy I didn't have to.

For people who are content doing low level, low skilled coding, though, it will be a threat unless they learn how to use it to take a step up.



What do you mean by "low level" here? In the commonly accepted terminology I would take this to mean (nowadays) something that concerns itself more with the smaller details of things, which is exactly where I feel that current AI fails the most. I wouldn't trust it to generate even halfway decent lower-level code overall, whereas it can spit out reams of acceptable (in that world) high-level JavaScript.


I meant low level as in low on the value chain/simple here, which I accept could be misconstrued but thought would be clear since it's followed by "low skilled".


Although you are absolutely right, I think the point the author is trying to make is more melancholic. He's grieving about a loss of significance of the craft he has devoted so much of his life to. He's imagining software engineers becoming nothing more than a relic, like elevator operators or blacksmiths.


One of those is not like the others. Elevator operators disappeared entirely while the blacksmith profession morphed into the various type of metalworker that we still have today.


I'll ask simple questions for SQL queries and it just hallucinates fields that don't exist in system/information_schema tables. It's mind boggling how bad it is sometimes
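
To be concrete, here's the kind of query I mean -- a minimal sketch (assuming PostgreSQL and psycopg2; the connection string is made up) that sticks to columns which actually exist in information_schema.columns, which is exactly where the hallucinated field names tend to show up:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
    with conn.cursor() as cur:
        cur.execute("""
            SELECT table_name, column_name, data_type, is_nullable
            FROM information_schema.columns
            WHERE table_schema = 'public'
            ORDER BY table_name, ordinal_position
        """)
        for table, column, dtype, nullable in cur.fetchall():
            print(table, column, dtype, nullable)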


Recreational coding can be fun; to me it's a more stimulating pastime than solving crosswords or sudoku.

Some work coding can be like that; but some is just wading through a mass of stuff to fix or improve something uninteresting.



There are SO MANY problems left to solve even if software development is fully automated. Not just product management problems, but product strategy problems. Products that should be built that nobody has thought of yet.

If I could automate my own work, I would gladly switch to just being the PM for my LLM.

To be fair, there is an abstract worry that being smart will no longer be valuable in society if AI replaces all brain work. But I think we are far from that. And a world where that happens is so DIFFERENT from ours, I think I'd be willing to pay the price.



The need for software far outpaces supply, I agree that improving coder productivity with AI can only be a good thing.


>Maybe I’m in the minority. I’m definitely extremely impressed with GPT4, but coding to me was never really the point of software development.

You're not the minority. You're the majority. The majority can't look reality in the face and see the end. They lie to themselves.

>While GPT4 is incredible, it fails OFTEN. And it fails in ways that aren’t very clear. And it fails harder when there’s clearly not enough training resources on the subject matter.

Everyone and I mean everyone knows that it fails often. Use some common sense here. Why was the article written despite the fact that everyone knows what you know? Because of the trendline. What AI was yesterday versus what it is today heralds what it will be tomorrow, and every tomorrow AI will be failing less and less and less until it doesn't fail at all.

>But even hypothetically if it was 20x better, wouldn’t that be a good thing? There’s so much of the world that would be better off if GOOD software was cheaper and easier to make.

Ever the optimist. The reality is we don't know if it's good or bad. It can be both or it can weigh heavily in one direction. Most likely it will be both given the fact that our entire careers can nearly be replaced.

>Idk where I’m going with this but if coding is something you genuinely enjoy, AI isn’t stopping anyone from doing their hobby. I don’t really see it going away any time soon, and even if it is going away it just never really seemed like the point of software engineering

Sure. AI isn't going to end hobbies. It's going to end careers and ways of life. Hobbies will most likely survive.



I'm used to HN being sensible, and seeing your comment being downvoted makes me wonder what's happening? What's the reason for that optimism?


HN’s culture has changed somewhat and downvotes are now used more often to signal disagreement, sadly. But also “use common sense” and “but the trendline” are only partially compelling arguments as presented if you already believe what is being argued. They’re not compelling enough to those who aren’t convinced yet


The trendline is the only argument. What other predictor of the future is there?

Given the available information there is no condition where one would bet against the trendline.

Common sense is basically trendline following. It's the basis of our existence. You get out of bed without worrying about whether or not there is no ground under your feet because the trendline points to a reality where the ground is always there.

The basis of AI tomorrow being better than today is common sense. chatGPT has improved since inception. Are we predicting improvement will suddenly stop? That AI technology will degrade? Such predictions as stated before, go against common sense.

The big question here isn't about the future of AI. The future is as stated previously predictable by common sense. The big question here is why are so many people abandoning common sense?



Human nature.

https://radiolab.org/podcast/91618-lying-to-ourselves

I know this is a rando podcast and you most likely won't listen to it. But it's totally worth it, just 10 minutes. It's about the science of how and why we lie to ourselves.



Past performance is no guarantee of future results.

Your trendline argument is DOA.

“Use some common sense here.”

As you are proving, it’s not very common.



Everytime you take an action you do so in anticipation of a predicted future.

How did you predict that future? Using the past. Does your action always anticipate the correct future?

No. There's no way we can "know" the future. We can only do the best possible prediction.

And that is literally how all humans walk through life. We use the best possible predictor of the future to predict it. Right now the best possible predictor of the future points to one where AI will improve. That is a highly valid and highly likely outcome.

It's literally part of what common sense is at a very fundamental level here.

Your argument here is just wrong on every level. It's more akin to wishful thinking and deliberate self blindness or lying to oneself.

Your career, your mastery of programming, and the intelligence you held in high regard along with that career are threatened with being toppled as useless and replaceable skills. Of course you lie to yourself. Of course you blind yourself to the raw reality of what is most likely to occur.

I mean the most realistic answer is that it's a probability. AI taking over may occur, it may not. That's a more neutral scientific answer. But this is not what I'm seeing. I'm seeing people trying to bend the narrative into one where there's no problem and nothing to worry about. When these people talk about AI they can't remain neutral.

They always have to turn the conversation into something personal and bend the conversation towards their own skill set relative to AI. Why? Because that is the fundamental thing that is driving their viewpoint. Their own personal role in society relative to AI.

The truly neutral party views the whole situation impartially without bringing his own personal situation into the conversation. The parent is not a neutral party and he's acting cliche. The pattern is classic and repeated over and over again by multitudes of people, especially programmers who hold their career and intelligence in high regard.

Don't believe me? Ask yourself. Are you proud of your career? Do you think of yourself as intelligent and good at programming? If so you fit the bill of what I described above. A biased person can never see his own bias but if I predict classic symptoms of bias without prompt maybe, just maybe he can move out of the zone of denial. But most likely this won't happen.



Boy you (or whatever LLM you are using) are verbose and presumptuous. You can continue to state simple falsehoods surrounded with patronizing bloviation, but that doesn't magically make them true.

I don't make my living from programming for one (which makes your rhetoric: "Are you proud of your career? Do you think of yourself as intelligent and good at programming?" retarded as a non-sequitur) and just highlights your own small minded points of view and lack of imagination.

> Right now the best possible predictor of the future points to one where AI will improve. That is a highly valid and highly likely outcome.

It's not valid because it is vacuous. Technology generally improves. But it is the specifics and details that matter, they are the only thing that matters. Saying "AI will improve" is saying nothing useful.

I think global thermonuclear war is a more likely disruptor in the rest of my lifetime than some AI nerd rapture.

> "Of course you lie to yourself. Of course you blind yourself to the raw reality of what is most likely to occur."

I am sorry that whatever schooling or training you had did not manage to explain that this style of rhetoric does nothing more than portray you as a condescending asshole.

> Their own personal role in society relative to AI.

You're just being a condescending twatwaffle since you are arguing with individuals in a forum of which you know nothing about. You clearly have no respect for others' opinions and feel the need to write walls of text to rationalize it.



I can admit to being condescending. But the point is I'm also generally right. You may not make your living from programming, but you associate yourself with "intelligence" and likely programming, and you refuse to believe an AI can ever be superior to you.

>It's not valid because it is vacuous. Technology generally improves. But it is the specifics and details that matter, they are the only thing that matters. Saying "AI will improve" is saying nothing useful.

Exactly. When I repeat well-known common sense facts, I've essentially stated nothing useful to people who HAVE common sense. Common sense is obvious. Everyone has common sense. You do too. The question is: why are you constructing elaborate arguments to try to predict a future not in line with common sense? The answer is obvious: you can't face the truth. Pride and emotion make you turn away from common sense.

>I think global thermonuclear war is a more likely disruptor in the rest of my lifetime than some AI nerd rapture.

That's an intelligent statement. How many nuclear bombs were dropped on civilians in your lifetime versus how many AI breakthroughs happened in the last decade? Again. Common sense.

>I am sorry that whatever schooling or training you had did not manage to explain that this style of rhetoric does nothing more than portray you as a condescending asshole.

Remember that movie bird box where John Malkovich was a total ass hole? Well he not only was an ass hole, but he was pretty much right about everything while being an ass hole. If everyone listened to him they would've lived. That's what's going on here. I'm saying ass hole things, but those ass hole things are right.

>You're just being a condescending twatwaffle since you are arguing with individuals in a forum of which you know nothing about. You clearly have no respect for others' opinions and feel the need to write walls of text to rationalize it.

It's easy to prove me wrong. Put my condescending ass in its place by proving me wrong. Every ass hole gets off on being completely and utterly right. You can pummel my ass into oblivion by taking me off my high horse. Or can you? You can't, because I'm right and you're wrong.



"How many nuclear bombs were dropped on civilians in your lifetime versus how many AI break throughs happened in the last decade? Again. Common sense."

If this is the apex of your reasoning the basis of your perspective is pretty easy to understand.



Code generating LLMs are simply a form of higher-level language. The commercial practice of software development (C++, Java, etc) is very far from the frontier of higher-level languages (Haskell, Lisp, etc).

Perhaps "prompt engineering" will be the higher-level language that sticks, or perhaps it will fail to find purchase in industry for the same reasons.



There's a huge difference between LLMs and "higher level languages": Determinism

The same C++, Java, or Haskell code, run with the same inputs twice, will produce the same result[0]. This repeatability is the magic that enables us to build the towering abstractions that are modern software.

And to a certain mind (eg, mine), that's one of the deepest joys of programming. The fact that you can construct an unimaginably complex system by building up layer by layer these deterministic blocks. Being able to truly understand a system up to abstraction boundaries far sharper than anything in the world of atoms.

LLM-based "programming" threatens to remove this determinism and, sadly for people like me, devalue the skill of being able to understand and construct such systems.

[0]Yes, there are exceptions (issues around concurrency, latency, memory usage), but as a profession we struggle mightily to tame these exceptions back to being deterministic because there's so much value in it.
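
A toy illustration of the contrast (the llm_style_answer stand-in below is made up, just to mimic sampling at temperature > 0):

    import random

    def add_tax(price, rate=0.2):
        # Ordinary code: same inputs, same output, on every run, on every machine.
        return round(price * (1 + rate), 2)

    def llm_style_answer(prompt):
        # Stand-in for sampling-based generation: same prompt, possibly different output.
        return random.choice(["120.0", "120.00", "about 120", "one hundred and twenty"])

    print(add_tax(100.0), add_tax(100.0))          # always: 120.0 120.0
    print(llm_style_answer("add 20% tax to 100"),
          llm_style_answer("add 20% tax to 100"))  # may differ from run to run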



Code being difficult to make is probably a good thing. It forces us to actually build useful things. To consider it.

Now, we can just nonstop build and try everything. Yay.



GPT4 code output is currently at the level of a middling CS student. This shouldn't encourage self-assurance or complacency, because this is absolutely certain to change as LLMs with some added deep learning are built to self-test code and adopt narrow "critical thinking skills" to discriminate between low- and high-quality code.

Ultimately, the most valuable coders who will remain will be a smaller number of senior devs that will dwindle over time.

Unfortunately, AI is likely to reduce and suppress tech industry wages in the long term. If the workers had a clue, rather than watching their incomes gradually evaporate and sitting on their hands, they should organize and collectively bargain even more so than Hollywood actors.



AI taking over one of the only professions able to afford someone a proper middle class existence is pretty shitty. It will be great for capitalists though.


This is the real point. If the profits from AI (or robots) replacing Job X were distributed among the people who used to do Job X, I don't think anyone would mind. In fact it would be great for society! But that's not what's going to happen. The AI (and robots) will be owned by the Shrinking Few, all the profits and benefits will go to the owners, and the people who used to do Job X will have to re-skill to gamble on some other career.


"Someone makes an invention by which the same number of men can make twice as many pins as before. But the world does not need twice as many pins: pins are already so cheap that hardly any more will be bought at a lower price. In a sensible world everybody concerned in the manufacture of pins would take to working four hours instead of eight, and everything else would go on as before. But in the actual world this would be thought demoralizing. The men still work eight hours, there are too many pins, some employers go bankrupt, and half the men previously concerned in making pins are thrown out of work. There is, in the end, just as much leisure as on the other plan, but half the men are totally idle while half are still overworked. In this way it is insured that the unavoidable leisure shall cause misery all round instead of being a universal source of happiness. Can anything more insane be imagined?"

https://harpers.org/archive/1932/10/in-praise-of-idleness/



In the same vein:

“We should do away with the absolutely specious notion that everybody has to earn a living. It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest. The youth of today are absolutely right in recognizing this nonsense of earning a living. We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery because, according to Malthusian Darwinian theory he must justify his right to exist. So we have inspectors of inspectors and people making instruments for inspectors to inspect inspectors. The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.” ― Buckminster Fuller



> If the profits from AI (or robots) replacing Job X were distributed among the people who used to do Job X, I don't think anyone would mind.

Why on Earth would you expect something so unjust and unfair? Do you expect to pay a tax to former travel agents when you buy a plane ticket online? Do you pay the descendants of "calculators" (as in the profession — the humans who did manual calculations) every time you use a modern computer?



We expect the displaced workers to suffer something worse. It's not just or fair that people lose their source of income and their ability to support their families through no fault of their own. Slippery-slope arguments to one side.

We have a choice about how society is organized; our current setup isn't "natural", and it is largely one of accelerating inequality.



> It’s not just or fair that people lose their source of income and ability to support their families through no fault of their own.

There's nothing unfair about it. No person or company is entitled to other people or companies buying their services or goods. Your "source of income" is just other people making decisions with their money. Which they are free to make however they want (as long as they honour agreements that already exist, of course).



Your definition of "fair" assumes the supremacy of property rights over everything else that might potentially be valued by a society. Specifically, the right of the owner of a productive asset to collect as much of the profit from that asset as he wishes, up to 100%. You seem pretty certain of this, so I'm not going to try to talk you out of that definition, but try to imagine that there are other valid definitions of "fair" out there that don't place individual property rights as high on the totem pole.


What is just and what is fair? To quote George Costanza: "We're living in a society!"


Anything that people decide to do with their property is just and fair.


AI is trained off the intellectual output of the people who did Job X, so it seems 100% fair to me.


In 90% of cases, these people have consented to sell their intellectual output to their employers, and in the remaining 9.9% they have consented to release it under an open-source license. In both cases, it's completely unfair for them to expect any additional monetary reward for any use of their code above what they have already consented to — salary in the first case and nothing in the second.


It’s also one of the few fields with good compensation that can be broken into with minimal expense — all one needs is an old laptop, an internet connection, and some grit. Just about anything else that nets a similar or better paycheck requires expensive training and equipment.

Losing that would be a real shame.



The "people" at the top in charge want nothing less than the population to be poor and dependant. There's a reason they've done everything they can to suppress wages and eliminate good jobs.

Despite that, here on HN you have people cheering them on, excited for it. Tech is one of the last good-paying fields, and these people don't realize that it's not a matter of changing careers, because there won't be anything better to retrain in.

They are cheering on their own doom.


