There’s a strange dynamic in AI discourse where you’re only allowed to hold one of two positions: either large language models will automate all knowledge work, collapse employment, and fundamentally restructure civilisation within a decade, or they’re stochastic parrots that can’t really do anything useful and the whole thing is a bubble. The measured take, that LLMs are a significant productivity tool comparable to previous technological shifts but not a rupture in the basic economic fabric, doesn’t generate much engagement. It’s boring.
I want to make the case for boring.
Consider how we talk about LLMs as a new abstraction layer for programming. You write intent in English, the model translates it to code, you debug at the level of English when things go wrong. This is framed as revolutionary, but there’s another way to see it: it’s the same transition we’ve made repeatedly throughout computing history. Assembly programmers became C programmers became Python programmers. The abstraction rose, individual productivity increased, more total software got written, and roughly similar numbers of people were employed writing it.
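To make the abstraction-layer framing concrete, here’s a toy sketch in Python (the task, names, and prompt string are invented for illustration): the same small computation written as explicit, C-style control flow, as idiomatic Python, and as an English specification that a model would translate for you.

```python
# A toy illustration of "the abstraction rising": the same task expressed
# three ways. The task and names here are invented for the example.

from typing import List


# Level 1: explicit, C-style control flow. You manage every step yourself.
def sum_even_squares_loop(numbers: List[int]) -> int:
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total


# Level 2: idiomatic Python. The language absorbs the bookkeeping.
def sum_even_squares(numbers: List[int]) -> int:
    return sum(n * n for n in numbers if n % 2 == 0)


# Level 3: English as the source artefact. A model (not shown here) would
# translate this sentence into something like the functions above, and you
# would review or debug at the level of the sentence.
PROMPT = "Write a function that returns the sum of the squares of the even numbers in a list."

if __name__ == "__main__":
    assert sum_even_squares_loop([1, 2, 3, 4]) == sum_even_squares([1, 2, 3, 4]) == 20
```

The point of the progression isn’t that each level is “better,” only that each one moves more of the mechanical work out of the programmer’s head, which is roughly what the English layer does again.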
If English-to-code is just another abstraction layer, maybe the equilibrium looks like “same number of software engineers, each individually more productive, much more total software in the world.” That’s a big deal, but it’s not mass unemployment. It’s not the end of programming as a career. It’s what happens every time we get better tools.
The counterargument is that previous transitions still required learning a formal language with precise syntax, whereas English is natural and everyone speaks it already. This should dramatically lower barriers to entry. Perhaps, though I suspect the binding constraint was never syntax but the underlying skill of thinking precisely about systems, edge cases, state management, and failure modes. The compiler was pedagogical in that it forced you to confront ambiguity. If the LLM just does something plausible when your specification is vague, you may never develop that precision, and the systems you build may be worse for it. Whether that matters depends on the use case.
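To illustrate what “just does something plausible” looks like in practice, here’s a deliberately trivial sketch (the function and test cases are invented): an implementation that faithfully matches a vague English spec such as “split the full name into first and last name,” and quietly fails on everything the sentence never mentioned.

```python
# A toy example of the ambiguity a vague specification hides. "Split the
# full name into first and last name" sounds complete, and a model asked
# to do it will happily produce something like this:

def split_name(full_name: str) -> tuple[str, str]:
    first, last = full_name.split(" ", 1)
    return first, last


print(split_name("Ada Lovelace"))            # ('Ada', 'Lovelace')
print(split_name("Gabriel García Márquez"))  # ('Gabriel', 'García Márquez') -- is that what you wanted?

# Single-word names were never specified, so the plausible version raises.
try:
    print(split_name("Madonna"))
except ValueError as exc:
    print(f"unhandled case: {exc}")
```

The compiler never made you think about Madonna either, but the old workflow at least forced you to write the unpacking logic yourself, which is usually when these questions surface.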
Here’s a more useful way to think about which work is vulnerable to automation. Consider knowledge work as sitting on a spectrum from well-specified to ambiguous.
At the well-specified end, you have tasks where the inputs are clean, the desired output is clear, and success criteria are obvious: processing a standard form, writing code to a precise spec, translating a document, summarising a report. LLMs are excellent at this, and there’s strong evidence they can automate significant portions of it.
At the ambiguous end, you have tasks where the context is messy, the right approach isn’t obvious, and success depends on knowledge that isn’t written down anywhere. “Put together something on the competitive landscape” where you need to know what the CEO actually cares about, what was discussed in a meeting last month, which competitors are politically sensitive to mention, what level of detail is appropriate for this audience. The specification itself is the work, and it requires drawing on institutional history, unstated preferences, and accumulated judgement that no system has access to.
We’re roughly three years into widespread LLM deployment, and employment in ambiguous knowledge work (strategy, research, engineering, complex negotiation, anything where the right answer isn’t obvious ex ante) hasn’t collapsed. What we’ve seen is productivity tools that let the same people do more, sometimes with fewer junior people supporting them. That’s meaningful, but it’s not replacement.
The response is usually “give it time, organisations are slow to restructure.” Fine, but that’s unfalsifiable in the short term, which makes it a weak argument. At some point the restructuring either happens or it doesn’t, and so far it hasn’t.
There’s a contradiction at the heart of the most common AI pessimist story that I don’t see addressed often enough.
The pessimist typically argues two things simultaneously: first, that AI will displace workers and suppress wages across large portions of the economy; second, that cost-disease sectors like healthcare, education, and childcare will remain very expensive relative to incomes. These two claims are in tension. Healthcare, education, and childcare are expensive primarily because they’re labour-intensive. If labour is getting cheaper or displaced economy-wide, those sectors should eventually feel that impact too.
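To see why the two claims pull against each other, here’s a back-of-envelope sketch with invented numbers (the 70% labour share and 30% labour cost decline are assumptions for illustration, not estimates):

```python
# Back-of-envelope arithmetic with made-up numbers, to show the tension.

labour_share = 0.70         # assumed: share of a care sector's costs that is labour
labour_cost_change = -0.30  # assumed: pessimist claim 1, AI pushes labour costs down 30%

# Holding everything else fixed, the service's unit cost changes by:
unit_cost_change = labour_share * labour_cost_change
print(f"unit cost change: {unit_cost_change:.0%}")  # -21%

# Pessimist claim 2 says the price of that service stays stubbornly high
# relative to incomes anyway. For both claims to hold, something else has
# to absorb that 21%: margins, regulation, credentialing, demand growth.
```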
The most common escape route is to argue that AI specifically displaces knowledge work while physical presence and human trust requirements insulate care sectors. A copywriter loses their job to Claude, but a nurse or nursery worker remains essential because you can’t automate holding someone’s hand or watching a toddler. This might be true, but notice what it concedes: the disruption is contained to specific sectors rather than being the economy-wide transformation usually advertised. That’s a much more modest claim than “AI changes everything”.
The cleanest resolution is that the disruption is more contained than the hype suggests. We’re looking at a productivity tool that makes certain categories of work more efficient, shifts relative wages somewhat, and eventually equilibrates through normal labour market mechanisms, rather than a transformation that breaks existing economic relationships.
The historical track record on “this technology will end work” predictions is remarkably consistent: they’re always wrong. Agricultural employment went from 90% of the workforce to under 2% in developed economies. Manufacturing employment has declined dramatically since the 1950s. Each time, the prediction was mass unemployment, and each time, labour markets reallocated.
The frictions are real but finite. Retraining takes time. Credentialing is a barrier. A 45-year-old copywriter isn’t going to become a surgeon. Yet at the margin, some people retrain, and the next generation makes different career choices based on the changed landscape. Over a 10-15 year horizon, labour supply shifts toward the sectors that still need humans.
If LLMs displace some knowledge workers, the plausible story is that some of them move into care work, skilled trades, and other labour-intensive sectors that haven’t been automated. Supply in those sectors increases, wages there moderate, and prices eventually stabilise. The displaced workers find new roles, possibly at lower wages than their previous careers but not zero. This is more benign than either extreme narrative: not “everyone loses their jobs and starves,” not “nothing changes,” just the normal process of technological adjustment that has happened repeatedly throughout economic history.
This story requires that there remain sectors where humans are needed. If AI eventually does everything including physical care and embodied presence, then reallocation has nowhere to go. Yet that’s a much stronger claim than “LLMs are good at generating text,” and we’re nowhere near demonstrating it.
There’s a related point that gets lost in discussions focused purely on wages. Suppose median wages stagnate while AI-augmented workers pull ahead. Is that actually bad?
Real wages are supposed to capture improvements in purchasing power through inflation adjustment, but CPI probably doesn’t fully account for quality improvements or entirely new categories of goods. The median person today has access to information, entertainment, communication, and diagnostic tools that didn’t exist at any price thirty years ago. An iPhone is not just a more expensive Nokia; it’s a genuinely new capability. LLMs themselves might fall into this category: a capability that was previously unavailable at any price is now accessible to anyone with an internet connection.
The counterargument involves composition effects. Technology and manufactured goods have gotten dramatically cheaper and better. Housing, healthcare, education, and childcare have gotten more expensive relative to wages, and those happen to be the things that most determine life outcomes. Your iPhone is incredible, but you can’t live in it.
There’s something to this, though it circles back to the earlier point: if AI really does suppress labour costs broadly, the labour-intensive cost-disease sectors should eventually feel that pressure. You can’t simultaneously have AI destroying jobs and human-intensive services remaining expensive forever. One of those has to give.
I’m not arguing that LLMs are unimportant or that nothing will change. LLMs are clearly a significant technology that will restructure how certain work gets done. I’m arguing that the most likely outcome is something like “computers” or “the internet” rather than “the end of employment as we know it.”
What would falsify this moderate view? Two things in particular.
If employment in ambiguous knowledge work actually starts shrinking meaningfully (not just growth slowing, but absolute declines in headcount for strategists, researchers, engineers, people doing work that requires contextual judgement), that would be strong evidence against this view. If agentic systems start successfully navigating the illegible organisational context that currently requires human judgement, things like understanding unstated preferences, political sensitivities, and implicit standards, that would be significant too.
So far I don’t see compelling evidence of either. What I see is a very impressive technology that’s being discussed in maximally dramatic terms because moderate takes don’t go viral.
The boring middle ground is usually closer to the truth. That’s why it’s boring.