What if AI is both good and not that disruptive?

原始链接: https://deadneurons.substack.com/p/what-if-ai-is-both-really-good-and

## The "Boring" Truth About Large Language Models

Current AI discourse is polarised: either large language models (LLMs) will fully automate all knowledge work, or they are merely overhyped "stochastic parrots." This article argues for a calmer, and therefore less exciting, view: LLMs are a significant *productivity* tool, comparable to past technological shifts, but unlikely to cause widespread economic collapse.

LLMs represent another layer of abstraction, much like the move from assembly to Python. While this tends to raise individual output, it does not usually eliminate jobs in aggregate. The key skill is not syntax (now handled by the LLM) but precise, systematic thinking, which LLMs cannot teach.

The impact will likely be concentrated in well-specified tasks (summarising, coding to a spec), while ambiguous, context-dependent work (strategy, complex negotiation), which requires unwritten knowledge and judgement, remains largely human-driven. Three years into LLM deployment, employment in these ambiguous roles has not collapsed.

Moreover, the mass-unemployment narrative contradicts the expectation that labour-intensive sectors such as healthcare will stay expensive. If AI makes labour costs *fall*, those sectors should eventually see prices stabilise.

Ultimately, the most likely outcome is labour reallocation, with some workers moving into sectors that still require a human presence, echoing historical patterns of technological adjustment. LLMs are consequential, but expecting them to trigger a uniquely catastrophic shift is probably an overreaction.

## AI: Not a Revolution, But an Evolution?

A Hacker News discussion explores whether AI's impact will resemble the introduction of computers or the internet: significant, but not fundamentally disruptive to employment. The core argument is that AI excels at well-specified tasks (such as data processing or translation) but struggles with the ambiguity inherent in programming and problem-solving.

Commenters argue that AI will not *replace* professionals so much as *augment* them. In healthcare, for example, AI could free doctors from routine tasks so they can focus on complex cases and patient care. Concerns were raised, however, about data privacy, about AI amplifying existing systemic problems (for example, overworked doctors simply offloading tasks), and about the irreplaceable value of human connection, especially in fields like medicine.

The overall sentiment leans toward cautious optimism, favouring a default assumption of incremental technological change absent compelling evidence otherwise.

## Original Text

There’s a strange dynamic in AI discourse where you’re only allowed to hold one of two positions: either large language models will automate all knowledge work, collapse employment, and fundamentally restructure civilisation within a decade, or they’re stochastic parrots that can’t really do anything useful and the whole thing is a bubble. The measured take, that LLMs are a significant productivity tool comparable to previous technological shifts but not a rupture in the basic economic fabric, doesn’t generate much engagement. It’s boring.

I want to make the case for boring.

Consider how we talk about LLMs as a new abstraction layer for programming. You write intent in English, the model translates it to code, you debug at the level of English when things go wrong. This is framed as revolutionary, but there’s another way to see it: it’s the same transition we’ve made repeatedly throughout computing history. Assembly programmers became C programmers became Python programmers. The abstraction rose, individual productivity increased, more total software got written, and roughly similar numbers of people were employed writing it.
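To make the analogy concrete, here is a small illustrative sketch (in Python, with the LLM step represented only as a hypothetical prompt string, not a real API) of the same task expressed at three levels of abstraction:

```python
# The same task at three abstraction levels, sketched in Python.
# The "English layer" is a hypothetical prompt, not a real API call.

numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# Lower level: manual loop and accumulator, the kind of mechanism
# C or assembly forces you to spell out.
total = 0
for n in numbers:
    total += n

# Higher level: the language's built-in abstraction hides the loop.
total = sum(numbers)

# Highest level: intent in English, code generated by a model.
prompt = "Write Python that sums the values in the list `numbers`."
```

At each step the programmer specifies less mechanism and more intent, yet the number of people employed writing software didn't shrink when `sum` replaced the hand-rolled loop.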

If English-to-code is just another abstraction layer, maybe the equilibrium looks like “same number of software engineers, each individually more productive, much more total software in the world.” That’s a big deal, but it’s not mass unemployment. It’s not the end of programming as a career. It’s what happens every time we get better tools.

The counterargument is that previous transitions still required learning a formal language with precise syntax, whereas English is natural and everyone speaks it already. This should dramatically lower barriers to entry. Perhaps. Though I suspect the binding constraint was never syntax but the underlying skill of thinking precisely about systems, edge cases, state management, and failure modes. The compiler was pedagogical in that it forced you to confront ambiguity. If the LLM just does something plausible when your specification is vague, you may never develop that precision, and the systems you build may be worse for it. Whether that matters depends on the use case.
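A minimal sketch of what that precision looks like in practice, using a made-up requirement: the English spec "deduplicate the users" sounds complete, but any implementation has to resolve questions the sentence never asked.

```python
# Hypothetical example: the vague spec "deduplicate the users".
users = [
    {"email": "Ada@example.com", "name": "Ada"},
    {"email": "ada@example.com", "name": "Ada L."},
]

def deduplicate(records):
    """One defensible reading of the spec, not the only one.

    Unstated decisions made here: emails compare case-insensitively,
    and the first record seen wins. A model given the vague spec will
    silently answer these questions one way or another.
    """
    seen = set()
    result = []
    for record in records:
        key = record["email"].lower()  # decision: case-insensitive match
        if key not in seen:            # decision: first occurrence wins
            seen.add(key)
            result.append(record)
    return result

print(deduplicate(users))  # keeps only the first Ada record
```

The compiler-era programmer had to notice these decisions to get anything running at all; the prompt-era programmer can ship without ever seeing them.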

Here’s a more useful way to think about which work is vulnerable to automation. Consider knowledge work as sitting on a spectrum from well-specified to ambiguous.

At the well-specified end, you have tasks where the inputs are clean, the desired output is clear, and success criteria are obvious: processing a standard form, writing code to a precise spec, translating a document, summarising a report. LLMs are excellent at this, and there’s strong evidence they can automate significant portions of it.

At the ambiguous end, you have tasks where the context is messy, the right approach isn’t obvious, and success depends on knowledge that isn’t written down anywhere. “Put together something on the competitive landscape” where you need to know what the CEO actually cares about, what was discussed in a meeting last month, which competitors are politically sensitive to mention, what level of detail is appropriate for this audience. The specification itself is the work, and it requires drawing on institutional history, unstated preferences, and accumulated judgement that no system has access to.

We’re roughly three years into widespread LLM deployment, and employment in ambiguous knowledge work (strategy, research, engineering, complex negotiation, anything where the right answer isn’t obvious ex ante) hasn’t collapsed. What we’ve seen are productivity tools that let the same people do more, or do the same with fewer junior people supporting them. That’s meaningful, but it’s not replacement.

The response is usually “give it time, organisations are slow to restructure.” Fine, but that’s unfalsifiable in the short term, which makes it a weak argument. At some point the restructuring either happens or it doesn’t, and so far it hasn’t.

There’s a contradiction at the heart of the most common AI pessimist story that I don’t see addressed often enough.

The pessimist typically argues two things simultaneously: first, that AI will displace workers and suppress wages across large portions of the economy; second, that cost disease sectors like healthcare, education, and childcare will remain very expensive relative to incomes. These two claims are in tension. Healthcare, education, and childcare are expensive primarily because they’re labour-intensive. If labour is getting cheaper or displaced economy-wide, those sectors should eventually feel that impact too.
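A back-of-the-envelope illustration of the tension, with entirely made-up numbers: if labour is the dominant cost in a sector, an economy-wide fall in labour costs must eventually show up in that sector's prices.

```python
# Illustrative arithmetic only; both inputs are assumptions.
labour_share = 0.70        # assumed: 70% of sector costs are labour
labour_cost_drop = 0.30    # assumed: AI cuts labour costs by 30%

# Holding non-labour costs fixed, total sector costs fall by:
total_cost_drop = labour_share * labour_cost_drop
print(f"Implied fall in sector costs: {total_cost_drop:.0%}")  # 21%
```

You can argue about the magnitudes, but not about the direction: cheap labour and permanently expensive labour-intensive services don't coexist.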

The most common escape route is to argue that AI specifically displaces knowledge work while physical presence and human trust requirements insulate care sectors. A copywriter loses their job to Claude, but a nurse or nursery worker remains essential because you can’t automate holding someone’s hand or watching a toddler. This might be true, but notice what it concedes: the disruption is contained to specific sectors rather than being the economy-wide transformation usually advertised. That’s a much more modest claim than “AI changes everything”.

The cleanest resolution is that the disruption is more contained than the hype suggests. We’re looking at a productivity tool that makes certain categories of work more efficient, shifts relative wages somewhat, and eventually equilibrates through normal labour market mechanisms, rather than a transformation that breaks existing economic relationships.

The historical track record on “this technology will end work” predictions is remarkably consistent: they’re always wrong. Agricultural employment went from 90% of the workforce to under 2% in developed economies. Manufacturing employment has declined dramatically since the 1950s. Each time, the prediction was mass unemployment, and each time, labour markets reallocated.

The frictions are real but finite. Retraining takes time. Credentialing is a barrier. A 45-year-old copywriter isn’t going to become a surgeon. Yet at the margin, some people retrain, and the next generation makes different career choices based on the changed landscape. Over a 10-15 year horizon, labour supply shifts toward the sectors that still need humans.

If LLMs displace some knowledge workers, the plausible story is that some of them move into care work, skilled trades, and other labour-intensive sectors that haven’t been automated. Supply in those sectors increases, wages there moderate, and prices eventually stabilise. The displaced workers find new roles, possibly at lower wages than their previous careers but not zero. This is more benign than either extreme narrative, neither “everyone loses their jobs and starves” nor “nothing changes,” just the normal process of technological adjustment that has happened repeatedly throughout economic history.

This story requires that there remain sectors where humans are needed. If AI eventually does everything including physical care and embodied presence, then reallocation has nowhere to go. Yet that’s a much stronger claim than “LLMs are good at generating text,” and we’re nowhere near demonstrating it.

There’s a related point that gets lost in discussions focused purely on wages. Suppose median wages stagnate while AI-augmented workers pull ahead. Is that actually bad?

Real wages are supposed to capture improvements in purchasing power through inflation adjustment, but CPI probably doesn’t fully account for quality improvements or entirely new categories of goods. The median person today has access to information, entertainment, communication, and diagnostic tools that didn’t exist at any price thirty years ago. An iPhone is not just a more expensive Nokia; it’s a genuinely new capability. LLMs themselves might fall into this category, as a tool that was previously unavailable at any price is now accessible to anyone with an internet connection.
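For readers who want the mechanics: real wages deflate nominal wages by CPI, so any quality improvements or new goods that CPI misses are missed by the real-wage series too. A minimal sketch with hypothetical numbers:

```python
# Illustrative numbers only. Real wage = nominal wage deflated by CPI.
nominal_then, cpi_then = 40_000, 100.0  # hypothetical base year
nominal_now,  cpi_now  = 60_000, 150.0  # hypothetical current year

real_now = nominal_now * (cpi_then / cpi_now)
print(real_now)  # 40000.0 -> flat in real terms

# If CPI overstates inflation by missing new capabilities (smartphones,
# LLMs at any price point), measured stagnation understates welfare gains.
```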

The counterargument involves composition effects. Technology and manufactured goods have gotten dramatically cheaper and better. Housing, healthcare, education, and childcare have gotten more expensive relative to wages, and those happen to be the things that most determine life outcomes. Your iPhone is incredible, but you can’t live in it.

There’s something to this, though it circles back to the earlier point: if AI really does suppress labour costs broadly, the labour-intensive cost disease sectors should eventually feel that pressure. You can’t simultaneously have AI destroying jobs and human-intensive services remaining expensive forever. One of those has to give.

I’m not arguing that LLMs are unimportant or that nothing will change. LLMs are clearly a significant technology that will restructure how certain work gets done. I’m arguing that the most likely outcome is something like “computers” or “the internet” rather than “the end of employment as we know it.”

What would falsify this moderate view? Two things in particular.

If ambiguous knowledge work employment actually starts shrinking meaningfully (not just growth slowing, but absolute declines in headcount for strategists, researchers, engineers, people doing work that requires contextual judgement), that would be strong evidence that this viewpoint is wrong. If agentic systems start successfully navigating the illegible organisational context that currently requires human judgement, things like understanding unstated preferences, political sensitivities, and implicit standards, that would be significant.

So far I don’t see compelling evidence of any of these. What I see is a very impressive technology that’s being discussed in maximally dramatic terms because moderate takes don’t go viral.

The boring middle ground is usually closer to the truth. That’s why it’s boring.
