AI optimism is a class privilege

Original link: https://joshcollinsworth.com/blog/sloptimism

## The Privilege of AI Optimism

Written in late 2025, this article expresses deep concern about the unchecked development of AI, arguing that enthusiasm for the technology often stems from privilege and indifference to its potential harms. The author recalls an early experience with an AI "roast" generator, whose cruelty struck him with unexpected force: a moment that sparked fear about the world his children will inherit. He argues that AI is not a neutral tool but an amplifier of existing social problems, from job loss and disinformation to harassment and abuse, especially as enabled by technologies like deepfakes. While acknowledging AI's potential benefits, he contends that optimism requires ignoring the risks faced by those in worse circumstances: people vulnerable to economic shocks, online predation, or the biases AI amplifies from its training data. The author criticizes the "AI optimist" mindset as often requiring a disconnect from the real consequences others experience, along with a belief that oneself and one's loved ones will be spared. He worries about a future in which AI empowers malicious actors and deepens inequality, ultimately questioning whether the convenience and potential gains are worth the sacrifice of vulnerable groups. In the end, he argues, AI is fast becoming "the perfect bullying tool."

The Hacker News discussion centers on a critical view of AI enthusiasm. The core argument is that focusing only on the personal benefits AI brings means ignoring the harm it may cause others. Commenters expand on this, linking the optimistic outlook to privilege and denialism: a belief that one will be among the beneficiaries, paired with a tendency to downplay the risk of job loss. Many note that the belief that AI progress will stall before fully replacing humans is widespread, but may not be accurate. Beyond individual job security, the conversation turns to broader social consequences: even those who are "AI-proof" could be affected if a large share of the population suffers income loss and the social instability that might follow. The discussion highlights a tendency to deny reality until one is directly affected, and questions whether AI-driven progress is sustainable if it leads to widespread economic disruption.

Original article

Published: December 3, 2025
Updated: December 11, 2025

A while back, in a slightly earlier era of AI, a project was making the rounds that would read your GitHub profile and create a personalized roast based on its contents.

It was intended, I assume, as a harmless, lighthearted novelty. I wanted to join in on the fun, so I put my profile in and tried it out.

I didn’t laugh at my roast.

It wasn’t clever, or funny, or even particularly unexpected. A tech-savvy stranger on Fiverr probably could’ve done better.

But more than that: I remember being surprised at how mean it was. Little of what the model produced even felt like a joke; instead, it just read as a slew of very personal insults.

And then I remember being surprised that the artificial cruelty actually affected me.

Despite knowing this was all a soulless (and as it turns out, humorless) machine making a poor attempt at comedy—one that nobody else even saw!—reading those words hurt. Bizarrely, I suppose, AI actually managed to hurt me.

And that was the first time I remember thinking about what AI was going to do to my children.

If I—a grown man with thick skin, hardened by decades of internet usage—can still be susceptible to highly personalized online bullying, what will it be like for my son, when some mean kid inevitably gets their hands on this technology and decides to put it to malicious use?

By the time my kids encounter real bullying, I’m sure derogatory jokes will be about the least harmful form of antagonism AI will be empowering. Imagine the damage one bad kid could cause using deepfakes, for example. Forget the days of starting a nasty rumor and spreading it around the school; now you can share a video of it happening.

Imagine the shame, intimidation, harassment, and trauma AI might enable a cruel juvenile to inflict—particularly once the tech has had another few years to improve. (To say nothing, of course, of what it might enable for an unethical adult.)

Imagine how absolutely unmitigable the damage would be.

My reaction wasn’t laughter; my reaction was horror at the realization that we’re racing to build the perfect bullying tool.

I was never exactly an optimist when it comes to AI. But that was the first time I realized exactly how dark the future I foresaw actually was.


Although it’s not an entirely correct description, I’ll use the term “AI optimist” a lot in this post, as it’s at least a serviceable label for a general group of people.

That group, to be a bit more descriptive, is made up of people who are excited about AI. Their excitement might include future developments, but they’re particularly excited about AI in the present and near term, and about how they can use it right now. You might call them enthusiasts, or even believers, maybe. But in any case, they’re generally enthusiastic about AI, and aren’t overly concerned with costs or downsides.

You almost certainly know at least one or two of these people. Maybe you even are one. (If so: I’m not naive enough to think I’ll change your mind with this post, but I hope I’ll at least give you some things to think about.)

It seems to me that to be in this group—to regard AI, as it exists currently, with optimism and enthusiasm—requires at least a certain degree of privilege. Hence, the somewhat blunt title of this post.

I had long struggled to put the thought into words. But once it crystallized into this post’s titular sentence, I felt as though a great deal around me suddenly shifted into perspective.

So, that’s why I wrote this post; to share that perspective. It is my own, and it comes from my own experiences (and yes, through the lens of my own substantial privileges, class and otherwise). You can take it, or not, as you like.


It’s late 2025, and so you don’t need me to tell you how extreme opposing views on AI can be. Everyone has an opinion of AI, and the overwhelming majority fall to one far end of the spectrum or the other. There’s a vast divide between the sides, each fiercely passionate in their own entirely opposite ways.

For my part, I’m decidedly on the pessimist side of the chasm, for many reasons. Some I’ll get into here; others, I’ll mostly pass over, as they’ve been well covered elsewhere.

But for now, suffice to say: when I look around me at the impact AI is currently having, I see little reason for enthusiasm—let alone the little-questioned, quasi-religious belief that this fundamentally flawed technology might one day soon bring about some sort of economic, societal, and/or scientific revolution all on its own.

Come to think of it, “religious” might be a good word to describe how AI optimism feels, from the outside. It has fervent believers, prophecies from prominent figures to be taken on faith, and—of course, as with any religion—a central object of worship which can at all times be offered as The Answer, no matter what the question might happen to be.

In fairness: that’s not all AI optimists. I’m mostly describing the extreme ones.

Even among the more moderate optimists, though—ordinary people who just like the tech—the enthusiasm has always seemed…disproportionate, let’s say.

It was always perplexing to me that so many of my peers seemed so eager to be across the divide from me; that they were so much more impressed with AI than I was, and so indifferent to what I felt were alarming flaws and drawbacks.

They didn’t seem particularly different from me. In fact, many were my friends, connections, and people I looked up to.

We were looking at the same tech, with the same outcomes, and drawing entirely different conclusions. What was I missing?

The answer eventually hit me:

They see themselves as the ones benefiting from AI, and not as the ones it might cost.


I concede AI can occasionally be helpful for certain tasks, and I can understand the enthusiasm, as far as that goes. I don’t use it often, but admittedly some. (I do still write every word of every post on my own, however, hand-typed em dashes and all.)

I sometimes find AI helpful for generating reference images to use as starting points for illustrations, and occasionally for ideating, as a “rubber duck” of sorts. I also use it once in a while to compensate for my own color vision deficiency. But mostly, it helps me with code.

In full disclosure of all the mundane details: I mostly only use code completion suggestions in VS Code, even though they’re often hit and miss. I rarely use chat mode, and when I do, it tends to be mostly for rote tasks like format conversion or pattern matching. That’s pretty much it. Every time I’ve tried giving AI more responsibility than that, it’s let me down pretty spectacularly.
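
To make "rote tasks" concrete: the kind of conversion I'm comfortable handing off looks like the hypothetical sketch below, reshaping a few CSV rows into typed objects. (The names and data are invented for illustration; the point is that the task is pure pattern-following, with nothing creative to get wrong.)

```typescript
// A hypothetical "rote" conversion task: CSV rows into typed objects.
// All names and data here are illustrative, not from any real project.
interface User {
  name: string;
  email: string;
  age: number;
}

function csvToUsers(csv: string): User[] {
  const [header, ...rows] = csv.trim().split("\n");
  const keys = header.split(",").map((k) => k.trim());

  return rows.map((row) => {
    const values = row.split(",").map((v) => v.trim());
    // Zip the header keys with this row's values into a plain record.
    const record = Object.fromEntries(
      keys.map((k, i): [string, string] => [k, values[i] ?? ""])
    );
    return {
      name: record.name,
      email: record.email,
      age: Number(record.age),
    };
  });
}

// Example usage:
const sample = `name,email,age
Ada,ada@example.com,36
Grace,grace@example.com,45`;

console.log(csvToUsers(sample));
// [ { name: "Ada", email: "ada@example.com", age: 36 },
//   { name: "Grace", email: "grace@example.com", age: 45 } ]
```

Transformations like that are mechanical enough that a model has little room to get creative, which is exactly why they mark the ceiling of what I'll delegate.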

I’m deeply skeptical that AI offers a net productivity boost in general, but particularly that it’s capable of high-quality frontend code. I theorize good frontend is just too subjective, too visual, balances too many concerns, and is too underrepresented in training data. (That might explain why developers in other specialties seem to report better results.)

I can already hear the enthusiasts scoffing and getting up to leave, because I don’t use AI “the right way,” by vibe-coding with Cursor agentic MCP, or whatever the flavor of the week is. And it’s true; I’ve never gone that deep with it.

That’s partly because I’ve heard too many horror stories about leaked secrets, deleted databases, and wiped hard drives. I don’t like the idea of giving a non-deterministic black box full control of my machine and/or production.

But it’s also because I like using my brain. Any passion I have for what I do comes largely from the process of ideating, building, and creatively solving a problem. Having a machine do all that for me and skipping to the result is as unsatisfying as a book full of already-completed sudoku puzzles, or loading up a save file where somebody else already played the first two thirds of a video game. I don’t do these things just because I want the result; I also do them because I want the experience.

I want to improve! And it’s hard to imagine how that might happen if I’m not actually putting skills into practice.

All of that’s mostly beside the point anyway, though; my issues with AI have little to do with its level of effectiveness.


Even if my new coding buddy is severely prone to overconfidence, it’s still admittedly exciting when it makes previously time-consuming and/or challenging tasks quick and easy.

In order to be an AI optimist about this, however: that’s where I would have to stop thinking about it.

I would be forced to ignore what else my little coding buddy is getting up to when I’m not looking; the other impacts he’s having on other people’s lives.

Let’s take layoffs as an example.

In order to be an AI optimist, it seems to me you’d have to believe yours is not among the jobs at risk of being automated or downsized, and that you aren’t among the countless workers staring down displacement. (Or at least: not at risk of AI taking over the interesting and fulfilling parts of your work, as your role is reduced to acting as its manager.) After all, how could you feel enthusiastic about a threat to your own livelihood?

You’d need to be high enough in the org chart; far enough up the pyramid; advanced enough along the career ladder.

To be an AI optimist, I’m guessing you must not be worried about where your next job might come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure.

Maybe it’s because you’ve already made a name for yourself. Maybe you’re known at conferences, or on podcasts. Maybe you’re just senior enough that your résumé opens doors for you.

Or maybe you’ve been promoted into leadership. Maybe you spend your days in important meetings.

Maybe this is all a lot easier to be optimistic about with the right charts and graphs in front of you.

You almost certainly aren’t a junior, though, or an intern, or somebody trying to break into the field. You must not be near the rising tide engulfing entry-level workers across my industry and a wide range of others. Because, infamously, nobody is hiring juniors anymore.

It seems fairly safe to assume you aren’t in the first group against the wall, if you’re excited about the thing putting them there.

You probably aren’t a contractor, either, or working at a consultancy. And for that matter: you almost certainly aren’t an artist, or illustrator, or writer. You probably haven’t watched client dollars funnelled upwards, with the bitter knowledge that this thing eroding your income is only possible because it brazenly plagiarized you and a million other people who do what you do.

AI optimism probably means you’re in a position where nobody is stealing your work, or bulldozing your entire career field.

That’s the thing about being bullish on AI: to focus on its benefits to you, you’re forced to ignore its costs to others.

AI optimism requires believing that you (and your loved ones) are not among those who will be driven to psychosis, to violence, or even to suicide by LLM usage. At the very least, this means you feel secure in your own mental health; likely, it also means you have a wider and more substantial support system propping up your wellbeing.

(Not to put too fine a point on it, but: those things are otherwise known as privileges.)

AI optimism requires you to believe that, whoever will be impacted by the sprawling data centers, the massive electricity demands, the water consumption, and the other environmental hazards of the AI boom, it won’t be you. Whatever disaster might happen, your neighborhood will be safe from it. Probably far away from it.


The harms of AI aren’t a standalone issue; as AI becomes a part of other technologies, systems, and parts of society, it’s exacerbating their existing problems, and accelerating damage already being done elsewhere.

I have to believe scammers are enthusiastic about AI; there’s likely never been a more helpful tool for fraud. Criminals and con artists have always been around, of course, but they’ve never had such powerful instruments at their disposal. After all, it’s much easier to rob somebody’s unsuspecting grandma when you can simply conjure a video of that person out of thin air, or perfectly imitate their voice on a phone call.

But that’s a relatively small scale of harms, aimed at individuals. The broader harms come from AI interacting with systems, like governments and their substructures.

Malicious state actors (both in and outside of the US) are wielding AI as a ruthlessly efficient propaganda machine, disseminating disinformation that’s more convincing than ever, faster than ever previously possible. Much of what’s being produced serves to dehumanize and victimize vulnerable groups, like immigrants, refugees, queer people, and political dissidents. Mainly (but not exclusively), this is to bolster authoritarian power.

It’s hard to imagine how one could be optimistic about the technology empowering such horrors, but I suppose knowing it probably won’t affect you must help.

I doubt I could feel very good about the tech helping me write emails faster if I knew that same tech was helping to make me, or people close to me, a target of violence.

Even when the intent might be good, however, AI often amplifies existing harms.

In the rush to shove AI into everything possible, we’ve now injected it into parts of the justice system, too. It’s in everything from facial recognition and surveillance tech to data and administrative work. It’s even in the legal system.

In theory, this is an efficiency boost. In theory, a machine should be less biased than humans.

In reality, not only do these models make mistakes at a rate that is utterly unacceptable in this context; they mimic and amplify the inherent racism present in their own training data. (Tech is always a mirror of its creators; it is never neutral.) Compounding this problem, AI is non-deterministic, and something of a black box, offering little to no way to inspect, challenge, or appeal its results.

Needless to say, this deployment of AI has already had a profoundly devastating impact on real people’s lives—damage which shows no signs of slowing.

Forgive me, but I can’t imagine being excited that this technology which is rapidly accelerating inequality is also helping me save a little time on writing code.

I have to imagine such excitement would require me to think none of this could happen to me, or to anybody who matters to me.

Or, at the very least: that it’s all undeniably unfortunate, but ultimately, in service to some greater good. A justifiable tradeoff; a glitch to be ironed out.

AI optimism requires you to see the lives of at least some of your fellow humans as worthwhile sacrifices; bug reports in a backlog.

But even when there’s no larger system behind it, and even with no broader goal or agenda at all—malicious or otherwise—AI can still amplify existing harms.

One example at the top of my mind: Facebook was recently flooded with AI-generated videos of women being violently strangled. There was no apparent deeper purpose behind this horrifying wave of misogynistic terrorism, however; it just happened to be what the algorithm rewarded. That content generated engagement, and that engagement generated more of the same content.

A similar thing happened recently on TikTok, but this time it was videos of immigrants being ruthlessly brutalized that struck a nerve and triggered a proliferation of objectionable content across the platform.

Sometimes this effect is more or less benign (see: Shrimp Jesus); other times, a machine built to provoke a reaction will inevitably hit paydirt in the horrifying, the traumatizing, the inhumane, and the unacceptable.

AI isn’t just harmful on its own; it’s a force multiplier for existing harms. The intent behind it, if one even exists, is irrelevant; the impact is the same.


I think all of this is why so many of us are so pessimistic about AI; we can see very clearly the many ways it represents a threat to us, and to the things we care about.

For so many, AI stands to take away something important from us and those around us—safety, stability, creativity—and replace it with benefits for somebody else; productivity and profit, going mainly to those above us.

I think so many people are against AI because they see how it functions as a system for taking away from those with the least, to give even more to the already highly privileged.

This is why the promise of AI fixing everything and empowering workers is so important; it’s the linchpin of the whole operation. It’s required to get buy-in from the people who stand to lose the most.

So let’s talk about that next.


Some might argue I’m missing the entire point here, by focusing so much on the present. Optimism isn’t about what’s happening right now, they might say; it’s about the future!

Forget what AI actually is currently; the models will get better, the data centers more energy-efficient, the tokens cheaper, the mistakes rarer, the harms mitigated, and so on, until we have something that changes the world for the better; an actual benevolent technology that solves our problems, in whatever way. Maybe it even is, or leads to, AGI (actual human-level artificial intelligence; the thing AI used to mean before 2022).

I take issue with these predictions, for several reasons:

  • While I’m sure the technology and its costs will continue to improve, it’s hard to see how that would mitigate most of these harms. Many would just as likely be intensified by greater speed, efficiency, and affordability.

  • I’m wary of predictions in general, but particularly those that bear little to no resemblance to observed reality. Absent a clear evidential link, prediction tends to be based purely on hype and speculation, and there’s a wild overabundance of both around AI.

    It’s reasonable to believe the tech will improve. It seems much less reasonable to think it might suddenly change into something new, develop presently impossible capabilities, or take us somewhere far distant with absolutely no clear path or connection between here and there.

  • Most of the utopian visions of AI center on the idea that AI is sentient, which it categorically, factually, is not. Language and statistics can mimic cognition convincingly, and our human brains are overly eager to anthropomorphize anything that vaguely imitates human behavior. Thinking and reasoning are very different from statistically emulating communication.

  • Many LLM experts, including prominent researchers from OpenAI and Meta, tell us the models are already approaching their realistic ceiling, and cannot practically scale further beyond this plateau, for several reasons (not least of which: the exhaustion of training data that isn’t already tainted by AI). Several also tell us LLMs are a dead end when it comes to achieving AGI, and most generally agree it’s literally impossible to stop LLMs from making things up. (Really: actual people from OpenAI publicly admitted LLMs will never stop “hallucinating.” It’s an unfixable bug, because it’s a core component of how LLMs work.)

    So if you believe LLMs are intelligent, or eventually will be, you’re fighting a lot of people who know a lot more than you do from every angle.

  • Even if we ignore all the technical limitations, or find ways around them: new advancements simply don’t work that way. They never have. (The equitable, worker-liberating way, that is.) Tech doesn’t free workers; it forces them to do more in the same amount of time, for the same rate of pay or less.

    If you become twice as productive, you don’t get twice the pay or twice the time off; you just get twice the workload—likely because somebody else doing the same job just got laid off, and now you’re doing their work, too.

    This sort of technology distributes instability to the many at the bottom, while consolidating benefit at the top—and there has arguably never been a more efficient mechanism for this than AI. I see absolutely no reason to believe this time will be different, especially because:

  • AI models exist in the consolidated hands of a precious few huge companies, which are themselves quite obviously happy to do away with as many of their own workers as they possibly can. AI will serve, and is already serving, corporate interests first and foremost—especially as these models continue to replace core infrastructure, like web search, and can be manipulated however the companies please.

  • Regardless, even if you naively believe in the tech: you’re still willing to put up with all the harms and dangers of AI until that imagined potential future arrives—which brings us back to the original point.

Some might also point to positive use cases for AI. Accessibility is a popular one. (In fact, it’s so popular that online AI apologists have realized all they need to do is invoke the word “ableist” to shut down any discussion.)

Yes, there are good use cases for AI. I don’t think most reasonable people would argue with that. Like I said: I sometimes even use it myself, to compensate for my own physical inability. But calling out such cases tends to be a bad-faith attempt to justify all of AI’s other harms by using disabled people and others who might benefit from AI, rather than reckoning with the damage and rethinking our deployment of AI in order to maximize good and minimize harm for everyone. We don’t have to accept every use of AI and all of its impacts just because some of them might be beneficial.

Finally, let me take a moment to address anyone who might be thinking: sure, AI is being used for some bad things, but I’m not personally using it that way. What’s wrong with me just focusing on the good parts and enjoying the benefits to me?

My friend, that’s privilege. You are literally describing privilege.


Let me close this post the same way it began; with a personal example from my own family.

I have a newborn daughter.

I began writing this post before she was born, and mostly because of her, I’m now finishing it up several weeks later. (I’ve fit most of this writing into her nap schedule, typing as she sleeps beside me.)

And I can’t shake the thought that I’m welcoming her into a world where so much of the potential malicious misuse of AI could one day be directed at her.

Looking beyond all the things we’ve already talked about: technology in general has made things like stalking and abuse easier than ever. But AI goes even further. I live knowing AI will allow any degenerate pervert with an internet connection to create deepfakes of this little girl—up to and including pornography—without any consent at all, at barely the click of a button.

If this sounds like a horrifying, disturbed thought: it is! It absolutely is! But I’m not coming up with this on my own; this is already happening to untold numbers of women, many of whom are school-aged girls.

To be an AI optimist, I would need to turn away from this. Ignore it. Consider it all just part of the plan; a price to be casually paid (hopefully by somebody else) in exchange for…what? Writing software a little bit faster?

Optimism would require me to believe that my children probably won’t have that kind of experience, or any others I’ve described here.

To believe they’ll be in better schools. Better neighborhoods. Have better friends. Better support systems.

Won’t ever attract the attention of the wrong guy, or piss off the wrong girl.

Won’t ever live in the wrong places. Won’t ever find themselves in the wrong part of the system.

Won’t end up on the wrong side of the accelerated inequality.

AI optimism requires you to see yourself and your loved ones as safe from AI; as the passengers in the self-driving car, and not as the pedestrians it might run over.

I don’t know how you see yourself that way without a great deal of class privilege.

The rest of us?

I guess it’s hard to see the convenience as worth the price—let alone exciting—when you know you could be among the ones paying for it.
