AI Will Be Met with Violence, and Nothing Good Will Come of It

Original link: https://www.thealgorithmicbridge.com/p/ai-will-be-met-with-violence-and

This piece connects the historical anti-loom Luddite movement to current anxieties about artificial intelligence. Just as looms were both fragile and foundational to an entire industry, datacenters, those seemingly impenetrable fortresses, now house the algorithms that run our world and, ultimately, the potential for superintelligence. Yet unlike smashing a loom, dismantling digital infrastructure proves far more complicated. The real threat is not the hardware but the ever-evolving intelligence *inside* it. The author argues that despite technological progress, human fragility and desperation remain the constant weak link. Recent events, including an attack on the home of AI leader Sam Altman and threats against OpenAI, suggest a growing frustration as people feel displaced and powerless in the face of rapidly advancing AI. The author condemns the violence but warns that deepening despair, fueled by job losses and a sense of being shut out of the future, could lead to further unrest. Crucially, the AI industry's habit of openly acknowledging the disruption it causes, rather than mitigating it, is seen as compounding the problem: it creates a scapegoat for broader social ills and may incite further dangerous reactions.

## AI, Anxiety, and the Potential for Violence - Hacker News Summary

An article titled "AI Will Be Met with Violence, and Nothing Good Will Come of It" sparked discussion on Hacker News, revealing widespread anxiety about AI's rapid development and its potential impact on jobs and society. Many commenters agreed that while violence cannot be justified, the sentiment is understandable given fears of unemployment and powerlessness. Criticism was directed at AI CEOs such as Altman and Amodei, though commenters acknowledged that removing them would not halt progress, since the problem is systemic. Several users pointed to a misallocation of capital, with AI seemingly prioritizing the automation of creative fields over drudge work. A key theme was that the anxiety itself goes unaddressed: merely rebutting individual claims about AI's impact does nothing to calm the underlying fear. Some argued the current situation resembles a "reverse innovator's dilemma," in which automation produces faster than people can consume. Others suggested blaming governments for the lack of regulation rather than targeting individuals. The discussion concluded that humans have an innate propensity for violence, one triggered by threats to livelihood and well-being.

## Original Article

Sorry to bother you on Saturday. Thought this was important to share.

The first thing you learn about a loom is that it’s easy to break.

The shuttle runs along a track that warps with humidity. The heddles hang from cords that fray. The reed is a row of thin metal strips, bent by hand, that bend back just as easily. The warp beam cracks if you over-tighten it. The treadles loosen at the joints. The breast beam, the cloth roller, the ratchet and pawl, the lease sticks, the castle; the whole contraption is wood and string held together by tension. It’s a piece of ingenuity and craftsmanship, but one as delicate as the clothes it manifests out of wild plant fibers. It is, also, the foundational tool of an entire industry, textiles, that has kept its relevance to our days of heavy machinery, factories, energy facilities, and datacenters.

It is not nearly as easy to break a datacenter.

It is made of concrete and steel and copper and it’s on the bigger side. It has interchangeable servers, and biometric locks and tall electrified fences and heavily armed guards and redundancy upon redundancy: every component duplicated so that no single failure brings the whole thing down. There is no treadle to loosen or reed to bend back.

But say you managed to bypass the guards, jump the fences, open the locks, and locate all the servers. Then you’d face the algorithm. The datacenter was never your goal; the algorithm lurking inside is. It doesn’t run on that rack, or any rack for that matter. It is a digital pattern distributed across millions of chips, mirrored across continents; it could be reconstituted elsewhere, and it’s trained to addict you at a glance, like a modern Medusa.

But say you managed to elude the stare, stop the replication, and break the patterns. Then you’d face superintelligence. The algorithm was also not your goal; the vibrant, ethereal, latent superintelligence lurking inside is. Well, there’s nothing you can do here: It always “gets out of the box” and, suddenly, you are inside the box, like a chimp being played by a human with a banana. It’s just so tasty…

There’s another solution to break a datacenter: You can bomb it, like one hammers down the loom.

Some have argued that this is the way to ensure a rogue superintelligence doesn’t get out of the box. A different rogue creature took the proposal seriously: last month, Iran’s Revolutionary Guard released satellite footage of OpenAI’s Stargate campus in Abu Dhabi and promised its “complete and utter annihilation.”

But you probably don’t have a rogue nation handy to fulfill your wishes. Maybe you will end up bombed instead and we don’t want that to happen. That’s what happens with rogue intelligences: you can’t predict them.

And yet. Two hundred years of increasingly impenetrable technology—from looms to datacenters—have not changed the first thing about the people who live alongside it. The evolution of technology is a feature of the world just as much as the permanent fragility of the human body.

And so, more and more, it is people who are the weaker link in this chain of inevitable doom. And it is people who will be targeted.

April of 1812. A mill owner named William Horsfall was riding home on his beautiful white stallion back from the Cloth Hall market in Huddersfield, UK. He had spent weeks boasting that he would ride up to his saddle in Luddite blood (a precious substance that served as fuel for the mills).

A few yards later, at Crosland Moor, a man named George Mellor—twenty-two years old—shot him. The bullet hit Horsfall in the groin and, nominative-deterministically, he fell from his horse. People gathered, reproaching him for having been the oppressor of the poor. Naturally, loyal to his principles in death as he was in life, he couldn’t hear them. He died one day later in an inn. Mellor was hanged.

History rhymes, they say.

April of 2026. A datacenter owner named Samuel Altman was driving home on his beautiful white Koenigsegg Regera back from Market Street in San Francisco, US. He had spent weeks boasting that he would scrap and steal our blog posts (a precious substance that serves as fuel for the datacenters).

A few hours later, at Russian Hill, a man named Daniel Alejandro Moreno-Gama—twenty years old—allegedly threw a Molotov cocktail at his house. He hit an exterior gate. Altman and his family were asleep, but they’re fine. Moreno-Gama is in custody.

This kind of violence must be condemned. This is not the way. It’s horrible that it is happening at all. And yet, for some reason, it keeps happening.

Last week, the house of Ron Gibson, a councilman from Indianapolis, was shot at thirteen times. The bullet holes are still there. The shooter left a message on his doorstep: “NO DATA CENTERS.” Gibson supports a datacenter project in the Martindale-Brightwood neighborhood. He and his son were unharmed.

In November 2025, a 27-year-old anti-AI activist threatened to murder people at OpenAI’s SF offices, prompting a lockdown. He had expressed a desire to buy weapons.

Increasingly, as the objects of people’s anger and frustration and desperation become unreachable behind fences and guards, or abstracted away in ones and zeros, or elevated above the clouds, the mob will turn their unassailable emotions toward human targets.

I don’t want to trivialize the grievances of the people who fear for their futures. I don’t want to defend Altman’s decisions. But this is not the way. This is how things devolve into chaos.

And I wonder: how desperate can people be before these isolated events become a snowball of violence that will be resisted by neither datacenters nor rich people’s houses?

Every time I hear from Amodei or Altman that I could lose my job, I don’t think “oh, ok, then allow me to pay you $20/month so that I can adapt to these uncertain times that have fallen upon my destiny by chance.” I think: “you, for fuck’s sake, you are doing this.” And I consider myself a pretty levelheaded guy, so imagine what not-so-levelheaded people think.

There’s a lot of friction to escalating violence, but that friction dissolves the moment this sentiment starts to be common. Normally, it just fades away anyway, but there’s one scenario where I see it inevitably escalating:

If people feel that they have no place in the future.

If they feel expelled from the system—they’re unable to buy stuff, their skills become obsolete, their chance at earning a living is replaced by a swarm of AI agents, they think we are truly going to die (so far, the violence has been tied mostly to safety AI movements)—then they will feel they have nothing to lose.

And then, and I’m sorry to be so blunt, then it’s die or kill.

Perhaps the most serious mistake the AI industry made, after creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition, was making that explicit through constant pronouncements amounting to: “we are creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition.”

And, to top it off, they add “careful down there.”

The difference between AI and, say, looms, is that this has been broadcast to the entire globe, and it has been treated in a sort of self-conscious way. The AI leaders know the problems that will emerge, cannot help but talk about them constantly, and so keep letting us know, which makes them look like psychopaths. How do you guys think people will react to this? You should be much less self-conscious and much more self-aware: realize what you sound like!

(No piece of journalism, much less one that leans forensic rather than sensationalist, could do a better job against them than their own words. These guys lack basic self-awareness. For what it’s worth, the New Yorker piece I’m referring to, which Altman also referred to in his blog post, made me see him more as a flawed human rather than a sociopathic strategist. My sympathy for him will probably never be very high, but it grew after reading it.)

People hate AI so much that they are prone to attribute to it everything that’s going wrong in their lives, regardless of the truth. That’s why they mix real arguments, like data theft, with fake ones, like the water stuff. Employers do it, too. Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.

AI has become the perfect scapegoat. It doesn’t help that the entire AI industry has decided that throwing rocks at its own roof is its best selling point: If AI is so powerful and so dangerous and soon to be so ubiquitous, then what is so unexpected about people blaming everything on it?

Nothing that Altman could say justifies violence against him. This is an undeniable truth. But unfortunately, violence might still ensue. I hope not, but I guess we are seeing what appears to be the first cases.

I just hope that, contrary to the cases of ChatGPT-induced psychosis, chatbot addiction, AI-blamed job layoffs, and a growing trend of illiteracy, it stops.
