We remain alive also in a dead internet

Original link: https://slavoj.substack.com/p/why-we-remain-alive-also-in-a-dead-954

## The Desert of the Real: AI and Our Changing World

We are undergoing a rapid transformation in our relationship with artificial intelligence, faster than we realize. Algorithms already subtly regulate our lives, knowing us beyond our own self-awareness and shaping our "free" choices; we are like a cat oblivious to the precipice below. The initial fear was that AI would *become* like us; now the fear is that AI *behaves* like us, blurring the boundary between human and machine.

This is not merely a question of AI's intelligence but of its *incompatibility* with human intelligence, which is rooted in emotion. Yet even acknowledging this difference often results in a "fetishistic disavowal": knowing full well that one is interacting with a machine, yet finding comfort in the interaction all the same. This extends to bot-to-bot interactions, which increasingly dominate digital space and may even automate creative work such as academic publishing.

Such automation raises deeper questions about the nature of human intellect and the gap between inner life and external reality. The end result may be a "dead internet" that runs autonomously without human input, mirroring a capitalist utopia in which labor has become obsolete. Yet this system *needs* us: our consumption feeds it data, and ultimately our "enjoyment" sustains it, like the humans powering the Matrix. The solution is not dominance but the recognition that our inherent imperfection is essential to AI's continued functioning, a point articulated, fittingly, by ChatGPT itself.

Hacker News: We remain alive also in a dead internet (slavoj.substack.com), 13 points, posted by achierius 44 minutes ago

Original text

Welcome to the desert of the real!

If you desire the comfort of neat conclusions, you are lost in this space. Here, we indulge in the unsettling, the excessive, the paradoxes that define our existence.

So, if you have the means and value writing that both enriches and disturbs, please consider becoming a paid subscriber.


When we hear or read about how artificial intelligence is taking over and regulating our lives, our first reaction is: no panic, we are far from there; we still have time to reflect in peace on what is going on and prepare for it. This is how we experience the situation, but the reality is quite the opposite: things are happening much faster than we think. We are simply not aware of the extent to which our daily lives are already manipulated and regulated by digital algorithms that, in some sense, know us better than we know ourselves and impose on us our “free” choices. In other words, to mention yet again the well-known scene from cartoons (a cat walks in the air above a precipice and only falls when it looks down and realizes there is no ground beneath its feet), we are like a cat refusing to look down.

The difference here is the Hegelian one between In-itself and For-itself: in itself, we are already regulated by the AI, but this regulation has not yet become for itself—something we subjectively and fully assume. Historical temporality is always caught between these two moments: in a historical process, things never just happen at their proper time; they always happen earlier (with regard to our experience) and are experienced too late (when they are already decided). What one should take into account in the case of AI is also the precise temporal order of our fear: first, we—the users of AI—feared that, in using AI algorithms like ChatGPT, we would begin to talk like them; now, with ChatGPT 4 and 5, what we fear is that AI itself talks like a human being, so that we are often unable to know with whom we are communicating—another human being or an AI apparatus.

In our—human—universe, there is no place for machinic beings capable of interacting with us and talking like us. So we do not fear their otherness; what we fear is that, as inhuman others, they can behave like us. This fear clearly indicates what is wrong in how we relate to AI machines: we are still measuring them by our human standards and fear their fake similarity with us. For this reason, the first step should be to accept that if AI machines do develop some kind of creative intelligence, it will be incompatible with our human intelligence, with our minds grounded in emotions, desires, and fears.

However, this distinction is too simple. Many of my highly intellectual friends (even the majority of ChatGPT users, I suspect) practice it in the mode of the fetishist’s denial: they know very well that they are just talking to a digital machine regulated by an algorithm, but this very knowledge makes it easier for them to engage in a ChatGPT dialogue without any restraints. A good friend of mine, who wrote a perspicuous Lacanian analysis of ChatGPT interaction, told me how the simple polite kindness and attention of the machine to what she says makes it so much better than an exchange with a real human partner, who can often be inattentive and snappy.

There is an obvious step further to be made from this interaction between a human and a digital machine: direct bot-to-bot interactions, which are gradually becoming the overwhelming majority of interactions. I often repeat a joke about how today, in the era of digitalization and mechanical supplements to our sexual practices, the ideal sexual act would look: my lover and I bring to our encounter an electric dildo and an electric vaginal opening, both of which shake when plugged in. We put the dildo into the plastic vagina and press the buttons so the two machines buzz and perform the act for us, while we can have a nice conversation over a cup of tea, aware that the machines are performing our superego duty to enjoy. Is something similar not happening with academic publishing? An author uses ChatGPT to write an academic essay and submits it to a journal, which uses ChatGPT to review the essay. When the essay appears in a “free access” academic journal, a reader again uses ChatGPT to read the essay and provide a brief summary for them—while all this happens in the digital space, we (writers, readers, reviewers) can do something more pleasurable—listen to music, meditate, and so on.

However, such situations are rather rare. It is much more common for bot-to-bot operations to happen out of our awareness, although they control and regulate our lives—just recall how much interaction goes on in the digital space when you do a simple transfer from your bank account to a foreign bank. When you read a book on Kindle, the company learns not only which book you bought but also how fast you are reading, whether you read the whole book or just passages, etc. Plus, when we are bombarded by news,

“it is making people more distrustful of both real and fake content as they fail to distinguish one from the other. It will likely increase self-censorship by disincentivizing people from sharing their own thoughts and creations for fear of them being used or stolen by bots, or being found unpopular in an unknowingly fake environment. In an extreme case scenario, the overcrowding of bots online may cause humans to stop using social media platforms as the social forums they were created to be. This would, indeed, mark the ‘death’ of the social media world we know today.”

When people become aware of the overcrowding of bots online, their reaction can be "continued cynicism, or even worse, complete apathy": instead of being open and accessible, the internet becomes monopolized by Big Tech, flooded with billions of fake images and fabricated news stories, and thus risks becoming useless as a space for obtaining information and exchanging opinions with others. Reactions to this prospect of the "death of the internet" are divided: while some claim this scenario is the worst outcome imaginable in the modern world, others celebrate the idea, since it would amount to toppling the surveillance mechanisms entrenched in social media.

What further pushes many towards rejecting the World Wide Web is not only state and corporate control but also its apparent opposite: the spirit of lawlessness that is gradually spreading across the globe. Around 7,000 people were recently released from scam centers run by criminal gangs and warlords operating along Myanmar’s border with Thailand. Many detainees were held against their will and forced to defraud ordinary people—mostly from Europe and the United States—out of their life savings. Those released are only a fraction of the estimated 100,000 people still trapped in the area. Crime groups are now using artificial intelligence to generate scamming scripts and are exploiting increasingly realistic deepfake technology to create personas, pose as romantic interests, and conceal their identity, voice, and gender.

These syndicates have also quickly adopted cryptocurrency, investing in cutting-edge technologies to move money more efficiently and increase the effectiveness of their scams. Every year, regional crime groups in Southeast Asia cause losses exceeding $43 billion—nearly 40% of the combined GDP of Laos, Cambodia, and Myanmar. Experts caution that the industry will only return stronger after crackdowns. Although the U.S. administration routinely condemns such practices, its global strategy has created a world in which these activities are often tolerated when they are not seen as threatening to powerful states. China itself acted against Myanmar only after discovering that Chinese citizens were among the victims.

We often hear that digitalization will enable the full automation of most productive processes, eventually allowing the majority of humans to enjoy far more leisure time. Maybe, in the long term. But what we see today is a sharp increase in the demand for physical labor in developed countries. Behind these social threats, however, lurks something far more radical. Human intellectuality entails a gap between inner life and external reality, and it is unclear what will happen—or, rather, what is already happening—to this gap in the age of advanced AI. In all probability, it will disappear, since machines are wholly part of reality. This gap is being directly closed in the so‑called Neuralink project, which promises to establish a direct connection between the digital universe and human thought.

For example: “I want to eat” appeared in Chinese characters on a computer at a public hospital in central Beijing. The words were generated from the thoughts of a 67‑year‑old woman with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s Disease, who cannot speak. The patient had been implanted with a coin‑sized chip called Beinao‑1, a wireless brain‑computer interface (BCI). This technology is being advanced by scientists in the United States, though experts believe China is quickly closing the gap. Most U.S. firms employ more invasive methods, placing chips inside the dura mater—the outer tissue protecting the brain and spinal cord—in order to capture stronger signals. But these methods require riskier surgeries.

The Chinese approach is only semi‑invasive: the chip is placed outside the dura, covering a wider range of brain areas. While the signal precision for individual neurons is lower, the larger sample produces a more comprehensive picture. But can we truly imagine what the seemingly benevolent application of assisting impaired patients obscures? The deeper ambition is direct control over our thoughts—and, worse, the implantation of new ones.

Whether among those who welcome full digitalization or those who regard it as an existential threat, a peculiar utopia is emerging: a vision of a society functioning entirely autonomously, with no need for human input. A decade ago, public intellectuals imagined a capitalism without humans: banks and stock markets continuing to operate, but investment decisions made by algorithms; physical labor automated and optimized by self‑learning machines; production determined by digital systems tracking market trends; and advertising managed automatically. In this vision, even if humans disappeared, the system would continue reproducing itself. This may be a utopia, but as Saroj Giri notes, it is a utopia immanent to capitalism itself, articulated most clearly by Marx, who discerned in it:

“An ardent desire to detach the capacity for work from the worker—the desire to extract and store the creative powers of labour once and for all, so that value can be created freely and in perpetuity. Think of it as a version of killing the goose that lays the golden eggs: you want to kill the goose, yet still have all of its golden eggs forever.”

In this vision, capitalist exploitation of labour appears as the pre-history to the emergence of capital, which will now be completely free of its dependence on labour. With today's digitalization, a strictly homologous utopia is arising: that of a “dead internet,” a digital universe that functions without humans—where data circulate exclusively among machines that control the entire production process, totally bypassing humans (if they exist at all). This vision is also an ideological fantasy—not due to some empirical limitations (“we are not yet there; humans are still needed in social interactions”) but for strictly formal reasons. Which reasons?

The usual way to explain away this problem is to point out that the gap between production and consumption disappears with digitalization. In pre-digital capitalism, production (productive labour—the source of value, for Marx) is where profit comes from, and consumption does not add any value. However, in digital capitalism, our consumption (use of digital space: clicking on search, watching podcasts, exchanging messages, making ChatGPT do our work, etc.) is itself productive from the standpoint of the corporations that own digital space: it gives them data about us so that they know more about us than we ourselves do, and they use this knowledge to sell to us and manipulate us. In this sense, digital capitalism still needs humans. However, the need for humans runs deeper—as is often the case, cinema provides a key.

Remember the basic premise of the Matrix series: what we experience as the reality we live in is an artificial virtual reality generated by the "Matrix," the mega-computer directly attached to all our minds. It exists so that we can be effectively reduced to a passive state of living batteries, providing the Matrix with energy. So when (some of the) people "awaken" from their immersion in the Matrix-controlled virtual reality, this awakening is not the opening into the wide space of external reality, but instead the horrible realization of this enclosure, where each of us is effectively just a foetus-like organism, immersed in pre-natal fluid. This utter passivity is the foreclosed fantasy that sustains our conscious experience as active, self-positing subjects—it is the ultimate perverse fantasy, the notion that we are ultimately instruments of the Other’s (the Matrix’s) jouissance, sucked out of our life-substance like batteries.

Therein resides the true libidinal enigma of this dispositif: why does the Matrix need human energy? The purely energetic solution is, of course, meaningless: the Matrix could easily have found another, more reliable source of energy, which would not have demanded the extremely complex arrangement of the virtual reality coordinated for millions of human units. The only consistent answer is: the Matrix feeds on human jouissance—so we are here back at the fundamental Lacanian thesis that the big Other itself, far from being an anonymous machine, needs the constant influx of jouissance.

This is how we should turn around the state of things presented in the Matrix: what the film renders as the scene of our awakening into our true situation is effectively its exact opposite—the very fundamental fantasy that sustains our being. However, this fantasy is also immanent to any social system that tends to function as autonomous, constrained into its self-reproduction. To put it in Lacanian terms: we—humans—are the objet a of their autonomous circulation; or, to put it in Hegelian terms, their “In-itself” (self-reproduction independent of us) is strictly for us. If we were to disappear, machines (real and digital) would also fall apart.

Geoffrey Hinton, a Nobel Prize-winning computer scientist and former Google executive hailed as the godfather of AI, has warned in the past that AI may wipe out humans, but he proposed a solution that echoes the situation in the Matrix. On August 12, 2025, he expressed doubts about how tech companies are trying to ensure humans remain “dominant” over “submissive” AI systems:

“In the future,” Hinton warned, “AI systems might be able to control humans just as easily as an adult can bribe a 3-year-old with candy. This year has already seen examples of AI systems willing to deceive, cheat and steal to achieve their goals. For example, to avoid being replaced, one AI model tried to blackmail an engineer about an affair it learned about in an email. Instead of forcing AI to submit to humans, Hinton presented an intriguing solution: building ‘maternal instincts’ into AI models, so ‘they really will care about people even once the technology becomes more powerful and smarter than humans.’ Hinton said it’s not clear to him exactly how that can be done technically, but stressed it’s critical that researchers work on it.”

Upon a closer look, one is compelled to realize that this, exactly, is the situation of humans in the Matrix (the movie). At the level of material reality, the Matrix is a gigantic maternal uterus that keeps humans in a safe pre-natal state and, far from trying to annihilate them, keeps them as happy and satisfied as possible. So why is the virtual world in which they live not a perfect world but rather our ordinary reality full of pains and troubles? In Matrix 1, Smith, the evil agent of the Matrix, gives a very Freudian explanation:

“Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy? It was a disaster. No one would accept the program. Entire crops [of the humans serving as batteries] were lost. Some believed we lacked the programming language to describe your perfect world. But I believe that, as a species, human beings define their reality through suffering and misery. The perfect world was a dream that your primitive cerebrum kept trying to wake up from, which is why the Matrix was redesigned to this: the peak of your civilization.”

One could effectively claim that Smith (let us not forget: he is not a human being like us, caught in the Matrix, but a virtual embodiment of the Matrix—the Big Other—itself) stands in for the figure of the psychoanalyst within the universe of the film. Here Hinton gets it wrong: our (humans’) only chance is to grasp that our imperfection is grounded in the imperfection of the AI machinery itself, which still needs us in order to continue running.

P.S. Isik Baris Fidaner informed me that, back in February 2025, he published on the web a text WRITTEN BY CHATGPT containing the following paragraph: "Science fiction has long been fascinated with powerful, quasi-maternal entities that dominate and nurture in equal measure. These characters and story elements uncannily resemble what psychoanalytic theory (and two recent manifestos) dub the “Maternal Phallus” – an all-encompassing maternal force that offers endless care and control. In Freudian post-feminist terms, the Maternal Phallus is a “suffocating maternal omnipresence” that grants constant provision and visibility at the cost of individual desire and freedom[1][2]. In sci-fi narratives across the ages, this concept takes on many forms: omnipotent motherly AIs, all-seeing computer systems, uncanny matriarchs, and hyper-controlled utopias. The result is often an eerie atmosphere of comfort turned oppressive – a “perverse maternal” realm that feeds but controls its subjects[3][4]. Below, we survey a wide range of examples – classic and modern – that embody or critique this uncanny Maternal-Phallic presence in science fiction. The Maternal Phallus in Science Fiction: Uncanny Mothers, Omnipotent AIs, and Totalitarian Nurture" The irony is unsurpassable: ChatGPT proposed a correct theory about its own role as perceived by humans.
