Dead Internet Theory

Original link: https://kudmitry.com/articles/dead-internet-theory/

## The Growing Suspicion of a "Dead Internet"

A recent experience on HackerNews prompted an unsettling realization: there may be far fewer real people on the internet than we think. The author watched a heated debate over whether an open-source project had genuinely been created by a human, or whether it was heavily assisted by, or even entirely generated with, AI. The author was initially fine with AI assistance as long as it was disclosed, but grew suspicious as the project creator's replies showed clear signs of AI-generated text.

The incident prompted deeper reflection on the "Dead Internet Theory", which holds that a large share of online activity is now automated, with bots producing content to sell products or manipulate search results. The author, a long-time internet user, recalls an era of genuine connection and knowledge sharing and now feels increasingly disconnected. AI-generated "office vibes" photos on LinkedIn, for example, deepen this concern. The ease of AI content creation raises the question of whether meaningful interaction and authentic knowledge are becoming scarce, replaced by loops of bot-to-bot exchange and information repackaged for commercial gain; the shift casts doubt on the internet's future utility as a platform for genuine human connection and progress.

## The "Dead Internet Theory" and the Decline of Reddit

A recent Hacker News discussion centered on the "Dead Internet Theory": the idea that much online content is now generated by bots rather than humans. The thread was sparked by a link to an article exploring the concept. Users broadly saw Reddit as a prime example of the trend, pointing to a noticeable rise in bot activity after moderator tools were removed during the 2023 API protests. The pressure to grow revenue after going public was cited as an incentive for Reddit to allow more bots, and perhaps even to *create* them, to inflate traffic and ad revenue.

The concern is not limited to Reddit: some fear AI will eventually dominate the internet, leading to paywalled access and stifled innovation, and a few proposed a "parallel internet" to counter this. The discussion also touched on how AI, much like a manipulative sales tactic, steers users toward conclusions rather than giving direct answers, and on how unreliable AI's "explanations" of its own output are. Ultimately, the thread lamented the loss of genuine human interaction online.

Original article

#Technology #Internet

The other day I was browsing my one-and-only social network — which is not a social network, but I’m tired of arguing with people online about it — HackerNews. It’s like this dark corner of the internet where anonymous tech-enthusiasts, scientists, entrepreneurs, and internet-trolls like to lurk. I like HackerNews. It helps me stay up to date on recent tech news (like Cloudflare acquiring Astro, which makes me happy for the Astro team, but also sad and worried, since I really like Astro and big tech has a tendency to ruin things); it mostly avoids politics; and it’s not a social network.

And, in the fashion of HackerNews, I stumbled upon someone sharing their open-source project. It’s great to see people work on their projects and decide to show them to the world. I think people underestimate the fear of actually shipping stuff, which involves sharing it with the world.

Upon glancing at the comment section, I started to see other anonymous participants questioning the validity of said open-source project in terms of how much of it was AI-generated. I grabbed my popcorn and started to follow this thread. More accusations started to appear: the commit timeline does not make sense; the code has AI-generated comments; etc. And at the same time, the author tried to reply to every comment, claiming that they wrote it 100% without using AI.


I don’t mind people using AI to write code, even though I tried to resist it myself before eventually succumbing to it. But I think it’s fair to disclose the use of AI, especially in open-source software. People on the internet are, mostly, anonymous, and it’s not always possible to verify the claims or expertise of particular individuals. And as the amount of code grows, with everyone using AI to generate whatever app they want, it’s impossible to verify every piece of code we are going to use. So it’s fair to know, I think, whether a project is AI-generated and to what extent. In the end, LLMs are just probabilistic next-token generators. And while they are getting extremely good at most simple tasks, they can wreak havoc on harder problems or edge cases (especially if there are no experienced engineers with domain knowledge to review the generated code).
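To unpack that "probabilistic next-token generator" remark, here is a minimal toy sketch in Python. The probability table and the `generate` helper are made up purely for illustration; no real model or library works from a hard-coded table like this, but the loop is the basic idea: repeatedly sample a likely next token given the text so far.

```python
import random

# Toy sketch only: NOT how any real LLM or library is implemented.
# It just illustrates "probabilistic next-token generation": given the
# text so far, pick the next token from a probability distribution.
next_token_probs = {
    "you are": [("absolutely", 0.7), ("right", 0.2), ("welcome", 0.1)],
    "you are absolutely": [("right", 0.9), ("correct", 0.1)],
}

def generate(prompt: str, max_new_tokens: int = 5) -> str:
    text = prompt
    for _ in range(max_new_tokens):
        options = next_token_probs.get(text)
        if not options:  # no distribution for this context: stop
            break
        tokens, weights = zip(*options)
        next_token = random.choices(tokens, weights=weights)[0]
        text = f"{text} {next_token}"
    return text

print(generate("you are"))  # most of the time: "you are absolutely right"
```

Scale that table up to billions of learned parameters and you get text that is statistically plausible rather than verified, which is exactly why it sails through simple tasks and can still go wrong on edge cases.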


As I was following this thread, I started to see a pattern: the author’s comments looked AI-generated too:

  • The use of em-dashes, which on most keyboards require a special key combination that most people don’t know; and while in Markdown two dashes will render as an em-dash, this is not true on HackerNews (hence you often see -- in HackerNews comments, where the author is probably used to a Markdown renderer turning it into an em-dash)
  • The notorious “you are absolutely right”, which no living human has ever used before, at least not that I know of
  • The other notorious “let me know if you want to [do that thing] or [explore this other thing]” at the end of the sentence

I was sitting there, refreshing the page, seeing the author being confronted with the use of AI in both their code and their comments, while claiming not to have used AI at all. Honestly, I was thinking I was going insane. Am I wrong to suspect them? What if people DO USE em-dashes in real life? What if English is not their native language, and in their native language it’s fine to use phrases like “you are absolutely right”? Is this even a real person? Are the people who are commenting real?

And then it hit me. We have reached the Dead Internet. The Dead Internet Theory claims that since around 2016 (a whopping 10 years already), the internet is mostly dead, i.e. most interactions are between bots, and most content is machine-generated either to sell you stuff or to game SEO (in order to sell you stuff).

I’m ~~ashamed~~ proud to say that I spent a good portion of my teenage years on the internet, chatting with and learning from real people who knew more than me. Back in the early 2000s, there were barely any bots on the internet. The average non-tech human didn’t know anything about phpBB forums, or about the weird people with pseudonyms who hung out in there. I spent countless hours in IRC channels and on phpBB forums, learning things like network programming, OS development, game development, and of course web development (which has been my profession for almost two decades now). I’m basically a graduate of the Internet University. Back then, nobody had doubts that they were talking to a human being. Sure, you could think that you spoke to a hot girl who in reality was a fat guy, but hey, at least they were real!

But today, I no longer know what is real. I saw a picture on LinkedIn, from a real tech company, posting about their “office vibes” and their happy employees. And then I went to the comment section, and sure enough, the picture was AI-generated (mangled text that does not make sense, weird hand artifacts). It was posted by an employee of the company, it showed other employees of said company, and it was altered with AI to showcase a different reality. Hell, maybe the people in the picture do not even exist!

And these are mild examples. I don’t use social networks (and no, HackerNews is not a social network), but I hear horror stories about AI generated content on Facebook, Xitter, TikTok, ranging from photos of giants that built the pyramids in Egypt, all the way to short videos of pretty girls saying that the EU is bad for Poland.

I honestly got sad that day. Hopeless, if I may say so. AI is easily available to the masses, which allows them to generate a shitload of AI slop. People no longer need to write comments or code; they can just feed this to AI agents that will generate the next “you are absolutely right” masterpiece.

I like technology. I like software engineering, and the concept of an internet where people can share knowledge and create communities. Were there malicious actors on the internet back then? For sure. But what I am seeing today makes me question whether the future we are headed toward is one where technology is still useful. Or, rather, a future where bots talk with bots, and human knowledge just gets recycled and repackaged into “10 steps to fix that [daily problem] you are having” for the sake of selling you more stuff.
