The AI Backlash Is Here: Why Public Patience with Tech Giants Is Running Out

Original link: https://www.newsweek.com/ai-backlash-openai-meta-friend-10807425

Public skepticism toward generative AI is growing rapidly, shifting from initial optimism to widespread cynicism. Fueled by examples like absurd deepfakes (including a viral video of Sam Altman appearing to shoplift) and ineffective AI-driven ad campaigns, people are questioning the technology's purpose and benefits. Early surveys showed promise, but most Americans now believe AI is more likely to harm them than help them. The discontent has manifested as online mockery and public vandalism of AI brand advertising, with slogans like "surveillance capitalism" scrawled across posters. Concerns center on job losses, the lack of genuine connection in AI-generated content, and the exploitation driving its development. Despite enormous global investment, topping $320 billion in the first half of 2025 alone, experts warn of a possible bubble, citing a lack of real customer demand and unsustainable investment patterns. While proponents emphasize potential long-term benefits, the current flood of AI "slop" is breeding resistance, with terms like "clanker" emerging as slang for job-replacing machines. The future remains uncertain, but growing anxiety suggests the AI boom may have been oversold.

## The AI Backlash and Growing Frustration (Hacker News Summary)

A recent Newsweek article sparked discussion on Hacker News, highlighting growing public fatigue with AI that has moved beyond quality concerns to questioning the impact of its widespread adoption. Users voiced frustration with AI's intrusive presence, unwanted "help" and forced integration into workflows, and a feeling of being infantilized by constant suggestions. Many argued that AI has become a scapegoat for deeper economic anxieties, particularly around job security and the lack of a social safety net. Commenters also raised concerns about rising hardware costs tied to AI development, and about companies profiting from AI while laying off workers. A recurring theme was that AI tools often lack nuance and genuine value, with some users vowing to avoid them entirely. While acknowledging AI's potential, commenters emphasized a desire for agency and control, wanting to integrate AI on *their* own terms rather than having it imposed. The discussion suggests a "hard reset" may be needed to address the hype and bring AI development back down to earth.

Original Article

On OpenAI’s new social app, Sora 2, a popular video shows a disturbingly lifelike Sam Altman sprinting out of a Target store with stolen computer chips, begging police not to take his “precious technology.” The clip is absurdist, a parody of the company’s own CEO, but it also speaks to a larger conversation playing out in dinner conversations, group chats and public spaces around the country: What, exactly, is this technology for?

From ads scrawled with graffiti to online comment sections filled with mockery, the public's patience with AI-generated media is starting to wear thin. Whether in YouTube comments deriding synthetic ad campaigns or in Sharpie scribbled across New York City subway posters for AI startups, the public's discontent with the AI boom is growing louder.

What began in 2022 as broad optimism that generative AI would make people's lives easier has shifted into deep cynicism: the technology heralded as a game changer seems, in fact, to be changing the game only for Silicon Valley's richest technologists, who are benefiting from a seemingly endless supply of money to build AI projects, many of which don't appear to solve any actual problems. Three years ago, as OpenAI's ChatGPT was making its splashy debut, a Pew Research Center survey found that nearly one in five Americans saw AI as more of a benefit than a threat. But by 2025, 43 percent of U.S. adults believed AI is more likely to harm them than help them in the future, according to Pew.


Slop as a Service

As AI spreads, public skepticism is turning into open hostility toward its products and ads. Campaigns made with generative AI are mocked online and vandalized in public. Friend, a startup that spent $1 million on a sprawling campaign in the New York City subway with more than 11,000 advertisements on subway cars, 1,000 platform posters, and 130 urban panels, has been hit especially hard. Most of its ads were defaced with graffiti calling the product "surveillance capitalism" and urging people to "get real friends."

"AI doesn't care if you live or die," reads one tag on a Friend ad in Brooklyn.

Other brands like Skechers are seeing similar backlash for an AI-generated campaign showing a distorted woman in sneakers, dismissed as lazy and unprofessional. Many of the Skechers subway posters were quickly defaced — some tagged with “slop,” the memeified shorthand for AI’s cheap, joyless flood of content, now embodied by the Altman deepfakes flooding Sora.

“The idea of authenticity has long been at the center of the social media promise, for audiences and content creators alike. But a lot of AI-generated content is not following that logic,” said Natalia Stanusch, a researcher at AI Forensics, a nonprofit that investigates the impact of artificial intelligence on digital ecosystems.

“With this flood of content made using generative AI, there is a threat of social media becoming less social and users are noticing this trend,” she told Newsweek.

'Wildly Oversold'

In an era where the boundaries between the digital and physical worlds are becoming nearly indistinguishable, one thing is becoming increasingly clear: the skepticism toward generative artificial intelligence is rising on both sides of the political divide. What once held the promise of innovation in the arts—an AI that could generate art, compose music or write coherent, even beautiful, prose—has begun to feel more like saturation.

The friction isn’t just about quality—it’s about what the ubiquity of these tools signals. In entertainment, backlash has mounted as high-profile artists find themselves cloned without consent. After an AI-generated song mimicking his voice went viral on TikTok, rapper Bad Bunny lashed out on WhatsApp, telling his 19 million followers that, if they enjoyed the track, “you don’t deserve to be my friends.” Similar complaints came from Drake and The Weeknd, whose AI replicas were pulled from streaming platforms after public outcry.

“The public is finally starting to catch on,” said Gary Marcus, a professor emeritus at NYU and one of the field’s most vocal critics. “Generative AI itself may be a fad and certainly has been wildly oversold.”

That saturation, according to Marcus and others, has less to do with AI’s breakthroughs and more to do with the way companies have stripped out human labor under the guise of innovation. It's a shift that has turned into backlash—one fueled not only by developers and ethicists but by cultural figures, creators and the general public.

Alex Hanna, director of research at the Distributed AI Research Institute (DAIR) and co-author of The A.I. Con: How to Fight Big Tech’s Hype and Create the Future We Want—a critique of large language models (LLMs), the technology behind AI systems like ChatGPT and Sora—told Newsweek that public opinion is increasingly aligning with his criticism.

“We’re seeing this narrative that AI is this inevitable future and it's being used to shut down questions about whether people actually want these tools or benefit from them,” Hanna said. “It becomes an excuse to displace workers, to automate without accountability, and with serious questions about its impact on the environment.”

“Companies want to make it look like AI is magic,” Hanna added. “But behind that magic is a labor force, data that’s been extracted without consent and an entire system built on exploitation.”

One telling example: Meta’s recent launch of Vibes, a TikTok-style video app featuring only AI-generated content, was met with widespread mockery. “No one asked for this,” one viral post read. Stanusch, of AI Forensics, agreed: “For the near future, we don’t expect this adoption to slow down but rather increase,” she said.

Even as capital flows into AI infrastructure buildouts, the cultural effect of so much "slop" is creating its own language of resistance. The term “clanker”—borrowed from Star Wars and repurposed by Gen Z—has exploded in popularity on TikTok as a meme-slur for robots and AI systems replacing human jobs. The term, while satirical, reflects deeper anxieties about labor displacement, particularly among younger workers entering an economy being transformed by AI.

Still, some see a long-term upside. “The robots are coming, and they’re coming for everyone’s jobs," said Adam Dorr, director of research at RethinkX, in an interview with Newsweek. “But in the longer term, AI could take over the dangerous, miserable jobs we’ve never wanted to do.”

Dorr, like others, urges caution—not rejection. “The challenge is: how do we make this transformation safely?” he said. “People are right to be scared. We’re already on the train—and the destination may be great but the journey will be chaotic.”

The Bubble Threat

From mental health chatbots and short-form video apps to corporate ad campaigns and toilet cameras that can analyze feces, AI is everywhere, and billions of dollars are still pouring in.

But saturation breeds doubt: what might look like cutting-edge innovation to investors is starting to look like a bubble to everyone else.

In just the first half of 2025, global investment in AI infrastructure topped $320 billion, with $225 billion coming from U.S. hyperscalers and sovereign-backed funds, according to IDC. Microsoft alone committed over $50 billion to data center expansion this year. Meta, Amazon, OpenAI and others are backing the $500 billion Stargate AI initiative — championed by the Trump administration.

Since returning to office, Donald Trump has made AI central to his economic agenda, fast-tracking permitting for AI infrastructure and declaring in a recent speech: “We will win the AI race just like we did the space race.”

But many experts are unconvinced the numbers add up. “AI spending outpacing current real economic returns is not a problem—that’s what many innovative technologies call for,” Andrew Odlyzko, professor emeritus at the University of Minnesota, told Newsweek. “The problem is that current (and especially projected) AI spending appears to be outpacing plausible future real economic returns.”

Odlyzko warned that much of the sector is propped up by “circular investment patterns,” in which AI companies fund one another without enough real customer demand. In one such example, Nvidia recently said it would invest $100 billion in OpenAI to help it build massive data centers, essentially backstopping its own customer. “If there was a big rush of regular non-AI companies paying a lot for AI services, that would be different," Odlyzko said. "But there is no sign of it.”

Other experts like British technology entrepreneur Azeem Azhar have compared the current capex boom to past busts. “The trillions pouring into servers and power lines may be essential,” he wrote on his Substack, “but history suggests they are not where enduring profits accumulate.”

And while lawsuits over AI training data have begun piling up—including one filed by The New York Times against OpenAI—others center on how generative tools imitate distinct styles. A viral 2025 trend saw ChatGPT produce Studio Ghibli-style images so convincingly that it appeared the beloved Japanese animation studio had endorsed the platform. It had not.

Meanwhile, AI remains deeply unprofitable at scale. Last month, the consulting firm Bain predicted the AI industry would need to generate $2 trillion in combined annual revenue by 2030 to meet expected data center demand, leaving a shortfall of roughly $800 billion.

“There is a lack of deep value,” the tech columnist and AI critic Ed Zitron told Newsweek. “The model is unsustainable.” And yet, with billions of dollars and the weight of national policy behind it, even skeptics agree: if and when the AI bubble bursts, its impact will ripple far beyond Silicon Valley.
