The More Young People Use AI, the More They Hate It

Original link: https://www.theverge.com/ai-artificial-intelligence/920401/gen-z-ai

## Gen Z's Backlash Against the AI Boom

Although Gen Z were early adopters of AI chatbots like ChatGPT, they are showing a marked and growing backlash against the tech industry's push for sweeping AI integration. Polling data indicates that many deeply resent a future they feel is being forced on them, a resentment rooted in fears of job loss, the erosion of critical thinking, and the effects on human relationships.

Far from seeking shortcuts, young people voice concern that AI may weaken fundamental skills and encourage shallow understanding. Many are actively avoiding these tools, even at the cost of potential career advantages. Universities' rapid integration of AI, often driven by industry demand rather than pedagogical value, further fuels this discontent.

A recent Gallup poll shows Gen Z's hope and excitement about AI declining, with nearly 50 percent believing the risks outweigh the benefits. This skepticism extends to a cultural rejection of AI-generated content, which is seen as inauthentic and "uncool." Experts suggest focusing on the pressures that *drive* young people to use AI rather than merely criticizing the tools themselves, and they express concern for future generations growing up amid ubiquitous, unquestioned AI.

## AI and Younger Generations: A Growing Disconnect

A recent Hacker News discussion highlighted younger generations' (particularly Gen Alpha's) growing sentiment toward AI. While older generations wrestle with AI's impact on their *own* jobs, younger users express a more fundamental dislike, questioning the authenticity of work done with AI and even distrusting those who rely on it.

Many commenters noted parallels to earlier generations' reactions to new technologies like the internet, suggesting that Gen Alpha's response may differ from today's anxieties. One developer shared his children's blunt verdict: relying on AI makes a job "not real."

A key concern is the compulsory nature of AI adoption. While the upper class has the luxury of choosing *whether* to use AI, lower-level workers increasingly *must* use it to stay competitive, despite ongoing social criticism of its use. This creates a paradox in which AI is at once a tool necessary for survival and a source of anxiety about creativity and job security.

Ultimately, the conversation points to an underlying power imbalance, with AI's computing power concentrated in the hands of large companies, and calls on younger generations to actively shape, even "weaponize," the technology rather than passively accept it.

## Original Article

It’s been almost three years since Silicon Valley started aggressively pushing large language model-based chatbots like ChatGPT as the supposedly inevitable future of everything, and there’s no group that has felt the pressure quite like Gen Z.

Like with many tech trends before it, it’s no surprise that young people are among the biggest adopters of AI chatbot tools. But contrary to the tales spun by tech companies like OpenAI and Google, polling data shows that Gen Z students and workers are a big part of the wider cultural backlash against AI. And even as they utilize these tools, vast swaths of young people are deeply acrimonious and even resentful of the AI-centric future that many feel is being forced on them.

“The part that feels scariest to me is the human impact … their ability to have relationships or just basic communication.”

Far from the stereotype of lazy young people looking for shortcuts, Gen Zers have had some of the loudest and most detailed objections to generative AI use. Their attitudes also reflect a much wider backlash against AI and the tech industry in general, which has recently resulted in a nonpartisan movement against data centers across the country and threatened both CEOs and politicians supportive of Silicon Valley’s AI frenzy.

Meg Aubuchon, a 27-year-old art teacher living in Los Angeles, says their response and that of many of their peers has been to avoid chatbot tools entirely. “It just makes me want to dig my heels into a career where I never have to use AI, even if that’s a career that isn’t going to pay as well,” Aubuchon told The Verge.

Emerging from academia and into the vice grip of an increasingly brutal job market, young people face an impossible contradiction. They are being told, on the one hand, that these tools are going to eliminate millions of jobs, and on the other that they have to use them if they don’t want to fall behind. They’re the first new generation of adults to navigate a world flooded with chatbots and generative AI slop, after having already lost years of their youth to the covid-19 pandemic. And all the while, Silicon Valley’s multitrillion-dollar push for AI adoption is clashing with their fears of its well-documented impacts — on the environment, disinformation, academic integrity, and our social fabric and emotional well-being, to name just a few.

“The part that feels scariest to me is the human impact, because it impacts people on an individual level and how they relate to other people, whether that be their ability to have relationships or just basic communication,” said Aubuchon.

Sharon Freystaetter, 25, went to school for computer science at a young age and spent three years working as a cloud infrastructure engineer at a major Silicon Valley company. But right as AI hype really started to take off, she left the company, citing ethical concerns and anxiety over the environmental impacts of data centers. Now, she has left the tech industry for good, and says she avoids chatbots and disables AI features in applications whenever possible.

“I think everyone in my immediate peer group is not using AI and is actively against it, besides my friends who are in computer science and are essentially mandated to use it,” Freystaetter, who is now a food service worker in New York, told The Verge. “When I came back and started to look around [for tech jobs], suddenly everything was saying ‘You need to use AI to get this job’ in the requirements.”

Fears that chatbots are wrecking critical thinking and social skills are common among many groups of young adults, even as a wide majority of them admit to using chatbot tools regularly. According to a recent Harvard-Gallup study, 74 percent of young adults surveyed in the United States said they use a chatbot at least once a month (another study found more than half of US college students admit to using the tools for their coursework on a weekly basis). At the same time, 79 percent of those surveyed by Gallup “expressed concern that AI makes people lazier,” and 65 percent said that using chatbots “promotes instant gratification, not real understanding” and prevents people from engaging with ideas in a critical or meaningful way.

“I’ve personally come to the conclusion that it’s a load of bullshit for outsourcing jobs.”

And in a more recent Gallup poll, Gen Z’s opinion of AI tools hit a new low: Only 18 percent now say they are hopeful about the technology, down from 27 percent last year, and only 22 percent say they are excited, down from 36 percent. The number of Gen Z workers who think AI’s risks outweigh its benefits has also increased over the past year by 11 points, to almost 50 percent. And even though 56 percent say the tools help them finish work faster, eight in 10 now admit that using AI in this way makes actual learning more difficult in the future.

To make matters worse, many university students are seeing school administrations awkwardly shoehorn AI into their higher education, consolidate computer science and engineering departments into new “AI” majors, and pen multimillion-dollar deals with AI companies like OpenAI and Anthropic to integrate chatbot tools into academic curricula. And at the same time, young people are graduating into a brutal job market that they complain has been made virtually impossible to navigate as AI automation tools opaquely and arbitrarily filter out their job applications.

Alex Hanna, the director of research at the Distributed AI Research Institute (DAIR), says the way students are being inundated by AI and its accompanying hype is driving their resentment, leading to widespread backlash both inside and outside academia.

“Universities are hearing from employers that they want students who know how to use these tools,” Hanna told The Verge. “This is not because the tools actually have shown much value-add — they want Gen Z to show them where the value-add is. That, or the university is investing or has donors heavily involved in the supply side (e.g., in the tech industry).”

In other words, AI companies and universities are taking an “integrate first, find use cases later” approach that essentially recruits students as marketing for the AI industry while baking these tools deep into the core of academia. At Arizona State University, for example, the school’s administration is using a beta tool called ASU Atomic that uses AI to automatically synthesize professors’ lectures into bite-sized learning materials, 404 Media recently reported.

74 percent of young adults surveyed in the United States said they use a chatbot at least once a month … 65 percent said that using chatbots prevents people from engaging with ideas in a critical or meaningful way.

Last month, the editorial board of the University of Pennsylvania’s student newspaper published a scathing piece criticizing the university administration’s adoption of chatbot tools and its integration of AI topics into nearly every part of its curriculum. While acknowledging the widespread use of chatbots by students, the authors wrote that by uncritically embracing the technology without any clear rules, the school is “only quickening its own demise.”

“AI cannot coexist with education — it can only degrade it. As technology advances and workers are replaced by machines, schools are some of the only places we have left to explore and wrestle with human thought,” the students wrote. “With our own university leading the charge, AI is now corrupting those few sacred spaces and leaving us with nowhere to engage in true scholarship.”

In another letter written by the Oberlin College Luddite Club (appropriately, using a typewriter), students rejected a similar initiative by their school administration to “experiment” with AI-centric education.

“[E]ven one semester of accepted (even encouraged) chat-bot use will jettison our student body down a lazy, irredeemable tunnel of intellectual destruction,” the Oberlin students wrote. “We will not stand by and witness the further atrophying of our liberal arts education. Rather than strengthening Silicon Valley, we build our own skills and generative sweat.”

The fear that chatbot tools will lead to a permanent loss of critical thinking skills ranks high among the worries held by young people about the technology. It’s also backed up by data: A recent study from the MIT Media Lab found that EEG scans of the brain showed decreased activity in people who have been writing essays using AI tools. Other research has found that this process, known as “cognitive offloading,” has a wide range of negative impacts on humans, including diminishing people’s skepticism and their ability to discern truth from deception, leading to “heightened manipulation and weakened democratic decision-making processes.”

The fact that so many young people are well aware of these dangers even as they make use of the tools shows that they aren’t buying the hype of AI boosters like OpenAI’s Sam Altman, who has frequently tried to pitch chatbots as tools for doing everything from writing essays to raising a child. Instead, it suggests that Gen Z is hyper-aware of the tools’ limitations — from their well-documented tendency to “hallucinate” made-up information to the social and emotional cognito-hazards of relying on machines for human advice.

“Altman talks about the technology like it is magic. He has used those words precisely, calling ChatGPT ‘Magic Intelligence in the Cloud,’” said Hanna. “Gen Z is more realistic about what the tools actually can do. They can handle text-based work that they don’t want to do or feel pressured to do. But they are often rather savvy about their limits.”

This is true even among those who aren’t “anti-AI” and say they find chatbot tools useful.

“I spend a lot of time thinking about this stuff and I’ve personally come to the conclusion that it’s a load of bullshit for outsourcing jobs,” Emma Gottlieb, a borderline Zoomer-millennial who works in technical sales for a company that makes equipment for the film industry, told The Verge. Gottlieb says she often uses AI tools to quickly sift through large volumes of technical documents for her job. But she knows better than to take the systems’ outputs at face value.

“I definitely do double-checks, personally. It’s important because somebody will mislabel an eBay listing for a component part, and then the AI will say it has this feature when it really doesn’t,” said Gottlieb. “I wouldn’t say it’s a significant time-saver, but I think it’s just like fast food — it’s easy, it’s cheap, and it’s there.”

AI companies and universities are taking an “integrate first, find use cases later” approach.

There’s one other explanation for Gen Z’s stance on AI tools that isn’t measured in data points: AI use has become culturally toxic, and many young people (like their older counterparts) won’t admit to using it out of social shame. The use of AI-generated visuals and text is frequently a subject of ridicule on social media, and any anecdotal sampling of young people will suggest that most find it fake and deeply uncool — especially when it’s used to circumvent the creative process and pass off ugly-looking slop as “AI art.”

Lacking any clear-cut rules, AI use also causes distrust and anxiety within academia, not just between students and professors, but among peers. According to one University of Pittsburgh study, students viewed the use of AI tools as a “red flag” that causes them to “think less” of their peers.

But Hanna says that a more critical approach is necessary — one that “punches up” at the CEOs, marketing teams, and school administrations that are pushing these tools as universal thinking machines, and focuses on the material conditions that pressure young people to use them in the first place.

“Speaking as an elder millennial, I approach Zoomers who use these tools with a bit more empathy,” said Hanna. “Why do they feel compelled to use them? What material conditions do they face at school such that they are feeling so pressured? Is there a way to offer them another kind of pressure valve? … That’s likely a better place to begin from.”

Freystaetter and Gottlieb both say that instead of their own generation, they are more worried about Gen Alpha and other young people that come after them, who lose their chance to develop healthy relationships with technologies when they become mandatory and ubiquitous.

“These are the kids who are growing up with [AI] integrated into everything, and with ease of access,” Freystaetter said. “They grow up not knowing that they should be critical of it, and that they’re being influenced by it.”
