The Uncanny Valley and the Rising Power of Anti-AI Sentiment

Original link: https://localscribe.co/posts/uncanny-valley-and-rising-power-of-anti-ai-sentiment/

## The Growing Disconnect: Why Public Views of AI Are Turning Negative

Recent surveys reveal a striking gap between expert optimism about AI and public concern. While 76% of AI experts believe AI will benefit them personally, only 24% of the U.S. public shares that view, and the number is falling fast: more than half now believe AI does more harm than good. This is not ordinary skepticism toward a new technology but a deeper, more visceral reaction.

The negative sentiment stems from concerns about job loss, misinformation, privacy, and concentration of power. A key factor, however, is the "uncanny valley" effect: AI's near-human qualities invite expectations of genuine understanding, empathy, and competence that are never quite met. That mismatch triggers unease, disgust, and even existential dread, subtly reminding us of our own mortality.

Repeated exposure to these imperfect imitations across text, voice, and video compounds the effect. Design consistency or truly convincing AI could ease it, but deliberately stepping away from human likeness may be the most effective approach. Ultimately, anti-AI sentiment is not just about reasoned concerns; it is a bodily, emotional response driven by repeated encounters with something *almost* human yet fundamentally "off."

## Growing Anti-AI Sentiment: Summary

A recent Hacker News discussion highlights growing negative sentiment toward AI, rooted in concrete grievances rather than simple technophobia. Users report frustration with AI-driven price increases for hardware (GPUs, SSDs) and with floods of AI-generated spam polluting online forums, making genuine interaction difficult. Concerns also extend to the environmental impact of AI data centers and potential job losses.

Many say they want to *use* AI as a tool but resent others using it *to replace* them, or encountering low-quality AI-generated content that wastes their time. The sentiment appears especially strong in the U.S., possibly reflecting a lack of trust in government regulation.

The discussion suggests this is not a typical culture war but a conflict between those who benefit from AI and those who feel its downsides. Some argue that AI is fueling the wasteful production of content, worsening existing problems of quality assessment. Ultimately, the core complaint seems to be that AI is perceived as prioritizing capital gains over benefits to ordinary people, along with growing resentment at having one's time wasted by its output.

## Original Text

Recent survey data show a wide gap between public and expert views of AI. In Pew's 2025 survey, 76% of AI experts said AI would benefit them personally, while only 24% of the U.S. public said the same. The public was much more likely to say AI would harm them than benefit them.

Negative public sentiment also appears to be growing. In March 2026, Quinnipiac found that 55% of Americans thought AI would do more harm than good in their day-to-day lives, up from 44% in April 2025. It also found that 64% thought AI would do more harm than good in education.

Public hostility toward AI now looks stronger than ordinary skepticism toward a new technology. People have reasons for that response, including fraud, misinformation, privacy invasion, concentration of power, and job displacement. Job displacement carries its own emotional weight because it threatens status, livelihood, and social usefulness, which gives the fear an existential edge.

This essay explores why anti-AI sentiment may be gaining force. AI may now be producing a more ambient uncanny reaction across daily life. That would help explain why public reaction often sounds disgusted, unsettled, and bodily rather than merely doubtful.

The Uncanny Field

Masahiro Mori introduced the uncanny valley in 1970. In Mori's original formulation, corpses and zombies sat deep in the valley, so death and lifeless human resemblance were built into the concept from the beginning. The original graph tied human likeness directly to revulsion once the likeness crossed into something animate-seeming and lifeless.

The literature still offers several explanations for why that drop in affinity happens. Reviews continue to describe competing accounts involving mismatch, category ambiguity, expectation violation, disgust, and threat-related mechanisms rather than one settled model.

AI now appears in forms that trigger human expectations throughout daily life. People encounter text that sounds conversational, voices that sound natural, images and video that almost pass, and agents that mimic competence, memory, initiative, or empathy. Reactions that once centered on a robot or replica may now be attaching to AI as a category.

Repeated contact gives that shift its force. A chatbot that sounds empathic and hollow, a synthetic voice that feels almost right, or a generated video clip that collapses on inspection may each be minor on its own. Repeated often enough, though, such encounters can make AI as a category feel socially off.

Mismatch is still the strongest basic explanation. AI keeps presenting cues that invite human social expectations, then fails to satisfy them reliably. Natural language invites expectations of understanding. Warm tone invites expectations of care. Realistic video invites expectations of authenticity. Agentic behavior invites expectations of judgment and competence. Repeated breaks in those expectations make aversion easier to understand. Research on face-voice mismatch and realism consistency supports that picture.

Repeated exposure may also change the reaction over time. Some work on repeated interaction with robots suggests uncanniness can decrease with familiarity in certain contexts. Even then, familiarity can reduce the startle response while leaving behind a more stable sense that the category is untrustworthy. That possibility fits AI especially well because people are encountering many versions of the same near-human pattern across modalities.

Disgust and Danger

Disgust and disease-avoidance are longstanding candidates in uncanny valley theory. The basic idea is that near-human abnormalities can activate evolved avoidance responses because deviations in appearance or behavior may function as cues of illness, contamination, or threat. A 2025 study on virtual agents explicitly frames its findings in terms of the pathogen-avoidance hypothesis.

Danger-avoidance is a wider evolutionary version of that argument. Moosa and Ud-Dean argue that pathogen avoidance alone is too narrow because even a fresh corpse can provoke strong aversion before visible decay appears. Their proposal is that the uncanny valley reflects a danger-avoidance system more generally. That is relevant for AI because near-human abnormality may be enough to trigger caution or revulsion even when the stimulus does not resemble a diseased body.

AI often presents the kind of near-human abnormality that could fit that account. It speaks with confidence without understanding. It performs social fluency without satisfying the conditions that make human social signals trustworthy. That kind of mismatch could plausibly recruit disgust or danger-detection processes even when the stimulus is text, voice, or video rather than a literal body.

Mortality salience and terror management theory point to another mechanism. MacDorman explicitly connected the uncanny valley to terror management theory, proposing that highly humanlike robots may feel eerie partly because they act as reminders of death and human vulnerability. Related work by Ho and MacDorman also links uncanny reactions to fears associated with dying and to psychological defenses against mortality. Put bluntly, what people consider AI slop could remind them of their own mortality because it looks or sounds human while feeling hollow and intrusive.

AI now circulates alongside explicit existential discourse. People encounter AI together with warnings about extinction, superintelligence, replacement, and loss of control. Mortality cues may therefore be present in two forms at once. One is implicit and tied to uncanny responses to near-human but hollow systems. The other is explicit and tied to the surrounding narratives of existential risk and human displacement. Job displacement belongs here too because it is closely tied to redundancy, status loss, and diminished social value.

These mechanisms can accumulate rather than compete. AI can repeatedly produce social mismatch while also activating older aversion systems and explicit existential fears. Under those conditions, anti-AI sentiment would be expected to feel more forceful than a standard policy disagreement.

Why This Could Make Anti-AI Sentiment Stronger Now

The Affective Layer

This affective layer helps explain why negative public sentiment toward AI can seem more intense than the arguments alone would predict. People are evaluating AI through explicit beliefs about misuse, power, labor, and privacy. They may also be evaluating it through repeated low-level experiences of social wrongness, aversion, and existential unease.

The public-expert gap also looks different through this lens. Experts interact with AI through a frame centered on capability, utility, and technical progress. Much of the public encounters AI as disruption, intrusion, imitation, or threat. Low-level uncanny reactions inside those encounters would help explain why reassurance about usefulness often fails to touch the emotional center of the response.

Limits

The uncanny valley literature remains mixed. Extending it from robots and embodied replicas to AI as a field is a conceptual move rather than an established finding. Repeated exposure can reduce uncanniness in some settings. Cross-national differences in anti-AI sentiment may also have more to do with regulation, media framing, labor conditions, or institutional trust than with disgust or mortality salience.

AI is now a category people keep running into across text, voice, video, and agents. If those repeated encounters keep activating mismatch, disgust, danger-avoidance, and mortality-related responses, then some share of anti-AI sentiment may be building from the body upward as well as from explicit argument.

Design and Public Reaction

AI products already create expectations through their degree of humanness. Chatbots, voice agents, avatars, AI tutors, customer service agents, companions, generated video, and humanoid robots all use human cues in different ways. Wording, voice, timing, memory, emotional tone, visual realism, and behavior can either fit together or clash.

The earlier research points to two clear ways out of the uncanny zone, and it suggests a third design path. The first is consistency. When a system's cues fit together across wording, voice, timing, visual design, and behavior, it violates fewer expectations. The second is full convincingness. In the classic uncanny valley framework, affinity rises again when a system gets far enough past the valley to feel genuinely convincing. The third is purposeful distance from humanness. This is the design lesson I would add. A system that stays clearly stylized, machine-like, or socially distinct may trigger fewer human expectations in the first place.

Humanoid robotics may intensify the problem. Multimodal AI already creates a less embodied uncanny effect through text, voice, video, and agents. Robotics adds gait, facial motion, timing, touch, and physical presence, which is closer to classic uncanny valley terrain. As anthropomorphized robots get closer to passing as human without fully succeeding, they may produce a stronger embodied wave of disgust and fear.

Anti-AI sentiment may be growing in force because repeated exposure is making AI feel increasingly hollow and intrusive. Political, economic, and ethical explanations remain central. This commentary adds an account of why the reaction can feel so visceral.

The views expressed here are my own and do not necessarily reflect the views of any current or future employer, training site, academic institution, or affiliated organization.
