AI’s big rift is like a religious schism

Original link: https://www.programmablemutter.com/p/the-singularity-is-nigh-republished

A recent essay titled “The Singularity Is Nigh!”, written by Henry Farrell at Programmable Mutter, discusses the current state of artificial intelligence (AI) and its potential impact on society. Farrell examines two opposing quasi-religious movements within the AI field: one that believes in the benefits of godlike intelligence, and another gripped by fear of existential risk. These factions evolved from ideas first proposed nearly three decades ago by the writer and scientist Vernor Vinge. While Ray Kurzweil, another prominent AI advocate, foresees a future in which humans and AI merge into God, others fear catastrophic consequences if superhuman intelligence cannot be aligned with human values. Industry figures such as the entrepreneur Marc Andreessen and the musician Grimes back a vision of godlike intelligence guided by reason rather than fear. Yet these efforts often meet resistance rooted in entrenched ideologies around data manipulation and governance. The industry’s widening rift hampers productive dialogue on key questions beyond the Singularity, including social impact and accountability measures. The essay ultimately stresses the need to listen actively and to acknowledge rarely heard voices, while warning against charging ahead without due consideration. In sum, the piece identifies divergent schools of thought among AI enthusiasts, highlighting conflicting ideologies that risk undermining productive discussion of technology’s role in society.

The concept of the Singularity refers to a hypothetical event in AI research in which the cognitive abilities of AI systems significantly and irreversibly surpass the brightest human minds, triggering rapidly accelerating self-improvement and potentially producing “superhuman” general intelligence. However, unlike thermodynamically limited processes such as heat flow or entropy production, there seems to be a significant possibility that increased compute resources will enable continuous advances in AI capability that could greatly exceed human performance without necessitating a discontinuous threshold of performance. Instead, progress appears to be increasingly incremental rather than abrupt or singular. Nevertheless, the consequences of such AI advancements may involve a radical departure from the status quo that would significantly affect numerous areas of human activity and interaction, raising questions regarding the nature, scope, and degree of control over and responsibility for this emerging phenomenon. As a result, these issues present pressing challenges and opportunities for individuals, organizations, and societies seeking to adapt effectively and constructively to rapidly evolving trends in AI technologies and applications.
Related articles

Original text

[NB - the below has just been published by The Economist under the title “AI’s big rift is like a religious schism, says Henry Farrell.” It is being republished here, for six months only, with The Economist’s permission. The picture above (replacing a blander Singularity picture), is what ChatGPT4 comes up with when asked for “an inspiring religious portrait of Based Buff Jesus ushering in the technocapital Singularity”]

Two centuries ago Henri de Saint-Simon, a French utopian, proposed a new religion, worshipping the godlike force of progress, with Isaac Newton as its chief saint. He believed that humanity’s sole uniting interest, “the progress of the sciences”, should be directed by the “elect of humanity”, a 21-member “Council of Newton”. Friedrich Hayek, a 20th-century economist, later gleefully described how this ludicrous “religion of the engineers” collapsed into a welter of feuding sects.

Today, the engineers of artificial intelligence (AI) are experiencing their own religious schism. One sect worships progress, canonising Hayek himself. The other is gripped by terror of godlike forces. Their battle has driven practical questions to the margins of debate.

Both cults are accidental by-products of science fiction. In 1993 Vernor Vinge drew on computer science and his fellow science-fiction writers to argue that ordinary human history was drawing to a close. We would surely create superhuman intelligence sometime within the next three decades, leading to a “Singularity”, in which AI would start feeding on itself. The future might be delightful or awful, depending on whether machines enhanced human intelligence or displaced it.

Some were optimistic. The futurist Ray Kurzweil wrote an enormous tome, “The Singularity is Near”, predicting a cusp in 2045. We humans would become immortal, spreading intelligence throughout the universe, and eventually merging into God. For all its statistics and exponentials, the book prophesied “the Rapture of the Nerds”, as one unkind critic called it. Its title really should have been “The Singularity is Nigh”.

Others feared the day of judgment. Eliezer Yudkowsky, a self-taught AI researcher, was deeply influenced by Mr Vinge’s ideas. He fathered Silicon Valley’s “rationalist” movement, which sought to improve human reasoning and stop AI destroying humankind.

Rationalists believed that Bayesian statistics and decision theory could de-bias human thinking and model the behaviour of godlike intelligences. They revelled in endless theoretical debates, like medieval Christian philosophers disputing the nature of angels, applying amateur game theory instead of Aristotelian logic. Sometimes their discussions were less erudite. Mr Yudkowsky popularised his ideas in a 660,000-word fan-fiction epic, “Harry Potter and the Methods of Rationality”.
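The Bayesian updating the rationalists prized is, at its core, just repeated application of Bayes’ rule: revise your confidence in a hypothesis in proportion to how much more likely the evidence is if the hypothesis is true than if it is false. A minimal sketch, with hypothetical numbers chosen purely for illustration:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a hypothesis after one observation.

    prior: probability the hypothesis is true before seeing the evidence
    likelihood_if_true / likelihood_if_false: probability of observing the
    evidence under each case.
    """
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Hypothetical example: start 10% confident, then observe evidence that is
# eight times likelier if the hypothesis is true than if it is false.
posterior = bayes_update(0.10, 0.80, 0.10)
print(round(posterior, 3))  # 0.471
```

Strong evidence moves a weak prior a long way, but never all the way to certainty — which is why the rationalists treated calibration, not conviction, as the mark of good reasoning.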

Rationalists feared that superhuman AIs wouldn’t have our best interests at heart. One notorious thought experiment—a modern version of Pascal’s wager, dubbed “Roko’s basilisk”—claimed that logic dictated that future divine intelligences would torture anyone who had known that AI was possible and hadn’t devoted themselves to bringing it into existence. AIs might also use their awesome reasoning powers to escape any limits that humans imposed on them, creating an “x risk” (existential risk) to human survival.

Rationalism explains why AI pioneers became obsessed with x risk. Sam Altman, Elon Musk and others founded OpenAI, the creator of ChatGPT, as a non-profit so that it wouldn’t duck the dangers of machine intelligence. But the incentives shifted as the funding flooded in. Some OpenAI staffers feared that their employer cared more about the opportunities than the dangers and defected to found Anthropic, a rival AI firm. More recently, clashes over AI risk, money and power reportedly led to the fracture between Mr Altman and his board.

If rationalists are frustrated by Silicon Valley’s profit model, Silicon Valley is increasingly frustrated by rationalism. Marc Andreessen, the co-founder of Andreessen Horowitz, a venture-capital firm, fulminated in June that the extremist AI-risk “cult” was holding back an awesome AI-augmented future, in which humanity could reach for the stars.

This backlash is turning into its own religion of the engineers. Grimes, a musician and Silicon Valley icon, marvels that AI engineers are “designing the initial culture of the universe”. She calls for a “Council of Elrond” (a nod to “The Lord of the Rings”) comprising the “heads of key AI companies and others who understand it” to set AI policy. Grimes met Mr Musk, the father of her children, through a shared joke about Roko’s basilisk.

In October Mr Andreessen published his own “Techno-Optimist Manifesto” to wide acclaim from Silicon Valley entrepreneurs. In it, he takes aim at a decades-long “demoralisation campaign…against technology and life”, under various names including “sustainable development goals”, “social responsibility”, “trust and safety” and “tech ethics”. Efforts to decelerate AI “will cost human lives” and are thus tantamount to “murder”.

Mr Andreessen’s manifesto is a Nicene creed for the cult of progress: the words “we believe” appear no less than 113 times in the text. His list of the “patron saints” of techno-optimism begins with Based Beff Jezos, the social-media persona of a former Google engineer who claims to have founded “effective accelerationism”, a self-described “meta-religion” which puts its faith in the “technocapital Singularity”.

Our future is currently being built around Mr Vinge’s three-decades-old essay, a work that only Silicon Valley thinkers and science-fiction fans have read. Warring cults dispute whether engineers are as gods, or just unwitting Dr Frankensteins.

This schism is an attention-sucking black hole that makes its protagonists more likely to say and perhaps believe stupid things. Of course, many AI-risk people recognise that there are problems other than the Singularity, but it’s hard to resist its relentless gravitational pull. Before Mr Andreessen was fully dragged past the event horizon, he made more nuanced arguments about engineers’ humility and addressing the problems of AI as they arose.

But we need even more to listen to other people. Last month, at Rishi Sunak’s global AI-policy summit, Mr Musk pontificated about the need for an “off switch” for hostile AI. The main event was all about x risk and AI’s transformative promise, consigning other questions to a sideshow dubbed the “AI Fringe”.

At the same time, Rachel Coldicutt, a British tech thinker, was putting together a “Fringe of the Fringe”, where a much more diverse group of thinkers debated the topics that hadn’t made the main agenda: communities, transparency, power. They didn’t suggest a Council of the Elect. Instead, they proposed that we should “make AI work for eight billion people, not eight billionaires”. It might be nice to hear from some of those 8bn voices.■

Henry Farrell is a professor of international affairs and democracy at Johns Hopkins University, and co-author of “Underground Empire: How America Weaponized the World Economy”.
