Atlantic: Sam Altman Is Losing His Grip on Humanity

Original link: https://www.theatlantic.com/technology/2026/02/sam-altman-train-a-human/686120/

At a recent AI summit in India, OpenAI CEO Sam Altman defended generative AI against criticism of its high energy consumption by comparing it to the resources required to create a human being. He argued that "training a human," including 20 years of life, sustenance, and humanity's evolutionary history, also requires enormous amounts of energy. While this claim is factually contestable, Altman's comments highlight a growing trend within the AI industry: likening AI development to human life. Anthropic's CEO has drawn a similar analogy, and both companies have even explored concepts such as "model welfare" and "distress" in their chatbots. This anthropomorphization is troubling; it may stem from a sincere belief in AI's growing power, or it may be a calculated PR strategy to justify massive investment (OpenAI is seeking an $800 billion valuation). Critics argue that this mindset risks devaluing human life and the environment, particularly as AI's energy demands contribute to climate change through data centers' reliance on fossil fuels. Ultimately, Altman's comparison reveals a disconnect between the industry and a fundamental understanding of what it means to be human.

A recent Atlantic article about Sam Altman is generating discussion on Hacker News. Some commenters feel the piece relies on a negative framing ("Sam Altman bad") without offering substantive insight, while others argue that Altman, as OpenAI faces unsustainable costs even while achieving impressive results, is increasingly resorting to weak arguments. The discussion centers on whether investing in AI, even given its enormous energy requirements, is more valuable than raising human problem-solvers. Some users contend that Altman exhibits traits rooted in a poor upbringing, comparing him to individuals with unchecked power and even citing figures like Elon Musk as an analogy. One commenter linked Altman's behavior to an "authoritarian personality," arguing that his need for control and his discomfort with perceived "impurity" drive his actions. The core concern seems to be the potential consequences of allowing such a person to shape the future with powerful technology.

Original Article

Last Friday, onstage at a major AI summit in India, Sam Altman wanted to address what he called an “unfair” criticism. The OpenAI CEO was asked by a reporter from The Indian Express about the natural resources required to train and run generative-AI models. Altman immediately pushed back. Chatbots do require a lot of power, yes, but have you thought about all of the resources demanded by human beings across our evolutionary history?

“It also takes a lot of energy to train a human,” Altman told a packed pavilion. “It takes, like, 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took, like, the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science and whatever to produce you, and then you took whatever, you know, you took.”

He continued: “The fair comparison is, if you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question, versus a human? And probably, AI has already caught up on an energy-efficiency basis, measured that way.”

Altman’s comments are easy to pick apart. The energy used by the brain is significantly less than even efficient frontier models for simple queries, not to mention the laptops and smartphones people use to prompt AI models. It is true that people have to consume actual sustenance before they “get smart,” though this is also a helpful bit of redirection on Altman’s part—the real concern with AI is not really the resources it demands, but the amount it contributes to climate change. Atmospheric carbon dioxide is at levels not seen in millions of years—driven not by the evolution of the 117 billion people and all of the other critters to have ever existed in the course of evolution, but by contemporary human society and combustion turbines akin to those OpenAI is setting up at its Stargate data centers. Other data centers, too, are building private, gas-fired power plants—which collectively will likely be capable of generating enough electricity for, and emitting as much greenhouse-gas emissions as, dozens of major American cities—or extending the life of coal plants. (OpenAI, which has a corporate partnership with the business side of this magazine, did not respond to a request for comment when I reached out to ask about Altman’s remarks.)

But what’s really significant about Altman’s words is that he thought to compare chatbots to humans at all. Doing so suggests that he views people and machines on equal terms. He didn’t fumble his words; this is a common, calculated position within the AI industry. Altman made an almost identical statement to Forbes India at the same AI summit. And a week ago, Dario Amodei—the CEO of Anthropic, and Altman’s chief rival—made a similar analogy, likening the training of AI models to human evolution and day-to-day learning. The mindset trickles down to product development. Anthropic is studying whether its chatbot, Claude, is conscious or can feel “distress,” and allows Claude to cut off “persistently harmful or abusive” conversations in which there are “risks to model welfare”—explicitly anthropomorphizing a program that does not eat, drink, or have any will of its own.

AI firms are convinced either that their products really are comparable to humans or that this is good marketing. Both options are alarming. A genuine belief that they are building a higher power, perhaps even a god—Altman, in the same appearance, said that he thinks superintelligence is just a few years away—might easily justify treating humans and the planet as collateral damage. Altman also said, in his response to concerns about energy consumption, that the problem is real because “the world is now using so much AI”—and so societies must “move towards nuclear, or wind and solar, very quickly.” Another option would be for the AI industry to wait.

If Altman’s comparison of chatbots and people is purely a PR tactic, it is a deeply misanthropic one. He is speaking to investors. The notion that AI labs are building digital life has always been convenient to their myth, of course, and OpenAI is reportedly in the middle of a fundraising round that would value the company at more than $800 billion—nearly as much as Walmart.

Tech companies may genuinely want to develop AI tools for the benefit of all humanity, to echo OpenAI’s founding mission, and genuinely believe that they need to raise vast amounts of cash to do so. But to liken raising a child—or, for that matter, the evolution of Homo sapiens—to developing algorithmic products makes very clear that the industry has lost touch, if it ever had any, with what it means to be human. To “train a human”—that is, to live a life—is to struggle, to accept the possibility of failure, and to sometimes meander simply in search of wonder and beauty. Generative AI is all about cutting out that process and making any pursuit as instant, efficient, and effortless as possible. These tools may serve us. But to put them on the same plane as organic life is sad.
