Google DeepMind Veteran Raises $1.1 Billion For AI That Doesn't Train On Human Data

原始链接: https://www.zerohedge.com/technology/google-deepmind-veteran-raises-11-billion-ai-doesnt-train-human-data

## Ineffable Intelligence: A $5.1 Billion Bet on Reinforcement Learning

DeepMind veteran David Silver, the architect of AlphaGo, the AI that defeated the world Go champion, has launched Ineffable Intelligence with $1.1 billion in funding. Unlike the current AI boom's focus on large language models (LLMs), Silver believes the path to true "superintelligence" lies in **reinforcement learning**, in which AI systems learn through trial and error and self-play in simulated environments. Silver argues that LLMs are fundamentally constrained by their reliance on existing human data, which he likens to a "fossil fuel" approach. He envisions "superlearners" that autonomously discover new knowledge and surpass human capability, as AlphaGo did in Go. Ineffable Intelligence aims to create such agents inside simulated environments, where they can evolve without the limits of pre-existing datasets and potentially produce breakthroughs in science, technology, and beyond. Silver stresses the importance of a dedicated lab focused entirely on this approach, in contrast to the LLM-centric research that currently dominates the field.


Original article

Authored by Jason Nelson via decrypt.io,

In brief

  • DeepMind veteran David Silver raised $1.1 billion for his new startup Ineffable Intelligence at a $5.1 billion valuation.
  • Silver says reinforcement learning, not large language models, is the best path to superintelligence.
  • The startup aims to build AI “superlearners” that learn through simulations and self-play.

David Silver, the DeepMind scientist behind AlphaGo’s historic 2016 win over world Go champion Lee Sedol, has raised $1.1 billion to launch a startup betting that the next era of AI won’t come from today’s dominant technology.


Silver’s company, Ineffable Intelligence, launched in January at a $5.1 billion valuation and is betting on reinforcement learning, a method where AI systems improve through trial and error. Silver argues that approach, rather than the large language models now dominating the field, offers a more credible route to superintelligence.
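Ineffable Intelligence has not disclosed its methods, so as a minimal sketch of what "improving through trial and error" means in reinforcement learning, here is textbook tabular Q-learning on a toy six-state corridor. The environment, hyperparameters, and function names are illustrative, not anything from Silver's startup:

```python
import random

def q_learning(n_states=6, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a corridor: start in state 0, reward 1.0 for
    reaching the last state. Actions: 0 = step left, 1 = step right."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]

    def greedy(s):
        best = max(q[s])
        return rng.choice([a for a in (0, 1) if q[s][a] == best])  # random tie-break

    for _ in range(episodes):
        s = 0
        for _ in range(200):  # cap episode length
            # Explore occasionally; otherwise take the current best-known action.
            a = rng.randrange(2) if rng.random() < epsilon else greedy(s)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Nudge the estimate toward observed reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == n_states - 1:
                break
    return q

q = q_learning()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(5)]  # greedy action per state
```

After training, the greedy policy steps right in every non-terminal state. No human demonstrations are involved; the only teaching signal is the environment's reward, which is the property Silver's argument turns on.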

“I think of our mission as making first contact with superintelligence,” Silver told Wired. “By superintelligence, I really mean something incredible. It should discover new forms of science or technology or government or economics for itself.”

Popularized by philosopher Nick Bostrom in his 2014 book “Superintelligence,” the term refers to AI that surpasses human intelligence across nearly all domains, while artificial general intelligence, or AGI, describes systems capable of matching human-level reasoning across a wide range of tasks.

Silver argues that large language models are fundamentally limited because they learn from human-generated data, instead of building their own understanding through experience.

“Human data is like a kind of fossil fuel that has provided an amazing shortcut,” he said. “You can think of systems that learn for themselves as a renewable fuel—something that can just learn and learn and learn forever, without limit.”

Silver has spent much of his career advancing that argument. AlphaGo, which combined human training data with reinforcement learning and self-play, developed strategies that surprised even top human players and demonstrated how AI can exceed human precedent in narrow domains.
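AlphaGo's actual pipeline combined deep networks, supervised pretraining, and Monte Carlo tree search, none of which is reproduced here. But the core self-play idea, one value table playing both sides and scoring every position from the perspective of the player to move, can be shown on a tiny "race to 10" game (players alternate adding 1 to 3 to a running total; whoever reaches 10 wins). All names and numbers below are illustrative:

```python
import random

def self_play(target=10, max_take=3, episodes=5000, alpha=0.2, epsilon=0.2, seed=1):
    """Self-play learning for the 'race to target' game. One Q-table plays
    both sides; q[(total, take)] values a move from the perspective of the
    player making it, so the bootstrap target negates the opponent's best
    reply (the negamax form used in two-player zero-sum games)."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        s = 0
        while s < target:
            takes = list(range(1, min(max_take, target - s) + 1))
            if rng.random() < epsilon:
                a = rng.choice(takes)  # explore
            else:
                a = max(takes, key=lambda t: q.get((s, t), 0.0))  # exploit
            s2 = s + a
            if s2 == target:
                value = 1.0  # this move wins the game outright
            else:
                replies = range(1, min(max_take, target - s2) + 1)
                value = -max(q.get((s2, t), 0.0) for t in replies)  # opponent's best, negated
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (value - q.get((s, a), 0.0))
            s = s2
    return q

q = self_play()
best_opening = max((1, 2, 3), key=lambda t: q.get((0, t), 0.0))
```

Under optimal play the mover should leave the opponent on a total of 2 or 6, so the learned opening move is 2. The table discovers this purely from the outcomes of games against itself, a miniature version of how self-play can surface strategies no human supplied.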

“I feel it's really important that there is an elite AI lab that actually focuses a hundred percent on this approach,” he told Wired. “That it’s not just a corner of another place dedicated to LLMs.”

Ineffable Intelligence plans to build what Silver calls “superlearners”—AI agents placed inside simulations where they can pursue goals, fail, adapt, and improve without the limits of a static human dataset. Silver declined to describe what those simulations would look like, but said the approach would allow agents to collaborate and develop capabilities autonomously.

Silver argued that large language models are limited by the data they are trained on, adding that a model trained in a world where everyone believed the Earth was flat would likely keep that belief unless it could test reality for itself. A system that learns through experience, he said, could discover otherwise.

Ineffable Intelligence did not immediately respond to Decrypt's request for comment.
