AGI is here (and I feel fine)

Original link: https://www.robinsloan.com/winter-garden/agi-is-here/

## AGI Is Here: A Call for Acknowledgment

This article argues that artificial general intelligence (AGI) has already arrived, most likely around 2020 with the emergence of large language models; OpenAI's GPT-3 was a key moment. The author defines AGI by its "general" intelligence: the ability to perform a broad range of tasks, unlike earlier AI systems built for specific functions. Despite this achievement, the author notes an industry reluctance to acknowledge AGI, perhaps because its impact does not yet feel *transformative*, or out of a strategic desire to maintain an ever-receding goal of further progress. The author proposes a "unilateral declaration" that AGI has arrived, not to celebrate a finished product, but to shift the focus. Acknowledging AGI permits a realistic assessment of its current limitations, especially in the physical world, and encourages exploration of the remaining challenges. Drawing on the early history of personal computers, the author stresses that breakthrough technology does not automatically deliver utopia, and that open discussion, even from outside the industry, is essential to understanding and shaping its future.

A Hacker News discussion centers on whether artificial general intelligence (AGI) has arrived. The original post links to an article claiming AGI is already here, sparking debate in the comments. One key argument holds that current AI models, despite gains in speed and utility, are merely scaled-up versions of earlier models, still plagued by problems like hallucination and goal misalignment. On this view, the core missing piece is not more compute but a fundamental shift in how AI handles *memory*, a capability humans and animals possess naturally. Others voiced skepticism, dismissing current AI as "advanced parrots" and pointing to the continued need for human researchers and operators as evidence against true AGI. Some questioned the value of the AGI concept itself, while others defined it pessimistically ("AGI is when the last human is terminated") or pragmatically (AGI is when AI handles strategic management tasks). The conversation highlights the lack of consensus and a cynical view of AGI's arrival.
Another Hacker News thread takes up the claim that artificial general intelligence (AGI) has arrived. The original post sparked debate, with one commenter questioning whether humans even need to *declare* AGI's existence. A key point of contention is whether current AI progress truly represents "thinking" or merely sophisticated pattern matching and narrative generation, which led one commenter to suggest renaming AGI "Artificial Narrative Intelligence." The discussion raised concerns about hype and misleading claims from the AI industry, particularly employees who seem "addicted" to their own creations and boast about extensive interaction with AI systems. Ultimately, the thread expresses skepticism toward current AGI claims and worries that declaring AGI's arrival may further inflate unrealistic expectations rather than dispel them.

Original text
Transmitted 20260104 · · · 405 days before impact

I propose to begin this year with an acknowledgment that is strategic but/and also sincere. I’ll make my sincere case first, then explain its strategic logic.

AGI is here! When exactly it arrived, we’ll never know; whether it was one company’s Pro or another company’s Pro Max (Eddie Bauer Edition) that tip-toed first across the line … you may debate. But generality has been achieved, & now we can proceed to new questions.

What do I mean by AGI? Many competing definitions depend on words that themselves have competing definitions; these words include “valuable”, “work”, & “human”.

Jasmine Sun’s 2025 survey of AGI interpretations is required reading on this subject, & not only for her immortal line …

> AI discovered wholly new proteins before it could count the ‘r’s in ‘strawberry’, which makes it neither vaporware nor a demigod but a secret third thing.

… yet in the end Jasmine leaves the question open, & I’d prefer to close it.

The trick is to read plainly.

The key word in Artificial General Intelligence is General. That’s the word that makes this AI unlike every other AI: because every other AI was trained for a particular purpose. Consider landmark models across the decades: the Mark I Perceptron, LeNet, AlexNet, AlphaGo, AlphaFold … these systems were all different, but all alike in this way.

Language models were trained for a purpose, too … but, surprise: the mechanism & scale of that training did something new: opened a wormhole, through which a vast field of action & response could be reached. Towering libraries of human writing, drawn together across time & space, all the dumb reasons for it … that’s rich fuel, if you can hold it all in your head.

It’s important to emphasize that the open-ended capability of these big models was a genuine surprise, even to their custodians. Once understood, the opportunity was quickly grasped … but the magnitude of that initial whoa?! is still ringing the bell of this century.

I’m extreme in this regard: I think 2020’s Language Models are Few-Shot Learners marks the AGI moment. In that paper, OpenAI researchers demonstrated that GPT-3 — at that time, the biggest model of its kind ever trained — performed better on a wide range of linguistic tasks than models trained for those tasks specifically. A more direct title might have been: This Thing Can Do It All?!
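The few-shot setup in that paper is simple enough to sketch: no task-specific training at all, just a task description and a handful of worked examples placed directly in the prompt, with the final answer left for the model to complete. A minimal illustration of the format (the helper function is mine; the “sea otter => loutre de mer” style of demonstration follows the paper’s examples):

```python
def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a GPT-3-style few-shot prompt: a task description,
    a few worked examples, then a query left for the model to finish."""
    lines = [task, ""]
    for source, target in examples:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")
    return "\n".join(lines)

# A toy translation task, in the style of the paper's demonstrations.
prompt = few_shot_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "strawberry",
)
print(prompt)
```

The surprise the paper reports is that this prompt alone, fed to a sufficiently large model, often beats smaller models that were fine-tuned on the task.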

Of course I love the idea that the epochal threshold was passed years ago. Like crossing into another state without noticing the sign.

This was before ChatGPT … & here is my test. If you appeared in a puff of smoke before the authors of that paper, just after publication — a few months before half of them cleaved from OpenAI to form Anthropic — and carried with you a laptop linked through time to the big models of 2026, what would their appraisal be? There’s no doubt in my mind they would say: Wow, we really did it! This is obviously AGI!

Why don’t they say that today?

If it were only critics of the industry resisting the declaration, it wouldn’t be so surprising. Acknowledging AGI might feel to them like conceding defeat — although I think the critics should indeed acknowledge it, & in a moment I’ll explain my logic.

But participants in the industry seem reluctant, too. Why? I think they might be nervous to claim the achievement because … it feels like more should have changed.

Holden Karnofsky’s alternative threshold, “transformative AI”, is funny in this regard:

> Roughly & conceptually, transformative AI is AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.

One imagines a dialogue:

Q: Which one is the transformative AI?

A: The one that transforms things!

I don’t mean to say that other thresholds, other definitions, aren’t interesting or valuable. Rather, I am just pointing out, we’ve got this ubiquitous term, Artificial General Intelligence, & it appears that the Artificial Intelligence has become Really Very General, so … ?

Pile up the tendencies: the Bay Area is the land of the overthinkers; a linguistic technology invites endless rumination about both language & intelligence; it’s more fun to define a cool new standard than go along with a boring old one; the feeling of every creative project, upon completion, is the same: It’s not quite how I imagined it … None of this should prevent us from using plain language to acknowledge an obvious capability.

Why are we thrashing around like it’s us on the hook? Why don’t we just reel in our catch?


Maybe, just maybe, the reluctance is strategic: for it grants the luxury of an ever-receding goal. Just a bit more money … a bit more electricity … utopia is around the corner!

This is why I propose unilateral declaration as a strategic countermove.

You did it! says Robin.

Who, me? says the industry.

Yes, you! Your product is fabulous. That’s some AGI right there, just as you predicted.

No, I don’t think — 

Oh, don’t be modest. This is a sci-fi dream come to life.

It’s really not quite — 

And … ? What now?

This concession serves two purposes, both useful on their own, even better together:

  1. acknowledges a legitimate, world-historical achievement: fruition of a decades-old dream

  2. tears away the veil of any-minute-now millenarianism to reveal deployed technology

And … ? What now?


The big models still have severe limitations: the broad realm of the physical — i.e. most of the universe — is closed to them, & even in the cozy realm of the symbolic, many tasks & processes confound them. One might say the big models possess a prodigious immediate generality, which is distinct from the implacable eventual generality of a diligent human. This is what, for example, the researcher François Chollet is focused on: the never-before-seen puzzles that you & I can solve in a handful of minutes, while a big model churns & fumes.
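Chollet’s puzzles have a concrete shape: each one presents a few input/output grids, and the solver must induce the hidden rule and apply it to a fresh grid. A toy sketch of the genre (the rule here, a left-right mirror, is invented for illustration and far simpler than his actual tasks):

```python
# A toy ARC-style puzzle: grids are lists of rows of integers, and the
# hidden transformation must be induced from a demonstration pair.
# Here the (invented, much-simpler-than-real) rule is a left-right mirror.
def mirror(grid: list[list[int]]) -> list[list[int]]:
    """Apply the hidden rule: reflect each row left-to-right."""
    return [row[::-1] for row in grid]

# The demonstration pair, as a human solver would see it:
demo_in  = [[1, 0, 0],
            [0, 2, 0]]
demo_out = [[0, 0, 1],
            [0, 2, 0]]
assert mirror(demo_in) == demo_out  # the induced rule fits the demonstration

# Apply the induced rule to a held-out test grid:
test_in = [[3, 0],
           [0, 4]]
print(mirror(test_in))  # [[0, 3], [4, 0]]
```

A person induces a rule like this from a single example in seconds; the point of the benchmark is that each task’s rule is novel, so nothing in a model’s training distribution hands it the answer.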

Yet you can read François’s 2019 paper On the Measure of Intelligence — and you should — and mostly agree with it — and I do — and still notice that, in the years since publication, the big models have become REALLY VERY WILDLY GENERAL.

AGI is here, & many juicy goals still remain. Why would anyone expect otherwise? François makes this point in his paper: universality is not required for generality. It can’t be, because it’s not possible.

Call it Robin’s Razor (it really is fun to define a standard): if you find yourself parsing different flavors of generality, it’s AGI.


There’s no reason to wait around for this declaration — white smoke from the industry’s chimney. I want to insist that AI, as a particularly protean & human technology, is open to meaningful interpretation & evaluation by anybody/everybody.

There are two reasons for this:

  1. I repeat myself, but: the transformation of the state of the art over the past five years has been deeply surprising even to the most insider-y insiders. There is no reason to expect those insiders won’t continue to be surprised, just as they were in (let’s review) 2025, 2024, 2023, 2022 … “Nobody knows anything,” said a wise man once.

  2. The big models are malleable, open-ended systems — communication systems, really, though they don’t obviously appear that way — and the providers of such systems have no monopoly on understanding them.

My template here is Twitter, the company, which was, at the time I worked there, staffed almost entirely by power users of Twitter, the platform. The affairs of the company osmosed freely into the platform; all-hands meetings could be tracked in real time via gnomic tweets.

So here was a group of people who not only made the software, but used it more than anybody else. Yet Twitter, the platform, was still so much bigger than Twitter, the company. The company’s understanding of the platform’s uses — its pleasures & confusions — was deep but narrow, even blinkered … a fact that became apparent as the company failed, again & again, to make the platform sensible to new users.

You sometimes read about employees of AI companies absorbed by their own products. Nobody on Earth has spent more hours talking to YakGPT than Katie Echo! Nobody can pump more code out of ShannonSoft than Johnny Narcissus! Recalling my Twitter experience, I think boasts (and posts) of this kind should inspire caution.

It’s not that the companies don’t know anything — they know a lot — but rather that a regular user can have an experience with these systems, an idea about them, that is not available to even the deepest, dankest, insider-iest insider. I contend that regular users are having such experiences & ideas all the time.

I think this is just how these systems work, when they have so many different people using them for so many different reasons. It’s pretty weird, compared to other products. After all, the company that manufactures jet engines & sells them to five customers can reasonably claim to understand the uses of those engines.

That’s all to say, for all the math & matériel involved in their care & feeding, the big models are more like Twitter than they are like jet engines, & this whole thing was a surprise anyway — from which no one has quite recovered — so I will defend vigorously the right of anybody/everybody to reflect & opine on AI’s properties & potential, & to declare, when it seems obvious: AGI is here.


Recently, I have been reading a lot about the early history of personal computers, the 1970s & 1980s. (This book was wonderful, because its present — the pinnacle from which it surveys a tumultuous history — is … 1984.) It’s been interesting to discover that many of the visions & promises of that era “rhyme” with the visions & promises of the present boom. It’s clear that Bay Area tech is, if not one continuous project, 1957-2026, then for sure one continuous culture. (I happen to think it’s one continuous project.)

The pace is familiar, too: hot new companies were appearing every month, fading just as fast. Yes, the dollar amounts were smaller — the computer was still a niche product — but the feelings really seem to be the same.

And … the visions of that era were substantially realized! Today, everybody really does own a personal computer, or something like one. Everybody really is connected to a global information network, or something like one.

I’m an avid user of both my personal computer & the global information network, & I observe that we appear not to live in utopia. The and … ? of those inventions roars in our ears.

Today, everybody really can call upon AGI, or something like it: a wildly general computer program. I mean! Wow!

The question, always, forever: what now?
