I propose to begin this year with an acknowledgment that is strategic but/and also sincere. I’ll make my sincere case first, then explain its strategic logic.
AGI is here! When exactly it arrived, we’ll never know; whether it was one company’s Pro or another company’s Pro Max (Eddie Bauer Edition) that tiptoed first across the line … you may debate. But generality has been achieved, & now we can proceed to new questions.
What do I mean by AGI ? Many competing definitions depend on words that themselves have competing definitions; these words include “valuable”, “work”, & “human”.
Jasmine Sun’s 2025 survey of AGI interpretations is required reading on this subject, & not only for her immortal line …
AI discovered wholly new proteins before it could count the ‘r’s in ‘strawberry’, which makes it neither vaporware nor a demigod but a secret third thing.
… yet in the end Jasmine leaves the question open, & I’d prefer to close it.
The trick is to read plainly.
The key word in Artificial General Intelligence is General. That’s the word that makes this AI unlike every other AI: because every other AI was trained for a particular purpose. Consider landmark models across the decades: the Mark I Perceptron, LeNet, AlexNet, AlphaGo, AlphaFold … these systems were all different, but all alike in this way.
Language models were trained for a purpose, too … but, surprise: the mechanism & scale of that training did something new: opened a wormhole, through which a vast field of action & response could be reached. Towering libraries of human writing, drawn together across time & space, all the dumb reasons for it … that’s rich fuel, if you can hold it all in your head.
It’s important to emphasize that the open-ended capability of these big models was a genuine surprise, even to their custodians. Once understood, the opportunity was quickly grasped … but the magnitude of that initial whoa?! is still ringing the bell of this century.
I’m extreme in this regard: I think 2020’s Language Models are Few-Shot Learners marks the AGI moment. In that paper, OpenAI researchers demonstrated that GPT-3 could perform tasks it had never been explicitly trained on, given nothing but a few worked examples in its prompt.
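To make “few-shot” concrete: you don’t retrain the model for a new task; you simply show it a handful of examples & let it infer the pattern. A minimal sketch in Python (the translation pairs echo the paper’s own illustrations; the exact formatting here is my own):

```python
# Few-shot prompting, in the spirit of "Language Models are Few-Shot
# Learners": a task description, a few worked examples, then a new
# case for the model to complete. No retraining happens; the examples
# in the prompt are all the "training" the model gets.

examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
    ("plush giraffe", "girafe en peluche"),
]

prompt = "Translate English to French:\n\n"
for english, french in examples:
    prompt += f"{english} => {french}\n"
prompt += "book => "  # a capable model continues: "livre"

print(prompt)
```

The same frozen model, fed prompts of this shape, handled translation, arithmetic, trivia, & much else besides; that one-prompt-fits-all quality is the generality in question.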
Of course I love the idea that the epochal threshold was passed years ago. Like crossing into another state without noticing the sign.
This was before ChatGPT … & here is my test. If you appeared in a puff of smoke before the authors of that paper, just after publication, & showed them what today’s models can do, I believe they would say: yes, that’s it. That’s AGI.
Why don’t they say that today?
If it were only critics of the industry resisting the declaration, it wouldn’t be so surprising. Acknowledging AGI might feel to them like conceding defeat.
But participants in the industry seem reluctant, too. Why ? I think they might be nervous to claim the achievement because … it feels like more should have changed.
Holden Karnofsky’s alternative threshold, “transformative AI”, is funny in this regard:
Roughly & conceptually, transformative AI is AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.
One imagines a dialogue:
Q: Which one is the transformative AI?
A: The one that transforms things!
I don’t mean to say that other thresholds, other definitions, aren’t interesting or valuable. Rather, I am just pointing out that we’ve got this ubiquitous term, Artificial General Intelligence, & it appears that the Artificial Intelligence has become Really Very General, so … ?
Pile up the tendencies: the Bay Area is the land of the overthinkers; a linguistic technology invites endless rumination about both language & intelligence; it’s more fun to define a cool new standard than go along with a boring old one; the feeling of every creative project, upon completion, is the same: It’s not quite how I imagined it … None of this should prevent us from using plain language to acknowledge an obvious capability.
Why are we thrashing around like it’s us on the hook ? Why don’t we just reel in our catch?
Maybe, just maybe, the reluctance is strategic: for it grants the luxury of an ever-receding goal. Just a bit more money … a bit more electricity … utopia is around the corner!
This is why I propose unilateral declaration as a strategic countermove.
You did it! says Robin.
Who, me? says the industry.
Yes, you ! Your product is fabulous. That’s some AGI right there, just as you predicted.
No, I don’t think —
Oh, don’t be modest. This is a sci-fi dream come to life.
It’s really not quite —
And … ? What now?
This concession serves two purposes, both useful on their own, even better together:
- acknowledges a legitimate, world-historical achievement: fruition of a decades-old dream
- tears away the veil of any-minute-now millenarianism to reveal deployed technology
And … ? What now?
The big models still have severe limitations: the broad realm of the physical, to name just one, remains mostly beyond their reach.
Yet you can read François Chollet’s 2019 paper On the Measure of Intelligence (and you should) & still hold onto the declaration.
AGI is here, & many juicy goals still remain. Why would anyone expect otherwise ? François makes this point in his paper: universality is not required for generality. It can’t be, because universality isn’t possible; no system can excel at every conceivable task, so generality is always generality across some scope.
Call it Robin’s Razor (it really is fun to define a standard): if you find yourself parsing different flavors of generality, it’s AGI.
There’s no reason to wait around for this declaration to come from the AI companies themselves.
There are two reasons for this:
- I repeat myself, but: the transformation of the state of the art over the past five years has been deeply surprising even to the most insider-y insiders. There is no reason to expect those insiders won’t continue to be surprised, just as they were in (let’s review) 2025, 2024, 2023, 2022 … “Nobody knows anything,” said a wise man once.
- The big models are malleable, open-ended systems — communication systems, really, though they don’t obviously appear that way — and the providers of such systems have no monopoly on understanding them.
My template here is Twitter, the company, which was, at the time I worked there, staffed almost entirely by power users of Twitter, the platform. The affairs of the company osmosed freely into the platform; all-hands meetings could be tracked in real time via gnomic tweets.
So here was a group of people who not only made the software, but used it more than anybody else. Yet Twitter, the platform, was still so much bigger than Twitter, the company. The company’s understanding of the platform’s uses was always partial, always a step behind.
You sometimes read about employees of AI companies absorbed by their own products. Nobody on Earth has spent more hours talking to YakGPT than Katie Echo! Nobody can pump more code out of ShannonSoft than Johnny Narcissus! Recalling my Twitter experience, I think boasts (and posts) of this kind should inspire caution.
It’s not that the companies don’t know anything; it’s that they can’t know everything.
I think this is just how these systems work, when they have so many different people using them for so many different reasons. It’s pretty weird, compared to other products. After all, the company that manufactures jet engines & sells them to five customers can reasonably claim to understand the uses of those engines.
That’s all to say, for all the math & matériel involved in their care & feeding, the big models are more like Twitter than they are like jet engines, & this whole thing was a surprise anyway, so nobody, insiders included, should claim a monopoly on understanding it.
Recently, I have been reading a lot about the early history of personal computers, the 1970s & 1980s. (This book was wonderful, because its present, too, has since slipped into history.)
The pace is familiar, too: hot new companies were appearing every month, fading just as fast. Yes, the dollar amounts were smaller, but the energy was the same.
And … the visions of that era were substantially realized ! Today, everybody really does own a personal computer, or something like one. Everybody really is connected to a global information network, or something like one.
I’m an avid user of both my personal computer & the global information network, & I observe that we appear not to live in utopia. The and … ? of those inventions roars in our ears.
Today, everybody really can call upon AGI, or something like it: a wildly general computer program. I mean ! Wow!
The question, always, forever: what now?