Even in my most bullish days for OpenAI, I secretly preferred DeepMind. I felt Demis Hassabis was trustworthy in a way Sam Altman couldn't be: a true scientist, not a businessman. Also, AlphaGo and AlphaZero. To me, they're not just historical milestones; they're nostalgia. ChatGPT is cool, but do you remember move 37? And the AlphaZero-Stockfish 8 chess games? My love for and interest in AI grew in parallel with DeepMind's successes. I was rooting for them, almost like a sports fan.
So, for years, I've been low-key saddened by their constant fumbling. They had the tech, the talent, the money, the infrastructure, the prestige, and the conviction to make ChatGPT (or whatever else they wanted) before OpenAI. They didn't. CEO Sundar Pichai was afraid of undermining Google's main revenue source (search and ads). He chose prudence over boldness. Good: they didn't shoot themselves in the foot.
Because they didn’t shoot at all.
But that was the last mistake they made. Today, two and a half years after the ChatGPT debacle, Google DeepMind is winning. They are winning so hard right now that they’re screaming, “Please, please, we can’t take it anymore, it’s too much winning!” No, but really—I wonder if the only reason OpenAI, Anthropic, Meta, and Co. ever had the slightest chance to win is because Google fumbled that one time. They don’t anymore.
I’d been holding off on writing about Gemini 2.5. Focusing on the AI model didn’t feel like enough to tell the full story of Google’s comeback. Gemini 2.5 is only a piece—albeit a big one—of something much larger. Back in December 2024, I said they would come out on top by the end of 2025. We’re not even halfway there and it’s already happened. (For reasons I still don’t understand, some people genuinely thought xAI had a shot.)
Anyway, to avoid turning this post into an over-stylized narrative—which I do more often than I’d like—I’m keeping it to bullet points. It hits harder that way. You’ll see what I mean when the list just... doesn’t end.
Google and DeepMind fans: enjoy the long-overdue rebirth.
Gemini 2.5 Pro Experimental is the best model in the world. Number one on the LMArena, GPQA Diamond, Humanity's Last Exam, and AIME (the math competition). It's also the best on independent benchmarks like Aider Polyglot (code) and LiveBench (diverse tasks). It's better than Claude Sonnet at playing Pokémon (a promising agentic playground) and rising quickly on Minecraft Bench (some examples). It's even decent at creative writing and long-context comprehension, which have been elusive milestones for years.
Perhaps most importantly, the benchmark scores match the signal I receive from vibes checks, high-taste testers, and firsthand testimonials: people are reporting en masse that Gemini 2.5 Pro is indeed the best model today. A rare sight to witness. (Watch Matthew Berman’s clip below.)
And that's just pure performance. Add to that the fact that Gemini 2.5, compared to other models in its class, is fast and cheap (they're giving away free access!), has a gigantic context window of 1 million tokens (only recently surpassed by Meta's Llama 4), and is connected to the entire Google suite of products (more on that soon).
Gemini 2.5 Flash (launching soon) is Gemini 2.5 Pro's little brother. It's just as amazing, but for a different reason. Together with the older Flash versions (2.0, Lite, Thinking), it will be extremely fast and extremely cheap (much cheaper than comparable models from competitors, even DeepSeek's, which are famous for being cheap). These models are also small, which makes them perfect for edge applications and phone integration.
Then there's Gemma 3, Google's open-source model, competitive with the best open models out there, including Llama 4 (too big to run locally for most people) and DeepSeek-V3.
Swyx plotted a graph revealing that “Google owns the Pareto frontier” with Gemini 2.0/2.5 (both Pro and Flash) on the two most important metrics: performance and cost. Not only that, but Google's most performant models remain cost-effective, and its most cost-effective models remain performant. Google is bullying the competition at this point; Gemini is off the charts, literally.
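A quick aside on what "owning the Pareto frontier" means here: a model sits on the frontier if no other model beats it on performance while also being cheaper. Below is a minimal sketch in Python of how that set is computed; the model names, scores, and prices are hypothetical placeholders for illustration only, not real benchmark data.

```python
# Toy Pareto-frontier check over (performance, cost) pairs.
# All model names, scores, and prices are hypothetical placeholders,
# not real benchmark numbers.

models = {
    "model_a": {"score": 85.0, "cost_per_mtok": 1.50},
    "model_b": {"score": 80.0, "cost_per_mtok": 0.20},
    "model_c": {"score": 78.0, "cost_per_mtok": 2.00},
    "model_d": {"score": 70.0, "cost_per_mtok": 0.10},
}

def pareto_frontier(models: dict) -> dict:
    """Keep every model that no other model dominates.

    A model is dominated if some other model scores at least as high
    AND costs at most as much, with at least one of those strict.
    """
    frontier = {}
    for name, m in models.items():
        dominated = any(
            other["score"] >= m["score"]
            and other["cost_per_mtok"] <= m["cost_per_mtok"]
            and (other["score"] > m["score"] or other["cost_per_mtok"] < m["cost_per_mtok"])
            for other_name, other in models.items()
            if other_name != name
        )
        if not dominated:
            frontier[name] = m
    return frontier

print(pareto_frontier(models))
# model_c drops out (model_a scores higher for less money); the rest
# form the frontier. "Owning the frontier" means your models fill most
# of this undominated set across the whole cost range.
```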
The LLMs that underlie chatbots like Gemini and ChatGPT are the main attraction but far from the only one. Google dominates the other generative AI areas just as clearly as it dominates text-based models. They announced they will integrate the other AI tools they have into Vertex AI: Lyria (music), Imagen 3 (image), Veo 2 (video), and Chirp 3 (voice/speech). These are, in a way, side-projects for Google. Still, they’re world-class in their respective categories. Arguably, ChatGPT’s image generation is better, but Veo 2, for instance, compelled me to write an article I entitled “You Must See How Far AI Video Has Come,” whereas OpenAI Sora is pure slop. (The clip below is from Veo 2.)
What about agents? Gemini 2.5 Pro in Deep Research mode is twice as good as OpenAI's Deep Research (until now, widely considered the most important AI product for investigative work). Google DeepMind is also cooking with Project Astra (assistant) and Project Mariner (computer interaction).
If that weren't enough, they've built an Agent2Agent protocol that is complementary to and compatible with the Model Context Protocol, which they will also support after listening to the community's feedback. I can't take any more mogging.
To top it off, they keep churning out high-quality papers that get published in Nature or presented at ICLR, NeurIPS, and other world-class venues. And even when a paper doesn't land in one of those, you can still tell they take both AGI efforts and industrial AI very seriously.
Is that all? Not really. Let's not forget that Google is a consumer software company as much as an AI company. Not only do they build better models than OpenAI and Anthropic, they also do plenty of other things no one else can.
Hello friend!
Before you read on, a quick note: I write this newsletter in an attempt to understand AI and offer that understanding to others who may find themselves similarly disoriented (who isn't these days…).
The project continues thanks to a small group of generous readers who support it with ~$2/week (ChatGPT costs twice as much!). If you find value here—or simply wish for this quiet effort to persist—you are most welcome to join them.
If you already have, my sincere thanks. This exists because of you.