Nvidia just paid $20B for a company that missed its revenue target by 75%

Original link: https://blog.drjoshcsimmons.com/p/nvidia-just-paid-20-billion-for-a

## The AI Bubble and Nvidia's Power Play: A Summary

Recent headlines have focused on Elon Musk's "Grok" chatbot, but a more important story centers on **Groq** (with a Q), a company Nvidia just acquired for $20 billion. Groq developed specialized chips (LPUs) designed for remarkably fast AI processing, responding to queries in milliseconds rather than with the delays common to today's systems built on Nvidia GPUs.

Groq faced a challenge, however: its reliance on open source AI models meant it lacked the quality of leading options like Gemini or Anthropic's models. Despite a volatile valuation (plunging from $2 billion to $500 million), Nvidia's acquisition points to one key motive: **fear of competition**. Nvidia is not innovating; it is consolidating power to prevent disruption of its AI hardware monopoly.

The acquisition also highlights a larger problem: the AI infrastructure boom is draining resources, especially **electricity**. Data centers are driving up energy costs, and companies are resorting to questionable tactics, such as preferential energy rates and exploited global labor, to stay profitable.

Moreover, despite enormous investment (including $1.5 billion from Saudi Arabia, since redirected to Nvidia and AMD), many AI companies are burning cash with little measurable return. A market correction is expected in 2026; as the limits of current AI technology become apparent, valuations may collapse. AI is not going away, but the current hype and inflated valuations are unsustainable, and a reckoning is coming.

## Summary of the Discussion: Nvidia's $20 Billion Groq Acquisition

Nvidia recently acquired AI chip company Groq for $20 billion, even though Groq missed its revenue target by 75%, with projections falling sharply from an initial $2 billion to $500 million. This sparked debate over valuation versus revenue and whether it signals an "AI bubble." Discussion focused on whether Nvidia's move is a strategic acquisition meant to eliminate competition in the fast-growing AI inference market, or simply a talent and IP grab. Some argue Nvidia aims to integrate Groq's technology, particularly its distinctive chip architecture, to strengthen its own products. Others see it as a defensive play to keep rivals from gaining ground. Concerns were raised about potential regulatory scrutiny and the broader implications of Big Tech consolidating power. Many commenters questioned the reasoning behind such a high price tag, while others pointed to Groq's potential to thrive with Nvidia's resources behind it. Ultimately, the acquisition highlights the fierce competition and high stakes inside the AI industry, and raises questions about innovation and market dominance.
## Original Article

If you’ve heard “Grok” thrown around lately, you’re probably thinking of Elon’s chatbot from xAI. We’re not talking about that one. That model isn’t particularly good, but its whole value prop is being politically incorrect so you can get it to say edgy things.

The company Nvidia bought is Groq (with a Q). Totally different beast.

If you’ve used any high quality LLM, you’ve noticed it takes a while to generate a response. Especially for something rapid fire like a conversation, you want high quality AND speed. But speed is often what gets sacrificed. There’s always that “thinking... gathering my notes... taking some time to form the best response” delay.

Groq specialized in hardware and software that makes this way faster. They created a new type of chip called an LPU (Language Processing Unit). It’s based on an ASIC, an application specific integrated circuit. If that’s confusing, don’t worry about it. It’s just a processor that does a specific type of task really well.

So imagine you’re talking to Gemini and it takes a couple seconds to respond. Now imagine it responded instantly, like 10 or 100 times faster. That’s the problem Groq was solving.

To go one level deeper on LPUs versus GPUs (the processors most LLMs run on, typically Nvidia cards): those GPU calculations have to pull a lot of data from memory. Nvidia's chips depend on HBM, high bandwidth memory, which sits off-chip. Groq's LPUs instead keep data in on-chip SRAM, which is much faster to reference.

Think about it like this. Your wife has a grocery list for you. You go to the store but forget the list. Every time you’re in an aisle, you have to call her on the phone: “Hey, do I need anything from the bread aisle?” Get the bread. Put the phone down. Go to the next aisle. “Hey, do I need anything from canned goods?” And so on through produce, meat, pick up the beer, check out, get home. Very inefficient.

Groq’s approach is like you just took the list to the store. Get to a new aisle, look at the list. Next aisle, look at the list. Much faster than a phone call.

That’s the key difference. Nvidia GPUs are phoning out every time they hit a new aisle. Groq’s LPUs mean the shopper has the list in their pocket.

Groq’s main offering is GroqCloud. An engineer like me isn’t going to go out and buy an LPU (I don’t even know if they’re commercially available). What I’m going to do is, if I need lightning fast response in an application I’m building, I’ll use GroqCloud. That inference happens at an extremely fast rate, running on LPUs in a data center somewhere.

Their value prop is: fast, cheap, low energy.

Where they’ve been falling short is they mostly use open source models. Llama, Mistral, GPT-OSS (OpenAI’s open source offering). These are decent models, but nowhere near the quality of something like Anthropic’s Opus 4.5 or Gemini 3 Pro.

Groq positioned themselves for hardcore use cases where milliseconds matter. One of their big industry fits is real time data analysis for Formula 1. When your driver’s out there on the track, you don’t have time to send a query to Gemini and wait 20 to 30 seconds to figure out if you should pit this guy for new tires this lap. You want something like Groq which does pretty good analysis, really really really fast.

This is inside baseball you're going to miss from mainstream news headlines. The media is paying attention to Grok (Elon's chatbot), not Groq (the chip company). This isn't a conspiracy, it's just that the only people aware of Groq are software developers and tech nuts like me.

This is a canary in the coal mine for worse things to come in 2026.

About a year ago, Groq announced a $1.5 billion investment deal with Saudi Arabia. They also secured a $750 million Series D funding round. These are crazy multiples even for a software company that’s somewhat leveraged in hardware. Bubble level projections.

To visualize $1.5 billion: if you cashed that check out in $100 bills and stacked them one on top of another, the stack would rise about a mile, far taller than any building on Earth. For ordinary plebeians like us, at the average US salary of around $75K, you'd need to work 20,000 years to earn that.
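A quick sanity check on the scale here, assuming a US bill is about 0.0043 inches (roughly 0.11 mm) thick:

```python
# Back-of-the-envelope scale of $1.5 billion.
BILL_THICKNESS_M = 0.000109  # ~0.0043 in per US note (approximate)

def stack_height_m(dollars: float, denomination: int = 100) -> float:
    """Height in meters of the amount stacked in bills of one denomination."""
    return (dollars / denomination) * BILL_THICKNESS_M

def years_to_earn(dollars: float, salary: float = 75_000) -> float:
    """Working years needed to gross the amount at a given annual salary."""
    return dollars / salary

print(f"$1.5B in $100s: ~{stack_height_m(1.5e9):,.0f} m high")  # on the order of a mile
print(f"Years at $75K:  {years_to_earn(1.5e9):,.0f}")
```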

At that time, the company was valued at $2 billion. Hello bubble.

Then, in maybe one of the best rug pulls of all time, in July they quietly changed their valuation to $500 million. A 75% cut in four months. I haven't seen anything like that since the 2008 financial crisis.

This was a company valued at $2 billion, enough that the government of Saudi Arabia was investing at that valuation. Then they took a 75% haircut four months later without anything major happening.

If it can happen to Groq, who else can it happen to? Nvidia? Intel?

The rumors started flying on Christmas Eve. Confirmed the 26th: Nvidia will be buying Groq, their key product line, and key personnel including the CEO, for $20 billion.

Let’s walk through that again:

What is going on? This is a bubble.

The only explanation is this is a fear purchase. Groq was promising faster, cheaper, more efficient chips that use less electricity. But they couldn't scale fast enough to compete with the vendor lock-in and buddy system Nvidia has going.

Nvidia’s buying them with their insanely inflated war chest. They don’t want a chunk taken out of their market share. They can’t afford to take that chance. So it’s like they’re just saying: “Shut up, take the $20 billion, walk away from this project.”

This is a sign that in order to succeed, Nvidia needs a monopoly on the market. Otherwise they would not pay ten times a peak valuation that had since been cut by 75%, forty times what the company was worth on paper. This is a desperate move to consolidate the market into "you have to go with us."

Saudi Arabia didn’t keep that $1.5 billion sitting around. They redirected it to Nvidia and AMD instead. Nvidia still gets paid ofc.

Smaller competitors like Cerebras and Inflection, doing things in Groq’s space or exploring different architectures for AI inference, are canceling IPOs, dropping like flies, seeking emergency funding. The chatter I’m hearing from VCs and friends in that world? Ain’t nobody buying it right now.

Google made their own chip. Microsoft and Amazon are racing to make competition chips that run on less electricity, are more efficient, faster. But no matter what anybody does, the market is consolidating around the Nvidia monopoly for AI hardware. Engineers like me and architects at large enterprises are trying to escape this.

Once they consolidate enough of the market, they can set their price for chips and usage. If you don't own them, you go through Oracle or some cloud computing service, and they can charge whatever they want because there will be no competitors. Even a competitor having a rough time but getting some traction? They just buy them out for $20 billion because with this monopoly going, that's pocket change.

$20 billion is a rounding error to Nvidia.

The AI infrastructure boom ran on one large errant assumption: that power is cheap and plentiful. That assumption was very, very wrong.

We’ve seen parts of the power grid fail, go unmaintained. There’s a great article on Palladium called “The Competency Crisis” that explains some of what’s going on in the US right now. Electricity is now the bottleneck. It’s the constraint. Too expensive or you can’t get enough of it. Who’s paying these costs? The tech companies aren’t. Trump is meeting with Jensen Huang, with Sam Altman. He hasn’t been over my place lately. He hasn’t invited you to the White House to talk about how you can’t afford groceries and eggs cost three times what they did a few years ago.

No.

You and I are going to pay to subsidize the electricity constraints.

US data centers are using about 4% of all electricity right now. If growth continues at the same rate, in ten years they’ll use 9%. Almost one tenth of all electricity generated in the US.
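The 4% and 9% figures above imply a steady compounding of data centers' share of the grid; the annual rate that path implies is just arithmetic:

```python
# If data centers go from 4% to 9% of US electricity in ten years,
# their share of the grid is compounding at roughly this annual rate.
start_share, end_share, years = 0.04, 0.09, 10
annual_growth = (end_share / start_share) ** (1 / years) - 1
print(f"Implied annual growth in grid share: ~{annual_growth:.1%}")  # ~8.4%/yr
```

Roughly 8% a year, every year, against a grid whose total capacity grows far more slowly than that.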

I’m not much of an environmentalist. But even someone like me, pretty jaded to some amount of waste because of industrialization, something like this actually makes my stomach turn. In places with lots of data centers, AI heavy regions, they’re experiencing about a 250% increase in energy costs compared to just five years ago. That’s huge. Even compared to grocery costs, which are out of control. At least with food you have alternatives. Red meat too expensive? Buy chicken. Chicken too expensive? Buy eggs. But electricity? You have to run electricity. It’s a public utility. You can’t just “use a lot less” when your bill goes up 2.5x.

Let me walk you through what happens. Some business development guy from Oracle wants to build a new data center in Rural America. A year before it’s even built, they meet with city officials, maybe the governor. They grease the right people. Get legislation passed with the utility companies that says they’ll pay a preferential rate for electricity because they’re going to use a shit ton.

They’re Oracle or Nvidia, they’re good for it. They pay five years upfront. Then the utility decides what to do with the rest of the electricity. The grid is strained. They want everyone else to use less. They can’t just tell them to use less, so they keep raising the price until naturally you just can’t afford to use more.

You turn the lights off. Turn the TV off. Get those LED bulbs that disrupt your sleep. Do everything to keep electricity costs down. But you and I are left holding the bag. The data center folks aren’t paying for that, they paid upfront at a preferential rate.

Senate Democrats are allegedly investigating this. I’ll believe it when I see it. These tech companies have their hooks so far into influential politicians. It’s a vote grab. I’d be happy to be wrong about that, but I know I’m not going to be.

I talked about this in my last piece on the AI bubble. This computer communism that’s going on, pricing you out of personal computing, keeping it all in the family of these tech companies. But with the Groq deal confirmed, it’s gone one step further. Nvidia is not just selling chips anymore. They are lending money to customers who buy the chips. They are artificially inflating demand.

It’s like if I run a small grocery store and I need to show good signal to investors that I’m bringing cash in the door. So I go downtown during farmer’s market and give everybody a $20 voucher to use at my store. I take a wholesale hit when they come in and buy stuff. But what I can show is: “Hey, people are coming in and spending money. Look at the revenue. Give me more venture capital money.”

This can’t work infinitely, for the same reason any perpetual motion machine can’t work.

Back in September, Nvidia announced a $100 billion investment in OpenAI, coming through in $10 billion tranches over time. And it's not equity, it's lease agreements for Nvidia's chips. Literally paying them to pay themselves. There are probably some tax shenanigans going on there, since they're typically offering $xxx in leases of their chips to the company they're "lending" to; presumably the depreciation on the physical asset (the chips) can be written off on their taxes somehow. They're essentially paying another company to use them. Even OpenAI's CFO has admitted, quote: "Most of the money will go back to Nvidia."

They’re playing both sides. In the OpenAI case, they’re financing software that uses their chips. But they’re also getting their hooks into data centers.

CoreWeave: they hold something like a 7% share in the company, worth $3 billion. That stake is funded by the GPU revenue Nvidia collects from CoreWeave.

Lambda: another data center operator. That's a $1.3 billion spending deal in which Nvidia rents its own chips back from Lambda.

They’ve pledged to invest £2 billion across British startups, which of course are going to go back to Nvidia chips one way or another.

In 2024, they invested about $1 billion across startups and saw a $24 billion return in chip spend. They 24x'd their billion dollar investment in one year.

Nvidia has more power than the Fed right now. More power than the president over the economy. They have their hand on the knob of the economy. They can choose how fast or slow they want it to go. If the Nvidia cash machine stops printing, if they stop funding startups, data centers, hardware companies, software companies, that whole part of the economy slows way down and maybe crashes if investors get spooked.

I’m waiting for somebody to blow the whistle on this. I’m not a finance guy, so it’s strange I’m even talking about it. But their entire success story for the next couple years hinges on their $100 billion investment in OpenAI, which they’re expecting to bring back about $300 billion in chip purchases.

It’s vendor financing. It’s sweetening the pot on your own deals. I cannot believe more people are not talking about this.

OpenAI, the leader of this space, the company whose CEO Sam Altman is invited to the White House numerous times, probably has a direct line to Trump, a lot of the economy hinges on this guy’s strategy, opinions, and public statements.

And he runs a company that is not profitable. Actually insane if you think about it. All he’s done with that company, from an economics point of view, is rack up debt. Spent more than he’s earned.

By that metric, I’m richer than Sam Altman. Not in net worth. But if I consider myself a business and the fact that I bring in any salary at all, even a dollar a year, that would make me earn more than Sam Altman, who has only lost money. In the next few years, they’re expected to burn something like $75 billion a year. Just set that money on fire. I have a credit card with no preset spending limit, but I assume if I run up a $75 billion charge it’s going to get denied. By their projections, they think they’ll become profitable around 2029/2030, and they need $200 billion in annual revenue to offset their debts and losses.

To visualize $200 billion: if you cashed that out in $100 bills and stacked them, it would reach halfway to the International Space Station. That’s how much they’d have to make every single year to just be profitable. Not be a massively successful company. Just to not spend more than they earn.

  • 2024: Spent $5 billion, earned $3.7 billion. Spending $1.35 for every dollar earned.

  • 2025: Set $8 billion on fire, net of revenue.

  • 2028 (projected): Will lose $74 billion that year. Just lose it.

  • Cumulative losses through 2029: $143 billion.

They’d need to make $200 billion in a year to offset that. Are they including interest? I have no idea how these things work, but in simple terms: they’re spending a lot more money than they make.

Groq was kind of a one off because Nvidia panic bought the competition. But they also need to figure out how to get prices down or they can’t keep this money machine moving.

Groq is probably one of the last companies that caught a good lifeboat off a sinking ship.

What we’re going to see this year: it’ll start small, but get major. A company first, then multiple larger ones, that had a 2025 valuation, will go to raise. They’ll do a 409A evaluation. A bunch of smart analysts will say what it’s worth to investors. And you’re going to see the valuation drop. They won’t be able to raise money.

Then the shit is really going to start to hit the fan.

The dominoes will start falling. That’s probably what kicks off the actual pop, and it’s imminent. Any day now.

Part of the big gamble they’ve sold investors is: we’ve got to replace you. The worker. That’s why we need to spend so much money, go into debt. It’ll all be worth it because then we won’t have to pay people to do these jobs anymore.

We’re going to continue to see massive labor displacement. To a degree this is a shell game, an investor illusion. What these larger enterprise companies are hoping to do: cut a lot of folks, have big layoffs, say it’s because AI is replacing the jobs.

In some cases they’re right. Data entry, for example. I don’t mean to be mean if you do data entry for a living, but there are models very good at that now. You need someone to manage the work, spot check it. But it’s kind of a job AI can do.

Like how grocery stores have automatic checkout lines now with one person monitoring six or eight of them. So some of that’s real. A lot of it isn’t though.

They cut a bunch of American workers under the guise that AI’s replacing workers. A lot of these megacorp execs are actually convinced of it. Americans are expensive, especially tech workers.

Then they see: damn, maybe we could have cut some, but not as many. We got greedy. Now our services are failing in production. AWS in Northern Virginia is flaky, going down again. That just happened, by the way, direct result of these layoffs.

So instead they think, “We’ll look globally! Spin up a campus in Hyderabad!” Pay them way less. The cost of living is less there, they expect less. Bring them over on H-1B when needed. I’ve written about the H-1B program. This is nothing against the individuals on that program. I’ve worked with very talented H-1Bs, and some very inferior ones, just like American citizens.

But the corporate sleight of hand goes something like this: we can get those H-1B visas, and they're not going to ask for pesky stuff like Sunday off for church. We can put them on call 24/7 and they can't say no, because if they do, we kick them back to their country. Same thing that's happened with migrant labor in farming over the past century. Even if it doesn't look like it on the surface, corporations know that they can pay H-1B employees less than American citizens. If a US citizen and an H-1B recipient both make $120K but the H-1B works double the hours because they have no room to push back and are under threat of being sent home, they are making 50% less than the American per hour.
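The per-hour claim is just arithmetic on the numbers in that example (the $120K salary and the hour counts are the text's illustration, with standard 40- and 80-hour weeks assumed):

```python
# Same nominal salary, double the hours: effective hourly pay halves.
salary = 120_000
hours_citizen = 40 * 52  # 40-hour weeks
hours_h1b = 80 * 52      # 80-hour weeks, per the text's example

rate_citizen = salary / hours_citizen
rate_h1b = salary / hours_h1b
print(f"Citizen: ${rate_citizen:.2f}/hr, H-1B: ${rate_h1b:.2f}/hr")
print(f"H-1B effective rate: {rate_h1b / rate_citizen:.0%} of the citizen's")
```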

Amazon/AWS: 14,000 laid off.

Microsoft: 15,000 total just in 2025.

Salesforce: My favorite one. 4,000 customer support roles cut. And the CEO is gloating about it in interviews. So great that he can replace workers!

Now Salesforce is admitting that they fucked up. They cut too many people.

I’ve been on the other side of this when I was an engineering director at a Fortune 500 company. They were neurotic about tracking AI use. Spending an exorbitant amount of money on shitty AI tools. Like tools from a year ago. GPT-4o in the GPT-5 era, in the Anthropic dominance era. More or less useless.

Not only would they monitor all usage across employees, specifically who’s using it how much, they could see every single message being sent to the AI. So theoretically you could check somebody’s queries and do a performance evaluation based on where you perceive them to be.

Pretty creepy.

They’re using yesterday’s tools because of regulation and compliance, blowing an absurd amount of money. The CEO just sees a lot going out the door: “I thought these were supposed to save money, what’s going on here?” So his lieutenants have to get a grip on it, monitor everything. Even at Amazon this is being included in performance reviews, how much they’re using AI.

That’s why if you’re a software developer wondering why your boss is on you about using AI tools. They’re probably getting pressure from their bosses. I want my engineers to be as productive as possible. I think AI is probably part of that tool belt for everyone at this point. But is tracking it really the best way? I’m going to gauge performance on metrics, how you interacted with the team, what you shipped. It puts the cart before the horse to say you’re paid by how much you use these AI tools. If I’m a developer, I’m just spinning up a script to send lorem ipsum text 24 hours a day, get maximum ratings because I used GPT-3.5 the most.

MIT did a study in summer 2025. They’re saying 95% of companies report zero measurable ROI on AI products.

Actually not that crazy if you consider that a lot of them did layoffs and subbed in AI. That’s just going according to plan in my estimation.

They estimate about $30 to $40 billion has been spent on enterprise AI. That’s the money your JPMorgan Chase is spending for their engineers to use Claude Code or whatever dated tool they have access to.

The market heard that signal when the study came out, and the scale of the reaction isn't encouraging for an AI bubble pop, because it indicates this is a big part of the economy.

January through February: Maybe down valuations, just a very flat market without much growth.

Q1 to Q2: We’ll start to see a couple businesses, or maybe one major one at first like a domino starting to fall, not able to raise capital at their 2025 valuation. We’ll see valuations go down. VCs will be like: “Not touching it, not giving them more money, cutting our losses.”

Then timing a little more indeterminate but these things will happen quickly in succession:

  • Credit markets start to tighten

  • Debt refinancing pressure builds up in the system

  • Nvidia revises its revenue guidance to something at least vaguely linked to reality

And that’s when the big reckoning begins.

I want to be clear. AI is not going anywhere. It’s going to continue being a mainstream part of the world. I like a lot of these tools, I think they’re very helpful.

But the talking heads have really promised us the world. It’s clear the technology cannot deliver above and beyond what it’s doing now.

We’ve seen progress of these models slow at an exponential rate of slowing over the past few releases. Each release is better, but the gap between the current release and the last release is much smaller than it used to be.

Because of that, a lot of these AI companies are going to survive, but their valuations are going to get giga slashed.

Valuations of $500 billion for OpenAI, $42 billion for Anthropic are unsustainable. We’re going to actually see them become unsustainable in 2026 as they’re eventually cut. Smaller companies will face those slashes first. But it’s coming for the major AI labs as well.

This is good news, honestly. The AI hype has really turned tech into something different than it used to be. While I don't feel AI is going anywhere, I do feel we'll get a little more back to normal once this bubble pops and resolves.

What do you think? Drop a comment below. Subscribe if you found this interesting.
