Visualizing The Compute, Cash, & Contracts That Power OpenAI

Original link: https://www.zerohedge.com/ai/visualizing-compute-cash-contracts-power-openai

## The AI Infrastructure Ecosystem: Summary

OpenAI's AI models, such as ChatGPT, depend on a complex and increasingly interconnected infrastructure network, as a recent visualization shows. The system is organized around three key flows: **compute**, **cash**, and **contracts**, revealing a circular funding pattern.

Key players include **Nvidia** ($4.6 trillion), **Microsoft** ($3.8 trillion), **TSMC** ($1.5 trillion), and **OpenAI** ($500 billion). Nvidia's GPUs are critical, and their fragile supply chain spans design (Nvidia), fabrication (TSMC), assembly (Quanta/Foxconn), and deployment via cloud providers such as **Microsoft Azure** and **CoreWeave**.

**Microsoft** is unique in being both an investor in OpenAI *and* its compute provider, driving a closed-loop system. This interdependence, with capital and chips circulating among a handful of dominant firms, creates potential risk. Intense demand for GPUs, which are even used as loan collateral, underscores their strategic importance and has prompted export controls and domestic alternatives. Analysts warn that such tight coupling could amplify shocks from abrupt changes in demand or funding and could fuel an AI bubble.


Original Article

In order to train and deploy cutting-edge AI models like ChatGPT, OpenAI relies on a sprawling infrastructure network involving multiple billion-dollar entities, intricate contracts, and vast capital commitments. A new visualization from Made Visual Daily maps this infrastructure pipeline using three flows—compute, cash, and contracts—highlighting the increasingly circular nature of AI development funding.

The map, via VisualCapitalist.com, synthesizes data from public financial reports, media disclosures, and filings in an attempt to show who builds what, who pays whom, and where potential risk may be accumulating in the system.

The biggest nodes in the diagram are familiar names: Nvidia ($4.6 trillion), Microsoft ($3.8 trillion), TSMC ($1.5 trillion), and Oracle ($0.8 trillion).

OpenAI itself, valued at around $500B in its most recent secondary sale, anchors the middle of the chart. Microsoft, in particular, plays a dual role—both providing compute (via Azure) and injecting capital and GPU credits back into OpenAI.

The GPU Supply Chain: Scarcity, Dominance, and Dependency

The engine behind OpenAI—and much of today’s generative AI—is the Nvidia GPU.

But these chips don’t come out of thin air. The GPU supply chain is global and fragile:

  • Design: Nvidia designs the chips in-house.

  • Fabrication: TSMC (Taiwan Semiconductor Manufacturing Company) fabricates the chips at its advanced 5nm and 4nm nodes.

  • Assembly: The chips are then packaged and tested by firms like Quanta and Foxconn.

  • Deployment: Server makers such as Supermicro integrate them into AI-optimized racks and clusters.

  • Delivery: These clusters are shipped to cloud providers like Microsoft Azure and CoreWeave.

Any disruption along this chain—whether geopolitical, economic, or logistical—can send shockwaves through the entire AI sector. That’s why the U.S. has placed tight export controls on AI chips, and why countries like China are scrambling to develop domestic alternatives.

Demand for H100s has grown so intense that cloud firms and startups alike are reserving capacity months or even years in advance. In rare cases, some even use GPUs as collateral to secure financing, reinforcing their role as a new strategic commodity.

Closed-Loop Capital and the AI Bubble Risk

What makes the modern AI ecosystem remarkable isn’t just the number of players involved—it’s how deeply interwoven their financial and operational relationships have become.

Microsoft, for instance, has invested over $13 billion in OpenAI, while also serving as its primary cloud and compute partner through Azure. Much of OpenAI’s model training runs on clusters powered by Nvidia GPUs, procured via Microsoft’s cloud infrastructure.

At the same time, Microsoft is the primary customer of CoreWeave, a rapidly growing cloud provider that also buys large volumes of Nvidia hardware—often financed through credit arrangements with private investors and funds.

This creates an interdependent web of capital, compute, and contracts, where the same dollars and chips circulate between a handful of firms dominating AI’s supply chain. Analysts have noted that such tight coupling could magnify shocks if demand or funding conditions change abruptly.
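The "closed loop" the analysts describe can be made concrete by treating the relationships above as a directed graph and checking it for cycles. The sketch below is illustrative only: the nodes and edges are the flows named in the text (Microsoft funds OpenAI and buys CoreWeave capacity, OpenAI pays Microsoft for Azure compute, CoreWeave buys Nvidia hardware), not the visualization's actual dataset, and the edge set is deliberately simplified.

```python
# Illustrative model of the flows described in the article: an edge
# X -> Y means "X sends cash or compute commitments to Y". A directed
# cycle in this graph is the "closed loop" analysts warn about.
flows = {
    "Microsoft": ["OpenAI", "CoreWeave"],  # invests in OpenAI; buys CoreWeave capacity
    "OpenAI": ["Microsoft"],               # pays for Azure compute
    "CoreWeave": ["Nvidia"],               # buys GPU hardware
    "Nvidia": [],
}

def find_cycle(graph):
    """Depth-first search; returns one directed cycle as a node list, or None."""
    visiting, visited = set(), set()
    path = []

    def dfs(node):
        visiting.add(node)
        path.append(node)
        for nxt in graph.get(node, []):
            if nxt in visiting:  # back edge: we looped onto the current path
                return path[path.index(nxt):] + [nxt]
            if nxt not in visited:
                cycle = dfs(nxt)
                if cycle:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        path.pop()
        return None

    for start in graph:
        if start not in visited:
            cycle = dfs(start)
            if cycle:
                return cycle
    return None

print(" -> ".join(find_cycle(flows)))  # Microsoft -> OpenAI -> Microsoft
```

The point of the exercise is the structural one made in the text: once a funder is also a supplier (and a customer), the same dollars re-enter the loop, so a demand or funding shock propagates around the cycle rather than dissipating at an endpoint.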

To dig deeper into the relationship between OpenAI and its backers, explore our related post: OpenAI vs Big Tech.

