In Edison’s Revenge, Data Centers Are Transitioning From AC to DC

Original link: https://spectrum.ieee.org/data-center-dc

## The Shift to DC Power in AI Data Centers

Nvidia's GTC conference highlighted a key challenge: as AI chips draw ever more power, data center infrastructure is struggling to keep up. A major bottleneck is inefficient power delivery, which traditionally relies on multiple AC-to-DC conversions. Companies such as Delta, Vertiv, and Eaton are responding by moving to DC distribution designs, especially in hyperscale data centers.

Today's data centers convert power several times (from high-voltage AC to low-voltage AC, then to the DC that servers require), incurring energy losses and demanding large amounts of copper (up to 200 kg per 1 MW rack!). High-voltage DC (800 VDC) systems aim to bypass these conversions and feed servers directly, significantly improving efficiency (by up to 5 percent) and cutting copper use (by as much as 45 percent).

Early adopters are appearing in China, and initiatives such as the Mt. Diablo Initiative are experimenting with 400 V DC. While a full industry transition will require standardized safety frameworks and a robust supply chain, vendors are developing 800 VDC ecosystems expected to be commercially available as early as 2026, marking a major shift in data center power architecture.

## Data Centers Turn to DC Power

A recent IEEE article sparked a discussion on Hacker News about moving data center power from traditional alternating current (AC) to direct current (DC). The article frames this as a comeback for Edison's DC system, given DC's potential efficiency advantages in modern data center environments.

Commenters, however, debated the merits of the shift. While DC-DC converters are highly efficient, one user argued that a three-phase delta 800 Vrms AC system at 400 Hz has advantages of its own: fewer conductors, minimal skin effect that allows cheaper wiring, and the ability to leverage existing, mature infrastructure, particularly from the aerospace industry. They also pointed to the difficulty and cost of large DC circuit breakers, and to how easily high-frequency AC can be rectified to near-DC with minimal components.

The discussion highlights the complexity of switching power systems, weighing DC's advantages against the established efficiency and reliability of high-frequency AC.

## Original Article

Last week’s Nvidia GTC conference highlighted new chip architectures to power AI. But as the chips become faster and more powerful, the remainder of data center infrastructure is playing catchup. The power delivery community is responding: Announcements from Delta, Vertiv, and Eaton showcased new designs for the AI era. Complex and inefficient AC to DC power conversions are gradually being replaced by DC configurations, at least in hyperscale data centers.

“While AC distribution remains deeply entrenched, advances in power electronics and the rising demands of AI infrastructure are accelerating interest in DC architectures,” says Chris Thompson, vice president of advanced technology and global microgrids at Vertiv.

AC to DC Conversion Challenges

Today, nearly all data centers are designed around AC utility power. The electrical path includes multiple conversions before power reaches the compute load. Power typically enters the data center as medium-voltage AC (1kV to 35kV), is stepped down to low-voltage AC (480V or 415V) using a transformer, converted to DC inside an uninterruptible power supply (UPS) for battery storage, converted back to AC, and converted again to low-voltage DC (typically 54 V DC) at the server, supplying the DC power computing chips actually require.
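Each conversion stage in that chain loses a little energy, and the losses compound multiplicatively. The sketch below illustrates this; the per-stage efficiency figures are hypothetical round numbers for illustration, not values from the article.

```python
# Illustrative model of cumulative efficiency through a conventional
# AC distribution chain. Per-stage efficiencies are assumed round
# numbers, not measured figures.
STAGES = [
    ("MV transformer (medium-voltage AC -> 480 V AC)", 0.99),
    ("UPS rectifier (480 V AC -> DC battery bus)",     0.97),
    ("UPS inverter (DC -> AC)",                        0.97),
    ("Server PSU (AC -> 54 V DC)",                     0.95),
]

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for _, stage_eff in stages:
        eff *= stage_eff
    return eff

print(f"End-to-end efficiency: {chain_efficiency(STAGES):.1%}")
```

Under these assumed numbers, four individually efficient stages still lose more than 10 percent of the power end to end, which is why removing conversion steps matters.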

“The double conversion process ensures the output AC is clean, stable and suitable for data center servers,” says Luiz Fernando Huet de Bacellar, vice president of engineering and technology at Eaton.

That setup worked well enough for the amounts of power required by traditional data centers. Traditional data center computational racks draw on the order of 10 kW each. For AI, that is starting to approach 1 MW. At that scale, the energy losses, current levels, and copper requirements of AC to DC conversions become increasingly difficult to justify. Every conversion incurs some power loss. On top of that, as the amount of power that needs to be delivered grows, the sheer size of the converters, as well as the connector requirements of copper busbars, becomes untenable. According to an Nvidia blog, a 1 MW rack could require as much as 200 kg of copper busbar. For a 1 GW data center, it could amount to 200,000 kg of copper.
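The copper figures quoted above scale linearly, which a quick back-of-envelope check confirms:

```python
# Back-of-envelope check of the copper figures the article quotes
# from Nvidia's blog: ~200 kg of copper busbar per 1 MW of rack
# power, scaled linearly with facility size.
KG_COPPER_PER_MW = 200

def copper_kg(total_power_mw):
    """Estimated copper busbar mass for a given total power in MW."""
    return KG_COPPER_PER_MW * total_power_mw

print(copper_kg(1))     # one 1 MW rack: 200 kg
print(copper_kg(1000))  # a 1 GW facility: 200,000 kg
```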

Benefits of High-Voltage DC Power

By converting 13.8 kV AC grid power directly to 800 VDC at the data center perimeter, most intermediate conversion steps are eliminated. This reduces the number of fans and power supply units, and leads to higher system reliability, lower heat dissipation, improved energy efficiency, and a smaller equipment footprint.

“Each power conversion between the electric grid or power source and the silicon chips inside the servers causes some energy loss,” says Fernando.

Switching from 415 V AC to 800 V DC in electrical distribution enables 85 percent more power to be transmitted through the same conductor size. This happens because higher voltage reduces current demand, lowering resistive losses and making power transfer more efficient. Thinner conductors can handle the same load, cutting copper requirements by 45 percent while delivering a 5 percent improvement in efficiency and a 30 percent reduction in total cost of ownership for GW-scale facilities.
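The underlying scaling is simple: for a fixed power P over a conductor of resistance R, the current is I = P / V and the resistive loss is I²R, so loss falls with the square of the voltage. The sketch below shows only that scaling; it treats 415 V as a simple DC-equivalent figure (ignoring three-phase and power-factor details), and the resistance value is an assumed round number.

```python
# Why higher distribution voltage helps: for a fixed delivered power P
# over a conductor of resistance R, current I = P / V and the
# resistive loss is I^2 * R, so loss falls with the square of V.
# The 1-milliohm resistance is an assumed value for illustration.
def resistive_loss_w(power_w, voltage_v, resistance_ohm):
    """Conduction loss in watts for a given load, voltage, and resistance."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

P = 1_000_000  # 1 MW load
R = 0.001      # 1 milliohm of busbar resistance (assumed)

loss_415 = resistive_loss_w(P, 415, R)
loss_800 = resistive_loss_w(P, 800, R)
print(f"Loss at 415 V: {loss_415 / 1e3:.1f} kW")   # ~5.8 kW
print(f"Loss at 800 V: {loss_800 / 1e3:.1f} kW")   # ~1.6 kW
print(f"Reduction factor: {loss_415 / loss_800:.2f}x")  # (800/415)^2, ~3.7x
```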

“In a high-voltage DC architecture, power from the grid is converted from medium-voltage AC to roughly 800 V DC and then distributed throughout the facility on a DC bus,” said Vertiv’s Thompson. “At the rack, compact DC-DC converters step that voltage down for GPUs and CPUs.”

A report from technology advisory group Omdia claims that higher voltage DC data centers have already appeared in China. In the Americas, the Mt. Diablo Initiative (a collaboration among Meta, Microsoft, and the Open Compute Project) is a 400 V DC rack power distribution experiment.

A handful of vendors are trying to get ahead of the game. Vertiv’s 800 V DC ecosystem, which integrates with NVIDIA Vera Rubin Ultra Kyber platforms, will be commercially available in the second half of 2026. Eaton, too, is well advanced in its 800 V DC systems innovation, courtesy of a medium-voltage solid-state transformer (SST) that will sit at the heart of its DC power distribution system. Meanwhile, Delta has released 800 V DC in-row 660 kW power racks with a total of 480 kW of embedded battery backup units. And SolarEdge is hard at work on a 99-percent-efficient SST that will be paired with a native DC UPS and a DC power distribution layer.

But much of the industry is far behind. Patrick Hughes, senior vice president of strategy, technical, and industry affairs for the National Electrical Manufacturers Association, says most innovation is happening at the 400 V DC level, though some are preparing 800 V DC. He believes the industry needs a complete, coordinated ecosystem, including power electronics, protection, connectors, sensing, and service‑safe components that scale together rather than in isolation. That, in turn, requires retooling manufacturing capacity for DC‑specific equipment, expanding semiconductor and materials supply, and clear, long‑term demand commitments that justify major capital investment across the value chain.

“Many are taking a cautious approach, offering limited or adapted solutions while waiting for clearer standards, safety frameworks, and customer commitments,” said Hughes. “Building the supply chain will hinge on stabilizing standards and safety frameworks so suppliers can design, certify, manufacture, and install equipment with confidence.”

