Direct Current Data Centers

Original link: https://terraformindustries.wordpress.com/2026/01/30/direct-current-data-centers/

## Scaling AI to Kardashev Level 1: Solar and Batteries as the Limit

This analysis argues that AI compute reaching Kardashev Level 1 status (harnessing the energy of an entire planet) will be powered by massive solar and battery infrastructure, *not* fossil fuels. The authors challenge the view that natural gas backup is necessary, showing that deleting the gas component can be faster and cheaper, optimizing for "tokens per dollar."

The core idea is simple: GPUs are expensive, but all they need is a steady DC power supply. Solar panels produce DC and batteries store it, a natural and efficient pairing. The optimal system consists of large solar farms (roughly 15 acres per MW of AI load) wired directly to battery banks and GPU racks, deleting costly and complex components such as the grid, turbines, and converters.

Land use is a tradeoff, but solar and battery costs are falling fast enough to make them more economical than gas alternatives. The approach also opens the door to space-based AI, exploiting simplified regulation and continuous sunlight, though ground-based systems remain more cost-effective for now.

Ultimately, the path to scaling AI is to "delete" every non-essential element and focus entirely on maximizing compute with a clean, all-DC solar+battery system. Doing so would move humanity beyond its current energy level (K = 0.65-0.73) toward full planetary energy use (K = 1).

The Hacker News discussion centers on the energy demands of large data centers, particularly those powering AI projects. One key point: for gigawatt-scale facilities, gas turbines currently hold practical advantages over solar in land requirements and in the complexity of long-distance energy transmission, since gas provides dispatchable, 24/7 power with scalable storage. One commenter argued the ideal solution is a hybrid approach, mixing both sources upstream of the grid. Others worried that government funding could produce inefficient, over-engineered projects, echoing problems seen in developing countries.

The conversation also touched on future concepts such as space data centers (possibly a land-grab strategy) and questioned prioritizing energy-hungry AI over basics like food production and transportation. Some defended AI research as a challenging, well-funded problem worth solving regardless of its eventual applications.

Original Article

Casey Handmer, Matt Weickert

This post explains our current views on how humanity will achieve Kardashev Level 1 status by exploiting the full energy resources of an entire planet. More specifically, how pure solar+batteries will power AI scaleup beyond gas turbine manufacturing limits.

It is an extension to my earlier post of March 2024 on using solar to power AI datacenters, and a response of sorts to the Scale Microgrids paper that showed a mix of solar and gas could reduce emissions for the developers and operators of next gen AI datacenters. In that paper, Kyle Baranko, Duncan Campbell and co-authors showed that around 90% solar with local natural gas backup generators would be the fastest way to get power. In this work, we show that taking this trend to its obvious conclusion and deleting all the legacy fuel-based power components can be even faster and cheaper. We also include a discussion of space-based inference. 

Let’s examine this problem from first principles. What is silicon cognition?

You can call it a tensor core, a Blackwell, a GPU, but these are all versions of the same thing. A sliver of silicon with billions of transistors, through which cascade a torrent of electrons converting the entropy of a few volts to the entropy of information generation, and the entropy of waste heat. A GPU is a very complicated switch that regulates current flow, with some other side effects.

For the foreseeable future, the GPU will be the expensive part, currently valued at around $50,000/kW. All it needs to continue to operate is an infinite supply of moderately spicy electrons, that is, a DC power supply at a few volts. Given that making power is much simpler than thinking, the job of the power supply is to be uncomplicated and relatively cheap. In no universe should providing power be the hard part.

Solar and batteries are a natural match to this demand. A solar panel is a slice of silicon (without logic gates) that absorbs solar photons and drives electrons uphill. To a good approximation, a solar module is a constant current source that maxes out at about 40 V. A battery is a reversible chemical reaction that stores and releases electrons, and to a good approximation is a constant voltage source. Modern lithium chemistries hold at about 3.9 V across nearly their entire state-of-charge range. 

For logistical reasons related to the relative scarcity of copper in the crust of the Earth, it makes sense to operate solar cells, batteries, and GPUs in series so that the entire system runs at about 1000 V and each electron can be reused a few hundred times. 
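As a back-of-envelope check on the series arithmetic, using the nominal voltages quoted above (3.9 V per Li-ion cell, roughly 40 V per solar module):

```python
# Series-string arithmetic for a ~1000 V DC bus, using the nominal
# figures from the text.

CELL_V = 3.9      # V, Li-ion cell across most of its state-of-charge range
MODULE_V = 40.0   # V, solar module near its maximum power point
BUS_V = 1000.0    # V, target system voltage

cells_in_series = round(BUS_V / CELL_V)
modules_in_series = round(BUS_V / MODULE_V)

print(cells_in_series)    # 256 cells per battery string
print(modules_in_series)  # 25 modules per solar string
```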

Our radical claim is that, in the limit, Earth-based AI compute will look like this: 

By area, thousands of acres of solar panels. 

By cost, a pile of GPUs. 

In the limit, Earth-based AI compute will be a direct current (DC) solar array connected to a DC battery bank connected to a DC GPU rack. 

This approach brings numerous other advantages: 

  • No grid connection. 
  • No moving parts. 
  • No turbines. 
  • No gas connection. 
  • No nuclear fuel. 
  • No emissions. 
  • No power conversion. 
  • No transformers. 
  • No inverters. 
  • No power transmission. 

None of these parts make the AI smarter, and all of them can potentially intrude onto the critical path. 

Delete.

It sounds nice in theory, but how can this work?

The key metric to optimize is tokens per dollar. For example, take Scale Microgrids’ work on a 90-10 AI system, increase the size of the solar and battery farm enough to get to 99+% uptime, delete the gas power side, and compare overall economic productivity. A gas system that’s used only 1% or 0.1% of the time still costs time and money, and that’s the core reason why deleting it can end up reducing overall cost. 

The graph below shows the tokens per dollar landscape for two hypothetical solar powered AI systems, one with a gas powerplant (blue) and one without (orange). Both have a solar array (sized in nameplate multiples of the peak AI load) and a battery (sized in hours of capacity at full load). 

The key insight is that there are two stable attractors. One with a pure gas energy supply, with solar and battery supplementation for vibes, CO2 reduction, or marginal capacity expansion. The other with pure solar and batteries, no gas. The pure gas system capex is minimized with no solar and batteries, as natural gas itself is relatively cheap given no preference for emissions reduction. But the two manifolds intersect along a frontier, and beyond that the solar array and battery are capable enough that it’s actually cheaper to delete the gas powerplant entirely.

This tradeoff does not come at zero cost. In exchange for deleting the cost, complexity, and schedule risk of a gas powerplant comes the sizable land demands of a solar array. To a rough approximation, 15 acres of solar are required per MW of DC AI load. For reference, the USA has about 150 million acres of unpopulated desert west of the Mississippi, enough for 10 TW of AI development. 10 TW is much more than total global electricity generation today. There is plenty.

On the other hand, while fracked gas is relatively abundant (for now) the turbines that convert it into power are hard to make, hard to ramp, and largely already spoken for. If AI seeks growth beyond the production ramp of turbines, it is clear which way the wind is blowing. 

Before we get to the methods section, I’ll give a rough heuristic for performance. Assuming an on-off binary state on the load, a 15 MW solar + 15 MWh battery can get to ~99% utilization anywhere in the US south west, but is that good enough? The short answer is yes – maximizing tokens per dollar spent, or ROI, justifies throttling demand on a few of the longest, coldest nights of the year. 

But it’s actually better than that. Remember that a GPU is a glorified silicon switch intermediating the flow of electrons downhill. Power consumption is proportional to clock frequency multiplied by the square of the voltage (P ∝ f·V²). GPU power consumption is not fundamental: Token production rate is. If we’ve deleted DC-DC converters then voltage is set by the state of the battery, and frequency is controlled by software. This means that a 3% reduction in token production rate can buy us a 9% reduction in power consumption. So the math changes from 99% utilization to more like 99.7%. This shifts the economics around solar and battery plant sizing considerably, given that GPU frequency modulation allows for a 3x discount in actual utilization and token production.
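The throttling arithmetic can be sketched directly. This assumes, per the text, that voltage and clock frequency are scaled together so power goes roughly as the cube of clock rate while token rate is linear in it:

```python
# With DC-DC conversion deleted, voltage tracks the battery and clock
# frequency is set in software; scaling both together gives
# P ∝ f·V² ≈ f³, while token production rate scales as f.

def power_fraction(token_rate_fraction: float) -> float:
    """Relative power draw, assuming P scales as the cube of clock rate."""
    return token_rate_fraction ** 3

# A 3% cut in token production rate:
saving = 1.0 - power_fraction(0.97)
print(f"{saving:.1%} power saving")  # ≈ 8.7%, the text's "~9%"
```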

There is one other implication of these wildly capable and versatile solar+battery AI data centers. They have enough power to operate at full, or nearly full, capacity for the entire year. For 10 months of the year they are oversupplied, and can provide electricity and low grade heat (from their cooling systems) to neighboring customers essentially for the marginal cost of power transport. These could be seasonal or intermittently friendly loads such as the synthetic hydrocarbons and primary materials being pioneered at Terraform, and/or local communities. At Terraform, we believe that power should be as cheap as possible. 

Methods

Epistemology. How is it possible for this lightly evolved monkey to know these things?

You will need:

A year (at least) of real time solar data from a target location. This is data for an EW fixed tilt array in Texas that we generated by feeding fixed south tilt data into a slightly non-trivial geometry model.

A solar PV module IV curve model. This is based on the JAM72S30-540/MR/1500V but they’re all pretty similar.

A Li-ion discharge voltage curve. 

A frequency-power curve for a typical GPU.
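A minimal sketch of such a module IV-curve model, in the single-diode style: nearly a constant current source with an exponential knee that collapses near open circuit. The parameter values here are illustrative placeholders, not the JAM72S30-540 datasheet figures:

```python
import math

# Simplified PV module I-V curve: a flat current plateau with an
# exponential knee near Voc. Isc, Voc, and the lumped thermal-voltage
# term are assumed values for illustration only.

ISC = 13.9    # A, short-circuit current (assumed)
VOC = 49.5    # V, open-circuit voltage (assumed)
N_VT = 2.0    # V, lumped ideality * thermal voltage * cell count (assumed)

def module_current(v: float) -> float:
    """Terminal current at voltage v: flat plateau, then a knee at Voc."""
    return max(ISC * (1.0 - math.exp((v - VOC) / N_VT)), 0.0)

print(module_current(0.0))    # plateau, close to Isc
print(module_current(40.0))   # still near Isc at the working voltage
print(module_current(VOC))    # zero current at open circuit
```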

Plug these all together in a model that charges the battery when the sun is up, provided the panel voltage is high enough. We initially simulated a system with no power electronics whatsoever, but found that battery charging efficiency was inhibited when the battery state of charge was low, because pulling the panels to a lower voltage actually decreased their efficiency. Given that MPPTs are not that expensive, we could put them back in.

Then, provided the battery can deliver power, the GPUs are powered and we count how many tokens are generated.

Throw in a basic “governor” that throttles the GPU when it predicts the battery will be exhausted before dawn. 
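The loop described above can be sketched roughly as follows. Everything here is a toy: the solar profile is a half-sine, the battery is deliberately undersized so the governor visibly throttles, and all sizing is illustrative, whereas the real model uses a year of measured irradiance, a module IV curve, and a battery discharge voltage curve:

```python
import math

# Toy hourly simulation: charge from solar, power the GPU, count tokens,
# and ration the battery overnight with a crude dusk-to-dawn governor.

LOAD_MW = 1.0        # AI load at full clock
SOLAR_MW = 15.0      # nameplate solar (illustrative)
BATTERY_MWH = 8.0    # storage, undersized on purpose

def solar_output(hour: int) -> float:
    """Toy diurnal profile: half-sine between 06:00 and 18:00."""
    h = hour % 24
    return SOLAR_MW * math.sin(math.pi * (h - 6) / 12) if 6 <= h <= 18 else 0.0

def governor(soc_mwh: float, hour: int) -> float:
    """Throttle fraction: ration remaining charge across hours until dawn."""
    hours_to_dawn = (6 - hour % 24) % 24
    horizon = max(hours_to_dawn, 1)  # cover at least the current hour
    return min(1.0, soc_mwh / (LOAD_MW * horizon))

soc, tokens = BATTERY_MWH, 0.0
for hour in range(24 * 10):                # ten simulated days
    gen = solar_output(hour)
    frac = 1.0 if gen > 0 else governor(soc, hour)
    draw = LOAD_MW * frac ** 3             # P ~ f^3 under throttling
    soc = min(BATTERY_MWH, max(0.0, soc + gen - draw))
    tokens += frac                         # token rate ~ clock frequency

print(f"utilization over 10 days: {tokens / (24 * 10):.1%}")
```

Even this crude governor shows the key behavior: throttling a little all night beats running flat out and going dark before dawn, because the cubic power law makes small frequency cuts cheap in tokens.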

This graph shows performance over a ten day period in winter. Note how the governor throttles output early on the fourth day by rationing power until the following morning. The cubic power consumption of GPUs means that throttling a little bit early is much better for token production than running full blast into a wall and then dropping to zero production until the sun comes back up.

Now run thousands of simulations for every combination of battery size and array size, measuring overall utilization of the load. 

This chart shows yearly utilization of a GPU asset given solar and battery sizes, including our basic governor. Note that a “steepest” ascent starting at zero solar and zero batteries turns first on adequate solar, then adequate batteries, then marginal solar, then marginal batteries. This reflects the shape of the resource curve and the degree of exploitation required to get to the marginal nth 9 of reliability.

This chart shows curtailment reduction with adoption of the minimum viable governor vs some naive on/off operator, showing 2.3-2.6x improvement, which is close to the 3x implied by the GPU’s cubic power consumption. This governor is not very sophisticated, for example, it has no ability to take weather prediction into account. It merely assesses the time of day, the state of the battery and of solar generation and curtails GPU utilization accordingly. 

Throwing in assumptions about capex, we can assess capital efficiency.

This chart shows token production per dollar (in arbitrary units), showing a rather broad peak with considerable flexibility. Adding too much solar or batteries degrades capital efficiency – the correct response is to add more GPUs in this case. 

Because the peak is so broad, there is freedom to choose for one additional preference. That is, we can alter the size of the array and the battery by 20-30% with respect to the load and still get much the same return on capital. Given that land is finite, we may want to maximize tokens per acre while holding development cost constant, which puts us towards the lower edge of the peak in the diagram above. Then, holding land use and tokens per dollar equal, adding more battery towards the bottom right of the peak increases absolute token production on a fixed GPU and solar array asset base. This mirrors actual operational optimization, which is to say, pave all available land with solar, then add GPUs and batteries until revenue peaks. 

At last, the machinery to perform a comparison with a gas or gas-solar hybrid system is in place. Plug in some assumptions around GPU cost, solar cost, battery cost, gas turbine cost, gas fuel cost, and amortization period, and you can produce this chart.

Here we assume that GPUs are $50,000/kW, batteries (including all ancillary power electronics) are $200/kWh, solar is $200/kW, gas turbines are $2500/kW, gas is $55/MWh, and we’re amortizing over 10 years. 
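Under these assumptions, a back-of-envelope amortized comparison looks like this. The 5x solar array and 15 hour battery sizing is an illustrative operating point, not the optimizer's output, and GPU capex (common to both options) is omitted:

```python
# Amortized $/kW of continuous AI load over 10 years, using the stated
# cost assumptions. Sizing is an illustrative assumption.

AMORT_HOURS = 10 * 8760

def gas_cost_per_kw() -> float:
    """$2500/kW turbine capex plus $55/MWh fuel for one kW, 24/7."""
    return 2500.0 + (55.0 / 1000.0) * AMORT_HOURS

def solar_battery_cost_per_kw(solar_mult: float = 5.0,
                              battery_hours: float = 15.0) -> float:
    """$200/kW solar and $200/kWh battery at the assumed sizing."""
    return 200.0 * solar_mult + 200.0 * battery_hours

print(f"gas:           ${gas_cost_per_kw():,.0f}/kW")            # $7,318/kW
print(f"solar+battery: ${solar_battery_cost_per_kw():,.0f}/kW")  # $4,000/kW
```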

The chart suggests the possibility that under some set of assumptions, it’s actually cheaper to delete the gas power system entirely, so what are those assumptions? For any given cost, select the peak utility point for solar array and battery size and marginalize across these parameters.

The left side of this chart shows where a pure solar system is the best value. As a rough rule of thumb, this is where 1.3 x battery cost + solar cost < $500/kW(h). 

As a sanity check, in early 2026 we’re seeing large scale integrated battery storage systems ship from China for well below $150/kWh, while the cheapest industrial scale solar systems are going in for under $200/kW-DC-nameplate. This is beyond the critical cost threshold – delete the gas system.
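Plugging the quoted price points into the rule of thumb confirms the conclusion:

```python
# The rule of thumb from the text: a pure solar system is the best value
# where 1.3 x battery $/kWh + solar $/kW < $500.

def pure_solar_wins(battery_usd_per_kwh: float,
                    solar_usd_per_kw: float,
                    threshold: float = 500.0) -> bool:
    return 1.3 * battery_usd_per_kwh + solar_usd_per_kw < threshold

print(pure_solar_wins(150.0, 200.0))  # 1.3*150 + 200 = 395 < 500 -> True
```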

Let’s recap.

It is possible and even optimal to run a datacenter on pure solar and battery. The optimum level of availability is between 99% and 99.9% utilization, with the balance taking the form, primarily, of throttled use rather than lights out. 

The pure solar+battery data center is cheaper than a gas-assisted or pure gas data center if solar and battery costs per kW are <~10% of the turbine/gas generator cost. For example, if a 1 GW gas turbine costs $2.5b and the solar array costs $250m/GW and the BESS costs $250m/GWh, then the utility is roughly at parity. 

We estimated the opportunity cost of a one-month delay in delivery of a 1 GW gas turbine at about $20m, which probably isn’t high enough to justify deleting the component out of pure suspicion, provided you are confident it will eventually be delivered. 

On the other hand, we found that it is possible to commission a pure solar+battery data center with high utility and then backfill additional GPU capacity if/when a gas turbine becomes available. 

There is broad latitude for design flexibility within the peak utility (tokens/$) range. For example, there are points with equal utility that have 20% less solar or significantly more availability, depending on secondary constraints such as land availability.

Given that, long term, trends will favor pure solar+batteries and the performance relative to complexity is already favorable, there is an argument that one hyperscaler should probably move aggressively in that direction, so as to obtain differentiation. 

We investigated what the performance hit would be for a pure solar+battery DC power system that deleted non-computational silicon, that is, inverters, converters, and MPPTs between the solar and battery components, and even in the racks themselves. We found that the power system performance relative to cost improved with the deletion of AC and DC-DC conversion components between the arrays and batteries. Again, this is a nod to the future we will converge on.

Space AI

Late 2025 saw much speculation about space-based AI. It seems to me that SpaceX, with their incumbent advantages in launch and Starlink hardware expertise, may be able to ship gigawatts of inference compute into Earth orbit for something like 2x the per token cost of ground-based AI, but that this would still be quite profitable. Why bother? It’s a separate delivery and distribution channel that isn’t congested by the usual permitting and regulatory nonsense at play on the surface, or at least, a different and uniform set. And if you have unlimited launch upmass it helps to have a profitable use case, like Starlink or orbital AI, to soak up that supply. 

Let’s list their respective advantages and disadvantages.

Space AI

Positives:

  • Simplified and unified regulatory regime.
  • High altitude (800 km – 2400 km) dawn-dusk sun-synchronous orbits are never in shade (except momentarily during rare lunar eclipses), so they don’t need batteries.
  • Don’t need to be cheaper than ground AI, as long as they’re cash flow positive.
  • Infinite source of marginal launch demand can fill in gaps from other customers for a very large rocket.
  • Passive stabilization with dihedral solar arrays is possible.

Negatives:

Ground solar AI

Positives:

  • Mostly a land development play, plenty of unused land on the ground.
  • Batteries are relatively cheap compared to GPUs and enable operation overnight, getting cheaper all the time. 
  • Almost certainly cheaper than launch and space rating of components.
  • Don’t require a million tonnes a year to low Earth orbit to deploy.
  • Easier to maintain/retrofit.
  • Lower latency (closer to end users).
  • Default human Earth surface environment is less hostile to hardware.
  • Can easily cool using HVAC and air. 

Negatives:

  • Regulatory/permitting is painful, byzantine, and locally variable.

Fundamentally this is a bet that GPUs are so valuable on a per gram basis that even launching them to space helps improve the economic utility of a Watt of solar power. 

Finally, an application of space-based solar power that can justify something like the vision of Gerry O’Neill. To be clear, this is because the value of a watt of space-transmitted microwave power encoding an intelligent token of data is about a trillion times higher than the value of a watt of space-transmitted microwave power competing with your local power plant to supply the grid.

Conclusion

Whether in space or on the ground, turbines are irrelevant to reaching Kardashev Level 1. The fastest growing AI will win, and the fastest growing AI must delete all non-essential parts. The only essential parts are a solar array, a battery, and the GPU itself. 

Aside: Carl Sagan’s extension of Kardashev Levels from integers to reals

Planetary power: 10^16 W -> K = 1
Stellar power: 10^26 W -> K = 2
Galactic power: 10^36 W -> K = 3

One step on the Kardashev scale is equivalent to increasing power consumption by a factor of 10 billion. 
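These anchors imply Sagan's interpolation formula, K = (log10 P - 6) / 10 with P in watts. A quick check against the quoted figures:

```python
import math

# Sagan's continuous Kardashev scale: K = (log10(P) - 6) / 10, P in
# watts, so one whole level is a factor of 10^10 in power.

def kardashev(power_watts: float) -> float:
    return (math.log10(power_watts) - 6.0) / 10.0

print(round(kardashev(1e16), 2))    # 1.0  (planetary)
print(round(kardashev(3.5e12), 2))  # 0.65 (global electricity)
print(round(kardashev(2.0e13), 2))  # 0.73 (global fuel)
```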

What is the current level of humanity?

Global electricity production: 3.5 TW -> K = 0.65.
Global fuel consumption: 20 TW -> K = 0.73.
Global planetary surface use for agriculture: 13% -> K = 0.91.

If all of Earth’s land was paved with solar PV at 26% efficiency -> K = 1.01. 
If the entire Earth including oceans was paved with solar PV at 26% efficiency -> K = 1.06.
If we fill the unshaded dawn/dusk sun synchronous orbital (SSO) band (800 km to 2500 km) with SpaceX AI satellites, 10^17 W available -> K = 1.04.
All of Earth plus the SSO orbital band -> K = 1.085.

Convert the entire Moon into 2 kg/m^2 solar inference at Earth’s solar orbital radius -> K = 1.91.
Convert Mercury into a Dyson sphere (7 kg/m^2 density) @ 26% PV efficiency -> K = 1.9998.

No way to get to K2 without a slightly more efficient solar panel! 
