From Chips to Kilovolts: The Rise of the 10-Gigawatt Data Center
The shift from 100-megawatt campuses to 10-gigawatt installations represents a fundamental change in how the industry builds infrastructure. This post examines why the primary constraint on artificial intelligence has moved from silicon availability to grid capacity and how firms are restructuring their entire business models to secure energy. We are entering an era where the ability to manage a regional power grid is as important as the ability to design a neural network.
The shift from megawatt to gigawatt thinking
For the last decade, a 100-megawatt data center was considered a massive undertaking. These facilities were usually built in clusters, drawing from existing municipal infrastructure and relying on standard utility agreements. The scale of the SoftBank project in Ohio and of Microsoft’s recent moves in the MISO (Midcontinent Independent System Operator) territory changes those assumptions. A 10-gigawatt site is not a building; it is a city-scale industrial complex. To put that number in perspective, 10 gigawatts is roughly the summer peak demand of the entire city of New York.
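To make that scale concrete, here is a quick back-of-envelope sketch; the average household draw is an assumed figure, not a utility statistic:

```python
# Back-of-envelope scale check for a 10 GW campus.
# The 1.2 kW average household draw is an illustrative assumption.
SITE_GW = 10
AVG_HOUSEHOLD_KW = 1.2   # assumed average (not peak) household demand

households = SITE_GW * 1e6 / AVG_HOUSEHOLD_KW   # GW -> kW, then divide
print(f"{SITE_GW} GW ~ {households / 1e6:.1f} million average households")
```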
When you scale to this level, you stop being a customer of the grid and start becoming a primary component of it. Standard PUE (Power Usage Effectiveness) metrics, while still relevant for internal efficiency, are being overshadowed by the complexity of the interconnection queue. In many North American markets, the wait time to connect a new large-scale load to the grid now exceeds five years. Developers can no longer wait for utilities to build out capacity. They are forced to fund the construction of high-voltage transmission lines and substations themselves.
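PUE still matters for sizing, though, because it determines how much of a hard-won interconnection actually reaches the chips. A minimal sketch, assuming a PUE of 1.2:

```python
# How much of an interconnection reaches the IT load, given PUE.
# The 1.2 value is an illustrative assumption.
INTERCONNECTION_GW = 10.0
PUE = 1.2   # total facility power divided by IT equipment power

it_load_gw = INTERCONNECTION_GW / PUE
overhead_gw = INTERCONNECTION_GW - it_load_gw
print(f"IT load: {it_load_gw:.1f} GW, "
      f"cooling and distribution overhead: {overhead_gw:.1f} GW")
```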
This transition marks the end of the "plug and play" era for data centers. In the past, you chose a location based on tax incentives and fiber proximity. Today, the geography of AI is being redrawn by the location of underutilized high-voltage lines and the willingness of local regulators to approve massive energy draws. Ohio has become a hotspot because it sits at the intersection of major transmission corridors that were originally built to serve the heavy manufacturing industry. As those factories closed or modernized, they left behind a robust electrical "skeleton" that tech companies are now reanimating.
Why regional grid operators are the new gatekeepers
Securing the power is only half the battle; the other half is negotiating how that power is delivered. Organizations like MISO and PJM manage the flow of electricity across state lines, ensuring that the surge in demand from a new data center doesn't cause a brownout in a neighboring city. Microsoft’s partnership with MISO is a strategic move to gain better visibility into grid congestion and to coordinate long-term capacity planning.
Grid operators rely on weather-driven load forecasts and dispatch algorithms to keep generation matched to demand and the AC frequency stable. A 10-gigawatt data center represents a "baseload" demand that is almost entirely flat, which is both a blessing and a curse for the grid. It provides a steady revenue stream for utilities, but it offers zero flexibility. Unlike a residential area where demand drops at night, an AI training cluster runs near 100% load around the clock.
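The flatness shows up clearly in the load factor, the ratio of average to peak demand. A toy comparison, in which the residential curve is a synthetic illustration rather than metered data:

```python
# Contrast a flat AI-training load with a diurnal residential profile.
# The residential curve below is synthetic, not metered data.
import math

HOURS = range(24)
ai_load = [10.0 for _ in HOURS]   # GW, roughly flat around the clock
res_load = [6.0 + 4.0 * math.sin((h - 9) * math.pi / 12) for h in HOURS]

def load_factor(profile):
    """Average demand divided by peak demand; 1.0 means perfectly flat."""
    return sum(profile) / len(profile) / max(profile)

print(f"AI cluster load factor:  {load_factor(ai_load):.2f}")
print(f"Residential load factor: {load_factor(res_load):.2f}")
```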
This lack of flexibility is pushing developers to explore "behind-the-meter" solutions. We are seeing a move toward onsite generation, where the data center is co-located with a dedicated power source. This removes the reliance on the public transmission network and shields the operator from price volatility in the wholesale electricity market. If you own the power plant and the data center, you have removed the most significant external risk to your uptime.
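A toy calculation shows the shape of the risk being removed. Every price here is invented for illustration, not taken from any market:

```python
# Wholesale-price exposure for a flat 10 GW load versus a fixed-price
# supply. All prices are illustrative assumptions, not market data.
LOAD_MW = 10_000
HOURS_PER_YEAR = 8_760
FIXED_PRICE = 50.0   # $/MWh, assumed cost of owned or contracted generation

# A synthetic wholesale year: mostly moderate prices, a few scarcity spikes.
spot_prices = [45.0] * 8_600 + [800.0] * 150 + [3_000.0] * 10

fixed_cost = LOAD_MW * FIXED_PRICE * HOURS_PER_YEAR
spot_cost = LOAD_MW * sum(spot_prices)
print(f"Fixed-price energy cost:  ${fixed_cost / 1e9:.2f}B per year")
print(f"Spot-exposed energy cost: ${spot_cost / 1e9:.2f}B per year")
```

In this sketch, a few hundred scarcity hours flip the spot market from cheaper to far more expensive, which is exactly the exposure that behind-the-meter generation is meant to eliminate.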
The physical limits of heat and transmission
The engineering challenges of a 10-gigawatt site go beyond the electrical load. Moving that much energy requires 765 kV transmission lines, among the highest-voltage lines in service in North America. These lines are difficult to permit and expensive to build. Once the power reaches the site, the challenge shifts to thermal management: virtually all of the 10 gigawatts of electrical input ends up as 10 gigawatts of heat that must be rejected.
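On the delivery side, a rough circuit count shows why a single line is not enough. The per-circuit current limit below is an assumed figure; real ratings depend on conductor choice, line length, and weather:

```python
# How many 765 kV circuits would a 10 GW site need?
# The 2.2 kA per-circuit ampacity is an illustrative assumption.
import math

SITE_GW = 10.0
LINE_KV = 765.0
AMPACITY_KA = 2.2   # assumed per-circuit current limit

# Three-phase power: P = sqrt(3) * V_line * I
circuit_gw = math.sqrt(3) * (LINE_KV * 1e3) * (AMPACITY_KA * 1e3) / 1e9
circuits = math.ceil(SITE_GW / circuit_gw)
print(f"~{circuit_gw:.1f} GW per circuit -> at least {circuits} circuits, "
      f"plus spares for N-1 contingencies")
```

Delivery is only half the problem; the other half is getting the heat back out.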
Conventional air cooling cannot keep pace at this density. Even the most advanced HVAC systems cannot move enough air to dissipate the heat generated by several million H100 or B200 GPUs packed into a single geographic footprint. This forces a mandatory shift to liquid cooling, either through direct-to-chip cold plates or immersion systems. These cooling loops typically require massive amounts of water and a sophisticated chemical treatment infrastructure.
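The required coolant flow follows directly from heat = flow rate × specific heat × temperature rise. A minimal sketch, assuming a 100 kW rack and a 10 K coolant temperature rise (both illustrative figures):

```python
# Coolant flow for direct-to-chip liquid cooling, from Q = m_dot * cp * dT.
# The 100 kW rack load and 10 K temperature rise are illustrative assumptions.
CP_WATER = 4186.0   # J/(kg*K), specific heat of water
RACK_KW = 100.0     # assumed per-rack heat load
DELTA_T_K = 10.0    # assumed coolant temperature rise across the rack

m_dot = RACK_KW * 1e3 / (CP_WATER * DELTA_T_K)   # kg/s per rack
print(f"Per rack: {m_dot:.1f} kg/s (~{m_dot * 60:.0f} L/min of water)")

SITE_GW = 10.0
site_m_dot = SITE_GW * 1e9 / (CP_WATER * DELTA_T_K)
print(f"Site-wide: ~{site_m_dot / 1e3:.0f} tonnes of coolant per second")
```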
The sheer footprint of these sites also changes the math on latency. While we often think of latency in terms of long-haul fiber, internal data center latency becomes a factor when a campus spans several miles. Distributing a single training job across a 10-gigawatt site means managing the physical distance between the farthest racks. Engineers must design new network topologies that account for the speed of light across several kilometers of cabling, all while ensuring that the power delivery doesn't create electromagnetic interference with the high-speed data links.
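The speed-of-light budget is easy to quantify. Light in silica fiber travels at roughly c / 1.47; the refractive index here is an approximation:

```python
# Propagation delay across a multi-kilometer campus.
# The 1.47 refractive index of silica fiber is an approximation.
C_VACUUM_M_S = 299_792_458
FIBER_INDEX = 1.47

def one_way_delay_us(km: float) -> float:
    """One-way propagation delay in microseconds over `km` of fiber."""
    return km * 1e3 / (C_VACUUM_M_S / FIBER_INDEX) * 1e6

for km in (0.1, 1.0, 3.0, 5.0):
    print(f"{km:>4} km: {one_way_delay_us(km):5.1f} us one-way, "
          f"{2 * one_way_delay_us(km):5.1f} us round-trip")
```

At several kilometers, the round trip alone is tens of microseconds, which can dwarf the per-hop latency of the cluster interconnect itself.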
Power as the ultimate competitive moat
In the early days of the cloud, the "moat" was the software stack. Later, it became the proprietary silicon. Now, the moat is the energy contract. A company that secures a 20-year agreement for 5 gigawatts of nuclear power has a structural advantage that cannot be disrupted by a better algorithm. This is a return to a more traditional form of industrial sovereignty.
We see this in the race to acquire or restart decommissioned nuclear assets. The focus on SMR (Small Modular Reactor) technology is a direct response to the gridlock. If a tech firm can deploy its own nuclear reactors on-site, it bypasses the regional grid operator entirely. This creates a "sovereign compute zone" where the only limiting factor is how many chips the firm can buy.
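The reactor count implied by a 10-gigawatt site is worth spelling out. The per-unit rating below is an assumed, design-dependent figure:

```python
# Rough count of small modular reactors to power a 10 GW site.
# The 300 MWe rating and 90% availability are illustrative assumptions.
import math

SITE_MW = 10_000
SMR_MWE = 300        # assumed net electrical output per reactor
AVAILABILITY = 0.9   # assumed allowance for refueling and outages

units = math.ceil(SITE_MW / (SMR_MWE * AVAILABILITY))
print(f"~{units} reactors of {SMR_MWE} MWe each, allowing for downtime")
```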
This centralization of energy and compute creates a new set of risks. If the AI race is determined by who can build the biggest "power moat," we may see a future where compute capacity is concentrated in a handful of geographically isolated hubs. These sites become critical national infrastructure, requiring their own security forces and dedicated logistics chains. The "cloud" is becoming increasingly grounded in the physical reality of copper, steel, and uranium.
The end of the virtualized era
The industry spent decades trying to abstract away the underlying hardware. We talked about "serverless" and "the cloud" as if they were ephemeral concepts existing in a void. The 10-gigawatt data center kills that illusion. It reminds us that every token generated by a model has a direct, measurable cost in kilowatt-hours and gallons of water.
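Measurable, if only in rough terms. Both numbers below are hypothetical illustration values, not measurements of any real model:

```python
# Making the per-token cost concrete. Both inputs are hypothetical
# illustration values, not measurements of any real model.
ENERGY_PER_TOKEN_J = 0.5   # assumed inference energy per generated token
WATER_L_PER_KWH = 1.8      # assumed evaporative water use per kWh

J_PER_KWH = 3.6e6
tokens_per_kwh = J_PER_KWH / ENERGY_PER_TOKEN_J
water_l_per_m_tokens = 1e6 * ENERGY_PER_TOKEN_J / J_PER_KWH * WATER_L_PER_KWH
print(f"~{tokens_per_kwh / 1e6:.1f}M tokens per kWh at the assumed rate")
print(f"~{water_l_per_m_tokens:.2f} L of water per million tokens")
```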
Infrastructure teams are no longer just managing Linux kernels and Kubernetes clusters. They are negotiating with state legislatures over line siting and studying the hydrology of local watersheds. The most successful software companies of the next decade will likely be the ones that are best at heavy civil engineering and utility-scale power management.
Does this physical consolidation lead to a fragile monoculture where a single grid failure can take down the world’s most advanced AI models, or will it force the industry toward a more resilient, decentralized power architecture?