Commentary by Mark P. Mills

The Electric Grid in the Digital Age

Energy, Economics, Technology

Wind and solar will not be able to power it

At the dawn of the 21st century, the National Academy of Sciences published a retrospective on the previous century’s most important inventions. The top three on the list were, in this order: the electric grid, the automobile, and the airplane.

In the book’s afterword, the late Arthur C. Clarke, last century’s most sagacious technology seer, wrote about how easy it is for us to take historical accomplishments “so completely for granted.” He noted in particular that the “harnessing and taming of electricity, first for communications and then for power, is the event that divides our age from all those that have gone before.” Engineers achieved the amazing feat of building a nation-spanning group of electricity grids that now provide power to nearly everyone and everything, anytime, while consuming less than 2 percent of the gross domestic product.

Some things have changed since that list was published. Not only has kilowatt-hour demand increased, but also the nature of demand has changed. Despite an epic recession that stifled growth, U.S. electricity use increased by an amount greater than adding a United Kingdom’s worth of demand. Also, despite greater efficiency, especially in lighting and cooling, residential use of electricity rose by 25 percent and commercial use by about 20 percent. Only industrial use declined, by about 10 percent, much of that from policies that drove manufacturing and mining offshore.

But the critical change since 2000 is not just that the economy is bigger and that we use more electricity. It is our growing dependence on real-time availability as the economy digitalizes. Since data-centric systems operate only on electricity, the digital share of the economy has salience for the future of electric grids. Within five years, real-time digital traffic alone will be greater than all data traffic just five years ago. Real-time capabilities are increasingly vital for everything from financial transactions and navigation to automation controls (including, eventually, self-driving vehicles).

The U.S. Bureau of Economic Analysis estimates that the digital economy is 10 percent of GDP. But that undercounts digital dependencies in day-to-day operations, from financial markets to hospitals. One recent analysis concludes that the digital economy constitutes 35 percent of America’s GDP.

Consider two classes of iconic buildings. There are now hundreds of warehouse-sized data centers in America; their total square footage under roof is comparable to that of a couple of dozen skyscrapers of the Empire State Building class. Yet data centers consume ten times more power per square foot than skyscrapers do. And, unlike commercial offices, which can tolerate minor and even extended outages, data centers (and their networks) cannot. Add to all this the future demand from more electric vehicles.

In this context, we encounter the narrative that a “new energy” economy must now dispense with the “old” hydrocarbon technologies that undergird 70 percent of today’s grid. A fusillade of reports claims that a “grand transformation” is upon us — indeed, as the International Monetary Fund intoned in its report “Energy Transition,” “smartphone substitution seemed no more imminent in the early 2000s than large-scale energy substitution seems today.” But whatever one’s views on climate change — the animating issue for “transformation” advocates — the possibility of a grand disruption in energy technology is constrained by what physics permits, not by what pundits claim.

Of course the technologies at the center of the imagined transformation — wind, solar, and lithium batteries — have undergone dramatic improvements in performance and cost. The combined contribution of wind and solar to U.S. electricity has risen from two-tenths of a percent at the turn of the century to almost 10 percent now, with more yet to come. 

But those who assert that a “clean tech” transformation can emulate the velocity of the digital revolution make a profoundly misguided analogy. The physics of information production has less in common with the physics of energy production than does the physics of bird flight with that of space travel. 

Mining and manufacturing to produce gigatons of physical hardware, whether to build wind turbines or gas turbines, involves the same kinds of industrial activities. Replacing most of America’s existing grid with wind or solar machines over a two-decade period would entail a 700 percent greater rate of grid construction than has ever been accomplished anywhere at any time in the past half century. That scale of industrial mobilization was last seen during the Second World War. Even if pursued, such a Herculean effort would reduce global emissions of carbon dioxide by just 6 percent from present levels. 

Even so, the core question remains: Just how much more electricity can we expect wind and solar to supply, at a price consumers would tolerate, and without compromising reliability?

If America, say, tripled its wind and solar capacity, matching Germany’s share from those sources, we’d still be a long way from a hydrocarbon-free grid. Germany’s massive Energiewende policy hasn’t come close to achieving its stated goal of radically lowering carbon dioxide emissions, but the nation now has Europe’s highest electricity rates.

More relevant in our digital age are the implications for reliability, as subsidies and mandates push onto grids more electricity from wind and solar, energy sources that are inherently “variable,” a term of art used by the Department of Energy. “Variable” because, unlike the output of conventional power plants, the output of wind and solar machines is dictated by the vicissitudes of nature. Obviously there’s the diurnal variability, but the greater challenge for reliability is that both wind and solar experience unpredictable episodic declines as well as wide seasonal swings.

Building a 100-megawatt solar or wind farm instead of a combustion-based 100-megawatt power plant has consequences. Conventional plants produce the same amount of electricity day or night, summer or winter, anytime required. A solar or wind machine varies from 100 megawatts under peak conditions to half that during the off-season and, with daily regularity, falls to zero.

At low levels of market penetration, such wild variability can be compensated for by — and at the expense of — conventional power plants. Thus, an inherent deficit in wind and solar machines gets a free ride on the grid. But it’s worse than that. State utility policies require that markets purchase wind or solar power whenever it’s produced, which deliberately idles expensive conventional assets. It’s a rare business that has the luxury of buyers who are required to purchase their product whenever it’s convenient for the producer to make it rather than when the buyer wants it. 

When conventional power plants are forced from a utilization of roughly 90 percent of the time to, say, half that, all of the capital and ancillary costs remain the same, leading to a radical increase in per-unit costs of power produced. Either consumers pay for that or profits go negative and the power plant is closed.
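The arithmetic behind that cost escalation can be sketched in a few lines. The dollar figures and plant size below are assumed round numbers for illustration, not data from any actual plant:

```python
# Illustrative sketch: a power plant's fixed costs (capital, staffing,
# maintenance) are spread over every megawatt-hour it actually sells,
# so halving utilization roughly doubles the fixed cost per unit sold.

HOURS_PER_YEAR = 8760

def fixed_cost_per_mwh(annual_fixed_cost, capacity_mw, utilization):
    """Fixed cost allocated to each MWh actually generated."""
    mwh_generated = capacity_mw * HOURS_PER_YEAR * utilization
    return annual_fixed_cost / mwh_generated

# A hypothetical 100 MW plant carrying $30 million/year in fixed costs.
full = fixed_cost_per_mwh(30e6, 100, 0.90)  # running ~90% of the time
half = fixed_cost_per_mwh(30e6, 100, 0.45)  # forced down to ~45%

print(round(full, 2), round(half, 2), round(half / full, 1))
# The ratio is exactly 2: per-unit fixed costs double when output halves.
```

Fuel costs fall with output, but the fixed costs do not, which is why the per-unit burden rises so sharply.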

So far, the consequences of subjecting about 10 percent of America’s grids to variable power have been offset by the ready availability of conventional generation. That cover evaporates as the share of variable power rises and as critical backup from neighboring power plants disappears, because those plants become too expensive to operate at low utilization or are outright banned by state or perhaps federal policies.

These trends portend both a decline in grid reliability and a rise in net costs to consumers, an outcome already seen in Europe. A massive blackout in the United Kingdom this past summer was triggered by a normal if episodic event — a wind lull at a huge offshore wind farm coincided with a glitch at a conventional power plant. The U.K. system operator is “rethinking” backup. In Europe, and especially Germany, the world’s poster child for the green movement, the availability of extra power on a local grid, or on a neighbor’s, will soon end, as it will here, as more “neighbors” abandon conventional generation.

Germany, once a power exporter, is now an importer and has resorted to expanding coal use in order to keep the lights on and minimize natural-gas imports. A new McKinsey study concludes that high prices are destined to go even higher in Germany, threatening industrial competitiveness and degrading reliability. Their conclusion: Germany must make a “fundamental turn in energy policy.”

And Texas, where the share of power generated from wind is the highest in the nation, at 16 percent, barely skirted several blackouts this past summer, for the same reasons. Wind lulls at times of high demand required emergency alerts for “voluntary” reduction of demand. If the available power had dropped by just 1.5 percent more, the system operator would have been forced to impose rolling blackouts. If one were betting, though, it will be California (again) that leads the way to green-induced blackouts.

The chief constraint on building grids that can supply power 24/7, 365 days a year has always been the difficult physics of storing electricity at scale. Now, in a kind of lithium-induced delusion, green proponents claim that utility-scale battery storage is viable. It should be self-evident that, for wind and solar, diurnal variability isn’t the only issue in estimating the quantity of batteries that would be needed to keep the lights on, but that’s what most analysts focus on. One must account for both seasonal variability and the inevitable long episodes of no wind or no sun, including times when both are simultaneously not available.

When we take those variabilities into account and include the need to build capacity to produce extra output when there is wind or sun to fill the storage, we find that the wind and solar grid would have to be roughly twice as big as the conventional grid that it would replace. And then we’d still need to build storage.

A national or regional grid would require far more than hours of storage. Historical weather data show frequent periods of one to two days with neither wind nor sun over the entire continent. The sheer scale of the batteries needed to cover that exigency is daunting.

For perspective: One day’s worth of U.S. electricity use would require 500 years of production from Tesla’s “Gigafactory” in Nevada. Even assuming that battery costs drop 50 percent and that someone builds 100 more such factories at $4 billion a pop, we’re talking about $2 trillion spent on storing electricity, not producing it.
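The scale claim can be checked with rough public figures. The two inputs below are approximations I am supplying for illustration (annual U.S. electricity consumption of roughly 3,900 TWh and Gigafactory battery output of roughly 20 GWh per year), not figures from the article:

```python
# Rough check of the "500 years of Gigafactory output" claim.
# Both inputs are approximate public figures, assumed for this sketch:
US_ANNUAL_TWH = 3_900          # ~U.S. annual electricity consumption, TWh
GIGAFACTORY_GWH_PER_YEAR = 20  # ~annual battery output of one Gigafactory, GWh

one_day_gwh = US_ANNUAL_TWH * 1_000 / 365          # one day's demand in GWh
years_of_output = one_day_gwh / GIGAFACTORY_GWH_PER_YEAR

# Roughly 10,700 GWh per day, i.e. on the order of 500 factory-years
# of battery production to store a single day of national demand.
print(round(one_day_gwh), round(years_of_output))
```

Under these assumptions the answer lands in the low 500s, consistent with the order-of-magnitude claim above.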

Hydrocarbons, on the other hand, are trivially easy to store. Storing oil or natural gas is at least a hundredfold less expensive than storing an equivalent amount of energy in a battery. Thus machines converting chemical energy to electricity can supply power on demand, at low cost.

If it were true, as advocates claim, that battery, wind, and solar power are cheaper than a natural-gas turbine, it raises a question: Why, rather than lobby local utilities to go “greener,” don’t deep-pocketed businesses just become green cord-cutters?

Tech giants say that their electricity consumption is 100 percent green, or promise that it soon will be. This is mere green-washing bought with credits from investments in wind and solar projects elsewhere on the grid. Their facilities are in fact powered by a conventional grid — 24/7, 365 days a year, by necessity. The reason that companies don’t cut the cord and build their own reliable “clean tech” grid is that it would lead to a 300 to 400 percent jump in power costs.   

This piece originally appeared at National Review



Mark P. Mills is a senior fellow at the Manhattan Institute, a faculty fellow at Northwestern University’s McCormick School of Engineering, and author of the recent report “The ‘New Energy Economy’: An Exercise in Magical Thinking.”
