Transformer shortages: at their core?

Transformer prices have risen 1.5x in the past three years, while US imports of transformers (by capacity) have more than doubled over the same timeframe.

Transformers are needed every time voltage steps up or down in the power grid. But lead times have now risen from 12-24 weeks to 1-3 years. And prices have risen 70%. Will these shortages structurally slow new energies and AI? Or improve transformer margins? Or is it just another boom-bust cycle? Answers are explored in this 15-page report.


Three years ago, we wrote an overview of transformers, which are used every time voltage steps up or down in the world’s power grids. As examples at both extremes, efficient power transmission requires high voltages at 200-800kV, while the sockets in your home power electrical appliances at a low, safe 120V (in the US, or 230V in Europe).

The central argument in our 2021 report was that the total capacity of transformers would double or more, and the number of transformers needed in the power grid could rise by 30x as part of the energy transition. The rationale is recapped and updated on page 2.

Transformer shortages are biting in 2024. Lead times have risen from 12-24 weeks to 1-3 years. Prices have risen by 70%. We are concerned about power grid bottlenecks, powering the rise of AI, long interconnection times for wind and solar, and delays to power electronics order books, per pages 3-4.

Hence in this report, we have attempted to break down the bottlenecks across the transformer supply chain, to see which might persist and which might resolve, covering design considerations (page 5), transformer materials (pages 6-7), specialized labor requirements (pages 8-9), the capex costs of new facilities (page 10) and ultimately IRR sensitivities for the costs of transformers (page 11).

If we follow other materials that matter in the energy transition — e.g., lithium, solar modules — then we can clearly see evidence for boom-bust cycles. Hence what are the risks of a boom-bust cycle for transformer manufacturing? Evidence is reviewed on pages 12-14.

Conclusions on transformer shortages, and on companies across the supply chain, are summarized on page 15.

Advanced Conductors: current affairs?

Comparison of old transmission line conductors and advanced conductor geometries.

Reconductoring today’s 7M circuit kilometers of transmission lines may help relieve power grid bottlenecks, while avoiding the 10-year ordeal of permitting new lines? Raising voltage may have hidden challenges. But Advanced Conductors stand out in this 16-page report. And the theme could double carbon fiber demand?


Power grids are shaping up to be one of the biggest and most imminent bottlenecks in the energy transition, for the reasons in our note here, with the consequences explored in our note here, and as one of many reasons why new AI data-centers will need to build their own dedicated generation capacity, per our note here.

A key challenge for constructing new transmission lines is the long development times, as permitting can take over 10 years. Hence what opportunities exist to raise the capacity of today’s 7M circuit kilometers of existing global transmission lines, e.g., via reconductoring?

The carrying capacity and the cost of a power line are built up from first principles on pages 2-4. Raising capacity requires raising voltage or raising current, ideally without inflating the costs of transmission.

(For more theory, please see our overview of energy units, overview of electricity and/or overview of power transmission, reactive power and harmonic distortion).

The simplest option to increase the capacity of a power transmission line might be to increase the voltage, by upgrading the transformers. Doubling voltage, all else equal, might seem to double the power. But we think material voltage increases may be more challenging than indicated in other recent commentaries, with negative NPVs, per pages 6-8.
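
The intuition follows from the standard textbook relations for a three-phase line — a first-principles reference only, not our full cost model:

```latex
P \approx \sqrt{3}\,V_L\,I\cos\phi, \qquad P_{\mathrm{loss}} \approx 3\,I^{2}R
```

At constant current and power factor, doubling the line voltage doubles deliverable power without increasing resistive losses; raising the current instead increases losses and conductor heating quadratically, which is where the reconductoring options below come in.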

Raising the current through each conductor is the other way to increase power ratings. Usually there are technical and economic hurdles, but these can be addressed economically with Advanced Conductors, which replace the steel strands at the center of Aluminium Conductor Steel Reinforced (ACSR) cable with composites such as carbon fiber. Properties of Advanced Conductors versus ACSR, and economic costs, are on pages 9-13.

Materials implications? Carbon fiber is a miracle material, which is 3-10x stronger than steel, but 70-80% lighter. Could Advanced Conductors effectively double global demand for carbon fiber by 2030, taking the carbon fiber market from recent oversupply into bottleneck territory? Forecasts for aluminium and copper are also revised on pages 14-15.

Leading producers of Advanced Conductors are profiled on page 16. One large public conglomerate and three private companies are gearing up. Overall, Advanced Conductors are among the best antidotes we have seen for power grid bottlenecks, based on the cost-modelling in this note.

Arms race: defence versus decarbonization?

Global defence spending from 1960 to 2050 by region. Defence budgets are set to increase in the 2020s following Russia's invasion of Ukraine.

Does defence displace decarbonization as the developed world’s #1 policy goal through 2030, re-allocating $1trn pa of funds? Defence versus decarbonization? Perhaps, but this 10-page note also finds a surprisingly large overlap between the two themes. European capital goods re-accelerate most? Some clean-tech does risk deprioritization?


One of the catalysts for starting Thunder Said Energy, back in 2019, as a research firm for energy technologies and energy transition, was the sense that decarbonization was becoming the largest priority in the world.

Yet today, news headlines would suggest that a different theme is becoming the largest priority. The theme is defence. Comparisons between decarbonization in 2019 and defence today are drawn on page 2.

Defence spending is a deterrent against war, and may rise by +$1trn from $2.4trn in 2023 to $3.4trn in 2030, then by a further +$1trn to $4.4trn in 2050, per our breakdowns of global GDP by region, as discussed on pages 3-4.

If the world allocated $1trn pa more for defence by 2030, and $2trn pa more by 2050, then how would these vast sums compete with energy transition expenditures? For an answer, we turn to our roadmap to net zero, and the costs/capex needed for wind, solar, gas, power grids, efficiency, CCS and nature-based solutions, on pages 4-7.

Winners and losers? The most important part of the note speculates as to winners and losers — by theme, by sector and by company. There is potential for more pragmatism and reindustrialization in Europe. Beware of watermelons. Our key conclusions are distilled on pages 8-10.

Ultimately, all military expenditures do go somewhere, and what surprised us most is the overlap between defence and decarbonization. This is most true for critical infrastructure and some energy technologies.

We have already watched the energy transition become the very hungry caterpillar, encompassing $15trn of market cap across a dozen sectors. Including defence. For example, we have written on super-alloys, Rare Earths and carbon fiber. And new technologies such as power-beaming, military drones and thermoelectrics.

More of our upcoming research will focus on the overlap between decarbonization and strategic infrastructure and technologies. For now, further reading includes the energy history of WWII. And our key conclusions on decarbonization versus defence are in this 10-page note.

Methane leaks: by gas source and use

Methane leakage rates in the gas industry vary by source and use. Across our build-ups, the best-placed value chains are using Marcellus gas in CCGTs (0.2% methane leakage, equivalent to 6kg/boe, 1kg/mcfe, or +2% on Scope 3 emissions) and/or Permian gas in LNG or blue hydrogen value chains (0.3%). Residential gas use is closer to 0.8-1.2%, which is 4-6kg/mcfe; or higher, as this is where leaks are most likely to be under-reported.

Today’s short note explains these conclusions, plus implications for future gas consumption. Underlying numbers are here.


Methane, as explained here, is a 120x more potent greenhouse gas in the atmosphere than CO2. It does degrade over time, mediated by hydroxyl radicals. So its 20-year impact is 34x higher than CO2 and its 100-year impact is 25x higher. Therefore, if c2.7-3.5% of natural gas is “leaked” into the atmosphere, natural gas could be considered a “dirtier” fuel than coal (chart below, model here).

CO2 emissions of natural gas use depending on the amount of methane leaked. Once leaks reach 3% and above, then natural gas could be considered as 'dirty' as coal.

For a fair comparison, an important side-note, not reflected in the chart above, is that methane is also leaked into the atmosphere when producing coal, because natural gas often desorbs from the surface of coal as it is mined. Our best attempt to quantify this leakage puts it at 33kg/boe of CO2-equivalents, which is equivalent to leaking 1.2% of the methane from a gas value chain (apples to apples, using a methane GWP of 25x).

In other words, if a gas value chain is leaking less than 1.2% of its methane, then its methane leakage rates are lower than the energy-equivalent methane leakage rate from producing coal. If a gas value chain is leaking more than 1.2% of its methane, then its methane leakage rate is higher than from producing coal.

One of the challenges for quantifying methane leakage across natural gas value chains is that, by definition, it is a chain. Gas molecules move from upstream production to processing stages such as sweetening and dehydration, through transmission lines, then through distribution lines, then to end consumers such as power plants, LNG plants, ammonia plants, hydrogen reformers, industrial heating and households.

Hence in the title chart above, we have attempted to build up the methane intensity of US gas value chains, looking line by line, and using the data-files indexed below. For example, as of our latest data-pull, methane leakage rates are 0.06% of produced gas in Appalachia, 0.19% in the Permian, 0.22% in the Bakken, 0.34% in the Gulf Coast and 0.49% in the MidCon.

Putting the pieces together, we think that the total methane leakage rate across the value chain can be as low as 0.2-0.3% when gas from the Marcellus or Permian is used in gas power (e.g., for an AI data-center), in an LNG plant or for blue hydrogen. This is just 6-10 kg/boe of Scope 1 CO2, or 1-1.5 kg/mcfe (by contrast, combusting natural gas emits 56kg of CO2 per mcfe). And the best producers may achieve even lower leakage rates, via the growing focus on mitigating methane.

Conversely, using gas for home heating and cooking likely carries a higher methane leakage rate, of 0.8-1.2%, as there is more small-scale distribution, and smaller residential consumers are not always as discerning about conducting regular maintenance or checking for leaks. 0.8-1.2% methane leakage is equivalent to 23-33 kg/boe, or 4-6 kg/mcfe.
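
As a back-of-envelope check on these conversions, the short sketch below uses our own illustrative constants — c5.8 mcf per boe, c19.2 kg of methane per mcf at standard conditions, and a methane GWP of 25x — rather than the underlying data-files:

```python
# Back-of-envelope check of the leakage conversions above.
# Assumed constants (ours, illustrative): 5.8 mcf per boe, ~19.2 kg CH4 per mcf, GWP of 25x.
MCF_PER_BOE = 5.8
KG_CH4_PER_MCF = 19.2
GWP_CH4 = 25

def co2e_from_leakage(leak_rate):
    """Return (kg CO2e per boe, kg CO2e per mcfe) for a given methane leakage rate."""
    per_boe = MCF_PER_BOE * KG_CH4_PER_MCF * leak_rate * GWP_CH4
    per_mcfe = KG_CH4_PER_MCF * leak_rate * GWP_CH4
    return per_boe, per_mcfe

print(co2e_from_leakage(0.002))   # ~(5.6, 1.0): the 0.2% Marcellus-to-CCGT case
print(co2e_from_leakage(0.012))   # ~(33, 5.8): the upper end of residential use
print(33 / (MCF_PER_BOE * KG_CH4_PER_MCF * GWP_CH4))  # ~0.012: break-even vs coal-mine methane
```

On these assumptions, the sketch roughly reproduces the 6 kg/boe, 23-33 kg/boe and c1.2% coal break-even figures quoted above.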

There is also a risk with the numbers above, which is the systematic under-reporting of methane leakage rates, both upstream and further downstream in the value chain. Large oil and gas companies are required to measure and report their methane emissions, but Mrs Miggins is not. Hence we think the risks to the numbers in our charts are skewed to the upside (i.e., true leakage rates may be higher than reported).

All of this supports a growing role for natural gas in combined cycle gas turbines, helping to alleviate power grid bottlenecks amidst the rise of AI, and in US LNG and blue hydrogen value chains. Our US decarbonization model has power rising from c40% to c50% of the US gas market by 2030, compensated by lower use in residential heat.

Cool customers: AI data-centers and industrial HVAC?

Chips must usually be kept below 27ºC, hence 10-20% of both the capex and energy consumption of a typical data-center is cooling, as explored in this 14-page report. How much does climate matter? What changes lie ahead? And which companies sell into this soon-to-double market for data-center cooling equipment?


Our base case outlook for AI considers 150GW of AI data-centers globally by 2030, underpinning 1,000 TWH pa of new electricity demand. However, at $30,000 per GPU, it is not advisable to cook your chips. 150GW-e of AI data-centers requires 150GW-th of data-center cooling. Hence the data-center cooling market is summarized on page 2.
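
As a simple cross-check on these headline numbers (our own arithmetic, assuming the 150GW is facility-level nameplate power):

```python
# Simple cross-check: implied average utilization of 150 GW yielding 1,000 TWH pa.
gw, hours_per_year = 150, 8760
max_twh = gw * hours_per_year / 1000      # ~1,314 TWH at 100% utilization
print(max_twh, 1000 / max_twh)            # ~0.76, i.e., c.75% implied average utilization
```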

The commercial cooling industry hinges on industrial HVAC, across heat exchangers, water evaporator units and industrial chillers, as explained from first principles on pages 3-4.

An underlying observation is that increasing demand for chilling capacity pulls on many capital goods categories such as compressors, heat-exchangers, pumps, fans and blowers, storage tanks, piping, VFDs, switchgear, grid connections and engineering and construction. All of the capex ultimately goes somewhere.

The economics of commercial cooling are broken down across capex, electricity, maintenance, utilization and operating decisions on pages 4-5.

Another feature of our model is that we can stress-test PUEs and capex costs according to different inputs and outputs, for example, to control for water use (currently up to 10-30ml per GPT query), different climates, or tolerating higher temperatures at the chip-level.
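
For illustration, a minimal PUE sketch is below, assuming cooling is the only overhead on top of IT load (real facilities also lose energy in power conversion, lighting, etc.); it is consistent with the 10-20% cooling share quoted above, but it is not our full model:

```python
# Minimal PUE sketch: PUE = total facility energy / IT energy.
# Assumption (ours): cooling is the only overhead on top of the IT load.
def pue_from_cooling_share(cooling_share_of_total):
    it_share = 1.0 - cooling_share_of_total
    return 1.0 / it_share

for share in (0.10, 0.20):                                   # the 10-20% cooling share above
    print(share, round(pue_from_cooling_share(share), 2))    # ~1.11 and 1.25
```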

Specifically for data-centers, the market is unusual in that it tolerates higher temperatures than other cooling sub-segments (which typically chill water to 7ºC), but also requires higher cooling densities in kW/rack (pages 6-7).

Location matters. For example, how are the PUEs and capex costs of data-centers different in cool locations such as Norway and Calgary, versus hot, arid locations such as West Texas and the Middle East? Answers for core cooling and overall data-centers are on pages 8-9.

Immersion cooling may offer advantages over direct-to-chip cooling, and thus gain market share, for reasons outlined on pages 10-11.

Ten companies control 60% of the $15bn pa data-center cooling market, including two Western leaders. Best-known is Vertiv. #2 is a global capital goods giant. Key conclusions from our company screen are on pages 12-14.

Energy intensity of AI: chomping at the bit?

Rising energy demands of AI are now the biggest uncertainty in all of global energy. To understand why, this 17-page note gives an overview of AI computing from first principles, across transistors, DRAM, GPUs and deep learning. GPU efficiency will inevitably increase, but compute increases faster. AI most likely uses 300-2,500 TWH in 2030, with a base case of 1,000 TWH.


The energy demands of AI are the fastest growing component of total global energy demand, which will transform the trajectory of gas and power and even regulated gas pipelines, as recapped on pages 2-3.

These numbers are so material that they deserve some deeper consideration. Hence this 17-page note is an overview of AI computation.

Of course, in 17 pages, we can only really scratch the surface, but we do think the report illustrates why computing efficiency will improve by 2-50x by 2030, and total compute will increase 3-5x faster. Thus a range of forecasts is more sensible than a single point estimate.

Transistors, made of semiconductor materials, underpin all modern computing by allowing one circuit to control another. The basic working principles of MOSFETs are explained briefly on page 4.

All computers also contain a clock, an oscillator circuit generating pulses at a precise frequency. A faster clock accelerates computing, but also amplifies switching losses in transistors, per page 5.

Combinations of transistors can enact logical and arithmetic functions, from simple AND, OR and NAND gates, to matrix multiplications in the tensor cores of GPUs, as shown on page 6.
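
As a rough rule of thumb (general to any architecture, not specific to tensor cores), multiplying an n x k matrix by a k x m matrix takes approximately 2nkm floating-point operations, which is why matrix multiplication dominates AI workloads:

```python
# Rough FLOP count for a dense matrix multiply, the core tensor-core workload:
# an (n x k) by (k x m) multiply takes ~2*n*k*m floating-point operations.
n = k = m = 4096
print(2 * n * k * m / 1e9, "GFLOP")   # ~137 GFLOP for a single 4096-cubed matmul
```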

Transistors and capacitors can be arranged into DRAM cells, the basis of fast-acting computer memory. But DRAM also draws continual energy to refresh its cells against leakage currents, as quantified on page 7.

GPUs are fundamentally different from CPUs, as they carve up workloads into thousands (sometimes millions) of parallel processing threads, implemented by built-in cores, each integrated with nearby DRAM, as illustrated for NVIDIA’s A100 GPU on page 8.

An AI model is just a GPU simulating a neural network. Hence we outline a simple, understandable neural network, training via back-propagation of errors, and the model’s inherent ‘generativity’ on pages 9-10.
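
The note’s worked example is on pages 9-10. As a stand-alone illustration only (not the model in the note), a minimal two-layer network trained by back-propagation on the XOR problem looks like this:

```python
import numpy as np

# A minimal two-layer network trained by back-propagation on XOR.
# Purely illustrative -- a stand-in for the simple network outlined in the note.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros((1, 8))          # hidden layer: 2 -> 8
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros((1, 1))          # output layer: 8 -> 1
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0

for step in range(20_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate output errors back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```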

A key challenge for energy analysts is bridging between theoretical peak performance at the GPU level and actual performance of AI computing systems. The gap is wide. The shortfall is quantified on page 11.

Our favorite analogy for explaining the shortfall is via the energy consumption of planes, which can in principle reach 80 passenger miles per gallon. Jet engines provide a lot of thrust. But you also need to get the plane into the air (like pulling information from memory), keep it in the air (refreshing data in DRAM) and fuel consumption per passenger falls off a cliff if there are very few passengers (memory bandwidth constraints, underutilization of GFLOPS). See page 12.

If you understand the analogies above, then it is easy to see how the energy efficiency of AI can improve: simply by building larger and more actively used neural network models that crunch more data, and utilize more of the chronically underutilized compute power in GPUs. Other avenues to improve GPU efficiency are on page 13.

The energy consumption of AI is strongly reminiscent of the Jevons effect. Increasing the energy efficiency of GPUs goes hand in hand with increasing the total compute of these models, which will itself rise 3-5x faster, as evidenced by data and case studies on pages 14-15.

Forecasting the future energy demands of AI therefore involves multiplying together several exponentially increasing variables, each of which is inherently uncertain. This produces a wide confidence interval of possible outcomes, around our base case forecast of 1,000 TWH pa. Implications are on pages 16-17.
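
Schematically, and with arbitrary placeholder ranges rather than our actual model inputs, multiplying a handful of independently uncertain growth factors quickly compounds into a wide, skewed confidence interval:

```python
import numpy as np

# Schematic only: the ranges below are arbitrary placeholders, not the inputs to
# our forecast. The point is that multiplying a handful of independently uncertain
# growth factors compounds into a wide, skewed range of outcomes.
rng = np.random.default_rng(1)
n = 100_000
factors = np.prod([rng.uniform(0.7, 1.4, n) for _ in range(3)], axis=0)
outcome_twh = 1000 * factors    # scaled around a 1,000 TWH-style central case
print(np.percentile(outcome_twh, [10, 50, 90]).round())   # wide spread either side of ~1,000
```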


This note may also be read alongside our overview of the gas and power market implications of AI, as shown below.

Midstream gas: pipelines have pricing power?!

High utilization can provide hidden upside for transmission operators

FERC regulations are surprisingly interesting! In theory, gas pipelines are not allowed to have market power. But increasingly, they do have it: gas use is rising due to grid bottlenecks, volatile renewables and AI, while new pipeline investments are being hindered. So who benefits here? Answers are explored in this 13-page report.


There are three major trends underway for gas pipelines in the energy transition. Demand is rising to backstop renewables and power AI data-centers. Pipeline capacity growth is stagnating due to various roadblocks. And yet gas prices are becoming increasingly volatile. These effects are all discussed on pages 2-3.

In any other industry, these conditions — demand surprising to the upside, supply stagnating, and increasing arbitrage — would be a kingmaker. Perfect conditions for incumbents to generate excess returns.

The peculiarity of the US gas pipeline industry is that the companies within this industry are regulated by FERC. Pipeline companies are not allowed to earn excess returns. They must not exercise pricing power, even when they obviously do have it.

Hence the purpose of this note is to explore FERC regulations, to assess what changes in industry conditions might mean for gas pipelines, or conversely, whether these changes will benefit others elsewhere?

A concise overview of regulated gas markets — covering FERC, recourse rates, long-term contracts, open season, firm customers, NPV prioritization, Section 4 and Section 5, capacity scheduling, nominations and capacity release markets — is distilled on pages 5-6.

Utilization is the most important dimension dictating the economics of pipelines and pipeline companies, as discussed on page 7.

Gas marketers may be the primary beneficiaries of evolving market dynamics, for the reasons discussed on page 8.

But can the increasing value of pipelines trickle back to pipeline operators, and boost their returns in ways that are nevertheless compatible with FERC regulations? Our answers to this question are on pages 9-10.

Leading companies in US gas marketing and pipelines are compiled in our screen of US midstream gas, and discussed on pages 11-12.

Implications extend into power markets as well. Increasing market volatility is actually needed as a catalyst to expand energy storage. And similar issues will arise due to power grid bottlenecks. Closing observations are on page 13.

Maxwell’s demon: computation is energy?

Computation, the internet and AI are inextricably linked to energy. Information processing literally is an energy flow. Computation is energy. This note explains the physics, from Maxwell’s demon, to the entropy of information, to the efficiency of computers.


Maxwell’s demon: information and energy?

James Clerk Maxwell is one of the founding fathers of modern physics, famous for unifying the equations of electromagnetism. In 1867, Maxwell envisaged a thought experiment that could seemingly violate the laws of thermodynamics.

As a starting point, recall that a gas at, say, 300 K does not contain an even mixture of particles at the exact same velocities, but a distribution of particle speeds, as given by the Maxwell-Boltzmann distribution below.

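For reference, the Maxwell-Boltzmann speed distribution takes the standard form below, where m is the molecular mass, T is the absolute temperature and kB is Boltzmann’s constant:

```latex
f(v) = 4\pi\left(\frac{m}{2\pi k_B T}\right)^{3/2} v^{2}\,\exp\!\left(-\frac{m v^{2}}{2 k_B T}\right)
```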

Now imagine a closed compartment of gas molecules, partitioned into two halves, separated by a trap door. Above the trap door sits a tiny demon, who can perceive the motion of the gas molecules.

Whenever a fast-moving molecule approaches the trap door from the left, he opens it. Whenever a slow-moving molecule approaches the trap door from the right, he opens it. At all other times, the trap door is closed.

The result is that, over time, the demon sorts the molecules. The left-hand side contains only slow-moving molecules (cold gas). The right-hand side contains only fast-moving molecules (hot gas).

This seems to violate the first law of thermodynamics, which says that energy cannot be created or destroyed. Useful energy could be extracted by moving heat from the right-hand side to the left-hand side. Thus in a loose sense the demon has ‘created energy’.

It also appears to violate the second law of thermodynamics, which says that entropy always increases in a closed system. The compartment is a closed system. But there is categorically less entropy in the well-sorted system with hot gas on the right and cold gas on the left.

The laws of thermodynamics are inviolable. So clearly there must be some work done on the system, and a compensating increase in entropy elsewhere, associated with the information processing that Maxwell’s demon has performed.

This suggests that information processing is linked to energy. This point is also front-and-center in 2024, due to the energy demands of AI.

Landauer’s principle: forgetting 1 bit requires >0.018 eV

The mathematical definition of entropy is S = kb ln X, where kb is Boltzmann’s constant (1.381 x 10^-23 J/K) and X is the number of possible microstates of a system.

Hence if you think about the smallest possible transistor in the memory of a computer, which is capable of encoding a zero or a one, then you could say that it has two possible micro-states, and entropy of kb ln (2).

But as soon as our transistor encodes a value (e.g., 1), then it only has 1 possible microstate. ln(1) = 0. Therefore its entropy has fallen by kb ln (2). When entropy decreases in thermodynamics, heat is usually transferred.

Conversely, when our transistor irreversibly ‘forgets’ the value it has encoded, its entropy jumps from zero back to kb ln (2). When entropy increases in thermodynamics, then heat usually needs to be transferred.

You can see this in the charts below, which plot the P-V and T-S diagrams for a Brayton cycle heat engine, harnessing net work by moving heat from a hot source to a cold sink. Although really, an information processor functions more like a heat pump, i.e., a heat engine in reverse: it absorbs net work as it moves heat from an ambient source to a hot sink.

In conclusion, you can think about encoding and then forgetting a bit of information as a kind of thermodynamic cycle, in which energy is transferred to perform computation.

The absolute minimum amount of energy that is dissipated is kb T ln(2). At room temperature (i.e., 300 K), we can plug in Boltzmann’s constant, and derive a minimum computational energy of 2.9 x 10^-21 J per bit of information processing, or in other words 0.018 eV.
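
Putting these numbers together:

```latex
E_{\min} = k_B\,T\ln 2 \approx 1.381\times10^{-23}\ \tfrac{\mathrm{J}}{\mathrm{K}} \times 300\ \mathrm{K} \times 0.693 \approx 2.9\times10^{-21}\ \mathrm{J} \approx 0.018\ \mathrm{eV}
```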

This is Landauer’s limit. It might all sound theoretical, but it has actually been demonstrated repeatedly in lab-scale studies: when 1 bit of information is erased, a small amount of heat is released.

How efficient are today’s best supercomputers?

The best super-computers today are reaching computational efficiencies of 50GFLOPS per Watt (chart below). If we assume 32 bit precision per float, then this equates to an energy consumption of 6 x 10^-13 Joules per bit.

In other words, a modern computer is using 200M times more energy than the thermodynamic minimum. Maybe a standard computer uses 1bn times more energy than the thermodynamic minimum.
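
Reproducing that comparison, using the 50 GFLOPS per Watt and 32-bit assumptions above:

```python
# Energy per bit at 50 GFLOPS per Watt versus the Landauer limit.
k_B, T = 1.381e-23, 300                           # Boltzmann's constant (J/K), temperature (K)
landauer_j_per_bit = k_B * T * 0.693              # ~2.9e-21 J per bit
flops_per_watt = 50e9                             # 50 GFLOPS per Watt
bits_per_flop = 32                                # assuming 32-bit floats, as above
j_per_bit = 1 / (flops_per_watt * bits_per_flop)  # ~6.3e-13 J per bit
print(j_per_bit, j_per_bit / landauer_j_per_bit)  # ~6e-13 J and ~2e8, i.e., c.200M times the minimum
```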

One reason, of course, is that modern computers flow electricity through semiconductors, which are highly resistive. Indeed, undoped silicon is 100bn times more resistive than copper. For redundancy’s sake, there is also a much larger amount of charge flowing per bit per transistor than just a single electron.

But we can conclude that information processing is energy transfer. Computation is energy flow.

As a final thought, the entirety of the universe is a progression from a singularity of infinite energy density and low entropy (at the Big Bang) to zero energy density and maximum entropy in around 10^23 years from now. The end of the universe is literally the point of maximum entropy. Which means that no information can remain encoded.

There is something poetic, at least to an energy analyst, in the idea that “the universe isn’t over until all information and memories have been forgotten”.

Energy and AI: the power and the glory?  

The power demands of AI will contribute to the largest growth of new generation capacity in history. This 18-page note evaluates the power implications of AI data-centers. Reliability is crucial. Gas demand grows. Annual sales of CCGTs and back-up gensets in the US both rise by 2.5x?

This is the most detailed report we have written to-date on the implications of AI, for power markets, gas and across capital goods.


We started by asking ChatGPT for examples where AI data-centers had installed their own power generation equipment. We received a very detailed list. All erroneous. All hallucinations. Hence there is still a role for a human energy analyst to address these important questions.

Forecasts for the energy demands of AI are broken down from first principles, within the energy demands of the internet, on pages 3-4.

Economics of AI data-centers are also broken down from first principles, across capex, opex, and per EFLOP of compute, on pages 5-7.

Data-centers primarily pull upon gas value chains, as uptime and reliability are crucial to economics, whereas only 6% of a data-center’s energy needs could be self-powered via on-site solar, per pages 8-9.

Combined cycle gas turbines are predicted to emerge as a leading energy source to meet the power demands of AI data-centers, and relative economics are quantified on pages 10-11.

The need for newbuild power equipment also hinges on maximizing uptime and utilization, and avoiding power grid bottlenecks, as outlined on pages 12-13.

To contextualize the growth that lies ahead, we have compiled data on US power generation installations, year by year, technology by technology, running back to 1950, including implications for turbine manufacturers, on pages 14-16.

The impacts of AI on US gas and power markets sharply accelerate US electricity demand, upgrade our US shale forecasts, especially in the Marcellus, and sustain the growth of US gas demand through 2035. Charts and numbers are on pages 17-18.

We look forward to discussing and debating these conclusions with TSE subscription clients.

Energy transition: key conclusions from 1Q24?

Top 250 companies in Thunder Said Energy research. What sectors and what market cap?

This 11-page note summarizes the key conclusions from our energy transition research in 1Q24 and across 1,400 companies that have crossed our screens since 2019. Volatility is rising. Power grids are bottlenecked. Hence what stands out in capital goods, clean-tech, solar, gas value chains and materials? And what is most overlooked?


1,400 companies have been mentioned 3,000 times in our research since 2019, and our energy transition research now includes over 1,300 research notes, data-files and models.

Hence we want to do a better job of summarizing key conclusions, for busy decision-makers, in a regular and concise format (see pages 2-3).

The two key themes from our energy transition research in 1Q24 are rising volatility in global energy markets and rising bottlenecks in the power grid. The implications are summarized on pages 4-5.

The biggest focuses of our energy transition research in 1Q24 have been the solar supply chain and gas value chains. Where could consensus be wrong? See pages 6-7.

The most overlooked theme in the energy transition is discussed on page 8, and centers on materials value chains.

Specific companies where we have reviewed product offerings or patent libraries in 1Q24 are reviewed on page 9.

The most mentioned companies in our research in 1Q24, and from 2019-2024 more broadly, are discussed (including some specific profiles) on pages 10-11.

The downside of a concise, 11-page report is that it cannot possibly do justice to the depth and complexity of these topics. A TSE subscription covers access to all of the underlying research and data.

We are also delighted to elaborate on our energy transition conclusions from 1Q24, and discuss them with TSE clients, either over email or over a call.


Copyright: Thunder Said Energy, 2019-2024.