Superconductors: distribution class?

Illustration of a cable made with high-temperature superconducting tape.

High-temperature superconductors (HTSs) carry 20,000x more current than copper, with almost no electrical resistance. But they must be cooled to -200ºC, so costs have been high across 35 past projects. This 16-page report explores whether HTS cables will now accelerate to relieve power grid bottlenecks? And who benefits within the supply chain?


Superconductivity is a form of quantum magic, where particular materials show almost no resistance to electrical currents, once their temperature drops below some critical transition temperature.

Hence these materials can theoretically carry infinite quantities of current. The weird quantum effects that give rise to superconductivity are briefly described on page 2.

Hundreds of materials have demonstrated superconductivity since the effect was first discovered in mercury in 1911. Some of the key materials, such as Nb3Sn, NbTi, BSCCO and YBCO (REBCOs), are summarized on pages 3-4.

The reason for writing this report is our growing fear that power grids are becoming the biggest bottleneck in energy markets and in the energy transition. We have already explored advanced conductors to debottleneck the overhead transmission network.

Yet one of the biggest unsolved challenges is expanding the distribution network in space-constrained urban environments. We outline how high-temperature superconductors could help on pages 5-7.

Superconductors have already been piloted in global power grids, across 35 past projects going back to 2000. So what costs and other details stand out from these past projects? See pages 8-9.

The costs of HTS cables are compared with costs of transmission and costs of distribution at conventional projects — both on a top-down and bottom-up basis — on pages 10-12.

Material implications of high-temperature superconductors are also explored, for materials such as silver, superalloys, yttrium, helium and for displacing copper on page 13.

Leading companies in superconductors span six large global producers, including leaders listed in Europe, the US and Japan, plus interesting private companies scaling up capacity. Conclusions from our superconductor company screen are reviewed on pages 14-16.

Finally, there are reasons to wonder whether higher-temperature or even room-temperature superconductors might be developed in the future, from the multi-billion-member state space of possible candidates across materials science. We have predicted that AI will ultimately earn its keep by 'figuring out' state spaces too complex for human brains.

But in the mid-late 2020s, the most interesting angle is that we think YBCO HTSs will play an increasingly large role helping to debottleneck the distribution network, especially in space-constrained urban environments. Could project activity accelerate by 5-50x by 2030?

Low-carbon baseload: walking through fire?

This 16-page report appraises 30 different options for low-carbon, round-the-clock power generation. Their costs range from 6-60 c/kWh. We also consider true CO2 intensity, time-to-market, land use, scalability and power quality. Seven insights follow for powering new grid loads, especially AI data-centers.


Today we are increasingly receiving questions from clients looking to self-generate electricity, for large new loads, while also avoiding power grid bottlenecks. There is especially sharp demand to power new data-centers amidst the rise of AI.

Hence this report aims to compile the most extensive cross-comparison we have attempted to-date into different sources of low-carbon baseload. We assessed 30 options across 20 different dimensions, in our LCOE database. Our methodology is described on pages 2-3.

The costs of low-carbon baseload range from 6-60 c/kWh. Numbers, sensitivities, capex costs, true CO2 intensities, construction times, transmission requirements, land intensity, scalability, ramp rates and reliability are all cross-plotted in the charts on pages 4-8.

Gas value chains are lowest-cost overall in the US, especially when developed directly in shale basins. Observations, discussion points and US gas market conclusions are summarized on pages 9-10.

Pure wind and solar value chains cost an order of magnitude more, when they are required to generate round-the-clock power (defined as having 3 days of battery coverage on cloudy/non-windy days). Observations and discussion points are on pages 11-13.
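The order-of-magnitude cost gap can be sketched with a toy levelized-cost calculation. All inputs below, including the capex figures, battery price and capital recovery factor, are our hypothetical illustrations, not the report's modelled numbers:

```python
# Toy levelized-cost sketch; all inputs are hypothetical, not the report's model.
def lcoe_c_per_kwh(capex_usd: float, annual_kwh: float,
                   crf: float = 0.10, opex_frac: float = 0.02) -> float:
    """Levelized cost in c/kWh: annualized capex plus opex over delivered energy."""
    return capex_usd * (crf + opex_frac) / annual_kwh * 100

# Solar on its own: 1 MW at a hypothetical $1,000/kW, ~25% capacity factor.
solar_lcoe = lcoe_c_per_kwh(1_000_000, 1_000 * 8760 * 0.25)

# Round-the-clock solar: ~4x panel oversizing, plus 72 hours (3 days) of
# batteries at a hypothetical $300/kWh, now covering all 8,760 hours.
rtc_capex = 4 * 1_000_000 + 72 * 1_000 * 300
rtc_lcoe = lcoe_c_per_kwh(rtc_capex, 1_000 * 8760)
print(f"{solar_lcoe:.1f} c/kWh -> {rtc_lcoe:.1f} c/kWh")
```

Even this simple sketch shows cost inflating several-fold once storage and oversizing are added, before transmission or degradation are considered.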

There are also options that use zero hydrocarbons. They include blending wind and solar with pre-existing hydro, incubating next-generation nuclear, or housing AI data-centers alongside Iceland's geothermal hotspots and then moving the data (!). See pages 14-15.

There is no perfect solution, however, on the quest for rapidly scalable low-carbon baseload. Hence, we close by considering whether this will delay the rise of AI, or even entrench high-carbon generation sources that would otherwise be phased out. Different options for generating low-carbon baseload reward careful consideration.

Moving targets: molecules, electrons or data?!

New AI data-centers are facing bottlenecked power grids. Hence this 15-page note compares the costs of constructing new power lines, gas pipelines or fiber optic links for GW-scale computing. The latter is best. Latency is a non-issue. This work suggests the best locations for AI data-centers and shapes the future of US shale, midstream and fiber-optics?


One of the biggest questions in energy markets this year concerns the rise of AI. Specifically, how is the world going to electrify a possible 150GW of new AI data-centers by 2030, amidst bottlenecked power grids and bottlenecked gas grids?

This 15-page note considers the options for moving molecules, electrons and information to and from AI data-centers. Ultimately, the best locations for AI data-centers will offer the best service, at the lowest cost, and after some acceptably low lead-time.

Mostly moving electricity to an AI data-center requires constructing new AC transmission lines, or possibly even HVDCs. This turns out to be the most costly option and has the longest lead-time. Capex costs (in $M/km), total costs (in c/kWh) and logistical conclusions are on pages 2-3.

Mostly moving gas to an AI data-center requires constructing new gas pipelines, but has the advantage of alleviating power grid bottlenecks, by self-generating power on site. This is a high-cost option in $M/km terms, but a low cost in c/kWh terms, and must also overcome logistical challenges, as discussed on pages 4-5.

Mostly moving data from an AI data-center requires constructing new fiber optic links, but has the advantage of alleviating power grid bottlenecks and gas infrastructure bottlenecks, by siting the data-center within a US shale basin. This has by far the lowest costs in $M/km terms and c/kWh terms, plus other logistical advantages, per pages 6-9.

Latency myths. The pushback we are anticipating is that an AI data-center needs to be close to end-consumers, because otherwise the latencies on AI query responses will be overly high. This is simply untrue. Data and counter-arguments are outlined on pages 10-12.

Locations for AI data-centers may largely be determined by the locations of upstream gas. Further help may come from a cooler, wetter climate, lowering the energy and financial costs of data-center cooling.

There is a strange symmetry bringing the shale and AI industries together: the former suffering bottlenecks in energy demand and export infrastructure; the latter suffering bottlenecks in supply and import infrastructure. Conclusions for shale and midstream are on pages 13-14.

The fiber optic industry also sees large growth, hence we end by noting the market size, growth, material usage and leading companies on page 15.

Transformer shortages: at their core?

The pricing of transformers has risen 1.5x in the past three years, while US imports of transformers, by capacity, have more than doubled over the same timeframe.

Transformers are needed every time voltage steps up or down in the power grid. But lead times have now risen from 12-24 weeks to 1-3 years. And prices have risen 70%. Will these shortages structurally slow new energies and AI? Or improve transformer margins? Or is it just another boom-bust cycle? Answers are explored in this 15-page report.


Three years ago, we wrote an overview of transformers, which are used every time voltage steps up or down in the world's power grids. As examples at both extremes, efficient power transmission requires high voltages at 200-800kV, while the sockets in your home power electrical appliances at a low, safe 120V (in the US, or 230V in Europe).
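The voltage steps described above follow the ideal-transformer turns-ratio relation, V_secondary / V_primary = N_secondary / N_primary. A minimal sketch (the single-stage 2000:1 ratio is illustrative; real grids step voltage down across several transformer stages):

```python
# Ideal-transformer sketch: output voltage scales with the turns ratio.
# The single-stage 2000:1 ratio is illustrative; real grids step down in stages.
def secondary_voltage(v_primary: float, n_primary: int, n_secondary: int) -> float:
    """Output voltage of an ideal transformer, scaled by the turns ratio."""
    return v_primary * n_secondary / n_primary

print(secondary_voltage(240_000, 2000, 1))   # 240 kV stepped down toward 120 V
print(secondary_voltage(120, 1, 2))          # the same relation also steps up
```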

The central argument in our 2021 report was that the total capacity of transformers would double or more, and the number of transformers needed in the power grid could rise by 30x as part of the energy transition. The rationale is re-capped and updated on page 2.

Transformer shortages are biting by 2024. Lead times have risen from 12-24 weeks to 1-3 years. Prices have risen by 70%. We are concerned about power grid bottlenecks, powering the rise of AI, long interconnection times for wind and solar, and delays to power electronics order books, per pages 3-4.

Hence in this report, we have attempted to break down the bottlenecks across the transformer supply chain, to see which ones might be persistent; or conversely, which ones might resolve, covering design considerations (page 5), transformer materials (pages 6-7), specialized labor requirements (pages 8-9), the capex costs of new facilities (page 10) and ultimately IRR sensitivities for the costs of transformers (page 11).

If we follow other materials that matter in the energy transition — e.g., lithium, solar modules — then we can clearly see evidence for boom-bust cycles. Hence what are the risks of a boom-bust cycle for transformer manufacturing? Evidence is reviewed on pages 12-14.

Conclusions into transformer shortages, and companies across the supply chain, are summarized on page 15.

Advanced Conductors: current affairs?

Comparison of old transmission line conductors and advanced conductor geometries.

Reconductoring today's 7M circuit kilometers of transmission lines may help relieve power grid bottlenecks, while avoiding the 10-year ordeal of permitting new lines? Raising voltage may have hidden challenges. But Advanced Conductors stand out in this 18-page report. And the theme could double carbon fiber demand?


Power grids are shaping up to be one of the biggest and most imminent bottlenecks in the energy transition, for the reasons in our note here, with the consequences in our note here, and as one of many reasons why new AI data-centers will need to build their own dedicated generation capacity, per our note here.

A key challenge for constructing new transmission lines is the long development times, as permitting can take over 10 years. Hence what opportunities exist to raise the capacity of today's 7M circuit kilometers of existing global transmission lines, e.g., via reconductoring?

The carrying capacity and the cost of a power line are built up from first principles on pages 2-4. Raising capacity requires raising voltage or raising current, ideally without inflating the costs of transmission.
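The first-principles build-up rests on the three-phase power relation, S = √3 · V · I, which is why capacity rises linearly with either voltage or current. A minimal sketch (the ratings below are hypothetical examples, not the report's cost-model inputs):

```python
# Sketch of three-phase line capacity: S = sqrt(3) * V * I. Ratings below
# are hypothetical illustrations, not the report's cost-model inputs.
import math

def line_capacity_mw(voltage_kv: float, current_a: float,
                     power_factor: float = 1.0) -> float:
    """Three-phase power in MW at the given voltage, current and power factor."""
    return math.sqrt(3) * voltage_kv * (current_a / 1000.0) * power_factor

base = line_capacity_mw(230, 1000)        # e.g. a 230 kV line carrying 1 kA
doubled_v = line_capacity_mw(460, 1000)   # doubling voltage doubles power...
doubled_i = line_capacity_mw(230, 2000)   # ...and so does doubling current
print(f"{base:.0f} MW, {doubled_v:.0f} MW, {doubled_i:.0f} MW")
```

The interesting question, explored in the pages that follow, is which of the two levers inflates costs less in practice.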

(For more theory, please see our overview of energy units, overview of electricity and/or overview of power transmission, reactive power and harmonic distortion).

The simplest option to increase the capacity of a power transmission line might be to increase the voltage, by upgrading the transformers. Doubling voltage, all else equal, might seem to double the power. But we think material voltage increases may be more challenging than indicated in other recent commentaries, with negative NPVs, per pages 6-8.

Raising the current through each conductor is the other way to increase power ratings. Usually there are technical and economic issues. But they can be economically addressed with Advanced Conductors, which replace steel strands at the center of Aluminium Conductor Steel Reinforced (ACSR) with composites such as carbon fiber. Properties of Advanced Conductors versus ACSR and economic costs are on pages 9-13.

Materials implications? Carbon fiber is a miracle material, which is 3-10x stronger than steel, but 70-80% lighter. Could Advanced Conductors effectively double global demand for carbon fiber by 2030, taking the carbon fiber market from recent oversupply into bottleneck territory? Forecasts for aluminium and copper are also revised on pages 14-15.

Leading producers of Advanced Conductors are profiled on page 16. One large public conglomerate and three private companies are gearing up. Overall, Advanced Conductors are among the best antidotes we have seen for power grid bottlenecks, based on the cost-modelling in this note.

In September-2024, we have added further details into the note, screening Prysmian’s E3X technology that dovetails with advanced conductors and achieves many of the same benefits (page 17) and tabulated evidence that advanced conductors are accelerating (page 18).

Arms race: defence versus decarbonization?

Global defence spending from 1960 to 2050 by region. Defence budgets are set to increase in the 2020s following Russia's invasion of Ukraine.

Does defence displace decarbonization as the developed world's #1 policy goal through 2030, re-allocating $1trn pa of funds? Defence versus decarbonization? Perhaps, but this 10-page note also finds a surprisingly large overlap between the two themes. European capital goods re-accelerate most? Some clean-tech does risk deprioritization?


One of the catalysts for starting Thunder Said Energy, back in 2019, as a research firm for energy technologies and energy transition, was the sense that decarbonization was becoming the largest priority in the world.

Yet today, news headlines would suggest that a different theme is becoming the largest priority. The theme is defence. Comparisons between decarbonization in 2019 and defence today are drawn on page 2.

Defence spending is a deterrent against war, and may increase from $2.4trn in 2023, rising by +$1trn to $3.4trn in 2030, and then by a further +$1trn to $4.4trn in 2050, per our breakdowns of global GDP by region, and discussed on pages 3-4.

If the world allocated $1trn pa more for defence by 2030, and $2trn pa more by 2050, then how would these vast sums compete with energy transition expenditures? For an answer, we turn to our roadmap to net zero, and the costs/capex needed for wind, solar, gas, power grids, efficiency, CCS and nature-based solutions, on pages 4-7.

Winners and losers? The most important part of the note speculates as to winners and losers — by theme, by sector and by company. There is potential for more pragmatism and reindustrialization in Europe. Beware of watermelons. Our key conclusions are distilled on pages 8-10.

Ultimately all military expenditures do go somewhere, and what surprised us most is the overlap between defence versus decarbonization. This is most true for critical infrastructure and some energy technologies.

We have already watched the energy transition become the very hungry caterpillar, encompassing $15trn of market cap across a dozen sectors. Including defence. For example, we have written on super-alloys, Rare Earths and carbon fiber. And new technologies such as power-beaming, military drones and thermoelectrics.

More of our upcoming research will focus on the overlap between decarbonization and strategic infrastructure and technologies. For now, some further reading is the energy history of WWII. And our key conclusions on decarbonization versus defence are in this 10-page note.

Cool customers: AI data-centers and industrial HVAC?

Chips must usually be kept below 27ºC, hence 10-20% of both the capex and energy consumption of a typical data-center is cooling, as explored in this 14-page report. How much does climate matter? What changes lie ahead? And which companies sell into this soon-to-double market for data-center cooling equipment?


Our base case outlook for AI considers 150GW of AI data-centers globally by 2030, underpinning 1,000 TWh pa of new electricity demand. However, at $30,000 per GPU, it is not advisable to cook your chips. 150GW-e of AI data-centers requires 150GW-th of data-center cooling. Hence the data-center cooling market is summarized on page 2.
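The headline arithmetic can be checked on the back of an envelope (the ~76% load factor used here is our illustrative assumption, not a figure from the report):

```python
# Back-of-envelope check: capacity x load factor x hours per year.
# The ~76% load factor is an illustrative assumption, not the report's figure.
def twh_per_year(gw: float, load_factor: float) -> float:
    """Annual electricity use in TWh for a given capacity and load factor."""
    return gw * load_factor * 8760 / 1000

print(round(twh_per_year(150, 0.76)))  # ~1,000 TWh pa from 150 GW of data-centers
```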

The commercial cooling industry hinges on industrial HVAC, across heat exchangers, water evaporator units and industrial chillers, and explained from first principles on pages 3-4.

An underlying observation is that increasing demand for chilling capacity pulls on many capital goods categories such as compressors, heat-exchangers, pumps, fans and blowers, storage tanks, piping, VFDs, switchgear, grid connections and engineering and construction. All of the capex ultimately goes somewhere.

The economics of commercial cooling are broken down across capex, electricity, maintenance, utilization and operating decisions on pages 4-5.

Another feature of our model is that we can stress-test PUEs and capex costs according to different inputs and outputs, for example, to control for water use (currently up to 10-30ml per GPT query), different climates, or tolerating higher temperatures at the chip-level.
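A stress-test of this kind ultimately reduces to the ratio of total facility power to IT power. A minimal sketch, with a hypothetical load split rather than the report's modelled inputs:

```python
# Minimal PUE sketch: total facility power over IT power. The load split
# below is a hypothetical placeholder, not the report's modelled inputs.
def pue(it_load_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """Power Usage Effectiveness of a data-center."""
    return (it_load_kw + cooling_kw + other_overhead_kw) / it_load_kw

# e.g. cooling at ~15% of IT load and other overheads at ~5% implies PUE 1.2;
# a hotter climate or tighter chip temperature limits push the cooling term up.
print(round(pue(10_000, 1_500, 500), 2))
```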

Specifically for data-centers, the market is unusual in that it tolerates higher temperatures than other cooling sub-segments (which typically chill water to 7ºC), but also higher cooling density in kW/rack (pages 6-7).

Location matters. For example, how are the PUEs and capex costs of data-centers different in cool locations such as Norway and Calgary, versus hot, arid locations such as West Texas and the Middle East? Answers for core cooling and overall data-centers are on pages 8-9.

Immersion cooling may offer advantages over direct-to-chip cooling, and thus gain market share, for reasons outlined on pages 10-11.

Ten companies control 60% of the $15bn pa data-center cooling market, including two Western leaders. Best-known is Vertiv. #2 is a global capital goods giant. Key conclusions from our company screen are on pages 12-14.

Energy intensity of AI: chomping at the bit?

Rising energy demands of AI are now the biggest uncertainty in all of global energy. To understand why, this 17-page note gives an overview of AI computing from first principles, across transistors, DRAM, GPUs and deep learning. GPU efficiency will inevitably increase, but compute increases faster. AI most likely uses 300-2,500 TWh in 2030, with a base case of 1,000 TWh.


The energy demands of AI are the fastest growing component of total global energy demand, which will transform the trajectory of gas and power and even regulated gas pipelines, as recapped on pages 2-3.

These numbers are so material that they deserve some deeper consideration. Hence this 17-page note is an overview of AI computation.

Of course, in 17 pages, we can only really scratch the surface, but we do think the report illustrates why computing efficiency will improve by 2-50x by 2030, and total compute will increase 3-5x faster. Thus a range of forecasts is more sensible than a single point estimate.

Transistors, made of semiconductor materials, underpin all modern computing by allowing one circuit to control another. The basic working principles of MOSFETs are explained briefly on page 4.

All computers also contain a clock which is an oscillator circuit, generating pulses at a precise frequency. A faster clock accelerates computing, but also amplifies switching losses in transistors, per page 5.
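The clock-speed trade-off follows the standard CMOS dynamic-power relation, P = α · C · V² · f: switching losses scale linearly with clock frequency. A toy sketch with illustrative, non-chip-specific values:

```python
# CMOS dynamic-power sketch: P = alpha * C * V^2 * f, so switching losses
# scale linearly with clock frequency. Values are illustrative, not
# specific to any real chip.
def dynamic_power_w(alpha: float, capacitance_f: float,
                    voltage_v: float, freq_hz: float) -> float:
    """Dynamic switching power: activity factor x capacitance x V^2 x frequency."""
    return alpha * capacitance_f * voltage_v ** 2 * freq_hz

p_1ghz = dynamic_power_w(0.1, 1e-9, 1.0, 1e9)
p_2ghz = dynamic_power_w(0.1, 1e-9, 1.0, 2e9)  # double the clock, double the loss
print(p_1ghz, p_2ghz)
```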

Combinations of transistors can enact logical and arithmetic functions, from simple AND, OR and NAND gates, to matrix multiplications in the tensor cores of GPUs, as shown on page 6.

Transistors and capacitors can be arranged into DRAM cells, the basis of fast-acting computer memory. But DRAM also has a continued energy draw to refresh leakage currents, as quantified on page 7.

GPUs are fundamentally different from CPUs, as they carve up workloads into thousands (sometimes millions) of parallel processing threads, implemented by built-in cores, each integrated with nearby DRAM, and as illustrated for NVIDIA's A100 GPU on page 8.

An AI model is just a GPU simulating a neural network. Hence we outline a simple, understandable neural network, training via back-propagation of errors, and the model’s inherent ‘generativity’ on pages 9-10.
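The mechanism can be sketched in a few dozen lines of pure Python: a tiny 2-2-1 sigmoid network trained by back-propagation on the XOR problem. This is an illustrative toy, not the network outlined in the report:

```python
# A tiny neural network trained by back-propagation: a 2-2-1 sigmoid network
# learning XOR. An illustrative toy, not the report's model.
import math
import random

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input->hidden
b_h = [0.0, 0.0]
w_ho = [random.uniform(-1, 1) for _ in range(2)]                      # hidden->output
b_o = 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]           # XOR truth table

def forward(x):
    h = [sigmoid(w_ih[j][0] * x[0] + w_ih[j][1] * x[1] + b_h[j]) for j in range(2)]
    o = sigmoid(w_ho[0] * h[0] + w_ho[1] * h[1] + b_o)
    return h, o

def loss() -> float:
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

before = loss()
lr = 0.5
for _ in range(2000):
    for x, y in data:
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)                   # error gradient at the output
        for j in range(2):
            d_h = d_o * w_ho[j] * h[j] * (1 - h[j])   # back-propagated to hidden layer
            w_ho[j] -= lr * d_o * h[j]
            w_ih[j][0] -= lr * d_h * x[0]
            w_ih[j][1] -= lr * d_h * x[1]
            b_h[j] -= lr * d_h
        b_o -= lr * d_o
print(f"squared error: {before:.2f} -> {loss():.2f}")
```

Training reduces the squared error as the weights converge, which is the 'generativity' loop in miniature: errors flow backwards, weights update, outputs improve.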

A key challenge for energy analysts is bridging between theoretical peak performance at the GPU level and actual performance of AI computing systems. The gap is wide. The shortfall is quantified on page 11.

Our favorite analogy for explaining the shortfall is via the energy consumption of planes, which can in principle reach 80 passenger miles per gallon. Jet engines provide a lot of thrust. But you also need to get the plane into the air (like pulling information from memory), keep it in the air (refreshing data in DRAM) and fuel consumption per passenger falls off a cliff if there are very few passengers (memory bandwidth constraints, underutilization of GFLOPS). See page 12.

If you understand the analogies above, then it is going to be straightforward to improve the energy efficiency of AI, simply by building larger and more actively used neural network models that crunch more data, and utilize more of the chronically underutilized compute power in GPUs. Other avenues to improve GPU efficiency are on page 13.

The energy consumption of AI is strongly reminiscent of the Jevons effect. Increasing the energy efficiency of GPUs goes hand in hand with increasing the total compute of these models, which will itself rise 3-5x faster, as evidenced by data and case studies on pages 14-15.

Forecasting the future energy demands of AI therefore involves several exponentially increasing variables, which are all inherently uncertain, and then multiplying these numbers together. This produces a wide confidence interval of possible outcomes, around our base case forecast of 1,000 TWh pa. Implications are on pages 16-17.
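The widening effect of multiplying uncertain variables together can be illustrated with a toy Monte Carlo. The distributions and central values below are our hypothetical placeholders, not the report's inputs:

```python
# Toy Monte Carlo: multiplying several uncertain variables widens the outcome
# range. Distributions and central values are hypothetical placeholders.
import math
import random

random.seed(42)

def sample_twh() -> float:
    installed_gw = random.lognormvariate(math.log(150), 0.3)  # capacity by 2030
    load_factor = random.uniform(0.5, 0.9)                    # average utilization
    return installed_gw * load_factor * 8760 / 1000           # GW -> TWh pa

draws = sorted(sample_twh() for _ in range(10_000))
p10, p50, p90 = draws[1_000], draws[5_000], draws[9_000]
print(f"P10 {p10:.0f} / P50 {p50:.0f} / P90 {p90:.0f} TWh pa")
```

Even with only two uncertain inputs, the P10-P90 range spans a wide interval around the median, which is why a range of forecasts is more honest than a point estimate.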


This note may also be read alongside our overview of the gas and power market implications of AI.

Midstream gas: pipelines have pricing power ?!

High utilization can provide hidden upside for transmission operators

FERC regulations are surprisingly interesting!! In theory, gas pipelines are not allowed to have market power. But increasingly, they do have it: gas use is rising, on grid bottlenecks, volatile renewables and AI; while new pipeline investments are being hindered. So who benefits here? Answers are explored in this 13-page report.


There are three major trends underway for gas pipelines in the energy transition. Demand is rising to backstop renewables and power AI data-centers. Pipeline capacity growth is stagnating due to various roadblocks. And yet gas prices are becoming increasingly volatile. These effects are all discussed on pages 2-3.

In any other industry, these conditions — demand surprising to the upside, supply stagnating, and increasing arbitrage — would be a kingmaker. Perfect conditions for incumbents to generate excess returns.

The peculiarity of the US gas pipeline industry is that the companies within this industry are regulated by FERC. Pipeline companies are not allowed to earn excess returns. They must not exercise pricing power, even when they obviously do have it.

Hence the purpose of this note is to explore FERC regulations, to assess what changes in industry conditions might mean for gas pipelines, or conversely, whether these changes will benefit others elsewhere?

A concise overview of regulated gas markets — covering FERC, recourse rates, long-term contracts, open season, firm customers, NPV prioritization, Section 4 and Section 5, capacity scheduling, nominations and capacity release markets — is distilled on pages 5-6.

Ensuring utilization is the most important dimension dictating the economics of pipelines and pipeline companies, as discussed on page 7.

Gas marketers may be the primary beneficiary of evolving market dynamics, for the reasons discussed on page 8.

But can the increasing value of pipelines trickle back to pipeline operators, and boost their returns in ways that are nevertheless compatible with FERC regulations? Our answers to this question are on pages 9-10.

Leading companies in US gas marketing and pipelines are compiled in our screen of US midstream gas, and discussed on pages 11-12.

Implications extend into power markets as well. Increasing market volatility is actually needed as a catalyst to expand energy storage. And similar issues will arise due to power grid bottlenecks. Closing observations are on page 13.

Energy and AI: the power and the glory?

The power demands of AI will contribute to the largest growth of new generation capacity in history. This 18-page note evaluates the power implications of AI data-centers. Reliability is crucial. Gas demand grows. Annual sales of CCGTs and back-up gensets in the US both rise by 2.5x?

This is the most detailed report we have written to-date on the implications of AI, for power markets, gas and across capital goods.


We started by asking ChatGPT for examples where AI data-centers had installed their own power generation equipment. We received a very detailed list. All erroneous. All hallucinations. Hence there is still a role for a human energy analyst to address these important questions.

Forecasts for the energy demands of AI are broken down from first principles, within the energy demands of the internet, on pages 3-4.

Economics of AI data-centers are also broken down from first principles, across capex, opex, and per EFLOP of compute, on pages 5-7.

Data-centers primarily pull upon gas value chains, as uptime and reliability are crucial to economics, whereas only 6% of a data-center’s energy needs could be self-powered via on-site solar, per pages 8-9.

Combined cycle gas turbines are predicted to emerge as a leading energy source to meet the power demands of AI data-centers, and relative economics are quantified on pages 10-11.

The need for newbuild power equipment also hinges on maximizing uptime and utilization, and avoiding power grid bottlenecks, as outlined on pages 12-13.

To contextualize the growth that lies ahead, we have compiled data on US power generation installations, year by year, technology by technology, running back to 1950, including implications for turbine manufacturers, on pages 14-16.

The impacts of AI on US gas and power markets sharply accelerate US electricity demand, upgrade our US shale forecasts, especially in the Marcellus, and sustain the growth of US gas demand through 2035. Charts and numbers are on pages 17-18.

We look forward to discussing and debating these conclusions with TSE subscription clients.

Copyright: Thunder Said Energy, 2019-2025.