Can solar reach 45% of a power grid?

Can solar reach 45% of a power grid? This has been the biggest pushback on our recent report scoring solar potential by country, where we argued the best regions globally – California, Australia – could reach 45% solar by 2050. Hence today’s model explores what a 45% solar grid might look like. Generation is 53% solar, of which 8% is curtailed, 35% is used directly, 7% is used via demand-shifting and 3% is time-shifted via batteries.


Solar can easily reach 30% of a 100MW power grid, as shown in the chart below. Specifically, to calculate this curve, we took the actual distribution of power demand in California, and the actual distribution of solar insolation as calculated from first principles. As both of these variables vary seasonally, we calculated balances for each month separately, then averaged together all twelve months of the year.

Example power grid where solar makes up 30% of a 100MW grid. Yearly average load-profile.

Can solar reach higher shares of the grid? We are going to set a limit that 25% of baseload generation (i.e., non-solar generation) can never be curtailed, as it is needed for grid stability, both instantaneously (e.g., due to inertia) and intra-day, to ramp up if/when solar stops generating. Hence ramping up solar beyond 30% of a grid requires some adaptations.

Can solar reach 45% shares of a grid? We think the answer to this question is yes, and the chart below shows our best attempt to model what such a grid would look like. It uses three adaptations.

Example power grid where solar makes up 45% of a 100MW grid. Yearly average load-profile.

Curtailments are not the end of the world. If 15% of the solar that is generated fails to dispatch, then the ‘other 85%’ must charge about 15% more, in order to achieve the same IRRs. In other words, the LCOE of all solar in our grid re-inflates from 6-7c/kWh to 7-8c/kWh. This is mildly inflationary, but basically fine. The challenge for deploying further solar from here is that a c15% average curtailment rate is associated with a c50% marginal curtailment rate, so building incremental solar will likely cost 12-15c/kWh (see below).
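A minimal arithmetic sketch of this re-inflation logic, using the cost numbers above (the function name and inputs are just for illustration):

```python
# Curtailed output earns nothing, so the remaining kWh must be priced higher
# to preserve the same project IRR.
def effective_lcoe(base_lcoe_c_per_kwh, curtailment_rate):
    return base_lcoe_c_per_kwh / (1 - curtailment_rate)

for base in (6, 7):
    print(f"{base}c/kWh base -> {effective_lcoe(base, 0.15):.1f}c at 15% average curtailment, "
          f"{effective_lcoe(base, 0.50):.1f}c at 50% marginal curtailment")
# 6c/kWh base -> 7.1c at 15% average curtailment, 12.0c at 50% marginal curtailment
# 7c/kWh base -> 8.2c at 15% average curtailment, 14.0c at 50% marginal curtailment
```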

Demand shifting is the second adaptation that allows solar to reach higher shares. It is generally cheaper to move parts of the original demand curve (dark blue line) to when the solar is generating (yielding the light blue line), than to store solar energy in a battery and re-release it when solar is not generating. Our model has the total demand curve shifted by +/- 8% on average throughout the year. The need for demand shifting is highest in May, at +/- 15%. And highest at midday in May, when 34MW of excess demand must be absorbed, in our 100MW grid.

EV charging helps to contextualize our demand-shifting numbers. A typical EV has a 70kWh battery, and might charge at 10kW for 7 hours. Hence absorbing c34MW of excess demand in our 100MW grid is going to require 3,400 EVs, or 34 EVs per MW of average load. For comparison, our EV sales forecasts show the US reaching 35M EVs on the road by 2030, which would equate to 70 EVs per MW of average US load. As long as about half of the US’s EVs are plugged into a solar-energized charger, there is no problem absorbing this excess demand.
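Checking that arithmetic (the ~500GW figure for average US load is our own rounding, implied by the 70 EVs per MW above):

```python
# Back-of-envelope for absorbing midday excess solar with EV charging.
excess_mw      = 34      # peak excess to absorb in the 100MW grid (May midday)
charge_rate_kw = 10      # assumed per-vehicle charging rate
print(excess_mw * 1000 / charge_rate_kw)   # 3,400 EVs, i.e. 34 per MW of average load

us_evs_2030_m  = 35      # EVs on US roads by 2030, per our forecasts
us_avg_load_gw = 500     # assumption: ~4,400 TWh/yr of US demand is ~500 GW of average load
print(us_evs_2030_m * 1e6 / (us_avg_load_gw * 1000))   # ~70 EVs per MW of average US load
```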

Battery storage is the third adaptation that allows solar to reach higher shares of a power grid. In our scenario above, supplying 45% of the electricity in a 100MW grid with solar, requires building 200MW of solar and 30MW of batteries. The batteries absorb and re-release 6% of the solar generation, and end up providing 3.5% of the total grid. This varies from 2% in December (the month of lowest solar insolation) to 5% in May (the month with the largest excess of solar to absorb or curtail, charts below).

Example power grid where solar makes up 45% of a 100MW grid. December average load-profile.
Example power grid where solar makes up 45% of a 100MW grid. May average load-profile.

The reason we think grids will lean less on batteries, and more on demand shifting, is the cost of batteries. Each MW of batteries charges and discharges 240 times per year, which implies a storage spread of 30c/kWh. Across 3% of the total grid, this raises total grid costs by 1c/kWh.
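An illustrative storage-spread calculation is sketched below. The capex, life, WACC and efficiency inputs are placeholder assumptions for illustration, not the inputs to our battery model, but they land in the same ballpark:

```python
# Storage spread ~= annualized cost per kWh of capacity, divided by the kWh
# actually discharged each year.
def storage_spread_c_per_kwh(capex_usd_per_kwh, life_years, wacc, cycles_per_year, rt_efficiency):
    annuity = capex_usd_per_kwh * wacc / (1 - (1 + wacc) ** -life_years)   # $/kWh-capacity/year
    return 100 * annuity / (cycles_per_year * rt_efficiency)               # c/kWh discharged

spread = storage_spread_c_per_kwh(capex_usd_per_kwh=500, life_years=15, wacc=0.10,
                                  cycles_per_year=240, rt_efficiency=0.9)
print(f"spread ~{spread:.0f} c/kWh")                  # ~30 c/kWh on these placeholder inputs
print(f"grid cost adder ~{0.03 * spread:.1f} c/kWh")  # ~0.9 c/kWh across 3% of the grid
```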

Possible but inflationary? Overall, it is possible for solar to reach 45% of a power grid, along the lines outlined above. But an additional 3-5c/kWh is added in transmission and distribution costs (due to falling grid utilization), +1c/kWh in curtailment costs, +1c/kWh in battery costs, and 55% of the grid must still come from other sources, where infrastructure must still be maintained and included in rate bases. Renewables may add 4-6c/kWh to end consumer costs, in absolute terms, but remain a low-cost way to halve the CO2 intensity of power grids, while achieving other environmental and geopolitical goals.

It is not unrealistic for solar to reach 45% of the grid, in countries that are particularly well-placed for solar, as per our note below.

The model linked below is a nice tool for stress-testing different options around the ultimate share of solar in grids, and the need to lean on curtailment, demand-shifting and batteries. You can vary the installed base of solar, the share of non-solar baseload that cannot be curtailed, and the percent of excess solar that is demand-shifted or stored, and see the resulting power grid distribution, hour-by-hour, month-by-month.
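For readers who prefer code to spreadsheets, the logic can be sketched in a few dozen lines of Python. This is only a stylized, single-day illustration of the same moving parts (direct use, demand-shifting, battery storage, curtailment, a non-solar floor); all profile shapes and parameter values below are our own illustrative assumptions, not the model’s inputs:

```python
import numpy as np

def dispatch_day(demand, solar, min_other_frac=0.25, shift_frac=0.08,
                 batt_power=30.0, batt_energy=120.0, rt_eff=0.9):
    """Hourly balance for one stylized day: solar meets demand directly, excess is
    demand-shifted, then stored, then curtailed; non-solar generation never falls
    below min_other_frac of demand."""
    demand = np.asarray(demand, dtype=float).copy()
    solar = np.asarray(solar, dtype=float)

    # 1) Demand shifting: move a budgeted slice of demand into solar-surplus hours.
    budget = shift_frac * demand.sum()
    surplus = np.clip(solar - (1 - min_other_frac) * demand, 0, None)
    deficit = np.clip((1 - min_other_frac) * demand - solar, 0, None)
    shift = min(budget, surplus.sum(), deficit.sum())
    if shift > 0:
        demand = demand + shift * surplus / surplus.sum() - shift * deficit / deficit.sum()

    # 2) Solar dispatches directly, up to the headroom left by the non-solar floor.
    max_solar = (1 - min_other_frac) * demand
    used_direct = np.minimum(solar, max_solar)
    excess = solar - used_direct

    # 3) Batteries absorb excess and re-release it later; the remainder is curtailed.
    soc, n = 0.0, len(demand)
    discharged, curtailed = np.zeros(n), np.zeros(n)
    for h in range(n):
        charge = min(excess[h], batt_power, (batt_energy - soc) / rt_eff ** 0.5)
        soc += charge * rt_eff ** 0.5
        curtailed[h] = excess[h] - charge
        discharge = min(max_solar[h] - used_direct[h], batt_power, soc * rt_eff ** 0.5)
        soc -= discharge / rt_eff ** 0.5
        discharged[h] = discharge

    other = demand - used_direct - discharged
    return {"solar direct": used_direct.sum(), "via battery": discharged.sum(),
            "curtailed": curtailed.sum(), "other generation": other.sum(),
            "total demand": demand.sum()}

# Stylized profiles: ~100MW average load and a midday solar hump.
hours = np.arange(24)
demand = 100 + 15 * np.sin((hours - 17) / 24 * 2 * np.pi)
solar = np.clip(165 * np.sin((hours - 6) / 12 * np.pi), 0, None)
print(dispatch_day(demand, solar))
```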

AI and power grid bottlenecks?

Video presentation regarding Thunder Said Energy's research and insights on AI and power grid bottlenecks.

The number one topic in energy this year has been the rise of AI, which might not seem like an energy topic. Yet it is inextricably linked with power grid bottlenecks, the single biggest issue for energy markets in the mid-late 2020s. The goal of today’s video is to recap our key conclusions. There is an accompanying presentation for TSE clients.


AI and power grid bottlenecks are becoming inextricably interlinked, and the reasons and implications are explored in this video. The button above links to a 17-page presentation, available to TSE clients, with the underlying charts and data, in case you would like to follow along with the video.

Presentation regarding Thunder Said Energy's research and insights on AI and power grid bottlenecks.

The first portion of the video discusses the energy consumption of AI, which we estimate at 150GW globally in 2030, adding 1,000 TWh of new electricity demand. Together with other electrification initiatives, US electricity demand growth quintuples?
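As a sanity check on those headline numbers (the utilization figure below is our assumption for illustration):

```python
# 150 GW of AI data-center capacity, run at ~75-80% average utilization, draws ~1,000 TWh/year.
ai_capacity_gw = 150
utilization    = 0.76          # illustrative assumption
hours_per_year = 8760
print(f"{ai_capacity_gw * hours_per_year * utilization / 1000:,.0f} TWh per year")   # ~1,000 TWh
```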

The justification for AI is that it will unlock fascinating efficiency gains and game-changing technologies. We explore this idea in the second section of the video. Areas that stand out to us include next-generation DAC, materials, autonomous vehicles, thermoelectric semiconductors and superconductors.

The more immediate challenge for AI is powering new data-centers, amidst deepening power grid bottlenecks. Data-centers want cheap, reliable, low-carbon power, available ASAP, from the power grid, at a location of their choosing near their end customers. Unfortunately, this is a unicorn. It does not exist. The third portion of the video assesses which of these variables are ‘must haves’ that cannot be compromised.

Our conclusion on AI and power grid bottlenecks leads us to shale basins? When we assess all of the lower-carbon options, their levelized costs and the other dimensions that AI will end up prioritizing, we find it will be best to circumvent long-standing bottlenecks in the power and gas grids, by situating AI data-centers near fast-to-market energy sources, then moving the data via fiber-optics.

We look forward to discussing actionable implications of this research with TSE clients. Please email us if we can help you, or if you would like to explore a discussion. In the meantime, our best ideas for further reading are summarized on page 17 of the presentation below.


Methane leaks: by gas source and use

Methane leakage rates in the gas industry vary by source and use. Across our build-ups, the best-placed value chains are using Marcellus gas in CCGTs (0.2% methane leakage, equivalent to 6kg/boe, 1kg/mcfe, or +2% on Scope 3 emissions) and/or Permian gas in LNG or blue hydrogen value chains (0.3%). Residential gas use is closer to 0.8-1.2%, which is 4-6kg/mcfe; or higher as this is where leaks are most likely under-reported.

Today’s short note explains these conclusions, plus implications for future gas consumption. Underlying numbers are here.


Methane, as explained here, is 120x more potent a greenhouse gas than CO2 when it enters the atmosphere. It does degrade over time, mediated by hydroxyl radicals, so its 20-year impact is 34x higher than CO2 and its 100-year impact is 25x higher. Therefore, if c2.7-3.5% of natural gas is “leaked” into the atmosphere, natural gas could be considered a “dirtier” fuel than coal (chart below, model here).

CO2 emissions of natural gas use depending on the amount of methane leaked. Once leaks reach 3% and above, then natural gas could be considered as 'dirty' as coal.

An important side-note for a fair comparison, not reflected in the chart above, is that methane is also leaked into the atmosphere when producing coal, because natural gas often desorbs from the surface of coal as it is mined. Our best attempt to quantify this leakage puts it at 33kg/boe, equivalent to leaking 1.2% of the methane from a gas value chain (apples to apples, using a methane GWP of 25x).

In other words, if a gas value chain is leaking less than 1.2% of its methane, then its methane leakage rates are lower than the energy-equivalent methane leakage rate from producing coal. If a gas value chain is leaking more than 1.2% of its methane, then its methane leakage rate is higher than from producing coal.

One of the challenges for quantifying methane leakage across natural gas value chains is that, by definition, it is a chain. Gas molecules move from upstream production to processing stages such as sweetening and dehydration, through transmission lines, then through distribution lines, then to end consumers such as power plants, LNG plants, ammonia plants, hydrogen reformers, industrial heating and households.

Hence in the title chart above, we have attempted to build up the methane intensity of US gas value chains, looking line by line, and using the data-files indexed below. For example, as of our latest data-pull, methane leakage rates are 0.06% of produced gas in the Appalachian, 0.19% in the Permian, 0.22% in the Bakken, 0.34% in the Gulf Coast and 0.49% in the MidCon.

Putting the pieces together, we think that the total methane leakage rate across the value chain can be as low as 0.2-0.3% when gas from the Marcellus or Permian is used in gas power (e.g., for an AI data-center), in LNG or in blue hydrogen. This is just 6-10 kg/boe of Scope 1 CO2e, or 1-1.5 kg/mcfe (by contrast, combusting natural gas emits 56kg of CO2 per mcfe). And the best producers may achieve even lower leakage rates, via the growing focus on mitigating methane.

Conversely, using gas for home heating and cooking likely carries a higher methane leakage rate, of 0.8-1.2%, as there is more small-scale distribution, and smaller residential consumers are not always as discerning about conducting regular maintenance or checking for leaks. 0.8-1.2% methane leakage is equivalent to 23-33 kg/boe, or 4-6 kg/mcfe.
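The conversion between leakage rates and CO2e intensities above can be reproduced with simple arithmetic. The sketch below uses the same 25x GWP basis as the note; the methane mass factors are standard approximations rather than numbers from our data-files:

```python
# Convert a methane leakage rate into CO2e intensity per mcfe and per boe.
KG_CH4_PER_MCF = 19.2    # approximate mass of methane in one mcf of natural gas
MCF_PER_BOE    = 5.8     # energy-equivalence assumption
GWP_CH4        = 25      # 100-year warming potential, as used above

def leak_to_co2e(leak_rate):
    kg_per_mcfe = leak_rate * KG_CH4_PER_MCF * GWP_CH4
    return kg_per_mcfe, kg_per_mcfe * MCF_PER_BOE

for leak in (0.002, 0.012):
    per_mcfe, per_boe = leak_to_co2e(leak)
    print(f"{leak:.1%} leakage = {per_mcfe:.1f} kg/mcfe = {per_boe:.0f} kg/boe of CO2e")
# 0.2% leakage = 1.0 kg/mcfe = 6 kg/boe of CO2e
# 1.2% leakage = 5.8 kg/mcfe = 33 kg/boe of CO2e
```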

There is also a risk with the numbers above, which is the systematic under-reporting of methane leakage rates, both upstream and further downstream in the value chain. Large oil and gas companies are required to measure and report their methane emissions, but Mrs Miggins is not. Hence we think the risk to the numbers in our charts is skewed to the upside.

All of this supports a growing role for natural gas in combined cycle gas turbines, helping to alleviate power grid bottlenecks amidst the rise of AI, and in US LNG and blue hydrogen value chains. Our US decarbonization model has power rising from c40% to c50% of the US gas market by 2030, compensated by lower use in residential heat.

Maxwell’s demon: computation is energy?

Computation, the internet and AI are inextricably linked to energy. Information processing literally is an energy flow. Computation is energy. This note explains the physics, from Maxwell’s demon, to the entropy of information, to the efficiency of computers.


Maxwell’s demon: information and energy?

James Clerk Maxwell is one of the founding fathers of modern physics, famous for unifying the equations of electromagnetism. In 1867, Maxwell envisaged a thought experiment that could seemingly violate the laws of thermodynamics.

As a starting point, recall that a gas at, say, 300K does not consist of particles all moving at the exact same velocity, but contains a distribution of particle speeds, as given by the Maxwell-Boltzmann equations below.

Maxwell-Boltzmann distribution of molecular speeds.
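For readers who want the equation itself, here is a minimal sketch of the Maxwell-Boltzmann speed distribution, using nitrogen at 300K as an illustrative gas:

```python
import numpy as np

kB = 1.380649e-23            # Boltzmann constant, J/K
m  = 28 * 1.66054e-27        # mass of an N2 molecule, kg
T  = 300.0                   # temperature, K

def maxwell_boltzmann(v):
    """Probability density of molecular speed v (m/s) at temperature T."""
    return 4 * np.pi * (m / (2 * np.pi * kB * T)) ** 1.5 * v ** 2 * np.exp(-m * v ** 2 / (2 * kB * T))

print(f"most probable speed ~{np.sqrt(2 * kB * T / m):.0f} m/s")   # ~420 m/s
print(f"density at 400 and 1,200 m/s: {maxwell_boltzmann(400):.2e}, {maxwell_boltzmann(1200):.2e}")
```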

Now imagine a closed compartment of gas molecules, partitioned into two halves, separated by a trap door. Above the trap door, sits a tiny demon, who can perceive the motion of the gas molecules.

Whenever a fast-moving molecule approaches the trap door from the left, he opens it. Whenever a slow-moving molecule approaches the trap door from the right, he opens it. At all other times, the trap door is closed.

The result is that, over time, the demon sorts the molecules. The left-hand side contains only slow-moving molecules (cold gas). The right-hand side contains only fast-moving molecules (hot gas).

This seems to violate the first law of thermodynamics, which says that energy cannot be created or destroyed. Useful energy could now be extracted by running a heat engine between the hot right-hand side and the cold left-hand side. Thus, in a loose sense, the demon has ‘created energy’.

It also definitely violates the second law of thermodynamics, which says that entropy never decreases in a closed system. The compartment is a closed system. But there is categorically less entropy in the well-sorted state, with hot gas on the right and cold gas on the left.

The laws of thermodynamics are inviolable. So clearly the information processing that Maxwell’s demon has performed must itself count as work done on the system, accounting for the decrease in the entropy of the gas, with a corresponding entropy increase elsewhere.

This suggests that information processing is linked to energy. This point is also front-and-center in 2024, due to the energy demands of AI.

Landauer’s principle: forgetting 1 bit requires >0.018 eV

The mathematical definition of entropy is S = kb ln X, where kb is Boltzmann’s constant (1.381 x 10^-23 J/K) and X is the number of possible microstates of a system.

Hence if you think about the smallest possible transistor in the memory of a computer, which is capable of encoding a zero or a one, then you could say that it has two possible micro-states, and entropy of kb ln (2).

But as soon as our transistor encodes a value (e.g., 1), then it only has 1 possible microstate. ln(1) = 0. Therefore its entropy has fallen by kb ln (2). When entropy decreases in thermodynamics, heat is usually transferred.

Conversely, when our transistor irreversibly ‘forgets’ the value it has encoded, its entropy jumps from zero back to kb ln (2). When entropy increases in thermodynamics, then heat usually needs to be transferred.

You see this in the charts below, which plot the PV and TS diagrams for a Brayton cycle heat engine, harnessing net work by moving heat from a hot source to a cold sink. Although really, an information processor functions more like a heat pump, i.e., a heat engine in reverse: it absorbs net work as it moves heat from an ambient source to a hot sink.

In conclusion, you can think of encoding and then forgetting a bit of information as a kind of thermodynamic cycle, in which energy is transferred to perform computation.

The absolute minimum amount of energy that is dissipated is kb T ln (2). At room temperature (i.e., 300K), we can plug in Boltzmann’s constant, and derive a minimum computational energy of 2.9 x 10^-21 J per bit of information processing, or in other words 0.018 eV.
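That derivation takes one line of arithmetic:

```python
import math

kB = 1.380649e-23                          # Boltzmann constant, J/K
T = 300.0                                  # room temperature, K
landauer_J = kB * T * math.log(2)          # minimum energy to erase one bit
print(f"{landauer_J:.2e} J = {landauer_J / 1.602e-19:.3f} eV per bit")   # ~2.9e-21 J, ~0.018 eV
```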

This is Landauer’s limit. It might all sound theoretical, but it has actually been demonstrated repeatedly in lab-scale studies: when 1 bit of information is erased, a small amount of heat is released.

How efficient are today’s best supercomputers?

The best super-computers today are reaching computational efficiencies of 50GFLOPS per Watt (chart below). If we assume 32 bit precision per float, then this equates to an energy consumption of 6 x 10^-13 Joules per bit.

In other words, a modern computer is using 200M times more energy than the thermodynamic minimum. Maybe a standard computer uses 1bn times more energy than the thermodynamic minimum.
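The gap between today’s hardware and the Landauer limit can be checked with the same constants (assuming, as above, 32 bits processed per floating-point operation):

```python
import math

landauer_J     = 1.380649e-23 * 300.0 * math.log(2)   # ~2.9e-21 J per bit at 300K
flops_per_watt = 50e9                                  # 50 GFLOPS per Watt, per the chart
j_per_bit      = (1 / flops_per_watt) / 32             # energy per bit at 32-bit precision
print(f"{j_per_bit:.1e} J per bit, ~{j_per_bit / landauer_J / 1e6:.0f} million x the Landauer limit")
# ~6e-13 J per bit, roughly the '200M times' cited above
```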

One reason, of course, is that modern computers flow electricity through semiconductors, which are highly resistive. Indeed, undoped silicon is 100bn times more resistive than copper. For redundancy’s sake, there is also a much larger amount of charge flowing per bit per transistor than just a single electron.

But we can conclude that information processing is energy transfer. Computation is energy flow.

As a final thought, the entirety of the universe is a progression from a singularity of infinite energy density and low entropy (at the Big Bang) to zero energy density and maximum entropy in around 10^23 years from now. The end of the universe is literally the point of maximum entropy. Which means that no information can remain encoded.

There is something poetic, at least to an energy analyst, in the idea that “the universe isn’t over until all information and memories have been forgotten”.

Electric adventures: conclusions from an EV road trip?

It is a rite of passage for every energy analyst to rent an electric vehicle for an EV road trip, then document their observations and experiences. Our conclusions are that range anxiety is real, chargers benefit retailers, economics are debatable, power grids will be the biggest bottleneck and our EV growth forecasts are not overly optimistic.


(1) Range anxiety is real. Last weekend, we traveled from Brussels to Kortrijk, to Ypres, to the site of Operation Dynamo in Dunkirk, to the Western front of the Somme, as part of a self-educational history trip.

The total journey was 600km (map below). Undertaken in a vehicle with 300km of range. By a driver somewhat anxious about running out of electricity, and themselves needing to be rescued from Dunkirk.

For contrast, the range of an equivalent ICE car is around 800km. On the other hand, we did enjoy charging our vehicle on France’s famously low-carbon grid (65% nuclear). Combined with the prevalence of onshore wind in Northern Europe, you can easily convince yourself that you are charging with very low-carbon electricity.

(2) Chargers benefit retailers. We did spend over two hours charging at a Level 2 charger, near an out-of-town supermarket in Dunkirk. We passed the time by shopping in the supermarket. Ultimately, my wife and I were unable to resist buying a large bag of madeleine cakes, which would sustain us for the next two days. This is the biggest reason we ultimately expect EV chargers to get over-built. They will pay for themselves in footfall.

(3) Economics are debatable. Many commentators argue that electric vehicle charging should be ‘cheaper’ than ICE vehicles, but this was not entirely borne out by our own adventures.

For perspective, €1.8/liter gasoline in Europe is equivalent to $8/gallon, of which c50-65% is tax. Combusted at 15-20% efficiency, this is equivalent to buying useful transportation energy at $1.1/kWh.

Our receipt is below for Friday night’s EV charge in Dunkirk, equating to around $0.6/kWh of useful energy. This is about 2-4x higher than the various scenarios in our EV charging model (below). It is comparable to the untaxed cost of gasoline. And 50% below the taxed cost of gasoline.
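The comparison above can be reproduced with rough arithmetic. The exchange rate, energy content and efficiency inputs below are our own illustrative assumptions, and the charger price is a stand-in for the receipt rather than the exact figure:

```python
# Useful-energy cost of gasoline in an ICE vs grid electricity in an EV.
GASOLINE_KWH_PER_GALLON = 33.7
eur_per_liter, usd_per_eur, liters_per_gallon = 1.80, 1.08, 3.785
usd_per_gallon = eur_per_liter * usd_per_eur * liters_per_gallon        # ~$7-8/gallon
ice_efficiency = 0.20                                                   # top of the 15-20% range
print(f"gasoline: ~${usd_per_gallon / (GASOLINE_KWH_PER_GALLON * ice_efficiency):.2f}/kWh useful")  # ~$1.1/kWh

ev_price_usd_per_kwh = 0.55    # illustrative charger price, in the ballpark of the receipt
ev_efficiency        = 0.90    # plug-to-wheel assumption
print(f"EV charge: ~${ev_price_usd_per_kwh / ev_efficiency:.2f}/kWh useful")   # ~$0.6/kWh
```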

My own perspective is that I would happily have paid more for a faster charge. As evidenced by my glee, on Sunday morning, when paying €40 for 40kWh at a fast-charger in Belgium, which took a mere 25 minutes!!

(4) Power grids will be the biggest bottleneck. What enabled us to fast-charge at 100kW in the video above was a large amount of electrical infrastructure, specifically a 10kV step-down transformer and associated power electronics, to accommodate 3 x 300 kW docks, each with 2 charging points (photo below). The continued build-out of EV infrastructure therefore requires overcoming mounting power grid bottlenecks.

(5) Our EV growth forecasts are not obviously over-optimistic? Overall, our EV experience was a good one. Charging points were widely available. In big towns and small towns. Queues were minimal. Charging was easy (albeit time-consuming).

There was nothing in our experience that made me think I needed to rush home and downgrade my previously published numbers, which see global EV sales ramping up from 14M vehicles in 2023 (10M BEVs, 4M PHEVs) to 50M by 2028 (model below), including the concomitant impacts on our oil demand forecasts.

Post-script. I have listened back to this EV road trip video several times and wish to apologize for some errata. My geography is not as bad as implied by the Betherlands fiasco. At one point, I said “50 kilowatts” when I meant “50 kilowatt hours”. But our biggest mistake… well, it turns out we did have a charging cable, hidden under the front bonnet (photo below). Clearly the final barrier to EV adoption in some cases may simply be the unfamiliarity of users :-/.

European gas: anatomy of an energy crisis?

European gas demand across residential heat, commercial heat, electricity and a dozen industries.

Europe suffered a full-blown energy crisis in 2022. So what happened to gas demand, as prices rose 5x from 2019 levels? European gas demand fell -13% overall in 2022, including -13% for heating, -6% for electricity and -17% for industry. The data suggest upside for future European gas, global LNG and gas as the leading backup to renewables. Underlying data are available for stress-testing in our gas and power model.


Energy data from Eurostat have pros and cons. The pro is 100 lines of gas market granularity across 27 EU member countries. The cons are that the full 2022 data were only posted online in March-2024, and require careful scrubbing in order to derive meaningful conclusions. We have scrubbed the data and updated our European gas and power model (below).

European gas demand (EU27 basis) fell from 414 bcm in 2021 to 363 bcm in 2022, for a decline of -52 bcm, or -13%. The first conclusion is about price inelasticity. Gas prices averaged $29/mcf in 2022, up 110% YoY, and up 5x from 2019 levels, yet gas demand only fell by 13%. Energy price inelasticity allows for energy market volatility, which we think is structurally increasing in the global energy system, benefitting energy traders, midstream companies and load-shifters (note below).

Heating comprises 40% of Europe’s gas demand, of which 24pp is residential, 11pp is commercial, 3pp is heat/steam sold from power plants to industry and 1pp is agriculture (yes, 1% of Europe’s gas is burned to keep livestock warm). Total heating demand fell -13% in 2022, in line with the total market trend, and demonstrating similar price-inelasticity.

The temperatures of processes used in different economic sectors and their contribution to total global heat demand in TWh per year.

Electricity comprises 30% of Europe’s gas demand, and our thesis has been that gas power will surprise to the upside, entrenching as the leading backup for renewables (note below). 2022 supports this thesis. Gas demand for electricity only fell by -6%, the lowest decline of any major category; and total gas demand for power, at 105bcm, was exactly the same as in 2012, despite 3x higher gas prices and wind and solar more than doubling from 9% to 22% of the mix. These are remarkable and surprising numbers.

Industry comprises 30% of Europe’s gas demand. What is fascinating is how YoY gas demand varied by industry in 2022. Most resilient were the production and distribution of gas itself (-1% YoY), manufacturing food products (-6%) and auto production (-6%). The biggest reductions in gas demand were refineries (-41%) and wood products (-26%) because both can readily switch to other heat sources amidst gas price volatility. Other large reductions in gas demand occurred for chemicals (-26%) and construction (-24%) due to weak economic conditions.

Most strikingly, the European chemicals industry shed a full 1bcfd of gas demand YoY in 2022. This is the portion of European gas demand that seems most at risk to us in the long-term, as the US can produce the same materials, at lower feedstock costs, while possibly also decarbonizing at source, via blue hydrogen value chains (examples below).

The latest data from Eurostat and the IEA both imply that Europe’s total gas demand fell by a further -7% in 2023, due to exceptionally mild weather (heating degree days are also tabulated in our gas and power model). In other words, total European gas demand remains -8bcfd lower than in 2021, equivalent to 60MTpa of LNG, and we wonder how much of this demand can come back with LNG capacity additions, thereby muting fears of over-supplied LNG markets.
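For reference, the conversion behind that comparison (48.7 bcf of gas per MT of LNG is a standard approximation):

```python
BCF_PER_MT_LNG = 48.7                    # approximate gas content of one million tonnes of LNG
shortfall_bcfd = 8                       # European demand still below 2021 levels
print(f"{shortfall_bcfd} bcfd = {shortfall_bcfd * 365 / BCF_PER_MT_LNG:.0f} MTpa of LNG")   # ~60 MTpa
```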

LNG ramp-rates: MTpa per month and volatility?

What are the typical ramp-rates of LNG plants, and how volatile are these ramp-ups? We have monthly data on several facilities in our LNG supply-demand model, implying that 4-5MTpa LNG trains tend to ramp at +0.7MTpa/month, with a +/- 35% monthly volatility around this trajectory. Thus do LNG ramps create upside for energy traders?


Qatar is expanding its LNG capacity from 77MTpa to 142MTpa, by adding 8 x 8.1MTpa mega-trains into the 400MTpa global LNG market.

For perspective, 65MTpa of new LNG capacity is almost 1,000 TWh pa of primary energy, whereas the total global solar industry added +400 TWh of generation in 2023 (our latest solar outlook is linked here).
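The energy-content arithmetic, assuming ~55 GJ (15.3 MWh) per tonne of LNG:

```python
MWH_PER_TONNE_LNG = 15.3      # ~55 GJ per tonne, an approximation
new_capacity_mtpa = 65
# MTpa x MWh/tonne works out numerically to TWh pa (1 million tonnes x 1 MWh = 1 TWh)
print(f"~{new_capacity_mtpa * MWH_PER_TONNE_LNG:,.0f} TWh pa of primary energy")   # ~1,000 TWh pa
```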

Hence we wonder how fast large LNG projects ramp up? Month-by-month ramp-ups of different LNG facilities are plotted above, where we can get the data, as an excerpt from our LNG supply-demand model.

The historic precedent sees LNG facilities ramp up 4-5MTpa trains at 0.2-2MTpa/month, with an average ramp rate of 0.7MTpa/month.

The ramp-ups are also volatile, with a +/- 35% standard error around the trajectory implied by a perfectly smooth ramp-up. Volatility may benefit energy traders? Let us review some examples below.

Australia ramped up 7 mega-projects with 62MTpa of capacity from 2015 to 2019, over four years (+1.1MTpa/month), and with surprisingly high volatility (+/- 35% standard error above/below the 1.1MTpa ramp rate). In bottom-quartile months, annualized output fell by -1.2MTpa and in bottom-decile months it fell by -4.3MTpa.

Sabine Pass ramped up 6 x 5MTpa trains from 2016 to 2022, over six years (0.4MTpa/month), and its ramp included volatility (+/- 55% standard error), a -2MTpa annualized decline in one-quarter of the months and a -3MTpa decline in one-tenth. For example, the facility shut down in August 2020 due to Hurricane Laura.

Freeport LNG ramped up at 0.7MTpa/month with +/- 45% standard error, with a particularly disrupted ramp-up, due to an explosion in June-2022, which took 9 months to remedy. The incident was blamed on deficient valve-testing procedures, which allowed LNG to become isolated, heat up, expand, breach the pipeline and explode. US regulators asked for information on 64 items before permitting a restart, which speaks to the complexity of these ramp-ups (!).

Other LNG facilities have also had volatility during their ramp-ups. Elba Island LNG went offline in May-2020 after a fire. Sabine, Corpus and Freeport cut volumes by 70% peak-to-trough during the worst of the COVID crisis. The average project in our data set ramped up 4-5MTpa LNG trains at 0.7MTpa/month with +/- 35% standard error.

Hence our conclusion is that the start-up of Qatar’s first two LNG trains in 2026 will be gradual, rather than a sudden 16MTpa shock to LNG markets, while LNG traders could even benefit from the volatility? For more perspectives, please see our outlook on the LNG industry.

Email deliverability: who broke the internet?

One of our goals at Thunder Said Energy is to help make everyone smarter on the amazing world of energy, by sending out a daily email to our distribution list. But sending a daily email to 10,000 people turns out to be harder than you’d think. This video explains research email deliverability, SPF, DKIM, DMARC and lessons learned over 15 years.

We also endured an unfortunate issue in December that prevented 4,000 subscribers from receiving our research. We’re very sorry. We hope we’ve fixed it! And some comments follow below to make sure important research reaches you in the future.

This story goes back over fifteen years. In my first ever research job, we used to send research emails to a list of 2,000 investors… via Outlook. At the time, there was a limit that you could only BCC 800 people per email. And so we had to ‘blast out’ each research note in three separate batches, in a somewhat horrific process.

Hence today, large mailing lists tend to be managed by email marketing platforms. An amazing amount of computation goes on behind the scenes, in an attempt to ensure emails reach you safely. Hence the common statistics that an email embeds 1Wh of electricity and emits 0.3 grams of CO2, and that in aggregate the energy consumption of the internet runs to 800 TWh pa, or 2.5% of all global electricity.

An email list of 10,000 people becomes a somewhat unwieldy beast, and we worry that our research might not reach our clients, who genuinely want to receive it. We send out an email most days to our distribution list at 6:45am Eastern time. If you would like to receive this email, but are for some reason not receiving it, then please contact us, and we will help you resolve the issue.

The most common resolution for clients that are not receiving our emails is for your company’s IT administrator to whitelist our mailing list sending-domain, which is ml.thundersaidenergy.com. For GDPR reasons, the emails are sent from servers in Europe, which has also historically caused some of our US clients to screen out these emails. If your IT department needs any further details, then please do contact us.

We did have a major issue with our research email deliverability in December-2023. We had 3,500 users unsubscribe from our mailing list in a single day, all precisely one minute after our email was sent out. We understand that the cause was client-side mail servers checking all of the links in our outgoing emails (to make sure they are safe), including, unhelpfully, the one-click unsubscribe link that is now required by Google. Apparently we were not alone, and hundreds/thousands of other mailing lists have suffered from this issue. The issue is still under discussion in some angry Reddit threads!

Not all of our research reached all of our clients in December-2023 and early January-2024. What upset us about this, in particular, was that this timing happened to coincide with some of the most important and actionable research we have published over the past five years. In case you missed it, the three most important research notes are copied below.

Energy transition from first principles?

Our top three questions in the energy transition are depicted above. Hence we have become somewhat obsessed with analyzing the energy transition from first principles, to help our clients understand the global energy system, understand new energy technologies and understand key industries. 


Our research notes aim to make smart decision-makers even smarter, covering the key concepts and numbers, while being clear and concise, and dissecting the energy transition from first principles…

Energy theory from first principles: energy units, thermodynamics, electricity, electrochemistry, magnets and motors, and semiconductors.

Decarbonization technologies from first principles: renewables, batteries, EVs, EV charging, lithium batteries, flow batteries, thermal batteries, SSBs, heat pumps, fusion, geothermal, CCS, DAC, blue hydrogen, green hydrogen, electrofuels, biofuels, landfill gas, biomass, biochar, nature.

Energy efficiency technologies from first principles: EROEI, electric vehicles, LED lighting, VFDs, CHPs, insulation, methane mitigation.

Energetic industries from first principles: the internet, industrial gases, hydrogen, ammonia, steel, battery recycling, trucks, transport, compressors.

The new age of electricity from first principles: transmission, transformers, transistors, harmonic filters, capacitors, reactive power.

The new age of volatility due to renewables, geopolitics, politics, policies.

Latest views on global energy, solar innovation, wind recalibration, nuclear, grid bottlenecks, shale and how AI is going to save the world.

We would be delighted to help you understand the energy transition from first principles. Please consider joining our distribution list or signing up to access our research.

Energy transition: three reflections on 2023?

In October-2022, we wrote that high interest rates could create an ‘unbridled disaster’ for new energies in 2023. So where could we have done better in helping our clients to navigate this challenging year? Our energy reflections on 2023 suggest some new year’s resolutions for 2024. They are clearer conclusions, predictions over moralizations, and looking through macro noise to keep long-term mega-trends in mind.


What has prompted this self-reflection is looking back on a report from October-2022, where I wrote – direct quote – that “each 1% increase in interest rates re-inflates new energies costs by 10-20%” and hence 2023 could be – again, direct quote – an “unbridled disaster” for wind, solar and clean-tech (note below).

I am not bringing this up to do some kind of victory lap. Actually, the opposite: I think I could have done a better job of helping my clients to navigate 2023.

The first self-reflection is about big conclusions. I did write that note above about interest rates. But then I also went on to write 37 other notes about different battery chemistries and CCS technologies. Hence a first resolution is to publish regular summaries that are clear and concise, for those who do not have time to read all of our publications. Examples below.

The second self-reflection is a distinction, between predictions and normative aspirations. Honestly, I think one of the reasons I did not push harder on the idea that clean technologies could have a tough 2023 was to avoid ruffling feathers. I run a research firm focused on energy transition. I would like to see the world’s energy system materially improve over the course of my lifetime.

If I have a fear for, well, basically all long-term energy analysis being published today, it is that almost all energy forecasters have been brow-beaten into publishing normative aspirations about what should happen, as though they were predictions for what will happen. Really they are very different things. So for 2024, please don’t take it personally, but I am going to try to do less forecasting about what should happen, and more about what will happen.

Predictions of what will happen, not what we ‘want to’ happen?

The third self-reflection is about purpose. Research is about helping decision-makers to make good decisions and build cool stuff. Including in the face of macro turbulence, and going back to first principles (summary below).

After our energy reflections on 2023, we feel very lucky to help 260 world-class decision makers to build cool stuff. In a world that increasingly needs it. So here is wishing you a great wind-down to 2023, and I am looking forward to helping you build cool stuff in 2024.

Copyright: Thunder Said Energy, 2019-2024.