Commodity intensity of global GDP in 30 key charts?

The oil and materials intensities of global GDP have fallen over time, but electricity intensity has increased.

The commodity intensity of global GDP has fallen at -1.2% per year over the past half-century, as incremental GDP is more services-oriented. So is this effect adequately reflected in our commodity outlooks? This 4-page report plots past, present and forecasted GDP intensity factors for 30 commodities, from 1973 to 2050. The -1.5% pa decline in the oil intensity of global GDP is anomalous and could even slow from here. Meanwhile, surprisingly many other commodities show demand increasing in line with, or above, GDP growth.


Each $M of global GDP is associated with 80 tons of coal use, 350 bbls of oil, 1,360 mcf of gas, 285 MWH of electricity, 19 tons of steel, 19 tons of wood, 5 tons of plastics, 1.8 tons of ammonia, 1 ton of hydrogen, 0.7 tons of aluminium, 0.3 tons of copper. These inputs are crucial to the global economy, which in turn drives demand for these inputs.
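As a rough illustration of how such intensity factors translate into absolute demand, the Python sketch below scales a few of them to a hypothetical $100T of global GDP; the intensities are from the list above, while the GDP figure and variable names are illustrative assumptions.

```python
# Illustrative sketch: scaling per-$M GDP intensity factors to global GDP.
# Intensities are from the text; the $100T global GDP figure is an assumption.
INTENSITY_PER_M_GDP = {
    'coal_tons': 80, 'oil_bbls': 350, 'gas_mcf': 1360,
    'electricity_mwh': 285, 'steel_tons': 19, 'copper_tons': 0.3,
}
GLOBAL_GDP_M_USD = 100e6  # $100T, expressed in $M

for commodity, intensity in INTENSITY_PER_M_GDP.items():
    print(f"{commodity}: {intensity * GLOBAL_GDP_M_USD:,.3g} per year")
```

Scaled this way, 350 bbls per $M implies c35bn bbls of oil per year, i.e., c95Mbpd, which is in the right ballpark for global demand.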

However, a well-known economic effect is that the commodity intensity of GDP declines as GDP rises, or in other words, incremental units of global GDP tend to be more service-oriented and less energy/materials/manufacturing-oriented. This effect is quantified for different commodities on page 2, showing the commodity intensity of GDP from 1950 to 2023, plus our forecasts through 2050.

Oil intensity of global GDP is particularly interesting, showing one of the larger historical declines among different commodity categories in our database. And for good reason. Oil is expensive relative to other energy commodities. And three categories of global oil demand have been particularly easy to substitute. Hence fifteen different oil product sensitivities to GDP are plotted on page 3.

Each incremental $1k increase in GDP per capita has tended to unlock 0.75 MWH pp pa of primary global energy demand, based on regressions back to 1965. This can be explained by incremental global GDP shifting towards services over time. This is charted on page 4.
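A minimal sketch of how such a regression slope can be applied, assuming energy demand per person is roughly linear in GDP per capita; the anchor point (c20 MWH pp pa at c$13k GDP per capita) is our own illustrative assumption, not a figure from the report.

```python
SLOPE_MWH_PP_PA_PER_K_USD = 0.75  # from the regression described above

def primary_energy_pp(gdp_pc_usd, anchor_gdp_pc=13_000, anchor_mwh_pp=20.0):
    """MWH per person per year, extrapolated linearly from an assumed anchor point."""
    return anchor_mwh_pp + SLOPE_MWH_PP_PA_PER_K_USD * (gdp_pc_usd - anchor_gdp_pc) / 1_000
```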

Overall, the report sense-checks our long-term commodity forecasts, draws out key conclusions on the commodity intensity of GDP, and finds that the historical trend differs sharply by commodity. For surprisingly many commodities, the relationship with GDP is a 1:1 beta, or even a >1:1 beta, as highlighted on the conclusions page.


Solar plus batteries: the case for co-deployment?

The percentage of solar output dispatched to the grid depends on the capacity of the interconnection and the capacity of co-deployed batteries.

This 9-page study finds unexpectedly strong support for co-deploying grid-scale batteries together with solar. The resultant output is stable, has synthetic inertia, is easier to interconnect in bottlenecked grids, and can be economically justified. What upside for grid-scale batteries?


There are many different reasons that might motivate the deployment of a grid-scale battery, as tabulated on page 2. The most common is at a grid node, for load shifting and power price arbitrage, in ever-steeper duck curves.

But interestingly, we have seen a different model gaining traction in 2022-24, which is co-deploying renewables plus batteries, as explained on page 3.

The key rationale motivating co-deploying grid-scale batteries with renewables is to circumvent power grid bottlenecks and interconnection queues, which are biting for the reasons on page 4.

We can cut through the complexity of how co-deploying a battery alongside a utility-scale solar project might work, by returning to a data-file from earlier in 2024, which plotted the output, every 5 minutes, from a 275MW solar project in Australia. Our ‘battery rules’ and modeling framework are explained on page 5.

The results are fascinating. We find that battery co-deployments can allow a solar installation to dispatch about 95% of its power through a 65% smaller grid connection, while the asset can generate a highly stable, high-inertia power output across c50% of all hours of the year, which makes 2-3x better use of bottlenecked transmission capacity than raw solar output. These numbers and sensitivities are explained on pages 6-8.
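To make the mechanics concrete, below is a minimal sketch of the kind of ‘battery rules’ the report describes, applied to a 5-minute solar profile; the interconnection limit, battery sizing and round-trip efficiency are illustrative assumptions, not the parameters of our actual framework.

```python
import numpy as np

def dispatch(solar_mw, grid_mw=35.0, batt_mw=30.0, batt_mwh=120.0, eff=0.90):
    """Firm up a 5-minute solar profile through a limited grid connection."""
    dt = 5 / 60                        # hours per 5-minute interval
    soc, sent, curtailed = 0.0, [], 0.0
    for s in solar_mw:
        if s > grid_mw:                # surplus: charge the battery, curtail any overflow
            charge = min(s - grid_mw, batt_mw, (batt_mwh - soc) / (dt * eff))
            soc += charge * dt * eff
            curtailed += (s - grid_mw - charge) * dt
            sent.append(grid_mw)
        else:                          # deficit: discharge to keep output stable
            discharge = min(grid_mw - s, batt_mw, soc / dt)
            soc -= discharge * dt
            sent.append(s + discharge)
    return np.array(sent), curtailed   # dispatched MW profile, curtailed MWH
```

Running rules like these over a year of real 5-minute data is what yields statistics such as the share of generation that dispatches through a smaller interconnection, and the share of hours with stable output.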

Co-deploying batteries is more expensive than standalone solar, but it can be economically justified, for the reasons on page 9.

Overall, our analysis may help the deployment of solar, LFP batteries and their supply chains.


Long-duration storage: dirtier than gas peakers?

The CO2 credentials of long-duration batteries may be as bad as 0.35-2.0 kg/kWh, which is worse than gas peakers, or even than coal power. Grid-scale batteries are best deployed in high-frequency applications, to maximize power quality, downstream of renewables. But we were surprised to find that there is almost no net climate benefit from turning off gas peakers in favor of long-duration, low-utilization batteries.


The Energy and CO2 paybacks of new energies are often called into question. Mostly wrongly, in our view. A solar installation repays its up-front energy and CO2 costs after 1.5-2 years and goes on to have a 10x energy and CO2 payback. A wind installation repays its up-front energy and CO2 costs after one year and goes on to have a 20x energy and CO2 payback. It does not help that there are some outdated studies from the mid-2000s still swirling around on the internet.

The Energy and CO2 paybacks of batteries, however, are more nuanced, and depend on how the batteries are used. Our build-up estimates that producing a lithium ion battery requires 175 kWh/kWh of energy and emits 100 kg/kWh of CO2. When this battery is installed in an electric vehicle, achieving 2,250 charge-discharge cycles over the useful life of the battery, then the breakeven comes after about one year and the total payback is 10x.
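The arithmetic behind that EV case can be sketched as follows; the embedded-energy and cycle-count figures are from the paragraph above, while the cycles-per-year assumption is ours (each full cycle is taken to deliver ~1 kWh of useful energy per kWh of capacity).

```python
EMBEDDED_KWH_PER_KWH = 175   # energy to manufacture 1 kWh of battery capacity (text)
LIFETIME_CYCLES = 2_250      # charge-discharge cycles over the useful life (text)
CYCLES_PER_YEAR = 200        # assumption: a few full cycles per week in an EV

breakeven_years = EMBEDDED_KWH_PER_KWH / CYCLES_PER_YEAR    # ~0.9 years
payback_multiple = LIFETIME_CYCLES / EMBEDDED_KWH_PER_KWH   # ~13x gross, ~10x after losses
```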

For grid-scale batteries, there are many different potential business models. Our favorite is to install grid-scale batteries downstream of renewables projects, to cushion the high short-term volatility of wind and solar, provide synthetic inertia, and thus ensure high power quality in increasingly renewable-heavy grids. We see this model playing out already as renewables+batteries co-deployments accelerate. A battery like this can realistically achieve 1-3 charge-discharge cycles per day.

But some renewables advocates have grander ambitions, to use batteries for long-duration storage, in order to push renewables past their natural limits. Renewables will naturally peak at 50-55% of power grids in the best geographies, which can maybe be increased to 60% with demand shifting and some sensible deployments of batteries, which mainly do intra-day load shifting. But pushing renewables beyond this level will incur astronomical costs.

The idea of long-duration storage is to bridge longer periods of low-wind and/or low-solar generation, via batteries that store power for several days, weeks or months, until this energy is needed. This role is currently served by gas peakers, which make up c40% of the gas generation fleet but have utilization rates ranging from 2-20%. Across our research, we see gas plants increasingly being run as peakers, and rising value of peakers, due to increasing grid volatility. This all reflects the economics of gas peakers. The costs of long-duration, low-utilization batteries, by contrast, are crazy-high, at several hundred cents per kWh.

But the really crazy feature of long-duration energy storage, if it is done via lithium ion batteries, is that the full-cycle CO2 intensity may actually be higher than sticking with gas or even coal. To see this, consider the chart below. A battery that fully charges just 10 times per year, for 10 years, will amortize 100 kg/kWh of battery manufacturing CO2 across 100 total charge-discharge cycles, thus coming out at 1 kg/kWh of CO2 intensity. That is dirtier than coal power.

The CO2 intensity of long-duration battery storage depends on the number of charge-discharge cycles per year and asset lifetime. Low utilization batteries may be more CO2 intensive than fossil generation.

Similarly, a long-duration battery that fully charges and discharges 2x per month, for 12 years, will amortize 100 kg/kWh of up-front battery manufacturing CO2 across 288 charge-discharge cycles over its lifetime, coming out at 0.35 kg/kWh, which is about the same as a typical CCGT power plant today. In other words, there is almost no net climate benefit from turning off the gas peakers in favor of such long-duration, low-utilization batteries.

The chart also confirms that the CO2 credentials are exceptionally good for batteries that get flexed multiple times per day, for a decade or more. Here, the batteries may amortize 100 kg/kWh of up-front manufacturing CO2 across 4,000 – 14,000 charge-discharge cycles, or maybe slightly less if limited by battery degradation, thereby achieving an exceptionally low 0.01-0.03 kg/kWh of total embedded CO2 intensity.
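The amortization math in the last three paragraphs reduces to a single line, as in the sketch below; the cycle counts are the scenarios from the text.

```python
EMBEDDED_KG_CO2_PER_KWH = 100   # up-front manufacturing CO2 per kWh of capacity

def co2_intensity_kg_per_kwh(cycles_per_year, years):
    """Embedded CO2 amortized per kWh of electricity cycled through the battery."""
    return EMBEDDED_KG_CO2_PER_KWH / (cycles_per_year * years)

print(co2_intensity_kg_per_kwh(10, 10))    # 1.00 kg/kWh: dirtier than coal power
print(co2_intensity_kg_per_kwh(24, 12))    # 0.35 kg/kWh: roughly CCGT-like
print(co2_intensity_kg_per_kwh(365, 11))   # ~0.025 kg/kWh: daily cycling for a decade-plus
```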

Rapidly cycled batteries are the way to go, for economic costs and CO2 costs, especially where they are co-deployed with renewables in order to ensure high power quality. We cannot see much rationale for replacing gas peakers with long-duration, low-utilization batteries. Our own outlook is that gas peakers will retain a long-lasting role in evolving power grids. There are growing opportunities to lower the CO2 footprint of gas power.

Energy research in the age of AI?

How will AI change the research and investment worlds? Our view is that large language models (LLMs) will soon surpass human analysts in assimilating and summarizing information. Hence this video explores three areas where human analysts can continue to earn their keep, and possibly even help decision-makers beat the ‘consensus engines’.


We have spent much of 2024 writing about the rise of AI, and how it will change the energy industry: unlocking new step-changes in industrial efficiency, next-gen DAC or autonomous vehicles; while re-exciting gas generation, compounding grid bottlenecks, wolfing up grids’ spare capacity, boosting fiber-optics, industrial cooling, transformers and harmonic filters.

But how will these AI models change the research and investment worlds? This video sees decreasing value in research that assimilates and summarizes information already floating in the public domain. Machines can increasingly do that. But where will the machines struggle?

By definition, Large Language Models are ‘consensus engines’. These AIs are trained by throwing billions of tokens at an algorithm, which must learn to guess the most likely tokens to follow on from previous tokens. Or in other words, they will average out all of the wonky views on the internet, thereby arriving at a stale consensus!!

AIs are also not human. Humans may retain an edge in understanding what is on the minds of decision-makers, undertaking novel analysis to address these debates, pitching the conclusions in ways that will engage human readers, and relating conclusions back to their actual human experiences. Being a good analyst is ultimately about empathy.

Man versus machine? We are considering a new series of videos, where we will identify a topic that is particularly on the minds of our clients, then pit Rob against ChatGPT. Rob will present his best answer to the question, based on the TSE research library, then ask ChatGPT to critique his answer. Then Rob will critique ChatGPT’s answer.

Since AIs are consensus machines, this exercise may help to draw out counter-consensus ideas. So please do write in if you would like to suggest any topics for Rob to debate with ChatGPT in the first instalment of ‘Man versus Machine’.

LNG trucks: Asian equation?

LNG trucking is more expensive than diesel trucking in the developed world. But Asian trucking markets are different, especially China, where exponentially accelerating LNG trucks will displace 150kbpd of oil demand in 2024. This 8-page note explores the costs of LNG trucking and sees 45MTpa of LNG displacing 1Mbpd of diesel?


When we have evaluated LNG trucks in Western markets, such as the US or Europe, they are 20% more expensive than diesel trucks, per our models of truck costs.

What has surprised us is how different Asian trucking is from Western trucking, after tabulating the prices for 100 new trucks, with different fuels, and on sale in different geographies (see page 3).

Hence the costs of trucking are compared in Western markets and Asian markets, especially for China and India, on page 4.

Another key difference is that while the costs of LNG trucking are higher than diesel in the US or Europe, they are lower in China or India. A sensitivity analysis is given on page 5.

Other advantages for gas-powered trucks are spurring adoption in China in 2024. Some observations on the advantages are on page 6.

What impacts on global oil markets and global LNG markets, if gas-powered trucking continues accelerating in China and India? We can see 1Mbpd of oil being displaced and substituted by 45MTpa of LNG, impacting our oil demand models and LNG models.
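As a sanity check on this equivalence, the sketch below compares the energy content of the two fuel streams; the heating values are round-number assumptions.

```python
LNG_MWH_PER_TON = 13.1       # assumed energy content of LNG
DIESEL_MWH_PER_BBL = 1.7     # assumed energy content of diesel

lng_twh_pa = 45e6 * LNG_MWH_PER_TON / 1e6             # ~590 TWH per year
diesel_twh_pa = 1e6 * 365 * DIESEL_MWH_PER_BBL / 1e6  # ~620 TWH per year
```

The two streams land within c5% of each other, before adjusting for the relative efficiencies of gas versus diesel engines.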

The other angle that excites us in energy commodities is rising volatility, linked to geopolitics, and the ramp of volatile wind and solar, whose regional output varies +/- 10% per year, and whose global output varies +/- 5% per year. This creates volatility in demand for backups (e.g., LNG) and greater regional arbitrage potential. LNG trucks play into this theme. Companies and opportunities are noted on page 8.


Underlying calculations behind this report, and data into truck costs by geography, can be found in our overview of trucking costs.

Can solar reach 45% of a power grid?

Can solar reach 45% of a power grid? This has been the biggest pushback on our recent report, scoring solar potential by country, where we argued the best regions globally – California, Australia – could reach 45% solar by 2050. Hence today’s model explores what a 45% solar grid might look like. Generation is 53% solar: 8% is curtailed, 35% is used directly, 7% is consumed via demand-shifting and 3% is time-shifted via batteries.


Solar can easily reach 30% of a 100MW power grid, as shown in the chart below. Specifically, to calculate this curve, we took the actual distribution of power demand in California, and the actual distribution of solar insolation as calculated from first principles. As both of these variables vary seasonally, we calculated balances for each month separately, then averaged together all twelve months of the year.

Example power grid where solar makes up 30% of a 100MW grid. Yearly average load-profile.

Can solar reach higher shares of the grid? We are going to set a limit that 25% of baseload generation (i.e., non-solar generation) can never be curtailed, as it is needed for grid stability, both instantaneously (e.g., due to inertia) and intra-day, to ramp up if/when solar stops generating. Hence ramping up solar beyond 30% of a grid requires some adaptations.

Can solar reach 45% shares of a grid? We think the answer to this question is yes, and the chart below shows our best attempt to model what such a grid would look like. It uses three adaptations.

Example power grid where solar makes up 45% of a 100MW grid. Yearly average load-profile.

Curtailments are not the end of the world. If 15% of the solar that is generated fails to dispatch, then this requires the ‘other 85%’ to charge about 15% more, in order to achieve the same IRRs. In other words, the LCOPE of all solar in our grid re-inflates from 6-7c/kWh to 7-8c/kWh. This is mildly inflationary, but basically fine. But a challenge for deploying further solar from here is that a c15% average curtailment rate is associated with a c50% marginal curtailment rate, so building incremental solar from here will likely cost 12-15c/kWh (see below).
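The re-inflation arithmetic is simple enough to restate in code; the 6.5c/kWh base cost below is the midpoint of the range above.

```python
def lcoe_with_curtailment(base_lcoe_c_per_kwh, curtailment_rate):
    # Costs are only recovered on dispatched kWh, so LCOE re-inflates by 1/(1 - curtailment)
    return base_lcoe_c_per_kwh / (1.0 - curtailment_rate)

print(lcoe_with_curtailment(6.5, 0.15))   # ~7.6 c/kWh at the ~15% average curtailment rate
print(lcoe_with_curtailment(6.5, 0.50))   # ~13 c/kWh at the ~50% marginal curtailment rate
```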

Demand shifting is the second adaptation that allows solar to reach higher shares. It is generally cheaper to move parts of the original demand curve (dark blue line) to when the solar is generating (yielding the light blue line), than to store solar energy in a battery and re-release it when solar is not generating. Our model has the total demand curve shifted by +/- 8% on average throughout the year. The need for demand shifting is highest in May, at +/- 15%. And highest at midday in May, when 34MW of excess demand must be absorbed, in our 100MW grid.

EV charging helps to contextualize our demand-shifting numbers. A typical EV has a 70kWh battery, and might charge at 10kW for 7 hours. Hence absorbing c34MW of excess demand in our 100MW grid is going to require 3,400 EVs charging simultaneously, or 34 EVs per MW of average load. For comparison, our EV sales forecasts show the US reaching 35M EVs on the road by 2030, which would equate to 70 EVs per MW of average US load. As long as about half of the US’s EVs are plugged into a solar-energized charger, there is no problem absorbing this excess demand.
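Restating that EV arithmetic in code, with all figures from the paragraph above:

```python
excess_demand_mw = 34       # midday May excess in our 100MW example grid
ev_charge_rate_kw = 10      # typical charging rate per EV
avg_grid_load_mw = 100

evs_needed = excess_demand_mw * 1_000 / ev_charge_rate_kw   # 3,400 EVs
evs_per_mw_of_load = evs_needed / avg_grid_load_mw          # 34 EVs per MW of average load
```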

Battery storage is the third adaptation that allows solar to reach higher shares of a power grid. In our scenario above, supplying 45% of the electricity in a 100MW grid with solar, requires building 200MW of solar and 30MW of batteries. The batteries absorb and re-release 6% of the solar generation, and end up providing 3.5% of the total grid. This varies from 2% in December (the month of lowest solar insolation) to 5% in May (the month with the largest excess of solar to absorb or curtail, charts below).

Example power grid where solar makes up 45% of a 100MW grid. December average load-profile.
Example power grid where solar makes up 45% of a 100MW grid. May average load-profile.

The reason we think grids will lean less on batteries than on demand shifting is the cost of batteries. Each MW of batteries charges and discharges 240 times per year, which implies a storage spread of 30c/kWh. Across 3% of the total grid, this raises total grid costs by 1c/kWh.
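One hedged way to see why 240 cycles per year implies a c30c/kWh storage spread is to amortize an assumed battery cost over those cycles; the capex, life and discount rate below are our own illustrative assumptions, not the model's inputs.

```python
CAPEX_USD_PER_KWH = 500       # assumed installed cost of grid-scale storage
LIFE_YEARS, WACC = 10, 0.07   # assumed asset life and cost of capital
CYCLES_PER_YEAR = 240         # from the text

annual_cost = CAPEX_USD_PER_KWH * WACC / (1 - (1 + WACC) ** -LIFE_YEARS)  # ~$71/kWh/yr
storage_spread_usd_per_kwh = annual_cost / CYCLES_PER_YEAR                # ~$0.30/kWh
```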

Possible but inflationary? Overall, it is possible for solar to reach 45% of a power grid, along the lines outlined above. But an additional 3-5c/kWh is added in transmission and distribution costs (due to falling grid utilization), +1c/kWh in curtailment costs, +1c/kWh in battery costs, and 55% of the grid must still come from other sources, where infrastructure must still be maintained and included in rate bases. Renewables may add 4-6c/kWh to end consumer costs, in absolute terms, but remain a low-cost way to halve the CO2 intensity of power grids, while achieving other environmental and geopolitical goals.

It is not unrealistic for solar to reach 45% of the grid, in countries that are particularly well-placed for solar, as per our note below.

The model linked below is a nice tool for stress-testing different options around the ultimate share of solar in grids, and the need to lean on curtailment, demand-shifting and batteries. You can vary the installed base of solar, the share of non-solar baseload that cannot be curtailed, the percent of excess solar that is demand-shifted, the percent that is stored, and see the resulting power grid distribution, hour-by-hour, month-by-month.
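For readers who prefer code to spreadsheets, the sketch below mirrors the model's levers in heavily simplified form, classifying where each MWH of solar ends up; all parameter values are illustrative assumptions, the battery state is crudely reset daily, and the full model additionally re-releases stored energy into evening demand.

```python
def solar_shares(solar_mw, demand_mw, floor_mw=25.0, shift_frac=0.08,
                 batt_mw=30.0, batt_mwh=120.0):
    """Classify hourly solar output: used directly, demand-shifted, stored or curtailed."""
    direct = shifted = stored = curtailed = 0.0
    for day in range(len(solar_mw) // 24):
        soc = 0.0                                   # battery state, reset each day
        for h in range(24):
            s, d = solar_mw[day*24 + h], demand_mw[day*24 + h]
            use = min(s, max(d - floor_mw, 0.0))    # respect the non-curtailable floor
            excess = s - use
            shift = min(excess, d * shift_frac)     # demand moved into the solar window
            charge = min(excess - shift, batt_mw, batt_mwh - soc)
            soc += charge
            direct += use; shifted += shift; stored += charge
            curtailed += excess - shift - charge
    total = direct + shifted + stored + curtailed
    return {'direct': direct / total, 'shifted': shifted / total,
            'stored': stored / total, 'curtailed': curtailed / total}
```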

AI and power grid bottlenecks?

Video presentation regarding Thunder Said Energy's research and insights on AI and power grid bottlenecks.

The number one topic in energy this year has been the rise of AI. Which might not seem like an energy topic. Yet it is inextricably linked with power grid bottlenecks, the single biggest issue for energy markets in the mid-late 2020s. The goal of today’s video is to recap our key conclusions. There is an accompanying presentation for TSE clients.


AI and power grid bottlenecks are becoming inextricably interlinked. The reasons and implications are explored in this video. The button above links to a 17-page presentation, available to TSE clients, in case you would like to follow along with the video and its underlying charts and data.

Presentation regarding Thunder Said Energy's research and insights on AI and power grid bottlenecks.

The energy consumption of AI is discussed in the first portion of the video, estimated at 150GW globally in 2030, adding 1,000 TWH of new electricity demand. Together with other electrification initiatives, US electricity demand growth quintuples?

The justification for AI is that it will unlock fascinating efficiency gains and game-changing technologies. We explore this idea in the second section of the video. Areas that stand out to us include next-generation DAC, materials, autonomous vehicles, thermoelectric semiconductors and superconductors.

The more immediate challenge is powering new AI data-centers amidst deepening power grid bottlenecks. Data-centers want cheap, reliable, low-carbon power, available ASAP, from the power grid, at a location of their choosing near their end customers. Unfortunately, this is a unicorn. It does not exist. The third portion of the video assesses which variables are ‘must haves’ and cannot be compromised.

Do AI and power grid bottlenecks lead us to shale basins? When we assess all of the lower-carbon options, their levelized costs and the other dimensions that AI will end up prioritizing, we find it will be best to circumvent long-standing bottlenecks in the power and gas grids, by situating AI data-centers near fast-to-market energy sources, then moving the data via fiber-optics.

We look forward to discussing actionable implications of this research with TSE clients. Please email us if we can help you, or if you would like to explore a discussion. In the meantime, our best ideas for further reading are summarized on page 17 of the presentation below.


Methane leaks: by gas source and use

Methane leakage rates in the gas industry vary by source and use. Across our build-ups, the best-placed value chains are using Marcellus gas in CCGTs (0.2% methane leakage, equivalent to 6kg/boe, 1kg/mcfe, or +2% on Scope 3 emissions) and/or Permian gas in LNG or blue hydrogen value chains (0.3%). Residential gas use is closer to 0.8-1.2%, which is 4-6kg/mcfe; or higher as this is where leaks are most likely under-reported.

Today’s short note explains these conclusions, plus implications for future gas consumption. Underlying numbers are here.


Methane, as explained here, is a 120x more potent greenhouse gas in the atmosphere than CO2. It does degrade over time, mediated by hydroxyl radicals. So its 20-year impact is 34x higher than CO2 and its 100-year impact is 25x higher. Therefore, if c2.7-3.5% of natural gas is “leaked” into the atmosphere, natural gas could be considered a “dirtier” fuel than coal (chart below, model here).

CO2 emissions of natural gas use depending on the amount of methane leaked. Once leaks reach 3% and above, then natural gas could be considered as 'dirty' as coal.

For a fair comparison, an important side-note, not reflected in the chart above, is that methane is also leaked into the atmosphere when producing coal, because natural gas often desorbs from the surface of coal as it is mined. Our best attempt to quantify this leakage puts it at the equivalent of 33kg/boe, i.e., the same as leaking 1.2% of the methane from a gas value chain (apples to apples, using a methane GWP of 25x).

In other words, if a gas value chain is leaking less than 1.2% of its methane, then its methane leakage rates are lower than the energy-equivalent methane leakage rate from producing coal. If a gas value chain is leaking more than 1.2% of its methane, then its methane leakage rate is higher than from producing coal.
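To convert between leakage percentages and kg/boe, a hedged back-of-envelope is below; the conversion factors are standard approximations, not the data-file's exact inputs.

```python
MCF_PER_BOE = 5.8        # energy-equivalent gas per barrel of oil equivalent
KG_CH4_PER_MCF = 19.2    # approximate mass of methane per mcf at standard conditions
GWP_CH4 = 25             # 100-year warming potential, as used above

def leakage_kg_co2e_per_boe(leak_rate):
    return leak_rate * MCF_PER_BOE * KG_CH4_PER_MCF * GWP_CH4

print(leakage_kg_co2e_per_boe(0.012))   # ~33 kg/boe: the coal-equivalence threshold
print(leakage_kg_co2e_per_boe(0.002))   # ~6 kg/boe: a best-in-class 0.2% value chain
```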

One of the challenges for quantifying methane leakage across natural gas value chains is that, by definition, it is a chain. Gas molecules move from upstream production to processing stages such as sweetening and dehydration, through transmission lines, then through distribution lines, then to end consumers such as power plants, LNG plants, ammonia plants, hydrogen reformers, industrial heating and households.

Hence in the title chart above, we have attempted to build up the methane intensity of US gas value chains, looking line by line, and using the data-files indexed below. For example, as of our latest data-pull, methane leakage rates are 0.06% of produced gas in the Appalachian, 0.19% in the Permian, 0.22% in the Bakken, 0.34% in the Gulf Coast and 0.49% in the MidCon.

Putting the pieces together, we think that the total methane leakage rate across the value chain can be as low as 0.2-0.3% when gas from the Marcellus or Permian is used in gas power (e.g., for an AI data-center, LNG plant or for blue hydrogen). This is just 6-10 kg/boe of Scope 1 CO2, or 1-1.5 kg/mcfe (by contrast, combusting natural gas emits 56kg of CO2 per mcfe). And the best producers may achieve even lower, via the growing focus on mitigating methane.

Conversely, using gas for home heating and home cooling likely carries a higher methane leakage rate, of 0.8-1.2%, as there is more small-scale distribution, and smaller residential consumers are not always as discerning about conducting regular maintenance or checking for leaks. 0.8-1.2% methane leakage is equivalent to 23-33 kg/boe, or 4-6 kg/mcfe.

There is also a risk with the numbers above, namely the systematic under-reporting of methane leakage rates, both upstream and further downstream in the value chain. Large oil and gas companies are required to measure and report their methane emissions, but Mrs Miggins is not. Hence, we think the risks to the numbers in our charts are skewed to the upside.

All of this supports a growing role for natural gas in combined cycle gas turbines, and helping to alleviate power grid bottlenecks amidst the rise of AI, plus US LNG and blue hydrogen value chains. Our US decarbonization model has power rising from c40% to c50% of the US gas market by 2030, compensated by lower use in residential heat.

Maxwell’s demon: computation is energy?

Computation, the internet and AI are inextricably linked to energy. Information processing literally is an energy flow. Computation is energy. This note explains the physics, from Maxwell’s demon, to the entropy of information, to the efficiency of computers.


Maxwell’s demon: information and energy?

James Clerk Maxwell is one of the founding fathers of modern physics, famous for unifying the equations of electromagnetism. In 1867, Maxwell envisaged a thought experiment that could seemingly violate the laws of thermodynamics.

As a starting point, recall that a gas at, say, 300K does not contain an even mixture of particles at the exact same velocities, but a distribution of particle speeds, as given by the Maxwell-Boltzmann distribution below.

[Chart: the Maxwell-Boltzmann distribution of molecular speeds]
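For reference, the Maxwell-Boltzmann distribution of molecular speeds, for molecules of mass $m$ in a gas at temperature $T$, is:

$$ f(v) = 4\pi \left( \frac{m}{2\pi k_B T} \right)^{3/2} v^2 \exp\left( -\frac{m v^2}{2 k_B T} \right) $$

where $k_B$ is Boltzmann's constant; higher temperatures flatten the distribution and shift its peak to faster speeds.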

Now imagine a closed compartment of gas molecules, partitioned into two halves, separated by a trap door. Above the trap door, sits a tiny demon, who can perceive the motion of the gas molecules.

Whenever a fast-moving molecule approaches the trap door from the left, he opens it. Whenever a slow-moving molecule approaches the trap door from the right, he opens it. At all other times, the trap door is closed.

The result is that, over time, the demon sorts the molecules. The left-hand side contains only slow-moving molecules (cold gas). The right-hand side contains only fast-moving molecules (hot gas).

This seems to violate the first law of thermodynamics, which says that energy cannot be created or destroyed. Useful energy could be extracted by moving heat from the right-hand side to the left-hand side. Thus in a loose sense the demon has ‘created energy’.

It also definitely violates the second law of thermodynamics, which says that entropy always increases in a closed system. The compartment is a closed system. But there is categorically less entropy in the well-sorted system with hot gas on the right and cold gas on the left.

The laws of thermodynamics are inviolable. So clearly there must be some work done on the system, with a corresponding decrease in entropy, by the information processing that Maxwell’s demon has performed.

This suggests that information processing is linked to energy. This point is also front-and-center in 2024, due to the energy demands of AI.

Landauer’s principle: forgetting 1 bit requires >0.018 eV

The mathematical definition of entropy is S = kb ln X, where kb is Boltzmann’s constant (1.381 x 10^-23 J/K) and X is the number of possible microstates of a system.

Hence if you think about the smallest possible transistor in the memory of a computer, which is capable of encoding a zero or a one, then you could say that it has two possible micro-states, and entropy of kb ln (2).

But as soon as our transistor encodes a value (e.g., 1), then it only has 1 possible microstate. ln(1) = 0. Therefore its entropy has fallen by kb ln (2). When entropy decreases in thermodynamics, heat is usually transferred.

Conversely, when our transistor irreversibly ‘forgets’ the value it has encoded, its entropy jumps from zero back to kb ln (2). When entropy increases in thermodynamics, then heat usually needs to be transferred.

You see this in the charts below, which plot the P-V and T-S diagrams for a Brayton-cycle heat engine that harnesses net work via moving heat from a hot source to a cold sink. Although really, an information processor functions more like a heat pump, i.e., a heat engine in reverse. It absorbs net work as it moves heat from an ambient source to a hot sink.

In conclusion, you can think about encoding and forgetting a bit of information as a kind of thermodynamic cycle, in which energy is transferred to perform computation.

The absolute minimum amount of energy that is dissipated is kb T ln (2). At room temperature (i.e., 300K), we can plug in Boltzmann’s constant, and derive a minimum computational energy of 2.9 x 10^-21 J per bit of information processing, or in other words 0.018 eV.
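The derivation is two lines of arithmetic, sketched below.

```python
import math

k_B = 1.380649e-23                      # Boltzmann's constant, J/K
T = 300                                 # room temperature, K
E_min_J = k_B * T * math.log(2)         # ~2.9e-21 J per bit erased
E_min_eV = E_min_J / 1.602176634e-19    # ~0.018 eV
```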

This is Landauer’s limit. It might all sound theoretical, but it has actually been demonstrated repeatedly in lab-scale studies: when 1 bit of information is erased, a small amount of heat is released.

How efficient are today’s best supercomputers?

The best super-computers today are reaching computational efficiencies of 50 GFLOPS per Watt (chart below). If we assume 32-bit precision per float, then this equates to an energy consumption of 6 x 10^-13 Joules per bit.

In other words, a modern computer is using 200M times more energy than the thermodynamic minimum. Maybe a standard computer uses 1bn times more energy than the thermodynamic minimum.
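The comparison against Landauer's limit works out as follows; the 32-bits-per-float figure is the assumption stated above.

```python
import math

GFLOPS_PER_WATT = 50                    # best supercomputers today, per the chart
BITS_PER_FLOP = 32                      # assuming single-precision floats

joules_per_bit = 1 / (GFLOPS_PER_WATT * 1e9 * BITS_PER_FLOP)   # ~6e-13 J per bit
landauer_J = 1.380649e-23 * 300 * math.log(2)                  # ~2.9e-21 J per bit
ratio = joules_per_bit / landauer_J                            # ~2e8, i.e., c200M x
```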

One reason, of course, is that modern computers flow electricity through semiconductors, which are highly resistive. Indeed, undoped silicon is 100bn times more resistive than copper. For redundancy’s sake, there is also a much larger amount of charge flowing per bit per transistor than just a single electron.

But we can conclude that information processing is energy transfer. Computation is energy flow.

As a final thought, the entirety of the universe is a progression from a singularity of infinite energy density and low entropy (at the Big Bang) to zero energy density and maximum entropy in around 10^23 years from now. The end of the universe is literally the point of maximum entropy. Which means that no information can remain encoded.

There is something poetic, at least to an energy analyst, in the idea that “the universe isn’t over until all information and memories have been forgotten”.

Electric adventures: conclusions from an EV road trip?

It is a rite of passage for every energy analyst to rent an electric vehicle for an EV road trip, then document their observations and experiences. Our conclusions are that range anxiety is real, chargers benefit retailers, economics are debatable, power grids will be the biggest bottleneck and our EV growth forecasts are not overly optimistic.


(1) Range anxiety is real. Last weekend, we traveled from Brussels to Kortrijk, to Ypres, to the site of Operation Dynamo in Dunkirk, to the Western front of the Somme, as part of a self-educational history trip.

The total journey was 600km (map below). Undertaken in a vehicle with 300km of range. By a driver somewhat anxious about running out of electricity, and themselves needing to be rescued from Dunkirk.

For contrast, the range of an equivalent ICE car is around 800km. We did, however, enjoy charging our vehicle from France’s famously low-carbon grid (65% nuclear). Combined with the prevalence of onshore wind in Northern Europe, you can easily convince yourself that you are charging using very low-carbon electricity.

(2) Chargers benefit retailers. We did spend over 2 hours charging at a Level 2 charger, near an out-of-town supermarket in Dunkirk. We passed the time by shopping in the supermarket. Ultimately, my wife and I were unable to resist buying a large bag of madeleine cakes, which would sustain us for the next 2 days. This is the biggest reason we ultimately expect EV chargers to get over-built. They will pay for themselves in footfall.

(3) Economics are debatable. Many commentators argue that electric vehicle charging should be ‘cheaper’ than ICE vehicles, but this was not entirely borne out by our own adventures.

For perspective, €1.8/liter gasoline in Europe is equivalent to $8/gallon, of which c50-65% is tax. Combusted at 15-20% efficiency, this is equivalent to buying useful transportation energy at $1.1/kWh.

Our receipt is below for Friday night’s EV charge in Dunkirk, equating to around $0.6/kWh of useful energy. This is about 2-4x higher than the various scenarios in our EV charging model (below). It is comparable to the untaxed cost of gasoline. And 50% below the taxed cost of gasoline.
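The useful-energy arithmetic can be restated as below; gasoline's energy content and the efficiency figure are round-number assumptions, so the output lands near, rather than exactly on, the $1.1/kWh cited above.

```python
GASOLINE_USD_PER_GAL = 8.0    # European pump price, converted (from the text)
KWH_PER_GALLON = 33.7         # assumed energy content of gasoline
ICE_EFFICIENCY = 0.20         # top of the 15-20% range cited above

usd_per_useful_kwh = GASOLINE_USD_PER_GAL / (KWH_PER_GALLON * ICE_EFFICIENCY)  # ~$1.2/kWh
```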

My own perspective is that I would happily have paid more for a faster charge. As evidenced by my glee, on Sunday morning, when paying €40 for 40kWh at a fast-charger in Belgium, which took a mere 25 minutes!!

(4) Power grids will be the biggest bottleneck. What enabled us to fast charge at 100kW in the video above was a large amount of electrical infrastructure, specifically a 10kV step-down transformer and associated power electronics, to accommodate 3 x 300 kW docks, each with 2 charging points (photo below). The continued build-out of EV infrastructure therefore requires overcoming mounting power grid bottlenecks.

(5) Our EV growth forecasts are not obviously over-optimistic? Overall, our EV experience was a good one. Charging points were widely available. In big towns and small towns. Queues were minimal. Charging was easy (albeit time-consuming).

There was nothing in our experience that made me think I needed to rush home and downgrade my previously published numbers, which see global EV sales ramping up from 14M vehicles in 2023 (10M BEVs, 4M PHEVs) to 50M by 2028 (model below), including the concomitant impacts on our oil demand forecasts.

Post-script. I have listened back to this EV road trip video several times and wish to apologize for some errata. My geography is not as bad as implied by the Betherlands fiasco. At one point, I said “50 kilowatts” when I meant “50 kilowatt hours”. But our biggest mistake… well, it turns out we did have a charging cable, hidden under the front bonnet (photo below). Clearly the final barrier to EV adoption in some cases may simply be the unfamiliarity of users :-/.

Copyright: Thunder Said Energy, 2019-2024.