Green hydrogen: alkaline versus PEM electrolysers?


The key difference between an alkaline electrolyser and a proton exchange membrane (PEM) electrolyser is which ion diffuses between the anode and cathode sides of the cell. In an alkaline electrolyser, hydroxide (OH-) ions diffuse. In a PEM electrolyser, protons (H+ ions) diffuse. Ten fundamental differences follow.

The lowest cost green hydrogen will come from alkaline electrolysers run at high utilizations, powered by clean, stable grids with excess power (e.g., nuclear, hydro).

PEM electrolysers are better suited to backstopping renewables, although there is still some debate over their costs, longevity and efficiency, and over whether intermittent wind/solar can be put to better use elsewhere.


(1) In an alkaline electrolyser, water is broken down at the cathode. 4 x H2O molecules gain 4 x e- and become 2 x H2 molecules + 4 x OH- ions. The OH- ions then diffuse across the cell to the anode. To complete the electrical circuit, the 4 x OH- ions surrender 4 x e- at the anode and become 2 x H2O molecules + 1 x O2 molecule. A schematic is below.

(2) In a PEM electrolyser, the chemistry is very different. Water is broken down at the anode. 2 x H2O molecules surrender 4 x e- and become an O2 molecule + 4 H+ ions (protons). The H+ ions then diffuse across the cell to the cathode. To complete the electrical circuit, at the cathode, 4 x H+ ions gain 4 x e- and become 2 x H2 molecules.

(3) PEMs have Membranes. H+ ions are the smallest ions in the Universe, measuring around 0.0008 pico-meters (compare the other ionic radii below). This means protons can diffuse through solid polymers like Nafion, which otherwise resist electricity and the flow of almost all other materials, totally isolating the anode and cathode sides of a PEM electrolyser cell.

(4) Alkaline Electrolysers have Diaphragms. OH- ions are larger, at 153 pm (which is actually quite large, per the chart above). Thus they will not diffuse through a solid polymer membrane. Instead, the anode and cathode are separated by a porous diaphragm, bathed in an electrolyte solution of potassium hydroxide, produced via a variant of the chlor-alkali process. This (alkaline) electrolyte also contains OH- ions. This helps, because a higher concentration of OH- ions makes it faster for excess OH- ions to diffuse from the cathode side of the cell, where they are generated, to the anode side, where they are consumed (see (1)).

(5) Safety implications. Alkaline electrolysers are said to be less safe than PEMs. The reason is the porous diaphragm. Instead of bubbling out as a gas on the anode side, very small amounts of oxygen may dissolve, diffuse ‘in the wrong direction’ across the porous diaphragm, and bubble out alongside the hydrogen gas at the cathode side. This is bad. H2 + O2 make an explosive mixture.

(6) Footprint implications. One way to deal with the safety issue above is to place the anode and cathode ‘further apart’ for an alkaline electrolyser. This lowers the chances of oxygen diffusing across the diaphragm. But it also means that alkaline electrolysers are less power-dense.

(7) Efficiency implications. A small amount of current can leak through the KOH solution in an alkaline electrolyser, especially at very large current densities. When a direct current (e-) is supplied to the cell, we want it to reduce water into H2 and OH- at the cathode. However, a small amount of the current may be wasted, converting K+ into K; and a small amount of ‘shunt current’ may flow through the KOH solution directly from cathode to anode. We think real-world PEMs will be around 65% efficient (chart below, write-up here) and alkaline electrolysers will be multiple percentage points lower.

(8) Cost implications. An alkaline electrolyser may be a few $100/kW cheaper than a PEM electrolyser, because the diaphragm is cheaper than the membrane, and the electrodes are cheaper too. Our overview of electrolyser costs is below, and a simple levelized-cost sketch follows after this list.

(9) Longevity implications. Today’s PEMs degrade 2x faster than alkaline electrolysers (40,000 hours versus 90,000 hours, as general rules of thumb). This is primarily because the membranes are fragile. And H+ ions are, by definition, acidic. But as with all power-electronics, the rate of degradation is also a function of the input signal and operating conditions.

(10) Flexibility implications. Alkaline electrolysers are not seen to be a good fit for backstopping renewables (chart above). According to one technical paper, “It is well known that alkaline water electrolysers must be operated with a so-called protective current in stand-by/idle conditions (i.e., when no power is provided by renewable energy sources) in order to avoid a substantial performance degradation”. When the current stops, there is nothing driving OH- ions across the cell, or pushing the H2 and O2 out of the cell. This means O2 and H2 bubbles can form and accumulate around electrode catalysts. Then, when the cell starts up again, the gas bubbles block current flow, and the resulting overly large resistances or current densities can degrade the catalysts.
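To illustrate the cost and utilization arguments in points (7) and (8), here is a minimal, hypothetical levelized-cost sketch. All of the inputs below (capex, efficiency, utilization, power prices, capital charges) are illustrative assumptions rather than our modelled figures.

```python
# Hypothetical levelized cost of hydrogen sketch; all inputs are illustrative
# assumptions, not our modelled figures.
H2_LHV_KWH_PER_KG = 33.3  # lower heating value of hydrogen, kWh per kg

def hydrogen_cost_usd_per_kg(capex_usd_per_kw, efficiency, utilization,
                             power_price_usd_per_kwh, capital_charge=0.10, life_years=20):
    """Simple $/kg H2 estimate: electricity cost plus an annualized capex charge."""
    kwh_per_kg = H2_LHV_KWH_PER_KG / efficiency              # electricity needed per kg of H2
    electricity_cost = kwh_per_kg * power_price_usd_per_kwh
    annual_kg_per_kw = 8760 * utilization / kwh_per_kg       # hydrogen output per kW of stack
    annual_capex_charge = capex_usd_per_kw * (capital_charge + 1 / life_years)
    return electricity_cost + annual_capex_charge / annual_kg_per_kw

# Alkaline unit running baseload on a stable grid vs a PEM unit backstopping renewables
print(hydrogen_cost_usd_per_kg(800, 0.62, 0.95, 0.05))    # ~$3.5/kg at high utilization
print(hydrogen_cost_usd_per_kg(1100, 0.65, 0.40, 0.03))   # ~$4.0/kg at low utilization
```

Even with cheaper input power, the low-utilization case carries a heavier capex charge per kg, which is why utilization matters so much for hydrogen costs.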

How does methane increase global temperature?


How does methane increase global temperature? This article outlines the theory. We also review the best equations, linking atmospheric methane concentrations to radiative forcing, and in turn to global temperatures. These formulae suggest that 0.7 W/m2 of radiative forcing and 0.35ºC of warming have already occurred due to methane, as atmospheric methane has risen from 720 ppb in 1750 to 1,900 ppb in 2021. This is 20-30% of all warming to-date. There are controversies over the mathematical scalars. But on reviewing the evidence, we still strongly believe that decarbonizing the global energy system requires replacing coal and ramping natural gas alongside low-carbon energies.


On the Importance of Reaching Net Zero?

There is a danger that writing anything at all about climate science evokes the unbridled wrath of substantially everyone reading. Hence let us start this article by re-iterating something important: Thunder Said Energy is a research firm focused on the best, most practical and most economical opportunities that can deliver an energy transition. This means supplying over 100,000 TWH of useful human energy by 2050, while removing all of the CO2, and avoiding turning our planet into some kind of Waste Land.

Our roadmap to net zero (note below) is the result of iterating between over 1,000 thematic notes, data-files and models in our research. We absolutely want to see the world achieve important energy transition goals and environmental goals. And part of this roadmap includes a greatly stepped up focus on mitigating methane leaks (our best, most comprehensive note on the topic is also linked below).

However, it is also helpful to understand how methane causes warming. As objectively as possible. This helps to ensure that climate action is effective.

It is also useful to construct simple models, linking atmospheric methane concentrations to global temperature. They will not be perfect models. But an imperfect model is often better than no model.

Methane is a powerful greenhouse gas

An overview of the greenhouse effect is written up in a similar post, quantifying how CO2 increases global temperature (note below). We are not going to repeat all of the theory here. But it may be worth reading this prior article for an overview of the key ideas.

Greenhouse gases absorb and then rapidly re-radiate infra-red radiation. This creates a less direct pathway for thermal radiation from the Earth’s surface to escape back into space. The ability of different gas molecules to absorb and re-radiate infra-red radiation depends on the energy bands of electrons in those molecules, especially the shared electrons in covalent bonds between non-identical atoms, which create “dipole moments” (this is why H2O, CO2, CH4 and N2O are all greenhouse gases, while N2, O2 and Ar are not).

There are two reasons that methane is up to 200x more effective than CO2 as a greenhouse gas. The first reason is geometry. CH4 molecules are tetrahedral. CO2 molecules are linear. A tetrahedral molecule can generally absorb energy across a greater range of frequencies than a linear molecule.

The second reason is that methane is 200x less concentrated in the atmosphere, at 1,900 parts per billion, versus CO2 at 416 parts per million. We saw in the post below that radiative forcing is a log function of greenhouse gases. In other words, the first 20ppm of CO2 in the atmosphere explains around one-third of all the warming currently being caused by CO2. Each 1ppm increase in atmospheric CO2 has a ‘diminishing impact’, because it is going to absorb incremental radiation in a band that is already depleted by the pre-existing CO2. Thus small increases in methane cause more warming, as methane is currently present in very low concentrations, and thus at a much steeper part of the radiative forcing curve.

The most commonly quoted value we have seen for the instantaneous global warming potential of methane (instantaneous GWP, or GWP0) is 120x. In other words 1 gram of methane has a warming impact of 120 grams of CO2-equivalent. Although the 20, 50 and 100-year warming impacts are lower (see below).

What formula links methane to radiative forcing?

Our energy-climate model is linked below. It contains the maths and the workings linking methane to radiative forcing. It is based on a formula suggested in the past by the IPCC:

Increase in Radiative Forcing from Methane (in W/m2) = Alpha x [Methane Concentration (in ppb) ^ 0.5 – Pre-Industrial Methane Concentration (in ppb) ^ 0.5] – Small Adjustment Factor for Methane-N2O interaction. Alpha is suggested at 0.036 in the IPCC’s AR5 models, and the adjustment factor for methane-N2O interactions can be ignored if you are seeking an approximation.

This is the formula that we have used in our chart below (more or less). As usual, we can multiply the radiative forcing by a ‘gamma factor’ which calculates global temperature changes from radiative forcing changes. We have seen the IPCC discuss a gamma factor of 0.5, i.e., 1 W/m2 of incremental radiative forcing x 0.5ºC/[W/m2] gamma factor yields 0.5ºC of temperature increases. However, there are controversies over the correct values of alpha and gamma.
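To make this concrete, below is a minimal sketch of the simplified formula above. The alpha (0.036) and gamma (0.5ºC per W/m2) values are the ones quoted in this note, and the methane-N2O overlap adjustment is ignored, as suggested for approximations.

```python
# Minimal sketch of the simplified IPCC-style methane formula discussed above.
# Alpha and gamma follow the values quoted in this note; the methane-N2O overlap
# adjustment is ignored, as suggested for approximations.
import math

ALPHA = 0.036   # W/m2 per sqrt(ppb), per IPCC AR5
GAMMA = 0.5     # degC of warming per W/m2 of radiative forcing

def methane_forcing(m_ppb, m0_ppb=720.0):
    """Approximate increase in radiative forcing (W/m2) vs pre-industrial methane."""
    return ALPHA * (math.sqrt(m_ppb) - math.sqrt(m0_ppb))

def methane_warming(m_ppb, m0_ppb=720.0):
    """Approximate warming (degC) implied by that forcing."""
    return GAMMA * methane_forcing(m_ppb, m0_ppb)

print(methane_forcing(1900))                            # ~0.6 W/m2 before the N2O overlap adjustment
print(methane_warming(1900))                            # ~0.3 degC of warming due to methane since 1750
print(methane_warming(3800) - methane_warming(1900))    # ~0.3 degC more if methane doubled to 3,800 ppb
```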


Interaction Effects: Controversies over Alpha Factors?

The alpha factor linking methane to radiative forcing is suggested at 0.036 in the IPCC’s AR3 – AR5 reports. Plugging 0.036 into our formula above would suggest that increasing methane from 720 ppb in pre-industrial times to 1,900 ppb today would have caused 0.52 W/m2 of incremental radiative forcing. In turn, this would be likely to raise global temperatures by 0.27ºC.

Helpfully, this tallies with the values you might see in other well-known sources, such as the Wikipedia page on Radiative Forcing.

However, many technical papers, and even the IPCC’s AR5 report, have argued that alpha should be ‘scaled up’ to account for indirect effects and interaction effects.

Tropospheric Ozone. In the troposphere (up to 15km altitude), ozone is a ridiculously powerful greenhouse gas, quantified at around 1,000x more potent than CO2. It is postulated that the breakdown of atmospheric methane produces peroxyl radicals (ROO*, where R is a carbon-based group). In turn, these peroxyl radicals oxidize the NO in NOx pollutants into NO2, which photolyzes to yield O3. And thus methane is assumed to increase tropospheric ozone. Several authors, including the IPCC, have proposed to scale up alpha values by 20% – 80%, to reflect the warming impacts of this additional ozone.

Stratospheric Water Vapor. Water is a greenhouse gas, but it is usually present at relatively low concentrations in the stratosphere (12-50 km altitude). Water vapor prefers to remain in the troposphere, where it is warmer. However, when methane in the stratosphere decomposes, each CH4 molecule yields 2 H2O molecules, which may remain in the stratosphere. Several authors, including the IPCC, have proposed to scale up alpha values by around 15% to reflect the warming impacts of this additional water vapor in the stratosphere.

Short-wave radiation. Visible light has a wavelength of 400-700nm. Infra-red radiation has a wavelength of 700nm – 1mm and is the band that is mainly considered in radiative forcing calculations. However, recent research also notes that methane can absorb short-wave radiation, with wavelengths extended down to as little as 100-200nm. Some authors have suggested that the radiative forcing of methane could be around 25% higher than is stated in the IPCC (2013) assessment when short-wave radiation is considered. This impact is not currently in IPCC numbers.

Aerosol interactions. Recent research has also alleged that methane lowers the prevalence of climate-cooling aerosols in the atmosphere, and this may increase the warming impacts of CH4 by as much as 40%. This impact is not currently in IPCC numbers.

Hydrogen interactions. Even more recent research has drawn a link between methane and hydrogen GWPs, suggesting an effective GWP of 11x for H2, which is moderated by methane (note below).

N2O interactions. The IPCC formula for radiative forcing of methane suggests a small negative adjustment due to interaction effects with N2O, another greenhouse gas, which has been rising in atmospheric concentration (from 285ppb in 1765 to 320ppb today). The reason is that both N2O and CH4 seem to share an absorption peak at 7-8 microns. Hence it is necessary to avoid double-counting the increased absorption at this wavelength. The downwards adjustment due to this interaction effect is currently around 0.08 W/m2.

The overall impact of these interaction effects could be argued to at least double the instantaneous climate impacts of methane. On this stricter vilification of the methane molecule, rising atmospheric methane would already have caused at least a 1.0 W/m2 increase in radiative forcing, equivalent to 0.5ºC of total temperature increase since 1750 due to methane alone.

Uncertainty is high, which softens methane alarmism?

Our sense from reviewing technical papers is that uncertainty is much higher when trying to quantify the climate impacts of methane than when trying to quantify the climate impacts of CO2.

The first justification for this claim is a direct one. When discussing its alpha factors, the IPCC has itself acknowledged an “uncertainty level” of 10% for the scalars used in assessing the warming impacts of CO2. By contrast, it notes 14% uncertainty around the direct impacts of methane, 55% on the interaction with tropospheric ozone, 71% on the interaction with stratospheric water vapor. These are quite high uncertainty levels.

A second point is that methane degrades in the atmosphere, with an average estimated life of 11.2 years, as methane molecules react with hydroxyl radicals. This is why the IPCC has stated that methane has a 10-year GWP of 104.2x CO2, 20-year GWP of 84x CO2, 50-year GWP of 48x CO2 and a 100-year GWP of 28.5x CO2.

There is further uncertainty around the numbers, as methane that enters the atmosphere may not stay in the atmosphere. The lifetime of methane in additional sinks is estimated at 120 years for bacterial uptake in soils, 150-years for stratospheric loss and 200 years for chlorine loss mechanisms. And these sources and sinks are continuously exchanging methane with the atmosphere.

Next, you might have shared our sense, when reading about the interaction effects above, that the mechanisms were complex and vaguely specified. This is because they are. I am not saying this to be some kind of climate sceptic. I am simply observing that if you search Google Scholar for “methane, ozone, interaction, warming”, and then read the first 3-5 papers that come up, you will find yourself painfully aware of climate complexity. It would be helpful if the mechanisms could be spelled out more clearly. And without moralistic overtones about natural gas being inherently evil, which sometimes simply make it sound as though a research paper has strayed from the scientific ideal of objectivity.

Finally, the biggest reason to question the upper estimates of methane’s climate impact is that they do not match the data. There is little doubt that the Earth is warming. The latest data suggest 1.2-1.3ºC of total warming since pre-industrial times (chart below). Our best guesses, based on our very simple models, point to 1.0ºC of warming caused by CO2, 0.35ºC caused by CH4 and around <0.2ºC caused by other greenhouse gases. If you are a mathematical genius, you may have noticed that 1.0 + 0.35 + <0.2 adds up to around 1.5ºC, which is more warming than has been observed. And this is before including any attribution for other factors, such as changing solar intensity or ocean currents. So this may all suggest that our alpha and gamma factors are, if anything, too high. In turn, this may mute the most alarmist fears that the stated alpha factors for methane are materially too low.

Conclusions for gas in the energy transition

How does methane increase global temperature? Of course we need to mitigate methane leaks as part of the energy transition, across agriculture, energy and landfills; using practical and economical methods to decarbonize the entire global energy system. Methane is causing around 20-30% of all the incremental radiative forcing, on the models that we have considered here. If atmospheric methane doubles again to 3,800 ppb, it will cause another 0.2-0.4ºC of warming, as can be stress-tested in our model here.

However, we still believe that natural gas should grow, indeed it should grow 2.5x, as part of the world’s lowest cost roadmap to net zero. The reason is that while we are ramping renewables by over 5x, this is still not enough to offset all demand for fossil energy. And thus where fossil energy remains, pragmatically, over 15GTpa of global CO2 abatement can be achieved by displacing unchecked future coal consumption with gas instead.

Combusting natural gas emits 40-60% less CO2 than combusting coal, for the same amount of energy, which is the primary motivation for coal-to-gas switching (note below). But moreover, methane leaks into the atmosphere from the coal industry are actually higher than methane leaks from the gas industry, both on an absolute basis and per unit of energy, and this is based on objective data from the IEA (note below).

How does CO2 increase global temperature?


How does atmospheric CO2 increase global temperature? The purpose of this article is to outline the best formulae we have found linking global temperature to the concentration of CO2 in the atmosphere. In other words, our goal is a simple equation, explaining how CO2 causes warming, which we can use in our models. In turn, this is why our ‘roadmap to net zero’ aims to reach net zero by 2050 and stabilize atmospheric CO2 at 450ppm, a scenario we believe is compatible with 2ºC of warming.

Disclaimer: can a simple equation explain global warming?

Please allow us to start this short note with a disclaimer. We understand that writing anything at all about climate science is apt to incur the unbridled wrath of substantially everyone. We also understand that the world’s climate is complex, and cannot be perfectly captured by simple formulas, any more than ‘world history’ can be.

Nevertheless, the over-arching goal of all of our research is that we are trying to model the best roadmap to decarbonize the global energy system (note below), and the best resultant opportunities in the energy transition.

Hence we think it is useful to have an approximate formula, even if it only “about 80% right”, rather than a conceptual black hole. So that is the purpose of today’s short note.

(1) Thermal theory: inflows and outflows?

The Earth’s temperature will be in balance and remain constant if energy inflows match energy outflows. Energy inflows come from the sun, and are approximated by the ‘solar constant’ of 1,361 W/m2 (measured at the top of the atmosphere, facing the sun). Energy outflows are radiated back into space, and are approximated by the Stefan-Boltzmann law.

One of the terms in the Stefan-Boltzmann equation is Temperature^4. That is temperature raised to the power of four. In other words, if for some reason, the Earth is not quite radiating/rejecting 1,361 W/m2-equivalent back into space, then its average temperature needs to become a little bit warmer, until it is once again radiating 1,361 W/m2-equivalent back into space. Let’s unpack this a bit further…
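As a first step in that unpacking, here is a minimal, zero-dimensional sketch of the balance, assuming a planetary albedo of c0.3 and averaging the intercepted sunlight over the whole sphere (the Earth intercepts sunlight on a disc but radiates from a sphere); both simplifications are our own assumptions for illustration.

```python
# Minimal zero-dimensional sketch of the energy balance described above, assuming a
# planetary albedo of ~0.3 and averaging the intercepted sunlight over the whole sphere.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/m2/K^4
SOLAR_CONSTANT = 1361  # W/m2 at the top of the atmosphere, facing the sun
ALBEDO = 0.3           # assumed share of sunlight reflected straight back to space

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4     # ~238 W/m2 averaged over the sphere
t_effective = (absorbed / SIGMA) ** 0.25         # ~255 K effective radiating temperature

print(absorbed, t_effective)  # the ~33 K gap to the actual ~288 K surface average
                              # is the greenhouse effect
```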

Physics dictates that any physical body with a temperature above absolute zero will radiate energy from its surface, usually small amounts of energy. The wavelength depends on the body’s chemical properties, and more importantly, its temperature, i.e., its thermal energy.

Photon energy is inversely proportional to wavelength. Some electromagnetic radiation has a wavelength of 380-700 nm, in which case we call this radiation ‘visible light’. Some has lower energy, and thus a longer wavelength. For example, radiation with wavelengths from 700 nm up to 1 mm is referred to as ‘infra-red’.

So in conclusion, energy is constantly being radiated back “upwards” towards outer space by the Earth’s surface, i.e., its land and its seas. These wavelengths are generally in the thermal infra-red range, spanning roughly 4,000 – 50,000 nm and peaking around 10,000 nm. But how much of that energy actually escapes into space?

(2) Greenhouse Gases: CO2, CH4, N2O, et al.

CO2 is a greenhouse gas. This means that as thermal energy is radiated upwards from the Earth’s surface — land and sea — it can excite the electrons in CO2 molecules, into a higher energy state, for a few nano-seconds. These electrons quickly relax back into a lower-energy state. As they do this, they re-radiate energy.

In other words, CO2 molecules absorb energy that is “meant to be” radiating upwards from the Earth’s surface back into space, and instead they scatter it in various other directions, so some of it will be re-absorbed (e.g., back in the world’s oceans).

In passing, this is why some scientists object to CO2 being described as ‘a blanket’. It is not trapping heat. Or storing any heat itself. It is simply re-radiating and re-scattering heat that would otherwise be travelling in a more direct path back into space.

This effect of CO2, and other greenhouse gases, can be described as ‘radiative forcing’ and measured in W/m2. In 2021, with 416ppm of CO2 in the atmosphere, the radiative forcing of atmospheric CO2 is said to be 2.1 W/m2 higher than it was in 1750, back when there was 280 ppm of CO2 in the atmosphere.

So what equations relate atmospheric CO2 concentrations into radiative forcing effects and ultimately into temperatures?

(3) Absorbing equations: how does CO2 impact temperature?

Let us start with an analogy. You are reading a complicated text (maybe this one). On the first reading, you absorb and retain about 50% of the content. On the second reading, you absorb and retain another 25% of the content. On the third reading, you absorb another 13% of the content. And so on. By the tenth reading, you have absorbed 99.9% of the content. But the general idea is that the more you have absorbed on previous readings, the less is left to be absorbed on future readings.

The absorption profile of CO2 is similar. One way to think about this is that even a very small amount of a particular greenhouse gas is going to start absorbing and re-radiating energy at its particular absorption wavelengths. This dilutes the amount of energy with this particular wavelength that remains. This means there is less energy at this particular wavelength left for additional molecules of this greenhouse gas to absorb. By definition, the additional gas is ‘trying to’ absorb radiation at a wavelength that is already depleted.

Hence, without any CO2 in the atmosphere, the Earth would be about 6C cooler. The first 20ppm of CO2 explains perhaps 2ºC of all the warming exerted by CO2. The next 20ppm explains 0.8ºC. The next 20ppm explains around 0.6ºC. And so on. There are diminishing returns to adding more and more CO2.

Systems like this are described with logarithmic equations. Thus in the past, the IPCC has suggested various log equations that can relate radiative forcing to atmospheric CO2. The most famous and widely cited formula is below:

Increase in Radiative Forcing (W/m2) = 5.35 x ln (C/C0) … where C is the present concentration of atmospheric CO2 in ppm; and C0 was the concentration of atmospheric CO2 in 1750, which was 280 ppm.

The scalar value of 5.35, in turn, is derived from “radiative transfer calculations with three-dimensional climatological meteorological input data” (source: Myhre, G., E. J. Highwood, K. P. Shine, and F. Stordal, (1998). New estimates of radiative forcing due to well mixed greenhouse gases. Geophys. Res. Lett., 25, 2715–2718). Note these formulae are quite long-standing and go back to 1998-2001.

Thus in the chart below, we have averaged three log formulas, suggested in the past by the IPCC, to calculate radiative forcing from atmospheric CO2. Next, to translate from radiative forcing into temperature we have used a “gamma factor” of 0.5 ºC per W/m2 of radiative forcing, which is also a number suggested in the past by the IPCC.
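As a minimal illustration, the maths behind that chart can be sketched as follows, using only the single Myhre et al. (1998) scalar and the 0.5ºC per W/m2 gamma factor quoted above (whereas our actual chart averages three IPCC log formulae).

```python
# Minimal sketch of the log formula and gamma factor quoted above. Our chart averages
# three IPCC log formulae; this sketch uses only the single Myhre et al. (1998) scalar.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Increase in radiative forcing (W/m2) vs pre-industrial CO2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def co2_warming(c_ppm, gamma=0.5, c0_ppm=280.0):
    """Implied warming (degC) from CO2 alone."""
    return gamma * co2_forcing(c_ppm, c0_ppm)

print(co2_warming(415))   # ~1.0-1.1 degC of warming due to CO2 since 1750
print(co2_warming(450))   # ~1.3 degC due to CO2 alone, i.e., ~2 degC including other gases
```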


(4) Temperature versus CO2: mathematical implications?

Let us interpret this chart above from left to right. This is intended to be an objective, mathematical exercise, assessing the implications of simple climate formulae proposed long ago by the IPCC.

(a) Much of the greenhouse effect has “already happened”, due to the log relationship in the chart. The chart implies that the total greenhouse effect occurring today from atmospheric CO2 is around 7-8ºC, of which c6-7ºC was already occurring back in 1750.

(b) Raising atmospheric CO2 to 415ppm should have increased global temperatures by around 1ºC since pre-industrial times, just due to CO2, and holding everything else equal, again just taking the IPCC’s formulae at face value.

(c) Raising atmospheric CO2 by another 35ppm to 450ppm would, according to the simple formula, result in 1.3ºC of warming due to CO2 alone, and 2ºC of total warming, using the simple relationship that two-thirds of all greenhouse warming since 1750 is down to CO2, and the remainder is due to CH4, N2O and other trace gases.

(d) Diminishing impacts? Just reading off the chart, a further 50ppm rise to 500ppm raises temperature by 0.3ºC; a further 50ppm rise to 550ppm raises temperature by 0.27ºC; a further 50ppm rise to 600ppm raises temperature by 0.25ºC; and a further 50ppm rise to 650ppm raises temperature by 0.23ºC. The point is that the higher atmospheric CO2 rises, the higher global temperatures rise. While it is true that each incremental 50ppm has less impact than the prior 50ppm, the pace of the slowdown is itself quite slow, and not enough to dispel our ultimate need to decarbonize.

(e) There is a theoretical levelling off point due to the log relationship, where incremental CO2 makes effectively no difference to global temperatures, but from the equations above, it is so far in the future as to be irrelevant to practical discussions about decarbonization: it is probably somewhere after atmospheric CO2 has surpassed 4,000 ppm and CO2 has directly induced about 8C of warming.

In conclusion, the simple formulae that we have reviewed suggest that the ‘budget’ for 2ºC of total global warming is reached at a total atmospheric CO2 concentration of 450ppm. Of the budget, about two-thirds is eaten up by CO2, and the remaining one-third is from other anthropogenic greenhouse gases, mainly methane and N2O. This is what goes into our climate modelling, and why we think it is also important to mitigate methane emissions and N2O emissions from activities such as crop production.

(5) Controversies: challenges for simple CO2-warming equations?

Welcome to the section of this short note that is probably going to get controversial. Our intention is not to inflame or enrage anybody here. But we are simply trying to weigh the evidence objectively.

(a) Gamma uncertainty. Our simple climate formula above used a gamma value of 0.5 ºC per W/m2 of radiative forcing. The IPCC has said in the past that this is a good average estimate, and that its gamma value has an uncertainty range of +/- 20%. Even this would be quite uncertain. For example, it means that the CO2 budget for 2ºC of total warming would be anywhere between 420-510ppm of atmospheric CO2, using the formulae above.

(b) More gamma uncertainty. Worse, some sources that have crossed our screens have suggested gamma values as low as 0.31ºC per W/m2 of radiative forcing, and others as high as 0.9ºC per W/m2 of radiative forcing. One of the key uncertainties seems to be around interaction effects and feedback loops. For example, a warmer atmosphere can store more water vapor. And water vapor is itself a greenhouse gas.

(c) What percentage of CO2 emitted by human activities remains in the atmosphere and what percent is absorbed by the oceans? We have assumed almost no incremental CO2 emitted by human activities is absorbed in the oceans, and almost all remains in the atmosphere, in our simple climate model.

(d) Other variables. Our model does not factor in the impacts of other variables that have an impact on climate, such as Milankovitch cycles, solar cycles, ocean currents, massive volcanic eruptions or mysterious Dansgaard-Oeschger events. (And no doubt there are some super-nerdy details of climate science that are not yet fully understood. But surely that does not invalidate the basics. While I may not fully grasp the Higgs-Boson, I still try to avoid cycling into large and immovable obstacles).

(e) We have over-simplified radiative forcing. There is no single formula that perfectly sums up the impact of CO2 on climate, because the Earth is not a single, homogenous disc, pointed directly at the sun. Some of these over-simplifications are obvious. Others are more nuanced. Clouds reduce the mean radiative forcing due to CO2 by about 15%. And the impacts of different gases can be different across vertical profiles through the atmosphere, for example, stratospheric adjustments reduce the impact of CO2 by 15%, compared to a perfectly mixed and homogenous atmosphere.

(f) Warming to-date. Perhaps most controversially, our simple formulae discussed above suggest that with atmospheric CO2 at 415 ppm and a gamma value of 0.5 ºC, the world should already be 1.1ºC warmer than pre-industrial times due to CO2 alone, which equates to being around 1.6 ºC warmer than pre-industrial times due to a combination of all greenhouse gases. In contrast, the world currently appears to be around 1.1ºC warmer, in total, than it was in pre-industrial times, according to the data below. This may mean that our gamma value is too high, or that our formulas are too conservative. Or conversely, it could simply mean that there are time lags between CO2 rising and temperature following. We do not know the answer here, which is unsatisfying.

Conclusions: what is the relationship between CO2 and warming?

How does CO2 increase global temperature? The advantages of simple formulae are that they are simple and can be used to inform our models. The disadvantages are that the global energy-climate system is complex, and will never be captured entirely by simple formulae.

Our aim in this short note is to explain the formulas that we are using in our energy-climate models and in our roadmap to net zero. We think it is possible to stabilize atmospheric CO2 at 450ppm, by reaching ‘net zero’ around 2050, in an integrated roadmap that costs $40/ton of CO2 abated; and reaches an important re-balancing between human activities and the natural world.

450ppm of atmospheric CO2 is possibly a conservative budget. There may be reasons to hope that true ‘gamma’ values for global warming, or forcing coefficients, are lower than we have modelled. But it makes sense to ‘plan for the worst, hope for the best’. And regardless of specific climate-modelling parameters, it makes sense for a research firm to look for the best combination of economics, practicality, morality and opportunity in the energy transition.

Energy transition: a six page summary from summer-2022?


“It provokes the desire, but it takes away the performance.” That is the porter’s view of alcohol in Act II Scene III of Macbeth, and it is also our view of 2022’s impact on the energy transition. The resultant outlook is captured in six concise pages, published in the Walter Scott Journal in Summer-2022.

Further research. Our overview into energy technologies and energy transition for the route to ‘net zero’ is linked here.

Energy transition: five reflections after 3.5 years?


This video covers our top five reflections after 3.5 years, running a research firm focused on energy transition, and since Thunder Said Energy was founded in early 2019.

(1) Inter-connectedness. Value chains are so inter-connected that ultimately costs will determine the winning transition technologies.

(2) Humility. The complexity is so high that the more we have learned, the stupider we feel.

(3) Value in nuances. As a result, there is value in the nuances, which are increasingly interesting to draw out.

(4) ‘Will not should’. Bottlenecks need to be de-bottlenecked as some policy-makers have inadvertently adopted the “worst negotiating strategy in the world”.

(5) Bottom-up opportunities. And finally, we think energy transition and value will be driven by looking for bottom-up opportunities in a consistent framework.

For more perspectives on life, the universe, and everything, please see this video on our top perspectives on the energy transition from the first two years since setting up TSE.

All the coal in China: our top ten charts?


Chinese coal provides 15% of the world’s energy, equivalent to 4 Saudi Arabia’s worth of oil. Global energy markets may become 10% under-supplied if this output plateaus per our ‘net zero’ scenario. Alternatively, might China ramp its coal to cure energy shortages, especially as Europe bids harder for renewables and LNG post-Russia? Today’s note presents our ‘top ten’ charts on China’s opaque coal industry.


China’s coal industry provides 15% of the world’s energy and c22% of its CO2 emissions. These numbers are placed in context on page 2.

China’s coal production policies will sway global energy balances. Key numbers, and their impacts on global energy supply-demand, are laid out on page 3.

China’s coal mines are a constellation of c4,000 assets. Some useful rules of thumb on the breakdown are given on page 4.

China’s coal demand is bridged on page 5, including the share of demands for power, industrial heat, residential/commercial heat and coking.

Coal prices are contextualized on pages 6-7, comparing Chinese coal with gas, renewables, hydro and nuclear in c/kWh terms.

Coal costs are calculated on pages 6-8. We model what price is needed for China to maintain flat-to-slightly-growing output, while earning double-digit returns on investment.

Accelerating Chinese coal depends on policies, however, especially around a tail of smaller and higher cost mines. The skew and implications are explored on pages 7-8.

China’s decarbonization is clearly linked to its coal output. We see decarbonization ambitions being thwarted in the 2020s, per page 8.

Methane leaks from China’s coal industry may actually be higher than methane leaks from the West’s gas industry (page 9).

Chinese coal companies are profiled, and compared with Western companies, on pages 10-11.

For an outlook on global coal production, please see our article here.

Battle of the batteries: EVs vs grid storage?


Who will ‘win’ the intensifying competition for finite lithium ion batteries, in a world that is hindered by shortages of lithium, graphite, nickel and cobalt in 2022-25?

Today’s note argues EVs should outcompete grid storage, as the 65kWh battery in a typical EV saves 2-4x more energy and 25-150% more CO2 each year than a comparably sized grid battery.


Competitor #1: Electrification of Transport?

The energy credentials of electric vehicles are laid out in the data-files below. A key finding is their higher efficiency, at 70-80% wagon-to-wheel, where an ICE might only achieve 15-20%. Therefore, energy is saved when an ICE is replaced by an EV. And CO2 is saved by extension, although the precise amount depends on the ‘power source’ for the EV.

When we interrogate our models, the single best use we can find for a 65kWh lithium ion battery is to electrify a taxi that drives 20,000-70,000 miles per year. This is a direct linear pass-through of these vehicles’ high annual mileage, with taxis in New York apparently reaching the upper end of this range. Thus the higher efficiency of EVs (vs ICEs) saves 20-75MWH of energy and 7-25 tons of CO2 pa.

More broadly, there are 1.2bn cars to ‘electrify’ in the world, where the energy and CO2 savings are also a linear function of miles driven, but because ordinary people have their cars parked around 97% of the time, the savings will usually be 10-20MWH per vehicle pa.

(Relatedly, an interesting debate is whether buying a ‘second car’ that is electric is unintentionally hindering energy transition, if that car actually ends up under-utilized while consuming scarce LIBs, which could be put to better use elsewhere. As always, context matters).

Competitor #2: Grid-Scale Batteries?

The other main use case for lithium ion batteries is grid-scale storage, where the energy-saving prize is preventing the curtailment of intermittent wind and solar resources. As an example, curtailment rates ran at c5% in California in 2021 (data below).

The curtailment point is crucial. There might be economic or geopolitical reasons for storing renewables at midday and re-releasing the energy at 7pm in the evening, as explored in the note below. But if routing X renewable MWH into batteries at midday (and thus away from the grid) simply results in X MWH more fossil energy generation at midday instead of X MWH of fossil energy generation at 7pm, then no fossil energy reductions have actually been achieved. In order for batteries to reduce fossil energy generation, they must result in more overall renewable dispatch, or in other words, they must prevent curtailment.

There are all kinds of complexities in modelling the ‘energy savings’ here. How often does a battery charge-discharge? What percent of these charge-discharge cycles genuinely prevent curtailment? What proportion of curtailment can actually be avoided in practice with batteries? What is the round-trip efficiency of the battery?

To spell this out, imagine a perfect, Utopian energy system, where every day, the sun shone evenly, and grid demand was exactly the same. Every day from 10am to 2pm, the grid is over-saturated with solar energy, and it is necessary to curtail the exact same amount of renewables. In this perfect Utopian world, you could install a battery, to store the excess solar instead of curtailing it. Then you could re-release the energy from the battery just after sunset. All good. But the real world is not like this. There is enormous second-by-second, minute-by-minute, hour-by-hour, day-by-day volatility (data below).

Thus look back at the curtailment chart below. If you built a battery that could absorb 0.3% of the grid’s entire installed renewable generation capacity throughout the day, then yes, you would get to charge and discharge it every day to prevent curtailment. But you would only be avoiding about 10% of the total curtailment in the system.

Conversely, if you built a battery that could absorb 30% of the installed renewable generation capacity throughout the day, you could prevent about 99% of the curtailment, but you would only get to use this battery fully to prevent curtailment on about 5 days per year. This latter scenario would absorb a lot of LIBs, without unleashing materially more energy or displacing very much fossil fuel at all.

This is all explored in more detail in our modelling work (data file here, notes below). But we think an “energy optimized” middle ground might be to build 1MW of battery storage for every 100MW of renewables capacity. For the remainder, we would prefer other solutions such as demand-shifting and long-distance transmission networks.

Thus, as a base case, we think a 16kW battery (about the same size as in an EV) at a 1.6MW solar project might save 5MWH of energy that would otherwise have been curtailed, abating 2T of CO2e. So generally, we think a typical EV is going to save about 2-4x more energy per year than a similarly-sized grid-battery.
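As a rough cross-check on the figures above, here is a minimal sketch of the annual comparison. The per-mile energy figures, cycle count and curtailment-prevention share are illustrative assumptions, not outputs of our full data-files.

```python
# Rough cross-check of the EV vs grid-battery comparison above. The per-mile energy
# figures, cycling assumptions and curtailment-prevention share are illustrative only.
def ev_energy_saved_mwh(miles_per_year, ice_kwh_per_mile=1.0, ev_kwh_per_mile=0.3):
    """Annual fuel energy saved by replacing an ICE with an EV (MWh)."""
    return miles_per_year * (ice_kwh_per_mile - ev_kwh_per_mile) / 1000

def grid_battery_energy_saved_mwh(capacity_kwh, cycles_per_year=300,
                                  share_preventing_curtailment=0.3, round_trip_eff=0.85):
    """Annual renewable output saved from curtailment by a grid battery (MWh)."""
    return (capacity_kwh * cycles_per_year * share_preventing_curtailment
            * round_trip_eff) / 1000

print(ev_energy_saved_mwh(15000))         # ~10 MWh pa for an ordinary passenger car
print(ev_energy_saved_mwh(60000))         # ~42 MWh pa for a hard-working taxi
print(grid_battery_energy_saved_mwh(65))  # ~5 MWh pa for a comparably sized grid battery
```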

Another nice case study on solar-battery integration is given here, for anyone who wants to go into the numbers. In this example, the battery is quite heavily over-sized.

Other considerations: substitution and economics?

Substitution potential? Another consideration is that an EV battery with the right power electronics can double as a grid-scale storage device (note below), absorbing excess renewables to prevent curtailment. But batteries affixed to a wall or on a concrete pad cannot usually double as a battery for a mobile vehicle, for obvious reasons.

Economic potential? We think OEMs producing c$70-100k electric vehicles will resist shutting entire production lines if their lithium input costs rise from $600 to $3k per vehicle. They will simply pass it on to the consumer. We are already seeing vehicle costs inflating for this reason, while consumers of ‘luxury products’ may not be overly price sensitive. By contrast, utility-scale customers are more likely to push back grid scale storage projects, as this is less mission critical, and likely to be more price-sensitive.

Overall, we think the competition for scarce materials is set to intensify, as the world is going to be ‘short’ of lithium, graphite and nickel in 2022-25 (notes below). The entire contracting strategies of resource-consuming companies could change as a consequence…

Energy shortages: medieval horrors?


Energy shortages are gripping the world in 2022. The 1970s are one analogy. But the 14th century was truly medieval. Today’s note reviews the top ten features of medieval energy shortages. This is not a romantic portrayal of pre-industrial civilization, some simpler time “before fossil fuels”. It is a horror show of deficiencies, in lifespans, food, heat, mobility, freight, materials, light and living conditions. Avoiding devastating energy shortages should be a core ESG goal.


(1) 300-years of progress. Per capita income in England had trebled between 1086 and 1330. It was stoked by trade, including via Europe and the Hanseatic League. And by technology, including world-changing innovations such as plowing with horses, vertical overshot wheels, windmills (introduced from Iran in the late 12th century), wool spinning wheels, horizontal pedal looms, flying buttresses, compasses, sternpost rudders, larger sailing ships and spectacles. Note about two-thirds of these are about ways of harnessing or using energy. To a person born in 1250, it must have looked as though human civilization was on an ever-upwards trajectory.

(2) Downwards trajectory. Europe’s population fell from 70M in 1300 to 52M in 1400. In the UK, numbers fell from 5M in 1300 to 2.5M in 1400, shrinking by 10% during the famines of 1315-1325, 35% in the Great Plague (1347-1351) and 20% in famines of the 1360s. Unrest accelerated too, culminating in the ‘Peasants Revolt’, the first mass rebellion in English history, in 1381. These were dark times, a “lost century”.

(3) Climate and energy. Some accounts say Britain cooled by 1ºC in the 14th Century. There are records of vineyards in 1300, but they had disappeared by 1400. The greatest danger was when two years’ crops failed in succession. 1315-17 is known as the ‘great famine’. But bad years for wheat occurred in 1315-17, 1321-3, 1331-2, 1350-2, 1363-4, 1367-8, 1369-71 and 1390-1. As muscle power was the main motive energy source of the world, medieval energy shortages were effectively food shortages, curtailing human progress. And this would last until the industrial revolution.

(4) Living conditions. Life expectancy was lower and more variable than it is today. Half died before adulthood. The median age was 21. Only 5% were over 65. The average man was 5’7”, the average woman 5’2”. Only c10% of Europe’s population lived in towns. Even the largest cities only had 20-200k people (Paris, Venice, Ghent, Milan, Genoa, Florence, Rome, London). Literacy was about 5-10%. Again, many of these are symptoms of a civilization struggling to produce enough food-energy.

(5) Possessions were few. Up to 80% of the population comprised peasant farmers, without much formal income. A laborer earned 1-3 pence per day (£1-3 per year), a skilled laborer such as a carpenter or thatcher earned 3-4 pence (£3-5) and a mason earned 5-6 pence (£5-8). (In the upper echelons of society, a ‘knight’ had an income above £40 per year). Two-thirds of a typical worker’s wages were usually spent on food and drink. Thus a middle income urban individual might at any one time possess only around 100 items worth £5-10 in total. Most are basic necessities (chart below, inventory here). The ratio of incomes:basic products has risen by an amazing 25x since 1400. And for many commodities, costs have continued falling in real terms since 1800 (data-file below).

(6) Mobility. Many manorial subjects (aka villeins) were effectively bound to 1-2 acres of land and lived their entire lives in one cottage. They traveled so little that they did not have or need surnames. Freemen, who could travel, would usually know what was to be found at market towns within a 20-30 mile radius. Horse mobility was much too expensive for the masses, with a riding horse costing maybe £4, around a year’s wages, plus additional costs to feed, stable, ride and toll. So most would travel on foot, at 2-4 mph, with a range of c15-20 miles per day. Horse travel was closer to 4-5 mph. Thus in 1336, a youthful Edward III rode to York covering 55 miles in a day. The ‘land speed records’ of the era varied by season, from 50 miles/day in winter to 80 miles/day in summer, and were determined as much by the roads as the ‘movers’. Again, this would persist until industrial times.

(7) Freight was very limited. Carts existed to transport goods not people. But they were particularly slow, due to bad road conditions, and expensive. Grain was imported during famines, but it physically could not be moved inland. The same trend operated in reverse, as candles (made from animal fat) cost 2x more once transported into a city than they did in the countryside. The rural economy revolved around small market towns, which attracted hundreds of people on market days from a 1-5 mile radius, as it was not practical to walk c25-miles to a city, except for crucial specialist goods.

(8) Long-distance travel outside of Europe was practically unknown. So much so that myths abounded. The Terra Australis was rumored to be a vast Southern land, where men could not survive because it was too hot; instead inhabited by “sciopods”, who would lie on their backs and bask in the shade beneath their one giant foot. This mythical level of remoteness made spices from the East among the ultimate luxuries. Pound for pound, ginger might cost 2s (6-days’ wages) cloves 4s (12-days) and saffron 12s (60-days).

“where spices come from” (Source: Wikimedia Commons)

(9) Biomass was the only heating fuel. The right to gather sticks and timber was granted by manorial lords to their tenants. Every last twig was used. Heavy harvesting reduced woodlands to 7% of England’s land area (they have recovered to 13% today). Winter’s skyline was particularly bleak, as no evergreens had yet been introduced from Scandinavia. Heavy use of wood in construction also made fires a devastating risk, so many cities banned thatched roofs or timber chimneys. Ovens were communal. In a similar vein, the cost of lighting has fallen by 99.99% since these pre-industrial times.

(10) Appearances? “The prime reason to avoid medieval England is not the violence, the bad humour, the poor roads, the inequality of the class system, the approach to religion and heresy or the extreme sexism. It is the sickness”. Havoc was wrought by smoke, open cesspits and dead animals. Dental care was not about preserving teeth, only masking the scent of decay. Soaps were caustic and induced blisters. Excessive combing of hair was frowned upon. Moralists especially castigated the Danes who were “so vain they comb their hair everyday and have a bath every week”.

This incredible painting is displayed at Niguliste Church, in Tallinn Old Town, from the late medieval workshop of Bernt Notke. Over 30-meters long, it showed the ubiquity of death. A common destination shared by Kings, Queens, Bishops, Knights, Merchants and Peasants (although the former appear to be enjoying a kind of “priority boarding” system).

To an energy analyst, the list above simply translates into energy shortage after energy shortage. We are not left smiling at a romantic image of pre-industrial society, but grimacing at woeful deficiencies in food, light, heat, mobility, freight and materials. It has become common to talk about the 1970s as the stock example of global energy shortages. But the 14th century was truly medieval.

Sources:

Crouzet, F. (2001). A History of the European Economy. 1000-2000. University Press of Virginia, Charlottesville and London.  

Mortimer, I., (2009). The Time Traveller’s Guide to Medieval England. Vintage Books. London.

Wickham, C. (2016). Medieval Europe. Yale University Press.

Helion: linear fusion breakthrough?


Helion is developing a linear fusion reactor, which has entirely re-thought the technology (like the ‘Tesla of nuclear fusion’). It could have costs of 1-6c/kWh, be deployed at 50-200MWe modular scale and overcome many challenges of tokamaks. Progress so far includes 100MºC and a $2.2bn fund-raise, the largest of any private fusion company to-date. This note sets out its ‘top ten’ features.

Our overview of nuclear fusion is linked above, spelling out the technology’s game-changing potential in the energy transition. However, fourteen challenges still need to be overcome.

Self-defeatingly, many fusion reactor designs aim to deal with technical complexity via adding engineering complexity. You can do this, but it inherently makes the engineering more costly, with mature reactors likely to surpass 15c/kWh in delivered power.

Helion has taken a different approach to engineering a fusion reactor. Our ‘top ten features’ are set out below. If you read back through the original fusion report, you will see how different this is…

(1) Costs. Helion has said the reactor will be 1,000x smaller and 500x cheaper than a conventional fusion reactor, with eventual costs seen at 1-6c/kWh. This would indeed be a world-changer for zero carbon electricity (chart below).

(2) Linear Reactor. This is not a tokamak, stellarator or inertial confinement machine (see note). It is a simple, linear design, where pulsed magnetic fields accelerate plasma into a burn-chamber at 1M mph. Colliding plasma particles fuse. The fusion causes the plasma to expand. Energy is then captured from the expanding plasma. It is like fuel in a diesel engine.

(3) Direct electricity generation. Most power generators work by producing heat. The heat turns water into high-pressure steam, which then drives a turbine. Within the turbine, electricity is generated by Faraday’s law, as a moving magnetic field induces a current in stator coils of the turbine (see our note below for a primer on power-electronics). However, a linear reactor can exploit Faraday’s law directly. Plasma particles are electro-magnetically charged. So as they expand, they will also induce a current. Some online sources have suggested 95% of the energy released from the plasmas could be converted to electricity, versus c40% in a typical turbine.

(4) Reactor size. The average nuclear fission plant today is around 1GW. Very large fusion plants are gearing up to be similar in size. However, Helion’s linear reactor is seen on the order of c50MW. This is a scale that can be deployed by individual power consumers, or more ambitiously, on mobile applications, such as in commercial shipping vessels or aviation.

(5) Fewer neutrons. Helion’s target fuel is Helium-3. This is interesting because fusing 2 x Helium-3 nuclei yields a Helium-4 nucleus plus two hydrogen nuclei (protons). There are no net neutron emissions and resultant radioactivity issues (see fusion note). However, the Helium-3 would need to be bred from Deuterium, which is apparently one of the goals in the Polaris demonstration reactor (see below).

(6) Beta. Getting a fusion reactor to work energy-efficiently requires maximizing ‘beta’. Beta is the ratio of plasma field energy to confining magnetic field energy. Helion’s patents cover a field reversed configuration of magnets which will “have the highest betas of any plasma confining system”. During compression, different field coils with successively smaller radius are activated in sequence to compress and accelerate the plasmoids “into a radially converging magnetic field”.  Helion is targeting a beta close to 100%, while tokamaks typically achieve closer to 5%.

(7) Capital. In November-2021, Helion raised a $2.2bn Series-E funding round. This is the largest private fusion raise on record (database below). It is structured as a $500M up-front investment, with an additional $1.7bn tied to performance milestones.

(8) Progress so far. In 2021, Helion became the first private fusion company to heat a fusion plasma to 100MºC. It has sustained plasma for 1ms. It has confined them with magnetic fields over 10 Teslas. Its Trenta prototype has run “nearly every day” for 16-months and completed over 10,000 high-power pulses.

(9) Roadmap to commerciality? Helion is aiming to develop a seventh prototype reactor, named Polaris, which will produce a net electricity gain, hopefully by 2024. It has said in the past that fully commercial reactors could be ‘ready’ by around 2029-30.

(10) Technical Risk. We usually look to de-risk technologies by reviewing their patents. This is not possible for Helion, because we can only find a small number of its patents in the usual public patent databases. Developing a commercial fusion reactor still has enormous challenges. What helps is a landscape of different companies exploring different solutions. For a review of how this has helped to de-risk, for example, plastic pyrolysis, see our recent update below: 60% of the companies have faced steeper setbacks than hoped, but a handful are now reaching commercial scale-up.

Other exciting next-generation nuclear companies to cross our screens are highlighted in the data-files below…

To read more about our outlook on nuclear flexibility and how we see nuclear growth accelerating, please see our article here.

Oil and War: ten conclusions from WWII?


The second world war was decided by oil. Each country’s war-time strategy was dictated by its availability, its quality and attempts to secure more of it; including by rationing non-critical uses of it. Ultimately, limiting the oil meant limiting the war. This would all re-shape the future of the oil, gas and midstream industries, and also the whole world. Today’s short essay about oil and war outlines our top ten conclusions from reviewing the history.

(1) War machines run on oil products

Fighter planes, bombers, tanks, battleships, submarines and supply trucks are all highly energy intensive. For example, a tank achieves a fuel economy of around 0.5 miles per gallon. Thus, Erwin Rommel wrote that “neither guns nor ammunition are of much use in modern warfare unless there is sufficient petrol to haul them around… a shortage of petrol is enough to make one weep”.

If the First World War was a war of stagnation, then the Second World War was one of motion. Overall, America’s forces in Europe would use 100x more gasoline in World War II than in World War I. Thus in 1944, General Patton berated Eisenhower that “my men can eat their belts, but my tanks have gotta have gas”.

The fuel for Germany’s war machine was imported from Romania’s Ploiesti fields (c30-40% of total use) and earlier in the War, from the Soviet Union (10-20%). Another achievement of ‘blitzkrieg’ warfare was that the German army initially captured more fuel than it used. Its remaining oil was produced in Germany, as synfuel (c50-60% of total).

Synfuel. Germany had always been an oil-poor, coal-rich nation, relying on the latter for 90% of its energy in the 1930s. But it could manufacture synthetic gasoline by hydrogenating the coal at high temperatures and pressures. The industrial methods were developed by IG Farben, with massive state subsidies (Hitler stated “the production cost [is] of no importance”). In 1936, Hitler re-doubled the subsidies, expecting to be at war by 1940, by which time, 14 hydrogenation plants were producing 72kbpd. By 1943, this was increased to 124kbpd. It was over half of Germany’s total war-time oil use and 90% of the aviation gasoline for the Luftwaffe.
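A back-of-envelope cross-check, using only the volumes and percentages quoted in this note:

\frac{124\ \text{kbpd synfuel}}{0.50\text{-}0.60} \;\approx\; 210\text{-}250\ \text{kbpd of total German oil use};\qquad 30\text{-}40\% \times \approx\!225\ \text{kbpd} \;\approx\; 70\text{-}90\ \text{kbpd from Ploiesti}

The shares quoted above for synfuel, Romanian imports and Soviet imports are thus broadly internally consistent.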

On the other side, America provided 85% of the allies’ total oil. US output rose from 3.7Mbpd to 4.7Mbpd. 7bn bbls were consumed by the US and its allies from 1941-45, of which 6bn bbls was produced in the US.

(2) Securing oil dictated each country’s war strategy.

In 1939, Hitler and Stalin had carved up Europe via the Molotov-Ribbentrop pact, declaring mutual non-aggression. But oil was a key reason that Hitler reneged and went to war with the Soviet Union, in Operation Barbarossa, in June 1941. Stalin had already occupied northern Romania, which was too close for comfort to Ploiesti. Hitler would tell Mussolini that “the Life of the Axis depends on those oilfields”.

Moreover, Hitler wanted the oilfields of the Caucasus, at Maikop, Grozny and Baku. They were crucial. At the end of 1942, Hitler wrote “unless we get the Baku oil, the war is lost”. Even Rommel’s campaign in North Africa was the other arm of a large pincer movement, designed to converge on Baku.

Similarly for Japan, the entire Pacific War (and the necessarily antecedent attack on Pearl Harbor) was aimed at capturing the crucial oil fields of the Dutch East Indies, to which Japan would then commit 4,000 oilfield workers.

For the Allies, one of the most pressing needs was to ensure clear passage of American oil across the Atlantic, without being sunk by German U-boats. Hence the massive step-up of cryptography at Bletchley Park under Alan Turing. In March-1943, the Allies broke the U-boat codes, allowing a counter-offensive. In May-1943 alone, 30% of the U-boats in the Atlantic were sunk. Increased arrivals of American oil would be a turning point in the war.

(3) Limiting the oil meant limiting the war.

Germany’s initial blitzkrieg warfare was particularly effective, as the Germans captured more fuel than they used. But they had less luck on their eastwards offensives. Soviet tanks ran on diesel, whereas the German Panzers ran on gasoline. And it became increasingly difficult to sustain long, eastwards supply lines. Stalingrad became Germany’s first clear ‘defeat’ in Europe in 1942-43.

Fuel shortages were also illustrated in North Africa, where Rommel later said his tactics were “decided more by the petrol gauge than by tactical requirements”. He wrote home to his wife about recurring nightmares of running out of fuel. To make his tank numbers look more intimidating, he even had ‘dummy tanks’ built at workshops in Tripoli, which were then mounted on more fuel-efficient Volkswagens.

Similarly in Japan, oil shortages limited military possibilities. ‘Kamikaze’ tactics were named after the ‘divine wind’, a typhoon which disrupted Kublai Khan’s 13th century invasion fleet. But they were motivated by fuel shortages: no return journey was necessary. And you could sink an American warship with 1-3 kamikaze planes, versus 8-24 bombers and fighters. It made sense if you had an excess of personnel and planes, and a shortage of fuel.

Similarly, in 1944, in the Marianas campaign’s “great turkey shoot”, Japan lost 273 planes and the US lost 29, which has been attributed to a lack of fuel, forcing the Japanese planes to fly directly at the enemy, rather than more tactically or evasively.

Remarkably, back in Europe, it took until May-1944 for Allied bombers to start knocking out Germany’s synthetic fuels industry, in specifically targeted bombing missions, including the largest such facility, run by IG Farben at Leuna. “It was on that day the technological war was decided”, according to Hitler’s Minister of War Production, Albert Speer. In the same vein, this note’s title image above shows B-24s bombing the Ploiesti oilfields in May-1944.

By September-1944, Germany’s synthetic fuel output had fallen to 5kbpd. Air operations became impossible. In the final weeks of the War, there simply was no fuel. Hitler was still dictating war plans from his bunker, committing divisions long immobilized by their lack of fuel. In the final days of the War, German army trucks were seen being dragged by oxen.

Swiftly halting oil might even have prevented war. Japan had first invaded Manchuria in 1931. As tensions escalated, in 1934, executives from Royal Dutch and Standard of New Jersey suggested that the mere hint of an oil embargo would moderate Japanese aggression, as Japan imported 93% of its oil needs, of which 80% was from the US. In 1937, an embargo was proposed again, when a Japanese air strike damaged four American ships in the Yangtze River. It was 1939 before the policy gained support, as US outrage grew over Japan’s civilian bombings in China. By then it was too late. In early 1941, Roosevelt admitted “If we stopped all oil [to Japan]… it would mean War in the Pacific”. On December 7th, 1941, the Japanese attack on Pearl Harbor forced the Americans’ hand.

(4) Fuel quality swayed the Battle of Britain?

The Messerschmitt 109s in the Luftwaffe were fueled by aviation gasoline derived from coal hydrogenation. This had an octane rating of 87. However, British Spitfires often had access to higher-grade fuel, 100-octane aviation gasoline, supplied by the United States. It was produced using catalytic cracking technology, pioneered in the 1930s, and deployed in vast, 15-story refinery units at complex US refineries. The US ramped its production of 100-octane gasoline from 40kbpd in 1940 to 514kbpd in 1945. Some sources have suggested the 100-octane fuel enabled greater bursts of speed and greater maneuverability, which may have swung the balance in the Battle of Britain.

(5) The modern midstream industry was born.

Moving oil by tanker turned out to be a terrible war-time strategy. In 1942, the US lost one-quarter of all its oil tanker tonnage, as German U-boats sank 4x more oil tankers than were built. This was not just on trans-Atlantic shipments, but on domestic routes from the Gulf Coast, round Florida, and up the East Coast. Likewise, by 1944-45, Japan was fairly certain that any tanker from the East Indies would be sunk shortly after leaving port.

The first truly continental-scale pipelines were the result. In 1943, ‘Big Inch’ was brought into service, a 1,254-mile x 24” line carrying oil from East Texas, via Illinois, to New Jersey. In 1944, ‘Little Inch’ started up, carrying gasoline and oil products along the same route, but starting even further south, at the US Gulf Coast refining hub, between Texas and Louisiana. The share of East Coast oil arriving by pipeline increased from 4% in 1942 to 40% by the end of 1944.

The first subsea pipeline was also deployed in the Second World War, known as PLUTO (the Pipeline Under the Ocean). It ran under the English Channel and was intended to supply half of the fuel needs for the Allies to re-take Europe. One of the pumping stations, on the Isle of Wight, was disguised as an ice cream shop, to protect it from German bombers. However, PLUTO was beset by technical issues, and only flowed 150bpd in 1944, around 0.15% of the Allied Forces’ needs.

Other mid-downstream innovations were small portable pipeline systems, invented by Shell, to transport fuel to the front without using trucks; and the five-gallon ‘jerry can’. The Allies initially used 10-gallon portable fuel canisters, but they were too heavy for a single man to wield. The smaller German design was adopted, and improved with a spout that prevented dirt from being transferred into vehicle engines.

(6) The modern gas industry was also born.

As the US tried to free up oil supplies from its residential heating sector, Roosevelt wrote to Harold Ickes, his Secretary of the Interior, in 1942, “I wish you would get some of your people to look into the possibility of using natural gas… I am told there are a number of fields in the West and the Southwest where practically no oil has been discovered, but where an enormous amount of natural gas is lying idle in the ground because it is too far to pipe”.

(7) Rationing fuel became necessary everywhere.

In the UK, war-time rationing began almost immediately, with a ‘basic ration’ set at 1,800 miles per year. As supplies dwindled, so did the ration, eventually towards nil. The result was a frenzy of war-time bicycling.

In Japan, there was no domestic oil use at all. Even household supplies of spirits or vegetable oils were commandeered to turn into fuel. Bizarrely, millions were sent to dig up pine roots, deforesting entire hillsides, in the hope that they could be pyrolyzed into a fuel substitute.

Curtailing US demand was slower. In 1941, Ickes did start implementing measures to lower demand. He recommended a return to the ‘Gasoline-less Sundays’ policy of WWI and ultimately pressed oil companies to cut service station deliveries by 10-15%. Homeowners who heated their houses with oil were politely asked to keep their temperatures below 65ºF in the day, 55ºF at night.

Outright US rationing occurred later, starting in early-1942. First, gasoline use was banned for auto-racing. Then general rationing of gasoline started on the East Coast. Even later, nationwide rationing was brought in at 1.5-4 gallons per week, alongside a 35mph speed limit and an outright ban on “non-essential driving” in 1943.
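To compare the US ration with the UK’s 1,800 miles per year ‘basic ration’ above, assume an illustrative 1940s passenger-car fuel economy of around 15 miles per gallon (our assumption, not a figure from the source):

1.5\text{-}4\ \text{gal/week} \times 52\ \text{weeks} \times 15\ \text{mpg} \;\approx\; 1{,}200\text{-}3{,}100\ \text{miles per year}

In other words, even rationed US motorists were allowed broadly as many miles as the UK’s initial basic ration, before the latter dwindled towards nil.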

General US oil rationing provoked outrage. Interestingly, it was motivated just as much by rubber shortages as oil shortages. Japan’s capture of the East Indies had cut off 90% of the US’s rubber imports, and what little rubber was available was largely needed for military vehicles. Ultimately, the consumption of fuel per passenger vehicle was 30% lower in 1943 than in 1941.

(8) War-time measures tested civilian resolve.

In WWII, ambivalence was most clearly seen in the US, where support for the War was initially marginal, and conflicted with domestic economic interests.

The State of New Jersey denounced fuel rationing, lest it hamper tourism at its summer resorts. Likewise, in Miami, the tourism industry rebuffed a campaign to turn off 6-miles of beach-front neon lights, which were literally lighting up the coastal waters, so German U-boats could easily pick off the oil tankers.

In direct opposition to war-time interests, some US gasoline stations openly declared they would make as much fuel available to motorists as required, advertising that motorists should come “fill it up”. There will always be a few idiots who go joy-riding during a crisis.

(9) The map of the modern World

The entire future of the 20th century would also be partly decided by ‘who got there first’ in the liberation of Nazi Europe. Thus, Russia’s sphere of influence was decided in particular by oil supplies in the final months of the War.

The Allies’ path to Berlin in 1944-45 was 8 months slower than it should have been, hampered by the logistical challenges of fueling three separate forces on their path to the heart of Europe. General Patton wrote home in 1944 that “my chief difficulty is not the Germans, but gasoline”.

The lost time was important. It is what allowed the Soviet Union to capture as much ground as it did, including reaching Berlin before the Western Allies. This would help decide the fate of countries such as East Germany, Poland, Czechoslovakia, Hungary and Yugoslavia, all of which ended up being ‘liberated’ by the Soviets, sealing their place within the greater Soviet sphere.

Further East, oil-short Japan also approached the Soviet Union as a potential seller of crude. However, Churchill and Roosevelt made Stalin a better offer: the return of territories that Czarist Russia had lost to Japan in the humiliating war of 1905, such as Northern Manchuria and Sakhalin. The latter, ironically, now produces 300kbpd of oil and 12MTpa of LNG.

(10) Scorched Earth after capture (but NOT BEFORE)

Scorched Earth is a phrase that now conjures images of giant plumes of smoke, rising into the air from 600 large Kuwaiti oil wells, as Iraqi forces retreated during the 1990-91 Gulf War.

However, scorched earth policies were implemented everywhere in the Second World War. The Soviets thoroughly destroyed Maikop before it was captured, so the Germans could only produce 70bpd there by the following year.

In 1940-42, in the Dutch East Indies, a Shell team was drafted in to obliterate the oil fields and refinery complex at Balikpapan before it could fall into Japanese hands, with fifteen sticks of TNT affixed to each tank in the tank farm. It burned for days.

Back at Shell-Mex House, the British also drew up plans to destroy their fuel stocks if invaded. Most incredibly, at the start of World War II, France even offered Romania $60M to destroy its oilfields and deny the Germans their prize.

Strangely, some policymakers and investors appear to have had something of a ‘scorched earth’ policy towards the West’s oil and gas industry in recent years. As war re-erupts in the Western world, the history may be a reminder of the strategic need for a well-functioning energy industry. Energy availability has a historical habit of determining the course of wars.  

End note. The world’s best history book has provided the majority of anecdotes and data-points for this article. Source: Yergin, D. (1990). The Prize: The Epic Quest for Oil, Money & Power. Simon & Schuster, London. I cannot recommend this book highly enough. The cover image is from Wikimedia Commons.

Copyright: Thunder Said Energy, 2019-2023.