Many questions that matter in the energy transition are engineering questions, which flow through into energy economics: which technologies work, what do they cost, what energy penalties do they carry, and which materials do they use? We see an increasing intersection of economics and engineering in our energy transition research.
Behold thermodynamics!! Read the notes below, and you will no longer be tempted to commit exergetic harakiri by converting ratable electricity into some fuel that you can then run through a heat engine.
What does it cost? Some commentators seem to think decarbonizing the planet is going to be simple. If you think the solution is simple, the most likely reason is that you do not understand the complexity of the problem. We have built over 160 economic models of different technologies in the energy transition, and of different materials and manufacturing value chains that will themselves need to be decarbonized. One of our most strongly held views is that decarbonization will be easier using low-cost technologies, and a good cut-off for “too expensive” is decarbonization technologies costing well over $100/ton.
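To illustrate the screen, an abatement cost divides the cost premium of a cleaner route by the CO2 it avoids. The inputs in the sketch below are purely illustrative placeholders; the real numbers sit in the underlying economic models.

```python
# Hedged sketch of a $/ton abatement cost screen. All inputs here are
# illustrative assumptions, not values from TSE's models.

def abatement_cost(clean_cost, incumbent_cost, incumbent_co2, clean_co2):
    """
    Cost of CO2 abated, in $/ton: the cost premium of the clean route
    divided by the CO2 it avoids. Costs are $/unit of product; CO2
    intensities are tons/unit of product.
    """
    return (clean_cost - incumbent_cost) / (incumbent_co2 - clean_co2)

# e.g., a clean route costing $150/unit vs a $100/unit incumbent,
# cutting CO2 intensity from 0.5 to 0.1 tons/unit -> $125/ton,
# which would screen as "too expensive" on a $100/ton cut-off
cost = abatement_cost(150.0, 100.0, incumbent_co2=0.5, clean_co2=0.1)
```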
How much metal or material? The average metal and material sees demand grow 3-30x in the energy transition, due to the growth of wind, solar, electric vehicles and batteries, and the current use factors in those spaces. But often, it takes some energy economics to determine how much material you can thrift.
Does that company’s technology work? We like to answer this question by reviewing the company’s patents, and scrutinizing the engineering details. All of our patent reviews are linked here, ranging from true breakthroughs with a moat around them, to other companies that seem to have the engineering equivalent of a flat tyre. A recent observation is that many technologies progress more slowly, and are harder to de-risk, than we naively hoped, based on a former career covering large-cap equities. We really like the 5-point framework at the back of this note for assessing the road to maturation.
Humility moment. After five years running a research firm focused on the energy transition, we are still working hard to correct our historical misconceptions, and to educate ourselves about how the world’s energy-industrial complex works. If we can help you, field your requests, or dig into any topics that are swirling around in your mind, then please do contact us.
Investing involves being paid to take risk. And we think energy transition investing involves being paid to take ten distinct risks, which determine justified returns. This note argues that investors should consider these risk premia, which ones they will seek out, and which ones they will avoid.
Investment strategies for a fast-evolving energy transition?
The energy transition is evolving very quickly. This means that many investors are continually iterating their investment strategies, stepping into new themes/sectors as they emerge, and candidly, it also means that many risks are mis-priced.
Hence we think it is helpful to consider risk premia. Which risk premia are you getting paid to take? Are you getting paid enough? Or worse, are you exposed to risk for which you are not getting paid at all?
Energy Transition: ten risk premia?
We think there are ten risk factors, or risk premia, that determine the justified returns for energy transition investing. Sweeping statements about the global energy system are almost always over-generalizations that turn out to be wrong. Nevertheless, we will make some observations, as we define each risk factor below.
(1) Risk free rate. The risk free rate is a baseline. It is the return available with almost no risk, when investing in cash deposits and medium-term Treasuries. Our perspective is that many technologies in the energy transition will be inflationary. They will stoke inflationary feedback loops. And in turn inflation puts upwards pressure on the risk free rate. Thus rising rates should raise the bar on allocating capital across the board and investors should consider how they are being compensated. Our favorite note on this topic is here.
(2) Credit/equity risk. This risk premium compensates decision makers for the risk of capital loss inherent in owning the debt and equity of companies. It might vary from sub-1% in the senior secured credit of highly creditworthy companies, through to a c3-5% equity risk premium, and c5-10% for smaller/private companies. Our perspective is that there is great enthusiasm to invest in the energy transition. This means credit/equity risk premia for some of the most obvious energy transition stocks may be compressed. But the energy transition is also going to pull on many adjacent value chains, which have non-obvious exposure to the energy transition, while their risk premia have not yet compressed. So we think it is a legitimate investment strategy to target “non-obvious” exposure to the energy transition. Our favorite note on this topic is here.
(3) Project risk. Energy transition is the world’s greatest building project. But over 90% of all construction projects take longer than expected, or cost more than expected, and the average over-run is 60%. Opportunities with more of their future value exposed to delivering large projects have higher project risk. Infrastructure investors can happily stomach 5-10% total returns as they accept project risk. This might include building out power grids, pipelines, fiber-optic cables, and PPA-backstopped wind and solar. And thus another legitimate strategy in energy transition investing is to get paid for appraising and defraying project risks: get paid for the ability to execute projects well.
(4) Liquidity risk. This premium compensates investors against the possible inability to withdraw capital in a liquidity crunch. It is clearly higher for small private companies than publicly listed large-caps. Energy transition sub-sectors that might debatably warrant higher liquidity premia are CCS projects (50-year monitoring requirements after disposal), reforestation projects (40-100 year rotations for CO2 credit issuance) and infrastructure with very long construction times. A legitimate strategy for endowments and pension funds in the energy transition hinges on their large size and longevity, which allows them to withstand greater liquidity risk than other groups of investors. Another legitimate strategy is to earn higher returns by taking higher liquidity risk, building up a portfolio of privately owned companies with exposure to the energy transition, rather than investing in public companies that will tend to have lower liquidity risk premia.
(5) Country risk. This premium compensates investors against deteriorating economic conditions, tax-rises, regulatory penalties or cash flow losses in specific countries. This is becoming relevant for wind and solar, as many investors are increasingly willing to take country risk to achieve higher hurdle rates, and given the vast spread in different countries’ power prices and grid CO2 intensities (chart below). It is no good if only a few countries globally decarbonize. Net zero is a global ambition. And so another very legitimate strategy in the energy transition is to specialize in particular countries, where those country risks can be managed and defrayed, while driving energy transition there.
(6) Technology risk. This risk premium compensates investors for early stage technologies not working as intended, or suffering delays in commercialization, which derail the delivery of future cash flows. One observation in our research has been that some technologies at first glance seem to be mature, but on closer inspection still have material technology risks (such as green hydrogen electrolysers and post-combustion CCS). But technology risk is interesting for two reasons. First, we think that investors can command some of the highest risk premia (i.e., highest expected returns) for taking technology risk, compared to other risk premia on our list. Second, appraising technology risk is a genuine skill, possessed by some investors, and a wellspring of “alpha”. We enjoy appraising technology risk by reviewing patents (see below).
(7) New market risk. This risk premium compensates investors for the immaturity of markets. For example, if you produce clean ammonia, you can sell it into existing ammonia fertilizer markets, which carries no new market risk. Or you can sell it as a clean fuel to decarbonize the shipping industry, which clearly does carry new market risk: it requires persuading the shipping industry to iron out ammonia engines despite challenging combustion properties, to prevent NOx emissions, and to retrofit existing bulk carriers and container ships. Another interesting example is that in geographies with high renewables penetration, there may be some hidden market risk in reaching ever-higher renewables penetration. Our personal perspective is that new market risk is the most ‘uncompensated’ risk in the energy transition. It is pervasive across many new energies categories, many investors are exposed to this risk, and yet they are not paid for it. Although maybe another legitimate strategy in the energy transition is to collect new market risk premia while helping to create new markets?
(8) Competition risk. This risk premium compensates against unexpected losses of market share and cash generation due to new and emerging competition. It also includes the risk of your technology “getting disrupted” by a new entrant. Across our research, the area that most comes to mind is in batteries. There is always a headline swirling somewhere about a disruptive battery breakthrough that will crater demand for some incumbent material. Arguably, competition risk goes hand in hand with technology risk and new market risk. It is not enough to develop a technology that is 20% better than the incumbent if someone else develops a technology that is 40% better. Again, we think investors may not get compensated fairly for taking competition risk, while excess returns may accrue to investors that can appraise this risk well.
(9) Commodity risk. This risk premium compensates investors for the inherent volatility of commodity markets, which can have deleterious effects on valuations, liquidity, solvency, sanity (!). Consider that within the past five years, oil prices started at $80/bbl, collapsed into negative territory in Apr-2020, then recovered above $120/bbl in mid-2022. Our work has progressively gone deeper into cleaner hydrocarbons, metals, materials. Our energy transition roadmap contains bottlenecks where total global demand must rise by 3-30x. Yet our perspective is that many investors are reluctant to take commodity risk. Stated another way, commodity risk can be well-compensated. If you have the mental resiliency to ride this roller-coaster.
(10) Environmental risk. This risk premium compensates investors against tightening environmental standards, deteriorating environmental acceptance or regulations that lower future cash generation, especially in CO2 emitting value chains. It is sometimes called “stranded asset risk”. The most obvious example is investing in coal, where investors can get paid c10% dividend yields to own some of these incumbents. Our perspective is that environmental risk premia blew out to very high levels in 2019-2021, and they still remain high, especially if you believe that an era of energy shortages lies ahead (note below). Another perspective is that it feels “easier” to get paid a 5-10% environmental risk premium as an insurance policy against future energy shortages, than to earn a 5-10% risk premium by doing the due diligence on a new and emerging technology. And finally, it is a legitimate strategy in the energy transition to own higher-carbon businesses, and then improve their environmental performance, so that the market will reward these companies with lower environmental risk premia.
Economic models: moving beyond 10% hurdle rates?
We have constructed over 160 economic models of specific value chains, in new energies and in decarbonizing industries. Usually, we levy a 10% hurdle rate in these models, for comparability. But the justified hurdle rate should strictly depend upon the energy transition risk premia discussed above.
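As a minimal sketch, a justified hurdle rate can be built up as the risk-free rate plus the premia an opportunity is actually exposed to. Every number below is an illustrative placeholder, not one of our calibrated model inputs.

```python
# Hedged sketch: justified hurdle rate = risk-free rate + sum of the risk
# premia an opportunity actually carries. All premia are illustrative.

RISK_PREMIA = {            # illustrative annual premia, in %
    "credit_equity": 4.0,
    "project": 2.0,
    "liquidity": 1.5,
    "country": 1.0,
    "technology": 5.0,
    "new_market": 3.0,
    "competition": 2.0,
    "commodity": 2.5,
    "environmental": 3.0,
}

def justified_hurdle_rate(risk_free, exposures):
    """Sum the premia for the risks this opportunity is exposed to."""
    return risk_free + sum(RISK_PREMIA[k] for k in exposures)

# e.g., a PPA-backstopped solar project: mainly project plus country risk
solar = justified_hurdle_rate(4.0, ["project", "country"])            # 7.0%
# e.g., an early-stage technology developer carries several more premia
startup = justified_hurdle_rate(
    4.0, ["credit_equity", "technology", "new_market", "competition"])  # 18.0%
```

The point of the sketch is that two opportunities in the same theme can warrant very different hurdle rates, depending on which of the ten risks they actually carry.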
Please contact us if we can help you appraise any particular ideas or opportunities, to discuss their justified hurdle rates, or to discuss how these risk premia align with your own energy transition investing strategy.
The very simple spreadsheet behind today’s title chart is available here, in case you want to tweak the numbers.
The key difference between an alkaline electrolyser and a proton exchange membrane electrolyser (PEM) is what ion diffuses between the anode and cathode side of the cell. In an alkaline electrolyser, alkaline OH- ions diffuse. In a PEM electrolyser, protons, H+ ions, diffuse. Ten fundamental differences follow.
The lowest cost green hydrogen will come from alkaline electrolysers run at high utilizations, powered by clean, stable grids with excess power (e.g., nuclear, hydro).
PEM electrolysers are better suited to backstopping renewables, although there is still some debate over their costs, longevity and efficiency, and over whether intermittent wind/solar can be put to better use elsewhere.
(1) In an alkaline electrolyser, water is broken down at the cathode. 4 x H2O molecules gain 4 x e- and become 2 x H2 molecules + 4 x OH- ions. The OH- ions then diffuse across the cell to the anode. To complete the electrical circuit, 4 x OH- ions surrender 4 x e- at the anode and become 2 x H2O molecules + 1 x O2 molecule. A schematic is below.
(2) In a PEM electrolyser, the chemistry is very different. Water is broken down at the anode. 2 x H2O molecules surrender 4 x e- and become an O2 molecule + 4 H+ ions (protons). The H+ ions then diffuse across the cell to the cathode. To complete the electrical circuit, at the cathode, 4 x H+ ions gain 4 x e- and become 2 x H2 molecules.
(3) PEMs have Membranes. H+ ions are the smallest ions in the Universe, measuring c0.0008 picometers (compare the other ionic radiuses below). This means protons can diffuse through solid polymers like Nafion, which otherwise resist electricity and resist the flow of almost all other materials, totally isolating the anode and cathode sides of the cell in a PEM electrolyser.
(4) Alkaline Electrolysers have Diaphragms. OH- ions are larger, at 153 pm (which is actually quite large, per the chart above). Thus they will not diffuse through a solid polymer membrane. Consequently, the anode and cathode are separated by a porous diaphragm, bathed in an electrolyte solution of potassium hydroxide, produced via a variant of the chlor-alkali process. This (alkaline) electrolyte also contains OH- ions. This helps, because more OH- ions make it faster for excess OH- ions to diffuse from high concentration on the cathode side of the cell to low concentration on the anode side (see (1)).
(5) Safety implications. Alkaline electrolysers are said to be less safe than PEMs. The reason is the porous diaphragm. Instead of bubbling out as a gas on the anode side, very small amounts of oxygen may dissolve, diffuse ‘in the wrong direction’ across the porous diaphragm, and bubble out alongside the hydrogen gas at the cathode side. This is bad. H2 + O2 make an explosive mixture.
(6) Footprint implications. One way to deal with the safety issue above is to place the anode and cathode ‘further apart’ for an alkaline electrolyser. This lowers the chances of oxygen diffusing across the diaphragm. But it also means that alkaline electrolysers are less power-dense.
(7) Efficiency implications. A small amount of current can leak through the KOH solution in an alkaline electrolyser, especially at very high current densities. When a direct current (e-) is added to the cell, we want it to reduce water into H2. However, a small amount of the current may be wasted, converting K+ into K; and a small amount of ‘shunt current’ may flow through the KOH solution directly from cathode to anode. We think real-world PEMs will be around 65% efficient (chart below, write-up here), and alkaline electrolysers will be multiple percentage points lower.
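For intuition on these efficiency figures, cell voltage maps directly onto electricity use per kg of hydrogen via Faraday’s law. This is a minimal sketch: the c1.48 V thermoneutral voltage is a standard textbook value, and the example operating voltage is purely an assumption.

```python
# Hedged sketch: electricity use per kg of H2 from cell voltage, via
# Faraday's law. Two electrons flow per H2 molecule produced.

F = 96485.0          # Faraday constant, C per mol of electrons
M_H2 = 2.016e-3      # kg per mol of H2
KWH_PER_J = 1.0 / 3.6e6
V_THERMONEUTRAL = 1.48  # V; input energy equals the HHV of hydrogen

def kwh_per_kg_h2(cell_voltage_v):
    """Electricity consumed per kg of H2 at a given cell voltage."""
    joules_per_mol = 2 * F * cell_voltage_v
    return joules_per_mol / M_H2 * KWH_PER_J

def hhv_efficiency(cell_voltage_v):
    """Stack efficiency on a higher-heating-value basis."""
    return V_THERMONEUTRAL / cell_voltage_v

hhv_energy = kwh_per_kg_h2(V_THERMONEUTRAL)   # ~39.4 kWh/kg, the HHV of H2
# e.g., an assumed cell running at 1.9 V is ~78% efficient on an HHV
# basis, before balance-of-plant and rectifier losses erode this further
```

Any leakage or shunt current adds electricity consumption without adding hydrogen output, which shows up directly as a lower effective efficiency on this calculation.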
(8) Cost implications. An alkaline electrolyser may be a few hundred $/kW cheaper than a PEM electrolyser, because the diaphragm is cheaper than the membrane, and the electrodes are cheaper too. Our overview of electrolyser costs is below.
(9) Longevity implications. Today’s PEMs degrade 2x faster than alkaline electrolysers (40,000 hours versus 90,000 hours, as general rules of thumb). This is primarily because the membranes are fragile. And H+ ions are, by definition, acidic. But as with all power-electronics, the rate of degradation is also a function of the input signal and operating conditions.
(10) Flexibility implications. Alkaline electrolysers are not seen to be a good fit for backstopping renewables (chart above). According to one technical paper, “It is well known that alkaline water electrolysers must be operated with a so-called protective current in stand-by/idle conditions (i.e., when no power is provided by renewable energy sources) in order to avoid a substantial performance degradation”. When the current stops, there is nothing driving OH- ions across the cell, and pushing the H2 and O2 out of the cell. In turn, this means O2 and H2 bubbles can form. They may accumulate around electrode catalysts. Then when the cell starts up again, the gas bubbles block current flow. In turn, overly large resistance or current densities can then degrade the catalysts.
One of TSE’s clients asked me if I would present to their team on the topic of “what makes great research?”. I do not have any delusions of grandeur on this front. But I did enjoy pulling some thoughts together. And the result is a nice video, for anyone interested in TSE’s research philosophy, or for anyone building a career in research and investing.
What makes great research?
Really I only came up with one key point. Great research is not about the researcher but the reader. A researcher has one job. It is to help the reader save time and make good decisions. That’s it.
I am just going to pause on this point, because it is the opposite of everything I learned in the first 20 years of my life. Many moons ago, I did a degree at Oxford University.
At the time, my intellectual hero was the Austrian philosopher Ludwig Wittgenstein, an amazing thinker. So amazing, in fact, that half-a-century later, professional academics and lowly undergraduates are still debating what he was on about.
It is tempting to put revered thinkers on a pedestal. “I want to be like that, but with earnings forecasts”. I would suggest there are three things that a researcher might do differently.
Point #1: Ask Simple Questions?
Years ago, I had an investor client, who told me something. He said: “I never get anywhere, asking complicated questions, when I meet the CEOs of Fortune 500 companies. I am never going to impress them with a question about machine learning. No. I like to ask dumb questions. And I make money when they answer with something unexpected”. This has stuck with me.
These days, I like to spend my weekends walking in the beautiful forests of Estonia, and thinking about my clients. What are the questions they are likely to be worrying about? Sometimes, and this really helps, they write in with a “gift”, directly telling me what they are worrying about. And I just think: how can I break that question down, as simply as possible, into something I can research, to see if I’m going to find something unexpected?
Research has a ‘look over here’ aspect. It is like looking under a rock to see if there is buried treasure beneath. Sometimes you are going to look and find something really exciting. And sometimes not. But you have to look. And you have to be clear about what looking involves, and what you are looking for. So that is Point (1). Ask simple questions.
Point #2: Make Complex Things Into Simple Things.
Something that happens all the time, at least to me, is that I set out to answer a simple research question. And then it grows fangs.
Things are as complicated as they are. You cannot paint the ceiling of the Sistine Chapel before you have built the walls.
There’s another story I like in “Shoe Dog: A Memoir by the Creator of Nike”. The story is that Nike hired a researcher in the 1970s, who went wildly off spec, and wrote a 1,000-page tome entitled “On American Selling Prices, Volume 1”. As the former Nike CEO highlights, “What really scared us was the Volume 1”.
It’s like the old Mark Twain quote “I’m sorry I wrote you such a long letter, I didn’t have time to write a short one”.
These days, I try never to write more than 20 pages in a Thunder Said Energy research note. The goal is to help a busy person get smart on a specific question, in a way that helps them and saves time.
I do not need to waste 50 pages of your time re-describing each cell of an Excel model in a Word document. The underlying Excel is always linked in every TSE chart and exhibit.
Likewise, research should be made as simple as possible. Use short words. In short sentences. Explain terms. Avoid jargon. Make it all killer, no filler.
The key is to distil away the noise. If I read 100 patents, 99 are going to be boring, and 1 is going to be phenomenally interesting. My job is to filter out the 99 and find you the 1.
If a researcher does 20 hours of work, and condenses it into a key conclusion that can be read in 5 minutes, then the researcher has saved the reader 19 hours and 55 minutes. Just think about the value of that time saving.
Point #3: Earn Trust
It turns out that a 5-minute summary is only useful if people can trust it. No bad mistakes. No biases. All of the workings should be clear and transparent.
A long time ago, in a former life, I knew an analyst who launched coverage of a new stock, rated BUY, with 20% upside to their valuation. After the report had been published, they discovered that the Euro-Dollar FX conversion was wired up the wrong way around in their model. In fact, there was 20% downside to the correct valuation. That is quite bad.
But what is really bad, is that the analyst decided to “solve” the problem by upgrading all of their revenues and margin forecasts, to get back to 20% upside, and then re-publish the model, as though nothing had happened. For me, that’s the end. Do not pass go. Do not collect $200. If your goal is to earn trust, then there’s no way back from there.
So my strong advice is to check your workings so thoroughly that you can rule out big numerical mistakes. I am sure I am not the only analyst who has woken up in a cold sweat at 3am, and told my wife “I just have to go double-check one number in an Excel”.
But putting aside career-destroying Excel boo-boos, you do actually have to be wrong from time to time. It comes with the territory of making actionable and time-bound predictions.
I once hired a lawyer to make 100% sure TSE was paying the correct mix of taxes, as a US company employing me in Europe and another employee in the UK. He charged me a few thousand dollars. And two weeks later, he came back with “the law here is kind of a grey area”. That is not entirely helpful.
On the other end of the spectrum, I once had a hedge fund client. “You need to upgrade ABC — only a moron could have ABC on sell”. “What about the risk of X?”. “What’s X?”.
The right answer is a balance. We are all fallible decision makers, trying to navigate an uncertain world. The facts and numbers currently make me think X. If there is good reason to re-visit the numbers, then I am open to re-visiting X. Or in the words of Ludwig Wittgenstein, “don’t think, but look”.
So thank you for exploring so many different aspects of the energy transition with me this year. I’m signing off now, and here’s looking forward to sticking by this philosophy in 2023…
How does methane increase global temperature? This article outlines the theory. We also review the best equations, linking atmospheric methane concentrations to radiative forcing, and in turn to global temperatures. These formulae suggest 0.7 W/m2 of radiative forcing and 0.35ºC of warming has already occurred due to methane, as atmospheric methane has risen from 720 ppb in 1750 to 1,900 ppb in 2021. This is 20-30% of all warming to-date. There are controversies over mathematical scalars. But on reviewing the evidence, we still strongly believe that decarbonizing the global energy system requires replacing coal and ramping natural gas alongside low-carbon energies.
On the Importance of Reaching Net Zero?
There is a danger that writing anything at all about climate science evokes the unbridled wrath of substantially everyone reading. Hence let us start this article by re-iterating something important: Thunder Said Energy is a research firm focused on the best, most practical and most economical opportunities that can deliver an energy transition. This means supplying over 100,000 TWh of useful human energy by 2050, while removing all of the CO2, and avoiding turning our planet into some kind of Waste Land.
Our roadmap to net zero (note below) is the result of iterating between over 1,000 thematic notes, data-files and models in our research. We absolutely want to see the world achieve important energy transition goals and environmental goals. And part of this roadmap includes a greatly stepped up focus on mitigating methane leaks (our best, most comprehensive note on the topic is also linked below).
However, it is also helpful to understand how methane causes warming. As objectively as possible. This helps to ensure that climate action is effective.
It is also useful to construct simple models, linking atmospheric methane concentrations to global temperature. They will not be perfect models. But an imperfect model is often better than no model.
Methane is a powerful greenhouse gas
An overview of the greenhouse effect is written up in a similar post, quantifying how CO2 increases global temperature (note below). We are not going to repeat all of the theory here. But it may be worth reading this prior article for an overview of the key ideas.
Greenhouse gases absorb and then rapidly re-radiate infra-red radiation. This creates a less direct pathway for infra-red radiation to escape back into space. The ability of different gas molecules to absorb and re-radiate infra-red radiation depends on the vibrational modes of those molecules, especially of covalent bonds between non-identical atoms, which create “dipole moments” (this is why H2O, CO2, CH4 and N2O are all greenhouse gases, while N2, O2 and Ar are not).
There are two reasons that methane is up to 200x more effective than CO2 as a greenhouse gas. The first reason is geometry. CH4 molecules are tetrahedral. CO2 molecules are linear. A tetrahedral molecule can generally absorb energy across a greater range of frequencies than a linear molecule.
The second reason is that methane is 200x less concentrated in the atmosphere, at 1,900 parts per billion, versus CO2 at 416 parts per million. We saw in the post below that radiative forcing is a log function of greenhouse gases. In other words, the first 20ppm of CO2 in the atmosphere explains around one-third of all the warming currently being caused by CO2. Each 1ppm increase in atmospheric CO2 has a ‘diminishing impact’, because it is going to absorb incremental radiation in a band that is already depleted by the pre-existing CO2. Thus small increases in methane cause more warming, as methane is currently present in very low concentrations, and thus at a much steeper part of the radiative forcing curve.
The most commonly quoted value we have seen for the instantaneous global warming potential of methane (instantaneous GWP, or GWP0) is 120x. In other words 1 gram of methane has a warming impact of 120 grams of CO2-equivalent. Although the 20, 50 and 100-year warming impacts are lower (see below).
What formula links methane to radiative forcing?
Our energy-climate model is linked below. It contains the maths and the workings linking methane to radiative forcing. It is based on a formula suggested in the past by the IPCC:
Radiative Forcing from Methane (in W/m2) = Alpha x (Current Methane Concentration (in ppb) ^ 0.5 – Pre-Industrial Methane Concentration (in ppb) ^ 0.5) – Small Adjustment Factor for Methane-N2O interaction. Alpha is suggested at 0.036 in the IPCC’s AR5 models, and the adjustment factor for methane-N2O interactions can be ignored if you are seeking an approximation.
This is the formula that we have used in our chart below (more or less). As usual, we can multiply the radiative forcing by a ‘gamma factor’ which calculates global temperature changes from radiative forcing changes. We have seen the IPCC discuss a gamma factor of 0.5, i.e., 1 W/m2 of incremental radiative forcing x 0.5ºC/[W/m2] gamma factor yields 0.5ºC of temperature increases. However, there are controversies over the correct values of alpha and gamma.
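As a hedged sketch, this relationship can be wired up in a few lines. Alpha = 0.036 and the 0.5 gamma factor are the values quoted in this note; treating the methane-N2O overlap as a flat c0.08 W/m2 deduction is a simplification of ours, rather than the IPCC’s full interaction formula.

```python
# Hedged sketch of the simplified AR5-style methane forcing relationship
# described above. The constant N2O overlap deduction is an approximation.

import math

ALPHA = 0.036        # W/m2 per sqrt(ppb)
GAMMA = 0.5          # degC per W/m2 of incremental radiative forcing
N2O_OVERLAP = 0.08   # W/m2, approximate downward adjustment

def ch4_forcing(ppb_now, ppb_preindustrial=720.0):
    """Direct radiative forcing from rising methane, in W/m2."""
    direct = ALPHA * (math.sqrt(ppb_now) - math.sqrt(ppb_preindustrial))
    return direct - N2O_OVERLAP

def warming_from_forcing(w_per_m2):
    """Temperature change implied by a radiative forcing change."""
    return GAMMA * w_per_m2

f = ch4_forcing(1900.0)        # ~0.52 W/m2 since pre-industrial times
dt = warming_from_forcing(f)   # ~0.26-0.27 degC, before interaction effects
```

Scaling alpha upwards for the interaction effects discussed below (tropospheric ozone, stratospheric water vapor, etc.) is what pushes the total towards the c1.0 W/m2 and c0.5ºC figures in this note.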
Interaction Effects: Controversies over Alpha Factors?
The alpha factor linking methane to radiative forcing is suggested at 0.036 in the IPCC’s AR3 – AR5 reports. Plugging 0.036 into our formula above would suggest that increasing methane from 720 ppb in pre-industrial times to 1,900 ppb today would have caused 0.52 W/m2 of incremental radiative forcing. In turn, this would be likely to raise global temperatures by 0.27ºC.
However, many technical papers, and even the IPCC’s AR5 report, have argued that alpha should be ‘scaled up’ to account for indirect effects and interaction effects.
Tropospheric Ozone. In the troposphere (up to 15km altitude), ozone is a ridiculously powerful greenhouse gas, quantified at around 1,000x more potent than CO2. It is postulated that the breakdown of atmospheric methane produces peroxyl radicals (ROO*, where R is a carbon-based molecule). In turn, these peroxyl radicals react with oxygen atoms in NOx pollutants, yielding O3. And thus methane is assumed to increase tropospheric ozone. Several authors, including the IPCC, have proposed to scale up alpha values by 20% – 80%, to reflect the warming impacts of this additional ozone.
Stratospheric Water Vapor. Water is a greenhouse gas, but it is usually present at relatively low concentrations in the stratosphere (12-50 km altitude). Water vapor prefers to remain in the troposphere, where it is warmer. However, when methane in the stratosphere decomposes, each CH4 molecules yields 2 H2O molecules, which may remain in the stratosphere. Several authors, including the IPCC, have proposed to scale up alpha values by around 15% to reflect the warming impacts of this additional water vapor in the stratosphere.
Short-wave radiation. Visible light has a wavelength of 400-700nm. Infra-red radiation has a wavelength of 700nm – 1mm and is the band that is mainly considered in radiative forcing calculations. However, recent research also notes that methane can absorb short-wave radiation, with wavelengths extending down to as little as 100-200nm. Some authors have suggested that the radiative forcing of methane could be around 25% higher than is stated in the IPCC (2013) assessment when short-wave radiation is considered. This impact is not currently in IPCC numbers.
Aerosol interactions. Recent research has also alleged that methane lowers the prevalence of climate-cooling aerosols in the atmosphere, and this may increase the warming impacts of CH4 by as much as 40%. This impact is not currently in IPCC numbers.
Hydrogen interactions. Even more recent research has drawn a link between methane and hydrogen GWPs, suggesting an effective GWP of 11x for H2, which is moderated by methane (note below).
N2O interactions. The IPCC formula for radiative forcing of methane suggests a small negative adjustment due to interaction effects with N2O, another greenhouse gas, which has been rising in atmospheric concentration (from 285ppb in 1765 to 320ppb today). The reason is that both N2O and CH4 seem to share an absorption peak at 7-8 microns. Hence it is necessary to avoid double-counting the increased absorption at this wavelength. The downwards adjustment due to this interaction effect is currently around 0.08 W/m2.
The overall impact of these interaction effects could be argued to at least double the instantaneous climate impacts of methane. On this stricter vilification of the methane molecule, rising atmospheric methane would already have caused at least a 1.0 W/m2 increase in radiative forcing, equivalent to 0.5ºC of total temperature increase since 1750 due to methane alone (applying a gamma of c0.5ºC per W/m2 of forcing).
Uncertainty is high, which softens methane alarmism?
Our sense from reviewing technical papers is that uncertainty is much higher when trying to quantify the climate impacts of methane than when trying to quantify the climate impacts of CO2.
The first justification for this claim is a direct one. When discussing its alpha factors, the IPCC has itself acknowledged an “uncertainty level” of 10% for the scalars used in assessing the warming impacts of CO2. By contrast, it notes 14% uncertainty around the direct impacts of methane, 55% on the interaction with tropospheric ozone, and 71% on the interaction with stratospheric water vapor. These are quite high uncertainty levels.
A second point is that methane degrades in the atmosphere, with an average estimated life of 11.2 years, as methane molecules react with hydroxyl radicals. This is why the IPCC has stated that methane has a 10-year GWP of 104.2x CO2, 20-year GWP of 84x CO2, 50-year GWP of 48x CO2 and a 100-year GWP of 28.5x CO2.
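As a minimal sketch of why the GWP declines with the time horizon: assuming simple first-order decay with the 11.2-year lifetime above, the fraction of a methane pulse still airborne falls away quickly. (A real GWP calculation also integrates the forcing against CO2’s own, much slower, decay profile, so this is illustrative only.)

```python
import math

TAU_CH4 = 11.2  # average atmospheric lifetime of methane in years (per the text)

def fraction_remaining(years):
    """Fraction of a methane pulse still airborne after `years`,
    assuming simple first-order (exponential) decay."""
    return math.exp(-years / TAU_CH4)

for horizon in (10, 20, 50, 100):
    print(f"{horizon:>3} years: {fraction_remaining(horizon):.1%} of the pulse remains")
```

Roughly 17% of a pulse survives 20 years and almost none survives 100 years, which is directionally why the 20-year GWP (84x) so far exceeds the 100-year GWP (28.5x).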
There is further uncertainty around the numbers, as methane that enters the atmosphere may not stay in the atmosphere. The lifetime of methane in additional sinks is estimated at 120 years for bacterial uptake in soils, 150 years for stratospheric loss and 200 years for chlorine loss mechanisms. And these sources and sinks are continuously exchanging methane with the atmosphere.
Next, you might have shared our sense, when reading about the interaction effects above, that the mechanisms were complex and vaguely specified. This is because they are. I am not saying this to be some kind of climate sceptic. I am simply observing that if you search Google Scholar for “methane, ozone, interaction, warming”, and then read the first 3-5 papers that come up, you will find yourself painfully aware of climate complexity. It would be helpful if the mechanisms could be spelled out more clearly. And without moralistic overtones about natural gas being inherently evil, which sometimes simply make it sound as though a research paper has strayed away from the scientific ideal of objectivity.
Finally, the biggest reason to question the upper estimates of methane’s climate impact is that they do not match the data. There is little doubt that the Earth is warming. The latest data suggest 1.2-1.3ºC of total warming since pre-industrial times (chart below). Our best guesses, based on our very simple models, point to 1.0ºC of warming caused by CO2, 0.35ºC caused by CH4 and around <0.2ºC caused by other greenhouse gases. If you are a mathematical genius, you may have noticed that 1.0 + 0.35 + <0.2 adds up to c1.5ºC, which is more warming than has been observed. And this is before attributing anything to other factors, such as changing solar intensity or ocean currents. So this may all suggest that our alpha and gamma factors are, if anything, too high. In turn, this may mute the most alarmist fears over the stated alpha factors for methane being materially too low.
Conclusions for gas in the energy transition
How does methane increase global temperature? Of course we need to mitigate methane leaks as part of the energy transition, across agriculture, energy and landfills; using practical and economical methods to decarbonize the entire global energy system. Methane is causing around 20-30% of all the incremental radiative forcing, on the models that we have considered here. If atmospheric methane doubles again to 3,800 ppb, it will cause another 0.2-0.4ºC of warming, as can be stress-tested in our model here.
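As a rough cross-check on that 0.2-0.4ºC figure, the widely cited Myhre et al. (1998) simplified expression for methane’s direct forcing, dRF ≈ 0.036 × (√M − √M0) W/m2 with M in ppb, can be combined with the 0.5ºC per W/m2 gamma used in these notes. This sketch ignores the small N2O overlap term and the interaction scalars discussed above:

```python
import math

GAMMA = 0.5  # degrees C of warming per W/m2 of forcing, as used in these notes

def ch4_direct_forcing(m_ppb, m0_ppb):
    """Myhre et al. (1998) simplified expression for methane's direct
    radiative forcing (W/m2), ignoring the small N2O overlap term."""
    return 0.036 * (math.sqrt(m_ppb) - math.sqrt(m0_ppb))

# Doubling atmospheric methane from c1,900 ppb today to 3,800 ppb:
rf = ch4_direct_forcing(3800, 1900)
print(f"Extra forcing: {rf:.2f} W/m2 -> c{GAMMA * rf:.1f} C of direct warming")
```

This yields c0.65 W/m2 and c0.3ºC of direct warming, consistent with the 0.2-0.4ºC range above once uncertainty and interaction effects are allowed for.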
However, we still believe that natural gas should grow, indeed it should grow 2.5x, as part of the world’s lowest cost roadmap to net zero. The reason is that while we are ramping renewables by over 5x, this is still not enough to offset all demand for fossil energy. And thus where fossil energy remains, pragmatically, over 15GTpa of global CO2 abatement can be achieved by displacing unchecked future coal consumption with gas instead.
Combusting natural gas emits 40-60% less CO2 than combusting coal, for the same amount of energy, which is the primary motivation for coal-to-gas switching (note below). But moreover, methane leaks into the atmosphere from the coal industry are actually higher than methane leaks from the gas industry, both on an absolute basis and per unit of energy, and this is based on objective data from the IEA (note below).
How does atmospheric CO2 increase global temperature? The purpose of this article is to outline the best formulae we have found linking global temperature to the concentration of CO2 in the atmosphere. In other words, our goal is a simple equation, explaining how CO2 causes warming, which we can use in our models. In turn, this is why our ‘roadmap to net zero’ aims to reach net zero by 2050 and stabilize atmospheric CO2 at 450ppm, a scenario we believe is compatible with 2ºC of warming.
Disclaimer: can a simple equation explain global warming?
Please allow us to start this short note with a disclaimer. We understand that writing anything at all about climate science is apt to incur the unbridled wrath of practically everyone. We also understand that the world’s climate is complex, and cannot be perfectly captured by simple formulas, any more than ‘world history’ can be.
Hence we think it is useful to have an approximate formula, even if it is only “about 80% right”, rather than a conceptual black hole. So that is the purpose of today’s short note.
(1) Thermal theory: inflows and outflows?
The Earth’s temperature will be in balance and remain constant if energy inflows match energy outflows. Energy inflows come from the sun, and are approximated by the ‘solar constant’, at 1,361 W/m2. Energy outflows are radiated back into space, and are approximated by the Stefan-Boltzmann law.
One of the terms in the Stefan-Boltzmann equation is Temperature^4. That is temperature raised to the power of four. Strictly, the 1,361 W/m2 solar constant applies to the disc of sunlight that the Earth intercepts; averaged over the whole rotating sphere, and net of the c30% of sunlight reflected straight back by clouds, ice and land, the Earth absorbs around 240 W/m2. In other words, if for some reason the Earth is not quite radiating this flux back into space, then its average temperature needs to become a little bit warmer, until it is once again in balance. Let’s unpack this a bit further…
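To make the balance concrete, here is a minimal sketch of the implied equilibrium temperature. The 0.3 planetary albedo is an assumption of ours, not a figure from the text:

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/m2/K^4
SOLAR_CONSTANT = 1361  # W/m2, at the top of the atmosphere
ALBEDO = 0.3           # assumed fraction of sunlight reflected straight back

# The Earth intercepts sunlight as a disc but radiates from a sphere with
# 4x the area; net of reflection, the average absorbed flux is:
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4  # c238 W/m2

# Setting absorbed = SIGMA * T^4 and solving for T:
t_equilibrium = (absorbed / SIGMA) ** 0.25
print(f"Absorbed flux: {absorbed:.0f} W/m2")
print(f"Equilibrium temperature: {t_equilibrium:.0f} K")
```

This lands around 255 K, far below the actual c288 K surface average. The gap of c33ºC is the greenhouse effect covered in the next section.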
Physics dictates that any physical body with a temperature above absolute zero will radiate energy from its surface. The wavelength of that radiation depends on the body’s chemical properties and, more importantly, on its temperature, i.e., its thermal energy.
Photon energy is inversely proportional to wavelength. Some electromagnetic radiation has a wavelength of 380-700nm, in which case we call this radiation ‘visible light’. Some has lower energy, and thus longer wavelength. For example, radiation with a wavelength longer than 700nm is referred to as ‘infra-red’.
So in conclusion, energy is constantly being radiated back “upwards” towards outer space by the Earth’s surface, i.e., its land and its seas. These wavelengths are generally in the infra-red range, spanning from around 4,000nm to 50,000nm and peaking near 10,000nm (per Wien’s law, for a c288K surface). But how much of that energy actually escapes into space?
(2) Greenhouse Gases: CO2, CH4, N2O, et al.
CO2 is a greenhouse gas. This means that as thermal energy is radiated upwards from the Earth’s surface — land and sea — it can excite CO2 molecules into higher-energy states for a few nano-seconds (strictly, infra-red photons excite the molecules’ vibrational modes, rather than their electrons). The molecules quickly relax back into a lower-energy state. As they do this, they re-radiate energy.
In other words, CO2 molecules absorb energy that is “meant to be” radiating upwards from the Earth’s surface back into space, and instead they scatter it in various other directions, so some of it will be re-absorbed (e.g., back in the world’s oceans).
In passing, this is why some scientists object to CO2 being described as ‘a blanket’. It is not trapping heat. Or storing any heat itself. It is simply re-radiating and re-scattering heat that would otherwise be travelling in a more direct path back into space.
This effect of CO2, and other greenhouse gases, can be described as ‘radiative forcing’ and measured in W/m2. In 2021, with 416ppm of CO2 in the atmosphere, the radiative forcing of atmospheric CO2 is said to be 2.1 W/m2 higher than it was in 1750, back when there was 280 ppm of CO2 in the atmosphere.
So what equations translate atmospheric CO2 concentrations into radiative forcing effects, and ultimately into temperatures?
(3) Absorbing equations: how does CO2 impact temperature?
Let us start with an analogy. You are reading a complicated text (maybe this one). On the first reading, you absorb and retain about 50% of the content. On the second reading, you absorb and retain another 25% of the content. On the third reading, you absorb another 13% of the content. And so on. By the tenth reading, you have absorbed 99.9% of the content. But the general idea is that the more you have absorbed on previous readings, the less is left to be absorbed on future readings.
The absorption profile of CO2 is similar. Even a very small amount of a particular greenhouse gas will start absorbing and re-radiating energy at its particular absorption wavelengths. This depletes the amount of energy remaining at those wavelengths. In turn, this means there is less energy left for additional molecules of the same greenhouse gas to absorb. By definition, the additional gas is ‘trying to’ absorb radiation at wavelengths that are already depleted.
Hence, without any CO2 in the atmosphere, the Earth would be about 6ºC cooler. The first 20ppm of CO2 explains perhaps 2ºC of all the warming exerted by CO2. The next 20ppm explains 0.8ºC. The next 20ppm explains around 0.6ºC. And so on. There are diminishing returns to adding more and more CO2.
Systems like this are described with logarithmic equations. Thus in the past, the IPCC has suggested various log equations that can relate radiative forcing to atmospheric CO2. The most famous and widely cited formula is below:
Increase in Radiative Forcing (W/m2) = 5.35 x ln (C/C0) … where C is the present concentration of atmospheric CO2 in ppm; and C0 was the concentration of atmospheric CO2 in 1750, which was 280 ppm.
The scalar value of 5.35, in turn, is derived from “radiative transfer calculations with three-dimensional climatological meteorological input data” (source: Myhre, G., E. J. Highwood, K. P. Shine, and F. Stordal, (1998). New estimates of radiative forcing due to well mixed greenhouse gases. Geophys. Res. Lett., 25, 2715–2718). Note these formulae are quite long-standing and go back to 1998-2001.
Thus in the chart below, we have averaged three log formulas, suggested in the past by the IPCC, to calculate radiative forcing from atmospheric CO2. Next, to translate from radiative forcing into temperature we have used a “gamma factor” of 0.5 ºC per W/m2 of radiative forcing, which is also a number suggested in the past by the IPCC.
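For those who want to reproduce the headline numbers, here is a minimal sketch of the single Myhre formula plus this gamma factor (our chart averages three formulas, so chart readings can differ slightly):

```python
import math

C0 = 280.0   # pre-industrial atmospheric CO2, ppm
GAMMA = 0.5  # degrees C of warming per W/m2 of radiative forcing

def radiative_forcing(c_ppm):
    """Myhre et al. (1998) simplified expression: dRF = 5.35 x ln(C/C0), in W/m2."""
    return 5.35 * math.log(c_ppm / C0)

def warming(c_ppm):
    """Implied warming vs pre-industrial, in degrees C."""
    return GAMMA * radiative_forcing(c_ppm)

for c in (416, 450):
    print(f"{c} ppm: {radiative_forcing(c):.1f} W/m2 -> {warming(c):.1f} C")
```

416 ppm comes out at c2.1 W/m2 and c1.1ºC, and 450 ppm at c1.3ºC, matching the numbers cited in this note.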
(4) Temperature versus CO2: mathematical implications?
Let us interpret this chart above from left to right. This is intended to be an objective, mathematical exercise, assessing the implications of simple climate formulae proposed long ago by the IPCC.
(a) Much of the greenhouse effect has “already happened”, due to the log relationship in the chart. The chart implies that the total greenhouse effect occurring today from atmospheric CO2 is around 7-8ºC, of which c6-7ºC was already occurring back in 1750.
(b) Raising atmospheric CO2 to 415ppm should have increased global temperatures by around 1ºC since pre-industrial times, just due to CO2, and holding everything else equal, again just taking the IPCC’s formulae at face value.
(c) Raising atmospheric CO2 by another 35ppm to 450ppm would, according to the simple formula, result in 1.3ºC of warming due to CO2 alone, and 2ºC of total warming, using the simple relationship that two-thirds of all greenhouse warming since 1750 is down to CO2, with the remainder due to CH4, N2O and other trace gases.
(d) Diminishing impacts? Just reading off the chart, a further 50ppm rise to 500ppm raises temperature by 0.3ºC; a further 50ppm rise to 550ppm raises temperature by 0.27ºC; a further 50ppm rise to 600ppm raises temperature by 0.25ºC; and a further 50ppm rise to 650ppm raises temperature by 0.23ºC. The point is that the higher atmospheric CO2 rises, the higher global temperature rises. While it is true that each incremental 50ppm has less impact than the prior 50ppm, the pace of the slowdown is itself quite slow, and not enough to dispel our ultimate need to decarbonize.
(e) There is a theoretical levelling off point due to the log relationship, where incremental CO2 makes effectively no difference to global temperatures, but from the equations above, it is so far in the future as to be irrelevant to practical discussions about decarbonization: it is probably somewhere after atmospheric CO2 has surpassed 4,000 ppm and CO2 has directly induced about 8ºC of warming.
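The diminishing increments in point (d) can be re-derived from the single Myhre log formula (the chart averages three formulas, so these come out marginally smaller than the chart readings above):

```python
import math

def warming(c_ppm):
    """Warming vs pre-industrial (C), using dRF = 5.35 x ln(C/280) and gamma = 0.5."""
    return 0.5 * 5.35 * math.log(c_ppm / 280.0)

previous = warming(450)
for c in (500, 550, 600, 650):
    step = warming(c) - previous
    print(f"{c} ppm: +{step:.2f} C versus the prior 50 ppm")
    previous = warming(c)
```

Each incremental 50 ppm adds a little less warming than the last, but the decline is slow: the step only shrinks from c0.28ºC to c0.21ºC across this range.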
In conclusion, the simple formulae that we have reviewed suggest that the ‘budget’ for 2ºC of total global warming is reached at a total atmospheric CO2 concentration of 450ppm. Of the budget, about two-thirds is eaten up by CO2, and the remaining one-third is from other anthropogenic greenhouse gases, mainly methane and N2O. This is what goes into our climate modelling, and why we think it is also important to mitigate methane emissions and N2O emissions from activities such as crop production.
(5) Controversies: challenges for simple CO2-warming equations?
Welcome to the section of this short note that is probably going to get controversial. Our intention is not to inflame or enrage anybody here. But we are simply trying to weigh the evidence objectively.
(a) Gamma uncertainty. Our simple climate formula above used a gamma value of 0.5 ºC per W/m2 of radiative forcing. The IPCC has said in the past that this is a good average estimate, and that its gamma value has an uncertainty range of +/- 20%. Even this would be quite uncertain. For example, it means that the CO2 budget for 2ºC of total warming would be anywhere between 420-510ppm of atmospheric CO2, using the formulae above.
(b) More gamma uncertainty. Worse, some sources that have crossed our screens have suggested gamma values as low as 0.31ºC per W/m2 of radiative forcing, and others as high as 0.9ºC per W/m2 of radiative forcing. One of the key uncertainties seems to be around interaction effects and feedback loops. For example, a warmer atmosphere can store more water vapor. And water vapor is itself a greenhouse gas.
(c) What percentage of CO2 emitted by human activities remains in the atmosphere and what percent is absorbed by the oceans? We have assumed almost no incremental CO2 emitted by human activities is absorbed in the oceans, and almost all remains in the atmosphere, in our simple climate model.
(d) Other variables. Our model does not factor in the impacts of other variables that have an impact on climate, such as Milankovitch cycles, solar cycles, ocean currents, massive volcanic eruptions or mysterious Dansgaard-Oeschger events. (And no doubt there are some super-nerdy details of climate science that are not yet fully understood. But surely that does not invalidate the basics. While I may not fully grasp the Higgs boson, I still try to avoid cycling into large and immovable obstacles).
(e) We have over-simplified radiative forcing. There is no single formula that perfectly sums up the impact of CO2 on climate, because the Earth is not a single, homogenous disc, pointed directly at the sun. Some of these over-simplifications are obvious. Others are more nuanced. Clouds reduce the mean radiative forcing due to CO2 by about 15%. And the impacts of different gases can be different across vertical profiles through the atmosphere, for example, stratospheric adjustments reduce the impact of CO2 by 15%, compared to a perfectly mixed and homogenous atmosphere.
(f) Warming to-date. Perhaps most controversially, our simple formulae discussed above suggest that with atmospheric CO2 at 415ppm and a gamma value of 0.5ºC per W/m2, the world should already be 1.1ºC warmer than pre-industrial times due to CO2 alone, which equates to being around 1.6ºC warmer due to the combination of all greenhouse gases. In contrast, the world currently appears to be around 1.1ºC warmer, in total, than it was in pre-industrial times, according to the data below. This may mean that our gamma value is too high, or that our formulae overstate the warming. Or conversely, it could simply mean that there are time lags between CO2 rising and temperature following. We do not know the answer here, which is unsatisfying.
Conclusions: what is the relationship between CO2 and warming?
How does CO2 increase global temperature? The advantages of simple formulae are that they are simple and can be used to inform our models. The disadvantages are that the global energy-climate system is complex, and will never be captured entirely by simple formulae.
Our aim in this short note is to explain the formulas that we are using in our energy-climate models and in our roadmap to net zero. We think it is possible to stabilize atmospheric CO2 at 450ppm, by reaching ‘net zero’ around 2050, in an integrated roadmap that costs $40/ton of CO2 abated; and reaches an important re-balancing between human activities and the natural world.
450ppm of atmospheric CO2 is possibly a conservative budget. There may be reasons to hope that true ‘gamma’ values for global warming, or forcing co-efficients, are lower than we have modelled. But it makes sense to ‘plan for the worst, hope for the best’. And regardless of specific climate-modelling parameters, it makes sense for a research firm to look for the best combination of economics, practicality, morality and opportunity in the energy transition.
“It provokes the desire, but it takes away the performance.” That is the porter’s view of alcohol in Act II Scene III of Macbeth; it is also our view of 2022’s impact on the energy transition. The resultant outlook is captured in six concise pages, published in the Walter Scott Journal in Summer-2022.
Further research. Our overview into energy technologies and energy transition for the route to ‘net zero’ is linked here.
Chinese coal provides 15% of the world’s energy, equivalent to four Saudi Arabias’ worth of oil. Global energy markets may become 10% under-supplied if this output plateaus per our ‘net zero’ scenario. Alternatively, might China ramp its coal to cure energy shortages, especially as Europe bids harder for renewables and LNG post-Russia? Today’s note presents our ‘top ten’ charts on China’s opaque coal industry.
China’s coal industry provides 15% of the world’s energy and c22% of its CO2 emissions. These numbers are placed in context on page 2.
China’s coal production policies will sway global energy balances. Key numbers, and their impacts on global energy supply-demand, are laid out on page 3.
China’s coal mines are a constellation of c4,000 assets. Some useful rules of thumb on the breakdown are given on page 4.
China’s coal demand is bridged on page 5, including the share of demands for power, industrial heat, residential/commercial heat and coking.
Coal prices are contextualized on pages 6-7, comparing Chinese coal with gas, renewables, hydro and nuclear in c/kWh terms.
Coal costs are calculated on pages 6-8. We model what price is needed for China to maintain flat to slightly growing output, while earning double-digit returns on investment.
Accelerating Chinese coal depends on policies, however, especially around a tail of smaller and higher-cost mines. The skew and implications are explored on pages 7-8.
China’s decarbonization is clearly linked to its coal output. We see decarbonization ambitions being thwarted in the 2020s, per page 8.
Methane leaks from China’s coal industry may actually be higher than methane leaks from the West’s gas industry (page 9).
Chinese coal companies are profiled, and compared with Western companies, on pages 10-11.
For an outlook on global coal production, please see our article here.
Who will ‘win’ the intensifying competition for finite lithium ion batteries, in a world that is hindered by shortages of lithium, graphite, nickel and cobalt in 2022-25?
Today’s note argues EVs should outcompete grid storage, as the 65kWh battery in a typical EV saves 2-4x more energy and 25-150% more CO2 each year than a comparably sized grid battery.
Competitor #1: Electrification of Transport?
The energy credentials of electric vehicles are laid out in the data-files below. A key finding is their higher efficiency, at 70-80% wagon-to-wheel, where an ICE might only achieve 15-20%. Therefore, energy is saved when an ICE is replaced by an EV. And CO2 is saved by extension, although the precise amount depends on the ‘power source’ for the EV.
When we interrogate our models, the single best use we can find for a 65kWh lithium ion battery is to electrify a taxi that drives 20,000-70,000 miles per year. This is a direct linear pass-through of these vehicles’ high annual mileage, with taxis in New York apparently reaching the upper end of this range. Thus the higher efficiency of EVs (vs ICEs) saves 20-75 MWh of energy and 7-25 tons of CO2 pa.
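The 20-75 MWh and 7-25 ton ranges can be reproduced under stated assumptions. None of the parameters below are from this note itself; they are illustrative figures we have chosen (a 25mpg ICE comparator, an EV drawing 0.30 kWh/mile from the grid, gasoline at 33.7 kWh and 8.9 kg of CO2 per gallon, and zero-carbon charging):

```python
# All parameters here are illustrative assumptions, not figures from the note:
MPG_ICE = 25              # fuel economy of the displaced ICE vehicle
EV_KWH_PER_MILE = 0.30    # grid electricity drawn per mile by the EV
KWH_PER_GALLON = 33.7     # energy content of gasoline
KG_CO2_PER_GALLON = 8.9   # combustion emissions of gasoline
# Charging is assumed to be zero-carbon (e.g., surplus renewables).

def annual_savings(miles):
    """Return (MWh of energy saved, tons of CO2 saved) per year."""
    gallons = miles / MPG_ICE
    ice_energy_mwh = gallons * KWH_PER_GALLON / 1000
    ev_energy_mwh = miles * EV_KWH_PER_MILE / 1000
    co2_tons = gallons * KG_CO2_PER_GALLON / 1000
    return ice_energy_mwh - ev_energy_mwh, co2_tons

for miles in (20_000, 70_000):
    energy, co2 = annual_savings(miles)
    print(f"{miles:,} miles/yr: c{energy:.0f} MWh and c{co2:.0f} tons CO2 saved")
```

At 20,000 miles this yields c21 MWh and c7 tons pa; at 70,000 miles, c73 MWh and c25 tons pa.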
More broadly, there are 1.2bn cars to ‘electrify’ in the world, where the energy and CO2 savings are also a linear function of miles driven, but because ordinary people have their cars parked around 97% of the time, the savings will usually be 10-20 MWh per vehicle pa.
(Relatedly, an interesting debate is whether buying a ‘second car’ that is electric is unintentionally hindering energy transition, if that car actually ends up under-utilized while consuming scarce LIBs, which could be put to better use elsewhere. As always, context matters).
Competitor #2: Grid-Scale Batteries?
The other main use case for lithium ion batteries is grid-scale storage, where the energy-saving prize is preventing the curtailment of intermittent wind and solar resources. As an example, curtailment rates ran at c5% in California in 2021 (data below).
The curtailment point is crucial. There might be economic or geopolitical reasons for storing renewables at midday and re-releasing the energy at 7pm in the evening, as explored in the note below. But if routing X renewable MWh into batteries at midday (and thus away from the grid) simply results in X MWh more fossil energy generation at midday instead of X MWh of fossil energy generation at 7pm, then no fossil energy reductions have actually been achieved. In order for batteries to reduce fossil energy generation, they must result in more overall renewable dispatch, or in other words, they must prevent curtailment.
There are all kinds of complexities in modelling the ‘energy savings’ here. How often does a battery charge and discharge? What percentage of these charge-discharge cycles genuinely prevents curtailment? What proportion of curtailment can actually be avoided in practice with batteries? And what is the round-trip efficiency of the battery?
To spell this out, imagine a perfect, Utopian energy system, where every day the sun shines evenly and grid demand is exactly the same. Every day from 10am to 2pm, the grid is over-saturated with solar energy, and it is necessary to curtail the exact same amount of renewables. In this perfect Utopian world, you could install a battery, to store the excess solar instead of curtailing it. Then you could re-release the energy from the battery just after sunset. All good. But the real world is not like this. There is enormous second-by-second, minute-by-minute, hour-by-hour, day-by-day volatility (data below).
Thus look back at the curtailment chart below. If you built a battery that could absorb 0.3% of the grid’s entire installed renewable generation capacity throughout the day, then yes, you would get to charge and discharge it every day to prevent curtailment. But you would only be avoiding about 10% of the total curtailment in the system.
Conversely, if you built a battery that could absorb 30% of the installed renewable generation capacity throughout the day, you could prevent about 99% of the curtailment, but you would only get to use this battery fully to prevent curtailment on about 5 days per year. This latter scenario would absorb a lot of LIBs, without unleashing materially more energy or displacing very much fossil fuel at all.
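This trade-off between a small, heavily-utilized battery and a large, rarely-filled one can be illustrated with a deliberately stylized simulation. The skewed daily curtailment distribution below is invented for illustration, and is not drawn from our underlying data-files:

```python
import random

random.seed(0)
# Deliberately stylized: daily curtailable energy (MWh) is modelled as a
# skewed (exponential) distribution -- most days offer little excess
# renewable energy, a few days offer a lot. Invented for illustration only.
daily_curtailment = [random.expovariate(1 / 2.0) for _ in range(365)]
total = sum(daily_curtailment)

def captured(battery_mwh):
    """Curtailed energy captured per year, if the battery can absorb at most
    `battery_mwh` of each day's excess (one cycle per day)."""
    return sum(min(day, battery_mwh) for day in daily_curtailment)

for size in (0.5, 2.0, 10.0, 50.0):
    share = captured(size) / total
    full_days = sum(1 for day in daily_curtailment if day >= size)
    print(f"{size:>5.1f} MWh battery: captures {share:.0%} of curtailment; "
          f"filled completely on {full_days} days")
```

A small battery cycles almost every day but captures only a modest share of total curtailment; a very large one captures nearly everything but is completely filled on only a handful of days per year.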
This is all explored in more detail in our detailed modelling work (data file here, notes below). But we think an “energy optimized” middle ground might be to build 1MW of battery storage for every 100MW of renewables capacity. For the remainder, we would prefer other solutions such as demand-shifting and long-distance transmission networks.
Thus, as a base case, we think a 16kW battery (about the same size as an EV’s) at a 1.6MW solar project might save 5 MWh per year of energy that would otherwise have been curtailed, abating 2T of CO2e. So generally, we think a typical EV is going to save about 2-4x more energy per year than a similarly-sized grid battery.
Another nice case study on solar-battery integration is given here, for anyone who wants to go into the numbers. In this example, the battery is quite heavily over-sized.
Other considerations: substitution and economics?
Substitution potential? Another consideration is that an EV battery with the right power electronics can double as a grid-scale storage device (note below), absorbing excess renewables to prevent curtailment. But batteries affixed to a wall or on a concrete pad cannot usually double as a battery for a mobile vehicle, for obvious reasons.
Economic potential? We think OEMs producing c$70-100k electric vehicles will resist shutting entire production lines if their lithium input costs rise from $600 to $3k per vehicle. They will simply pass it on to the consumer. We are already seeing vehicle costs inflating for this reason, while consumers of ‘luxury products’ may not be overly price-sensitive. By contrast, utility-scale customers are more likely to push back on grid-scale storage projects, which are less mission-critical and likely to be more price-sensitive.
Overall, we think the competition for scarce materials is set to intensify as the world is going to be ‘short’ of lithium, graphite, nickel in 2022-25 (notes below). This is going to create an explosive competition for scarce resources. The entire contracting strategies of resource-consuming companies could change as a consequence…