Real levelized costs can be a misleading metric. The purpose of today’s short note is simply to inform decision-makers who care about levelized costs. Our own modelling preference is to compare costs, on a flat pricing basis, using apples-to-apples assumptions across our economic models.
What are levelized costs?
What are levelized costs? Levelized costs, sometimes abbreviated as LCOEs, aim to summarize the costs of an electricity source with a single number, denoting what electricity price would be needed in order to earn an acceptable return, when constructing and operating a new electricity generating facility. This can enable useful comparisons.
Why do we hate levelized costs? All of the various advantages and disadvantages of a new electricity generation investment, within a complex, real-world grid, cannot entirely be captured by a single number. A couple of recent research notes on this topic are linked below.
Nevertheless, with this caveat in mind, we have constructed levelized cost models for onshore wind, offshore wind, solar, hydro, nuclear, gas, coal, biomass, diesel gensets and geothermal. All of these numbers are shown on a nominal basis. If our levelized cost is quoted as 8c/kWh, then we assume the power facility will receive 8c/kWh in Year 1, 8c/kWh in Year 2, 8c/kWh in Year 3… and 8c/kWh in Years 25-100.
What are “Real” Levelized Costs?
Some commentators present levelized costs on a ‘real basis’. What this means is that there is an annual escalation factor built into the power price. For example, a levelized cost of 8c/kWh means that the power price starts at 8c/kWh in Year 0, then it increases by, say, 1-2% per year thereafter, or alternatively, “with inflation”.
Price escalation helps to achieve higher IRRs. And hence if your hurdle rate is constant, real levelized costs (that benefit from pricing escalation) will be lower than nominal levelized costs (that do not benefit from pricing escalation) (chart below).
However, in our view, quoting levelized costs on a ‘real’ basis is misleading. It may distort the rankings of power generation sources, where these sources have different operating lives, or worse, where different cost escalation factors are built into different models.
Paradox #1. Ultimate cost is higher than levelized cost?
One reason we think it is misleading to quote levelized costs on a real basis is that it consigns consumers to higher power prices than advertised. Consumers might be quite annoyed if an option is quoted at 8c/kWh, weighed up against other options, selected on the basis of this 8c/kWh cost, and then, 20 years later, they find themselves buying electricity from this 8c/kWh project at 12c/kWh (chart below). “How can the power price be 12c/kWh, when we invested in power plants costing 8c/kWh?”.
Paradox #2. Levelized costs fall when future prices rise?
A second reason we think it is misleading to quote levelized costs on a real basis is that it makes today’s levelized costs dependent upon tomorrow’s inflation. Paradoxically, if inflation expectations rise, then real levelized costs seem to fall.
For example, imagine a project that needs 8c/kWh, flat, over the life of the project to generate a passable IRR. If the base case assumption is for 1% per year inflation, then the real levelized cost is 7c/kWh. But if we raise our inflation expectations to 2% per year, our project sees its levelized cost deflate to 6c/kWh (chart below). Even though it is the exact same project, with the exact same capex and opex.
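To make the arithmetic concrete, here is a minimal Python sketch, using entirely hypothetical capex, opex and output numbers (not those in our models), which solves for the Year-1 power price needed to hit a 10% hurdle rate under flat versus escalating pricing. The same project "looks" about 1c/kWh cheaper once 2% annual escalation is assumed.

```python
# Minimal sketch: how pricing escalation flattens the quoted "levelized cost".
# All inputs below are hypothetical placeholders, not the assumptions in our models.

def starting_price(capex, opex, output, rate, years, escalation):
    """Year-1 power price ($/kWh) such that NPV = 0 at the given hurdle rate,
    if the price escalates by `escalation` per year thereafter."""
    disc_output = sum(output * (1 + escalation) ** (t - 1) / (1 + rate) ** t
                      for t in range(1, years + 1))
    disc_costs = capex + sum(opex / (1 + rate) ** t for t in range(1, years + 1))
    return disc_costs / disc_output

# Hypothetical project: $2,000/kW capex, $30/kW-yr opex, 4,000 kWh/kW-yr output, 30 years
flat = starting_price(2000, 30, 4000, 0.10, 30, escalation=0.00)
real = starting_price(2000, 30, 4000, 0.10, 30, escalation=0.02)
print(f"Flat levelized cost: {flat*100:.1f} c/kWh")
print(f"'Real' levelized cost with 2%/yr escalation: {real*100:.1f} c/kWh")
```

The same capex and opex clear the same hurdle rate either way; only the quoted starting price changes.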
Without wishing to sound accusatory, we think that some commentators are wedded to a narrative that particular energy technologies are on a path of perennial deflation. This is not entirely true, for example, in the recent case of offshore wind.
In our view, it would be unhelpful for such commentators to preserve their deflation narrative, for example, by systematically revising up their future inflation expectations, so that their quoted cost estimates continue “falling” from one year to the next, even when project developers are very clearly signalling that their costs are rising. Ultimately, we think decarbonization progress requires openly acknowledging challenges, so that decision makers can work to resolve those challenges.
Paradox #3: defeating the purpose of a simple model?
“All models are wrong, but some models are useful”. This is the mantra underlying the 170 economic models that we have built and published into different technologies, power sources and materials, as part of our energy transition research. The other mantra is to make these models as simple as possible, and no simpler!
Hence across all of our models, we have opted to model pricing on a flat basis. This allows the models to ask “what price is needed for an acceptable IRR?”, on a simple, easily understandable, and comparable basis. This also avoids the risk of different models embedding potentially different cost escalation factors, impairing their comparability.
But what about inflation? Yes commodity prices change over time, as is well-captured in our various commodity price databases. But the drivers of these annual pricing variations are difficult to predict ex-ante (including wars, weather, pandemics, recessions). And there are also good arguments that some commodity prices resist inflation over the very long term. Of course, if you wish to adjust our different models, and add annual pricing variations, then you are welcome to do that.
For the reasons above, we will continue favoring flat pricing, and simple, apples-to-apples assumptions, in our economic models.
Blue hydrogen value chains are starting to boom in the US, as they are technically ready, low cost, and are now receiving enormous economic support from the Inflation Reduction Act. But maybe blue hydrogen tightens global LNG markets by as much as 60MTpa by 2030, keeping global LNG prices above $20/mcf, impacting all global energy markets?
World changing themes often emerge from niches, which initially seem peripheral, technical, easy to overlook (‘the internet’ in the 1990s, ‘sub prime mortgages’ in 2007, some strange new virus cases in January-2020). We increasingly think that US blue hydrogen may be a world changing theme. Something that every serious decision maker in energy really needs to understand.
Blue hydrogen value chains are booming in the US. We recently reviewed these value chains in a deep-dive report into ATRs versus SMRs. Without any incentives, blue hydrogen can be economical at around $1/kg, with CO2 intensity less than 1 kg/kg (90% below grey hydrogen) and in turn, the hydrogen can be used to produce blue ammonia, blue steel, blue chemicals and as a low carbon fuel in itself.
How does the Inflation Reduction Act change US hydrogen economics? Substantively all of the US’s merchant hydrogen today is produced using steam methane reformers, where new production facilities require a $0.8/kg hydrogen price for a 10% IRR. Our model is linked below. But autothermal reformers are likely to gain market share, as they allow a greater portion of the CO2 to be captured, lowering total CO2 intensity below 1 kg of CO2 per kg of hydrogen. Assuming that these autothermal reformers can sell their clean hydrogen at $1/kg, plus two-thirds of the $85/ton CO2 disposal credits available under the Inflation Reduction Act, plus $1/kg hydrogen production incentives (for hydrogen with a CO2 intensity of 0.5-1.5 kg/kg), this uplifts ATR IRRs above 40% (chart below).
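For illustration only, a quick sketch of the revenue stack per kg of blue hydrogen implied by those headline incentives. The quantity of CO2 captured per kg of hydrogen is a rough placeholder assumption, not a number from our ATR model, so treat the output as indicative.

```python
# Back-of-envelope revenue stack for one kg of blue hydrogen under the IRA,
# using the headline figures in the text. The CO2-captured-per-kg figure is
# an illustrative assumption, not a modelled number.

hydrogen_price = 1.00           # $/kg, clean hydrogen sale price
co2_captured   = 9.0 / 1000     # tons of CO2 captured per kg H2 (assumed placeholder)
co2_credit     = 85 * (2 / 3)   # $/ton share of the 45Q credit, per the text's "two-thirds"
h2_incentive   = 1.00           # $/kg production incentive for 0.5-1.5 kg/kg CO2 intensity

revenue_per_kg = hydrogen_price + co2_captured * co2_credit + h2_incentive
print(f"Illustrative revenue: ${revenue_per_kg:.2f} per kg of hydrogen")
# ~ $2.5/kg, versus ~$1/kg unsubsidized breakevens, which is why modelled IRRs inflate so sharply
```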
How will the blue hydrogen boom impact US gas demand? This is hard to quantify, because we have seen 20MTpa of blue ammonia projects “come out of nowhere” in the past 12-months, and we expect new projects to continue being announced. We think the US can effectively deliver as much gas as is required for the foreseeable future if producers are offered incentive pricing above $3/mcf (model here). Others are worried about resource depletion. Our US shale production forecasts by basin are here. And clearly the US gas market is going to be tightened by flowing more gas into low-carbon hydrogen products. The blue hydrogen boom is constructive for US gas producers.
How does the US blue hydrogen boom impact US LNG economics? Natural gas is the key input for both blue hydrogen ATRs and LNG plants. Effectively, both types of facility are competing for natural gas. At the same gas input prices, it might take an $8/mcf long-term LNG sale price to earn a 10% IRR on a new liquefaction facility costing $750/Tpa, based on our LNG liquefaction models. But to rival the 40% IRRs available on a blue hydrogen ATR requires internal gas buyers to be willing to pay around $25/mcf. And this is for a full-carbon product, whereas blue hydrogen is already over 90% decarbonized.
How does the US blue hydrogen boom impact US LNG supplies? Our global LNG supply model sees global LNG supplies ramping up by 280MTpa to almost 700MTpa by 2030, and even this is insufficient to meet global LNG demand, we think.
Of our 280MTpa LNG supply ramp by 2030, almost half, or +130MTpa, is meant to come from the US; and of that, almost 60MTpa is pre-FID. The 60MTpa is on a ‘risked basis’, for example, a 10MTpa project with 50% chance of proceeding is counted as 5MTpa. In other words, if a boom in US blue hydrogen outcompetes new LNG plants for gas feedstocks, then this could realistically lower total global LNG supplies in 2030 by almost 10%. In what was already set to be an under-supplied market. You can look project by project in the model below.
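As a minimal illustration of the risking methodology described above, with an entirely hypothetical project list:

```python
# 'Risking' pre-FID LNG capacity: each project's nameplate capacity is weighted
# by its assumed probability of reaching FID. The project list is hypothetical.

projects = [
    {"name": "Project A", "capacity_mtpa": 10, "prob_fid": 0.5},
    {"name": "Project B", "capacity_mtpa": 15, "prob_fid": 0.6},
    {"name": "Project C", "capacity_mtpa": 12, "prob_fid": 0.4},
]

risked = sum(p["capacity_mtpa"] * p["prob_fid"] for p in projects)
unrisked = sum(p["capacity_mtpa"] for p in projects)
print(f"Risked pre-FID supply: {risked:.1f} MTpa (vs {unrisked} MTpa unrisked)")
```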
What does it mean for global gas markets? Europe ramped up its LNG imports from 8bcfd to 15bcfd in 2022, due to Russia’s invasion of Ukraine. We see Europe potentially requiring 15bcfd of LNG supplies through 2030, which is equivalent to around 110MTpa. Our European gas and power model is linked below.
Overall, the boom of US blue hydrogen compounds our fears that international energy markets could be very tight throughout the 2020s and 2030s. Our best note into this topic, and our global energy supply demand models are copied below.
In an unexpected way, these industry trends might seem to be very supportive for LNG incumbents, with LNG demand remaining higher for longer, and medium-term supply growth from US LNG being suppressed by competition from booming blue hydrogen value chains. All of our broader conclusions on LNG in the energy transition are linked here.
The thermodynamic efficiency of materials production averages 20%, with an interquartile range of 5% to 50%. In other words, if the thermodynamic minimum energy needed to make a typical material is 1,500 kWh/ton, then the most likely case is that a typical industrial process for that material today consumes 7,500 kWh/ton of energy.
This note explores the numbers and implications. Calculations are linked here. The work makes us more optimistic on the potential for efficiency gains as part of the energy transition, especially in complex value chains, forms of CO2 capture, and using AI.
Definitions: what is thermodynamic minimum energy for materials?
ΔHf° denotes the Standard Enthalpy of Formation. This is the change of enthalpy (i.e., heat) when 1 mole of a substance forms from its constituent elements in their reference states.
For example, the chemical element silicon will slowly oxidize when exposed to air, yielding pure white sand (Si + O2 -> SiO2). Now for some numbers. This reaction involves a release of enthalpy, of -910 kJ/mol. The molar mass of silicon is 28.1 grams/mol. Hence by division, the energy released per gram of silicon is around 32 kJ, which, by juggling the energy units, equates to 9,000 kWh/ton. If you walked along the most pristine beach in the world, with some kind of “magic ray gun” that decomposed SiO2 back into Si and O2, then the thermodynamic minimum energy needed is going to be the same number, around 9,000 kWh/ton, to overcome the enthalpy of formation of SiO2. In reality, there actually is a broad landscape of silica producers, who mine high-purity silica, and sell this product on, into value chains that yield glass, solar panels and semiconductors. For the latter two applications, silica might be reduced into silicon ‘metal’ in an electric arc furnace. And overall, we think this industrial process requires around 13,500 kWh/ton of input energy. Or in other words, producing low-purity silicon metal is about 67% efficient.
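The unit conversion in the silicon example can be sketched in a few lines of Python; the 13,500 kWh/ton figure for actual process energy is simply re-used from the text above.

```python
# Converting an enthalpy of formation (kJ/mol) into a thermodynamic minimum
# (kWh/ton), and then into an efficiency versus an assumed real-world energy use.

def min_energy_kwh_per_ton(delta_h_kj_per_mol, molar_mass_g_per_mol):
    kj_per_gram = abs(delta_h_kj_per_mol) / molar_mass_g_per_mol
    return kj_per_gram * 1e6 / 3600          # kJ/g -> kJ/ton -> kWh/ton

# Silicon example: SiO2 formation enthalpy -910 kJ/mol, molar mass of Si 28.1 g/mol,
# actual industrial process energy ~13,500 kWh/ton (from the text).
minimum = min_energy_kwh_per_ton(-910, 28.1)   # ~9,000 kWh/ton
actual = 13_500                                # kWh/ton
print(f"Thermodynamic minimum: {minimum:,.0f} kWh/ton")
print(f"Implied efficiency: {minimum / actual:.0%}")
```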
Hence in our analysis, we can compare the Standard Enthalpy of Formation with the actual industrial energy consumption of producing different materials.
If you want to calculate some of these standard enthalpies of formation yourselves, there are excellent free reference sources online, such as Engineering Toolbox.
Methodology: getting the right baseline?
Other examples are more complex than our silicon study above, and require some careful consideration of the appropriate baseline.
We don’t want to ‘cheat’ in our numbers by starting from some unrealistic baseline (“if I have a working iPhone, right here in my hand, then the energy costs of making an iPhone are zero”).
Ammonia makes a nice example. The standard enthalpy of formation of ammonia (NH3) is -46 kJ/mol. This is an exothermic reaction, releasing 750 kWh/ton of energy. But clearly the global ammonia industry is a net energy consumer, not a net energy producer!!
Remember, the standard enthalpy of formation assumes you are forming ammonia from its constituent elements, i.e., gaseous hydrogen and gaseous nitrogen.
Pure hydrogen does not occur naturally, but itself needs to be produced, most likely by steam methane reforming.
And pure nitrogen does not occur naturally: it needs to be separated out of air, overcoming the entropy of mixing, using air separation technology.
So in our analysis, our thermodynamic minimum (i) adds the minimum thermodynamic work to separate pure nitrogen from air, (ii) adds the minimum thermodynamic work to decompose a 50/50 mixture of CH4 and H2O into pure hydrogen, and (iii) subtracts the energy that could theoretically be harnessed as the nitrogen and hydrogen react together again.
The result is that we think the thermodynamic minimum energy for producing ammonia is something around 1,200 kWh/ton. Compared to this baseline, the real-world ammonia industry is typically around 12% efficient.
The point here is that in each case, in our data-file, we need to make some ballpark estimates about what is a fair starting point, and it is from this baseline that the minimum thermodynamic work is calculated.
Conclusion #1: high thermodynamic efficiencies are challenging to achieve in the real world?
The average material in our screen has a c20% thermodynamic efficiency. In other words, real world industrial processes tend to consume around 4-5x more energy than the thermodynamic minimum (chart below), even after 100-200 years of incremental efficiency gains by industrial civilization.
Conclusion #2. Complex value chains tend to be less efficient?
For the bottom quartile of processes in our data-file, the thermodynamic efficiency is below 5%. In other words, real world industrial processes tend to consume over 20x more energy than the thermodynamic minimum. Generally, these appear to be more complex value chains, starting with less concentrated input materials, and demanding more sophisticated output materials.
Another illustration of this point is that the very simple ‘silicon metal’ example, which we discussed above, yields one of the highest efficiencies. So too does thermally decomposing CaCO3 in cement production, or thermally reducing Fe2O3 in iron ore reduction for steel. On the other hand, the industrial processes needed to produce battery materials, lithium, nickel, PGMs are an order of magnitude more complex.
Conclusion #3. Efficiency Opportunities?
There are enormous opportunities in industrial efficiency. If we had run the charts above and found that most materials value chains were already in the mid-90s efficiency levels, then there would not be much opportunity for future improvement. But half of all the materials value chains in the world are around 20% efficient and one-quarter are less than 5% efficient.
This matters as energy efficiency is one of the most important controversies in the future of global energy demand and in the energy transition (notes below). Especially given that human civilization produces over 60 bn tons per year of “stuff” across 40 different material categories, accounting for 40% of all global energy use and 35% of all global emissions.
Conclusion #4: beware hydrogen hubris?
One of the more surprising bars in our charts is for green hydrogen, where many forecasters are hoping that future value chains will be around two-thirds efficient (build-up here). It is simply interesting to note that this assumption would make hydrogen an outlier on our thermodynamic efficiency chart.
To re-iterate, if it were possible to convert two-thirds of ratable incoming electricity into useful recoverable hydrogen energy, then this would be one of the most efficient materials value chains in all of global industry.
It should be noted, we think, that the Gibbs Free Energy released by oxidizing hydrogen is 237 kJ/mol, while the Enthalpy of Formation of water is 286 kJ/mol, and so there is always going to be a c17% energy loss in trying to recover useful energy back out of hydrogen as an energy carrier.
We would be wary of some studies that quote efficiency relative to the Gibbs Free Energy baseline, rather than the enthalpy baseline. This will tend to overstate efficiency. Being 100% efficient relative to the Gibbs Free Energy baseline still connotes a c17% loss relative to the enthalpy baseline.
Another very interesting point is that in thermodynamic terms, it is materially easier to make hydrogen from reforming methane than from electrolysing water. Methane is 25% hydrogen by mass, while water is 11% hydrogen. While the enthalpy of formation of methane from C and H is -75kJ/mol, the enthalpy of formation of water from H and O is -286kJ/mol. In other words, it takes 3.8x more energy to decompose H2O than CH4, and the decomposition of H2O yields 50% less H2 than the decomposition of CH4. Some might see this as an argument for making hydrogen out of CH4 rather than out of H2O, for example, in blue hydrogen or turquoise hydrogen value chains (more below).
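A quick arithmetic check on those figures, using only the enthalpies quoted above (and ignoring any energy later recoverable from the carbon): per mole of hydrogen liberated, the thermodynamic gap is even wider than the 3.8x per mole of compound.

```python
# Energy to liberate H2 from CH4 versus from H2O, using the enthalpies of
# formation quoted in the text.

dh_ch4 = 75    # kJ/mol to decompose CH4 into C + 2 H2
dh_h2o = 286   # kJ/mol to decompose H2O into H2 + 1/2 O2

per_mol_compound = dh_h2o / dh_ch4        # ~3.8x
per_mol_h2 = dh_h2o / (dh_ch4 / 2)        # CH4 yields 2 moles of H2 per mole decomposed
print(f"Per mole of compound decomposed: {per_mol_compound:.1f}x more energy for H2O")
print(f"Per mole of H2 liberated:        {per_mol_h2:.1f}x more energy for H2O")
```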
Conclusion #5: still looking for magic bullets in CCS and DAC?
If you are starting from a baseline of air, with 400ppm CO2 concentration (i.e., 0.04%), then the minimum thermodynamic energy needed to separate out CO2 can be calculated via the Entropy of Mixing, and comes out at 134kWh/ton (calculation here). If energy costs 10c/kWh, this is $13/ton. This is not a high energy use or a high energy cost.
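A simplified version of that entropy-of-mixing calculation is sketched below; it uses the idealized RT·ln(1/x) form per mole of CO2, which lands in the same ballpark as the 134 kWh/ton figure above.

```python
# Ideal minimum work to separate CO2 from air at 400 ppm. This simplified form
# (RT ln(1/x) per mole of CO2, full capture, ignoring the work term on the
# residual air stream) lands close to the ~134 kWh/ton quoted in the text.

import math

R = 8.314          # J/mol-K, gas constant
T = 298            # K, ambient temperature
x_co2 = 400e-6     # mole fraction of CO2 in air
M_co2 = 0.044      # kg/mol, molar mass of CO2

w_j_per_mol = R * T * math.log(1 / x_co2)            # ~19.4 kJ per mol of CO2
w_kwh_per_ton = w_j_per_mol / M_co2 / 3.6e6 * 1000   # J/mol -> kWh/ton
print(f"Minimum separation work: {w_kwh_per_ton:.0f} kWh per ton of CO2")
```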
The challenge today is that technically ready DAC processes are only around 5% efficient (model here), and even many technically ready CCS applications absorb 10-50% of the energy that was harnessed when emitting that CO2 in the first place (note here).
There is going to be potential for CCS and DAC in the future of materials science. We have screened some of these opportunities, and will continue screening them in our CCS research, but for now, the two notes below capture what we are looking for, to get excited by next-gen membranes and adsorbents.
Many questions that matter in the energy transition are engineering questions, which flow through to energy economics: which technologies work, what do they cost, what energy penalties do they carry, and which materials do they use? We see an increasing intersection between economics and engineering in our energy transition research.
Behold thermodynamics!! Read the notes below, and you will no longer be tempted to commit exergetic harakiri by converting ratable electricity into some fuel that you can then run through a heat engine.
What does it cost? Some commentators seem to think decarbonizing the planet is going to be simple. If you think the solution is simple, the most likely reason is that you do not understand the complexity of the problem. We have built over 160 economic models, of different technologies in the energy transition, or different materials and manufacturing value chains that will themselves need to be decarbonized. One of our strongest held views is that decarbonization will be easier using low-cost technologies, and a good cut-off for “too expensive” is decarbonization technologies costing well over $100/ton.
What energy penalties? Net energy return on energy invested (EROEI) has risen by 5x over the past 200-years, to 28x in today’s global energy system. Strangely, even our own roadmap to net zero sees this mega-trend reversing to 2050 (note below). But hopefully EROEI declines will not be overly severe or maybe they can even be ameliorated. Yes, you guessed it, engineering equations govern the energy efficiencies of power plants, energy penalties of CCS, energy penalties of hydrogen or energy losses in inverters and power electronics.
How much metal or material? The average metal and material sees demand grow 3-30x in the energy transition, due to the growth of wind, solar, electric vehicles and batteries, and the current use factors in those spaces. But often, it takes some energy economics to determine how much material you can thrift.
Does that company’s technology work? We like to answer this question by reviewing the company’s patents, and scrutinizing the engineering details. All of our patent reviews are linked here, ranging from true breakthroughs with a moat around them, to other companies that seem to have the engineering equivalent of a flat tyre. A recent observation is that many technologies progress more slowly, and are harder to de-risk, than we naively hoped, based on a former career covering large-cap equities. We really like the 5-point framework at the back of this note for assessing the road to maturation.
Humility moment. After five years running a research firm focused on the energy transition, we are still working hard to correct our historical misconceptions, and educate ourselves into how the world’s energy-industrial complex works. If we can help you, field requests, or dig into any topics that are swirling around in your minds, then please do contact us.
Investing involves being paid to take risk. And we think energy transition investing involves being paid to take ten distinct risks, which determine justified returns. This note argues that investors should consider these risk premia, which ones they will seek out, and which ones they will avoid.
Investment strategies for a fast-evolving energy transition?
Thunder Said Energy is a research firm focused on energy technologies and energy transition. Over the past 4-years, we have published over 1,000 research notes, data-files and models, to help our clients appraise different opportunities in the energy transition, across new energies, hydrocarbons and industrial decarbonization.
Energy transition is evolving very quickly. This means that many investors are continually iterating their investment strategies, stepping into new themes/sectors as they emerge, and candidly, it also means that many risks are mis-priced.
Hence we think it is helpful to consider risk premia. Which risk premia are you getting paid to take? Are you getting paid enough? Or worse, are you exposed to risk for which you are not getting paid at all?
Energy Transition: ten risk premia?
We think there are ten risk factors, or risk premia, that determine the justified returns for energy transition investing. Sweeping statements about the global energy system are almost always over-generalizations that turn out to be wrong. Nevertheless, we will make some observations, as we define each risk factor below.
(1) Risk free rate. The risk free rate is a baseline. It is the return available with almost no risk, when investing in cash deposits and medium-term Treasuries. Our perspective is that many technologies in the energy transition will be inflationary. They will stoke inflationary feedback loops. And in turn inflation puts upwards pressure on the risk free rate. Thus rising rates should raise the bar on allocating capital across the board and investors should consider how they are being compensated. Our favorite note on this topic is here.
(2) Credit/equity risk. This risk premium compensates decision makers for the risk of capital loss inherent in owning the debt and equity of companies. It might vary from sub-1% in the senior secured credit of highly creditworthy companies, through to a c3-5% equity risk premium, and c5-10% for smaller/private companies? Our perspective is that there is great enthusiasm to invest in the energy transition. This means credit/equity risk premia for some of the most obvious energy transition stocks may be compressed. But energy transition is also going to pull on many adjacent value chains, which have non-obvious exposure to the energy transition, while their risk premia have not yet compressed. So we think it is a legitimate investment strategy to target “non-obvious” exposure to the energy transition. Our favorite note on this topic is here.
(3) Project risk. Energy transition is the world’s greatest building project. But over 90% of all construction projects take longer than expected, or cost more than expected, and the average over-run is 60%. Opportunities with more of their future value exposed to delivering large projects have higher project risk. Infrastructure investors can happily stomach 5-10% total returns as they accept project risk. This might include building out power grids, pipelines, fiber-optic cables, and PPA-backstopped wind and solar. And thus another legitimate strategy in energy transition investing is to get paid for appraising and defraying project risks; get paid for the ability to execute projects well.
(4) Liquidity risk. This premium compensates investors against the possible inability to withdraw capital in a liquidity crunch. It is clearly higher for small private companies than publicly listed large-caps. Energy transition sub-sectors that might debatably warrant higher liquidity premia are CCS projects (50-year monitoring requirements after disposal), reforestation projects (40-100 year rotations for CO2 credit issuance) and infrastructure with very long construction times. A legitimate strategy for endowments and pension funds in the energy transition hinges on their large size and longevity, which allows them to withstand greater liquidity risk than other groups of investors. Another legitimate strategy is to earn higher returns by taking higher liquidity risk, building up a portfolio of privately owned companies with exposure to the energy transition, rather than investing in public companies that will tend to have lower liquidity risk premia.
(5) Country risk. This premium compensates investors against deteriorating economic conditions, tax-rises, regulatory penalties or cash flow losses in specific countries. This is becoming relevant for wind and solar, as many investors are increasingly willing to take country risk to achieve higher hurdle rates, and given the vast spread in different countries’ power prices and grid CO2 intensities (chart below). It is no good if only a few countries globally decarbonize. Net zero is a global ambition. And so another very legitimate strategy in the energy transition is to specialize in particular countries, where those country risks can be managed and defrayed, while driving energy transition there.
(6) Technology risk. This risk premium compensates investors for early stage technologies not working as intended, or suffering delays in commercialization, which derail the delivery of future cash flows. One observation in our research has been that some technologies at first glance seem to be mature, but on closer inspection still have material technology risks (such as green hydrogen electrolysers, post-combustion CCS). But technology risk is interesting for two reasons. First, we think that investors can command some of the highest risk premia (i.e., highest expected returns) for taking technology risk, compared to other risk premia on our list. Second, appraising technology risk is a genuine skill, possessed by some investors, and a wellspring of “alpha”. We enjoy appraising technology risk by reviewing patents (see below).
(7) New market risk. This risk premium compensates investors for immature markets. For example, if you produce clean ammonia, then you can sell it into existing ammonia fertilizer markets, which does not carry any new market risk; or you can sell it as a clean fuel to decarbonize the shipping industry, which involves persuading the shipping industry to iron out ammonia engines (despite challenging combustion properties), prevent NOx emissions, and retrofit existing bulk carriers and container ships, and which clearly does carry new market risk. Another interesting example is that in geographies with high renewables penetration, there may be some hidden market risk in reaching ever-higher renewables penetration? Our personal perspective is that new market risk is the most ‘uncompensated’ risk in the energy transition. It is pervasive across many new energies categories, many investors are exposed to this risk, and yet they are not paid for it. Although maybe another legitimate strategy in the energy transition is to collect new market risk premia while helping to create new markets?
(8) Competition risk. This risk premium compensates against unexpected losses of market share and cash generation due to new and emerging competition. It also includes the risk of your technology “getting disrupted” by a new entrant. Across our research, the area that most comes to mind is in batteries. There is always a headline swirling somewhere about a disruptive battery breakthrough that will crater demand for some incumbent material. Arguably, competition risk goes hand in hand with technology risk and new market risk. It is not enough to develop a technology that is 20% better than the incumbent if someone else develops a technology that is 40% better. Again, we think investors may not get compensated fairly for taking competition risk, while excess returns may accrue to investors that can appraise this risk well.
(9) Commodity risk. This risk premium compensates investors for the inherent volatility of commodity markets, which can have deleterious effects on valuations, liquidity, solvency, sanity (!). Consider that within the past five years, oil prices started at $80/bbl, collapsed into negative territory in Apr-2020, then recovered above $120/bbl in mid-2022. Our work has progressively gone deeper into cleaner hydrocarbons, metals, materials. Our energy transition roadmap contains bottlenecks where total global demand must rise by 3-30x. Yet our perspective is that many investors are reluctant to take commodity risk. Stated another way, commodity risk can be well-compensated. If you have the mental resiliency to ride this roller-coaster.
(10) Environmental risk. This risk premium compensates investors against tightening environmental standards, deteriorating environmental acceptance or regulations that lower future cash generation, especially in CO2 emitting value chains. It is sometimes called “stranded asset risk”. The most obvious example is investing in coal, where investors can get paid c10% dividend yields to own some of these incumbents. Our perspective is that environmental risk premia blew out to very high levels in 2019-2021, and they still remain high, especially if you believe that an era of energy shortages lies ahead (note below). Another perspective is that it feels “easier” to get paid a 5-10% environmental risk premium on an insurance policy against future energy shortages, than to earn a 5-10% risk premium by doing the due diligence on a new and emerging technology? And finally, it is a legitimate strategy in the energy transition to own higher-carbon businesses, then improve their environmental performance, so that the market will reward these companies with lower environmental risk premia.
Economic models: moving beyond 10% hurdle rates?
We have constructed over 160 economic models of specific value chains, in new energies and in decarbonizing industries. Usually, we levy a 10% hurdle rate in these models, for comparability. But the justified hurdle rate should strictly depend upon energy transition risk premia discussed above.
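As a purely illustrative sketch, a justified hurdle rate can be built up by stacking the premia above; every number below is a placeholder for the reader to overwrite, not a house view.

```python
# Illustrative build-up of a justified hurdle rate from the ten risk premia
# discussed above. All values are hypothetical placeholders.

risk_premia = {
    "risk_free_rate": 0.04,
    "credit_equity":  0.04,
    "project":        0.01,
    "liquidity":      0.00,
    "country":        0.00,
    "technology":     0.02,
    "new_market":     0.01,
    "competition":    0.00,
    "commodity":      0.01,
    "environmental":  0.00,
}

hurdle_rate = sum(risk_premia.values())
print(f"Justified hurdle rate: {hurdle_rate:.0%}")  # versus the flat 10% in our models
```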
Please contact us if we can help you appraise any particular ideas or opportunities, to discuss their justified hurdle rates, or to discuss how these risk premia align with your own energy transition investing strategy.
The very simple spreadsheet behind today’s title chart is available here, in case you want to tweak the numbers.
The key difference between an alkaline electrolyser and a proton exchange membrane electrolyser (PEM) is what ion diffuses between the anode and cathode side of the cell. In an alkaline electrolyser, alkaline OH- ions diffuse. In a PEM electrolyser, protons, H+ ions, diffuse. Ten fundamental differences follow.
The lowest cost green hydrogen will come from alkaline electrolysers run at high utilizations, powered by clean, stable grids with excess power (e.g., nuclear, hydro).
PEM electrolysers are better suited to backstopping renewables, although there is still some debate over the costs, longevity, efficiency and whether intermittent wind/solar can be put to better use elsewhere.
(1) In an alkaline electrolyser, water is broken down at the cathode. 4 x H2O molecules gain 4 x e- and become 2 x H2 + 4 x OH- ions. The OH- ions then diffuse across the cell to the anode. To complete the electrical circuit, 4 x OH- ions surrender 4 x e- at the anode and become 2 x H2O molecules + 1 x O2 molecule. A schematic is below.
(2) In a PEM electrolyser, the chemistry is very different. Water is broken down at the anode. 2 x H2O molecules surrender 4 x e- and become an O2 molecule + 4 H+ ions (protons). The H+ ions then diffuse across the cell to the cathode. To complete the electrical circuit, at the cathode, 4 x H+ ions gain 4 x e- and become 2 x H2 molecules.
(3) PEMs have Membranes. H+ ions are the smallest ions in the Universe, measuring 0.0008 pico-meters (comparable with other ionic radii below). This means protons can diffuse through solid polymers like Nafion, which otherwise resist electricity and resist the flow of almost all other materials; totally isolating the anode and cathode sides of a PEM cell.
(4) Alkaline Electrolysers have Diaphragms. OH- ions are larger, at 153 pm (which is actually quite large, per the chart above). Thus they will not diffuse through a solid polymer membrane. Consequently, the anode and cathode are separated by a porous diaphragm, bathed in an electrolyte solution of potassium hydroxide, produced via a variant of the chlor-alkali process. This (alkaline) electrolyte also contains OH- ions. This helps, because more OH- ions make it faster for excess OH- ions to diffuse from high concentration on the cathode side of the cell to low concentration on the anode side of the cell (see (1)).
(5) Safety implications. Alkaline electrolysers are said to be less safe than PEMs. The reason is the porous diaphragm. Instead of bubbling out as a gas on the anode side, very small amounts of oxygen may dissolve, diffuse “in the wrong direction” across the porous diaphragm, and bubble out alongside the hydrogen gas at the cathode side. This is bad. H2 + O2 make an explosive mixture.
(6) Footprint implications. One way to deal with the safety issue above is to place the anode and cathode “further apart” for an alkaline electrolyser. This lowers the chances of oxygen diffusing across the diaphragm. But it also means that alkaline electrolysers are less power-dense.
(7) Efficiency implications. Small amounts of current can leak through the KOH solution in an alkaline electrolyser, especially at very large current densities. When a direct current (e-) is added to the cell, we want it to reduce water into H2 at the cathode. However, a small amount of the current may be wasted, converting K+ into K; and a small amount of “shunt current” may flow through the KOH solution directly from cathode to anode. We think real-world PEMs will be around 65% efficient (chart below, write-up here) and alkaline electrolysers will be multiple percentage points lower.
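For intuition, the mapping from cell voltage to efficiency can be sketched as below; the operating voltages are illustrative assumptions, and real systems lose further percentage points in rectifiers, pumps and balance of plant, which is how system efficiencies fall towards c65% and below.

```python
# Rough sketch of how cell voltage maps to electrolyser efficiency. The 1.48 V
# thermoneutral voltage corresponds to the HHV of hydrogen; the operating
# voltages below are illustrative assumptions, not measured values.

F = 96485                      # C/mol, Faraday constant
HHV_H2 = 39.4                  # kWh/kg, higher heating value of hydrogen
kwh_per_kg_per_volt = 2 * F / 2.016 * 1000 / 3.6e6   # ~26.6 kWh per kg of H2, per volt

for name, volts in [("PEM (illustrative)", 1.9), ("Alkaline (illustrative)", 2.0)]:
    stack_kwh_per_kg = volts * kwh_per_kg_per_volt
    efficiency_hhv = HHV_H2 / stack_kwh_per_kg
    print(f"{name}: {stack_kwh_per_kg:.0f} kWh/kg at the stack, "
          f"{efficiency_hhv:.0%} efficient on an HHV basis (before balance-of-plant losses)")
```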
(8) Cost implications. An alkaline electrolyser may be a few $100/kW cheaper than a PEM electrolyser. Because the diaphragm is cheaper than the membrane. The electrodes are cheaper too. Our overview of electrolyser costs is below.
(9) Longevity implications. Today’s PEMs degrade 2x faster than alkaline electrolysers (40,000 hours versus 90,000 hours, as general rules of thumb). This is primarily because the membranes are fragile. And H+ ions are, by definition, acidic. But as with all power-electronics, the rate of degradation is also a function of the input signal and operating conditions.
(10) Flexibility implications. Alkaline electrolysers are not seen to be a good fit for backstopping renewables (chart above). According to one technical paper, “It is well known that alkaline water electrolysers must be operated with a so-called protective current in stand-by/idle conditions (i.e., when no power is provided by renewable energy sources) in order to avoid a substantial performance degradation”. When current stops flowing, there is nothing driving OH- ions across the cell, or pushing the H2 and O2 out of the cell. In turn, this means O2 and H2 bubbles can form. They may accumulate around electrode catalysts. Then when the cell starts up again, the gas bubbles block current flow. In turn, overly large resistance or current densities can then degrade the catalysts.
How does methane increase global temperature? This article outlines the theory. We also review the best equations, linking atmospheric methane concentrations to radiative forcing, and in turn to global temperatures. These formulae suggest that 0.7 W/m2 of radiative forcing and 0.35°C of warming have already occurred due to methane, as atmospheric methane has risen from 720 ppb in 1750 to 1,900 ppb in 2021. This is 20-30% of all warming to-date. There are controversies over mathematical scalars. But on reviewing the evidence, we still strongly believe that decarbonizing the global energy system requires replacing coal and ramping natural gas alongside low-carbon energies.
On the Importance of Reaching Net Zero?
There is a danger that writing anything at all about climate science evokes the unbridled wrath of substantially everyone reading. Hence let us start this article by re-iterating something important: Thunder Said Energy is a research firm focused on the best, most practical and most economical opportunities that can deliver an energy transition. This means supplying over 100,000 TWH of useful human energy by 2050, while removing all of the CO2, and avoiding turning our planet into some kind of Waste Land.
Our roadmap to net zero (note below) is the result of iterating between over 1,000 thematic notes, data-files and models in our research. We absolutely want to see the world achieve important energy transition goals and environmental goals. And part of this roadmap includes a greatly stepped up focus on mitigating methane leaks (our best, most comprehensive note on the topic is also linked below).
However, it is also helpful to understand how methane causes warming. As objectively as possible. This helps to ensure that climate action is effective.
It is also useful to construct simple models, linking atmospheric methane concentrations to global temperature. They will not be perfect models. But an imperfect model is often better than no model.
Methane is a powerful greenhouse gas
An overview of the greenhouse effect is written up in a similar post, quantifying how CO2 increases global temperature (note below). We are not going to repeat all of the theory here. But it may be worth reading this prior article for an overview of the key ideas.
Greenhouse gases absorb and then rapidly re-radiate infra-red radiation. This creates a less direct pathway for solar radiation to be reflected back into space. The ability of different gas molecules to absorb and re-radiate infra-red radiation depends on their vibrational modes, and especially on covalent bonds between non-identical atoms, which give rise to “dipole moments” (this is why H2O, CO2, CH4 and N2O are all greenhouse gases, while N2, O2 and Ar are not).
There are two reasons that methane is up to 200x more effective than CO2 as a greenhouse gas. The first reason is geometry. CH4 molecules are tetrahedral. CO2 molecules are linear. A tetrahedral molecule can generally absorb energy across a greater range of frequencies than a linear molecule.
The second reason is that methane is 200x less concentrated in the atmosphere, at 1,900 parts per billion, versus CO2 at 416 parts per million. We saw in the post below that radiative forcing is a log function of greenhouse gases. In other words, the first 20ppm of CO2 in the atmosphere explains around one-third of all the warming currently being caused by CO2. Each 1ppm increase in atmospheric CO2 has a ‘diminishing impact’, because it is going to absorb incremental radiation in a band that is already depleted by the pre-existing CO2. Thus small increases in methane cause more warming, as methane is currently present in very low concentrations, and thus at a much steeper part of the radiative forcing curve.
The most commonly quoted value we have seen for the instantaneous global warming potential of methane (instantaneous GWP, or GWP0) is 120x. In other words 1 gram of methane has a warming impact of 120 grams of CO2-equivalent. Although the 20, 50 and 100-year warming impacts are lower (see below).
What formula links methane to radiative forcing?
Our energy-climate model is linked below. It contains the maths and the workings linking methane to radiative forcing. It is based on a formula suggested in the past by the IPCC:
Radiative Forcing from Methane (in W/m2) = Alpha x [Methane Concentration (in ppb) ^ 0.5 – Pre-Industrial Methane Concentration (in ppb) ^ 0.5] – Small Adjustment Factor for Methane-N2O interaction. Alpha is suggested at 0.036 in the IPCC’s AR5 models, the pre-industrial concentration is c720 ppb, and the adjustment factor for methane-N2O interactions can be ignored if you are seeking an approximation.
This is the formula that we have used in our chart below (more or less). As usual, we can multiply the radiative forcing by a ‘gamma factor’ which calculates global temperature changes from radiative forcing changes. We have seen the IPCC discuss a gamma factor of 0.5, i.e., 1 W/m2 of incremental radiative forcing x 0.5°C/[W/m2] gamma factor yields 0.5°C of temperature increases. However, there are controversies over the correct values of alpha and gamma.
Interaction Effects: Controversies over Alpha Factors?
The alpha factor linking methane to radiative forcing is suggested at 0.036 in the IPCC’s AR3 – AR5 reports. Plugging 0.036 into our formula above would suggest that increasing methane from 720 ppb in pre-industrial times to 1,900 ppb today would have caused 0.52 W/m2 of incremental radiative forcing. In turn, this would be likely to raise global temperatures by 0.27°C.
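That arithmetic can be reproduced in a few lines; the sketch below ignores the small N2O-overlap deduction (c0.08 W/m2, discussed further down), which is what takes the c0.60 W/m2 it prints down to the c0.52 W/m2 quoted above.

```python
# IPCC-style square-root formula for methane radiative forcing, ignoring the
# small N2O-overlap term, then converting to a temperature change with a
# 0.5 degC per W/m2 gamma factor.

import math

alpha = 0.036        # W/m2 per sqrt(ppb), IPCC-style coefficient
gamma = 0.5          # degC of warming per W/m2 of radiative forcing
m0, m1 = 720, 1900   # ppb of atmospheric methane, 1750 vs 2021

forcing = alpha * (math.sqrt(m1) - math.sqrt(m0))
print(f"Radiative forcing from methane: {forcing:.2f} W/m2")   # ~0.60 W/m2
print(f"Implied warming: {forcing * gamma:.2f} degC")          # ~0.30 degC
```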
However, many technical papers, and even the IPCC’s AR5 report, have argued that alpha should be ‘scaled up’ to account for indirect effects and interaction effects.
Tropospheric Ozone. In the troposphere (up to 15km altitude), ozone is a ridiculously powerful greenhouse gas, quantified at around 1,000x more potent than CO2. It is postulated that the breakdown of atmospheric methane produces peroxyl radicals (ROO*, where R is a carbon-based molecule). In turn, these peroxyl radicals react with NOx pollutants (converting NO into NO2, which photolyzes to release ozone-forming oxygen atoms), yielding O3. And thus methane is assumed to increase tropospheric ozone. Several authors, including the IPCC, have proposed to scale up alpha values by 20% – 80%, to reflect the warming impacts of this additional ozone.
Stratospheric Water Vapor. Water is a greenhouse gas, but it is usually present at relatively low concentrations in the stratosphere (12-50 km altitude). Water vapor prefers to remain in the troposphere, where it is warmer. However, when methane in the stratosphere decomposes, each CH4 molecule yields 2 H2O molecules, which may remain in the stratosphere. Several authors, including the IPCC, have proposed to scale up alpha values by around 15% to reflect the warming impacts of this additional water vapor in the stratosphere.
Short-wave radiation. Visible light has a wavelength of 400-700nm. Infra-red radiation has a wavelength of 700nm – 1mm and is the band that is mainly considered in radiative forcing calculations. However, recent research also notes that methane can absorb short-wave radiation, with wavelengths extended down to as little as 100-200nm. Some authors have suggested that the radiative forcing of methane could be around 25% higher than is stated in the IPCC (2013) assessment when short-wave radiation is considered. This impact is not currently in IPCC numbers.
Aerosol interactions. Recent research has also alleged that methane lowers the prevalence of climate-cooling aerosols in the atmosphere, and this may increase the warming impacts of CH4 by as much as 40%. This impact is not currently in IPCC numbers.
Hydrogen interactions. Even more recent research has drawn a link between methane and hydrogen GWPs, suggesting an effective GWP of 11x for H2, which is moderated by methane (note below).
N2O interactions. The IPCC formula for radiative forcing of methane suggests a small negative adjustment due to interaction effects with N2O, another greenhouse gas, which has been rising in atmospheric concentration (from 285ppb in 1765 to 320ppb today). The reason is that both N2O and CH4 seem to share an absorption peak at 7-8 microns. Hence it is necessary to avoid double-counting the increased absorption at this wavelength. The downwards adjustment due to this interaction effect is currently around 0.08 W/m2.
The overall impact of these interaction effects could be argued to at least double the instantaneous climate impacts of methane. On this more strict vilification of the methane molecule, rising atmospheric methane would already have caused at least a 1.0 W/m2 increase in radiative forcing, equivalent to 0.5°C of total temperature increases since 1750 due to methane alone.
Uncertainty is high, which softens methane alarmism?
Our sense from reviewing technical papers is that uncertainty is much higher when trying to quantify the climate impacts of methane than when trying to quantify the climate impacts of CO2.
The first justification for this claim is a direct one. When discussing its alpha factors, the IPCC has itself acknowledged an “uncertainty level” of 10% for the scalars used in assessing the warming impacts of CO2. By contrast, it notes 14% uncertainty around the direct impacts of methane, 55% on the interaction with tropospheric ozone, 71% on the interaction with stratospheric water vapor. These are quite high uncertainty levels.
A second point is that methane degrades in the atmosphere, with an average estimated life of 11.2 years, as methane molecules react with hydroxyl radicals. This is why the IPCC has stated that methane has a 10-year GWP of 104.2x CO2, 20-year GWP of 84x CO2, 50-year GWP of 48x CO2 and a 100-year GWP of 28.5x CO2.
There is further uncertainty around the numbers, as methane that enters the atmosphere may not stay in the atmosphere. The lifetime of methane in additional sinks is estimated at 120 years for bacterial uptake in soils, 150-years for stratospheric loss and 200 years for chlorine loss mechanisms. And these sources and sinks are continuously exchanging methane with the atmosphere.
Next, you might have shared our sense, when reading about the interaction effects above, that the mechanisms were complex and vaguely specified. This is because they are. I am not saying this to be some kind of climate sceptic. I am simply observing that if you search Google Scholar for “methane, ozone, interaction, warming”, and then read the first 3-5 papers that come up, you will find yourself painfully aware of climate complexity. It would be helpful if the mechanisms could be spelled out more clearly. And without moralistic overtones about natural gas being inherently evil, which sometimes simply makes it sound as though a research paper has strayed away from the scientific ideal of objectivity.
Finally, the biggest reason to question the upper estimates of methane’s climate impact is that they do not match the data. There is little doubt that the Earth is warming. The latest data suggest 1.2-1.3°C of total warming since pre-industrial times (chart below). Our best guesses, based on our very simple models, point to 1.0°C of warming caused by CO2, 0.35°C caused by CH4 and around <0.2°C caused by other greenhouse gases. If you are a mathematical genius, you may have noticed that 1.0 + 0.35 + <0.2 adds up to around 1.5°C, which is more warming than has been observed. And this is not including any attribution for other factors, such as changing solar intensity or ocean currents. So this may all suggest that our alpha and gamma factors are, if anything, too high. In turn, this may mute the most alarmist fears over the stated alpha factors for methane being materially too low.
Conclusions for gas in the energy transition
How does methane increase global temperature? Of course we need to mitigate methane leaks as part of the energy transition, across agriculture, energy and landfills; using practical and economical methods to decarbonize the entire global energy system. Methane is causing around 20-30% of all the incremental radiative forcing, on the models that we have considered here. If atmospheric methane doubles again to 3,800 ppb, it will cause another 0.2-0.4°C of warming, as can be stress-tested in our model here.
However, we still believe that natural gas should grow, indeed it should grow 2.5x, as part of the world’s lowest cost roadmap to net zero. The reason is that while we are ramping renewables by over 5x, this is still not enough to offset all demand for fossil energy. And thus where fossil energy remains, pragmatically, over 15GTpa of global CO2 abatement can be achieved by displacing unchecked future coal consumption with gas instead.
Combusting natural gas emits 40-60% less CO2 than combusting coal, for the same amount of energy, which is the primary motivation for coal-to-gas switching (note below). But moreover, methane leaks into the atmosphere from the coal industry are actually higher than methane leaks from the gas industry, both on an absolute basis and per unit of energy, and this is based on objective data from the IEA (note below).
How does atmospheric CO2 increase global temperature? The purpose of this article is to outline the best formulae we have found linking global temperature to the concentration of CO2 in the atmosphere. In other words, our goal is a simple equation, explaining how CO2 causes warming, which we can use in our models. In turn, this is why our ‘roadmap to net zero’ aims to reach net zero by 2050 and stabilize atmospheric CO2 at 450ppm, a scenario that we believe is compatible with 2°C of warming.
Disclaimer: can a simple equation explain global warming?
Please allow us to start this short note with a disclaimer. We understand that writing anything at all about climate science is apt to incur the unbridled wrath of substantively everyone. We also understand that the world’s climate is complex, and cannot be perfectly captured by simple formulas, any more than ‘world history’ can be.
Hence we think it is useful to have an approximate formula, even if it is only “about 80% right”, rather than a conceptual black hole. So that is the purpose of today’s short note.
(1) Thermal theory: inflows and outflows?
The Earth’s temperature will be in balance and remain constant if energy inflows match energy outflows. Energy inflows come from the sun, and are approximated by the ‘solar constant’, at 1,361 W/m2. Energy outflows are radiated back into space, and are approximated by the Stefan-Boltzmann law.
One of the terms in the Stefan-Boltzmann equation is Temperature^4. That is temperature raised to the power of four. In other words, if for some reason, the Earth is not quite radiating/reflecting 1,361 W/m2-equivalent back into space, then its average temperature needs to become a little bit warmer, until it is once again radiating 1,361 W/m2-equivalent back into space. Let’s unpack this a bit further…
Physics dictates that any physical body with a temperature above absolute zero will radiate energy from its surface, usually small amounts of energy. The wavelength depends on the body’s chemical properties, and more importantly, its temperature, i.e., its thermal energy.
Energy is inversely proportional to wavelength. Some electromagnetic radiation has a wavelength of 380-700 nm, in which case we call this radiation ‘visible light’. Some has lower energy, and thus a longer wavelength. For example, radiation with a wavelength of 700 nm – 1 mm is often referred to as ‘infra-red’.
So in conclusion, energy is constantly being radiated back “upwards” towards outer space by the Earth’s surface, i.e., its land and its seas. These wavelengths are generally in the infra-red range, spanning roughly 4,000 – 50,000 nm and peaking around 10,000 nm. But how much of that energy actually escapes into space?
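To put numbers on this balance, a short sketch using the Stefan-Boltzmann and Wien laws: the Earth's effective radiating temperature works out around 255K, versus c288K at the surface, and the gap between the two is the greenhouse effect discussed next. The 0.3 albedo is a standard round number, not a modelled input.

```python
# The Earth's effective radiating temperature from the Stefan-Boltzmann law,
# and the peak emission wavelength of the surface from Wien's law.

sigma = 5.67e-8          # W/m2/K^4, Stefan-Boltzmann constant
solar_constant = 1361    # W/m2, incoming solar flux
albedo = 0.3             # fraction of sunlight reflected without being absorbed (assumed)

absorbed = solar_constant * (1 - albedo) / 4        # averaged over the whole sphere
t_effective = (absorbed / sigma) ** 0.25            # ~255 K effective radiating temperature
t_surface = 288                                     # K, approximate mean surface temperature

wien_peak_nm = 2.898e-3 / t_surface * 1e9           # ~10,000 nm, squarely in the infra-red
print(f"Effective radiating temperature: {t_effective:.0f} K (vs ~{t_surface} K at the surface)")
print(f"Peak wavelength of surface emission: {wien_peak_nm:,.0f} nm")
```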
(2) Greenhouse Gases: CO2, CH4, N2O, et al.
CO2 is a greenhouse gas. This means that as thermal energy is radiated upwards from the Earth’s surface (land and sea), it can excite CO2 molecules into higher-energy vibrational states, for a few nano-seconds. These excited molecules quickly relax back into a lower-energy state. As they do this, they re-radiate energy.
In other words, CO2 molecules absorb energy that is “meant to be” radiating upwards from the Earth’s surface back into space, and instead they scatter it in various other directions, so some of it will be re-absorbed (e.g., back in the world’s oceans).
In passing, this is why some scientists object to CO2 being described as ‘a blanket’. It is not trapping heat. Or storing any heat itself. It is simply re-radiating and re-scattering heat that would otherwise be travelling in a more direct path back into space.
This effect of CO2, and other greenhouse gases, can be described as ‘radiative forcing’ and measured in W/m2. In 2021, with 416ppm of CO2 in the atmosphere, the radiative forcing of atmospheric CO2 is said to be 2.1 W/m2 higher than it was in 1750, back when there was 280 ppm of CO2 in the atmosphere.
So what equations relate atmospheric CO2 concentrations into radiative forcing effects and ultimately into temperatures?
(3) Absorbing equations: how does CO2 impact temperature?
Let us start with an analogy. You are reading a complicated text (maybe this one). On the first reading, you absorb and retain about 50% of the content. On the second reading, you absorb and retain another 25% of the content. On the third reading, you absorb another 13% of the content. And so on. By the tenth reading, you have absorbed 99.9% of the content. But the general idea is that the more you have absorbed on previous readings, the less is left to be absorbed on future readings.
The absorption profile of CO2 is similar. One way to think about this is that even a very small amount of a particular greenhouse gas is going to start absorbing and re-radiating energy at its particular absorption wavelengths. This dilutes the amount of energy with this particular wavelength that remains. This means that there will be less energy with this particular wavelength left to absorb by adding more of molecules of this greenhouse gas into the atmosphere. By definition, the additional gas is ‘trying to’ absorb radiation at a wavelength that is already depleted.
Hence, without any CO2 in the atmosphere, the Earth would be about 6°C cooler. The first 20ppm of CO2 explains perhaps 2°C of all the warming exerted by CO2. The next 20ppm explains 0.8°C. The next 20ppm explains around 0.6°C. And so on. There are diminishing returns to adding more and more CO2.
Systems like this are described with logarithmic equations. Thus in the past, the IPCC has suggested various log equations that can relate radiative forcing to atmospheric CO2. The most famous and widely cited formula is below:
Increase in Radiative Forcing (W/m2) = 5.35 x ln (C/C0) … where C is the present concentration of atmospheric CO2 in ppm; and C0 was the concentration of atmospheric CO2 in 1750, which was 280 ppm.
The scalar value of 5.35, in turn, is derived from “radiative transfer calculations with three-dimensional climatological meteorological input data” (source: Myhre, G., E. J. Highwood, K. P. Shine, and F. Stordal (1998). New estimates of radiative forcing due to well mixed greenhouse gases. Geophys. Res. Lett., 25, 2715–2718). Note these formulae are quite long-standing and go back to 1998-2001.
Thus in the chart below, we have averaged three log formulas, suggested in the past by the IPCC, to calculate radiative forcing from atmospheric CO2. Next, to translate from radiative forcing into temperature we have used a "gamma factor" of 0.5°C per W/m2 of radiative forcing, which is also a number suggested in the past by the IPCC.
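For readers who want to check these numbers, below is a minimal sketch in Python of the simple relationships described above. The 5.35 coefficient and the 280 ppm baseline come from the Myhre et al. formula quoted above, and the 0.5°C per W/m2 gamma is the same assumption used in our chart; the function names are our own illustrative choices.

```python
import math

C0 = 280.0   # pre-industrial atmospheric CO2, ppm (1750 baseline)
GAMMA = 0.5  # assumed sensitivity to forcing, degC per W/m2 (IPCC central estimate)

def co2_forcing(c_ppm):
    """Increase in radiative forcing vs 1750, in W/m2, per the Myhre et al. log formula."""
    return 5.35 * math.log(c_ppm / C0)

def co2_warming(c_ppm, gamma=GAMMA):
    """Warming attributable to CO2 alone vs pre-industrial, in degC."""
    return gamma * co2_forcing(c_ppm)

print(round(co2_forcing(416), 2))   # ~2.1 W/m2 of extra forcing at 2021's 416 ppm
print(round(co2_warming(415), 2))   # ~1.1 degC of CO2-driven warming at ~415 ppm
print(round(co2_warming(450), 2))   # ~1.3 degC of CO2-driven warming at 450 ppm
```

Note that our chart averages three IPCC log formulae, so numbers read off the chart can differ slightly from this single-formula calculation.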
(4) Temperature versus CO2: mathematical implications?
Let us interpret this chart above from left to right. This is intended to be an objective, mathematical exercise, assessing the implications of simple climate formulae proposed long ago by the IPCC.
(a) Much of the greenhouse effect has "already happened", due to the log relationship in the chart. The chart implies that the total greenhouse effect occurring today from atmospheric CO2 is around 7-8°C, of which c.6-7°C was already occurring back in 1750.
(b) Raising atmospheric CO2 to 415ppm should have increased global temperatures by around 1°C since pre-industrial times, due to CO2 alone and holding everything else equal, again taking the IPCC's formulae at face value.
(c) Raising atmospheric CO2 by another 35ppm to 450ppm would, according to the simple formula, result in 1.3°C of warming due to CO2 alone, and 2°C of total warming, using the simple assumption that two-thirds of all greenhouse warming since 1750 is down to CO2, with the remainder due to CH4, N2O and other trace gases.
(d) Diminishing impacts? Just reading off the chart, a further 50ppm rise to 500ppm raises temperature by 0.3°C; a further 50ppm rise to 550ppm raises it by 0.27°C; a further 50ppm rise to 600ppm raises it by 0.25°C; and a further 50ppm rise to 650ppm raises it by 0.23°C. The point is that rising atmospheric CO2 keeps raising global temperature. While each incremental 50ppm has less impact than the prior 50ppm, the pace of the slowdown is itself quite slow, and not enough to dispel our ultimate need to decarbonize (see the sketch after this list).
(e) There is a theoretical levelling off point due to the log relationship, where incremental CO2 makes effectively no difference to global temperatures, but from the equations above, it is so far in the future as to be irrelevant to practical discussions about decarbonization: it is probably somewhere after atmospheric CO2 has surpassed 4,000 ppm and CO2 has directly induced about 8°C of warming.
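As a cross-check on point (d) above, the sketch below steps atmospheric CO2 up in 50ppm increments and prints the incremental warming implied by the single Myhre et al. formula with a 0.5°C per W/m2 gamma. Because our chart averages three IPCC formulae, these single-formula increments are close to, but not identical with, the figures quoted above.

```python
import math

C0, GAMMA = 280.0, 0.5  # pre-industrial CO2 (ppm); assumed degC per W/m2

def warming(c_ppm):
    """CO2-driven warming vs pre-industrial (degC): gamma x 5.35 x ln(C/C0)."""
    return GAMMA * 5.35 * math.log(c_ppm / C0)

# Incremental warming from each further 50 ppm rise, from 450 ppm up to 650 ppm
for c in range(450, 601, 50):
    step = warming(c + 50) - warming(c)
    print(f"{c} -> {c + 50} ppm adds ~{step:.2f} degC")
```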
In conclusion, the simple formulae that we have reviewed suggest that the 'budget' for 2°C of total global warming is reached at a total atmospheric CO2 concentration of 450ppm. Of the budget, about two-thirds is eaten up by CO2, and the remaining one-third is from other anthropogenic greenhouse gases, mainly methane and N2O. This is what goes into our climate modelling, and why we think it is also important to mitigate methane emissions and N2O emissions from activities such as crop production.
(5) Controversies: challenges for simple CO2-warming equations?
Welcome to the section of this short note that is probably going to get controversial. Our intention is not to inflame or enrage anybody here. But we are simply trying to weigh the evidence objectively.
(a) Gamma uncertainty. Our simple climate formula above used a gamma value of 0.5°C per W/m2 of radiative forcing. The IPCC has said in the past that this is a good average estimate, with an uncertainty range of +/- 20%. Even this range implies considerable uncertainty: it means the CO2 budget for 2°C of total warming could be anywhere between 420-510ppm of atmospheric CO2, using the formulae above (see the sketch after this list).
(b) More gamma uncertainty. Worse, some sources that have crossed our screens have suggested gamma values as low as 0.31°C per W/m2 of radiative forcing, and others as high as 0.9°C per W/m2 of radiative forcing. One of the key uncertainties seems to be around interaction effects and feedback loops. For example, a warmer atmosphere can store more water vapor. And water vapor is itself a greenhouse gas.
(c) What percentage of CO2 emitted by human activities remains in the atmosphere, and what percentage is absorbed by the oceans? In our simple climate model, we have assumed that almost no incremental CO2 emitted by human activities is absorbed by the oceans, and almost all remains in the atmosphere.
(d) Other variables. Our model does not factor in the impacts of other variables that have an impact on climate, such as Milankovitch cycles, solar cycles, ocean currents, massive volcanic eruptions or mysterious Dansgaard-Oeschger events. (And no doubt there are some super-nerdy details of climate science that are not yet fully understood. But surely that does not invalidate the basics. While I may not fully grasp the Higgs boson, I still try to avoid cycling into large and immovable obstacles).
(e) We have over-simplified radiative forcing. There is no single formula that perfectly sums up the impact of CO2 on climate, because the Earth is not a single, homogeneous disc, pointed directly at the sun. Some of these over-simplifications are obvious. Others are more nuanced. Clouds reduce the mean radiative forcing due to CO2 by about 15%. And the impacts of different gases vary across vertical profiles through the atmosphere; for example, stratospheric adjustments reduce the impact of CO2 by around 15%, compared to a perfectly mixed and homogeneous atmosphere.
(f) Warming to-date. Perhaps most controversially, our simple formulae discussed above suggest that with atmospheric CO2 at 415 ppm and a gamma value of 0.5°C, the world should already be 1.1°C warmer than pre-industrial times due to CO2 alone, which equates to being around 1.6°C warmer than pre-industrial times due to the combination of all greenhouse gases. In contrast, the world currently appears to be around 1.1°C warmer, in total, than it was in pre-industrial times, according to the data below. This may mean that our gamma value is too high, or that our formulae are conservative and overstate the warming to-date. Or conversely, it could simply mean that there are time lags between CO2 rising and temperature following. We do not know the answer here, which is unsatisfying.
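To illustrate points (a) and (b), below is a minimal sketch that inverts the formula, asking what atmospheric CO2 concentration corresponds to 2°C of total warming for different gamma values, still assuming (as in section 4) that CO2 accounts for two-thirds of total greenhouse warming. Because it uses the single Myhre et al. formula rather than our average of three, the budgets it prints bracket, rather than exactly reproduce, the 420-510 ppm range quoted above.

```python
import math

C0 = 280.0             # pre-industrial atmospheric CO2, ppm
CO2_SHARE = 2.0 / 3.0  # assumed share of total greenhouse warming attributable to CO2

def ppm_budget(total_warming_degC, gamma):
    """Atmospheric CO2 (ppm) consistent with a given total warming,
    inverting total_warming x CO2_SHARE = gamma x 5.35 x ln(C/C0)."""
    co2_only = total_warming_degC * CO2_SHARE
    return C0 * math.exp(co2_only / (gamma * 5.35))

for gamma in (0.31, 0.4, 0.5, 0.6, 0.9):  # central IPCC range, plus the outliers in (b)
    print(f"gamma = {gamma:.2f} degC per W/m2 -> 2C budget ~ {ppm_budget(2.0, gamma):.0f} ppm")
```

The spread is striking: under the lowest gamma value, the 2°C budget is not reached until well above 600 ppm, while under the highest, it was already exceeded below today's atmospheric concentration.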
Conclusions: what is the relationship between CO2 and warming?
How does CO2 increase global temperature? The advantage of simple formulae is that they are transparent and easy to use in our models. The disadvantage is that the global energy-climate system is complex, and will never be captured entirely by simple formulae.
Our aim in this short note is to explain the formulas that we are using in our energy-climate models and in our roadmap to net zero. We think it is possible to stabilize atmospheric CO2 at 450ppm, by reaching ‘net zero’ around 2050, in an integrated roadmap that costs $40/ton of CO2 abated; and reaches an important re-balancing between human activities and the natural world.
450ppm of atmospheric CO2 is possibly a conservative budget. There may be reasons to hope that true 'gamma' values for global warming, or forcing coefficients, are lower than we have modelled. But it makes sense to 'plan for the worst, hope for the best'. And regardless of specific climate-modelling parameters, it makes sense for a research firm to look for the best combination of economics, practicality, morality and opportunity in the energy transition.
Further research. Our overview of energy technologies and the route to 'net zero' in the energy transition is linked here.