Net EROEI is the best metric for comparing end-to-end energy efficiencies, as explored in this 13-page report. Wind and solar currently have EROEIs that are lower and ‘slower’ than today’s global energy mix, stoking upside to energy demand and capex. But future wind and solar EROEIs could improve 2-6x. This will be the make-or-break factor determining the ultimate share of renewables.
Energy efficiency: a riddle, in a mystery, in an enigma?
Projections of future global energy demand depend on energy efficiency gains, which are hoped to step up from <1% per year since 1970 to above 3% per year out to 2050. But there is a problem. Energy efficiency is vague. And hard to measure. This 17-page note explores three different definitions. We are worried that global energy demand will surprise to the upside as efficiency gains disappoint optimistic forecasts.
Prime movers: efficiency of power generation over time?
How has the efficiency of prime movers increased across industrial history? This data-file profiles the continued progress in the efficiency of power generation over time, from 1650 to 2050e. As a rule of thumb, the energy system has become ever more efficient over the past 400 years.
In the early industrial revolution, mechanical efficiency ranged from 0.5-2% for the coal-fired steam engines of the 18th and 19th centuries, most famously Newcomen’s 3.75kW steam engine of 1712. This is pretty woeful by today’s standards. Yet it was enough to change the world.
Electrical efficiency started at around 2% in the first coal-fired power stations built from 1882 onwards, beginning at London’s Holborn Viaduct and Manhattan’s Pearl Street Station, rising to around 10% by the 1900s, and to 30-50% at modern coal-fired power plants using pulverised coal and sub-critical, super-critical and ultra-super-critical steam.
The first functioning gas turbines were constructed in the 1930s, but suffered from high back work ratios and were not as efficient as coal-fired power generation of the time. Gas turbines are inherently more efficient than steam cycles. But realizing the potential took improvements in materials and manufacturing. And the best recuperated Brayton cycles now surpass 60% efficiency in world-leading combined cycle gas turbines.
Renewables, such as wind and solar, offer another step-change upwards in efficiency, and can harness over 80% of the theoretically recoverable energy in blowing wind and diffuse sunlight (i.e., relative to the Betz limit and the Shockley-Queisser limit, respectively).
There is a paradox about many energy transition technologies. Long-term battery storage and green hydrogen would depart quite markedly from the historical trend of ever-rising energy efficiency in power cycles. Likewise, there are energy penalties for CCS.
The data-file profiles the efficiency of power generation over time, noting 15 different technologies, their year of introduction, typical size (kW), mechanical efficiency (%), equivalent electrical efficiency (%) and useful notes about how they worked and why they matter.
Gas turbines: operating parameters?
A typical simple-cycle gas turbine is sized at 200MW, and achieves 38% efficiency, as super-heated gases at 1,250ºC temperature and 100-bar pressure expand to drive a turbine. The exhaust gas is still at about 600ºC. In a combined cycle gas plant, this heat can be used to produce steam that drives an additional turbine adding 100MW of power and c20% of efficiency, for a total efficiency of 58%. This data-file tabulates the operating parameters of gas turbines.
Why do gas turbines matter? Recuperated Brayton cycles are going to be a defining technology of the energy transition and a complement to renewables. The thermodynamics are explained here. The key point is that gas-fired power cycles are totally different from steam cycles. They run off a fuel that is 50% lower carbon than coal. They can realistically be 2-3x more efficient per unit of fuel. They are more flexible (data here). And they may also be easier to decarbonize directly (example here).
How does a gas turbine work? First, air is drawn into a compressor. The compression ratio is typically around 20x. The pressurized air is then heated by combusting a fuel. The result is a very hot, very high-pressure gas. This can be used to drive a turbine as it expands. For example, expanding 1 ton of gas from a turbine inlet temperature of 1,250ºC and a turbine inlet pressure of 100-bar, down to an exhaust gas temperature of 600ºC and near-ambient pressures, might see volumes increase by around 25x (chart below).

Simple cycles versus combined cycles. If the 600ºC exhaust gas is simply discharged into the atmosphere, then a typical simple cycle gas turbine will achieve 38% efficiency, converting natural gas into electricity. But there is still a lot of energy in a 600ºC exhaust stream, which can be used to evaporate water, produce high pressure steam, and then drive an entirely separate turbine. This is a combined cycle configuration. And it adds another 20% efficiency, yielding a total efficiency of 58%.
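As a sanity-check on these figures, here is a minimal sketch in Python, treating the steam bottoming cycle as recovering an assumed ~32% of the fuel energy left in the exhaust; the 32% is an illustrative assumption chosen to be consistent with the numbers above, not a figure from the data-file.

```python
# Minimal sketch of combined-cycle efficiency (illustrative assumptions, not a plant model).
eta_simple = 0.38                # simple-cycle (gas turbine) efficiency
eta_bottoming = 0.32             # assumed efficiency of the steam cycle run on the exhaust heat
exhaust_share = 1 - eta_simple   # ~62% of the fuel energy leaves in the hot exhaust
eta_combined = eta_simple + eta_bottoming * exhaust_share
print(f"Combined cycle efficiency: {eta_combined:.0%}")   # ~58%
```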
Note that the steam cycle described above, powered by the waste heat from a gas turbine, is effectively the same as the primary heat cycle used in other conventional thermal power plants (Rankine cycle). This is remarkable.
The efficiency of a simple cycle gas turbine depends primarily on the turbine inlet temperature and pressure, which in turn depend on the compression ratio. The most efficient simple cycle gas turbines hit 43% efficiency, with compression ratios of 25-30x, turbine inlet pressures of 140-180 bar and turbine inlet temperatures of 1,400-1,600ºC. It is quite hard to get hotter than this, because things start to melt. But consider, for contrast, that a steam cycle really struggles to surpass 300-500ºC.
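To illustrate why the compression ratio matters, below is a minimal sketch of the ideal air-standard Brayton efficiency; it assumes an ideal gas with gamma = 1.4 and no component losses, which is why the figures sit well above the real-world 38-43%.

```python
# Ideal air-standard Brayton cycle: efficiency = 1 - r^(-(gamma-1)/gamma)
# (textbook formula; real turbines lose efficiency to component losses, cooling air, etc.)
gamma = 1.4
for r in (10, 20, 30):
    eta_ideal = 1 - r ** (-(gamma - 1) / gamma)
    print(f"compression ratio {r}x -> ideal efficiency {eta_ideal:.0%}")
# 10x -> 48%, 20x -> 58%, 30x -> 62%
```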

Why does a gas turbine look like that? To achieve these high compression ratios, a typical gas turbine will have 12-22 separately optimized, sequential compression stages. And to maximize power output in the turbine, it will typically have 4 turbine stages. This explains the classic cross-sectional profile of a gas turbine.
How fast does a gas turbine spin? A simple cycle gas turbine typically spins at 3,000-4,000 revolutions per minute (rpm). The compressor is connected to the same shaft as the turbine. The back-work imparted to the compressor is equivalent to around 40-50% of the gross work generated by the turbine.
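A minimal numerical sketch of the back-work ratio follows, using purely illustrative work figures (not manufacturer data) consistent with a ~200MW net unit.

```python
# Back-work ratio sketch (illustrative numbers only).
gross_turbine_work_mw = 360.0     # hypothetical gross expansion work through the turbine
compressor_work_mw = 160.0        # hypothetical work absorbed by the compressor on the same shaft
net_output_mw = gross_turbine_work_mw - compressor_work_mw      # 200 MW of net output
back_work_ratio = compressor_work_mw / gross_turbine_work_mw    # ~0.44, i.e. within 40-50%
```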
How large is a gas turbine? A typical 200MW gas turbine might take up 60 m2 and weigh 300 tons. Good rules of thumb are 0.3 m2/MW of areal footprint, and 2 tons/MW of weight. Although larger gas turbines are more compact (on a per MW basis).
What is the cost of a gas turbine? A typical gas turbine might cost $200/kWe (chart below). Larger gas turbines have lower costs per MW (chart below). However note that our model of a gas-fired power plant assumes total capex of $850/kW. In other words, total installed capital costs are typically around 4x larger than the turbine itself.
(This multiple may be worth keeping in mind amidst the debate about hydrogen electrolyser costs. Some companies have been guiding to $200-300/kWe electrolyser selling prices, while some analysts note that this realistically means around $1,000-1,200/kW of fully installed costs.)
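The arithmetic behind this multiple, as a minimal sketch (the ~4x is a rule of thumb, not a quoted figure):

```python
# Turbine-to-total-capex multiple, and the electrolyser analogy (rule-of-thumb arithmetic).
turbine_cost_usd_per_kw = 200         # typical gas turbine cost
plant_capex_usd_per_kw = 850          # total installed capex in our gas power plant model
multiple = plant_capex_usd_per_kw / turbine_cost_usd_per_kw    # ~4.25x
electrolyser_prices = (200, 300)      # $/kWe selling prices guided by some companies
installed_estimates = [round(p * multiple) for p in electrolyser_prices]
# ~$850-1,275/kW, the same order of magnitude as the $1,000-1,200/kW noted above
```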

Emissions from natural gas power plants are generally low. CO2 intensity is 0.3 kg/kWh from a 60% efficient combined cycle gas turbine (up to 70% below coal power plants). NOx emissions are usually below 25ppm but can be as low as 2ppm in the best models. Many new turbines are also hydrogen ready, and have been qualified for 25-75% hydrogen blending.
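These emissions figures can be reproduced approximately from first principles, as in the minimal sketch below; the fuel emissions factors are standard approximations, not numbers from the data-file.

```python
# Approximate CO2 intensity of gas vs coal power (standard emissions factor assumptions).
gas_fuel_co2 = 0.20     # kg CO2 per kWh of natural gas thermal energy (approx., LHV basis)
coal_fuel_co2 = 0.34    # kg CO2 per kWh of coal thermal energy (approx.)
ccgt = gas_fuel_co2 / 0.60          # ~0.33 kg/kWh at 60% efficiency, i.e. c0.3 in round numbers
coal_plant = coal_fuel_co2 / 0.38   # ~0.9 kg/kWh at a typical coal plant efficiency
reduction = 1 - ccgt / coal_plant   # ~60-70% lower CO2 per kWh than coal
```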
Flexibility of a gas-fired power plant is middling to high. A typical plant can ramp up or down by 15% of its nameplate capacity per minute, turn down to c25-50% of its load, and start up from cold in 20 minutes. Different examples are tabulated in the data-file.
Our outlook for gas turbines in the energy transition is published here. Leading companies in gas turbines are profiled here. Gas turbine operating parameters are compiled for a dozen gas turbine models in this data-file, as a useful reference, mainly designs from Siemens Energy, GE, Mitsubishi-Hitachi and Ansaldo.
Thermodynamics: Carnot, Rankine, Brayton & beyond?
Engines convert heat into work. They are governed by thermodynamics. This note is not a 1,000-page textbook. The goal is to explain different heat engines, simply, in 13 pages, covering what we think decision makers in the energy transition should know. The theory underpins the appeal of electrification, ultra-efficient gas turbines, CHPs, nuclear HTGRs and new super-critical CO2 power cycles.
Refrigerants: leading chemicals for the rise of heat pumps?
What chemicals are used as refrigerants? This data-file is a breakdown of the c1MTpa market for refrigerants, across refrigerators, air conditioners, vehicles, industrial chillers and, increasingly, heat pumps. The market is shifting rapidly towards lower-carbon chemicals, including HFOs, propane, iso-butane and even CO2 itself. We still see fluorinated chemicals markets tightening.
Refrigerants are used for cooling. The thermodynamic principle is that these chemicals have low boiling points (averaging around -30ºC). They absorb heat from their surroundings as they evaporate. Then later these vapors are re-liquefied using a compressor.
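For intuition on the thermodynamics, a minimal sketch of the ideal (Carnot) coefficient of performance is below; the temperatures are illustrative assumptions, and real systems achieve well below the ideal.

```python
# Ideal (Carnot) coefficient of performance for a vapor-compression cycle (illustrative temps).
T_cold = 273.15 - 5     # K: evaporator absorbing heat at -5C
T_hot = 273.15 + 35     # K: condenser rejecting heat at +35C
cop_cooling_ideal = T_cold / (T_hot - T_cold)   # ~6.7; real refrigeration systems run c2-4
cop_heating_ideal = T_hot / (T_hot - T_cold)    # ~7.7; the relevant ratio for heat pumps
```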
The global market includes over 1MTpa of refrigerants, for use in refrigerators (around 100 grams per fridge), passenger cars (1 kg per vehicle) and home AC systems (4 kg per home). There is also an industrial heating-cooling industry, ranging from MW-scale chillers that might contain 400kg of refrigerants up to large global LNG plants.
The market is growing. Structurally, heat pumps could add another 4kg of refrigerant demand per household, especially in markets such as Europe with traditionally low penetration of AC. Rapid rises are also occurring in global AC demand.
From the 1930s onwards, CFCs were used as refrigerants. But CFCs are inert enough to reach the middle of the stratosphere, where they are broken down by UV radiation, releasing chlorine radicals. These chlorine radicals break down ozone (O3 into O2). Hence by the 1980s, abnormally low ozone concentrations were being observed over the South Pole. Ozone depletion elevates the amount of UV-B radiation reaching Earth, increasing skin cancer and impacting agriculture. CFCs were therefore phased out under the Montreal Protocol of 1987, which entered into force in 1989.
CFCs were largely replaced with fluorocarbons, which do not deplete the ozone layer, but do have very high global warming potentials. For example, R-134a, which is tetrafluoroethane, is a 1,430x more potent greenhouse gas than CO2.
The Kigali Amendment was signed by UN Member States in 2016, and commits to phase down high-GWP HFCs by 85% by 2036. This has been supplemented by F-gas regulation in the EU and the AIM Act in the US. High-GWP fluorocarbons are effectively banned in new vehicles and stationary applications in the developed world.
In addition, there has long been a market for non-fluorinated chemicals as refrigerants, but the challenge with these alternatives is that they tend to be flammable. Over half of domestic refrigerators use iso-butane as their refrigerant, which is permissible under building codes because each unit only contains about 100 grams of refrigerant (e.g., in Europe, a safety limit has historically been set at around 150 grams of flammable materials in residential properties, and is being revised upwards).
So what outlook for the fluorinated chemicals industry? Overall, we think demand will grow mildly. It is true that regulation is tightening, and phasing out fluorocarbons.
However, some of the leading refrigerants that are being “phased in” as replacements actually use more fluorinated chemicals than the refrigerants they are replacing…
Hydrofluoroolefins (HFOs) have no ozone depleting potential and GWPs <10. As an example, R-1234yf is now used in over 100M vehicles, and comprises 67% fluorine by mass. This is an increase from the 44% fluorine content in R-22, which was the previous incumbent for vehicle AC systems.
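The fluorine mass fractions quoted above can be checked from the molecular formulae (R-1234yf is C3H2F4; R-22 is CHClF2), as in this minimal sketch:

```python
# Fluorine content by mass, from molecular formulae and standard atomic masses.
C, H, F, Cl = 12.011, 1.008, 18.998, 35.453
m_r1234yf = 3*C + 2*H + 4*F        # ~114.0 g/mol
m_r22 = C + H + Cl + 2*F           # ~86.5 g/mol
print(f"R-1234yf: {4*F/m_r1234yf:.0%} fluorine")   # ~67%
print(f"R-22:     {2*F/m_r22:.0%} fluorine")       # ~44%
```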
Impacts of electric vehicles? You could also argue that EVs will increase total refrigerant demand, as many fast-chargers have in-built cooling systems.
Using CO2 as a refrigerant could also be an interesting niche. It is clearly helpful for energy transition ambitions to increase the value of capturing and using CO2. But the challenge is that even if 215M annual refrigerator sales all used 100% CO2 as their refrigerant, this would only “utilize” around 25kTpa of CO2, whereas our Roadmap to Net Zero is looking for multi-GTpa scale CCUS.
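The arithmetic behind this point, as a minimal sketch:

```python
# CO2 'utilized' if every new refrigerator used CO2 (R-744) as its refrigerant.
fridge_sales_per_year = 215e6        # annual refrigerator sales
refrigerant_charge_kg = 0.1          # ~100 grams of refrigerant per fridge
co2_utilized_tpa = fridge_sales_per_year * refrigerant_charge_kg / 1_000   # ~21,500 tons/year
# i.e. c20-25kTpa, versus multi-GTpa (billions of tons per year) of CCUS in a net-zero roadmap.
```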
For heat pumps, we think manufacturers are going to use propane, CO2, HFOs and a small class of low-GWP fluoro-carbons. So there is a small pull on the fluorinated chemicals value chain from the ramp-up of heat pumps. But the main pull on the fluorinated chemicals chain is going to be coming from batteries and solar, as explored in our recent fluorinated polymers research.
Leading Western companies making refrigerants in the data-file include Honeywell, DuPont, Chemours, Arkema, Linde, and others in our fluorinated chemicals screen.
Building automation: energy savings, KNX case studies and companies?
High-quality building automation typically saves 30-40% of the energy needed for lighting, heating and cooling. This matters amidst energy shortages, and reduces payback times on $100-500k up-front capex. This data-file aggregates case studies of KNX energy savings, and screens 70 companies, from Capital Goods giants to private pure-plays.
KNX is an open standard for building automation. Under KNX, a smart building is given a central nervous system, called a bus: a green cable running throughout the building’s electrical systems. 500 hardware and software vendors from 45 countries make sensors, controllers, actuators and appliances that can be connected to the bus, and communicate with each other via KNX.
This data-file has reviewed 20 case studies of KNX building automation projects. Their average energy saving is 40%. In other words, well-automated ‘smart buildings’ use 40% less energy than they did prior to the building automation project.

In lighting, the average energy saving is 40%. For example, the KNX system might use a presence detector to switch off the lights if no one is in a room. Or it might use a lighting sensor to turn off lights if enough sunlight is already streaming in through the windows. Particularly sophisticated systems will detect ambient light across an entire room and maintain a constant, pre-determined brightness level, which might mean dimming lights nearer to the windows and brightening those further away.
In heating, the average energy saving is 30%. The KNX system might use temperature sensors to avoid over-heating a room, especially if no one is present, or scheduled to be present, or at night. Some systems link to window sensors, and will not heat a room if a window is open. Air quality sensors in ventilation ducts can also prevent over-ventilating a room and thereby over-cooling it.
In air conditioning, the average energy saving is 40%. Typical functionality is similar to heating described above. In addition, the KNX system might close blinds during the day to prevent heat gain in a room where no one is present, or automatically open windows at night to release heat when outside temperatures are cool.
Energy savings are clearly helpful amidst a protracted energy shortage. It is notable from the data-file that KNX projects have historically been expensive, costing $100-500k at a large commercial building where 3,000 KNX devices might be installed over >5,000m2. Payback times are around 5-10 years in normal times. But they are also a direct function of energy prices.
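As a rough illustration of the payback math, here is a minimal sketch under loudly hypothetical assumptions for building energy use and power prices (neither figure is from the data-file):

```python
# Hypothetical payback sketch for a large commercial building automation project.
annual_energy_kwh = 1.5e6    # assumed lighting/heating/cooling energy use per year
energy_price = 0.10          # $/kWh, assumed
saving_share = 0.35          # mid-point of the 30-40% savings in the case studies
capex = 300_000              # $, within the $100-500k range above
annual_saving = annual_energy_kwh * energy_price * saving_share   # ~$52,500 per year
payback_years = capex / annual_saving   # ~5.7 years; shorter if energy prices spike
```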
Thus we have screened 70 leading providers of KNX products. It is a large landscape of companies. But we think 30-40% of projects will typically feature at least some components from large European Capital Goods giants, such as ABB, Siemens and Schneider. 7-30% of projects feature components from a dozen private, specialist companies (chart below).

The data behind the building automation energy savings, KNX case studies, and the full screen of leading companies can be downloaded via the button below.
Heap leaching: energy economics?
This data-file captures the energy economics of leaching processes in the mining industry, especially the costs of heap leaching, for the extraction of copper, nickel, gold, silver, other precious metals, uranium, and Rare Earths. The data-file allows you to stress test costs in $/ton of ore, $/ton of metal, capex, opex, chemicals costs, energy intensity and CO2 intensity.
How does heap leaching work? A sloping area of 500,000 to 1,000,000 m2 is excavated, lined with a polymer membrane (e.g., HDPE). Gradually, finely ground ore is added, reaching typical heights of around 10 meters, or around 15 tons of ore per m2. As the ore piles up, leaching chemicals are sprayed onto the surface. Sulphuric acid dissolves metals such as copper. Sodium cyanide forms soluble complexes with precious metals such as gold and silver.
75-85% of the valuable metals might thus be extracted in the pregnant leach solution, which drains off (chart below). This leach solution is then accumulated for further processing, for example, by electrowinning.
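For a sense of scale, the figures above imply the pad tonnage below; the crushed-ore bulk density of c1.5 t/m3 is the assumption implied by 15 tons per m2 over 10 meters.

```python
# Rough heap leach pad capacity implied by the figures above.
pad_area_m2 = 750_000        # mid-point of the 500,000-1,000,000 m2 range
ore_tons_per_m2 = 15         # at ~10 m height, implying ~1.5 t/m3 bulk density
total_ore_tons = pad_area_m2 * ore_tons_per_m2   # ~11 million tons of ore on the pad
```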

The typical costs of heap leaching with H2SO4 might contribute $2.5/kg to the costs of processing an industrial metal, under typical input assumptions. Energy consumption and CO2 intensity are both very low: most likely around 2kWh/ton of ore, with a fully loaded CO2 intensity of 4 grams per ton of ore.
The typical costs of heap leaching with NaCN might contribute $2-30/Oz to the costs of processing a precious metal, depending on the ore grade. Despite the very low energy cost per ton of ore, the numbers are amplified by an order of magnitude by very low ore grades.
For example, if an ore contains 40 grams/ton of gold-equivalents, and heap leaching recovers 80% of this material, then you will need to process c30 tons of ore to recover 1 kg of gold-equivalents (i.e., c30,000 kg of ore per kg of gold). Thus heap leaching alone can generate 50kg of CO2e per kg of gold. Precious metals and Rare Earths are among the most CO2-intensive materials in the world, although their total CO2 footprint is diminished by using them in small quantities.
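The arithmetic in this example, as a minimal sketch; the c1.6 kg CO2e per ton of ore is the figure implied by the text, not an independently sourced number.

```python
# Ore tonnage and CO2 per kg of gold-equivalents, from the figures above.
ore_grade_g_per_ton = 40       # grams of gold-equivalents per ton of ore
recovery = 0.80                # heap leach recovery rate
ore_tons_per_kg = 1_000 / (ore_grade_g_per_ton * recovery)   # ~31 tons of ore per kg recovered
co2_kg_per_ton_ore = 1.6       # implied by ~50 kg CO2e per kg of gold (assumption)
co2_kg_per_kg_gold = ore_tons_per_kg * co2_kg_per_ton_ore    # ~50 kg CO2e per kg of gold
```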
80-90% energy and CO2 savings. If you are extracting copper from a 1% ore grade, heap leaching will require 225kWh/ton-Cu of energy, while a subsequent electro-winning step might require 500kWh/ton. By contrast, smelting might require 6,000 kWh/ton. This is the near-alchemy of hydrometallurgical processing, and why we consider heap leaching to be an efficiency technology.
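And the copper comparison, as a minimal sketch:

```python
# Energy saving of the heap leach + electrowinning route versus smelting, per ton of copper.
heap_leach_kwh_per_ton_cu = 225
electrowinning_kwh_per_ton_cu = 500
smelting_kwh_per_ton_cu = 6_000
hydromet_total = heap_leach_kwh_per_ton_cu + electrowinning_kwh_per_ton_cu   # 725 kWh/ton
saving = 1 - hydromet_total / smelting_kwh_per_ton_cu   # ~88%, i.e. within the 80-90% range
```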
The energy economics of leaching operations depend on capex, opex, ore grades, recovery rates and chemicals costs. All of these variables can be stress-tested in our model, in the tabs overleaf, and are backed up by data aggregated from technical papers.
Energy efficiency: an overview?
Energy efficiency denotes the useful energy output of a process divided by the input energy, most often thermal energy, that must be supplied. Hence this data-file looks across the models and data-files that we have built up in our research into energy technologies and the energy transition, in order to give an overview of energy efficiency, comparing and contrasting different processes.
Electrification increases efficiency, at the process level. This is a general theme across our analysis of transport, industry and heat, seen most notably in replacing internal combustion engine vehicles (15-20% efficient) with electric vehicles (80-90% efficient), on an apples-to-apples basis. Our other data-files give more underlying detail and bottom-up calculations on vehicle fuel economy.
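A heavily simplified well-to-wheel sketch is shown below, purely for illustration; every number here is an assumption (generation efficiency, charging losses, drivetrain efficiency), not a figure from our data-files.

```python
# Illustrative well-to-wheel comparison (all numbers are assumptions).
ice_fuel_to_wheel = 0.17                  # internal combustion engine, fuel tank to wheel
ev_chain = 0.55 * 0.90 * 0.85             # gas power generation x grid/charging x battery-to-wheel
advantage = ev_chain / ice_fuel_to_wheel  # ~2.5x less primary energy per km, on this basis
```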
New energies technologies are not always efficient, and some are simply easier to implement in a world that is in energy surplus than a world that is in energy deficit. The best examples in this category are hydrogen, batteries and CCS.
Conventional energy is also getting more efficient. It would be wrong to conclude that today’s incumbent base of turbines, furnaces and industrial processes is sitting around like turkeys waiting for Thanksgiving. We see amazing potential to improve conventional energy efficiency, from CHPs, to improved heat recovery, to improved catalysts.
The overview of energy efficiency simply offers our ‘best number’, which can be taken as a general rule of thumb for the energy efficiency of different processes, along with 1-6 lines of explanation, plus links to c30 of our underlying data-files, which give fuller and more detailed calculations, category-by-category.
PureCycle: polypropylene recycling breakthrough?
This technology review gives an overview of PureCycle Technologies, which was founded in 2015, is headquartered in Ohio/Florida, USA, went public via SPAC in 2021, and currently has c150 employees. The company aims to recycle waste polypropylene into virgin-like polypropylene, preventing plastic waste, while saving 79% of the input energy and 35% of the CO2 compared with virgin product.
Why is this challenging? Even after sorting and washing, plastic waste is still contaminated with spoiled food, chemicals, dyes and pigments, resulting in recycled product that is dark and low-quality. Several other patents have sought to address these issues, but with only partial success, and via complex and/or costly methods.
Controversies have been raised by a critical report from a short-selling firm. However, our usual patent review allowed us to infer how the process is envisaged to work, including good, specific and intelligible details covering the solvents, filtration methods, medium temperatures and medium-high pressures (data here). This de-risks some elements of the technology.
This note contains our observations from our PureCycle technology review, covering the company’s ambitions and challenges, and scores the patent library on our usual dimensions of problem specificity, solution specificity, intelligibility, focus and manufacturing readiness. We also tabulate technical data presented in several patents.
Further research. Our recent commentary on PureCycle technology is linked here.