CO2 intensity of materials: an overview?


This data-file tabulates the energy intensity and CO2 intensity of materials, in tons of CO2 per ton of material, kWh of electricity per ton and kWh of total energy per ton. The build-ups are based on 160 economic models that we have constructed, and the data-file is intended as a helpful summary reference. Our key conclusions on the CO2 intensity of materials are below.


Human civilization produces over 60 bn tons per year of ‘stuff’ across 40 different material categories, accounting for 40% of all global energy use and 35% of all global emissions.

Rules of thumb. Producing the average material in our data-file consumes 5,000 kWh/ton of primary energy and emits 2 tons/ton of CO2.

Energy breakdowns. As another rule of thumb, 30% of the energy inputs needed to make a typical material are electricity, 25% are heat and 45% are other input materials.
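As a rough illustration of how these rules of thumb knit together, the sketch below builds up a CO2 intensity from an assumed energy split and assumed emissions factors per kWh; all of the inputs are placeholders for stress-testing, not outputs of our economic models.

```python
# Illustrative build-up of CO2 intensity for a hypothetical "average" material.
# All inputs below are assumptions for stress-testing, not data-file outputs.

total_energy_kwh_per_ton = 5_000          # rule-of-thumb primary energy use
share_electricity, share_heat, share_materials = 0.30, 0.25, 0.45

# Assumed emissions factors (kg CO2 per kWh of each energy input)
ef_electricity = 0.4   # e.g., a mixed power grid
ef_heat        = 0.25  # e.g., gas-fired heat
ef_materials   = 0.35  # embedded CO2 of other material inputs, per kWh of embedded energy

co2_kg_per_ton = total_energy_kwh_per_ton * (
    share_electricity * ef_electricity
    + share_heat * ef_heat
    + share_materials * ef_materials
)
print(f"CO2 intensity ~ {co2_kg_per_ton/1000:.1f} tons/ton")  # ~1.7 tons/ton vs the 2 tons/ton rule of thumb
```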

Ranges. All of these numbers can vary enormously (chart below). Energy intensity of producing materials ranges from 300 kWh/ton (bottom decile) to 150,000 kWh/ton (upper decile).

The average thermodynamic efficiency of producing these industrial materials is quantified at c20%, with an interquartile range from 5% to 50%. This is shown in the chart below and discussed in more detail here.

CO2 intensity of producing different materials also ranges from 0.5 tons/ton (bottom decile) to 140 tons/ton (upper decile).

Strictly, many of the largest contributors to global CO2 emissions, such as steel and cement, are not ‘carbon intensive’ (i.e., emissions are <2 tons/ton); they are simply produced in very large volumes.

Ironically, achieving an energy transition requires ramping up production across materials value chains that truly are CO2 intensive (i.e., emissions above 20 tons/ton or even 100 tons/ton). This includes PV silicon and silver for solar panels; carbon fiber and rare earths for wind turbines; and lithium and SiC MOSFETs for electric vehicles. Ultimately these value chains also need to decarbonize in some non-inflationary way, which is a focus in our research.

Scope 4 CO2. Another complexity is that everything has a counterfactual. SiC MOSFETs might be energy intensive to produce, but they earn their keep in long-term efficiency savings. Hence we recommend that the best way to evaluate total CO2 intensity is on a Scope 1-4 basis (note here).

Simplifications. Please note that in order to make this file remotely useful, we are guilty of simplifying and averaging quite complex and broad-ranging industries. More detail is available on different oil value chains (including oil sands and Permian shale in detail), gas value chains, coal grades, industrial boilers and burners by industry, construction materials and different plastics.

CO2 screening. In some industries, we have been able to aggregate CO2 curves, plotting the different CO2 intensities or energy intensities of different companies. The best examples cover US oil and gas acreage, position by position, plus refiners, gas pipelines, gas gathering, gas distribution and ethanol plants.

Other data-files on our website have aimed to tabulate the CO2 intensity of other value chains, but due to quirks of those value chains, we cannot plot the data in kWh/ton or CO2/ton. This includes the CO2 of different forms of transportation, digital processes, or hydrogen.

Agricultural commodities are also not captured in the data-file. We have separately estimated the CO2 intensity of different wood fuels, crop production (and how it varies with fertilizer application) and palm oil. All of our biofuels research is here.

US gas transmission: by company and by pipeline?

This data-file aggregates granular data into US gas transmission, by company and by pipeline, for 40 major US gas pipelines which transport 45TCF of gas per annum across 185,000 miles; and for 3,200 compressors at 640 related compressor stations.


This data-file aggregates data for 40 large US gas transmission pipelines, covering 185,000 miles, moving the US’s 95bcfd gas market. Underlying data are sourced from the EPA’s FLIGHT tool.

Long-distance gas transmission is highly efficient, with just 0.008% of throughput gas thought to leak directly from the pipelines. Around 1% of the throughput gas is used to carry the remaining molecules an average of 5,000 miles from source to destination, with total CO2-equivalent emissions of 0.5 kg/mcfe. Numbers vary by pipeline and by operator.
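As a rough cross-check of the 0.5 kg/mcfe figure, the sketch below combines the c1% fuel-gas use and 0.008% leakage rate with assumed combustion and GWP factors; the factors are our illustrative assumptions, not EPA inputs.

```python
# Rough cross-check of gas transmission CO2e intensity (kg CO2e per mcf delivered).
# Assumed factors, for illustration only.
fuel_gas_share   = 0.01      # ~1% of throughput burned in compressors
leakage_rate     = 0.00008   # ~0.008% of throughput leaked as methane
co2_per_mcf      = 53.0      # kg CO2 from combusting 1 mcf of natural gas (assumed)
ch4_kg_per_mcf   = 19.2      # kg of methane per mcf (assumed)
gwp_methane      = 28        # 100-year GWP of methane (assumed)

combustion_co2 = fuel_gas_share * co2_per_mcf                  # ~0.53 kg/mcf
leakage_co2e   = leakage_rate * ch4_kg_per_mcf * gwp_methane   # ~0.04 kg/mcf
print(f"Total ~ {combustion_co2 + leakage_co2e:.2f} kg CO2e/mcf")  # ~0.6, in line with ~0.5 kg/mcfe
```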

Five midstream companies transport two-thirds of all US gas, with large inter-state networks, and associated storage and infrastructure.

The largest US gas transmission line is Williams’s Transco, which carries c15% of the nation’s gas from the Gulf Coast to New York.

The longest US gas transmission line is Berkshire Hathaway Energy’s Northern Natural Gas line, running 14,000 miles from West Texas and stretching as far north as Michigan’s Upper Peninsula.

Our outlook in the energy transition is that natural gas will emerge as the most practical and low-carbon backstop to renewables, while volatile renewable generation may create overlooked trading opportunities for companies with gas infrastructure.

In early-2024, we have updated the data-file, screening all US gas transmission by pipeline and by operator, using what are currently the latest EPA disclosures from 2022.

Previously, we undertook a more detailed analysis, matching up separately reported compressor stations to each pipeline (80% of the energy use and CO2e come from compressors), to plot the total CO2 intensity and methane leakage rate, line by line (see backup tabs).


US gas transmission by company is aggregated — for different pipelines and pipeline operators — in the data-file, to identify companies with low CO2 intensity despite high throughputs.

MOSFETs: energy use and power loss calculator?

MOSFETs are fast-acting digital switches, used to transform electricity, across new energies and digital devices. MOSFET power losses are built up from first principles in this data-file, averaging 2% per MOSFET, with a range of 1-10% depending on voltage, switching, on resistance, operating temperature and reverse recovery charges.


MOSFETs and other power transistors matter, as they are the basis for solar inverters, wind converters, electric vehicle traction inverters, AC-DC rectifiers, other DC-DC converters, the power supplies to data-servers, AI and other digital devices.

Transistors are digital switches made of semiconductor materials, which allow one circuit to control another. Our overview of semiconductors explains how a transistor works, from first principles, by depicting a bipolar junction transistor (BJT), which is driven by current.

However, it is better to control a transistor using voltage than current. Ambient electrical fields induce currents that can cause current-driven transistors to misfire. Hence MOSFETs and IGBTs are driven by voltage.

MOSFETs were invented at Bell Labs in 1959 and are now the most used power semiconductor device in the world. Something like 2 x 10^22 transistors have been produced across human history by 2023.

MOSFETs: how do they work?

We are sorry to say it, but it is simply not possible to understand how a MOSFET works, without a basic understanding of voltage, current, conduction band electrons, valence band holes, N-type semiconductor, P-type semiconductor and Fermi Levels. Do not despair! To help decision-makers understand these concepts, we have written an overview of semiconductors and an overview of electricity.

MOSFET stands for Metal Oxide Semiconductor Field Effect Transistor. Actually this is something of a misnomer. The eponymous ‘metal oxide’ refers to an oxide of silicon, in other words a highly pure layer of silicon dioxide, grown by oxidizing the silicon to create an insulating layer. In turn, the reason for creating this insulating layer is so that a Field Effect can be induced by a potential difference (voltage) across the gate.

Why can’t current flow through a MOSFET in the off-state? A simplified diagram of an N-channel enhancement MOSFET is shown below. Ordinarily, electrons cannot flow from the source to the drain, due to the PN junction between the body and the drain, which is effectively a reverse-biased diode. A negative voltage at source draws in the mobile holes from the P-type semiconductor. A positive voltage at the drain attracts the mobile electrons in the N-type layer. And this creates a depletion zone where no current can flow, just like in any other diode.

How can current flow through a MOSFET in the on-state? The ‘Field Effect’ occurs when a positive voltage is applied to the gate, raising the Fermi level of the P-type semiconductor. Remember the Fermi Level is the energy level with a 50% probability of being occupied by an electron. A large enough voltage raises the Fermi Level above the lower bound of the conduction band. Suddenly there is a sea of mobile electrons, forming an N-channel, so that electrons can flow from source to drain.

What are the power losses in a MOSFET?

Resistive losses occur when a current flows through a semiconductor, proportional to the on-resistance of the semiconductor, and to the square of the current. The on-resistance of different MOSFETs is typically in the range of 0.1-0.6 Ohms, at power ratings of 1-20kW, based on data-sheets from the leading manufacturer, Infineon (as profiled in our screen of SiC and MOSFET companies).
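As an illustration, conduction losses can be approximated as I²R, using on-resistance values in the range above; the power flow and bus voltage below are assumed:

```python
# Conduction (I^2 * R) loss in a MOSFET, using illustrative values.
def conduction_loss_w(current_a: float, r_ds_on_ohm: float) -> float:
    """Power dissipated while the MOSFET is on: P = I^2 * R_ds(on)."""
    return current_a ** 2 * r_ds_on_ohm

power_w   = 5_000          # 5 kW flowing through the device (assumed)
voltage_v = 800            # bus voltage (assumed)
current_a = power_w / voltage_v
loss = conduction_loss_w(current_a, r_ds_on_ohm=0.1)
print(f"Conduction loss ~ {loss:.0f} W, i.e., {100*loss/power_w:.1f}% of the power flow")
```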

Hence a better depiction of an N-channel enhancement MOSFET follows below. In the chart above, the N-channel through the P-layer is long and thin, which results in high resistance. Hence in the chart below, the NPN junction is slimmed down, and the on-resistance from source to drain is lower, which helps efficiency.

Raising voltage is also going to reduce I2R conduction losses, because less current is flowing. However, voltage is limited by a MOSFET’s breakdown voltage. Above this level, the PN junction will fail to block the flow from source to drain, and the MOSFET will be destroyed (avalanche breakdown). The voltage ratings of different MOSFETs are tabulated in our data-file. A clear advantage for silicon carbide power MOSFETs is their higher breakdown voltage, which allows them to be operated at higher voltages across the board, reducing conductive losses.

Switching losses are also incurred whenever a MOSFET turns on or off. When the MOSFET is off, there is a large potential difference (voltage) between the source and the drain. When the MOSFET turns on, current flows in while the voltage is still high, which dissipates power. And then the voltage falls when current flow is high, which dissipates power. The same effect happens in reverse when the MOSFET is switched off. These losses add up, as the pulse width modulation in an inverter will often exceed 20kHz frequency. And the latest computer chips run with a clock speed in the GHz. Minimizing switching losses is the rationale for soft switching, being progressed by companies such as Hillcrest.
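A common first-order approximation for switching losses multiplies the voltage-current overlap during each transition by the switching frequency; the rise/fall times below are assumed, and the data-file’s own formulae may differ:

```python
# First-order switching loss estimate: energy is dissipated while voltage and
# current overlap during each turn-on and turn-off event.
def switching_loss_w(v_bus: float, i_load: float, t_rise_s: float,
                     t_fall_s: float, f_sw_hz: float) -> float:
    return 0.5 * v_bus * i_load * (t_rise_s + t_fall_s) * f_sw_hz

loss = switching_loss_w(v_bus=800, i_load=6.25, t_rise_s=50e-9,
                        t_fall_s=50e-9, f_sw_hz=20e3)  # assumed values
print(f"Switching loss ~ {loss:.1f} W at 20 kHz")  # scales linearly with switching frequency
```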


A reverse recovery loss is also incurred by a MOSFET, because every time the MOSFET switches on, the body diode needs to be swept from forward conduction back into reverse bias. This physically requires moving charge carriers, or in other words, requires flowing a current. The reverse recovery loss can often be the largest single loss on a MOSFET.
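Reverse recovery losses can similarly be approximated from the reverse recovery charge (Qrr) quoted on a data-sheet; a minimal sketch, with assumed values:

```python
# Reverse recovery loss: the body diode's stored charge (Qrr) must be swept out
# against the bus voltage on every switching cycle.
def reverse_recovery_loss_w(q_rr_c: float, v_bus: float, f_sw_hz: float) -> float:
    return q_rr_c * v_bus * f_sw_hz

loss = reverse_recovery_loss_w(q_rr_c=1e-6, v_bus=800, f_sw_hz=20e3)  # 1 uC of Qrr assumed
print(f"Reverse recovery loss ~ {loss:.0f} W at 20 kHz")
```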

Transistors: IGBTs vs MOSFETs?

IGBT stands for Insulated Gate Bipolar Transistor, another transistor design that has been heavily used in solar, wind, electric vehicles and other new energies applications.

An IGBT is effectively a MOSFET coupled with a Bipolar Junction Transistor, to improve the current controlling ability.

IGBTs are generally more expensive than MOSFETs, and can handle higher currents at lower losses. However when switching speeds are high (above 20kHz), MOSFETs have lower losses than IGBTs, because IGBTs have slow turn off speeds with higher tail currents.

Finally, in the past, it was suggested that IGBTs performed better than MOSFETs above breakdown voltages of 400V, although this is now more nuanced, as there are many high-performance MOSFETs with voltage ratings in the range of 600-2,000 V.

The very highest voltage IGBTs and MOSFET modules we have seen are in the range of 6-12 kV. This explains why so much of new energies requires generating at low-medium voltage then using transformers to step up the power for transmission; or conversely using transformers to step down the voltage for manipulation via power electronics modules.

Formulae for the losses in a power MOSFET?

This data-file aims to calculate the power losses of a power MOSFET from first principles, covering I2R conduction losses, voltage drops across the diode, switching losses and reverse recovery losses, so that important numbers can be stress tested.

Generally, the losses through a MOSFET will range from 1-10%, with a base case of 2% per MOSFET. These numbers consist of conduction losses, voltage drops across the diode layer, switching losses and reverse recovery charges.

Losses add. Many circuit designs contain multiple MOSFETs, or layers of MOSFETs and IGBTs (example below). Roughly, flowing power through 6 MOSFETs, each at c2% losses, explains why the EV fast-charging topology depicted below might have losses in the range of 10-20%.
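The compounding of per-device losses can be sketched as below; the six stages at c2% each are illustrative, not the exact topology in the chart:

```python
# Compounding losses across a chain of power-electronic stages.
from math import prod

per_stage_losses = [0.02] * 6   # six MOSFET stages at ~2% loss each (illustrative)
efficiency = prod(1 - loss for loss in per_stage_losses)
print(f"Chain efficiency ~ {100*efficiency:.1f}%, total loss ~ {100*(1-efficiency):.1f}%")
# ~88.6% efficient, i.e., ~11% losses, at the low end of the 10-20% range cited above
```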

Power losses in a MOSFET rise as a function of higher switching speeds, as calculated in the data-file, shown in the chart below, and for the reasons stated above. High switching speeds produce a higher quality power signal, but are also more energetically demanding.

Power losses in a MOSFET fall as a function of voltage, as calculated in the data-file, shown in the chart below, and for the reasons stated above; although lower-voltage MOSFETs face less electrically demanding conditions and are less expensive.

Overall, our model is intended as a simple, 30-line calculator to compute the likely power flow, electricity use and losses in a MOSFET. This should enable decision makers to ballpark the loss rates of MOSFETs, and power electronic devices containing them.

However, interaction effects are severe. Drain current, breakdown voltage, gate voltage, temperature, on resistance, reverse recovery charges and all of the switching times depend on one another. Hence for specific engineering of MOSFETs it is better to consult data-sheets.

Semiconductor manufacturers also stood out in our recent review of market concentration versus operating margins.

Battery cathode active materials and manufacturing?


Lithium ion batteries famously have cathodes containing lithium, nickel, manganese, cobalt, aluminium and/or iron phosphate. But how are these cathode active materials manufactured? This data-file gathers specific details from technical papers and patents by leading companies such as BASF, LG, CATL, Panasonic, Solvay and Arkema.


New energies are entering an age of materials, where an increasing share of costs are accruing to materials companies, while more advanced materials hold the key to continued efficiency gains and cost deflation (per our research note below).

There is a mild temptation to gloss over the complexity of manufacturing battery cathodes, as though you ‘just get some metals and wop them in’. The reality is a complex, ten step process, which also explains some of the challenges ahead for battery recycling.

Cathode manufacturing: ten-stage process?

(1) Lithium is sourced as lithium hydroxide or lithium carbonate in the first stage of manufacturing a lithium ion battery cathode.

(2) Lithium inputs are then combined with transition metals and other additives. The transition metals may include nickel, manganese, cobalt, or iron phosphate precursors. Different cathode chemistries and their resultant properties are covered in this data-file.

(3) Precursor cathode active materials are then typically heated in an oxygen enriched atmosphere for c12-hours at c700ºC. The aim is to calcine away impurities and form coherent metal oxide crystals. Energy use is likely 60-100 kWh/kWh of batteries, per our data-file here. Variations are discussed in the data-file.

(4) Next may follow various stages of sieving, crushing-grinding, acid-treating, dosing with additives, washing and drying, to modify the outer surface of the cathode active materials. The details also vary quite widely between patents in the data-file.

(5) Next the Cathode Active Materials, which are typically 92% of the weight of a finished battery cathode, are mixed with a conductive carbon additive, most often carbon black, but also potentially graphite or carbon nanotubes, which will typically form 5% of the cathode.

(6) A fluorinated polymer binder, most often PVDF, is also sourced. This is an inert plastic that physically holds all of the other active materials together and will typically form 4% of the cathode. PVDF is also used to bind graphite at the anode.

(7) All of these Cathode Active Materials are then dissolved in a solvent, typically N-Methyl-2-Pyrrolidone (NMP), to form a mixed slurry of c60% solids, c40% NMP.

(8) The slurry is deposited onto a 10-20μm thick aluminium current collector (for contrast, the anode side of the cell tends to use a thinner, copper current collector).

(9) The NMP is then evaporated and recovered by heating for 12-hours at 110ºC. Note this is below the 200ºC boiling point of NMP, because PVDF binder is only rated to 120ºC. Hence the process may require a mild vacuum and long heating times so evaporation can occur via Boltzmann statistics (chart below). Energy use is likely 20-40 kWh/kWh, again per our data-file here.

(Chart: Boltzmann probability distribution function versus temperature in Kelvin)

The resultant battery cathode will typically have a thickness of 70μm, containing 15 mg/cm2 of active material.
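The Boltzmann argument in step (9) can be sketched numerically: the fraction of solvent molecules energetic enough to escape the liquid scales roughly with exp(-ΔHvap/RT), so drying at 110ºC rather than at NMP’s boiling point is possible, just much slower. The enthalpy of vaporization below is an assumed, approximate value:

```python
# Rough Boltzmann-factor comparison of NMP evaporation propensity at two temperatures.
from math import exp

R = 8.314                # J/mol/K
dH_vap = 53_000          # J/mol, approximate enthalpy of vaporization of NMP (assumed)

def boltzmann_factor(temp_c: float) -> float:
    return exp(-dH_vap / (R * (temp_c + 273.15)))

ratio = boltzmann_factor(110) / boltzmann_factor(200)
print(f"Relative evaporation propensity at 110C vs 200C ~ {ratio:.2f}")
# A small fraction of the rate near the boiling point, hence mild vacuum and long drying times.
```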

(10) Half-cell manufacturing. To prevent oxidation of the cathode, some processes then immediately manufacture half-cells under an inert atmosphere (e.g., nitrogen, argon). Electrolyte is added to the half-cell, most often by dissolving the ionic salt LiPF6 at 1M concentration in a mixed solvent of ethylene carbonate, ethyl methyl carbonate, diethyl carbonate, propylene carbonate, et al. Then a polymer separator is added. The average finished battery cell is 3.5mm thick.

Cathode manufacturing: leading companies?

Two companies stood out as having filed the most patents into battery cathode active materials and manufacturing, often departing quite materially from the simplified description above, suggesting proprietary processes. Details are in the data-file.

Materials companies also stood out in the patents and technical papers, as the same companies were often listed as supplying high-grade materials. For example, PVDFs from Solvay, and carbon black from Imerys, although we also wonder about using Huntsman’s new MIRALON product here. Details of metals suppliers, NMP suppliers, separator suppliers and niche equipment suppliers are in the data-file.

Energy intensity of fiber optic cables?


What is the energy intensity of fiber optic cables? Our best estimate is that moving each GB of internet traffic through the fixed network requires 40Wh/GB of energy, across 20 hops, spanning 800km and requiring an average of 0.05 Wh/GB/km. Generally, long-distance transmission is 1-2 orders of magnitude more energy efficient than short-distance.


An optical fiber consists of a glass core, through which light signals can travel ultra-rapidly via total internal reflection, surrounded by at least one sheathing layer.

A fiber optic cable consists of at least one optical fiber, often many, surrounded by protective exterior layers of sheathing and possibly armoring. By 2020, over 5bn kilometers of fiber-optic cables have been deployed globally. Estimates vary, but the fiber optic cable market is likely worth $10bn per year.

A transceiver (aka a transducer) is an electric device that converts electrical signals into light signals (e.g., a laser-based transmitter) or vice versa (e.g., a photo-diode based receiver). The optical transceiver market is worth around $10bn per year.

The fiber optic network is a constellation of transceivers and fiber optic cables, which are capable of transmitting data between data-centres and internet users. A commonly used acronym is PON, which stands for Passive Optical Network: a fiber network that distributes data via unpowered optical splitters.

Bitrate is the capacity of a digital network to transmit information. It is measured in Gbps. Gbps stands for Gigabits per second. 1 Gbps means that 1 billion bits of information can be passed along the fiber optic cable each second (there are 8 bits in a byte).

The frequency of a fiber optic system is measured in MHz. 1 MHz means that the cable can carry 1 million distinct ‘packets’ of information per second (i.e., 1 Mbps). Typical frequencies are 10-100MHz, but can reach into the GHz range.

Many distinct signals can be carried through a fiber optic cable at the same time by “multiplexing” them. This might include carrying them at different frequencies or wavelengths. For example, ‘dense wavelength division multiplexing’ (DWDM) can carry 80-100 different frequencies through the same optical fiber (different colors). The signals can later be de-multiplexed.

Prysmian notes a typical fiber optic cable will enable 1-10 Gbps download speeds, which is 30-50x faster than a comparable copper cable (25-300 Mbps), and an order of magnitude above satellites or DSL (0.5 – 75 Mbps). The world record for data transmission through a fiber optic cable at the time of writing is 1.84 petabits per second (achieved in 2022 by researchers from the Technical University of Denmark, on a single, 7.9km fiber optic cable, split into 37 lines with 223 frequencies per line). This is something equivalent to transmitting 1bn Zoom calls simultaneously.

The strength of a signal in a fiber optic cable is measured in dBm, where 0 dBm is the equivalent of 1mW (1,000th of a Watt). Note that decibels are logarithmic around base 10. Hence 10dBm is equivalent to 10mW, 20dBm is equivalent to 100mW and 30dBm is equivalent to 1W; while -10dBm is 0.1mW, -20dBm is 0.01mW and -30dBm is 0.001mW.
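The dBm-to-mW conversion can be written out explicitly; a minimal sketch:

```python
# Convert optical power from dBm to mW (0 dBm = 1 mW, logarithmic base 10).
def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

for dbm in (30, 20, 10, 0, -10, -20, -30):
    print(f"{dbm:>4} dBm = {dbm_to_mw(dbm):g} mW")
# 30 dBm = 1000 mW (1 W), 0 dBm = 1 mW, -30 dBm = 0.001 mW
```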

Attenuation is the difference between the launch power of the signal from the transmitter and the power of the signal at the receiver. The rate of attenuation will depend on the precise dimensions and materials of the cable, but a good rule of thumb is in the range of 0.2dB per kilometer. Each connector likely also introduces a 0.75dB loss.

Combating the impacts of attenuation across a longer cable will require either: projecting a higher power level from the initial transmitter; deploying a more sensitive (and thus more expensive) receiver; or installing a series of amplifiers/repeaters along the length of the cable (e.g., every 20km) to boost the signal. All of this adds to the energy intensity of fiber optic cables.
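Putting these rules of thumb together, a simple link-budget sketch (with an assumed launch power and receiver sensitivity) shows why longer runs need amplification:

```python
# Simple fiber link budget using the rules of thumb above.
launch_power_dbm   = 0.0     # assumed transmitter launch power
receiver_floor_dbm = -28.0   # assumed receiver sensitivity
fiber_loss_db_km   = 0.2     # ~0.2 dB per kilometer
connector_loss_db  = 0.75    # ~0.75 dB per connector

def received_power_dbm(length_km: float, n_connectors: int = 2) -> float:
    return launch_power_dbm - fiber_loss_db_km * length_km - connector_loss_db * n_connectors

for km in (20, 80, 150):
    p = received_power_dbm(km)
    status = "OK" if p >= receiver_floor_dbm else "needs amplification"
    print(f"{km:>3} km: {p:+.1f} dBm -> {status}")
```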

Another limitation on the length of a cable comes from dispersion. This is not related to the signal attenuating (i.e., getting weaker). It is due to the signal ‘spreading out’ and becoming noisy. To combat dispersion, filtering needs to be applied within the circuitry of the amplifier/repeater. As a general rule, thin “single mode” fibers, with c10μm cores will have longer ranges than thicker “multi-mode” fibers with c50-100μm cores, as the thinner core confines the light more and limits dispersion. So in a sense, a fiber optic cable is the opposite of a pipeline, where greater widths enable greater flow.

Using these definitions, we can compile data into the energy consumption of fixed fiber lines and their bit rates. Using these numbers, we can estimate the power consumption of data transmission infrastructure, which is ‘always on’, transmitting signals on one side of a fiber optic cable and listening for signals on the other side.

Power consumption of fiber optic cables can range from 0.01-100 W/Gbps depending on the length of the cable (chart below). As a mid-point, a 2-5km cable might have a power consumption of 1W/Gbps and consume around 0.1 Wh/GB of data transmission, which equates to 0.05 Wh/GB/km. Numbers can be flexed in the data-file.
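The conversion from W/Gbps into Wh/GB depends heavily on utilization, because the equipment is ‘always on’; a hedged sketch, where the utilization factor is our assumption:

```python
# Convert always-on power per unit of capacity (W/Gbps) into energy per data moved (Wh/GB).
def wh_per_gb(w_per_gbps: float, utilization: float) -> float:
    gb_per_hour_per_gbps = utilization * 3600 / 8   # 1 Gbps fully utilized moves 450 GB/hour
    return w_per_gbps / gb_per_hour_per_gbps

print(f"{wh_per_gb(1.0, utilization=0.02):.2f} Wh/GB")  # ~0.11 Wh/GB at 1 W/Gbps and c2% utilization (assumed)
```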

Larger and more highly utilized cables will have 1-2 orders-of-magnitude lower energy consumption (chart below). Thus the energy intensity of fiber optic cables is not a fixed number, but highly context-sensitive.

Energy consumption will continue falling, per the historical trend (chart below, data here). The energy in the signal that is physically transmitted through a fiber optic cable (quoted in dBm) represents only c0.05% of the total electricity use of the data transmission network. The energy consumption is not in the laser pulse. It is in encoding and decoding it, and in the balance of electronic systems. Hence there is huge room to improve, including through improved cables, improved transceivers, more sensitive photo-diodes, greater frequencies and greater multiplexing.

Overall the energy consumption of a fiber optic cable is very low. It might take 0.05 Wh to move 1GB by 1km. For contrast, the numbers to move 1 ton or 1 person by 1km can be around 15,000x higher (data here).

(Chart: cost and energy use comparison of passenger and freight transportation vehicles, in $ and kWh per km or mile)

Our outlook on the future energy consumption of the internet is written up in our recent research note here, and all of our broader energy demand data are here.

Corning is the leading manufacturer of fiber optic cables and had produced over 1bn kilometers of optical fiber by 2017 (and comes up in our glass fiber research). Prysmian produces 30M km of optical fiber each year across five plants worldwide (and comes up repeatedly in our research). Many sources cite Finisar (a division of Coherent) and Molex as having the largest market shares in transceivers. Broadcom is a $260bn (market cap) giant, producing connectivity equipment, and has a Top 3 market share in transceivers. Sumitomo is also active, making both cables and transceiver modules. Air Products notes that it supplies industrial gases, such as argon, helium and hydrogen used in production. High-quality silica glass and specialty plastics are also used in the cabling.

US CO2 and Methane Intensity by Basin


The CO2 intensity of oil and gas production is tabulated for 425 distinct company positions across 12 distinct US onshore basins in this data-file. Using the data, we can break down the upstream CO2 intensity (in kg/boe), methane leakage rates (%) and flaring intensity (mcf/boe), by company, by basin and across the US Lower 48.


In this database, we have aggregated and cleaned up 957 MB of data, disclosed by the operators of 425 large upstream oil and gas acreage positions. The data are reported every year to the US EPA, and made publicly available via the EPA FLIGHT tool.

The database covers 70% of the US oil and gas industry from 2021, including 8.8Mbpd of oil, 80bcfd of gas, 22Mboed of total production, 430,000 producing wells, 800,000 pneumatic devices and 60,000 flares. All of this is disaggregated by acreage positions, by operator and by basin. It is a treasure trove for energy and ESG analysts.

CO2 intensity. The mean average upstream oil and gas operation in 2021 emitted 10kg/boe of CO2e. Across the entire data-set, the lower quartile is below 3kg/boe. The upper quartile is above 13kg/boe. The upper decile is above 20kg/boe. And the upper percentile is above 70kg/boe. There is very heavy skew here (chart below).

The main reasons are methane leaks and flaring. The mean average asset in our sample has a methane leakage rate of 0.21%, and a flaring intensity of 0.03 mcf/bbl. There is a growing controversy over methane slip in flaring, which also means these emissions may be higher than reported. Flaring intensity by basin is charted below.
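To see how leakage and flaring rates translate into kg of CO2e per boe, a hedged sketch follows, using an assumed gas share, methane GWP and flare combustion factor rather than the data-file’s exact methodology:

```python
# Illustrative conversion of methane leakage and flaring intensity into kg CO2e per boe.
MCF_PER_BOE      = 5.8     # energy-equivalent conversion
KG_CH4_PER_MCF   = 19.2    # approximate mass of methane per mcf
GWP_CH4          = 28      # 100-year GWP of methane (assumed)
KG_CO2_PER_MCF   = 53.0    # CO2 from combusting 1 mcf of gas in a flare (assumed)

gas_share        = 0.6     # share of production that is gas (assumed)
leakage_rate     = 0.0021  # 0.21% of gas production leaked
flaring_mcf_bbl  = 0.03    # flaring intensity per barrel of oil

leak_co2e  = gas_share * MCF_PER_BOE * leakage_rate * KG_CH4_PER_MCF * GWP_CH4
flare_co2  = (1 - gas_share) * flaring_mcf_bbl * KG_CO2_PER_MCF
print(f"Methane leaks ~ {leak_co2e:.1f} kg CO2e/boe, flaring ~ {flare_co2:.1f} kg CO2/boe")
```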

US CO2 intensity has been improving since 2018. CO2 intensity per basin has fallen by 17% over the past three years, while methane leakage rates have fallen by 22%. Activity has clearly stepped up to mitigate methane leaks.

(You can also see in the data-file who has the most work still to do in reducing future methane leaks. For example, one large E&P surprised us, as it has been vocal over its industry-leading CO2 credentials, yet it still has over 1,000 high bleed pneumatic devices across its Permian portfolio, which is about 10% of all the high-bleed pneumatics left in the Lower 48, and each device leaks 4 tons of methane per year!).

Most interesting is to rank the best companies in each basin, using the granular data, to identify leaders and laggards (chart below). A general observation is that larger, listed producers tend to have lower CO2 intensity, fewer methane leaks and lower flaring intensity than small private companies. Half-a-dozen large listed companies stand out, with exceptionally low CO2 intensities. Please consult the data-file for cost curves (like the one below).

Methane leaks and flaring intensity can also be disaggregated by company within each basin. For example, the chart below shows some large Permian producers effectively reporting zero flaring, while others are flaring off over 0.1 mcf/bbl.

All of the underlying data are also aggregated in a useful summary format, across the 425 different acreage positions reporting in to EPA FLIGHT, in case you want to compare different operators on a particularly granular basis.

US Refinery Database: CO2 intensity by facility?


This US refinery database covers 125 US refining facilities, with an average capacity of 150kbpd, and an average CO2 intensity of 33 kg/bbl. Upper quartile performers emitted less than 20 kg/bbl, while lower quartile performers emitted over 40 kg/bbl. The goal of this refinery database is to disaggregate US refining CO2 intensity by company and by facility.


Every year, the c125 core refineries in the US, with c18Mbpd of throughput capacity, report granular emissions data to the US EPA. The individual disclosures are something of a minefield, and annoyingly lagged. But this refinery database is our best attempt to tabulate them, clean the data and draw meaningful conclusions.

Some of the larger companies assessed in the data-file include Aramco, BP, Chevron, Citgo, Delek, ExxonMobil, Koch, HF Sinclair, Marathon, Phillips66, PBF, Shell and Valero.

The average US refinery emits 33kg of direct CO2 per barrel of throughputs, we estimate, with a 10x range running from sub-10 kg/bbl to around 100 kg/bbl (chart below).

(Chart: 125 US refineries ranked by CO2 intensity per barrel)

Breakdown of direct US refinery emissions? The 33 kg/bbl average CO2 intensity of US refineries comprises 20 kg/bbl of stationary combustion, 8 kg/bbl of other refining processes, 3 kg/bbl of on-site hydrogen generation, 1 kg/bbl of cogeneration and 0.2 kg/bbl associated with methane leaks.

Some care is needed in interpreting the data. Refineries that are more complex, make cleaner fuels, make their own hydrogen (rather than buying merchant hydrogen) and also make petrochemicals are clearly going to have higher CO2 intensities than simple topping refineries. There is a 50% correlation between different refineries’ CO2 intensity (in kg/bbl) and their Nelson Complexity Index.

(Chart: correlation between the CO2 intensity of US refiners and their Nelson Complexity Index)

Which refiners make their own hydrogen versus purchasing merchant hydrogen from industrial gas companies? This question matters, as hydrogen value chains come into focus. Those who control the Steam Methane Reformers may be readily able to capture CO2 in order to earn $85/ton cash incentives under the IRA’s reformed 45Q program, as discussed in our recent research note into SMRs vs ATRs. One SuperMajor and two pure play refiners stand out as major hydrogen producers, each generating 250-300kTpa of H2.

(Chart: which refiners make their own hydrogen versus purchasing merchant hydrogen)

How has the CO2 intensity of US refineries changed over the past 3-years? The overall CO2 intensity is unchanged. However, some of the most improved refineries have lowered their CO2 intensities by 2-10 kg/bbl (chart below). Conversely, some Majors have seen their CO2 intensities rise by 2-7 kg/bbl.

(Chart: change in CO2 intensity of different US refiners over time)

For further context and ideas, we have also published summaries of our key conclusions into downstream, vehicles and long-term oil demand. All of our hydrocarbon research is summarized here.

The full refinery database contains a granular breakdown, facility-by-facility, showing each refinery, its owner, its capacity, throughput, utilisation rate and CO2 emissions across six categories: combustion, refining, hydrogen, CoGen, methane emissions and NOx (chart below). The data-file was last updated in 2023 and covers the full US refinery landscape in 2018, 2019 and 2021, going facility by facility, and operator by operator.

Refrigerants: leading chemicals for the rise of heat pumps?


What chemicals are used as refrigerants? This data-file is a breakdown of the c1MTpa market for refrigerants, across refrigerators, air conditioners, in vehicles, industrial chillers, and increasingly, heat pumps. The market is shifting rapidly towards lower-carbon chemicals, including HFOs, propane, iso-butane and even CO2 itself. We still see fluorinated chemicals markets tightening.


Refrigerants are used for cooling. The thermodynamic principle is that these chemicals have low boiling points (averaging around -30ºC). They absorb heat from their surroundings as they evaporate. Then later these vapors are re-liquefied using a compressor.

The global market includes over 1MTpa of refrigerants, for use in refrigerators (around 100 grams per fridge), passenger cars (1 kg per vehicle) and home AC systems (4 kg per home). There is also an industrial heating-cooling market, ranging from MW-scale chillers that might contain 400kg of refrigerants, up to large global LNG plants.

The market is growing. Structurally, heat pumps could add another 4kg of refrigerant demand per household, especially in markets such as Europe with traditionally low penetration of AC. Rapid rises are also occurring in global AC demand.

From the 1930s onwards, CFCs were used as refrigerants. But CFCs are inert enough to reach the middle of the stratosphere, where they are broken down by UV radiation, releasing chlorine radicals. These chlorine radicals break down ozone (O3 into O2). Hence by the 1980s, abnormally low ozone concentrations were observed over the South Pole. Ozone depletion elevates the amount of UV-B radiation reaching Earth, increasing skin cancer and impacting agriculture. And hence CFCs were phased out under the Montreal Protocol of 1989.

CFCs were largely replaced with fluorocarbons, which do not deplete the ozone layer, but do have very high global warming potentials. For example, R-134a, which is tetrafluoroethane, is 1,430x more potent a greenhouse gas than CO2.

The Kigali Amendment was signed by UN Member States in 2016, and commits to phase down high-GWP HFCs by 85% by 2036. This has been supplemented by F-gas regulation in the EU and the AIM Act in the US. High GWP fluorocarbons are effectively banned in new vehicles and stationary applications in the developed world.

In addition, there has long been a market for non-fluorinated chemicals as refrigerants, but the challenge with these alternatives is that they tend to be flammable. Over half of domestic refrigerators use iso-butane as their refrigerant, which is permissible under building codes because each unit only contains about 100 grams of refrigerant (e.g., in Europe, a safety limit has historically been set at around 150 grams of flammable materials in residential properties, and is being revised upwards).

So what outlook for the fluorinated chemicals industry? Overall, we think demand will grow mildly. It is true that regulation is tightening, and phasing out fluorocarbons.

However, some of the leading refrigerants that are being “phased in” as replacements actually use more fluorinated chemicals than the refrigerants they are replacing…

Hydrofluoroolefins (HFOs) have no ozone depleting potential and GWPs <10. As an example, R-1234yf is now used in over 100M vehicles, and comprises 67% fluorine by mass. This is an increase from the 44% fluorine content in R-22, which was the previous incumbent for vehicle AC systems.

Impacts of electric vehicles? You could also argue that EVs will have increasing total refrigerant demand, as there are in-built cooling systems for many fast-chargers.

Using CO2 as a refrigerant could also be an interesting niche. It is clearly helpful for our energy transition ambition to increase the value in capturing and using CO2. But the challenge is that even if 215M annual refrigerator sales all used 100% CO2 as their refrigerant, this would only “utilize” around 25kTpa of CO2, whereas our Roadmap to Net Zero is looking for multi-GTpa scale CCUS.
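The arithmetic behind that figure is simple; a sketch, assuming every refrigerator sold were charged with pure CO2:

```python
# Upper bound on CO2 "utilized" if every refrigerator sold used CO2 (R-744) as its refrigerant.
annual_fridge_sales = 215e6    # units per year
charge_kg_per_unit  = 0.1      # ~100 grams of refrigerant per fridge
co2_ktpa = annual_fridge_sales * charge_kg_per_unit / 1e6
print(f"~{co2_ktpa:.0f} kTpa of CO2")  # ~22 kTpa, versus the multi-GTpa needed for CCUS at scale
```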

For heat pumps, we think manufacturers are going to use propane, CO2, HFOs and a small class of low-GWP fluoro-carbons. So there is a small pull on the fluorinated chemicals value chain from the ramp-up of heat pumps. But the main pull on the fluorinated chemicals chain is going to be coming from batteries and solar, as explored in our recent fluorinated polymers research.

Leading Western companies making refrigerants in the data-file include Honeywell, DuPont, Chemours, Arkema, Linde, and others in our fluorinated chemicals screen.

Solar: energy payback and embedded energy?


What is the energy payback and embedded energy of solar? We have aggregated the consumption of 10 different materials (in kg/kW) and around 10 other line-items across manufacturing and transportation (in kWh/kW). Our base case estimate is 2.5 MWh/kWe of solar. The average energy payback of solar is 1.5-years. Numbers and sensitivities can be stress-tested in the data-file.


Our base case estimate covers a standard 560W solar panel, as is being manufactured in 2022-23, weighing 30kg, and having an efficiency of 22%.

By mass, this solar panel is about 65% glass, 15% aluminium, c10% polymers (mainly EVA encapsulants and PVF back-sheet), c3% copper. Photovoltaic silicon is only 5% of the panel by mass, but about 40% by embedded energy.

Another 10kg of material is contained in the balance of project, across inverters, wiring, structural supports, other electronics. Thus the energy embedded in manufacturing the panel is likely only around 60% of the total energy embedded in a finished solar project.


Our base case is that it will take around 2.5 MWh of up-front energy and release almost 3 tons of CO2 per kW of installed solar capacity. In turn, this suggests an energy payback of around 1.5-years and a CO2 payback of around 1.8-years.
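The payback arithmetic can be sketched as follows, where the capacity factor and the CO2 intensity of the displaced power are our assumptions:

```python
# Energy and CO2 payback of a solar installation, from the build-up above.
embedded_energy_kwh_per_kw = 2_500   # ~2.5 MWh/kW of up-front energy
embedded_co2_kg_per_kw     = 3_000   # ~3 tons CO2/kW of up-front emissions

capacity_factor        = 0.19        # assumed
displaced_co2_kg_kwh   = 1.0         # assumed, e.g., displacing coal-heavy power

annual_output_kwh = capacity_factor * 8760          # ~1,660 kWh per kW per year
energy_payback_yr = embedded_energy_kwh_per_kw / annual_output_kwh
co2_payback_yr    = embedded_co2_kg_per_kw / (annual_output_kwh * displaced_co2_kg_kwh)
print(f"Energy payback ~ {energy_payback_yr:.1f} years, CO2 payback ~ {co2_payback_yr:.1f} years")
```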

The complexity of the solar value chain is enormous. Often it is also opaque. Thus the numbers can vary widely. We think there will be solar projects installed with an energy payback around 1-year at best and around 4-years at worst.

Our numbers do not include energy costs of power grid infrastructure or battery back-ups. This is simply a build-up for a vanilla project, trying to be as granular and objective as possible.

Inputs for the embedded energy and CO2 of different materials are drawn from our other CO2 screening work and economic models.

The other great benefit of constructing a detailed bill of materials for a solar installation is that we can use it to inform our solar cost estimates. Our best guess is that materials will comprise around half of the total installed cost of a solar installation in 2021-22 (chart below). There is going to be a truly remarkable pull on some of these materials from scaling up solar capacity additions.

We absolutely want to scale solar in the energy transition. This will be easiest from a position of energy surplus.

Please download the data-file to stress test our numbers around the embedded energy needed to construct a solar project, and the energy payback of solar.

Crop production: how much does nitrogen fertilizer increase yields?


How much does fertilizer increase crop yields? To answer this question, we tabulated data from technical papers. Aggregating all of the global data, a good rule of thumb is that up to 200kg of nitrogen can be applied per acre, increasing corn crop yields from 60 bushels per acre (with no fertilizer) to 160 bushels per acre (at 200 kg/acre).


The relationship is almost logarithmic. The first 40 kg/acre of nitrogen application doubles crop yields, from 60 bushels per acre to around 130 bushels per acre. The next 20 kg/acre adds another 5% to crop yields. The next 20kg/acre adds 4%. The next 20kg/acre adds 3%. And so on. Ever greater fertilizer applications have diminishing returns.
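To illustrate the shape of this diminishing-returns response, a hedged sketch follows, using a Mitscherlich-type curve calibrated loosely to the figures above; the data-file derives its own best-fit formulae from the underlying papers:

```python
# Illustrative diminishing-returns yield response to nitrogen (Mitscherlich-type curve).
# Parameters are chosen to loosely match the figures quoted above, not fitted to the data-file.
from math import exp

def corn_yield_bu_acre(n_kg_acre: float, y0=60.0, y_max=162.0, k=0.029) -> float:
    return y_max - (y_max - y0) * exp(-k * n_kg_acre)

for n in (0, 40, 100, 200):
    print(f"{n:>3} kg N/acre -> ~{corn_yield_bu_acre(n):.0f} bushels/acre")
```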

In 2022-23, many decision-makers and ESG investors are asking whether energy shortages will translate into fertilizer shortages, which in turn translate into food shortages. The answer depends. A 10kg/acre cut in nitrogen fertilizer may have a negligible 0-2% impact on yields in the most intensive developed world farming. Whereas it may have a disastrous, >10% impact in the developing world, on the “left hand side” of our logarithmic curves.

The scatter is broad, and shows that corn yields are a complex function of climate, weather, crop rotations, soil types, irrigation, other soil nutrients; and the nuances of how/when fertilizers are applied in the growing cycle. Nitrogen that is applied in the form of ammonia, ammonium nitrate, urea or NPK is always prone to denitrification, leaching, volatilization, and being taken up by non-crop plants.

Moreover, while this data-file evaluates corn, the world’s most important crop by energy output, the relationship may be different for other crops. Corn is particularly demanding of nitrogen in its reproductive stages of growth. This ridiculously prolific crop will have 55% of its entire biomass invested in its ‘ears’ by the time of maturity. These ears contain so much nitrogen that around 70% of it is sourced by remobilizing nitrogen out of leaves and stems.

Why does it matter how much fertilizer increases crop yields? For economic reasons. And to minimize the CO2 intensity of crop production. As a rule of thumb, the CO2 intensity of corn crop production is 75kg/boe, of which 50kg/boe is due to nitrogen fertilizer.

A constructive conclusion is that the first c40-80 kg/acre of nitrogen application does not increase CO2 intensity, or may even decrease it due to much greater yields. Best-fit formulae are derived in this data-file, using the data. So are our notes from technical papers, of which our favorite and most helpful was this paper from PennState.

We still see upside in conservation agriculture, and question marks over excessive reliance upon some biofuels as part of the energy transition.
