Industry data for the energy transition. Many of our data-files simply aim to quantify how specific industrial processes work, how they emit CO2, and how they can be decarbonized.
The purpose is simply to help decision-makers understand opportunities and challenges in the energy transition, backed up with data, which is transparent and easily accessible.
Examples include the volatility of solar power generation, what it costs to maintain a wind turbine, what drives the degradation rates of grid-scale batteries, gas flaring rates, the conductivity of different metals, or the share of energy sources in different countries’ power mixes.
If we can construct a data-file to help you, or aggregate any particular industrial data-sets that matter for the energy transition, then please contact us, and we will be delighted if we can help you save time and get useful industry data for the energy transition.
This data-file tabulates the five ‘Big Oil’ Super-Majors’ development capex from the mid-1990s, in headline terms (billions of dollars) and in per-barrel terms ($/boe of production). Real development capex quadrupled from $6/boe in 1995-2000 to $24/boe in 2010-15, and has since collapsed to $10/boe.
The peer group of Super-Majors comprises ExxonMobil, Chevron, BP, Shell and TOTAL, which together account for c10% of the world’s oil production and c12% of the world’s gas production. As a rule of thumb, the group can be thought of as c10% of global production.
Development capex by region: gaining share? The US has always been the most favored destination, historically attracting c25% of all development capex, both offshore (e.g., Gulf of Mexico) and increasingly for short-cycle shale. This share has risen further, averaging around 32% of these companies’ development capex in the past three years.
Development capex by region: losing share? Development projects in Africa and Europe have fallen most out of favor. Development capex in Africa peaked at $17bn in 2009, almost 25% of the group’s total development capex, and has since fallen back to $5bn per year, or 8% of the group’s total development capex.
It is somewhat terrifying to consider that the industry needed to spend an average of $15/boe (real terms) on development capex in order to hold its organic production “flattish” (including some large acquisitions in 2014-17, such as Shell buying BG).
Another scary data-point is that this peer group of Super-Majors spent $18/boe (real) on development projects in the decade from 2004-14 (which is 80% more than recent levels of spending) yet its net production declined by 1.5% per year over this timeframe.
Similar data for the Super-Majors’ exploration capex over time is tabulated here.
Under-investment across the entire energy industry may foreshadow a sustained shortage of energy, especially if natural gas, which is c50% lower-carbon than coal, is intended to replace coal as part of the energy transition, per our roadmap to net zero. Hence one cannot help wondering about energy shortages, energy pragmatism and our fears of another up-cycle.
This data-file aggregates the Oil Majors’ development capex, apples-to-apples, back to 1995, across the disclosures of ExxonMobil, Chevron, BP, Shell and TOTAL, based on supplementary oil and gas disclosures in the SEC’s EDGAR archives.
This database tabulates the typical fuel consumption of offshore vessels, in bpd and MWH/day. We think a typical offshore construction vessel will consume 300bpd, a typical rig consumes 200bpd, supply vessels consume 150bpd, cable-lay vessels consume 150bpd, dredging vessels consume 100bpd and medium-sized support vessels consume 50bpd. Examples are given in each category, with typical variations in the range of +/- 50%.
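As a rough sketch of how the bpd figures translate into MWH/day, one can assume c5.8 MMBtu of thermal energy per barrel of oil equivalent, i.e., c1.7 MWH/boe. This conversion factor is a standard approximation, not a figure from the data-file:

```python
# Convert vessel fuel consumption from barrels of oil equivalent per day (bpd)
# to MWH/day, assuming c5.8 MMBtu per boe and 0.293 MWH per MMBtu (c1.7 MWH/boe).
MWH_PER_BOE = 5.8 * 0.293  # ~1.70 MWH of thermal energy per barrel of oil equivalent

vessel_fuel_bpd = {
    "construction vessel": 300,
    "drilling rig": 200,
    "supply vessel": 150,
    "cable-lay vessel": 150,
    "dredging vessel": 100,
    "support vessel": 50,
}

for vessel, bpd in vessel_fuel_bpd.items():
    print(f"{vessel}: {bpd} bpd = {bpd * MWH_PER_BOE:.0f} MWH/day")
```

On these assumptions, a 300bpd construction vessel consumes roughly 510 MWH/day of fuel energy.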
The flue gas of a typical combustion facility contains c7% CO2, 60ppm of NOx, 40ppm of SOx and 2ppm of particulate dusts. This is our conclusion from tabulating data across 75 large combustion facilities, mainly power generation facilities in Europe. However, the range is broad. As a rule of thumb, gas is cleanest, biomass and coal are worse, while some diesel-fired units are associated with the lowest air quality in our sample.
Sulphur oxides (SOx) cause acidification of air, rain and natural habitats. Hence limits are placed on the sulphur emissions in the exhaust gases of large power facilities, typically 50-250ppm in Europe, 120ppm in the US and 75-300ppm in China. We think European coal plants emit 20-400ppm of SOx, with an average of 85ppm, which has been reduced by installing gas scrubber units in recent years. Emissions from natural gas plants are effectively nil.
Nitrogen oxides (NOx) cause ground-level ozone and smog to form, which can contribute to respiratory problems. Thus limits in the exhaust gases of large power plants are 60-130ppm in Europe, 90-120ppm in the US and 75-150ppm in China. We think the average coal plant in Europe emits NOx at 110ppm. The numbers are highest for large diesel plants, averaging 160ppm; high for biomass plants, averaging 80ppm; and lowest for gas, averaging 25ppm at CCGTs.
Particulates and dusts are combustion products that become airborne and are later deposited on buildings, machinery and natural habitats, or, worst of all, inhaled. Dusts are limited to 3-9ppm in the emissions of large power plants in Europe, 17ppm in the US and 22ppm in China. The average coal plant in Europe emits 9ppm, due to the installation of electrostatic precipitators and other exhaust gas treatments. Again, biomass and diesel plants can have high particulate emissions. Gas-fired power plants seem to have particulate emissions well below 1ppm.
Underlying data on different power plants are broken down in this data-file. Note that European databases report estimated SOx, NOx and particulate emissions for large combustion facilities in tons, but we have applied our own back-of-the-envelope conversion factors, to translate the data into ppm and mg/m3 emissions intensities.
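For illustration, ppm(v) and mg/m3 concentrations are related through the ideal-gas molar volume. The molecular weights and the 25°C, 1 atm reference conditions below are standard values; the 200mg/m3 example is purely illustrative, not a figure from the data-file:

```python
# Back-of-the-envelope conversion between emissions concentrations in mg/m3
# and ppm(v), using the ideal-gas molar volume of c24.45 litres/mol at 25C, 1 atm.
MOLAR_VOLUME_L = 24.45  # litres per mole of ideal gas at 25C and 1 atm

MOLECULAR_WEIGHT = {"SO2": 64.07, "NO2": 46.01, "CO2": 44.01}  # g/mol, approximate

def mg_per_m3_to_ppm(mg_m3: float, species: str) -> float:
    """ppm(v) = (mg/m3) x molar volume / molecular weight."""
    return mg_m3 * MOLAR_VOLUME_L / MOLECULAR_WEIGHT[species]

def ppm_to_mg_per_m3(ppm: float, species: str) -> float:
    """Inverse conversion: mg/m3 = ppm x molecular weight / molar volume."""
    return ppm * MOLECULAR_WEIGHT[species] / MOLAR_VOLUME_L

# e.g., an illustrative 200 mg/m3 SO2 concentration corresponds to c76 ppm
print(f"{mg_per_m3_to_ppm(200, 'SO2'):.0f} ppm")
```

Note that heavier molecules carry more mass per unit of volume, so the same ppm reading implies a higher mg/m3 figure for SO2 than for NO2.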
In post-combustion CCS facilities, amines react with CO2, which can later be re-released via steam-treating, and sent for sequestration. However, CCS plants have amine make-up rates, to replace amines that degrade (chemically, thermally) and evaporate off. This data-file quantifies make-up rates of amines in kg/ton.
We estimate that amine make-up rates will most likely run at 1.75 kg/ton using MEA, which is the most common amine used in CCS plants. However the ranges are very broad. One widely cited study quotes a range of 0.2-3.7 kg/ton, and another suggests poorly managed plants could have make-up rates as high as 10kg/ton.
This matters as amines comprise $3/ton of our cost build up for post-combustion CCS, using the mid-point estimate of 1.75 kg/ton and a $1,700/ton long-term MEA price. Cash margins are around $11/ton. Hence lower amine use can uplift margins by 10-20%; or conversely, higher amine use can add $3-5/ton, maybe even $10/ton to the cost of CCS.
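The arithmetic behind that $3/ton figure is simply the make-up rate multiplied by the amine price. A quick sketch across the range of make-up rates quoted above:

```python
# Illustrative amine cost per ton of CO2 captured, using the figures above:
# make-up rate (kg of MEA per ton of CO2) x MEA price ($/kg of MEA).
mea_price_usd_per_ton = 1700                       # long-term MEA price assumption
mea_price_usd_per_kg = mea_price_usd_per_ton / 1000

for make_up_kg_per_ton in (0.2, 1.75, 3.7, 10.0):  # range quoted in the studies
    cost = make_up_kg_per_ton * mea_price_usd_per_kg
    print(f"{make_up_kg_per_ton:>5} kg/ton -> ${cost:.2f}/ton of CO2")
```

The mid-point of 1.75 kg/ton yields c$3/ton of CO2, which against c$11/ton cash margins explains why the broad range of make-up rates matters so much for CCS economics.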
Amine degradation mechanisms are complex, occurring via oxidative and thermal pathways. Oxidative degradation is accelerated by impurities acting as catalysts, especially metals. Sulphur impurities produce heat-stable salts. Evaporative losses correlate with temperature, and operating power plants at partial loads or as peakers can also increase evaporative emissions. Some of the breakdown products, such as nitrosamines, are toxic, especially if there is NOx in the feed gas.
Another question mark is whether some studies under-estimate amine degradation, due to short study durations and because degradation cascades: primary breakdown products react further, and some studies show quite sudden failure after a few thousand hours.
Coal versus gas. It is hard to pinpoint a clear difference in the degradation rates between post-combustion CCS on coal versus gas. On the one hand, coal will have more impurities, but these tend to be cleaned out before CCS, using electrostatic precipitation and wet gas scrubbing. On the other hand, coal plants will burn off more oxygen, so there are lower oxygen concentrations in the flue gas, and less oxidative degradation.
What is the typical planting density for reforestation projects globally? This matters as it can determine the costs of reforestation. Hence in this data-file we have collated data from 25 different case studies globally, which have tended to plant a median of 670 seedlings per acre (1,650 per hectare). However, the range is broad, from 400 seedlings per acre in low-density Southern US forestry to 4,000 seedlings per acre in mangrove restoration projects.
Planting density depends on forestry practices, because as trees grow, they compete more for light, water and nutrients. Hence starting with a higher stand density will tend to require more thinning to avoid over-crowding and weak trees. Conversely, starting with a lower stand density will under-utilize valuable land in early years.
Planting density depends on forestry objectives. Dense stands may favor early harvesting, which is fine when growing crops for pulp, paper, wood-based fuel or lower-value sawlogs. However less dense stands may favor longer growing cycles, to produce high value timber, locking carbon away in long-lived construction materials.
Planting density depends on species. For example, mangroves are famously dense-growing. European pine and spruce forests can also be very dense (minimal branching). Whereas large broadleaves can be very extensive if grown to 100+ years.
Planting density depends on climate. For example, some rainforest reforestation projects have favored dense planting, to prevent slower-growing trees from being outcompeted by fast-growing jungle plants. Conversely, Southern US forestry has some of the lowest planting densities, which have fallen by over 80% in the past 50 years as seedling survival rates have improved, now averaging 88%.
Planting density depends on seedling cost. Another study notes that planting density can be optimized, year by year, depending on the costs of seedlings, favoring lower densities (and less subsequent thinning) to mute the impacts of seedling shortages.
This data-file aggregates the pricing of different wood products, as storing carbon in long-lived materials matters amidst the energy transition. It can also add economic value: upgrading raw timber into high-value materials can uplift realized pricing in reforestation projects by 20-60x, which improves the permanence of nature-based CO2 removal credits.
Stumpage prices reflect the prices of timber at the immediate point of harvesting in a forest. We think stumpage prices are typically around $40/m3, ranging from $20-80/m3, depending on the timber type and location.
Whole logs that have been de-limbed, transported out of the forest, partially dried, and possibly de-barked are around 3-4x more valuable, reaching $100-200/m3.
Sawn beams are 3x more valuable again, with pricing recently exceeding $500/m3, after the additional steps of drying, grading, cutting into specific shapes, and other possible treating steps.
Paper comes next, and we think pricing around $875/m3 is often sufficient for a 10% IRR at a new paper mill, while recent paper pricing has run around the $1,000/m3 mark.
Board materials are another 2-4x more valuable again. Plywood, which is formed by unravelling entire logs into long, continuous sheets, might price above $1,000/m3; while lower-grade board materials, formed by binding and re-constituting wood-chips (oriented strand board, OSB) or sawdust (medium-density fibreboard, MDF), price below $1,000/m3.
Engineered timber products include glulam and cross-laminated timber (CLT). These can have more variable pricing, but in both cases, we think recent pricing has run above $1,500/m3. We think CLT is an interesting alternative to steel in construction.
Hardwood flooring can be among the most valuable timber products. Again, pricing is variable, but it can be 2x more expensive than engineered timber products.
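The multipliers above compound along the value chain. As an illustrative sketch, starting from a $40/m3 stumpage price and using mid-point multipliers (our own approximations, not figures from the data-file):

```python
# Compound the indicative value-chain multipliers from a $40/m3 stumpage price.
# Mid-point multipliers are illustrative approximations of the ranges quoted above.
price = 40.0  # $/m3 stumpage price at the point of harvesting

steps = [
    ("whole logs", 3.5),       # 3-4x uplift vs stumpage
    ("sawn beams", 3.0),       # 3x uplift vs whole logs
    ("board materials", 3.0),  # 2-4x uplift vs sawn beams
]

print(f"stumpage: ${price:.0f}/m3")
for product, multiplier in steps:
    price *= multiplier
    print(f"{product}: ~${price:.0f}/m3")
```

Compounding through to board materials implies a c30x uplift over stumpage, consistent with the 20-60x range quoted above for upgrading raw timber into high-value materials.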
How has the efficiency of prime movers increased across industrial history? This data-file profiles the continued progress in the efficiency of power generation over time, from 1650 to 2050e. As a rule of thumb, the energy system has shifted to become ever more efficient over the past 400-years.
In the early industrial revolution, mechanical efficiency ranged from 0.5-2% at coal-fired steam engines of the 18th and 19th centuries, most famously Newcomen’s 3.75kW steam engine of 1712. This is pretty woeful by today’s standards. Yet it was enough to change the world.
Electrical efficiency started at c2% in the first coal-fired power stations built from 1882, starting at London’s Holborn Viaduct and Manhattan’s Pearl Street Station; rose to around 10% by the 1900s; and reaches 30-50% at modern coal-fired power plants using pulverised coal and sub-critical, super-critical and ultra-super-critical steam cycles.
The first functioning gas turbines were constructed in the 1930s, but suffered from high back work ratios and were not as efficient as coal-fired power generation of the time. Gas turbines are inherently more efficient than steam cycles. But realizing the potential took improvements in materials and manufacturing. And the best recuperated Brayton cycles now surpass 60% efficiency in world-leading combined cycle gas turbines.
Renewables, such as wind and solar, offer another step-change upwards in efficiency, and will harness over 80% of the theoretically recoverable energy in blowing wind and diffuse sunlight (i.e., relative to the Betz limit and the Shockley-Queisser limit, respectively).
There is a paradox about many energy transition technologies. Long-term battery storage and green hydrogen would depart quite markedly from the historical trend of ever-rising energy efficiency in power cycles. Likewise, there are energy penalties for CCS.
The data-file profiles the efficiency of power generation over time, noting 15 different technologies, their year of introduction, typical size (kW), mechanical efficiency (%), equivalent electrical efficiency (%) and useful notes about how they worked and why they matter.
Wind power energy paybacks? This data-file estimates that 3MWH of energy is consumed in manufacturing and installing 1kW of offshore wind turbines, that the energy payback time is usually around 1-year, and that the total energy return on energy invested (EROEI) will be above 20x. These estimates are based on bottom-up modelling and top-down technical papers.
The average wind energy project has an energy intensity of 3MWH/kW, which is repaid after c1-year, for a total energy return on energy investment above 20x, over a 20-25 year operating life.
One observation from reviewing technical papers is that many have rough methodologies. Some are still basing numbers upon small, <1MW turbines, which are no longer representative. Conversely, others are incomplete, and have not fully captured materials costs.
Hence we have built up our own bottom-up estimates for the energy intensity of wind power, and the EROEI of wind turbines.
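The payback arithmetic can be sketched in a few lines. The c40% net capacity factor and 25-year operating life below are illustrative assumptions of our own, not figures from the data-file:

```python
# Sketch of wind energy payback time and EROEI from the 3 MWH/kW energy cost above.
# The capacity factor and operating life are illustrative assumptions.
energy_invested_mwh_per_kw = 3.0  # up-front energy cost per kW installed
capacity_factor = 0.40            # assumed net capacity factor for offshore wind
operating_life_years = 25         # assumed operating life

# 8,760 hours per year; divide by 1,000 to convert kWh to MWH
annual_output_mwh_per_kw = 8.760 * capacity_factor

payback_years = energy_invested_mwh_per_kw / annual_output_mwh_per_kw
eroei = annual_output_mwh_per_kw * operating_life_years / energy_invested_mwh_per_kw

print(f"payback: {payback_years:.1f} years, EROEI: {eroei:.0f}x")
```

On these assumptions, each kW generates c3.5 MWH per year, repaying the 3 MWH up-front energy cost in under a year, for a lifetime EROEI of c29x, within the 15-30x range we discuss below.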
The largest individual contributors to the up-front energy costs of wind turbines are transporting materials to the site (0.75MWH/kW), steel (0.6MWH/kW), other materials (0.3MWH/kW), large offshore vessels that install foundations and turbines (0.3 MWH/kW) and the tail of 20-40 smaller vessels that support offshore operations (data here).
We estimate the CO2 intensity of wind power at 10-20g/kWh (0.01-0.02kg/kWh), which coheres with the technical papers that we reviewed and our own bottom-up estimates.
Wind power energy paybacks will vary with individual project parameters, and we think that a realistic range for offshore wind projects is 15-30x EROEI.
The most important parameter is the location of the project, which will determine energy generated per year, but also transportation distances and steel requirements.
DC-DC power converters are used to alter the voltage in DC circuits, such as in wind turbines, solar MPPT, batteries and digital/computing devices. This data-file is a breakdown of DC-DC power converters’ electrical efficiency, which will typically be around 95%. Losses are higher at low loads. We think there will be upside for increasingly high-quality and efficient power electronics as part of the energy transition.
What are DC-DC converters and why do they matter?
DC-DC converters are used to alter voltage and current in DC circuits. The global market is $15-20bn per year and growing at 12% per year. These devices are used seemingly everywhere as the world electrifies. From IT equipment and smartphones, to the MPPT in solar, to wind turbines, to charging and discharging batteries, such as in electric vehicles.
DC-DC converters invariably contain one or more MOSFETs, diodes, inductors and capacitors. Think about the MOSFET as a fast-acting switch, the inductor as resisting change in current (voltage fluctuates instead, as energy is stored in an expanding and collapsing magnetic field) and the capacitor as resisting change in voltage (current fluctuates instead, as energy is stored in an expanding and collapsing electric field).
In a buck converter, voltage is lowered. The MOSFET turns the entire circuit on and off. Power can only flow from the source to the load for a fraction of the time. But the inductor and capacitor keep the output voltage and current relatively stable. It follows that the delivered voltage will be some fraction of the input voltage, depending on the percent of time that the MOSFET is on (aka the duty cycle).
In a boost converter, voltage is raised. When the MOSFET switches on, it short-circuits the inductor. A spike of power flows into the inductor, creating an electromagnetic field. Then when the MOSFET turns off, this electro-magnetic field collapses, creating a sharp burst of higher voltage. Again the capacitor keeps the output current relatively stable.
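The steady-state behavior of these two circuits can be sketched with the ideal (lossless) transfer functions from standard power-electronics theory, where D is the duty cycle, i.e., the fraction of time the MOSFET is on. Real converters deliver somewhat less, per the c95% efficiencies discussed in this data-file:

```python
# Ideal (lossless) DC-DC transfer functions in continuous conduction mode.
# D is the duty cycle: the fraction of each switching period the MOSFET is on.

def buck_vout(vin: float, duty: float) -> float:
    """Buck converter: the output is the duty-cycle fraction of the input voltage."""
    return vin * duty

def boost_vout(vin: float, duty: float) -> float:
    """Boost converter: the output rises as the MOSFET on-time increases."""
    return vin / (1.0 - duty)

print(buck_vout(12.0, 0.5))   # 12V stepped down at 50% duty -> 6.0V
print(boost_vout(12.0, 0.5))  # 12V boosted at 50% duty -> 24.0V
```

Note the boost output grows without bound as D approaches 1 in the ideal model; in practice, parasitic resistances cap the achievable step-up ratio.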
What determines DC-DC power converters’ efficiency?
But what is the efficiency of a DC-DC converter? We think a good base case is around 95%, which is an optimized balance across a dozen different input variables. We think value will accrue to leading semi-conductor and power-electronics companies that can improve the efficiency of electrification technologies. Recent examples include Silicon Carbide (SiC) semi-conductors and zero-voltage switching.
Step-up or step-down ratio. In our model, we have captured a buck converter, where in the base case, the voltage is being stepped down by 50%. This is what underpins our 95% efficiency calculation. However, the efficiency falls to only 90%, if we step-down by 75%, as effectively all of the losses rise proportionately.
Power output. We also optimized our device for a nominal 1.0 Amps of current. At 10x lower current, efficiency falls to 87% as reverse recovery losses in the diode become disproportionately large. At 10x higher current, efficiency falls to 77% as inductor losses become disproportionately large.
Switching frequency. We optimized our device for 1MHz switching frequency. At 0.1MHz, efficiency falls to 89% and the losses are dominated by resistance in the MOSFETs and inductors. At 10MHz, efficiency falls to 78% and is dominated by the switching loss and reverse recovery loss on the diode.
Is the power grid becoming a bottleneck for the continued acceleration of renewables? The median approval time to connect to the grid for a new US power project has climbed by 30-days/year since 2001; and has doubled since 2015, to over 1,000 days (almost 3-years) in 2021. Wind and solar projects are now taking longest to inter-connect, due to their prevalence, lower power quality and remoteness. This data-file evaluates the data, looks for de-bottlenecking opportunities, and wonders about changing terms of trade in power markets.
Accelerating wind and solar are a crucial part of our roadmap to net zero. But we have also been worrying about bottlenecks, especially in power grids. Project developers are increasingly required to fund new power transmission infrastructure, before they are allowed to interconnect, usually costing $100-300/kW, but sometimes costing as much as the renewables projects themselves (data here). If there is one research note that spells out the upside we see in power grids and electrification, then it is this one. We also see upside in long-distance transmission, HVDCs, STATCOMs, transformers, various batteries.
Other technical papers have also raised the issue of rising interconnection times and power grid bottlenecks for wind and solar. And the US’s Lawrence Berkeley National Laboratory has also started tracking the ‘queue’ of power projects waiting to inter-connect. We have downloaded their database, spent about a day cleaning the data (especially the dates), and aimed to derive some conclusions below.
Methodological notes. The raw LBL database contains a read-out of over 24,000 US power projects that sought to inter-connect to a regional power grid, going back to 1995. However, 13,000 of these applications were withdrawn, 8,000 are still active/pending and 3,500 are classed as operational. 2,500 of the projects have complete data on (a) when they applied for permission to inter-connect to the grid and (b) when they were ultimately granted that permission, allowing us to calculate (c) the approval time (by subtracting (a) from (b)). But be warned, this is not a fully complete data-set. And some States, which have clearly constructed large numbers of utility-scale power projects, seemed not to report any data at all. Nevertheless, we think there are some interesting conclusions.
The median time to receive approval to inter-connect a new US power project to the grid has risen at an average rate of 30-days per year over the past two decades and took over 1,000 days in 2021, which is 2.8 years. This has doubled from a recent trough level of 500 days in 2015 (chart below) and a relatively flat level of 400-days in the mid-2000s.
Renewables projects now take longer to receive approval to connect to the power grid. Wind projects have always taken longer to receive approvals. And recent wind projects continued taking 30% longer than the total sample of approved projects in 2019-21. More interestingly, however, solar projects have gone from taking 50% less time to receive grid connection approvals in the mid-2000s to taking 10% longer than average, especially in 2020 and 2021. Why might this be? We consider five factors…
#1. Project quantity is probably the largest bottleneck. The numbers of different projects receiving permission to connect to the grid are tabulated below. A surge in wind projects in 2005-2012 correlates with the first peak in inter-connection approval times on the chart above. And a more recent peak in utility-scale solar, battery and wind projects correlates with the recent peak of approval times in 2020-21. This suggests a key reason it is taking more time to approve new inter-connections is that grid operators are backlogged. It would be helpful to resolve the backlog. And we wonder if the result might be a change in the terms of trade: favoring grid operators more, favoring capital goods companies more, and requiring project developers to be more accommodating?
#2. Project sizing does not directly explain inter-connection approval times. The average utility-scale solar project has become larger over time (now surpassing 150MW). But wind projects have always been larger than the average power project seeking approval to connect to the grid. And there are many small gas, coal and nuclear projects that take longer to receive connection approval than large ones. So we do not think there is a direct link between power project size and the time needed to approve an inter-connection. However, there may be an indirect link. It is clearly going to take longer to study the impacts of connecting 10 x 100 MW solar projects (in 10x separate locations), than 1 x 1,000 MW nuclear plant, even though both have the same nameplate capacity.
#3. Connection voltage does not explain inter-connection approval times. The median project in the database is connecting into the grid at 130kV. The median wind project is at 145kV. The median gas project is at 135kV. The median solar project is at 110kV. The median battery project is connecting at 140kV. Although we do think that moving power over longer distances is increasingly going to favor higher voltage transmission and also pull on the transformer market.
#4. Power quality seems to explain relative approval times and increasingly so. Another interesting trend is the difference in interconnection approval times between different types of power projects. Wind and solar projects now take 30% and 10% longer than average to receive approvals. Whereas gas, batteries and hydro now take 15%, 50% and 90% less time than average to receive approvals. We think this is linked to power quality. On a standalone basis, wind and solar may tend to reduce the inertia, frequency regulation, reactive power compensation and balancing of power grids. Whereas gas power plants, batteries and hydro typically help with these metrics (each in their own way). We think this adds evidence in support of our power grids thesis.
#5. Remote projects take longer to approve, as they will likely require more incremental transmission lines. The shortest interconnection times across all power projects were in Texas, which already has a very large power grid, arguably the best energy endowment and infrastructure in the world. But other more densely populated states (Michigan, Illinois) tended to have 50% lower times to approve inter-connections than some of the least densely populated states (the Dakotas, Iowa, Montana), where we think new power generation likely needs to be moved further to reach demand centers. Location matters for levelized cost of electricity. Again, we think this evidence also supports our power transmission thesis.
Overall, the data suggest that there are growing bottlenecks to inter-connect renewables to power grids; especially in areas with a surge of activity, where power quality is increasingly important, and in more remote areas that require new transmission infrastructure. We think this trend will continue. It would be helpful to relieve these bottlenecks, to sustain the upwards trajectory of wind and solar. But we do think the terms of trade are shifting in favor of grid operators, power electronics, transmission infrastructure, developers that can use their own power and consumers that can demand-shift.