Offshore wind: will costs follow Moore’s Law?

Some commentators expect the levelized costs of offshore wind to fall by another two-thirds by 2050. The justification is some aeolian equivalent of Moore's Law. Our 16-page report draws five contrasts. Wind costs are most likely to move sideways, even as the industry builds larger turbines. Implications for companies are also explored.


Deflating wind costs are explored on pages 2-3. Deflation is important. But consensus forecasts could be dangerously wrong in our opinion.

Our report lays out five reasons why wind looks different to Moore's Law, which has doubled computing performance every 18 months since 1965.

(1) Offshore wind costs are not following Moore's Law yet. And after reviewing 50 patents from one of the world's leading wind developers, we think the industry's largest focus is not on costs (pages 4-5).

(2) Making turbines ever-larger is "the opposite" of making transistors ever-smaller. We review the physics and a simple issue around extrapolation (pages 5-6).

(3) Larger turbines face larger challenges. Unlike Moore's Law, physics "works against" the up-scaling of wind turbines (pages 7-9).

(4) Larger turbines are more carbon-intensive, using advanced materials that are 10-25x more costly and CO2-intensive, paradoxically requiring more fossil fuels. This looks like "the opposite" of the bootstrapping that helped propel Moore's Law (pages 10-13).

(5) Wind turbines crowd out wind turbines, as grids ultimately become saturated with highly inter-correlated wind generation. This re-inflates marginal costs. Again, this is the opposite of bootstrapping (pages 14-15).

Our conclusions for companies are drawn out on page 16.

Britain’s industrial revolution: what happened to energy demand?

Britain’s remarkable industrialization in the 18th and 19th centuries was part of the world’s first great energy transition. In this short note, we have aggregated data, estimated the end uses of different energy sources in the Industrial Revolution, and drawn five key conclusions for the current Energy Transition.


In this short note, we have sourced and interpolated long-run data on energy supplies in England and Wales, by decade, from 1560-1860. The graph is a hockey stick, with Britain's total energy supplies ramping up 30x, from 18TWH to 515TWH per year. Part of this can be attributed to England's population rising 6x, from around 3M people to 18M people over the same timeframe. The remainder of the chart is dominated by a vast increase in coal use from the 1750s onwards.

A more comparable way to present the data is shown below (and tabulated here). We have divided through by population to present the data on a per-capita basis. But we have also adjusted each decade’s data by estimated efficiency factors, to yield a measure of the total useful energy consumed per person. For example, coal supplies rose 40x from 1660 to 1860, but per-capita end use of coal energy only rose c6.5x, because the prime movers of the early industrial revolution were inefficient. This note presents our top five conclusions from evaluating the data.
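As a rough illustration of this methodology, the sketch below recomputes useful energy per capita from decade-level inputs. The figures are hypothetical placeholders, chosen only to echo the orders of magnitude discussed in this note, not the underlying data-set.

```python
# Minimal sketch of the per-capita, efficiency-adjusted calculation described above.
# The rows are hypothetical placeholders, not the actual data-set behind this note.
decades = [
    # (decade, gross energy supply in TWh, population in millions, average conversion efficiency)
    (1660, 28, 5.5, 0.20),
    (1860, 515, 18.0, 0.11),
]

for decade, supply_twh, population_m, efficiency in decades:
    gross_mwh_per_capita = supply_twh * 1e6 / (population_m * 1e6)   # TWh -> MWh, per person
    useful_mwh_per_capita = gross_mwh_per_capita * efficiency        # adjust for prime-mover efficiency
    print(f"{decade}: {gross_mwh_per_capita:.1f} MWh gross, {useful_mwh_per_capita:.1f} MWh useful per person")
```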

Five Conclusions into Energy Demand from the Industrial Revolution

(1) Context. Useful energy demand per capita trebled from 1MWH pp pa in the 1600s to over 3MWH pp pa in the mid-19th century, an unprecedented increase.

For comparison, today’s useful energy consumption per capita in the developed world is 6x higher again, as compared with the 1850s. A key challenge for energy transition in the developed world is that people want to keep consuming 20MWH pp pa of energy, rather than reverting to pre-industrial or early-industrial energy levels. As a rough indicator, 20MWH is the annual energy output of c$120-150k of solar panels spread across 600 m2 (model here).

Furthermore, today's useful energy consumption in the emerging world is only c2x higher than Britain's in the 1860s. In other words, large parts of the emerging world are in the very early stages of industrialization, comparable to where Britain was 150 years ago. Models of global decarbonization must therefore allow energy access to continue rising in the emerging world (charts below), and woe betide any attempt to stop this train.

(2) Shortages as a driver of transition? One of the great cliches among energy analysts is that we “didn’t emerge from the stone age because we ran out of stone”. In Britain’s case, in fact, the data suggest we did shift from wood to coal combustion as we began to run out of wood.

Wood use and total energy use both declined in the 16th Century, and coal first began ramping up as an alternative heating fuel (charts above). In 1560, Britain’s heating fuel was 70% wood and 30% coal. By 1660, it was 70% coal and 30% wood. This was long before the first coal-fired pumps, machines or locomotives.

This is another reminder that energy transitions tend to occur when incumbent energy sources are under-supplied and highly priced, per our research below. Peak supply tends to precede peak demand, not the other way around.

(3) Energy transition and abolitionism? Amazingly, human labor was the joint-largest source of useful energy around 1600, at c25% of total final energy consumption. But reliance upon human muscle power as a prime mover was bound up in one of the greatest atrocities of human history: the coercion of millions of Africans, slaves and serfs to row in galleys, transport bulk materials and work the land.

By the time Britain banned the slave trade in 1807, human muscle power was supplying just 10% of usable energy. By the time of the Abolition Act in 1833, it was closer to 5%.

Some people today feel that unmitigated CO2 emissions are an equally great modern-day evil. On this analogy, it could be the vast ramp-up of renewable energy that eventually helps to phase out conventional energy. But our current models below do not suggest that renewables can reach sufficient size or scale for this feat until around 2100.

What is also different today is that policy-makers seem intent on banning incumbent energy sources before we have transitioned to alternatives. We have never found a good precedent for bans working in past energy systems, although US Prohibition, from 1920-1933, makes an interesting case study.

(4) Jevons Paradox states that more efficient energy technologies cause demand to rise (not fall) as better ways of consuming energy simply lead to more consumption.

Hence no major energy source in history has ‘peaked’ in absolute terms. Even biomass and animal traction remain higher in absolute terms than before the industrial revolution, both globally and in our UK data from 1560-1860.

Jevons Paradox is epitomized by the continued emergence of new coal-consuming technologies in the chart below, which in turn stoked the ascent of coal demand, even as wood demand was never totally displaced.

The fascinating modern-day equivalent is that the increasing supply of renewable electricity will create new demand for electric vehicles, drones, flying cars, smart energy and digitization, rather than simply substituting out fossil fuels.

(5) Long timeframes. Any analysis of long-term energy markets inevitably concludes that transitions take decades, even centuries. This is visible in the 300-year evolution plotted above, and in the full data-set linked below. Attempts to speed up the transition create the paradox of very high costs or potential bubbles. We have also compiled a helpful guide to transition timings by mapping twenty prior technology transitions. For further information, our recent research, summarized below, covers all of these topics.


Source: Wrigley, E. A. (2011). Energy and the English Industrial Revolution. Cambridge: Cambridge University Press; TSE estimates. With thanks to the Renewable Energy Foundation for sharing the data-set.

Prevailing wind: new opportunities in grid volatility?

UK wind power has almost trebled since 2016. But its output is volatile, now varying between 0-50% of the total grid. Hence this 14-page note assesses the volatility, using granular, hour-by-hour data from 2020. EV charging and smart energy systems screen as the best new opportunities. Gas-fired backups also remain crucial to ensure grid stability. The outlook for grid-scale batteries has actually worsened. Finally, downside risks are quantified for future realized wind power prices.


The rise of renewables in the UK power grid is profiled on page 2, showing how wind has displaced coal and gas to-date.

But wind output is volatile, as shown on page 3: hourly volatility within the UK grid is now 2.5x higher than in 2016.

Power prices have debatably increased due to the scale-up of wind, as shown on page 4.

But price volatility measures are mixed, as presented on pages 5-6. We conclude that the latest data actually challenge the case for grid-scale batteries and green hydrogen.

Downside volatility has increased most, as quantified on pages 7-8, which find a vast acceleration in negative power pricing, particularly in 2020.

The best opportunities are therefore in absorbing excess wind power. EV charging and smart energy systems are shown to be best-placed to benefit, on pages 9-10.

Upside volatility in power prices has not increased yet, but it will do, if gas plants shutter. The challenge is presented on pages 11-13, including comparisons with Californian solar.

Future power prices realized by wind assets are also likely to be lower than the average power prices across the UK grid, as is quantified on page 14. This may be a risk for unsubsidized wind projects, or when contracts for difference have expired.
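As an illustration of this kind of hour-by-hour screening, the sketch below computes hourly wind-share volatility, negative-price hours and the output-weighted price captured by wind. The file name and column names are hypothetical; substitute your own hourly data-set.

```python
import pandas as pd

# Minimal sketch of the hour-by-hour screening described above.
# 'uk_grid_2020.csv' and its column names are hypothetical placeholders.
df = pd.read_csv("uk_grid_2020.csv", parse_dates=["timestamp"])

df["wind_share"] = df["wind_mw"] / df["demand_mw"]            # wind as a share of the grid
hourly_volatility = df["wind_share"].diff().abs().mean()      # average hour-on-hour swing
negative_price_hours = (df["price_gbp_mwh"] < 0).sum()        # count of negative-price hours
realized_wind_price = (df["price_gbp_mwh"] * df["wind_mw"]).sum() / df["wind_mw"].sum()

print(f"Mean hourly change in wind share: {hourly_volatility:.1%}")
print(f"Hours with negative power prices: {negative_price_hours}")
print(f"Output-weighted price captured by wind: £{realized_wind_price:.1f}/MWh")
```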

Geothermal energy: what future in the transition?

Drilling wells and lifting fluids to the surface are core skills in the oil and gas industry. Hence could geothermal be a natural fit in the energy transition? This 17-page note finds next-generation geothermal economics can be very competitive, both for power and heat. Pilot projects are accelerating and new companies are forming. But the greatest challenge is execution, which may give a natural advantage to incumbent oil and gas companies.


The development of the geothermal industry to-date is summarized on pages 2-4. We also explain the rationale for geothermal in the energy transition.

The costs of a geothermal project can be disaggregated across wells (page 5), pumping (pages 6-7) and power turbines (pages 8-9). We draw out rules of thumb, to help you understand the energy economics.

The greatest challenge is geological complexity, as argued on page 10. It is crucial to find the best rocks and mitigate execution risks.

Base case economics? Our estimates of marginal costs are presented for traditional geothermal power (page 11), next-generation deep geothermal electricity (page 12) and using geothermal heat directly (page 13).

Leading companies are profiled on pages 14-16, after tabulating 8,000 patents. We also reviewed incumbent suppliers, novel pilots, and earlier-stage companies.

We conclude that geothermal energy is a natural fit for incumbent oil and gas companies to diversify into renewables, and arguably a much better fit than wind and solar (page 17).

Biomass and BECCS: what future in the transition?

20% of Europe’s renewable electricity currently comes from biomass, mainly wood pellets, burned in facilities such as Drax’s 2.6GW Yorkshire plant. But what are the economics and prospects for biomass power as the energy transition evolves? This 18-page analysis leaves us cautious.


Arguments in favor of biomass are outlined on pages 2-3, using the carbon cycle to show how biomass could be considered zero-carbon in principle.

Examples of biomass power plants are described on pages 4-5, focusing upon Drax and RWE, and drawing upon data from 340 woody biomass facilities in US power.

The economics of producing biomass pellets are presented on pages 6-7, including a detailed description, capex breakdown, and critique of input assumptions.

The economics of burning biomass pellets to generate electricity are presented on pages 8-9, again with a detailed description and critique of input assumptions.

The economics of capturing and disposing of the CO2 are presented on pages 10-12, allowing us to build up a full end-to-end abatement cost for BECCS.

Energy economics are disaggregated on pages 13-14, in order to derive a measure of energy return on energy invested (EROEI) and CO2 intensity (in kg/kWh). Surprisingly, we find the EROEI for BECCS to be negative.
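For reference, the sketch below shows how these two headline metrics are defined; the inputs are hypothetical placeholders, not the figures from our models.

```python
# Minimal sketch of the EROEI and CO2-intensity definitions used above.
# Inputs are hypothetical placeholders, not the figures from our BECCS models.

def eroei(energy_delivered_kwh: float, energy_invested_kwh: float) -> float:
    """Energy return on energy invested: delivered energy per unit of energy consumed upstream."""
    return energy_delivered_kwh / energy_invested_kwh

def co2_intensity(total_co2_kg: float, electricity_delivered_kwh: float) -> float:
    """CO2 intensity of delivered electricity, in kg/kWh."""
    return total_co2_kg / electricity_delivered_kwh

# Hypothetical example: growing, pelleting, shipping and burning biomass, plus the
# parasitic load of CO2 capture, can consume more energy than the plant delivers.
print(eroei(1.0, 1.2))           # a ratio below 1.0 implies a net energy loss
print(co2_intensity(0.4, 1.0))   # kg of CO2 per kWh delivered
```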

Is it sustainable? We answer this question on pages 15-17, arguing that biomass energy and BECCS, properly considered, both have a higher CO2 intensity than gas.

Conclusions and implications are presented on page 18, including bridges for the total CO2 intensity of biomass and BECCS.

Solar energy: is 50% efficiency now attainable?

Most commercial solar cells achieve 15-25% efficiencies, converting incoming solar energy into usable electricity. But a new record was published in 2020, achieving 47.1% conversion efficiency. The paper used "a monolithic, series-connected, six-junction inverted metamorphic structure under 143 suns concentration". Our goal in this short note is to explain the achievement and its implications.


“A monolithic, series-connected, six-junction inverted metamorphic structure under 143 suns concentration”

p-n junctions are the foundation of all solar cells. Each side of the junction is doped: the "n"-side with donor atoms that surrender electrons, and the "p"-side with acceptor atoms that take up electrons, leaving mobile holes. When incoming solar energy strikes the junction, it may dislodge an electron and leave behind a hole. The liberated electron propagates towards the "n"-side, while the hole propagates towards the "p"-side, thus creating a direct current.

Bandgap is the energy needed to dislodge an electron from its usual orbit, so that it is free to move through a p-n junction. The energy of light varies with wavelength (shorter wavelengths carry more energy per photon). Photons with energy below the bandgap will not suffice to dislodge electrons: they pass through the material and their energy is not captured. Photons with energy above the bandgap will have excess energy left over after dislodging electrons: the excess energy is lost as heat.

Single-junction solar cells are composed of p-n junctions made of a single material, most commonly crystalline silicon in today's commercial solar industry. Crystalline silicon has a bandgap of c1.1eV and achieves its best conversion efficiency in light with 700-1,000nm wavelengths (red and infra-red). It captures energy less efficiently from shorter-wavelength, higher-energy light (such as 400-700nm), where the excess energy above the bandgap is lost as heat, and it cannot capture longer-wavelength light beyond c1,100nm, which falls below the bandgap.
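The arithmetic behind these wavelength thresholds can be sketched as below, using the standard conversion E(eV) ≈ 1240 / wavelength(nm); the c1.1eV silicon bandgap is an approximation.

```python
# Minimal sketch of the photon-energy arithmetic behind the bandgap discussion above.
# E(eV) ~ 1240 / wavelength(nm), so shorter wavelengths carry more energy per photon.

def photon_energy_ev(wavelength_nm: float) -> float:
    return 1240.0 / wavelength_nm   # hc in eV.nm, to three significant figures

bandgap_ev = 1.1   # approximate bandgap of crystalline silicon

for wavelength_nm in (400, 700, 1100, 1500):
    energy = photon_energy_ev(wavelength_nm)
    if energy < bandgap_ev:
        outcome = "passes through: not absorbed"
    else:
        outcome = f"absorbed, {energy - bandgap_ev:.2f} eV lost as heat"
    print(f"{wavelength_nm} nm -> {energy:.2f} eV per photon: {outcome}")
```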

Multi-junction solar cells aim to overcome the limitations of single-junction solar cells, combining multiple p-n junctions, made of multiple solar materials, to capture a broader range of the spectrum. For example, the six-junction solar cell discussed in this note has six separate junctions, connected in series, to capture light across c350-1,700nm wavelengths, equivalent to c65-85% of all the energy in sunlight.

Group III-V alloys are used in different combinations in each of these junctions, tuning each junction's bandgap to capture a different range of wavelengths. These alloys are composed of elements from Groups III and V of the periodic table. Group III includes boron, aluminium, gallium and indium. Group V includes nitrogen, phosphorus, arsenic and antimony.

The junctions are usually stacked with the highest-energy absorber on top (i.e., junction 6). Photons that lack sufficient energy to dislodge electrons in junction 6 pass through it, and have additional chances of being absorbed in junctions 5 through 1.

The challenge is how to stack these six junctions on top of each other in a way that limits recombination and resistance, both of which impair solar cell efficiency.

The challenge of recombination?

Recombination occurs when dislodged electrons and holes re-combine in a solar cell, thereby lowering the current reaching the current collectors. If recombination re-emits photons, it is known as radiative recombination. Group III-V solar cells are particularly sensitive to recombination around dislocations.

Dislocations are abrupt changes in the crystal structure of a material. One physical effect is that dislocations allow atoms to glide or slip past one another at low stress levels. One optoelectronic effect is to impede current and encourage recombination of electrons and holes.

One type of dislocation, known as a threading dislocation because of its shape, extends beyond the surface of the strained layer and throughout the material, so it can be particularly deleterious to solar cell performance.

Multi-junction solar cells are particularly prone to dislocations because each junction is made of a different material, yet the whole stack must be grown as a single monolith from lattice-mismatched components.

Monolithic materials are formed as a single, continuous and unbroken crystal structure, all the way to their edges, with minimal defects or grain boundaries. In turn, efficiency losses from recombination should be minimized. But it is very difficult to manufacture monolithic materials from lattice-mismatched components.

Lattice-mismatched materials have different lattice constants, meaning the repeating units of their crystal structures are of different sizes. In turn, they will not adhere well to one another, and their boundaries are prone to dislocations.

The solution: metamorphic epitaxy?

A technique called metamorphic epitaxy was used to create the monolithic six-junction solar cell described above, and to overcome the inter-related challenges of recombination at dislocations in lattice-mismatched materials.

Epitaxy is the process of orientation-controlled growth of crystals on top of other crystals. The 47% efficient solar cell used a variant called organometallic vapour phase epitaxy (OMVPE).

Metamorphic epitaxy minimizes dislocations around the active site of an engineered material. This is achieved by relieving the strain around lattice-mismatched boundaries, encouraging dislocations to occur away from the active site of the material. Specifically, materials known as Compositionally Graded Buffers (CGBs) were introduced between the fourth and sixth junctions of the six-junction solar cell, as these were the boundaries most prone to dislocations.

Specifically, these six-junction solar cells were monolithically grown on a single 2×3 cm GaAs substrate, at 550-750C temperatures, in an atmospheric-pressure OMVPE system.  “Growth begins with the high-bandgap lattice-matched junctions [on the bottom], leaving these high-power-producing junctions without dislocations”.

The cell was then inverted, as the high-bandgap junctions need to sit at the top of the cell (in other words, the cell is printed upside down and then turned over). Gold was electroplated onto the contact of the inverted structure (literally, "gold-plated"!), then the cell was epoxied onto a flat silicon wafer. The GaAs substrate was removed by chemical etching. A front-side grid of NiAu was deposited by photolithography. Finally, an anti-reflective coating of MgF2/ZnS/MgF2/ZnS was thermally deposited on the top of the cell.

The full 6J IMM structure consisted of 140 layers, including individual compositional step-graded buffer layers. The total growth time was 7.5 h.

1 sun’s concentration?

Under 1 sun’s solar intensity, the cell described above achieved 39.2% efficiency. This is the highest 1-Sun conversion efficiency demonstrated by any technology to-date. The prior record is 38.8% for a five-junction bonded III–V solar cell.

The efficiency is very high, because the voltages of each junction add up to a high total voltage. However, the current density in each junction was low. The efficiency could have been higher with a higher current density, which in turn, is achieved by concentrating the incoming sunlight.
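A minimal sketch of this series-stack arithmetic is shown below: voltages add across the junctions, while the current is limited by the weakest junction. The per-junction values are hypothetical placeholders, not the measured figures from the paper.

```python
# Minimal sketch of series connection in a multi-junction cell, as described above.
# Voltages add across the stack; current is limited by the lowest-current junction.
# All per-junction values below are hypothetical placeholders.

junction_voltages_v = [1.40, 1.10, 0.95, 0.80, 0.55, 0.35]        # junctions 6 (top) to 1 (bottom)
junction_currents_ma_cm2 = [8.2, 8.0, 8.5, 8.1, 7.9, 8.3]

total_voltage_v = sum(junction_voltages_v)                        # series voltages add
limiting_current_ma_cm2 = min(junction_currents_ma_cm2)           # weakest junction sets the current
power_mw_cm2 = total_voltage_v * limiting_current_ma_cm2
print(f"{total_voltage_v:.2f} V x {limiting_current_ma_cm2:.1f} mA/cm2 = {power_mw_cm2:.0f} mW/cm2")
```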

143 suns’ concentration?

Concentration of incident light improves solar cell efficiency. More concentrated light dislodges more electrons, which means a higher current density. In turn, a higher current density raises the cell's operating voltage (open-circuit voltage rises logarithmically with the photocurrent), so a larger share of the incoming energy is converted into electricity.

Concentrating solar light is also desirable as a way to lower costs, as multi-junction solar cells are expensive to produce. Concentrating the light from 1 square meter onto 1 square centimeter, for example, reduces the area of solar materials required by a factor of 10,000.

Joule losses set the upper limit on the solar concentration that will maximize efficiency. Joule losses are the loss of electricity as heat when electric current passes through a conductor. They are a square function of current and a linear function of resistance. So they rise quadratically as solar intensity rises linearly.

Lower resistance will help to limit Joule losses. In the solar cell described above, several challenges to keeping resistance low were observed.

Each junction is connected in series in the cell. The current flows between each junction through a “tunnel interconnection”. Resistance through these tunnel junctions was found to rise with current, placing a practical limit on solar concentrations.

Internal resistances within each junction were also higher than desired. They appear to have been elevated by the high temperatures during epitaxy and by dopant diffusion (particularly in Zn-containing layers).

At the top junction, the 2.1eV bandgap material has a high sheet resistance when conducting charge laterally to the metal grid fingers that serve as current collectors for the cell's electrical circuit.

Reducing the effective series resistance to 0.015 Ω cm2 appears possible, by analogy to previous four-junction solar cells, which would allow the six-junction cell described above to surpass 50% efficiency at 1,000-2,000 suns' concentration. The maximum theoretical efficiency is 62%.
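To illustrate the Joule-loss arithmetic above, the sketch below scales a hypothetical 1-sun current density up with concentration and applies the resistance target cited in this section; the 1-sun current density is an assumption for illustration only.

```python
# Minimal sketch of Joule-loss scaling with concentration: loss = I^2 x R, so losses
# rise quadratically as current (and hence concentration) rises linearly.
# The 1-sun current density is a hypothetical placeholder; 0.015 ohm.cm2 is the
# effective series resistance target discussed above.

series_resistance_ohm_cm2 = 0.015
current_density_1sun_a_cm2 = 0.010        # hypothetical photocurrent density at 1 sun

for suns in (1, 143, 1000, 2000):
    current_a_cm2 = current_density_1sun_a_cm2 * suns           # current scales ~linearly with concentration
    joule_loss_w_cm2 = current_a_cm2**2 * series_resistance_ohm_cm2
    incident_w_cm2 = 0.1 * suns                                 # 1 sun ~ 0.1 W/cm2 incident power
    print(f"{suns:>5} suns: {joule_loss_w_cm2:.3f} W/cm2 lost, {joule_loss_w_cm2 / incident_w_cm2:.1%} of incident power")
```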

Commercial implications?

47-50% efficient solar cells are a good incremental improvement. To put the ‘breakthrough’ into context, the previous record for a multi-junction solar cell was 46% efficiency at 508 suns, using a four-junction device. There is scope for multi-junction solar cell efficiency to improve further.

The cell was also very small, at 0.1cm2. When solar 'records' are certified, cells are usually required to be at least 1cm2 in area, as a testing criterion.

Its production was also very complex, taking 7.5 hours to grow 140 separate layers. Complex structures are expensive and more prone to degradation, which makes commerciality challenging.

We conclude that a 47-50% efficient solar cell is a tremendous technical achievement. But the evidence does not yet suggest proximity to commercialising ultra-efficient multi-junction solar cells like this at mass scale.


Source: Geisz, J. F., France, R. M., Schulte, K. L., Steiner, M. A., Norman, A. G., Guthrey, H. L., Young, M. R., Song, T. & Moriarty, T. (2020). Six-junction III–V solar cells with 47.1% conversion efficiency under 143 Suns concentration. National Renewable Energy Laboratory (NREL), Golden, CO, USA.

What oil price is best for energy transition?

It is possible to decarbonize all of global energy by 2050. But $30/bbl oil prices would stall this energy transition, killing the relative economics of electric vehicles, renewables, industrial efficiency, flaring reductions, CO2 sequestration and new energy R&D. This 15-page note looks line by line through our models of oil industry decarbonization. We find stable, $60/bbl oil is ‘best’ for the transition.


Our roadmap for the energy transition is outlined on pages 2-4, obviating 45Mbpd of long-term oil demand by 2050, looking across each component of the oil market.

Vehicle fuel economy stalls when oil prices are below $30/bbl, amplifying purchases of inefficient trucks and making EV purchases deeply uneconomical (pages 5-6).

Industrial efficiency stalls when oil prices are below $30/bbl, as oil outcompetes renewables and more efficient heating technologies (page 7).

Cleaning up oil and gas is harder at low oil prices, cutting funding for flaring reduction, methane mitigation, digitization initiatives and power from shore (pages 8-9).

New energy technologies are developed more slowly when fossil fuel prices are depressed, based on R&D budgets, patent filings and venturing data (pages 10-11).

CO2 sequestration is one of the largest challenges in our energy transition models. CO2-EOR is promising, but the economics do not work below $40/bbl oil prices (pages 12-14).

Our conclusion is that policymakers should exclude high-carbon barrels from the oil market to avoid persistent, depressed oil prices (as outlined on page 15).

Will renewable growth slow down from 2020?

The growth of renewables has been revolutionary, with wind and solar costs emerging towards the bottom of the global cost curve, scaling up at a pace of 300TWH pa. However, we find unsettling evidence that the market could slow in the 2020s, as curtailment accelerates in heartland markets such as California, Germany and the UK. The rationale, and all the underlying data, are included in this Excel file.

Ramp Renewables? Portfolio Perspectives.

It is often said that Oil Majors should become Energy Majors by transitioning to renewables. But what is the best balance based on portfolio theory? Our 7-page note answers this question, by constructing a mean-variance optimisation model. We find a c0-20% weighting to renewables maximises risk-adjusted returns. The best balance is 5-13%. But beyond a c35% allocation, both returns and risk-adjusted returns decline rapidly.


Pages 2-3 outline our methodology for assessing the optimal risk-adjusted returns of a Major energy company’s portfolio, including the risk, return and correlations of traditional investment options: upstream, downstream and chemicals.

Page 4 quantifies the lower returns that are likely to be achieved on renewable investment options, such as wind, solar and CCS, based on our recent modeling.

Pages 5-6 present an “efficient frontier” of portfolio allocations, balanced between traditional investment options and renewables, with different risk and return profiles.

Pages 6-7 draw conclusions about the optimal portfolios, showing how to maximise returns, minimise risk and maximise risk-adjusted returns (Sharpe ratio).
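For illustration, the sketch below shows the shape of such a mean-variance optimisation; the return, risk and correlation inputs are hypothetical placeholders, not the assumptions used in our note.

```python
import numpy as np

# Minimal sketch of a mean-variance optimisation, as described above. The return, risk
# and correlation inputs are hypothetical placeholders, not the figures from our note.
returns = np.array([0.12, 0.10, 0.11, 0.06])          # upstream, downstream, chemicals, renewables
stdevs  = np.array([0.25, 0.18, 0.20, 0.08])
corr = np.array([[1.0, 0.6, 0.5, 0.1],
                 [0.6, 1.0, 0.6, 0.1],
                 [0.5, 0.6, 1.0, 0.1],
                 [0.1, 0.1, 0.1, 1.0]])
cov = np.outer(stdevs, stdevs) * corr

best_sharpe, best_weights = -np.inf, None
for _ in range(100_000):                               # random portfolios trace out the efficient frontier
    w = np.random.dirichlet(np.ones(4))
    port_return = w @ returns
    port_risk = np.sqrt(w @ cov @ w)
    sharpe = port_return / port_risk                   # risk-free rate omitted for simplicity
    if sharpe > best_sharpe:
        best_sharpe, best_weights = sharpe, w

print("Approximate Sharpe-maximising weights:", np.round(best_weights, 2))
print("Renewables weighting:", round(float(best_weights[3]), 2))
```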

The work suggests oil companies should primarily remain oil companies, working hard to improve the efficiency and lower the CO2-intensities of their base businesses.

Patent Leaders in Energy

Technology leadership is crucial in energy. It drives costs, returns and future resiliency. Hence, we have reviewed 3,000 recent patent filings, across the 25 largest energy companies, in order to quantify our “Top Ten” patent leaders in energy.


This 34-page note ranks the industry’s “Top 10 technology-leaders”: in upstream, offshore, deep-water, shale, LNG, gas-marketing, downstream, chemicals, digital and renewables.

For each topic, we profile the leading company, its edge and the proximity of the competition.

Companies covered by the analysis include Aramco, BP, Chevron, Conoco, Devon, Eni, EOG, Equinor, ExxonMobil, Occidental, Petrobras, Repsol, Shell, Suncor and TOTAL.

Upstream technology leaders have been discussed in greater depth in our April-2020 update, linked here.

