Study on UK Wind and Solar potential fails on costs

Oxford University’s Smith School of Enterprise and the Environment in August published a report “Could Britain’s energy demand be met entirely by wind and solar?“, a short briefing “Wind and solar power could significantly exceed Britain’s energy needs” with a press release here. Being a (slightly) manic beancounter, I will review the underlying assumptions, particularly the costs.

Summary Points

  • Projected power demand is likely on the high side, as demand will fall as energy becomes more expensive.
  • The report assumes massively increased load factors for wind turbines. Much of this increase comes from using benchmarks contingent on technological advances.
  • The theoretical UK scaling up of wind power is implausible: 3.8x for onshore wind, 9.4x for fixed offshore and over 4,000x for floating offshore wind, all to be achieved in less than 27 years.
  • The most recent cost of capital figures are from 2018, well before the recent steep rises in interest rates. The claim of falling discount rates is false.
  • The current wind turbine capacity is still majority land-based, with a tiny fraction floating offshore. A shift in the mix to more expensive technologies leads to an 83% increase in average levelised costs. Even with the improbable load factor increases, the average levelised cost increase is still 37%.
  • The biggest cost rise is from the need to store days’ worth of electricity. The annual cost could be greater than the NHS 2023/24 budget.
  • The authors have not factored in the considerable risks of diminishing marginal returns.

Demand Estimates

The briefing summary states

299 TWh/year is an average of 34 GW, compared with 30 GW average demand in 2022 at grid.iamkate.com. I have no quibble with this value. But what is the five-fold increase by 2050 made up of?

From page 7 of the full report.

So 2050 maximum energy demand will be slightly lower than today? For wind (comprising 78% of potential renewables output), the report reviews the estimates in Table 1, reproduced below as Figure 1.

Figure 1: Table 1 from page 10 of the working paper

The study’s estimates of output are quite high compared with previous ones, but things have moved on. This is of course output per year. If the wind turbines operated at 100% capacity, then the capacity required to generate this 24 hours a day, 365.25 days a year, would be 265.5 GW: 23.5 GW onshore, 64 GW fixed offshore and 178 GW floating offshore. In my opinion 1,500 TWh is very much on the high side, as demand will fall as energy becomes far more expensive. Car use will fall, as will energy use in domestic heating when the considerably cheaper domestic gas is abandoned.
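
As a sanity check, the arithmetic converting annual energy to average power can be sketched in a few lines (a minimal illustration of the conversion, not taken from the report):

```python
# Convert an annual energy total (TWh/yr) into average power (GW).
HOURS_PER_YEAR = 24 * 365.25  # 8,766 hours

def avg_power_gw(annual_twh: float) -> float:
    return annual_twh * 1000 / HOURS_PER_YEAR  # TWh -> GWh, then per hour

current = avg_power_gw(299)   # ~34 GW, matching grid.iamkate.com
future = avg_power_gw(1500)   # ~171 GW for the report's 2050 demand
print(f"{current:.0f} GW now, {future:.0f} GW in 2050")
```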

Wind Turbine Load Factors

Wind turbines don’t operate at anything like 100% of capacity, and the report does not assume they do. But it does assume load factors of 35% for onshore and 55% for offshore. Currently floating offshore is insignificant, so the offshore categories can be combined. The UK Government produces quarterly data on renewables, including load factors. In 2022 this averaged about 28% for onshore wind (17.6% in Q3 to 37.6% in Q1) and 41% for offshore wind (25.9% in Q3 to 51.5% in Q4). This data, shown in four charts in Figure 2, does not seem to show an improving trend in load factors.

Figure 2 : Four charts illustrating UK wind load capacities and total capacities

The difference comes from the report using benchmark standards, not extrapolating from existing experience. See footnote 19 on page 15. The first reference cited is a 2019 DNV study for the UK Department for Business, Energy & Industrial Strategy. The title – “Potential to improve Load Factor of offshore wind farms in the UK to 2035” – should give a clue as to why benchmark figures might be inappropriate for calculating future average loads. Especially when the report discusses new technologies and much larger turbines being used, whilst also assuming some load factor improvements from reduced downtime for maintenance.

Scaling up

The report states on page 10

From the UK Government quarterly data on renewables, these are the figures for Q3 2022. Q1 2023 gives 15.2 GW onshore and 14.1 GW offshore, the latter almost entirely fixed. Current offshore floating capacity is 78 MW (0.078 GW). This implies that to reach the report’s objective of 1,500 TWh by 2050, onshore wind needs to increase 3.8 times, offshore fixed wind 9.4 times and offshore floating wind over 4,000 times. Could diminishing returns, in both output capacities and costs per unit of capacity, set in with this massive scaling up? Or maintenance problems from rapidly installing floating wind turbines of a size much greater than anything currently in service? On the other hand, the report notes that Scotland has higher average wind speeds than “Wales or Britain”, by which I suspect they mean that Scotland has higher average wind speeds than the rest of the UK. If so, they could be assuming a good proportion of the floating wind turbines will be located off Scotland, where wind speeds are higher and therefore the sea more treacherous. This map of just 19 GW of proposed floating wind turbines is indicative.
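
The scaling multiples can be reproduced from the current capacities. Note that the 2050 target capacities below are my illustrative round figures, chosen to be consistent with the multiples quoted, not taken verbatim from the report:

```python
# Implied scaling of UK wind capacity, 2023 -> 2050.
# Current capacities (GW) are from the UK Government quarterly data, Q1 2023.
# The 2050 targets are illustrative round figures consistent with the
# multiples quoted in the text, NOT verbatim from the report.
current_gw = {"onshore": 15.2, "fixed offshore": 14.1, "floating offshore": 0.078}
target_gw = {"onshore": 58.0, "fixed offshore": 132.0, "floating offshore": 320.0}

multiples = {tech: target_gw[tech] / gw for tech, gw in current_gw.items()}
for tech, m in multiples.items():
    print(f"{tech}: {m:,.1f}x")
```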

Cost of Capital

On page 36 the report states

You can indeed find these rates in “Table 2.7: Technology-specific hurdle rates provided by Europe Economics”. My quibble is not that they are 2018 rates, but that during 2008-2020 interest rates were at historically low levels. A 2023 paper should recognise that interest rates have since leapt globally. In the UK, base rates have risen from 0.1% in 2020 to 5.25% at the beginning of August 2023. This will surely affect the discount rates in use.
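
To see why this matters, consider the capital recovery factor, which converts upfront capital into an equivalent annual charge. The rates and 25-year life below are illustrative assumptions of mine, not the report’s figures:

```python
# Capital recovery factor: the constant annual charge per pound of
# upfront capital for a project life of n years at discount rate r.
def crf(r: float, n: int) -> float:
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

low = crf(0.06, 25)    # illustrative pre-2022 hurdle rate
high = crf(0.10, 25)   # illustrative post rate-rise hurdle rate
print(f"annual capital charge rises {high / low - 1:.0%}")
```

For capital-intensive generation like wind, where most of the levelised cost is upfront capex, a few percentage points on the discount rate moves the levelised cost substantially.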

Wind turbine mix

Costs of wind turbines vary from project to project. However, the location determines the scale of costs. It is usually cheaper to put up a wind turbine on land than to fix one to a sea bed and run a cable to land, which in turn is cheaper than anchoring a floating turbine, often in water too deep for a fixed foundation. If so, moving from land to floating offshore will increase average costs. For this comparison I will use some 2021 levelised costs of energy for wind turbines from the US National Renewable Energy Laboratory (NREL).

Figure 3 : Page 6 of the NREL presentation 2021 Cost of Wind Energy Review

The levelised costs are $34/MWh for land-based, $78/MWh for fixed offshore, and $133/MWh for floating offshore. Based on the 2022 outputs, the UK weighted average levelised cost was about $60/MWh. On the same basis, the report’s weighted average levelised cost for 2050 is about $110/MWh. But allowing for 25% load factor improvements for onshore and 34% for offshore brings the average levelised cost down to $82/MWh. So the different mix of wind turbine types leads to an 83% average cost increase, but efficiency improvements bring this down to 37%. Given the use of benchmarks discussed above, it would be reasonable to assume that the prospective mix-variance cost increase is over 50%, ceteris paribus.
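
These weighted averages can be reconstructed as follows. The 2022 generation split (roughly 42% onshore) is my estimate, and the 2050 shares follow the report’s implied 23.5/64/178 GW capacity split:

```python
# Weighted-average levelised cost of the wind fleet under two mixes.
# LCOE ($/MWh) from the NREL 2021 review; the 2022 generation split is
# my rough estimate, the 2050 split follows the report's capacity mix.
lcoe = {"land": 34.0, "fixed": 78.0, "floating": 133.0}
mix_2022 = {"land": 0.42, "fixed": 0.58, "floating": 0.0}
mix_2050 = {k: gw / 265.5 for k, gw in
            {"land": 23.5, "fixed": 64.0, "floating": 178.0}.items()}

def weighted(costs, shares):
    return sum(costs[k] * shares[k] for k in costs)

base = weighted(lcoe, mix_2022)      # ~$60/MWh
future = weighted(lcoe, mix_2050)    # ~$111/MWh, an increase of ~83%
# Report's assumed load-factor gains: 25% onshore, 34% offshore.
gains = {"land": 1.25, "fixed": 1.34, "floating": 1.34}
improved = weighted({k: v / gains[k] for k, v in lcoe.items()}, mix_2050)
print(round(base), round(future), round(improved))  # 60 111 83
```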

The levelised costs from the USA can be somewhat meaningless for the UK in the future, with maybe different cost structures. Rather than speculating, it is worth understanding why the levelised cost of a floating wind turbine is 70% more than a fixed offshore wind turbine, and 290% more (almost four times) than an onshore wind turbine. To this end I have broken down the levelised costs into their component parts.

Figure 4 : NREL Levelised Costs of Wind 2021 Component Breakdown. A) Breakdown of total costs B) Breakdown of “Other Capex” in chart A

Observations

  • Financial costs are NOT the costs of borrowing on the original investment. The biggest element is cost contingency, followed by commissioning costs. Therefore, I assume that the likely long-term rise in interest rates will impact the whole levelised cost.
  • Costs of turbines are a small part of the difference in costs.
  • Unsurprisingly, operating costs, including maintenance, are significantly higher out at sea than on land. Similarly for assembly & installation and for electrical infrastructure.
  • My big surprise is how much greater the cost of foundations is for a floating wind turbine than for a fixed offshore wind turbine. This needs further investigation. In the North Sea there is plenty of experience of floating massive objects with oil rigs, so the technology is not completely new.

What about the batteries?

The above issues may be trivial compared to the issue of “battery” storage for when 100% of electricity comes from renewables: for when the sun doesn’t shine and the wind doesn’t blow. This is particularly true in the UK, where there can be a few days of no wind, or even a few weeks of well below average wind. Interconnectors will help somewhat, but neighbouring countries could be experiencing similar weather systems, so might not have any spare. This requires considerable storage of electricity. How much will depend on the excess renewables capacity, the variability of weather systems relative to demand, and the acceptable risk of blackouts, or of leaving less essential users with limited or no power. As a ballpark estimate, I will assume 10 days of winter storage. 1,500 TWh of annual usage gives an average demand of 171 GW. In winter this might be 200 GW, or 48,000 GWh for 10 days, or 48 million MWh. The problem is how much this would cost.

In April 2023 a 30 MWh storage system was announced costing £11 million. This was followed in May by a 99 MWh system costing £30 million. These respectively cost £367,000 and £303,000 per MWh. I will assume there will be considerable cost savings in scaling this up, with a cost of £100,000 per MWh. Multiplying this by 48,000,000 gives a cost estimate of £4.8 trillion, or nearly twice the 2022 UK GDP of £2.5 trillion. If one assumes a 25-year life for these storage facilities, this gives a more “modest” £192 billion annual cost. Divided by an annual usage of 1,500 TWh, it comes out at 12.8p per kWh. These costs could be higher if interest rates are higher. The £192 billion is more than the 2023/24 NHS budget.
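
The ballpark arithmetic, with my assumed £100,000 per MWh and 25-year life, runs as follows:

```python
# Ballpark cost of ten days' winter electricity storage.
winter_gw = 200                   # assumed average winter demand (GW)
storage_mwh = winter_gw * 1000 * 24 * 10      # 48,000,000 MWh
cost_per_mwh = 100_000            # £; assumes big savings on ~£300-370k today
capex = storage_mwh * cost_per_mwh            # £4.8 trillion
annual = capex / 25               # over a 25-year life: £192bn a year
pence_per_kwh = annual * 100 / (1500 * 1e9)   # spread over 1,500 TWh
print(f"£{capex / 1e12:.1f}tn capex, £{annual / 1e9:.0f}bn/yr, "
      f"{pence_per_kwh:.1f}p/kWh")
```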

This storage requirement could be conservative. On the other hand, if overall energy demand is much lower, due to energy being unaffordable, it could be somewhat less. Without fossil fuel backup, there will be a compromise between the costs of energy storage and rationing, with the risk of blackouts.

Calculating the risks

The approach of putting out a report with grandiose claims based on a number of assumptions, then expecting the public to accept those claims as gospel, is just not good enough. There are risks that need to be quantified. Then, as a project progresses, these risks can be managed, so the desired objectives are achieved in a timely manner using the least resources possible. These are things that ought to be rigorously reviewed before a project is adopted, learning from past experience and drawing on professionals in a number of disciplines. As noted above, there are a number of assumptions made where there are risks of cost overruns and/or shortfalls in claimed delivery. However, the biggest risks come from the law of diminishing marginal returns, a concept that has been understood for over 200 years. For offshore wind the optimal sites will be chosen first. Subsequent sites for a given technology will become more expensive per unit of output. There is also the technical issue of increased numbers of wind turbines having a braking effect on wind speeds, especially under stable conditions.

Concluding Comments

Technically, the answer to the question “could Britain’s energy demand be met entirely by wind and solar?” is in the affirmative, but not nearly so positively as the Smith School makes out. There are underlying technical assumptions that will likely not be borne out by further investigation. However, in terms of costs and reliable power output, the answer is strongly in the negative. This is an example of where rigorous review is needed before policy proposals are accepted into the public arena. After all, the broader justification of contributing towards preventing “dangerous climate change” does not hold, as an active global net zero policy does not exist. Therefore, the only justification is on the basis of being net beneficial to the UK. From the above analysis, this is certainly not the case.

BP Energy Outlook’s 2023 fantasy scenarios

If a fantasy is something impossible, or highly improbable, then I believe I can more than justify the claim concerning the latest BP Energy Outlook. A lot of ground will be covered, but it will be summarised at the end.

Trigger warning. For those who really believe that current climate policies are about saving the planet, please exit now.

The BP Energy Outlook 2023 was published on 26th June. From the introduction

Energy Outlook 2023 is focused on three main scenarios: Accelerated, Net Zero and New Momentum. These scenarios are not predictions of what is likely to happen or what BP would like to happen. Rather they explore the possible implications of different judgements and
assumptions concerning the nature of the energy transition and the uncertainties around those judgements.

One might assume that the order is some sort of ascent or descent. That is not the case: New Momentum is the least difficult to achieve, then Accelerated, with Net Zero the hardest and most extreme. Is this in line with what is known as Net Zero in the UNFCCC COP process? From the UNEP Emissions Gap Report 2018 Executive Summary, major point 2

Global greenhouse gas emissions show no signs of peaking. Global CO2 emissions from energy and industry increased in 2017, following a three-year period of stabilization. Total annual greenhouse gases emissions, including from land-use change, reached a record high of 53.5 GtCO2e in 2017, an increase of 0.7 GtCO2e compared with 2016. In contrast, global GHG emissions in 2030 need to be approximately 25 percent and 55 percent lower than in 2017 to put the world on a least-cost pathway to limiting global warming to 2°C and 1.5°C
respectively.

With Net Zero being accomplished for 2°C in 2070 and 1.5°C in 2050, this gives 20 years of 2017 emissions from 2020 for 2°C of warming and just 12 years for 1.5°C. Figure 1 in the BP Energy Outlook 2023 Report, reproduced below, is roughly midway between 12 and 20 years of emissions, although with only about three-quarters of the emissions measured in the CO2-equivalent tonnes that the UN uses for policy. This seems a quite reasonable course to take to keep things simple.

The BP Energy Outlook summarises the emissions pathways in a key chart, reproduced below.

Fig 1 : BP Energy Outlook 2023 scenario projections, with historical emissions up to 2019

One would expect the least onerous scenario to be based on current trends. The description says otherwise.

New Momentum is designed to capture the broad trajectory along which the global energy system is currently travelling. It places weight on the marked increase in global ambition
for decarbonization in recent years, as well as on the manner and speed of decarbonization seen over the recent past. CO2e emissions in New Momentum peak in the 2020s and by 2050 are around 30% below 2019 levels.

That is, even the most realistic scenario, based on current global policies, still assumes a change in actual policies. How much, though? Fig 1 above shows actual emissions increasing up to 2019, then a decrease in all three scenarios from 2020 onwards.

At Notalotofpeopleknowthat, in an article on this report, the slightly narrower CO2 emissions are shown.

Fig 2 : Global CO2 Emissions 2011-2022 from notalotofpeopleknowthat

There was a significant drop in emissions in 2020 due to covid lockdowns, but emissions more than recovered to break new records in 2022. Yet all scenarios in Fig 1 show a decline in emissions from 2019 to 2025. Nor do emissions show signs of peaking. The UNEP Emissions Gap Report 2022 forecasts that GHG emissions (the broadest measure of emissions) could be up to 9% higher than in 2017, with a near-zero chance of being the same. The key emissions gap chart is reproduced in Fig 3.

Fig 3. Emissions gap chart ES.3 from UNEP Emissions Gap Report 2022

Clearly under current policies global GHG emissions will rise this decade. The “new momentum” was nowhere in sight last October, nor was there any sign of emissions peaking after COP27 at Sharm el-Sheikh in December. Nor is there any real prospect of that happening at COP28 in the United Arab Emirates (an oil state) later this year.

Yet even this chart is flawed. The 2°C main target for 2030 is 41 GtCO2e and the 1.5°C main target is 33 GtCO2e. Neither is centred in its range. From the EGR 2018, a 25% reduction on 53.5 is 40, and a 55% reduction is 24. But at least there is some pretence of trying to reconcile desired policy with the most probable reality.

It gets worse…

In the lead-up to COP21 Paris 2015 countries submitted “Intended Nationally Determined Contributions” (INDCs). The UNFCCC said thank you and filed them. There appears to have been no review or rejection of any INDCs that clearly violated the global objective of substantially reducing global greenhouse gas emissions by 2030. Thus an INDC was not rejected even if its contribution was highly negative, that is, if the target implied massively increasing emissions. The major example of this is China. Its top targets of peaking CO2 emissions around 2030 and “to lower carbon dioxide emissions per unit of GDP by 60% to 65% from the 2005 level” (page 21) can be achieved even if emissions more than double between 2015 and 2030. This is simply based on the 1990-2010 average growth rates of 10% for GDP and 6% for emissions. Both India and Turkey plan to double their emissions in the same period (page 5), and Pakistan to quadruple theirs (page 26). Iran plans to cut its emissions by 4% up to 2030 compared with a BAU scenario, which is still some sort of increase.

There are plenty of other non-OECD countries planning to increase their emissions. As of mid-2023 no major country seems to have reversed course. Why is this important? The answer lies in a combination of the Paris Agreement and the data.

The flaw in the Paris Agreement

Although nearly every country has signed the Paris Agreement, few have understood its real lack of teeth in gaining reductions in global emissions. Article 4.1 states

In order to achieve the long-term temperature goal set out in Article 2,
Parties aim to reach global peaking of greenhouse gas emissions as soon as
possible, recognizing that peaking will take longer for developing country Parties,
and to undertake rapid reductions thereafter in accordance with best available
science, so as to achieve a balance between anthropogenic emissions by sources
and removals by sinks of greenhouse gases in the second half of this century, on the
basis of equity, and in the context of sustainable development and efforts to
eradicate poverty.

The agreement lacks any firm commitments but does make a clear distinction between developed and developing countries. The latter have no obligation even to slow their emissions growth in the near future. Furthermore, the “developed” countries are a fairly small share of the world’s population, being basically the members of the OECD. This includes some upper middle-income countries like Turkey, Costa Rica and Colombia, but excludes the small Gulf States with very high per capita incomes.

BP is perhaps better known for its annual Statistical Review of World Energy. The 2023 edition was published on the same day as the Energy Outlook but for the first time by the Energy Institute. From this, I have used the CO2 emissions data to split out the world emissions into four groups – OECD, China, India, and Rest of the World. The OECD countries collectively have a population of about 1.38bn, or about the same as India or China.

Fig 4: Global Emissions from the Energy Institute Statistical Review of World Energy 2023 shown in MtCO2e and shares.

From 1990 to 2022, OECD countries increased their emissions by 1%, India by 320%, China by 370% and the ROW by 45%. As a result the OECD share of global emissions fell from 52% in 1990 to 32%. Even if all the non-OECD countries kept their emissions constant in the 2020s, the 2°C target could only be achieved by OECD countries reducing their emissions by nearly 80%, and the 1.5°C target by over 170%. The reality is that obtaining deep global emissions cuts is about as much a fantasy as believing an Official Monster Raving Loony Party candidate could win a seat in the House of Commons. Their electoral record is here.
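
The arithmetic behind those OECD figures can be reconstructed as follows, using the 53.5 GtCO2e 2017 total from the EGR 2018, the 25% and 55% reduction targets, and the ~32% OECD emissions share (all approximations on my part):

```python
# If non-OECD emissions merely stay flat, what must the OECD cut?
# 53.5 GtCO2e is the 2017 total from UNEP EGR 2018; the 2030 targets
# are 25% (2C) and 55% (1.5C) cuts on that; OECD share is ~32%.
total = 53.5
oecd = 0.32 * total
required = {label: cut * total / oecd
            for label, cut in [("2C", 0.25), ("1.5C", 0.55)]}
for label, share in required.items():
    print(f"{label}: OECD cut of {share:.0%} needed")  # ~78% and ~172%
```

A required cut above 100% is, of course, impossible, which is the point.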

The forgotten element….

By 2050, we find that nearly 60 per cent of oil and fossil methane gas, and 90 per cent of coal must remain unextracted to keep within a 1.5 °C carbon budget.

Welsby, D., Price, J., Pye, S. et al. Unextractable fossil fuels in a 1.5 °C world. Nature 597, 230–234 (2021).

It has been estimated that to have at least a 50 per cent chance of keeping warming below 2°C throughout the twenty-first century, the cumulative carbon emissions between 2011 and 2050 need to be limited to around 1,100 gigatonnes of carbon dioxide (Gt CO2). However, the greenhouse gas emissions contained in present estimates of global fossil fuel reserves are around three times higher than this, and so the unabated use of all current fossil fuel reserves
is incompatible with a warming limit of 2°C

McGlade, C., Ekins, P. The geographical distribution of fossil fuels unused when limiting global warming to 2 °C. Nature 517, 187–190 (2015).

I am not aware of any global agreement to keep most of the considerable reserves of fossil fuels in the ground. Yet it is clear from these two papers that meeting climate objectives requires this. Of course, the authors of the BP Energy Outlook may not be aware of these papers. But they will be aware of the Statistical Review of World Energy, which has estimates of reserves for oil, gas, and coal. These have not been updated for two years, but there are around 50 years of gas and oil and well over 100 years of coal left.

Key points covered

  • Energy Outlook scenarios do not include an unchanged-policy case.
  • All three scenarios show a decline between 2019 and 2025, yet 2022 actual emissions were higher than 2019.
  • In aggregate, Paris climate commitments mean an increase in emissions by 2030, something ignored by the scenarios.
  • The Paris Agreement exempts developing countries from even curbing their emissions growth in the near term. As these countries account for virtually all the emissions growth since 1990 and around two-thirds of current emissions, this makes significantly reducing global emissions quite impossible.
  • The scenarios totally bypass the policy issue of keeping most of the available fossil fuels in the ground.

Given all the above, labelling the BP Energy Outlook 2023 scenarios “fantasies” is quite mild. Even though they may be theoretically possible, there is no general recognition of the policy constraints that would lead to action plans to overcome those constraints. But in the COP process, and amongst activists around the world, there is just a belief that proclaiming the need for policy will achieve a global transformation.

Kevin Marshall

Sea Level Rise Misinformation in IPCC AR6

UNIPCC AR6 WG1 SPM page 5 makes the following highly misleading claim about sea level rise acceleration.

Below is why this statement is highly misleading.

1. In 1993 there was a switch in sea level rise measurements from tide gauges to satellites. The data for the latter is available from University of Colorado Sea Level Research Group. The main graph is below.

Figure 1 : Sea Level Rise data plot from University of Colorado Sea Level Research Group

2. From the data, average sea levels in 2018 were 43.8mm higher than in 2006, an average rise of 3.65mm yr-1. Rounded, this is the IPCC’s 3.7mm yr-1.

3. From the satellites, sea levels rose by 37.5mm from 1993 to 2006, or 2.9mm yr-1. From the IPCC, sea levels rose by 1.9mm yr-1 from 1971 to 2006, or 66.5mm in total. Therefore the tide gauge data implies that from 1971 to 1993 sea levels rose 29mm in 22 years, or 1.32mm yr-1.
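
These rates follow from simple arithmetic on the quoted figures:

```python
# Rates of sea level rise (mm/yr) implied by the quoted figures.
satellite_2006_2018 = 43.8 / 12           # ~3.65, the IPCC's "3.7"
satellite_1993_2006 = 37.5 / 13           # ~2.9
total_1971_2006 = 1.9 * 35                # 66.5 mm over 35 years
tide_gauge_1971_1993 = (total_1971_2006 - 37.5) / 22   # ~1.32
print(satellite_2006_2018, satellite_1993_2006, tide_gauge_1971_1993)
```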

4. Simply switching the method of calculating sea level rise from averages of tide gauges to satellite observations doubles the rate of sea level rise. That is, most of the apparent acceleration in sea level rise lies in the change of measurement method, not in the actual measurements. But how robust is the satellite data set?

5. The satellite data set I downloaded runs from Dec 1992 to Jul 2021. Sea levels rose by around 100.31mm in this period, or nearly 4 inches. However, there are considerable changes from year to year. Over the 28.5 years, the average annual rate of rise was 3.5mm, but the change in individual years varied from a fall of 1.65mm in 1998 to a rise of 10.54mm in 2012. See Figure 2. The question is whether these changes are real or artefacts of estimation.

Figure 2 – Change in annual average sea levels on previous year mm from the satellite sea level data set.

6. Actual readings are about 10 days apart, yet show significant differences from one to the next, both up and down. The average absolute change between readings is 1.8mm, summing to 1,821mm over the period, or 18 times the net change. Figure 3 compares this with the net changes.

Figure 3 – Annual change & Sum of absolute changes between readings from the satellite data.
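
A quick consistency check on the figures in point 6 (the 28.5-year span, 1.8mm average absolute change and 1,821mm sum are as quoted from the downloaded data):

```python
# Consistency check on the satellite-record figures in point 6.
span_years = 28.5
n_readings = 1821 / 1.8                           # ~1,012 readings
spacing_days = span_years * 365.25 / n_readings   # ~10.3 days apart
noise_to_signal = 1821 / 100.31                   # absolute sum vs net rise
print(round(n_readings), round(spacing_days, 1), round(noise_to_signal))
```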

7. In the satellite data there is only a small apparent acceleration of sea level rise over 28 years, in which the total rise of 100mm was 1/18 of the sum of absolute changes in sea level from one reading to the next. Given also that the satellites show at least twice the rate of rise of the tide gauges, how can anyone claim, using that data, that sea level rise is accelerating? With individual tide gauges there is at least a control over time, as they are in a fixed place, but where is the control in the satellite data? Especially given that sea level varies by many metres across the surface of the earth, and that the satellite data gives a completely different set of results to the tide gauges.

8. Regardless of whether the satellite data and the tide gauges give good estimates, the fact that they give significantly different results means that splicing them together is misleading.

9. The statement about sea level rise is at the summary level. One would expect the full WG1 report to repeat this claim, then give some sort of justification for the splicing together of two datasets with quite different estimates. The relevant section is Chapter 9.6, starting on numbered page 1287, pdf sheet 1304 of the full report, or pdf sheet 77 of the Chapter 9 download. I can find no restatement of, nor justification for, splicing the two data sets together, nor any explicit recognition that the two measures are quite different. Each data set is discussed in a separate section (9.6.1.1 and 9.6.1.2), but nothing reconciles the two.

10. There is a claim of acceleration in tide gauge average sea levels in the period 1901 to 2010, despite there being no discernible acceleration in the figure calculated for 1901 to 1992 in the SPM statement.1 But what does confining the estimate of acceleration in the tide gauges to 1902–2010 (–0.002 to +0.019 mm yr–2) miss out?

  • Even an acceleration of +0.01 mm yr-2 is compatible with 1.3 mm yr-1 for both 1901-1971 and 1971-1993, but +0.019 mm yr–2 is not.
  • The top-of-the-range acceleration of +0.019 mm yr–2 from 1902–2010 still leaves rates of sea level rise for 1993-2010 well below those of the satellites over the same period.
  • Quoting the same acceleration for the whole period 1902-2010 suggests that the data does not show significantly increased rates of sea level rise after 1990. If there were an increase in the rate of sea level rise in line with satellite measurements, surely a report trying to demonstrate human-caused climate change would highlight it. After all, the rate of rise in CO2 levels accelerated in the late 1950s, when Mauna Loa CO2 levels started to be recorded, and the current warming phase only started in the late 1970s.
  • The range –0.002 to +0.019 mm yr–2 includes zero. That is, the estimated range includes the null hypothesis that there was no acceleration in sea level rise in the period 1902–2010. For normal science this is a grave issue, but for post-normal science (based on a priori beliefs) it is irrelevant.
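
A rough check on the first two bullets, assuming constant acceleration and a 20th-century mean rate of about 1.5 mm per year (my assumption, not an IPCC figure):

```python
# If sea level rise had constant acceleration a (mm/yr^2) over 1902-2010
# and a mean rate of ~1.5 mm/yr (assumed), the rate at any time t years
# after 1902 is r0 + a*t, with r0 chosen to match the mean.
mean_rate, span = 1.5, 108
rate_2010 = {a: (mean_rate - a * span / 2) + a * span
             for a in (0.0, 0.01, 0.019)}
for a, r in rate_2010.items():
    print(f"a = {a}: 2010 rate ~ {r:.2f} mm/yr")
# Even the top of the range (+0.019) gives only ~2.5 mm/yr in 2010,
# well below the satellite-era ~3.7 mm/yr.
```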

Concluding comments

The claim of accelerating sea level rise in the UNIPCC AR6 WG1 SPM is mostly based upon splicing together two different data sets. There is no attempt within the detailed report to justify this splicing. Further, the sea level rise acceleration estimate range from the tide gauges does not exclude zero, whilst even the upper end of that range does not allow convergence with the satellite data. Most of the implied acceleration in AR6 WG1 SPM Para A.1.7 is grossly misleading propaganda.

Notes

  1. “The SROCC found that four of the five available tide gauge reconstructions that extend back to at least 1902 showed a robust acceleration (high confidence) of GMSL rise over the 20th century, with estimates for the period 1902–2010 (–0.002 to +0.019 mm yr–2) that were consistent with AR5. New tide gauge reconstructions published since SROCC (Dangendorf et al., 2019; Frederikse et al., 2020b) support this assessment and suggest that increased ocean heat uptake related to changes in Southern Hemisphere winds and increased mass loss from Greenland are the primary physical mechanisms for the acceleration (Section 2.3.3.3). Therefore, the SROCC assessment on the acceleration of GMSL rise over the 20th century is maintained.” (9.6.1.1 page 1287)

Kevin VS Marshall

Hansen’s 1988 Scenario C against Transient Climate Response in IPCC TAR 2001

In a recent comment at Cliscep Jit made the following request

I’ve been considering compiling some killer graphs. A picture paints a thousand words, etc, and in these days of short attention spans, that could be useful. I wanted perhaps ten graphs illustrating “denialist talking points” which, set in a package, would be to the unwary alarmist like being struck by a wet fish. Necessarily they would have to be based on unimpeachable data.

One of the most famous graphs in climate is of the three scenarios used in the Congressional Testimony of Dr James Hansen on June 23 1988. Copies are poor, being reproductions of a type-written manuscript. The following is from the SeaLevel.info website.

Fig 3 of Hansen’s Congressional Testimony, June 23 1988

The reason for choosing this version rather than the clearer version in the paper online is that the blurb contains the assumptions behind the scenarios. In particular, “scenario C drastically reduces trace gas emissions between 1990 and 2000.” The original article states

scenario C drastically reduces trace gas emissions between 1990 and 2000 such that greenhouse forcing ceases to increase after 2000.

In current parlance this is net zero. In the graph this results in temperature peaking around 2007.

In the IPCC Third Assessment Report (TAR) 2001 there is the concept of Transient Climate Response.

TAR WG1 Figure 9.1: Global mean temperature change for 1%/yr CO2 increase with subsequent stabilisation at 2xCO2 and 4xCO2. The red curves are from a coupled AOGCM simulation (GFDL_R15_a) while the green curves are from a simple illustrative model with no exchange of energy with the deep ocean. The transient climate response, TCR, is the temperature change at the time of CO2 doubling and the equilibrium climate sensitivity, T2x, is the temperature change after the system has reached a new equilibrium for doubled CO2, i.e., after the additional warming commitment has been realised.

Thus, conditional on CO2 rising at 1% a year and the eventual warming from a doubling of CO2 being around 3C, at the point when doubling is reached temperatures will have risen by about 2C. From the Mauna Loa data, annual average CO2 levels have risen from 316 ppm in 1959 to 414 ppm in 2020. That is 31% in 61 years, or less than 0.5% a year. Assuming 3C of eventual warming from a CO2 doubling, the long time period of the transient climate response implies that:

  • much less than 1C of warming could so far have resulted from the rise in CO2 since 1959
  • it could be decades after net zero is achieved that warming will cease.
  • the rates of natural absorption of CO2 from the atmosphere are of huge significance.
  • Calculation of climate sensitivity even with many decades CO2 data and temperature is near impossible unless constraining assumptions are made about the contribution of natural factors; the rate of absorption of CO2 from the atmosphere; outgassing or absorption of CO2 by the oceans; & the time period for the increase in temperatures from actual rates of CO2 increase.
  • That is, changes in a huge number of variables within a range of acceptable mainstream beliefs significantly impact the estimates of emissions pathways to constrain warming to 1.5C or 2C.
  • If James Hansen in 1988 was not demonstrably wrong about the response time of the climate system, and neither is the TAR on the transient climate response, then it may not be possible to exclude both the possibility that 1.5C of warming will not be reached this century and the possibility that 2C of warming will be surpassed even if global net zero emissions were achieved a week from now.
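As a check on the beancounting above, the compound growth rate implied by the Mauna Loa figures quoted (annual averages of 316 ppm in 1959 and 414 ppm in 2020) can be worked out in a few lines:

```python
# Rough check of the CO2 growth figures quoted above (Mauna Loa annual averages).
start_ppm, end_ppm = 316.0, 414.0   # 1959 and 2020 annual averages
years = 2020 - 1959                 # 61 years

total_rise = end_ppm / start_ppm - 1                    # total fractional rise
annual_rate = (end_ppm / start_ppm) ** (1 / years) - 1  # compound annual rate

print(f"Total rise: {total_rise:.1%}")    # ~31%
print(f"Annual rate: {annual_rate:.2%}")  # well under the 1%/yr of the TCR definition
```

At under 0.5% a year, the actual rise is less than half the 1%/yr rate on which the TAR transient climate response figure is conditioned.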

Kevin Marshall

Excess and Covid-19 death rates

Last month the Daily Mail had an article on excess deaths against Covid deaths over 12 months for 30 countries. This was based on a more detailed article in the Economist. What I found surprising was that the countries making the headlines here in the UK for Covid deaths – UK, USA & Brazil – were well down the list in terms of excess deaths per 100,000 population. Since then the Economist has extended the data set to include 78 countries and the cities of Istanbul and Jakarta. The time period also varies, from around 180 to 400 days, though mostly it is about a year.

Given this limitation, there are a number of observations that can be made.

  • Overall the 78 countries account for well under half the global population. Notable absences from the Economist data set are China, India, Indonesia (except Jakarta), Pakistan, Bangladesh and Nigeria.
  • Total excess deaths are around 50% higher than reported Covid deaths overall. That is 3.64 million as against 2.43 million.
  • Excess deaths are slightly negative in a small number of countries. Most notable are Japan, South Korea, Taiwan, Malaysia and the Philippines. Is this a cultural issue or a policy issue?
  • The worst country for excess deaths is Peru with 503 deaths per 100,000 for the period Mar 30th 2020-May 2nd 2021. Even allowing for the longer period, Peru is well above any other country. Covid deaths at 62,110 are just 38% of the excess deaths.
  • Next on the list with excess deaths per 100,000 and covid deaths as a percentage of excess deaths in brackets are Bulgaria (433, 53%), Mexico (354, 45%), Russia (338, 20%), Serbia (320, 24%), Lithuania (319, 43%), Ecuador (319, 34%), North Macedonia (304, 50%), Czechia (300, 81%) and Slovakia (270, 64%). Britain and USA, for comparison, are respectively 26th (180, 126%) and 25th (182, 93%).
  • All countries in the top 10 are either in Central / South America or Eastern Europe. Of the top 20, only South Africa (14th) and Portugal (20th) are outside these areas.
  • If countries are separated in excess death rankings by geography, maybe comparisons should be made between similar countries? In Western Europe the five large countries are Italy in 23rd (197, 74%) Britain in 26th (180, 126%), Spain in 28th (170, 100%), France in 37th (126, 125%) and Germany in 57th (63, 155%). Why should Germany be so much lower on excess and Covid deaths? Might it be that the Germans lead in following instructions, whilst the Italians & Spanish ignore them and the British tend more to think rules apply to people in general, but with many worthy exceptions for themselves and their immediate peers?
  • Peru not only has the highest excess death rate in the world, but some of the most draconian anti-covid policies. Could it be that some of the high excess deaths are the result of the policies? In Brazil, where lockdown policies are determined at state level, in some areas people are both deprived of a means to earn a living and get no assistance from the state.
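The Peru figures above can be cross-checked for internal consistency. Taking only the quoted numbers (62,110 Covid deaths at 38% of excess deaths, and 503 excess deaths per 100,000), the implied population should come out close to Peru's actual population of roughly 33 million:

```python
# Consistency check on the Peru figures quoted above.
covid_deaths = 62_110     # reported Covid deaths (from the Economist data)
covid_share = 0.38        # Covid deaths as a fraction of excess deaths
rate_per_100k = 503       # excess deaths per 100,000 population

excess_deaths = covid_deaths / covid_share                    # implied total excess deaths
implied_population = excess_deaths / rate_per_100k * 100_000  # implied population

print(f"Implied excess deaths: {excess_deaths:,.0f}")        # ~163,000
print(f"Implied population: {implied_population/1e6:.1f}m")  # ~32.5m
```

The three quoted figures hang together, which is some reassurance that the Economist data set is at least internally coherent.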

There are many ways to look at the data. The Economist excess and covid deaths data gives far more insights than crude death totals. Superficially it would suggest that the problem areas are not, as early last year, in Western Europe, but in Eastern Europe & South America. With the lowest death rates in the Far East, globally there are huge disparities that cannot be explained by differences in policy responses. It is more likely that cultural factors play the greater role, although it is perfectly understandable why policy-makers would pooh-pooh what is strongly suggested by the data. Moreover, with a lack of data from much of the world, and likely under-reporting of Covid deaths in many countries, the true scale of the pandemic is likely vastly understated.

Kevin Marshall

Imperial self-congratulations on saving 3 million lives

Late last night there was an article at the BBC that I cannot find anymore on the BBC website. Fortunately my history still has the link.

The article linked to a pre-publication article in Nature from the team at Imperial College – Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe

On pdf page 16 of 18 are modelled estimates that by 4th May lockdowns across 11 countries saved 2.8 – 3.1 million lives.
From pdf page 6 / article page 5, four countries are listed in Fig 1, the last being the UK.

The graphics show for all countries that the day of the lockdowns saw a massive step reduction in daily new infections & the magic R. For the UK, modelled daily infections spiked at over 400,000 on the day of the lockdown and were less than 100,000 the day after. R fell from around 4 at lockdown to 1 the day after. Prime Minister Boris Johnson gave a very powerful and sincere speech ordering the lockdown, but he did not realize how transformative the power of his words would be. It was the same in all the other countries surveyed. Further, all countries surveyed had a massive spike in infections in the few days leading up to lockdown being imposed. If only the wisdom of the scientists had been listened to a few days earlier – at slightly different dates in each country – then thousands more lives could have been saved.

I believe that the key to these startling conclusions lies in the model assumptions, not the data of the natural world. We have no idea of the total number of coronavirus cases in any country at the start of lockdown, just the identified number of cases. Thus, whilst the model estimates of the number of cases cannot be proved wrong, they are very unlikely to be correct. I can think of five reasons to back up my claim, with particular reference to the UK.

First, the measures taken prior to the lockdown had no impact on coronavirus cases and hardly any on R. This includes the stopping of visitors to care homes, social distancing measures, voluntary closing of public places like pubs and the start of home working.

Second, Prime Minister Boris Johnson going on TV ordering a lockdown had an immediate impact. The same for other leaders, such as President Macron in France. This is nonsense. People were locked down in households, so there would still have been infections within the households in the few days after lockdown.

Third, we know that many of the coronavirus deaths were of people infected whilst in hospitals or care homes. The lockdown did not segregate people within those communities.

Fourth, the assumed pre-lockdown spike followed by a massive drop in daily new infections was not followed a few days later by any such corresponding pattern in daily deaths. It is far easier to make the case for a zero impact of lockdowns rather than this extreme impact. The truth, if perfect data were available, is likely to be nearer zero lives saved than 3 million.

Fifth, in Italy the lockdown was not imposed nationally on the same day. The North of Italy was locked down first, followed by the rest of the country some days later. Like with the imposition of less draconian measures pre-lockdown in the UK, this should have seen a less immediate effect than suggested by Figure 1.

Are the authors and peer-reviewers at Nature likely to be aware of the problems with the headline BBC claims and the underlying paper? Compare the caption to “Extended Data Table 1” (pdf page 16)

Forecasted deaths since the beginning of the epidemic up to 4th May in our model vs. a counterfactual model assuming no interventions had taken place.

to the report from the BBC;

Lockdowns have saved more than three million lives from coronavirus in Europe, a study estimates. …….

They estimated 3.2 million people would have died by 4 May if not for measures such as closing businesses and telling people to stay at home.

That meant lockdown saved around 3.1 million lives, including 470,000 in the UK, 690,000 in France and 630,000 in Italy, the report in the journal Nature shows.

from the Evening Standard;

Around three million deaths may have been prevented by coronavirus lockdowns across Europe, research suggests.

from yahoo! News;

By comparing the number of deaths against those predicted by their model in the absence of interventions, the researchers believe that  3.1 million deaths have been averted due to non-pharmaceutical measures.

and from eCAM Biomed – GLOBAL IMPACT FUND

This study found that nonpharmaceutical interventions, including national “lockdowns,” could have averted approximately 3.1 million COVID-19 deaths across 11 European countries.

These press reports are not dissimilar to the title of the Imperial College press release

Lockdown and school closures in Europe may have prevented 3.1m deaths

I would suggest that they are different from the caption to Extended Table 1. The difference is between a comparison of actual data to modelled data based on some unlikely assumptions, and actual lives saved in the real world. It is between the lockdowns having saved 3 million lives and having saved many times fewer. It is between the decisions of governments sacrificing economic prosperity to save hundreds of thousands of lives, and ruining the lives of millions based on pseudo-science for near zero benefits. The authors should be aware of this, and so should the reviewers of the world’s leading science journal.
Are we going to see a quiet withdrawal, like of the BBC report?

Kevin Marshall

Dumb hard left proclamation replaces pluralistic thirst for knowledge and understanding

Last week Guido Fawkes had a little piece that, in my opinion, illustrates how nasty the world is becoming. I quote in full.

IMPERIAL COLLEGE DROPS “IMPERIAL” MOTTO
ROOTED IN POWER & OPPRESSION

In response to representations from students inspired by the Black Lives Matter movement Imperial College’s President, Professor Alice Gast, has announced they are dropping their “imperialist” Latin motto.

“I have heard from many of you with concerns about the university motto and its appearance on our crest. The Latin motto appears on a ribbon below the crest and is commonly translated to ‘Scientific knowledge, the crowning glory and the safeguard of the empire’. We have removed this ribbon and the motto in a revised crest you can see below in this briefing. This modified crest is already in use by my office and the Advancement team and will be integrated into all of our materials over the coming year. We will commission a group to examine Imperial’s history and legacy. We have a long way to go, but we will get better. We will build upon our community’s spirit, commitment and drive. We will draw strength from your commitment and support.”

The College’s motto, coined in 1908, was ‘Scientia imperii decus et tutamen’ which translates as ‘Scientific knowledge, the crowning glory and the safeguard of the empire’. As Titania McGrath might say this motto “is a reminder of a historical legacy that is rooted in colonial power and oppression”. That’s an actual quote from the college’s President, in the interests of diversity she is erasing the past. As someone once wrote “Who controls the past controls the future. Who controls the present controls the past.”

UPDATE: This old article from 1995 describes the arms and motto of Imperial College, paying particular attention to the deliberate ambiguity of the Latin:

Thus DECUS ET TUTAMEN translates as ‘an honour and a protection’. The rest of the motto is deliberately ambiguous. SCIENTIA means ‘knowledge’ but is also intended as a pun on the English word ‘science’. IMPERII could mean ‘power’, ‘dominion over’, ‘universal’, ‘of the empire’, ‘of the state’, or ‘superior’; and again is intended as a pun on the English word ‘imperial’.

Because of this ambiguity the full motto can be translated in many different ways. One translation could be: ‘Dominion over science is an honour and a protection’. A more politically correct translation might be: ‘Universal knowledge is beautiful and necessary’.

The Black Lives Matter translation of the motto – ‘Scientific knowledge, the crowning glory and the safeguard of the empire’ – might be valid, but so are many other formulations. Indeed, although Britain at the start of the last century was the most powerful nation and ruled the most extensive empire in history, along with competing with the United States & Germany as a leader in the pursuit of scientific knowledge, the motto has proved untrue. The imperialists who backed the foundation of Imperial College in the belief that scientific knowledge would safeguard the empire were mistaken. What is left is an Imperial College ranked about tenth in world rankings of universities, so it is a glorious product of imperialist thinking. Given that it is still thriving, it is more glorious than the majestic ruins of earlier empires, such as the Colosseum in Rome or the Parthenon in Athens.

Deeper than this is that the motto is deliberately a pun. It is superficially meaningful in different ways to those from a diverse range of backgrounds and belief systems. But those with deeper understanding – achieved through high-level study and reflection – know that more than one valid perspective is possible. That leads into the realisation that our own knowledge, or the collective knowledge of any groups we might identify as belonging to, is not the totality of all knowledge possible, and might even turn out to be false some time in the future. This humility gives a basis for furthering understanding of both the natural world and the place of people within it. Rather than shutting out alternative perspectives, we should develop understanding of our own world view, and aim to understand others. This is analogous to the onus in English Common Law for the prosecution to prove a clearly defined case beyond reasonable doubt based on the evidence. It also applies to the key aim of the scientific method. Conjectures about the natural world are ultimately validated by experiments based in the natural world.

Consider the alternative “ideal” that we are heading towards at an alarming rate of knots. What counts as knowledge is the collective opinion of those on the self-proclaimed moral high ground. In this perspective those who do not accept the wisdom of the intellectual consensus are guilty of dishonesty and should not be heard. All language and observations of the natural world are interpreted through this ideological position. Any conflict is resolved by the consensus. Is this far-fetched? A quote from Merchants of Doubt – Oreskes & Conway 2010.

Sunday Times exaggerates price gouging on Amazon

It has been many months since last posting on this blog, due to being engaged in setting up a small business after many years working as a management accountant, mostly in manufacturing. For the first time this year I purchased the Sunday Times. What caught my attention was an article “Amazon sellers rolling in dough from flour crisis“. As my speciality was product costing, I noticed a few inaccuracies and exaggerations.

Sunday Times article from page 6 of print edition 03/05/20


The first issue was on the fees.

Amazon sells many products directly to consumers but more than half of its sales are from third-party sellers on its “Marketplace”. They pay fees to the online giant of up to 15% of the sale price.

The fees are at least 15% of the sale price. This is if the sellers despatch for themselves, incurring the cost of postage and packing.

Let us take an example of the price rises.

A packet of 45 rolls of Andrex lavatory roll rose from under £24 to more than £95.

For somebody purchasing from Amazon with Prime, they will get free postage on purchases over £20. So they can get 45 rolls delivered for about the standard supermarket price of 5 x 9-roll packs at £4.50 each. Using a third-party app (which might not be accurate) for the Classic Clean Andrex, I find that third-party sellers were selling at £23.45 up to March 8, when stocks ran out. Further, Amazon were selling for about 3 days at £18.28 until Sunday March 8, when they also ran out. Apart from on Fri Mar 13, Amazon did not have supplies until late April. It was during this period that 3rd party sellers were selling at between £50 & £99.99. Any items offered for sale sold very quickly.

Now suppose an enterprising individual managed to grab some Andrex from a wholesaler (est. £15 inc. VAT) and list them for sale on Amazon. How much would they make? If they already had an account (cost about £30 per month) they could despatch themselves. They would need a large box (at least 45 x 45 x 35 cm), which they might be able to buy for under £30 for a pack of 15. They would have to pay postage. It is £20.45 at the Post Office. If anyone can find a carrier (for 6.5kg) cheaper than £12, including insurance and pick up, please let me know in the comments. If selling at £50, the costs would be at least £7.50 (fees) + £15 (stock) + £2 (box) + £12 (carriage) = £36.50. To make a quick buck it is a lot of work.
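The costing above can be laid out as a quick product-costing calculation. The figures are the illustrative estimates from the text, including the assumed 15% referral fee and £12 carriage:

```python
# Sketch of the third-party seller costing worked through above
# (illustrative figures; the 15% referral fee is the rate assumed in the text).
sale_price = 50.00
referral_fee = 0.15 * sale_price   # Amazon fee at 15% of sale price -> £7.50
stock_cost = 15.00                 # wholesale Andrex, inc. VAT
box_cost = 30.00 / 15              # £30 for a pack of 15 boxes -> £2 each
carriage = 12.00                   # cheapest assumed carrier for a 6.5 kg parcel

total_cost = referral_fee + stock_cost + box_cost + carriage
margin = sale_price - total_cost

print(f"Total cost: £{total_cost:.2f}")  # £36.50
print(f"Margin:     £{margin:.2f}")      # £13.50, before the monthly account fee
```

A £13.50 margin per parcel, before the £30 monthly account fee and the seller's own time, is hardly rolling in dough.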

This is, however, a bad example. Let us try a much lower-weight product: the classic Uno Card Game, which the Sunday Times claims was listed at £5.60 on March 1st and £5.60-£17.99 on April 30th. This compares with £7.00 at Argos & Sainsbury’s and £6.97 at Asda. The inaccuracy here is with the range of prices, as there were multiple sellers on both dates, with £5.60 being the price sold by Amazon themselves. Actual selling prices fluctuated during March, and this evening the prices are between £5.49 and £17.99. It is usually the case with popular products that there are multiple sellers. During March and April Amazon were out of stock, with actual selling prices between £4.99 and £19.00. Most often it was in the range of £9.00-£11.50.

A final example from the Sunday Times is Carex Handwash Sensitive 250ml. As an antibacterial product, as soon as the Government recommended frequent hand washing the product sold out in supermarkets. As such it was ripe for making super-profits during the period of panic buying. This product used to be frequently available at £1 or slightly more. The Sunday Times lists the Amazon price at £1.99 on March 1st and at £5.98-£11.50 on April 30th. My app shows a single seller at £7.99, with a March 1st price of £3.25. The Sunday Times have probably picked up a different listing that is no longer available. The best value Carex antibacterial at the time of writing appears to be this listing for 2 x 250ml, where the price ranges from £9.49 to £13.16 including delivery. Selling at around £3.64 prior to March, the selling price peaked at around £32.99 in mid-March.

Whilst the Sunday Times article may not have the best examples, it does highlight that some people have made extraordinary profits by either being in the right place at the right time, or by quickly reacting to changing market information and anticipating the irrational panic buying of many shoppers. Here the problem is not with entrepreneurs meeting demand, but with consumers listening to the fearmongers in the media and social media, believing that a pandemic “shutdown” would stop the movement of goods, along with a cultural ignorance of the ability of markets to rapidly respond to new information. In the supermarkets, many shelves were needlessly emptied. Much of the fresh food bought in panic was binned. Further, many households will not be purchasing any more rice, tinned tomatoes or toilet rolls for many months. Since then there has been an extraordinary response by suppliers and supermarkets in filling the shortages. The slowest responses to shortages have been where the state is the dominant purchaser or the monopoly supplier and purchaser: the former in the area of PPE, and the latter in the area of coronavirus testing.

Finally, there is a puzzle as to why there is such a range of prices available for an item on Amazon. One reason is that many of the high-priced sellers were once competitive, but the price has fallen dramatically. Another is that the higher-priced sellers are either hoping people make a mistake, or have “shops” on Amazon that lure people in with low-price products in the hope they occasionally buy other, over-priced products. Like the old-fashioned supermarket loss-leaders, but on steroids. Alternatively they may have the products listed elsewhere (e.g. actual shops or on eBay) and/or a range of products, with the extraordinary profits of the few offsetting the long-term write-offs of the many. There is the possibility that these hopefuls will be the future failures, as will be the majority of entrepreneurial ventures in any field.

Kevin Marshall

How misleading economic assumptions can show Brexit making people worse off

Last week the BBC News headlined Brexit deal means ‘£70bn hit to UK by 2029′. ITV News had a similar report. The source, NIESR, summarizes their findings as follows:-

Fig 1 : NIESR headline findings 

£70bn appears to be a lot of money, but this is a 10-year forecast on an economy that currently has a GDP of £2,000bn. The difference is about one third of one percent a year. The “no deal” scenario is just £40bn worse than the current deal on offer, hardly an apocalyptic scenario that should not be countenanced. Put another way, if underlying economic growth is 2%, then on the NIESR figures the economy will in ten years be between 16% and 22% larger. In economic forecasting, the longer the time frame, the more significant the underlying assumptions. The reports are based on an NIESR open-access paper Prospects for the UK Economy – Arno Hantzsche, Garry Young, first published 29 Oct 2019. The key basis is contained in Figures 1 & 2, reproduced below.
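The beancounting behind these figures is simple enough to lay out. The £2,000bn GDP and 2% growth rate are the round numbers used above:

```python
# Putting the headline £70bn in context, per the reasoning above.
gdp = 2_000.0   # current UK GDP, £bn (approx., as in the text)
hit = 70.0      # NIESR's cumulative 10-year "hit", £bn
years = 10

annual_drag = hit / gdp / years        # as a fraction of GDP per year
growth_10yr = (1 + 0.02) ** years - 1  # 2% underlying growth compounded over a decade

print(f"Annual drag: {annual_drag:.2%} of GDP")       # ~0.35% a year
print(f"10-year growth at 2%/yr: {growth_10yr:.0%}")  # ~22%
```

Set against compound growth of around 22% over the decade, a cumulative drag of a third of one percent a year is well within the noise of any ten-year forecast.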

Fig 2 : Figures 1 & 2 from the NIESR paper “Prospects for the UK Economy

The two key figures purport to show that Brexit has made a difference. Business investment growth has apparently ground to a halt since mid-2016 and economic growth slowed. What it does not show is a decline in business investment, nor a halting of economic growth.

After these figures the report states:-

The reason that investment has been affected so much by the Brexit vote is that businesses fear that trade with the EU will be sufficiently costly in the future – especially with a no-deal Brexit – that new investment will not pay off. Greater clarity about the future relationship, especially removing the no-deal threat, might encourage some of that postponed investment to take place. But that would depend on the type of deal that is ultimately negotiated. A deal that preserved the current close trading relationship between the UK and EU could result in an upsurge in investment. In contrast, a deal that would make it certain that there would be more trade barriers between the UK and EU in the future would similarly remove the risk of no deal but at the same time eliminate the possibility of closer economic ties, offsetting any boost to economic activity.

This statement asserts, without evidence, that the cause of the change in investment trend is singular: business fears over Brexit. There is no corroborating evidence to back this assumption, such as surveys of business confidence, or a decline in the stock markets. Nor is there a comparison with countries other than the UK, to show whether any apparent shifts are due to other causes, such as the normal business cycle. Yet it is this singular assumed cause of the apparent divergence from trend that is used as the basis of forecasting for different policy scenarios a decade into the future.

The rest of this article will concentrate on the alternative evidence, to show that any alleged changes in economic trends are either taken out of context or did not occur as a result of Brexit. For this I use World Bank data over a twenty-year period, comparing the UK to the Euro area. If voting to leave the EU has had a significant impact on economic trends, it should show up in the UK data but not in the Euro area data.

Net Foreign Direct Investment

There is no data for the narrow business investment at the World Bank. The alternative is net foreign direct investment.


Fig 3 : Data for net foreign direct investment from 1999 to 2018 for the Euro area and the UK.

UK net foreign direct investment was strongly negative from 2014 to 2016, becoming around zero in 2017 and 2018. The Euro area shows an opposite trend. Politically, in 2014 UKIP won the UK elections to the European Parliament, followed in 2015 by the promise of a referendum on the EU. Maybe the expectation of Britain voting to leave the EU could have had an impact? More likely this net outflow is connected to the decline in the value of the pound. From xe.com

Fig 4 : 10 year GBP to USD exchange rates. Source xe.com

The three years of net negative FDI were years of steep declines in the value of the pound. In the years before and after, when exchange rates were more stable, net FDI was near zero.

GDP growth rates %

The NIESR chose to show the value of quarterly output to show a purported decline in the rate of economic growth post EU Referendum. More visible are the GDP growth rates.

Fig 5 : Annual GDP growth rates for the Euro area and the UK from 1999 to 2018. 

The Euro area and the UK suffered an economic crash of similar magnitude in 2008 and 2009. From 2010 to 2018 the UK enjoyed unbroken economic growth, peaking in 2014. Growth rates were declining well before the EU referendum. The Euro area was again in recession in 2012 and 2013, which more than offsets its stronger growth relative to the UK from 2016 to 2018. In the years 2010 to 2018 Euro area GDP growth averaged 1.4%, compared with 1.5% for the years 1999 to 2009. In the UK it was 1.9% in both periods. The NIESR is essentially claiming that leaving the EU without a deal will reduce UK growth to levels comparable with most of the EU.

Unemployment – total and youth

Another metric is unemployment rates. If voting to leave has impacted business investment and economic growth, one would expect a lagged impact on unemployment.

Fig 6 : Unemployment rates (total and youth) for the Euro area and the UK from 1999 to 2019. The current year is to September.

Unemployment in the Euro area has always been consistently higher than in the UK. The second recession in 2012 and 2013 in the Euro area resulted in unemployment peaking at least two years later than the UK. But in both places there has been over five years of falling unemployment. Brexit seems to have zero impact on the trend in the UK, where unemployment is now the lowest since the early 1970s. 

The average rates of total unemployment for the period 1999-2018 are 8.2% in the Euro area and 6.0% in the UK. For youth unemployment they are 20.9% and 14.6% respectively. 

The reason for higher rates of unemployment in EU countries for decades is largely down to greater regulatory rigidities than the UK. 

Concluding comments

NIESR’s assumption that the slowdowns in business investment and economic growth are solely due to the uncertainties created by Brexit is not supported by the wider evidence. Without support for that claim, the ten-year forecasts of slower economic growth due to Brexit fail entirely. Instead Britain should be moving away from EU stagnation with high youth unemployment, charting a better course that our European neighbours will want to follow.

Kevin Marshall

Cummings, Brexit and von Neumann

Over at Cliscep, Geoff Chambers has been reading some blog articles by Dominic Cummings, now senior advisor to PM Boris Johnson, and formerly the key figure behind the successful Vote Leave campaign in the 2016 EU Referendum. In a 2014 article on game theory Cummings demonstrates he has actually read the von Neumann articles and the seminal 1944 book “Theory of Games and Economic Behavior” that he quotes. I am sure that he has drawn on secondary sources as well.
A key quote in the Cummings article is from Von Neumann’s 1928 paper.

‘Chess is not a game. Chess is a well-defined computation. You may not be able to work out the answers, but in theory there must be a solution, a right procedure in any position. Now, real games are not like that at all. Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.’

Cummings states the paper

introduced the concept of the minimax: choose a strategy that minimises the possible maximum loss.

Neoclassical economics starts from the assumption of utility maximisation, based on everyone being in the same position and having the same optimal preferences. In relationships they are usually just suppliers and demanders, with both sides gaining. Game theory posits that there may be trade-offs in relationships, with possibilities of some parties gaining at the expense of others. What von Neumann (and also Cummings) do not fully work out is a consequence of people bluffing. As they do not reveal preferences, it is not possible to quantify the utility they receive. As such, mathematics is only of use in working through hypothetical situations, not for empirically working out optimal strategies in most real-world situations. But the discipline imposed by laying out the problem in game theory is to recognize that opponents in the game both have different preferences and may be bluffing.
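A minimal sketch of the minimax rule Cummings cites, using a hypothetical two-strategy loss matrix (the numbers are purely illustrative, not from Cummings or von Neumann):

```python
# Minimax on a hypothetical payoff matrix: rows are our strategies, columns the
# opponent's responses, entries are our losses. Choose the row whose worst case
# (maximum loss) is smallest.
payoffs = [
    [3, 6],   # strategy A: loss of 3 or 6 depending on the opponent
    [5, 4],   # strategy B: loss of 5 or 4
]

worst_case = [max(row) for row in payoffs]  # maximum possible loss per strategy
best = min(range(len(payoffs)), key=lambda i: worst_case[i])

print(f"Worst cases: {worst_case}")  # [6, 5]
print(f"Minimax choice: strategy {'AB'[best]} (max loss {worst_case[best]})")
```

Strategy A offers the lower best case, but minimax picks strategy B because its worst case is less bad: it minimises the maximum loss rather than chasing the maximum gain.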

In my view one has to consider the situation of the various groups in the Brexit “game”.

The EU is a major player whose gains or losses from Brexit need to be considered. More important than the economic aspects (the loss of 15% of EU GDP; a huge net contributor to the EU budget and a growing economy when the EU as a whole is heading towards recession) is the loss of face at having to compromise for a deal, or the political repercussions of an independent Britain being at least as successful as a member.

By coming out as the major national party of Remain the Liberal Democrats have doubled their popular support. However, in so doing they have taken an extreme position, which belies their traditional occupation of the centre ground in British politics. Further, in a hung Parliament it is unlikely that they would go into coalition with either the Conservatives or Labour. The nationalist Plaid Cymru and SNP have similar positions. In a hung Parliament the SNP might go into coalition with Labour, but only on the condition of another Scottish Independence Referendum.

The Labour Party have a problem. Comparing Chris Hanretty’s estimates of the referendum vote split for the 574 parliamentary constituencies in England and Wales with the 2015 General Election results, Labour seats are more deeply divided than the country as a whole. Whilst Labour held just 40% of the seats, they had just over half of the 231 seats with a 60% or more Leave vote, and almost two-thirds of the 54 seats with a 60% or more Remain vote. Adding in the constituencies where Labour came second by a margin of less than 12% of the vote (the seats needed to win a Parliamentary majority), I derived the following chart.

Tactically, Labour would have to move towards a Leave position, but most of their MPs were very pro-Remain and a clear majority of Labour voters likely voted Remain. Even in some Labour constituencies where the constituency as a whole voted Leave, a majority of Labour voters may have voted Remain. Yet leading members of the current Labour leadership, and a disproportionate number of the wider leadership, are in very pro-Remain London constituencies.

The Conservative-held seats show a less polarised spread of opinion. Whilst less than 30% of their 330 England and Wales seats voted more than 60% Leave, the vast majority voted Leave and very few were strongly pro-Remain.

But what does this tell us about a possible Dominic Cummings strategy in the past few weeks?

A major objective since Boris Johnson became Prime Minister, and Cummings was appointed less than two months ago, has been a drive to leave the EU on 31st October. The strategy has been to challenge the EU to compromise on the Withdrawal Agreement to obtain a deal acceptable to the UK Parliament. Hilary Benn's EU Surrender Act was passed to hamper the negotiating position of the Prime Minister, thus shielding the EU from either having to compromise or being seen by the rest of the world as intransigent against reasonable and friendly approaches. Another aim has been to force other parties, particularly Labour, to clarify where they stand. As a result, Labour seems to have adopted a clear Remain policy. In forcing the Brexit policy the Government have lost their Parliamentary majority. However, they have caused Jeremy Corbyn to conduct a complete about-turn on a General Election: having called for an immediate election, he has twice turned down the opportunity to hold one.

Returning to the application of game theory to the current Brexit situation, I believe there are a number of possible options.

  1. Revoke Article 50 and remain in the EU. The Lib Dem, Green, SNP and Plaid Cymru position.
  2. Labour’s current option of negotiating a Withdrawal Agreement to its liking, then holding a second referendum on leaving with that Withdrawal Agreement or remaining in the EU. As I understand the current situation, the official Labour position would be to Remain, but members of a Labour Cabinet would be allowed a free vote. That is, Labour would respect the EU Referendum result only very superficially, whilst not permitting the UK to break away from the umbrella of EU institutions and diktats.
  3. To leave on a Withdrawal Agreement negotiated by PM Boris Johnson and voted through Parliament.
  4. To leave the EU without a deal.
  5. To extend Article 50 indefinitely until public opinion gets so fed up that it can be revoked.

Key to this is understanding the perspectives of all sides. For Labour (and many others in Parliament) the biggest expressed danger is a no-deal Brexit. This I believe is either a bluff on their part, or a failure to get a proper sense of proportion. This is illustrated by reading the worst-case No Deal Yellowhammer document (released today) as a prospective reality rather than a “brain-storm” working paper serving as a basis for contingency planning. By imagining such situations, however unrealistic, action plans can be created to prevent the worst impacts should they arise. Positing maximum losses allows the impacts to be minimised. Governments usually keep such papers confidential precisely because political opponents and journalists will evaluate them as credible scenarios which will not be mitigated against.

Labour’s biggest fear – and that of many others who have blocked Brexit – is of confronting the voters. This is especially so after telling Leave voters they were stupid for voting the way they did, or were taken in by lies. Although the country is evenly split between Leave- and Remain-supporting parties, the more divided nature of the Remain vote means that the Conservatives will likely win a majority on around a third of the vote. Inputting yesterday’s YouGov/Times opinion poll results into the Electoral Calculus User-Defined Poll gives the Conservatives a 64-seat majority with just 32% of the vote.

I think when regional differences are taken into account the picture is slightly different. The SNP will likely end up with 50 seats, whilst Labour could lose seats to the Brexit Party in the North and maybe to the Lib Dems. If the Conservatives do not win a majority, the fifth scenario is most likely to play out.

In relation to Cummings and Game Theory, I would suggest that the game is still very much in play, with new moves to be made and further strategies to come into play. It is Cummings and other Government advisors who will be driving the game forward, with the Remainers being the blockers.

Kevin Marshall

Updated 29/09/19