Imperial self-congratulations on saving 3 million lives

Late last night there was an article at the BBC that I can no longer find on the BBC website. Fortunately my browser history still has the link.

The article linked to a pre-publication article in Nature from the team at Imperial College – Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe

On pdf page 16 of 18 are modelled estimates that by 4th May lockdowns across 11 countries had saved 2.8–3.1 million lives.
On pdf page 6 / article page 5, Figure 1 shows four countries, the last being the UK.

The graphics show for all countries that the day of the lockdown saw a massive step reduction in daily new infections and in the magic R. For the UK, modelled daily infections spiked at over 400,000 on the day of the lockdown and were less than 100,000 the day after; R fell from around 4 at lockdown to 1 the day after. Prime Minister Boris Johnson gave a very powerful and sincere speech ordering the lockdown, but he did not realize how transformative the power of his words would be. It was the same in all the other countries surveyed. Further, all countries surveyed had a massive spike in infections in the few days leading up to lockdown being imposed. If only the wisdom of the scientists had been listened to a few days earlier – at slightly different dates in each country – then thousands more lives could have been saved.

I believe that the key to these startling conclusions lies in the model assumptions, not in data from the natural world. We have no idea of the total number of coronavirus cases in any country at the start of lockdown, just the identified number of cases. Thus, whilst the model estimates of the number of cases cannot be proved wrong, they are very unlikely to be correct. I can think of five reasons to back up my claim, with particular reference to the UK.
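
To see why the cliff-edge is an artefact of the model structure, here is a minimal toy renewal-model sketch in Python. It is my own illustration with invented numbers, not the Imperial College code, but it shows that once the modeller only allows R to change on the date of lockdown, the fitted daily infection curve has to show a dramatic step on exactly that date.

```python
# Minimal toy renewal model (my own sketch, not the Imperial College code).
# Every number is an illustrative assumption. The point: if R is only allowed to
# change on the lockdown date, modelled daily infections *must* show a cliff-edge
# on exactly that date, whatever the real epidemic was doing.
GEN_INTERVAL = [0.1, 0.2, 0.3, 0.2, 0.1, 0.1]  # assumed infectiousness by days-ago
R_BEFORE, R_AFTER = 3.8, 0.8                    # assumed R either side of lockdown
LOCKDOWN_DAY, DAYS = 30, 40

infections = [100.0]                            # assumed seed infections on day 0
for t in range(1, DAYS):
    r_t = R_BEFORE if t < LOCKDOWN_DAY else R_AFTER
    # renewal step: today's infections = R * weighted sum of recent infections
    weighted = sum(w * infections[t - 1 - lag]
                   for lag, w in enumerate(GEN_INTERVAL) if t - 1 - lag >= 0)
    infections.append(r_t * weighted)

for t in (LOCKDOWN_DAY - 1, LOCKDOWN_DAY, LOCKDOWN_DAY + 1):
    print(f"day {t}: modelled new infections = {infections[t]:,.0f}")
# Output shows a large step down at the lockdown day: the "finding" is the assumption.
```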

First, the model implies that the measures taken prior to the lockdown had no impact on coronavirus cases and hardly any on R. These measures included the stopping of visitors to care homes, social distancing measures, the voluntary closing of public places like pubs and the start of home working.

Second, the model implies that Prime Minister Boris Johnson going on TV to order a lockdown had an immediate impact, as it did for other leaders, such as President Macron in France. This is nonsense. People were locked down in households, so there would still have been infections within those households in the few days after lockdown.

Third, we know that many of the coronavirus deaths were of people infected whilst in hospitals or care homes. The lockdown did not segregate people within those communities.

Fourth, the assumed pre-lockdown spike followed by a massive drop in daily new infections was not followed a few days later by any corresponding pattern in daily deaths. It is far easier to make the case for a zero impact of lockdowns than for this extreme impact. The truth, if perfect data were available, is likely to be nearer zero lives saved than 3 million.

Fifth, in Italy the lockdown was not imposed nationally on the same day. The North of Italy was locked down first, followed by the rest of the country some days later. As with the imposition of less draconian measures pre-lockdown in the UK, this should have produced a less immediate effect than suggested by Figure 1.

Are the authors and peer-reviewers at Nature likely to be aware of the problems with the headline BBC claims and the underlying paper? Compare the caption to “Extended Data Table 1” (pdf page 16)

Forecasted deaths since the beginning of the epidemic up to 4th May in our model vs. a counterfactual model assuming no interventions had taken place.

to the report from the BBC;

Lockdowns have saved more than three million lives from coronavirus in Europe, a study estimates. …….

They estimated 3.2 million people would have died by 4 May if not for measures such as closing businesses and telling people to stay at home.

That meant lockdown saved around 3.1 million lives, including 470,000 in the UK, 690,000 in France and 630,000 in Italy, the report in the journal Nature shows.

from the Evening Standard;

Around three million deaths may have been prevented by coronavirus lockdowns across Europe, research suggests.

from yahoo! News;

By comparing the number of deaths against those predicted by their model in the absence of interventions, the researchers believe that  3.1 million deaths have been averted due to non-pharmaceutical measures.

and from eCAM Biomed – GLOBAL IMPACT FUND

This study found that nonpharmaceutical interventions, including national “lockdowns,” could have averted approximately 3.1 million COVID-19 deaths across 11 European countries.

These press reports are not dissimilar to the title of the Imperial College press release

Lockdown and school closures in Europe may have prevented 3.1m deaths

I would suggest that they are different from the caption to Extended Data Table 1. The difference is between a comparison of one model with a counterfactual model based on some unlikely assumptions, and actual lives saved in the real world. The difference is between the lockdowns having saved 3 million lives and having saved far fewer. It is between the decisions of governments sacrificing economic prosperity to save hundreds of thousands of lives, and ruining the lives of millions based on pseudo-science for near zero benefits. The authors should be aware of this and so should the reviewers of the world’s leading science journal.
Are we going to see a quiet withdrawal, like that of the BBC report?

Kevin Marshall

Dumb hard left proclamation replaces pluralistic thirst for knowledge and understanding

Last week Guido Fawkes had a little piece that, in my opinion, illustrates how nasty the world is becoming. I quote in full.

IMPERIAL COLLEGE DROPS “IMPERIAL” MOTTO
ROOTED IN POWER & OPPRESSION

In response to representations from students inspired by the Black Lives Matter movement Imperial College’s President, Professor Alice Gast, has announced they are dropping their “imperialist” Latin motto.

“I have heard from many of you with concerns about the university motto and its appearance on our crest. The Latin motto appears on a ribbon below the crest and is commonly translated to ‘Scientific knowledge, the crowning glory and the safeguard of the empire’. We have removed this ribbon and the motto in a revised crest you can see below in this briefing. This modified crest is already in use by my office and the Advancement team and will be integrated into all of our materials over the coming year. We will commission a group to examine Imperial’s history and legacy. We have a long way to go, but we will get better. We will build upon our community’s spirit, commitment and drive. We will draw strength from your commitment and support.”

The College’s motto, coined in 1908, was ‘Scientia imperii decus et tutamen’ which translates as ‘Scientific knowledge, the crowning glory and the safeguard of the empire’. As Titania McGrath might say this motto “is a reminder of a historical legacy that is rooted in colonial power and oppression”. That’s an actual quote from the college’s President, in the interests of diversity she is erasing the past. As someone once wrote “Who controls the past controls the future. Who controls the present controls the past.”

UPDATE: This old article from 1995 describes the arms and motto of Imperial College, paying particular attention to the deliberate ambiguity of the Latin:

Thus DECUS ET TUTAMEN translates as ‘an honour and a protection’. The rest of the motto is deliberately ambiguous. SCIENTIA means ‘knowledge’ but is also intended as a pun on the English word ‘science’. IMPERII could mean ‘power’, ‘dominion over’, ‘universal’, ‘of the empire’, ‘of the state’, or ‘superior’; and again is intended as a pun on the English word ‘imperial’.

Because of this ambiguity the full motto can be translated in many different ways. One translation could be: ‘Dominion over science is an honour and a protection’. A more politically correct translation might be: ‘Universal knowledge is beautiful and necessary’.

The Black Lives Matter translation of the motto – ‘Scientific knowledge, the crowning glory and the safeguard of the empire’ – might be valid, but so are many other formulations. Indeed, although Britain at the start of the last century was the most powerful nation, ruled the most extensive empire in history and competed with the United States and Germany as a leader in the pursuit of scientific knowledge, the motto has proved untrue. The imperialists who backed the foundation of Imperial College in the belief that scientific knowledge would safeguard the empire were mistaken. What is left is an Imperial College ranked about tenth in world university rankings, so it is a glorious product of imperialist thinking. Given that it is still thriving, it is more glorious than the majestic ruins of earlier empires, such as the Colosseum in Rome or the Parthenon in Athens.

Deeper than this is that the motto is deliberately a pun. It is superficially meaningful in different ways to those from a diverse range of backgrounds and belief systems. But those with deeper understanding – achieved through high-level study and reflection – know that more than one valid perspective is possible. That leads to the realisation that our own knowledge, or the collective knowledge of any groups we might identify as belonging to, is not the totality of all possible knowledge, and might even turn out to be false some time in the future. This humility gives a basis for furthering understanding of both the natural world and the place of people within it. Rather than shutting out alternative perspectives, we should develop understanding of our own world view, and aim to understand others. This is analogous to the onus in English Common Law for the prosecution to prove a clearly defined case beyond reasonable doubt based on the evidence. It also applies to the key aim of the scientific method. Conjectures about the natural world are ultimately validated by experiments based in the natural world.

Consider the alternative “ideal” that we are heading towards at an alarming rate of knots. What counts as knowledge is the collective opinion of those on the self-proclaimed moral high ground. In this perspective those who do not accept the wisdom of the intellectual consensus are guilty of dishonesty and should not be heard. All language and observations of the natural world are interpreted through this ideological position. Any conflict is resolved by the consensus. Is it far-fetched? A quote from Merchants of Doubt – Oreskes & Conway 2010.

Sunday Times exaggerates price gouging on Amazon

It has been many months since I last posted on this blog, due to being engaged in setting up a small business after many years working as a management accountant, mostly in manufacturing. For the first time this year I purchased the Sunday Times. What caught my attention was an article “Amazon sellers rolling in dough from flour crisis“. As my speciality was product costing, I noticed a few inaccuracies and exaggerations.

Sunday Times article from page 6 of print edition 03/05/20


The first issue was on the fees.

Amazon sells many products directly to consumers but more than half of its sales are from third-party sellers on its “Marketplace”. They pay fees to the online giant of up to 15% of the sale price.

The fees are at least 15% of the sale price, and that is if the seller despatches the item themselves, incurring the cost of postage and packing.

Let us take an example of the price rises.

A packet of 45 rolls of Andrex lavatory roll rose from under £24 to more than £95.

For somebody purchasing from Amazon with Prime, there is free postage on purchases over £20. So they can get 45 rolls delivered for about the standard supermarket price of 5 x 9-roll packs at £4.50 each. Using a third-party app (which might not be accurate) for the Classic Clean Andrex, I find that third-party sellers were selling at £23.45 up to March 8, when stocks ran out. Further, Amazon were selling for about 3 days at £18.28 until Sunday March 8, when they also ran out. Apart from on Fri Mar 13, Amazon did not have supplies until late April. It was during this period that third-party sellers were selling at between £50 and £99.99. Any items offered for sale sold very quickly.

Now suppose an enterprising individual managed to grab some Andrex from a wholesaler (est. £15 inc. VAT) and list it for sale on Amazon. How much would they make? If they already had an account (costing about £30 per month) they could despatch it themselves. They would need a large box (at least 45 x 45 x 35 cm), which they might be able to buy for under £30 for a pack of 15. They would also have to pay postage: £20.45 at the Post Office. If anyone can find a carrier (for 6.5kg) cheaper than £12, including insurance and pick-up, please let me know in the comments. Selling at £50, the costs would be at least £7.50 (fees) + £15 (wholesale) + £2 (box) + £12 (carriage) = £36.50. To make a quick buck it is a lot of work.
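
For anyone who wants to check the arithmetic, here is the same costing as a short Python calculation, taking the figures above at face value; the wholesale, box and carriage costs are the estimates quoted in the text, not Amazon's published numbers.

```python
# Rough worked version of the costing above, assuming the figures quoted in the
# text (15% referral fee, £15 wholesale, £2 per box, £12 carriage) are about right.
sale_price = 50.00
referral_fee = 0.15 * sale_price   # Amazon fee at 15% of the sale price = £7.50
wholesale = 15.00                  # estimated wholesale cost inc. VAT
box = 30.00 / 15                   # pack of 15 boxes for about £30 = £2 each
carriage = 12.00                   # assumed cheapest courier for a 6.5 kg parcel

total_cost = referral_fee + wholesale + box + carriage
print(f"Costs: £{total_cost:.2f}, margin: £{sale_price - total_cost:.2f}")
# Costs: £36.50, margin: £13.50 per pack, before the ~£30/month account fee.
```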

This is, however, a bad example. Let us try a much lower-weight product: the classic Uno card game, which the Sunday Times claims was listed at £5.60 on March 1st and at £5.60–£17.99 on April 30th. This compares with £7.00 at Argos and Sainsbury’s and £6.97 at Asda. The inaccuracy here is with the range of prices, as there were multiple sellers on both dates, with £5.60 being the price sold by Amazon themselves. Actual selling prices fluctuated during March, and this evening the prices are between £5.49 and £17.99. It is usually the case with popular products that there are multiple sellers. During March and April Amazon were out of stock, with actual selling prices between £4.99 and £19.00, most often in the range £9.00–£11.50.

A final example from the Sunday Times is Carex Handwash Sensitive 250ml. As an antibacterial product, it sold out in supermarkets as soon as the Government recommended frequent hand washing. As such it was ripe for making super-profits during the period of panic buying. This product used to be frequently available at £1 or slightly more. The Sunday Times lists the Amazon price at £1.99 on March 1st and at £5.98–£11.50 on April 30th. My app shows a single seller at £7.99, with a March 1st price of £3.25. The Sunday Times has probably picked up a different listing that is no longer available. The best value Carex antibacterial at the time of writing appears to be this listing for 2 x 250ml, where the price ranges from £9.49 to £13.16 including delivery. Selling at around £3.64 prior to March, its price peaked at around £32.99 in mid-March.

Whilst the Sunday Times article may not have the best examples, it does highlight that some people have made extraordinary profits, either by being in the right place at the right time or by quickly reacting to changing market information and anticipating the irrational panic buying of many shoppers. Here the problem is not with entrepreneurs meeting demand, but with consumers listening to the fearmongers in the media and on social media, believing that a pandemic “shutdown” would stop the movement of goods, along with a cultural ignorance of the ability of markets to respond rapidly to new information. In the supermarkets many shelves were needlessly emptied. Much of the fresh food bought in panic was binned. Further, many households will not be purchasing any more rice, tinned tomatoes or toilet rolls for many months. Since then there has been an extraordinary response by suppliers and supermarkets in filling the shortages. The slowest responses to shortages have been where the state is the dominant purchaser or the monopoly supplier and purchaser. The former is in the area of PPE, and the latter in the area of coronavirus testing.

Finally, there is a puzzle as to why there is such a range of prices available for an item on Amazon. One reason is that many of the high-priced sellers were once competitive, but the going price has since fallen dramatically. Another is that the higher-priced sellers are either hoping people make a mistake, or have “shops” on Amazon that lure people in with low-priced products in the hope they occasionally buy other, over-priced products – like the old-fashioned supermarket loss-leaders, but on steroids. Alternatively they may have the products listed elsewhere (e.g. actual shops or on eBay) and/or a range of products, with the extraordinary profits of the few offsetting the long-term write-offs of the many. There is the possibility that these hopefuls will be the future failures, as will be the majority of entrepreneurial ventures in any field.

Kevin Marshall

How misleading economic assumptions can show Brexit making people worse off

Last week BBC News headlined “Brexit deal means ‘£70bn hit to UK by 2029’”. ITV News had a similar report. The source, NIESR, summarizes their findings as follows:-

Fig 1 : NIESR headline findings 

£70bn appears to be a lot of money, but this is a 10-year forecast on an economy that currently has a GDP of £2,000bn. The difference is about one third of one percent a year. The “no deal” scenario is just £40bn worse than the current deal on offer, hardly an apocalyptic outcome that should not be countenanced. Put another way, if underlying economic growth is 2%, then on the NIESR figures the economy will in ten years be between 16% and 22% larger. In economic forecasting, the longer the time frame, the more significant the underlying assumptions. The reports are based on an NIESR open-access paper, Prospects for the UK Economy – Arno Hantzsche, Garry Young, first published 29 Oct 2019. The key basis is contained in Figures 1 & 2, reproduced below.
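
As a rough cross-check of those percentages, here is the back-of-envelope arithmetic in Python. The £110bn "no deal" figure is my own inference from the £70bn hit plus the £40bn difference quoted above, not a number NIESR states directly.

```python
# Back-of-envelope check on the headline, using illustrative round numbers from
# the text rather than NIESR's own detailed figures.
gdp = 2000.0          # current UK GDP, GBP bn (approx.)
hit_deal = 70.0       # NIESR estimated hit from the deal by 2029, GBP bn
hit_no_deal = 110.0   # assumed: the deal figure plus the GBP 40bn "no deal" difference
years = 10

print(f"Deal hit per year: {hit_deal / gdp / years:.2%} of GDP")        # ~0.35%
print(f"No-deal hit per year: {hit_no_deal / gdp / years:.2%} of GDP")  # ~0.55%

growth = 1.02 ** years - 1
print(f"Economy after 10 years at 2% growth: {growth:.0%} larger")            # ~22%
print(f"...less the no-deal hit: {growth - hit_no_deal / gdp:.0%} larger")    # ~16%
```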

Fig 2 : Figures 1 & 2 from the NIESR paper “Prospects for the UK Economy”

The two key figures purport to show that Brexit has made a difference. Business investment growth has apparently ground to a halt since mid-2016 and economic growth has slowed. What they do not show is a decline in business investment, nor a halting of economic growth.

After these figures the report states:-

The reason that investment has been affected so much by the Brexit vote is that businesses fear that trade with the EU will be sufficiently costly in the future – especially with a no-deal Brexit – that new investment will not pay off. Greater clarity about the future relationship, especially removing the no-deal threat, might encourage some of that postponed investment to take place. But that would depend on the type of deal that is ultimately negotiated. A deal that preserved the current close trading relationship between the UK and EU could result in an upsurge in investment. In contrast, a deal that would make it certain that there would be more trade barriers between the UK and EU in the future would similarly remove the risk of no deal but at the same time eliminate the possibility of closer economic ties, offsetting any boost to economic activity.

This statement asserts, without evidence, that the cause of the change in investment trend is singular: business fears over Brexit. There is no corroborating evidence to back this assumption, such as surveys of business confidence or declines in the stock markets. Nor is there a comparison with countries other than the UK, to test whether any apparent shifts are due to other causes, such as the normal business cycle. Yet it is this singular assumed cause of the apparent divergence from trend that is used as the basis of forecasting for different policy scenarios a decade into the future.

The rest of this article will concentrate on the alternative evidence, to show that any alleged changes in economic trends are either taken out of context or did not occur as a result of Brexit. For this I use World Bank data over a twenty-year period, comparing the UK to the Euro area. If voting to leave the EU has had a significant impact on economic trends, it should show up as a divergence between the two after mid-2016.

Net Foreign Direct Investment

The World Bank has no data for the narrower measure of business investment. The nearest alternative is net foreign direct investment.


Fig 3 : Data for net foreign direct investment from 1999 to 2018 for the Euro area and the UK.

UK net foreign direct investment was strongly negative from 2014 to 2016, becoming around zero in 2017 and 2018. The Euro area shows the opposite trend. Politically, in 2014 UKIP won the UK elections to the European Parliament, followed in 2015 by the promise of a referendum on the EU. Maybe the expectation of Britain voting to leave the EU could have had an impact? More likely this net outflow is connected to the decline in the value of the pound. From xe.com

Fig 4 : 10 year GBP to USD exchange rates. Source xe.com

The three years of net negative FDI were years of steep declines in the value of the pound. In the years before and after, when exchange rates were more stable, net FDI was near zero.

GDP growth rates %

The NIESR chose to show the value of quarterly output to demonstrate a purported decline in the rate of economic growth post EU Referendum. More revealing are the GDP growth rates.

Fig 5 : Annual GDP growth rates for the Euro area and the UK from 1999 to 2018. 

The Euro area and the UK suffered economic crashes of similar magnitude in 2008 and 2009. From 2010 to 2018 the UK enjoyed unbroken economic growth, peaking in 2014. Growth rates were declining well before the EU referendum. The Euro area was again in recession in 2012 and 2013, which more than offsets its stronger growth relative to the UK from 2016 to 2018. In the years 2010 to 2018 Euro area GDP growth averaged 1.4%, compared with 1.5% for the years 1999 to 2009. In the UK it was 1.9% in both periods. The NIESR is essentially claiming that leaving the EU without a deal will reduce UK growth to levels comparable with most of the EU.

Unemployment – total and youth

Another metric is unemployment rates. If voting to leave has impacted business investment and economic growth, one would expect a lagged impact on unemployment.

Fig 6 : Unemployment rates (total and youth) for the Euro area and the UK from 1999 to 2019. The current year is to September.

Unemployment in the Euro area has been consistently higher than in the UK. The second recession in 2012 and 2013 in the Euro area resulted in unemployment peaking at least two years later than in the UK. But in both places there have been over five years of falling unemployment. Brexit seems to have had zero impact on the trend in the UK, where unemployment is now the lowest since the early 1970s.

The average rates of total unemployment for the period 1999-2018 are 8.2% in the Euro area and 6.0% in the UK. For youth unemployment they are 20.9% and 14.6% respectively. 

The reason for decades of higher unemployment in EU countries is largely down to greater regulatory rigidities than in the UK.

Concluding comments

NIESR’s assumption that the slowdowns in business investment and economic growth are solely due to the uncertainties created by Brexit is not supported by the wider evidence. Without support for that claim, the ten-year forecasts of slower economic growth due to Brexit fail entirely. Instead Britain should be moving away from EU stagnation with its high youth unemployment, charting a better course that our European neighbours will want to follow.

Kevin Marshall

Cummings, Brexit and von Neumann

Over at Cliscep, Geoff Chambers has been reading some blog articles by Dominic Cummings, now senior advisor to PM Boris Johnson, and formerly the key figure behind the successful Vote Leave campaign in the 2016 EU Referendum. In a 2014 article on game theory Cummings demonstrates he has actually read the von Neumann articles and the seminal 1944 book “Theory of Games and Economic Behavior” that he quotes. I am sure that he has drawn on secondary sources as well.
A key quote in the Cummings article is from Von Neumann’s 1928 paper.

‘Chess is not a game. Chess is a well-defined computation. You may not be able to work out the answers, but in theory there must be a solution, a right procedure in any position. Now, real games are not like that at all. Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.’

Cummings states that the paper

introduced the concept of the minimax: choose a strategy that minimises the possible maximum loss.

Neoclassical economics starts from the assumption of utility maximisation, based on everyone being in the same position and having the same optimal preferences. In relationships they are usually just suppliers and demanders, with both sides gaining. Game theory posits that there may be trade-offs in relationships, with possibilities of some parties gaining at the expense of others. What von Neumann (and also Cummings) does not fully work out is the consequence of people bluffing. As they do not reveal their preferences, it is not possible to quantify the utility they receive. As such, mathematics is only of use for working through hypothetical situations, not for empirically working out optimal strategies in most real-world situations. But the discipline imposed by laying out the problem in game theory terms is to recognize that opponents in the game both have different preferences and may be bluffing.
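
For readers unfamiliar with the term, here is a minimal sketch of the minimax rule in Python. The payoff matrix is invented purely for illustration and is not meant to represent anyone's actual estimates of the Brexit pay-offs.

```python
# A minimal minimax illustration, in the spirit of von Neumann's definition
# quoted above: pick the strategy whose worst-case outcome is least bad.
# Rows are our strategies, columns the opponent's responses, values our payoffs.
# All numbers are invented for illustration.
payoffs = {
    "hold firm":  [-2,  5,  3],
    "compromise": [ 1,  2,  2],
    "capitulate": [ 0, -1, -4],
}

def minimax_choice(payoff_matrix):
    """Return the strategy that maximises the minimum (worst-case) payoff."""
    return max(payoff_matrix, key=lambda s: min(payoff_matrix[s]))

best = minimax_choice(payoffs)
print(best, "worst case:", min(payoffs[best]))
# "compromise" wins here: its worst case (+1) beats "hold firm" (-2) and "capitulate" (-4).
```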

In my view one has to consider the situation of the various groups in the Brexit “game”.

The EU is a major player whose gains or losses from Brexit need to be considered. More important than the economic aspects (the loss of 15% of EU GDP, a huge net contributor to the EU budget and a growing economy when the EU as a whole is heading towards recession) is the loss of face at having to compromise for a deal, or the political repercussions of an independent Britain being at least as successful as a member.

By coming out as the major national party of Remain, the Liberal Democrats have doubled their popular support. However, in so doing they have taken an extreme position, which belies their traditional occupation of the centre ground in British politics. Further, in a hung Parliament it is unlikely that they would go into coalition with either the Conservatives or Labour. The nationalist Plaid Cymru and SNP have similar positions. In a hung Parliament the SNP might go into coalition with Labour, but only on the condition of another Scottish Independence Referendum.

The Labour Party has a problem. Comparing Chris Hanretty’s estimates of the EU Referendum vote split for the 574 parliamentary constituencies in England and Wales with the 2015 General Election results, Labour seats are more deeply divided than the country as a whole. Whilst Labour held just 40% of the seats, they had just over half of the 231 seats with a 60% or more Leave vote, and almost two-thirds of the 54 seats with a 60% or more Remain vote. Adding in the constituencies where Labour came second by a margin of less than 12% of the vote (the seats needed to win a Parliamentary majority), I derived the following chart.

Tactically, Labour would have to move towards a Leave position, but most of the MPs were very pro-Remain and a clear majority of Labour voters likely voted Remain. Even in some Labour constituencies where the constituency as a whole voted Leave, a majority of Labour voters may have voted Remain. Yet leading members of the current Labour leadership, and a disproportionate part of the membership, are in very pro-Remain London constituencies.

The Conservative-held seats had a less polarised spread of opinion. Whilst less than 30% of their 330 England and Wales seats voted more than 60% Leave, the vast majority voted Leave and very few were virulently pro-Remain.

But what does this tell us about a possible Dominic Cummings strategy in the past few weeks?

A major objective since Boris Johnson became Prime Minister and Cummings was appointed less than two months ago has been a drive to leave the EU on 31st October. The strategy has been to challenge the EU to compromise on the Withdrawal Agreement, to obtain a deal acceptable to the UK Parliament. Hilary Benn’s EU Surrender Act was passed to hamper the negotiating position of the Prime Minister, thus shielding the EU from having either to compromise or to be seen by the rest of the world as intransigent in the face of reasonable and friendly approaches. Another aim has been to force other parties, particularly Labour, to clarify where they stand. As a result, Labour seems to have shifted to a clear Remain policy. In forcing the Brexit policy the Government has lost its Parliamentary majority. However, they have caused Jeremy Corbyn to perform a complete about-turn on a General Election: having called for an immediate election, he then twice turned down the opportunity to have one.

Returning to the application of game theory to the current Brexit situation, I believe there are a number of possible options.

  1. Revoke Article 50 and remain in the EU. The Lib Dem, Green, SNP and Plaid Cymru position.
  2. Labour’s current option of negotiating a Withdrawal Agreement to their liking, then holding a second referendum on leaving with that Withdrawal Agreement or remaining in the EU. As I understand the current situation, the official Labour position would be to Remain, but members of a Labour Cabinet would be allowed a free vote. That is, Labour would respect the EU Referendum result only very superficially, whilst not permitting a break away from the umbrella of EU institutions and diktats.
  3. To leave on a Withdrawal Agreement negotiated by PM Boris Johnson and voted through Parliament.
  4. To leave the EU without a deal.
  5. To extend Article 50 indefinitely until public opinion gets so fed up that it can be revoked.

Key to this is understanding the perspectives of all sides. For Labour (and many others in Parliament) the biggest expressed danger is a no-deal Brexit. This I believe is either a bluff on their part, or a failure to get a proper sense of proportion. This is illustrated by reading the worst-case No Deal Yellowhammer document (released today) as a prospective reality rather than a “brainstorm” working paper produced as a basis for contingency planning. By imagining such situations, however unrealistic, action plans can be created to prevent the worst impacts should they arise. Positing maximum losses allows the impacts to be minimized. Governments have usually kept such papers confidential precisely because political opponents and journalists evaluate them as credible scenarios that will not be mitigated against.

Labour’s biggest fear – and that of many others who have blocked Brexit – is of confronting the voters. This is especially so after telling Leave voters they were stupid for voting the way they did, or were taken in by lies. Although the country is evenly split between Leave- and Remain-supporting parties, the more divided nature of the Remainers means that the Conservatives will likely win a majority on around a third of the vote. Inputting yesterday’s YouGov/Times opinion poll results into the Electoral Calculus user-defined poll gives the Conservatives a 64-seat majority with just 32% of the vote.

I think when regional differences are taken into account the picture is slightly different. The SNP will likely end up with 50 seats, whilst Labour could lose seats to the Brexit Party in the North and maybe to the Lib Dems. If the Conservatives do not win a majority, the fifth scenario is most likely to play out.

In relation to Cummings and Game Theory, I would suggest that the game is still very much in play, with new moves to be made and further strategies to come into play. It is Cummings and other Government advisors who will be driving the game forward, with the Remainers being the blockers.

Kevin Marshall

Updated 29/09/19

How climate damage costings from EPA scientists are misleading and how to correct them

The Los Angeles Times earlier this month had an article

From ruined bridges to dirty air, EPA scientists price out the cost of climate change. (Hattip Climate Etc.)

By the end of the century, the manifold consequences of unchecked climate change will cost the U.S. hundreds of billions of dollars per year, according to a new study by scientists at the Environmental Protection Agency.
…..
However, they also found that cutting emissions of carbon dioxide and other greenhouse gases, and proactively adapting to a warming world, would prevent a lot of the damage, reducing the annual economic toll in some sectors by more than half.

The article is based on the paper
Climate damages and adaptation potential across diverse sectors of the United States – Jeremy Martinich & Allison Crimmins – Nature Climate Change 2019

The main problem is with the cost alternatives, contained within Figure 2 of the article.

Annual economic damages from climate change under two RCP scenarios. RCP8.5 has no mitigation and RCP4.5 massive mitigation. Source Martinich & Crimmins 2019 Figure 2

I have a lot of issues with the cost estimates. But the fundamental issue centers on the costs that are missing from the RCP4.5 scenario, without which a proper analysis cannot be made.

The LA Times puts forward the 2006 Stern Review – The Economics of Climate Change – as an earlier attempt at calculating “the costs of global warming and the benefits of curtailing emissions”.
The major policy headline from the Stern Review (4.7MB pdf)

Using the results from formal economic models, the Review estimates that if we don’t act, the overall costs and risks of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever. If a wider range of risks and impacts is taken into account, the estimates of damage could rise to 20% of GDP or more. In contrast, the costs of action – reducing greenhouse gas emissions to avoid the worst impacts of climate change – can be limited to around 1% of global GDP each year.

The Stern Review implies a straight alternative: either the costs of unmitigated climate change OR the costs of mitigation policy. The RCP4.5 costs are the residual climate damage costs after costly policies have been successfully applied. The Stern Review quotation only looked at the policy costs, not the residual climate damage costs after policy has been applied, whereas Martinich & Crimmins 2019 only looks at the residual climate damage costs and not the policy costs.

The full costs of any mitigation scenario to combat climate change must include both the policy costs and the residual damage costs. But this is not the most fundamental problem.

The fundamental flaw in climate mitigation policy justifications

The estimated damage costs of climate change result from global emissions of greenhouse gases, which raise the levels of atmospheric greenhouse gases, which in turn raise global average temperatures. This rise in global average temperatures is what is supposed to create the damage costs.
By implication, the success of mitigation policies in reducing climate damage costs is measured by the reduction in global emissions. But 24 annual COP meetings have failed to produce even vague policy intentions that would collectively stabilize emissions at current levels. Figure ES.3 from the UNEP Emissions Gap Report 2018 shows the gap between intentions and the emissions reductions required to constrain global warming to 1.5°C and 2.0°C.

Current mitigation policies in the aggregate will achieve very little. If a country were to impose additional policies, the marginal impact on global emissions would be very small. By implication, any climate mitigation policy costs imposed by a country, or a sub-division of that country, will only result in very minor reductions in the future economic damage costs to that country. This is so even if the climate mitigation policies are the most economically efficient, getting the biggest reductions for a given amount of expenditure. As climate mitigation is net costly, current climate mitigation policies will necessarily impose burdens on the current generation, whilst doing far less to reduce the climate impacts on future generations in the policy area. Conversely, elimination of costly policies will be net beneficial to that country. Given the global demands for climate mitigation, the politically best policy is to do as little as possible, whilst appearing to be as virtuous as possible.

Is there a way forward for climate policy?

A basic principle in considering climate mitigation is derived from Reinhold Niebuhr’s Serenity Prayer

God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.

An amended prayer (or mantra) for policy-makers would be to change things for the better. I would propose that policy-makers need not have perfect knowledge of the future, but should have a reasonable expectation that a policy will change the world for the better. If policy is costly, the benefits should exceed the costs. If policy-makers are aiming to serve their own group, or humanity as a whole, then they should have the serenity to accept the costs and harms of policy. In this light consider a quote by Nobel Laureate Prof William Nordhaus from an article in the American Economic Journal: Economic Policy last year.

The reality is that most countries are on a business-as-usual (BAU) trajectory of minimal policies to reduce their emissions; they are taking noncooperative policies that are in their national interest, but far from ones which would represent a global cooperative policy.

Nordhaus agrees with the UNEP Emissions Gap Report. Based on the evidence of COP24 in Katowice, it is highly unlikely that most countries will do a sudden about-face, implementing policies that are clearly against their national interest. If this is incorrect, maybe someone can start by demonstrating to countries that rely on fossil fuel production for a major part of their national income – such as Russia, Saudi Arabia, Iran and other Middle Eastern countries – how leaving fossil fuels in the ground and embracing renewables is in their national interests. In the meantime, how many politicians will publicly accept that it is not in their power to reduce global emissions, yet continue implementing policies whose success requires that they are part of a collective effort that will reduce global emissions?

If climate change is going to cause future damages, what are the other options?

Martinich & Crimmins 2019 have done some of the work in estimating the future costs of climate change for the United States. Insofar as these are accurate forecasts, actions can be taken to reduce those future risks. But those future costs are contingent on a whole series of assumptions. Most crucially, the large magnitude of the damage costs is usually contingent on “dumb economic actor” assumptions. That is, people have zero behavioral response to changing conditions over many decades. Two examples I looked at last year illustrate the dumb economic actor assumptions.

A Government report last summer claimed that unmitigated climate change would result in 7,000 excess heat deaths in the UK by the 2050s. The amount of warming assumed was small. The underlying report was based on the coldest region of England and Wales experiencing average summer temperatures in the 2050s merely on a par with those of London (the warmest region) today. Most of the excess deaths would be in the over-75s in hospitals and care homes. The “dumb actors” in this case are the health professionals caring for patients in an extreme heatwave in the 2050s in exactly the same way as they do today, even though the temperatures would be slightly higher. Nobody would think to adapt practices by learning from places with hotter summers than the UK at present – that is, from the vast majority of countries in the world.

Last year a paper in Nature Plants went by the title “Decreases in global beer supply due to extreme drought and heat”. I noted the paper made a whole series of dubious assumptions, including two “dumb actor” assumptions, to arrive at the conclusion that beer prices in some places could double due to global warming. One was that, although in the agriculture models barley yields would shrink globally by 16% by 2100 compared to today, contingent on a rise of global average temperatures of over 3.0°C, in Montana and North Dakota yields could double. The lucky farmers in these areas would not try to increase output, nor would farmers faced with shrinking yields reduce output. Another was that large price discrepancies in a bottle of beer would open up over the next 80 years between adjacent countries. This includes between Britain and Ireland, despite most of the beer sold being produced by large brewing companies, often in plants in third countries. No one would have the wit to buy a few thousand bottles of beer in Britain and re-sell them at a huge profit in higher-priced Ireland.

If the prospective climate damage costs in Martinich & Crimmins 2019 are based on similar “dumb actor” assumptions, then any costly adaptation policies derived from the report might be largely unnecessary. People on the ground will have more effective, localized, efficient adaptation strategies. Generalized regulations and investments based on the models will fail on a cost-benefit basis.

Concluding comments

Martinich & Crimmins 2019 look at US climate damage costs under two scenarios, one with little or no climate mitigation policy, the other with considerable successful climate mitigation. In the climate mitigation scenario they fail to add in the costs of the climate mitigation policies. More importantly, actual climate mitigation policies have only been enacted by a small minority of countries, so costs expended on mitigation will not be matched by significant reductions in future climate costs. Whilst in reality any climate mitigation policy is likely to lead to worse outcomes than doing nothing at all, the paper implies the opposite.
Further, the assumptions behind Martinich & Crimmins 2019 need to be carefully checked. If they include “dumb economic actor” assumptions then on this alone the long-term economic damage costs might be grossly over-estimated. There is a real risk that adaptation policies based on these climate damage projections will lead to worse outcomes than doing nothing.
Overall, if policy-makers want to make a positive difference to the world in combating climate change, they should acquire the wisdom to identify the areas where they can only do net harm. In the current environment, that will take an extreme level of courage. Yet such justifications of policy are far less onerous than the rigorous testing and approval process that new medical treatments must go through before being allowed into general circulation.

Kevin Marshall

Nobel Laureate William Nordhaus demonstrates that pursuing climate mitigation will make a nation worse off

Summary

Nobel Laureate Professor William Nordhaus shows that the optimal climate mitigation policy involves far less mitigation than the UNFCCC process proposes: constraining warming by 2100 to 3.5°C instead of to 2°C or less. But this optimal policy is based on a series of assumptions, including that policy is optimal and near-universally applied. In the current situation, with most countries lacking any effective mitigation policies, climate mitigation policies within a country will likely make that country worse off, even though it would be better off were the same policies applied near universally. Countries applying costly climate mitigation policies are making their people worse off.

Context

Last week Bjorn Lomborg tweeted a chart derived from Nordhaus’s August 2018 paper in the American Economic Journal: Economic Policy.

The paper citation is

Nordhaus, William. 2018. “Projections and Uncertainties about Climate Change in an Era of Minimal Climate Policies.” American Economic Journal: Economic Policy 10 (3): 333–60.

The chart shows that the optimal climate mitigation policy, based upon minimization of (a) the combined projected costs of climate mitigation policy and (b) the residual net costs from human-caused climate change, is much closer to the non-policy option of 4.1°C than to restraining warming to 2.5°C. Under the assumptions of Nordhaus’s model, a tighter warming constraint can only be achieved through much greater policy costs. The abstract concludes

The study confirms past estimates of likely rapid climate change over the next century if major climate-change policies are not taken. It suggests that it is unlikely that nations can achieve the 2°C target of international agreements, even if ambitious policies are introduced in the near term. The required carbon price needed to achieve current targets has risen over time as policies have been delayed.

A statement whose implications are ignored

This study is based on mainstream projections of greenhouse gas emissions and the resultant warming. Prof Nordhaus is in the climate mainstream, not a climate agnostic like myself. Given this, I find the opening statement interesting. (My bold)

Climate change remains the central environmental issue of today. While the Paris Agreement on climate change of 2015 (UN 2015) has been ratified, it is limited to voluntary emissions reductions for major countries, and the United States has withdrawn and indeed is moving backward. No binding agreement for emissions reductions is currently in place following the expiration of the Kyoto Protocol in 2012. Countries have agreed on a target temperature limit of 2°C, but this is far removed from actual policies, and is probably infeasible, as will be seen below.
The reality is that most countries are on a business-as-usual (BAU) trajectory of minimal policies to reduce their emissions; they are taking noncooperative policies that are in their national interest, but far from ones which would represent a global cooperative policy.

Although there is a paper agreement to constrain emissions commensurate with 2°C of warming, most countries are doing nothing – or next to nothing – to control their emissions. The real-world situation is completely different from the assumptions made in the model. The implications of this are skirted over by Nordhaus, but will be explored below.

The major results at the beginning of the paper are
  • The estimate of the SCC has been revised upward by about 50 percent since the last full version in 2013.
  • The international target for climate change with a limit of 2°C appears to be infeasible with reasonably accessible technologies even with very ambitious abatement strategies.
  • A target of 2.5°C is technically feasible but would require extreme and virtually universal global policy measures in the near future.

SCC is the social cost of carbon. The conclusions about policy are not obtained by understating the projected costs of climate change. Yet the aim of limiting warming to 2°C appears infeasible. By implication, limiting warming further – such as to 1.5°C – should not be considered by rational policy-makers. Even a target of 2.5°C requires special conditions to be fulfilled and is still less optimal than doing nothing. The conclusion from the paper, without going any further, is that achieving the aims of the Paris Climate Agreement will make the world a worse place than doing nothing. The combined costs of policy and any residual costs of climate change will be much greater than the projected costs of climate change alone.

Some assumptions

The outputs of a model are achieved by making a number of assumptions. When evaluating whether the model results are applicable to real-world mitigation policy, consideration needs to be given to whether those assumptions hold true, and to the impact on policy if they are violated. I have picked out some of the assumptions. Those that are a direct or near-direct quote are in italics.

  1. Mitigation policies are optimal.
  2. Mitigation policies are almost universally applied in the near future.
  3. The abatement-cost function is highly convex, reflecting the sharp diminishing returns to reducing emissions.
  4. For the DICE model it is assumed that the rate of decarbonization going forward is −1.5 percent per year.
  5. The existence of a “backstop technology,” which is a technology that produces energy services with zero greenhouse gas (GHG) emissions.
  6. Assumed that there are no “negative emissions” technologies initially, but that negative emissions are available after 2150.
  7. Assumes that damages can be reasonably well approximated by a quadratic function of temperature change.
  8. Equilibrium climate sensitivity (ECS) is a mean warming of 3.1°C for an equilibrium CO2 doubling.

This list is far from exhaustive. For instance, it does not include assumptions about the discount rate, economic growth or emissions growth. However, the case against current climate mitigation policies, or proposed policies, can be made by consideration of the first four.

Implications of assumptions being violated

I am using a deliberately strong term for the assumptions not holding.

Clearly a policy is not optimal if it does not work, or even spends money to increase emissions. More subtle is the use of sub-optimal policies. For instance, raising the cost of electricity is less regressive if the poor are compensated. As a result the emissions reductions are smaller, and the cost per tonne of CO2 mitigated rises. Or nuclear power is not favoured, so it is replaced by a more expensive system of wind turbines and backup energy storage. These might be trivial issues if, in general, policy were focussed on the optimal policy of a universal carbon tax. No country is even close. Attempts to impose carbon taxes in France and Australia have proved deeply unpopular.

Given the current state of affairs described by Nordhaus in the introduction, the most violated assumption is that mitigation policy is almost universally applied. Most countries have no effective climate mitigation policies, and very few have policies in place likely to result in anywhere near the huge global emission cuts required to achieve the 2°C warming limit. (The most recent estimate from the UNEP Emissions Gap Report 2018 is that global emissions need to be 25% lower in 2030 than in 2017.) Thus globally the costs of unmitigated climate change will be close to the unmitigated 3% of GDP, with global policy costs being a small fraction of 1% of GDP. But a country that spends 1% of GDP on policy – even if that is optimal policy – will only see a minuscule reduction in its expected climate costs. Even the USA, with about one seventh of global emissions, might on Nordhaus’s assumptions expect future climate costs to fall by maybe 0.1% of output from efficiently spending 1% of output. The policy cost to mitigation benefit ratio for a country acting on its own is quite different from that for the entire world working collectively on similar policies. Assumption four – a 1.5% a year rate of decarbonization going forward – illustrates the point in a slightly different way: if the USA started cutting its emissions by an additional 1.5% a year (they are falling without policy), global emissions would likely keep on increasing.
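
To put rough numbers on that claim, here is an illustrative calculation assuming, as Nordhaus does, a quadratic damage function, with unmitigated damages of about 3% of GDP at 4.1°C of warming. The 0.05°C figure for what a single country's policy might shave off end-of-century warming is my own optimistic guess, not a number from Nordhaus.

```python
# Illustrative arithmetic only, using round numbers in the spirit of the argument
# above (not Nordhaus's DICE model): damages are assumed quadratic in warming,
# and unmitigated damages are taken as ~3% of GDP at 4.1C.
unmitigated_warming = 4.1      # degrees C, the no-policy outcome quoted above
unmitigated_damage = 0.03      # share of GDP
damage_coeff = unmitigated_damage / unmitigated_warming ** 2  # quadratic damage function

def damage(warming_c):
    return damage_coeff * warming_c ** 2

# Suppose one country bearing 1% of GDP in policy costs trims global emissions
# enough to shave, say, 0.05 degrees off end-of-century warming (an optimistic
# assumption for a single country acting alone).
avoided = damage(unmitigated_warming) - damage(unmitigated_warming - 0.05)
print(f"Policy cost: 1.00% of GDP; avoided damages: {avoided:.2%} of GDP")
# The avoided damages come to well under 0.1% of GDP - the point made above.
```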

The third assumption is another that is sufficient on its own to undermine climate mitigation. The UK and some states in America are pursuing what would be a less-than-2°C pathway if it were universally applied. That means they are committing to a highly convex policy cost curve (often made steeper by far-from-optimal policies), with virtually no benefits for future generations.

Best Policies under the model assumptions

The simplest alternative to climate mitigation policies would be to have no policies at all. However, if the climate change cost functions are a true representation, and given the current Paris Agreement, this is not a viable option for leaders less thick-skinned than President Trump, or those whose electorates mostly believe in climate change. Economic theory can provide some insights into the strategies to be employed. For instance, if the climate cost curve is a quadratic as in Nordhaus (or steeper – in Stern I believe it was at least a quartic), there are rapidly diminishing returns to mitigation policies in terms of costs mitigated. For a politician who wants to serve their country, the simplest strategies are to

  • Give the impression of doing something to appear virtuous
  • Incur as little cost as possible, especially those that are visible to the majority
  • Benefit special interest groups, especially those with climate activist participants
  • Get other countries to bear the real costs of mitigation.

This implies that many political leaders who want to serve the best interests of their countries need to adopt a strategy of showing they are doing one thing to appear virtuous, whilst in reality doing something quite different.

In the countries dependent on extracting and exporting fossil fuels for a large part of their national income (e.g. the Gulf States, Russia, Kazakhstan, Turkmenistan etc.), different priorities and much higher marginal policy costs for global mitigation are present. In particular, if, as part of climate policies, other countries were to shut down existing fossil fuel extraction, or fail to develop new sources of supply to a significant extent, then market prices would rise, to the benefit of the remaining producers.

Conclusion

Using Nordhaus’s model assumptions, even if the world as a whole fulfilled the Paris Climate Agreement collectively with optimal policies, the world would be worse off than if it did nothing. In reality, most countries are pursuing little or no actual climate mitigation policy. Within this context, pursuing any costly climate mitigation policies will make a country worse off than doing nothing.

Assuming political leaders have the best interests of their country at heart, and regardless of whether they regard climate change as a problem, the optimal policy strategy is to impose as little costly policy as possible for the maximum appearance of being virtuous, whilst doing the utmost to get other countries to pursue costly mitigation policies.

Finally

I reached the conclusion that climate mitigation will always make a nation worse off, using neoclassical graphical analysis, in October 2013.

Kevin Marshall

Australian Beer Prices set to Double Due to Global Warming?

Earlier this week Nature Plants published a new paper Decreases in global beer supply due to extreme drought and heat

The Scientific American has an article “Trouble Brewing? Climate Change Closes In on Beer Drinkers” with the sub-title “Increasing droughts and heat waves could have a devastating effect on barley stocks—and beer prices”. The Daily Mail headlines with “Worst news ever! Australian beer prices are set to DOUBLE because of global warming“. All those climate deniers in Australia have denied future generations the ability to down a few cold beers with their barbecued steaks tofu salads.

This research should be taken seriously, as it is by a crack team of experts across a number of disciplines and universities. Said Steven J Davis of the University of California at Irvine:

The world is facing many life-threatening impacts of climate change, so people having to spend a bit more to drink beer may seem trivial by comparison. But … not having a cool pint at the end of an increasingly common hot day just adds insult to injury.

Liking the odd beer or three, I am really concerned about this prospect, so I rented the paper for 48 hours to check it out. What a sensation it is. Here are a few impressions.

Layers of Models

From the Introduction, a series of models was used.

  1. Created an extreme-events severity index for barley, based on extremes in the historical data for 1981-2010 (a toy sketch of such an index follows this list).
  2. Plugged this into five different Earth System models for the period 2010-2099, run against different RCP scenarios, the most extreme of which shows over 5 times the warming of the 1981-2010 period. What is more, severe climate events are a non-linear function of temperature rise.
  3. Then modelled the impact of these severe weather events on crop yields in 34 world regions using a “process-based crop model”.
  4. (W)e examine the effects of the resulting barley supply shocks on the supply and price of beer in each region using a global general equilibrium model (Global Trade Analysis Project model, GTAP).
  5. Finally, we compare the impacts of extreme events with the impact of changes in mean climate and test the sensitivity of our results to key sources of uncertainty, including extreme events of different severities, technology and parameter settings in the economic model.
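
As a rough illustration of what step 1 might involve, here is a toy severity index in Python. It is my own simplification – scoring a growing season by how far heat runs above, and rainfall below, the 1981-2010 baseline – and is not the index actually used in the paper.

```python
# A toy sketch of what step 1 - an "extreme events severity index" built from
# historical extremes - might look like. This is my own simplification for
# illustration; the paper's actual index is more involved.
import statistics

def severity_index(base_temps, base_rain, year_temp, year_rain):
    """Score a growing season by how far heat is above, and rain below,
    the 1981-2010 baseline, in standard deviations (0 if not extreme)."""
    heat = (year_temp - statistics.mean(base_temps)) / statistics.stdev(base_temps)
    drought = (statistics.mean(base_rain) - year_rain) / statistics.stdev(base_rain)
    return max(heat, 0) + max(drought, 0)

# Hypothetical baseline seasons and one hot, dry year:
baseline_temp = [16.2, 15.8, 16.5, 15.9, 16.1]   # mean season temp, C, illustrative
baseline_rain = [210, 250, 230, 240, 220]        # season rainfall, mm, illustrative
print(round(severity_index(baseline_temp, baseline_rain, 17.5, 180), 2))
```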

What I found odd was they made no allowance for increasing demand for beer over a 90 year period, despite mentioning in the second sentence that

(G)lobal demand for resource-intensive animal products (meat and dairy) processed foods and alcoholic beverages will continue to grow with rising incomes.

Extreme events – severity and frequency

As stated in point 2, the paper uses different RCP scenarios. These featured prominently in the IPCC AR5 of 2013 and 2014. They go from RCP2.6, the most aggressive mitigation scenario, through to RCP8.5, the non-policy scenario, which projects around 4.5°C of warming from 1850-1870 through to 2100, or about 3.8°C of warming from 2010 to 2090.

Figure 1 has two charts. The left-hand chart shows that extreme events will increase in intensity with temperature. RCP2.6 will do very little, but RCP8.5 would result in events 6 times as intense as today by the end of the century. The problem is that for up to 1.5°C of warming there appears to be no noticeable change whatsoever, and that is about the same amount of warming the world has experienced from 1850 to 2010 per HADCRUT4. Beyond that, things take off. How the models empirically project well beyond known experience, for a completely different scenario, defeats me. It could be largely based on their modelling assumptions, which are in turn strongly tainted by their beliefs in CAGW. There is no reality check that their models are not falling apart, or reliant on arbitrary non-linear parameters.

The right-hand chart shows that extreme events are projected to increase in frequency as well: under RCP2.6 a ~4% chance of an extreme event, rising to ~31% under RCP8.5. Again, there is an issue of projecting well beyond any known range.

Fig 2 average barley yield shocks during extreme events

The paper assumes that the current geographical distribution and area of barley cultivation is maintained. They have modelled, from the 1981-2010 baseline, a gridded average yield change in 2099 at 0.5° x 0.5° resolution, to create four colorful world maps, one for each of the four RCP emissions scenarios. At the equator, each grid cell is about 56 x 56 km, an area of around 3,100 km² or 1,200 square miles. Of course, nearer the poles the area diminishes significantly. This is quite a fine level of detail for projections based on 30 years of data applied to radically different circumstances 90 years in the future. Map a) is for RCP 8.5, where on average yields are projected to be 17% down. As Paul Homewood showed in a post on the 17th, this projected yield fall should be put in the context of a doubling of yields per hectare since the 1960s.
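
As a rough check on those grid-cell figures, here is a minimal Python sketch of the area arithmetic, assuming a spherical Earth and about 111.32 km per degree of latitude (my own working, not the paper’s):

```python
import math

def cell_area_km2(lat_deg, res_deg=0.5):
    """Approximate area of a res_deg x res_deg grid cell centred at lat_deg.

    One degree of latitude is ~111.32 km; a degree of longitude shrinks with
    the cosine of latitude. A rough spherical-Earth estimate, for illustration.
    """
    km_per_deg = 111.32
    north_south = km_per_deg * res_deg
    east_west = km_per_deg * math.cos(math.radians(lat_deg)) * res_deg
    return north_south * east_west

print(round(cell_area_km2(0)))    # ~3100 km2 at the equator (about 56 km x 56 km)
print(round(cell_area_km2(55)))   # ~1800 km2 at 55 degrees latitude
```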

This increase in productivity has often been solely ascribed to improvements in seed varieties (see Norman Borlaug), mechanization and the use of fertilizers. These have undoubtedly had a large part to play in the productivity improvement. But also important is that agriculture has become more intensive. Forty years ago it was clear that there was a distinction between the intensive farming of Western Europe and the extensive farming of the North American prairies and the Russian steppes. It was not due to better soils or climate in Western Europe. This difference can be staggering. In the Soviet Union about 30% of agricultural output came from around 1% of the available land. These were the plots on which workers on the state and collective farms could produce their own food and sell the surplus in the local markets.

Looking at chart a in Figure 2, there are wide variations about this average global decrease of 17%.

In North America, Montana and North Dakota have areas where barley shocks during extreme years will lead to mean yield changes over 90% higher than normal, and the surrounding areas have yields >50% higher than normal. But go less than 1,000 km north into Canada, to the Calgary/Saskatoon area, and there are small decreases in yields.

In Eastern Bolivia – the part due North of Paraguay – there is the biggest patch of > 50% reductions in the world. Yet 500-1000 km away there is a North-South strip (probably just 56km wide) with less than a 5% change.

There is a similar picture in Russia. On the Kazakhstani border there are areas of >50% increases, but in a thinly populated band further north and west, running from around Kirov towards southern Finland, there are massive decreases in yields.

Why, over the course of decades, those with increasing yields would not increase output, and those with decreasing yields would not switch to something else, defeats me. After all, if overall yields are decreasing due to frequent extreme weather events, those farmers would be losing money, whilst the farmers who do well when overall yields are down would be making extraordinary profits.

A Weird Economic Assumption

Building up to looking at costs, there is a strange assumption.

(A)nalysing the relative changes in shares of barley use, we find that in most case barley-to-beer shares shrink more than barley-to-livestock shares, showing that food commodities (in this case, animals fed on barley) will be prioritized over luxuries such as beer during extreme events years.

My knowledge of farming and beer is limited, but I believe that cattle can be fed on things other than barley, for instance grass, silage and sugar beet. Yet beers require precise quantities of barley and hops of certain grades.

Further, cattle feed is a large part of the cost of a kilo of beef or a litre of milk, but it takes only around 250-400g of malted barley to produce a litre of beer. The current wholesale price of malted barley is about £215 a tonne, or 5.4 to 8.6p a litre. About the cheapest 4% alcohol lager I can find in a local supermarket is £3.29 for 10 x 250ml bottles, or £1.32 a litre. Taking off 20% VAT and excise duty leaves around 30p a litre for raw materials, manufacturing costs, packaging, manufacturer’s margin, transportation, supermarket’s overheads and supermarket’s margin. For comparison, four pints (2.276 litres) of fresh milk costs £1.09 in the same supermarket, working out at 48p a litre. This carries no excise duty or VAT. It might have greater costs due to refrigeration, but I would suggest it costs more to produce, and that the cattle feed accounts for far more than 5p a litre.
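
To make that arithmetic explicit, here is a back-of-the-envelope Python sketch using the figures quoted above. The excise duty rate of roughly 19p per litre per percentage point of alcohol is my own assumption for illustration, not a figure from the paper, and the supermarket prices are my own observations:

```python
# Rough cost-share arithmetic for malted barley in a litre of cheap lager.
# Figures are those quoted in the text; treat them as illustrative, not audited.
barley_price_per_tonne = 215.0           # GBP per tonne of malted barley
barley_per_litre_g = (250, 400)          # grams of malted barley per litre of beer

barley_cost_per_litre = [g / 1_000_000 * barley_price_per_tonne for g in barley_per_litre_g]
print([f"{c * 100:.1f}p" for c in barley_cost_per_litre])      # ['5.4p', '8.6p']

# Cheapest supermarket lager: GBP 3.29 for 10 x 250 ml bottles.
retail_per_litre = 3.29 / (10 * 0.25)                          # ~GBP 1.32 a litre
ex_vat = retail_per_litre / 1.2                                # strip 20% VAT
# Assumed UK beer duty of ~19p per litre per % ABV on a 4% lager (illustrative).
excise = 0.19 * 4
residual = ex_vat - excise
print(f"left for everything else: ~{residual * 100:.0f}p a litre")  # roughly the ~30p in the text
```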

I know that a reasonable 0.5 litre bottle of ale is £1.29 to £1.80 in the supermarkets I shop in, but it is the cheapest beers that will likely suffer the biggest percentage rise from an increase in raw material prices. Due to taxation and other costs, large changes in raw material prices will have very little impact on final retail prices. Even less so in pubs, where a British pint (568ml) sells at the equivalent of £4 to £7 a litre.

That is, the assumption is the opposite of what would happen in a free market. In the face of a shortage, farmers will substitute other forms of cattle feed for barley, whilst beer manufacturers will absorb the extra cost.

Disparity in Costs between Countries

The most bizarre claim in the article is contained in the central column of Figure 4, which looks at the projected increases in the cost of a 500 ml bottle of beer in US dollars. Chart h shows this for the most extreme RCP 8.5 model.

I was very surprised that a global general equilibrium model would come up with such huge disparities in costs after 90 years. After all, my understanding is that these models use utility-maximizing consumers, profit-maximizing producers, perfect information and instantaneous adjustment. Clearly there is something very wrong with this model. So I decided to compare where I live in the UK with neighbouring Ireland.

In the UK and Ireland there are similarly high taxes on beer, with Ireland’s being slightly higher. Both countries have lots of branches of the massive discount chain Aldi, which also lists some products on its websites aldi.co.uk and aldi.ie. In Ireland a 500 ml can of Sainte Etienne Lager is €1.09, or €2.18 a litre, which is £1.92 a litre. In the UK it is £2.59 for 4 x 440ml cans, or £1.59 a litre. The lager is about 21% more expensive in Ireland, but the tax difference should only be about 15% on a 5% beer (Sainte Etienne is 4.8%). Aldi are not making bigger profits in Ireland; they may just have higher costs there, or lower margins on other items, and it is also comparing a single can against a multipack. Pro-rata, the £1.80 ($2.35) bottle of beer in the UK would be about $2.70 in Ireland. Under the RCP 8.5 scenario, the models predict the bottle of beer to rise by $1.90 in the UK and $4.84 in Ireland. Strip out the excise duty and VAT and the price differential goes from zero to $2.20.
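
A small sketch of this comparison, using only the prices and projected rises quoted above (illustrative arithmetic on my part, not the paper’s method):

```python
# Per-litre prices quoted above (UK multipack vs single Irish can, FX as implied).
uk_per_litre_gbp = 1.59
ie_per_litre_gbp = 1.92
print(f"Ireland premium: {(ie_per_litre_gbp / uk_per_litre_gbp - 1) * 100:.0f}%")   # ~21%

# If the excise/VAT gap only justifies ~15%, the pro-rata Irish equivalent of a
# GBP 1.80 (USD 2.35) UK bottle is roughly:
uk_bottle_usd = 2.35
tax_gap = 0.15
print(f"pro-rata Irish bottle: ~${uk_bottle_usd * (1 + tax_gap):.2f}")               # ~$2.70

# The paper's RCP8.5 projection adds $1.90 (UK) vs $4.84 (Ireland) per bottle;
# the gross gap in those rises (before the post's tax adjustment to ~$2.20) is:
print(f"projected gap in rises: ~${4.84 - 1.90:.2f}")                                # ~$2.94
```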

Now suppose you were a small beer manufacturer in England, Wales or Scotland. If beer was selling for $2.20 more in Ireland than in the UK, would you not want to stick 20,000 bottles in a container and ship it to Dublin?

If the researchers really understood the global brewing industry, they would realize that there are major brands sold across the world. Many are brewed in a number of countries to the same recipe. It is the barley that is shipped to the brewery, where equipment and techniques are identical to those in other parts of the world. The researchers seem to have failed to get away from their computer models and conduct field work in a few local bars.

What can be learnt from this?

When making projections well outside of any known range, the results must be sense-checked. Clearly, although the researchers have used an economic model, they have not understood the basics of economics. People are not dumb automatons waiting for some official to tell them to change their patterns of behavior in response to changing circumstances. They notice changes in the world around them and respond to them. A few seize the opportunities presented and can become quite wealthy as a result. Farmers have been astute enough to note mounting losses and change how and what they produce. There is also competition between regions. For example, in the 1960s Brazil produced over half the world’s coffee. The major region for production in Brazil was centered around Londrina in the North-East of Parana state. Despite straddling the Tropic of Capricorn, every few years there would be a spring-time frost which would destroy most of the crop. By the 1990s most of the production had moved north to Minas Gerais, well out of the frost belt. The rich fertile soils around Londrina are now used for other crops, such as soya, cassava and mangoes. It was not out of human design that the movement occurred, but simply that the farmers in Minas Gerais could make bumper profits in the frost years.

The publication of this article shows a problem with peer review. Nature Plants is basically a biology journal. Reviewers are not likely to have specialist skills in climate models or economic theory, though those selected should have experience in agricultural models. If peer review is literally that, peers reviewing peers, it will fail anyway in an inter-disciplinary subject where the participants do not have a general grounding in all the disciplines. In this paper it is not just economics, but knowledge of product costing as well. It is academic superiors from the relevant specialisms that are required for review, not inter-disciplinary peers.

Kevin Marshall

 

Why can’t I reconcile the emissions to achieve 1.5C or 2C of Warming?

Introduction

At heart I am a beancounter. That is, when presented with figures I like to understand how they are derived. When it comes to the claims about the quantity of GHG emissions that are required to exceed 2°C of warming, I cannot get even close, unless I make a series of assumptions, some of which are far from robust. Applying the same set of assumptions, I cannot derive emissions consistent with restraining warming to 1.5°C.

Further, the combined impact of all the assumptions is to create a storyline that appears to me to be only as empirically valid as an infinite number of other storylines. This includes a large number of plausible scenarios in which much greater emissions can be emitted before 2°C of warming is reached, or in which (based on alternative assumptions) even 2°C of irreversible warming is already in the pipeline.

Maybe an expert climate scientist will clearly show the errors of this climate sceptic, and use it as a means to convince the doubters of climate science.

What I will attempt here is something extremely unconventional in the world of climate. That is, I will try to state all the assumptions made, highlighting them clearly. Further, I will show my calculations and give clear references, so that anyone can easily follow the arguments.

Note – this is a long post. The key points are contained in the Conclusions.

The aim of constraining warming to 1.5 or 2°C

The Paris Climate Agreement was brought about by the UNFCCC. On their website they state.

The Paris Agreement central aim is to strengthen the global response to the threat of climate change by keeping a global temperature rise this century well below 2 degrees Celsius above pre-industrial levels and to pursue efforts to limit the temperature increase even further to 1.5 degrees Celsius. 

The Paris Agreement states in Article 2

1. This Agreement, in enhancing the implementation of the Convention, including its objective, aims to strengthen the global response to the threat of climate change, in the context of sustainable development and efforts to eradicate poverty, including by:

(a) Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;

Translating this aim into mitigation policy requires quantification of global emissions targets. The UNEP Emissions Gap Report 2017 has a graphic showing estimates of the emissions that can occur before the 1.5°C or 2°C warming levels are breached.

Figure 1 : Figure 3.1 from the UNEP Emissions Gap Report 2017

The emissions are of all greenhouse gases, expressed in billions of tonnes of CO2 equivalents. From 2010, the quantities of emissions before either the 1.5°C or the 2°C level is breached are respectively about 600 GtCO2e and 1000 GtCO2e. It is these two figures that I cannot reconcile when using the same assumptions to calculate both. My failure to reconcile is not just a minor difference. Rather, on the same assumptions under which 1000 GtCO2e can be emitted before 2°C is breached, 1.5°C is already in the pipeline. In establishing the problems I encounter, I will endeavor to clearly state the assumptions made and look at a number of examples.

 Initial assumptions

1 A doubling of CO2 will eventually lead to a 3°C rise in global average temperatures.

This despite the 2013 AR5 WG1 SPM stating on page 16

Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C

And stating in a footnote on the same page.

No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.

2 Achieving full equilibrium climate sensitivity (ECS) takes many decades.

This implies that at any point in the last few years, or any year in the future there will be warming in progress (WIP).

3 Including other greenhouse gases adds to warming impact of CO2.

Empirically, the IPCC’s Fifth Assessment Report based its calculations on 2010 when CO2 levels were 390 ppm. The AR5 WG3 SPM states in the last sentence on page 8

For comparison, the CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)

As with climate sensitivity, the assumption is the middle of an estimated range. In this case over one fifth of the range has the full impact of GHGs being less than the impact of CO2 on its own.

4 All the rise in global average temperature since the 1800s is due to rise in GHGs. 

5 An increase in GHG levels will eventually lead to warming unless action is taken to remove those GHGs from the atmosphere, generating negative emissions. 

These are restrictive assumptions made for ease of calculations.

Some calculations

First a calculation to derive the CO2 levels commensurate with 2°C of warming. I urge readers to replicate these for themselves.
From a Skeptical Science post by Dana1981 (Dana Nuccitelli) “Pre-1940 Warming Causes and Logic” I obtained a simple equation for a change in average temperature T for a given change in CO2 levels.

ΔT(CO2) = λ x 5.35 x ln(B/A)
Where A = the CO2 level in year A (expressed in parts per million), and B = the CO2 level in year B.
I use λ = 0.809, so that if B = 2A, ΔT(CO2) = 3.00

Pre-industrial CO2 levels were 280ppm. 3°C of warming is generated by CO2 levels of 560 ppm, and 2°C of warming is when CO2 levels reach 444 ppm.
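
For anyone wishing to replicate the numbers, here is a minimal Python version of the equation above, assuming λ is derived from ECS = 3 and a 280 ppm pre-industrial base (the function names are mine):

```python
import math

# "Magic equation": dT = lambda * 5.35 * ln(B / A), with lambda chosen so that
# a doubling of CO2 gives 3.00 C of warming (ECS = 3, as in assumption 1).
LAMBDA = 3.0 / (5.35 * math.log(2))        # ~0.809

def warming(ppm, base=280.0):
    """Equilibrium warming (C) for a CO2 level of `ppm`, from a 280 ppm base."""
    return LAMBDA * 5.35 * math.log(ppm / base)

def ppm_for(delta_t, base=280.0):
    """CO2 level (ppm) commensurate with `delta_t` C of equilibrium warming."""
    return base * math.exp(delta_t / (LAMBDA * 5.35))

print(f"{warming(560):.2f} C at 560 ppm")     # 3.00
print(f"{ppm_for(2.0):.0f} ppm for 2 C")      # ~444
print(f"{ppm_for(1.5):.0f} ppm for 1.5 C")    # ~396
print(f"{warming(390):.2f} C at 390 ppm")     # ~1.4, the CO2-only warming due at 2010 levels
```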

From the Mauna Loa data, CO2 levels averaged 407 ppm in 2017. Given assumption (3), and further assuming the impact of other GHGs is unchanged, 2°C of warming would have been surpassed in around 2016, when CO2 levels averaged 404 ppm. The actual rise in global average temperatures from HADCRUT4 is about half that amount, hence the assumption that the impact of a rise in CO2 takes an inordinately long time for the actual warming to reveal itself. Even with the assumption that 100% of the warming since around 1800 is due to the increase in GHG levels, the warming in progress (WIP) is about the same as the revealed warming. Yet the Sks article argues that some of the early twentieth century warming was due to factors other than the rise in GHG levels.

This is the crux of the reconciliation problem. From this initial calculation, and based on the assumptions, the 2°C warming threshold has recently been breached, and by the same assumptions 1.5°C was likely breached in the 1990s. There are a lot of assumptions here, so I could have missed something or made an error. Below I go into some key examples that verify this initial conclusion. Then I look at how, by introducing a new assumption, it is claimed that 2°C of warming is not yet reached.

100 Months and Counting Campaign 2008

Trust, yet verify has a post We are Doomed!

This tracks through the Wayback Machine to look at the now defunct 100monthsandcounting.org campaign, sponsored by the left-wing New Economics Foundation. The archived “Technical Note” states that the 100 months ran from August 2008, making the end date November 2016. The choice of 100 months turns out to be spot-on given the actual data for CO2 levels; the central estimate of the CO2 equivalent of all GHG levels made by the IPCC in 2014 based on 2010 data (and assuming the net impact of other GHGs is unchanged); and the central estimate for Equilibrium Climate Sensitivity (ECS) used by the IPCC. That is, take 430 ppm CO2e and add the 14 ppm needed to reach the level commensurate with 2°C of warming.
Maybe that was just a fluke, or were they giving a completely misleading forecast? The 100 Months and Counting Campaign was definitely not agreeing with the UNEP Emissions GAP Report 2017 in making the claim. But were they correctly interpreting what the climate consensus was saying at the time?

The 2006 Stern Review

The “Stern Review: The Economics of Climate Change” (archived access here) was commissioned to provide benefit-cost justification for what became the Climate Change Act 2008. From the Summary of Conclusions

The costs of stabilising the climate are significant but manageable; delay would be dangerous and much more costly.

The risks of the worst impacts of climate change can be substantially reduced if greenhouse gas levels in the atmosphere can be stabilised between 450 and 550ppm CO2 equivalent (CO2e). The current level is 430ppm CO2e today, and it is rising at more than 2ppm each year. Stabilisation in this range would require emissions to be at least 25% below current levels by 2050, and perhaps much more.

Ultimately, stabilisation – at whatever level – requires that annual emissions be brought down to more than 80% below current levels. This is a major challenge, but sustained long-term action can achieve it at costs that are low in comparison to the risks of inaction. Central estimates of the annual costs of achieving stabilisation between 500 and 550ppm CO2e are around 1% of global GDP, if we start to take strong action now.

If we take assumption 1, that a doubling of CO2 levels will eventually lead to 3.0°C of warming, and a base CO2 level of 280ppm, then the Stern Review is saying that the worst impacts can be avoided if the temperature rise is constrained to 2.1 to 2.9°C, but only in the range of 2.5 to 2.9°C did the 2006 mitigation cost estimate of 1% of GDP apply. It is not difficult to see why constraining warming to 2°C or lower would not be net beneficial. With GHG levels already at 430ppm CO2e, and CO2 levels rising at over 2ppm per annum, the 444ppm (or the rounded 450ppm) commensurate with 2°C of warming would have been exceeded well before any global reductions could be achieved.
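
Applying the same equation to the Stern stabilisation range gives those figures. This is a sketch on the same assumptions (ECS = 3, 280 ppm base), not Stern’s own calculation:

```python
import math

# Equilibrium warming for the Stern Review stabilisation range of 450-550 ppm CO2e,
# assuming ECS = 3 C per doubling and a 280 ppm pre-industrial base.
LAMBDA = 3.0 / (5.35 * math.log(2))

def warming(ppm, base=280.0):
    return LAMBDA * 5.35 * math.log(ppm / base)

for level in (450, 500, 550):
    print(f"{level} ppm CO2e -> {warming(level):.1f} C")
# 450 ppm -> ~2.1 C, 500 ppm -> ~2.5 C, 550 ppm -> ~2.9 C
```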

There is a curiosity in the figures. When the Stern Review was published in 2006 estimated GHG levels were 430ppm CO2e, as against CO2 levels for 2006 of 382ppm. The IPCC AR5 states

For comparison, the CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)

In 2011, when CO2 levels, at 392ppm, averaged 10ppm higher than in 2006, estimated GHG levels were the same. This is a good example of why one should take note of uncertainty ranges.

IPCC AR4 Report Synthesis Report Table 5.1

A year before the 100 Months and Counting campaign, the IPCC produced its Fourth Assessment Synthesis Report. On page 67 (pdf) of the 2007 Synthesis Report there is Table 5.1 of emissions scenarios.

Figure 2 : Table 5.1. IPCC AR4 Synthesis Report Page 67 – Without Footnotes

I inputted the various CO2-eq concentrations into my amended version of Dana Nuccitelli’s magic equation and compared the calculated warming to that in Table 5.1

Figure 3 : Magic Equation calculations of warming compared to Table 5.1. IPCC AR4 Synthesis Report

My calculations of warming are the same as those of the IPCC to one decimal place, except for the last two. Why are there these rounding differences? From a little fiddling in Excel, it would appear to me that the IPCC derived its warming results from a value of 3°C per doubling calculated to two decimal places, whilst my version of the formula works to four decimal places.

Note the following

  • That other GHGs are translatable into CO2 equivalents. Once translated, other GHGs can be treated as if they were CO2.
  • There is no time period in this table. The 100 Months and Counting Campaign merely punched in existing numbers and made a forecast of when GHG levels would reach those commensurate with 2°C of warming.
  • There is no mention of a 1.5°C warming scenario. If constraining warming to 1.5°C did not seem credible in 2007, why should it be credible in 2014 or 2017, when CO2 levels are higher?

IPCC AR5 Report Highest Level Summary

I believe that the underlying estimates of the emissions to achieve the 1.5°C or 2°C of warming used by the UNFCCC and UNEP come from the IPCC Fifth Assessment Report (AR5), published in 2013/4. At this stage I introduce a couple of empirical assumptions from IPCC AR5.

6 Cut-off year for historical data is 2010 when CO2 levels were 390 ppm (compared to 280 ppm in pre-industrial times) and global average temperatures were about 0.8°C above pre-industrial times.

Using the magic equation above, with CO2 at 390 ppm, there is around 1.4°C of warming due from CO2 alone. Given 0.8°C of revealed warming to 2010, the residual “warming-in-progress” was 0.6°C.

The highest level of summary in AR5 is a Presentation summarizing the central findings of the Summary for Policymakers of the Synthesis Report, which in turn brings together the three Working Group Assessment Reports. This Presentation can be found at the bottom right of the IPCC AR5 Synthesis Report webpage. Slide 33 of 35 (reproduced below as Figure 4) gives the key policy point: 1000 GtCO2 of emissions from 2011 onwards will lead to 2°C of warming. This is very approximate, but concurs with the UNEP Emissions Gap Report.

Figure 4 : Slide 33 of 35 of the AR5 Synthesis Report Presentation.

Now for some calculations.

1900 GtCO2 raised CO2 levels by 110 ppm (390-280). 1 ppm = 17.3 GtCO2

1000 GtCO2 will raise CO2 levels by 60 ppm (450-390).  1 ppm = 16.7 GtCO2

Given the obvious roundings of the emissions figures, the numbers fall out quite nicely.

Last year I divided CDIAC CO2 emissions (from the Global Carbon Project) by Mauna Loa CO2 annual mean growth rates (data) to produce the following.

Figure 5 : CDIAC CO2 emissions estimates (multiplied by 3.664 to convert from carbon units to CO2 units) divided by Mauna Loa CO2 annual mean growth rates in ppm.

17GtCO2 for a 1ppm rise is about right for the last 50 years.

To raise CO2 levels from 390 to 450 ppm needs about 17 x (450-390) = 1020 GtCO2. Slide 33 is therefore a good approximation of the CO2 emissions needed to raise CO2 levels by 60 ppm.
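
The arithmetic, as a quick sketch (the rounded figures are the ones quoted above):

```python
# Rough check of the GtCO2-per-ppm ratios used above.
historical = 1900 / (390 - 280)     # GtCO2 emitted per ppm of CO2 rise to 2010
implied = 1000 / (450 - 390)        # implied by slide 33's 1000 GtCO2 budget
print(f"historical: {historical:.1f} GtCO2 per ppm")    # ~17.3
print(f"implied:    {implied:.1f} GtCO2 per ppm")       # ~16.7
print(f"~17 GtCO2/ppm x 60 ppm = {17 * 60} GtCO2")      # ~1020, close to the 1000 on the slide
```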

But there are issues

  • If ECS = 3.00, and it takes 17 GtCO2 of emissions to raise CO2 levels by 1 ppm, then it is only 918 (17 x 54) GtCO2 to achieve 2°C of warming. Alternatively, if in future 1000 GtCO2 is assumed to achieve 2°C of warming, it implies 18.5 GtCO2 to raise CO2 levels by 1 ppm, as against 17 GtCO2 in the past. It is only by using 450 ppm as commensurate with 2°C of warming that past and future stack up (see the sketch below this list).
  • If ECS = 3, from CO2 alone 1.5°C would be achieved at 396 ppm, or a further 100 GtCO2 of emissions. This CO2 level was passed in 2013 or 2014.
  • The calculation falls apart if other GHGs are included. Their combined impact is estimated as equivalent to 430 ppm of CO2 in 2011. Therefore, with all GHGs considered, 2°C of warming would be achieved with 238 GtCO2e of emissions ((444-430) x 17), and 1.5°C of warming was likely passed in the 1990s.
  • If actual warming from pre-industrial times to 2010 was 0.8°C, ECS = 3, and the rise in all GHG levels was equivalent to a rise in CO2 from 280 to 430 ppm, then the residual “warming-in-progress” (WIP) was just over 1°C. That is, the WIP exceeds the total revealed warming of well over a century. If the short-term temperature response is half or more of the value of full ECS, it would imply that even the nineteenth century emissions are yet to have their full impact on global average temperatures.
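
Here is a short sketch of the arithmetic behind these bullet points, on the stated assumptions (ECS = 3, ~17 GtCO2 per ppm, 390 ppm CO2 in 2010 and 430 ppm CO2-eq in 2011); the helper names are mine:

```python
import math

LAMBDA = 3.0 / (5.35 * math.log(2))                     # ECS = 3 C per doubling
ppm_for = lambda dt: 280 * math.exp(dt / (LAMBDA * 5.35))
GT_PER_PPM = 17                                         # GtCO2 per ppm, as above

ppm_2c = round(ppm_for(2.0))                            # ~444 ppm
ppm_1p5c = round(ppm_for(1.5))                          # ~396 ppm

print(f"CO2-only budget from 2010 to 2C:   ~{GT_PER_PPM * (ppm_2c - 390)} GtCO2")     # ~918
print(f"CO2-only budget from 2010 to 1.5C: ~{GT_PER_PPM * (ppm_1p5c - 390)} GtCO2")   # ~100
print(f"All-GHG budget from 2011 to 2C:    ~{GT_PER_PPM * (ppm_2c - 430)} GtCO2e")    # ~238
print(f"Warming commensurate with 430 ppm CO2-eq: {LAMBDA * 5.35 * math.log(430 / 280):.2f} C")
# ~1.86 C, so with 0.8 C revealed by 2010 the warming-in-progress is just over 1 C
```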

What justification is there for effectively disregarding the impact of other greenhouse gas emissions when it was not done previously?

This offset is to be found in section C – The Drivers of Climate Change – of the AR5 WG1 SPM, in particular in the breakdown, with uncertainties, in Table SPM.5. Another story is how AR5 reached the very same conclusion as AR4 WG1 SPM page 4 on the impact of negative anthropogenic forcings, but with a different methodology, hugely different estimates of aerosols, and very different uncertainty bands. Further, these historical estimates are only for the period 1951-2010, whilst the starting date for the 1.5°C or 2°C of warming is 1850.

From this a further assumption is made when considering AR5.

7 The estimated historical impact of other GHG emissions (methane, nitrous oxide…) has been effectively offset by the cooling impacts of aerosols and their precursors. It is assumed that this will carry forward into the future.

UNEP Emissions Gap Report 2014

Figure 1 above is figure 3.1 from the UNEP Emissions GAP Report 2017. The equivalent report from 2014 puts this 1000 GtCO2 of emissions in a clearer context. First a quotation with two accompanying footnotes.

As noted by the IPCC, scientists have determined that an increase in global temperature is proportional to the build-up of long-lasting greenhouse gases in the atmosphere, especially carbon dioxide. Based on this finding, they have estimated the maximum amount of carbon dioxide that could be emitted over time to the atmosphere and still stay within the 2 °C limit. This is called the carbon dioxide emissions budget because, if the world stays within this budget, it should be possible to stay within the 2 °C global warming limit. In the hypothetical case that carbon dioxide was the only human-made greenhouse gas, the IPCC estimated a total carbon dioxide budget of about 3 670 gigatonnes of carbon dioxide (Gt CO2 ) for a likely chance3 of staying within the 2 °C limit . Since emissions began rapidly growing in the late 19th century, the world has already emitted around 1 900 Gt CO2 and so has used up a large part of this budget. Moreover, human activities also result in emissions of a variety of other substances that have an impact on global warming and these substances also reduce the total available budget to about 2 900 Gt CO2 . This leaves less than about 1 000 Gt CO2 to “spend” in the future4 .

3 A likely chance denotes a greater than 66 per cent chance, as specified by the IPCC.

4 The Working Group III contribution to the IPCC AR5 reports that scenarios in its category which is consistent with limiting warming to below 2 °C have carbon dioxide budgets between 2011 and 2100 of about 630-1 180 GtCO2

The numbers do not fit unless the impact of other GHGs is ignored. As found from slide 33, 2900 GtCO2 raises atmospheric CO2 levels by 170 ppm, of which 1900 GtCO2 has been emitted already. The additional marginal impact of other historical greenhouse gases, 770 GtCO2, is ignored. If those GHG emissions were part of historical emissions, as the statement implies, then that marginal impact would be equivalent to an additional 45 ppm (770/17) on top of the 390 ppm CO2 level. That is not far off the IPCC estimated CO2-eq concentration in 2011 of 430 ppm (uncertainty range 340 to 520 ppm). But by the same measure, 3670 GtCO2e would increase CO2 levels by 216 ppm (3670/17), from 280 to 496 ppm. With ECS = 3, this would eventually lead to a temperature increase of almost 2.5°C.
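
A sketch reconciling those budget figures on the same assumptions (~17 GtCO2 per ppm and ECS = 3); the variable names and rounding are mine:

```python
import math

GT_PER_PPM = 17
LAMBDA = 3.0 / (5.35 * math.log(2))
warming = lambda ppm: LAMBDA * 5.35 * math.log(ppm / 280)

co2_only_budget = 3670      # GtCO2: CO2-only budget for a likely chance of under 2C
all_ghg_budget = 2900       # GtCO2: budget after allowing for other substances
emitted_to_date = 1900      # GtCO2 already emitted

ppm_from_2900 = 280 + all_ghg_budget / GT_PER_PPM
ppm_from_3670 = 280 + co2_only_budget / GT_PER_PPM
print(f"2900 GtCO2 => ~{ppm_from_2900:.0f} ppm and {warming(ppm_from_2900):.1f} C")   # ~450 ppm, ~2.1 C
print(f"other-GHG margin: {co2_only_budget - all_ghg_budget} GtCO2 "
      f"=> ~{(co2_only_budget - all_ghg_budget) / GT_PER_PPM:.0f} ppm")                # ~45 ppm
print(f"3670 GtCO2 => ~{ppm_from_3670:.0f} ppm and {warming(ppm_from_3670):.1f} C")    # ~496 ppm, ~2.5 C
```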

The equivalent figure from the 2014 report is ES.1, reproduced below as Figure 6.

Figure 6 : From the UNEP Emissions Gap Report 2014 showing two emissions pathways to constrain warming to 2°C by 2100.

Note that this graphic goes through to 2100; only uses the CO2 emissions; does not have quantities; and only looks at constraining temperatures to 2°C.  To achieve the target requires a period of negative emissions at the end of the century.

A new assumption is thus required to achieve emissions targets.

8 Achieving the 1.5°C or 2°C warming targets likely requires many years of net negative global emissions at the end of the century.

A Lower Level Perspective from AR5

A simple pie chart does not seem to make sense. Maybe my conclusions are contradicted by the more detailed scenarios? The next level of detail is to be found in table SPM.1 on page 22 of the AR5 Synthesis Report – Summary for Policymakers.

Figure 7 : Table SPM.1 on Page 22 of AR5 Synthesis Report SPM, without notes. Also found as Table 3.1 on Page 83 of AR5 Synthesis Report 

The comment for <430 ppm (the level of 2010) is “Only a limited number of individual model studies have explored levels below 430 ppm CO2-eq.” Footnote j reads

In these scenarios, global CO2-eq emissions in 2050 are between 70 to 95% below 2010 emissions, and they are between 110 to 120% below 2010 emissions in 2100.

That is, net global emissions are negative in 2100. This is not something mentioned in the Paris Agreement, which only has pledges through to 2030. It is, however, consistent with the UNEP Emissions Gap Report 2014 Table ES.1. The statement does not refer to a particular level below 430 ppm CO2-eq, which equates to 1.86°C of warming. So how is 1.5°C of warming not impossible without massive negative emissions? In over 600 words of notes there is no indication. For that you need to go to the footnotes of the far more detailed Table 6.3 in AR5 WG3 Chapter 6 (Assessing Transformation Pathways – pdf), page 431. Footnote 7 (bold mine)

Temperature change is reported for the year 2100, which is not directly comparable to the equilibrium warming reported in WGIII AR4 (see Table 3.5; see also Section 6.3.2). For the 2100 temperature estimates, the transient climate response (TCR) is the most relevant system property.  The assumed 90% range of the TCR for MAGICC is 1.2–2.6 °C (median 1.8 °C). This compares to the 90% range of TCR between 1.2–2.4 °C for CMIP5 (WGI Section 9.7) and an assessed likely range of 1–2.5 °C from multiple lines of evidence reported in the WGI AR5 (Box 12.2 in Section 12.5).

The major reason that 1.5°C of warming is not impossible (but still more unlikely than likely), despite CO2-equivalent levels that should produce 2°C+ of warming being around for decades, is that the full warming impact takes so long to filter through. Further, Table 6.3 puts peak CO2-eq levels for the 1.5-1.7°C scenarios at 465-530 ppm, or eventual warming of 2.2 to 2.8°C. The climate WIP is the difference. But in 2018 the WIP might be larger than all the revealed warming since 1870, and certainly larger than that since the mid-1970s.
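
The same equation shows where that gap comes from, again as a sketch on the ECS = 3 assumption used throughout this post:

```python
import math

# Eventual (equilibrium) warming for the peak CO2-eq levels of the 1.5-1.7 C
# scenarios in Table 6.3, assuming ECS = 3 C and a 280 ppm base.
LAMBDA = 3.0 / (5.35 * math.log(2))
warming = lambda ppm: LAMBDA * 5.35 * math.log(ppm / 280)

for peak in (465, 530):
    print(f"peak {peak} ppm CO2-eq -> eventual warming {warming(peak):.1f} C")
# ~2.2 C and ~2.8 C, against scenario labels of 1.5-1.7 C of warming in 2100;
# the difference is the warming still in the pipeline (WIP).
```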

Within AR5, when talking about constraining warming to 1.5°C or 2.0°C, it is only the warming estimated to be revealed in 2100 that counts. There is no indication of how much warming in progress (WIP) there is in 2100 under the various scenarios, so I cannot reconcile back the figures. However, it would appear that the 1.5°C figure relies upon the impact of GHGs on warming failing to come through for a period of over 100 years, as (even netting off other GHGs against the negative impact of aerosols) by 2100 CO2 levels would have been above 400 ppm for over 85 years, and for most of those years significantly above that level.

Conclusions

The original aim of this post was to reconcile the emissions sufficient to prevent 1.5°C or 2°C of warming being exceeded through some calculations based on a series of restrictive assumptions.

  • ECS = 3.0°C, despite the IPCC no longer giving a best estimate across the different studies. The range is 1.5°C to 4.5°C.
  • All the temperature rise since the 1800s is assumed to be due to rises in GHGs. There is evidence that this might not be the case.
  • Other GHGs are netted off against aerosols and precursors. Given that “CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)” when CO2 levels were around 390 ppm, this assumption is far from robust.
  • Achieving full equilibrium takes many decades. So long, in fact, that the warming-in-progress (WIP) may currently exceed all the revealed warming of over 150 years, even on the assumption that all of that revealed historical warming is due to rises in GHG levels.

Even with these assumptions, keeping warming within 1.5°C or 2°C seems to require two further assumptions that were not recognized a few years ago. First, assume net negative global emissions for many years at the end of the century. Second, talk about projected warming in 2100 rather than the warming that results from achieving full ECS.

The whole exercise appears to rest upon a pile of assumptions. Amending the assumptions one way means admitting that 1.5°C or 2°C of warming is already in the pipeline; amending them the other way means admitting climate sensitivity is much lower. Yet there appears to be such a large range of empirical assumptions to choose from that there could be a very large number of scenarios just as valid as the ones used in the UNEP Emissions Gap Report 2017.

Kevin Marshall

More Coal-Fired Power Stations in Asia

A lovely feature of the GWPF site is its extracts of articles related to all aspects of climate and related energy policies. Yesterday the GWPF extracted from an opinion piece in the Hong Kong-based South China Morning Post A new coal war frontier emerges as China and Japan compete for energy projects in Southeast Asia.
The GWPF’s summary:-

Southeast Asia’s appetite for coal has spurred a new geopolitical rivalry between China and Japan as the two countries race to provide high-efficiency, low-emission technology. More than 1,600 coal plants are scheduled to be built by Chinese corporations in over 62 countries. It will make China the world’s primary provider of high-efficiency, low-emission technology.

A summary point in the article is not entirely accurate. (Italics mine)

Because policymakers still regard coal as more affordable than renewables, Southeast Asia’s industrialisation continues to consume large amounts of it. To lift 630 million people out of poverty, advanced coal technologies are considered vital for the region’s continued development while allowing for a reduction in carbon emissions.

Replacing a less efficient coal-fired power station with one of the latest technology will reduce carbon (i.e. CO2) emissions per unit of electricity produced. In China, this replacement process may deliver efficiency savings that outstrip the growth in power supply from fossil fuels. But in the rest of Asia the new coal-fired power stations will mostly be additional capacity in the coming decades, so they will lead to an increase in CO2 emissions. It is this additional capacity that will be primarily responsible for driving the economic growth that will lift the poor out of extreme poverty.

The newer technologies are important for other types of emissions, that is, the particle emissions that have caused high levels of choking pollution and smog in many cities of China and India. By using the new technologies, other countries can avoid the worst excesses of this pollution, whilst still using a cheap fuel available from many different sources of supply. The thrust in China will likely be to replace the high-pollution power stations with new technologies, or to adapt them to reduce emissions and increase efficiencies. Politically, it is a different way of raising living standards and quality of life than increasing real disposable income per capita.

Kevin Marshall