How misleading economic assumptions can show Brexit making people worse off

Last week BBC News ran the headline "Brexit deal means '£70bn hit to UK by 2029'". ITV News had a similar report. The source, NIESR, summarizes its findings as follows:-

Fig 1 : NIESR headline findings 

£70bn appears to be a lot of money, but this is a 10 year forecast on an economy that currently has a GDP of around £2,000bn. The difference is about one third of one percent a year. The "no deal" scenario is just £40bn worse than the current deal on offer, hardly the apocalyptic scenario that supposedly cannot be countenanced. Put another way, if underlying economic growth is 2%, on the NIESR figures the economy in ten years will be between 16% and 22% larger. In economic forecasting, the longer the time frame, the more significant the underlying assumptions. The reports are based on an NIESR open-access paper, Prospects for the UK Economy – Arno Hantzsche, Garry Young, first published 29 Oct 2019. The key basis is contained in Figures 1 & 2, reproduced below.
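As a rough check on these orders of magnitude, the arithmetic can be reproduced in a few lines of Python. The £2,000bn GDP, the 2% underlying growth rate and the £110bn no-deal figure (£70bn plus the extra £40bn) are the approximations used above, not NIESR's own published workings.

```python
# Rough arithmetic behind the figures above (illustrative approximations only).
gdp = 2000.0         # current UK GDP, GBP bn (approximate)
deal_hit = 70.0      # estimated hit from the deal over the forecast period, GBP bn
no_deal_hit = 110.0  # the deal hit plus the further GBP 40bn for "no deal"
years = 10
growth = 0.02        # assumed underlying annual growth rate

def size_after_ten_years(hit_bn):
    """Economy size after ten years relative to today, spreading the hit evenly per year."""
    annual_drag = (hit_bn / gdp) / years
    return (1 + growth - annual_drag) ** years - 1

print(f"Annual drag from the deal: {(deal_hit / gdp) / years:.2%} of GDP")
print(f"No Brexit: economy {(1 + growth) ** years - 1:.1%} larger after ten years")
print(f"Deal     : economy {size_after_ten_years(deal_hit):.1%} larger")
print(f"No deal  : economy {size_after_ten_years(no_deal_hit):.1%} larger")
```

These come out at roughly 15.5%, 18% and 22%, which is broadly the 16% to 22% range quoted above; the small differences come down to how the hit is spread across the decade.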

Fig 2 : Figures 1 & 2 from the NIESR paper "Prospects for the UK Economy"

The two key figures purport to show that Brexit has made a difference. Business investment growth has apparently ground to a halt since mid-2016 and economic growth has slowed. What they do not show is a decline in business investment, nor a halt in economic growth.

After these figures the report states:-

The reason that investment has been affected so much by the Brexit vote is that businesses fear that trade with the EU will be sufficiently costly in the future – especially with a no-deal Brexit – that new investment will not pay off. Greater clarity about the future relationship, especially removing the no-deal threat, might encourage some of that postponed investment to take place. But that would depend on the type of deal that is ultimately negotiated. A deal that preserved the current close trading relationship between the UK and EU could result in an upsurge in investment. In contrast, a deal that would make it certain that there would be more trade barriers between the UK and EU in the future would similarly remove the risk of no deal but at the same time eliminate the possibility of closer economic ties, offsetting any boost to economic activity.

This statement asserts, without evidence, that the cause of the change in the investment trend is singular: business fears over Brexit. There is no corroborating evidence to back this assumption, such as surveys of business confidence or a decline in the stock markets. Nor is there a comparison with countries other than the UK, to check whether any apparent shifts are due to other causes, such as the normal business cycle. Yet it is this singular assumed cause of the apparent divergence from trend that is used as the basis of forecasting for different policy scenarios a decade into the future.

The rest of this article will concentrate on the alternative evidence, to show that any alleged changes in economic trends are either taken out of context or did not occur as a result of Brexit. For this I use World Bank data over a twenty year period, comparing the UK to the Euro area. If voting to leave the EU has had a significant impact on economic trends, the UK should have diverged from the Euro area after mid-2016.

Net Foreign Direct Investment

The World Bank has no data for the narrower measure of business investment. The nearest alternative is net foreign direct investment.


Fig 3 : Data for net foreign direct investment from 1999 to 2018 for the Euro area and the UK.

UK net foreign direct investment was strongly negative from 2014 to 2016, becoming around zero in 2017 and 2018. The Euro area shows the opposite trend. Politically, in 2014 UKIP won the UK elections to the European Parliament, followed in 2015 by a promise of a referendum on the EU. Maybe the expectation of Britain voting to leave the EU could have had an impact? More likely this net outflow is connected to the decline in the value of the pound. From xe.com

Fig 4 : 10 year GBP to USD exchange rates. Source xe.com

The three years of net negative FDI were years of steep declines in the value of the pound. In the years before and after, when exchange rates were more stable, net FDI was near zero.

GDP growth rates %

The NIESR chose to show the value of quarterly output to illustrate a purported decline in the rate of economic growth after the EU referendum. The trends are more visible in the GDP growth rates.

Fig 5 : Annual GDP growth rates for the Euro area and the UK from 1999 to 2018. 

The Euro area and the UK suffered an economic crash of similar magnitude in 2008 and 2009. From 2010 to 2018 the UK enjoyed unbroken economic growth, peaking in 2014. Growth rates were declining well before the EU referendum. The Euro area was again in recession in 2012 and 2013, which more than offsets its stronger growth relative to the UK from 2016 to 2018. In the years 2010 to 2018 Euro area GDP growth averaged 1.4%, compared with 1.5% for the years 1999 to 2009. In the UK it was 1.9% in both periods. The NIESR is essentially claiming that leaving the EU without a deal will reduce UK growth to levels comparable with most of the EU.

Unemployment – total and youth

Another metric is unemployment rates. If voting to leave has impacted business investment and economic growth, one would expect a lagged impact on unemployment.

Fig 6 : Unemployment rates (total and youth) for the Euro area and the UK from 1999 to 2019. The current year is to September.

Unemployment in the Euro area has been consistently higher than in the UK. The second recession of 2012 and 2013 in the Euro area resulted in unemployment peaking at least two years later than in the UK. But in both places there have been over five years of falling unemployment. Brexit seems to have had zero impact on the trend in the UK, where unemployment is now the lowest since the early 1970s.

The average rates of total unemployment for the period 1999-2018 are 8.2% in the Euro area and 6.0% in the UK. For youth unemployment they are 20.9% and 14.6% respectively. 

The reason for the higher rates of unemployment in EU countries over the decades is largely down to greater regulatory rigidities than in the UK.

Concluding comments

NIESR's assumption that the slowdowns in business investment and economic growth are solely due to the uncertainties created by Brexit is not supported by the wider evidence. Without support for that claim, the ten year forecasts of slower economic growth due to Brexit fail entirely. Instead Britain should be moving away from EU stagnation with its high youth unemployment, charting a better course that our European neighbours will want to follow.

Kevin Marshall

Cummings, Brexit and von Neumann

Over at Cliscep, Geoff Chambers has been reading some blog articles by Dominic Cummings, now senior advisor to PM Boris Johnson, and formerly the key figure behind the successful Vote Leave campaign in the 2016 EU Referendum. In a 2014 article on game theory Cummings demonstrates he has actually read the von Neumann articles and the seminal 1944 book "Theory of Games and Economic Behavior" that he quotes. I am sure that he has drawn on secondary sources as well.
A key quote in the Cummings article is from Von Neumann’s 1928 paper.

‘Chess is not a game. Chess is a well-defined computation. You may not be able to work out the answers, but in theory there must be a solution, a right procedure in any position. Now, real games are not like that at all. Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.’

Cummings states that the paper

introduced the concept of the minimax: choose a strategy that minimises the possible maximum loss.

Neoclassical economics starts from the assumption of utility maximisation, based on everyone being in the same position and having the same optimal preferences. In relationships they are usually just suppliers and demanders, with both sides gaining. Game theory posits that there may be trade-offs in relationships, with possibilities of some parties gaining at the expense of others. What von Neumann (and also Cummings) does not fully work out is the consequence of people bluffing. As they do not reveal their preferences, it is not possible to quantify the utility they receive. As such, mathematics is only of use in working through hypothetical situations, not for empirically working out optimal strategies in most real world situations. But the discipline imposed by laying out the problem in game theory is to recognize that opponents in the game both have different preferences and may be bluffing.

In my view one has to consider the situation of the various groups in the Brexit “game”.

The EU is a major player whose gains or losses from Brexit need to be considered. More important than the economic aspects (the loss of 15% of EU GDP, a huge net contributor to the EU budget and a growing economy when the EU as a whole is heading towards recession) is the loss of face at having to compromise for a deal, or the political repercussions of an independent Britain being at least as successful as a member.

By coming out as the major national party of Remain, the Liberal Democrats have doubled their popular support. However, in so doing they have taken an extreme position, which belies their traditional occupation of the centre ground in British politics. Further, in a hung Parliament it is unlikely that they would go into coalition with either the Conservatives or Labour. The nationalist Plaid Cymru and SNP have similar positions. In a hung Parliament the SNP might go into coalition with Labour, but only on the condition of another Scottish Independence Referendum.

The Labour Party have a problem. Comparing Chris Hanretty's estimates of the EU Referendum vote split for the 574 parliamentary constituencies in England and Wales with the 2015 General Election results, Labour seats are more deeply divided than the country as a whole. Whilst Labour held just 40% of the seats, they had just over half of the 231 seats with a 60% or more Leave vote, and almost two-thirds of the 54 seats with a 60% or more Remain vote. Adding in the constituencies where Labour came second by a margin of less than 12% of the vote (the seats needed to win a Parliamentary majority), I derived the following chart.

Tactically, Labour would have to move towards a Leave position, but most of its MPs were very pro-Remain and a clear majority of Labour voters likely voted Remain. Even in some Labour constituencies where the constituency as a whole voted Leave, a majority of Labour voters may have voted Remain. Yet leading members of the current Labour leadership, and a disproportionate number of the wider leadership, are in very pro-Remain, London constituencies.

The Conservative-held seats were less polarised in the spread of opinion. Whilst less than 30% of their 330 England and Wales seats voted more than 60% Leave, the vast majority voted Leave and very few were virulently pro-Remain.

But what does this tell us about a possible Dominic Cummings strategy in the past few weeks?

A major objective since Boris Johnson became Prime Minister, and Cummings was appointed less than two months ago, has been the drive to leave the EU on 31st October. The strategy has been to challenge the EU to compromise on the Withdrawal Agreement to obtain a deal acceptable to the UK Parliament. Hilary Benn's EU Surrender Act was passed to hamper the negotiating position of the Prime Minister, thus shielding the EU from either having to compromise or being seen by the rest of the world as intransigent against reasonable and friendly approaches. Another aim has been to force other parties, particularly Labour, to clarify where they stand. As a result, Labour seems to have moved to a clear Remain policy. In forcing the Brexit policy the Government have lost their Parliamentary majority. However, they have caused Jeremy Corbyn to conduct a complete about-turn on a General Election: having called for an immediate election, he has twice turned down the opportunity to hold one.

Returning to the application of game theory to the current Brexit situation, I believe there are a number of possible options.

  1. Revoke Article 50 and remain in the EU. The Lib Dem, Green, SNP and Plaid Cymru position.
  2. Labour's current option of negotiating a Withdrawal Agreement to its liking, then holding a second referendum on leaving with that Withdrawal Agreement or remaining in the EU. As I understand the current situation, the official Labour position would be to Remain, but members of a Labour Cabinet would be allowed a free vote. That is, Labour would respect the EU Referendum result only very superficially, whilst not permitting a break away from the umbrella of EU institutions and diktats.
  3. To leave on a Withdrawal Agreement negotiated by PM Boris Johnson and voted through Parliament.
  4. To leave the EU without a deal.
  5. To extend Article 50 indefinitely, until public opinion gets so fed up that it can be revoked.

Key to this is understanding the perspectives of all sides. For Labour (and many others in Parliament) the biggest expressed danger is a no-deal Brexit. This I believe is either a bluff on their part, or a failure to get a proper sense of proportion. This is illustrated by reading the worst-case no-deal Yellowhammer document (released today) as a prospective reality rather than as a "brainstorm" working paper used as a basis for contingency planning. By imagining such situations, however unrealistic, action plans can be created to prevent the worst impacts should they arise. Positing maximum losses allows the impacts to be minimized. Governments usually keep such papers confidential precisely because political opponents and journalists evaluate them as credible scenarios which will not be mitigated against.

Labour's biggest fear – and that of many others who have blocked Brexit – is of confronting the voters. This is especially due to having told Leave voters they were stupid for voting the way they did, or were taken in by lies. Although the country is evenly split between Leave and Remain supporting parties, the more divided nature of the Remain side means that the Conservatives will likely win a majority on around a third of the vote. Inputting yesterday's YouGov/Times opinion poll results into the Electoral Calculus user-defined poll gives the Conservatives a majority of 64 with just 32% of the vote.

I think when regional differences are taken into account the picture is slightly different. The SNP will likely end up with 50 seats, whilst Labour could lose seats to the Brexit Party in the North and maybe to the Lib Dems. If the Conservatives do not win a majority, the fifth scenario is most likely to play out.

In relation to Cummings and game theory, I would suggest that the game is still very much in play, with new moves to be made and further strategies still to unfold. It is Cummings and other Government advisors who will be driving the game forward, with the Remainers being the blockers.

Kevin Marshall

Updated 29/09/19

How climate damage costings from EPA scientists are misleading, and how to correct them

The Los Angeles Times earlier this month had an article

From ruined bridges to dirty air, EPA scientists price out the cost of climate change. (Hattip Climate Etc.)

By the end of the century, the manifold consequences of unchecked climate change will cost the U.S. hundreds of billions of dollars per year, according to a new study by scientists at the Environmental Protection Agency.
…..
However, they also found that cutting emissions of carbon dioxide and other greenhouse gases, and proactively adapting to a warming world, would prevent a lot of the damage, reducing the annual economic toll in some sectors by more than half.

The article is based on the paper
Climate damages and adaptation potential across diverse sectors of the United States – Jeremy Martinich & Allison Crimmins – Nature Climate Change 2019

The main problem is with the cost alternatives, contained within Figure 2 of the article.

Annual economic damages from climate change under two RCP scenarios. RCP8.5 has no mitigation and RCP4.5 massive mitigation. Source Martinich & Crimmins 2019 Figure 2

I have a lot of issues with the cost estimates. But the fundamental issue is the costs that are missing from the RCP4.5 scenario, without which a proper analysis cannot be made.

The LA Times puts forward the 2006 Stern Review – The Economics of Climate Change – as an earlier attempt at calculating "the costs of global warming and the benefits of curtailing emissions".
The major policy headline from the Stern Review (4.7MB pdf)

Using the results from formal economic models, the Review estimates that if we don’t act, the overall costs and risks of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever. If a wider range of risks and impacts is taken into account, the estimates of damage could rise to 20% of GDP or more. In contrast, the costs of action – reducing greenhouse gas emissions to avoid the worst impacts of climate change – can be limited to around 1% of global GDP each year.

The Stern Review implies a straight alternative: either the costs of unmitigated climate change OR the costs of mitigation policy. The RCP4.5 scenario represents the residual climate damage costs after costly policies have been successfully applied. The Stern Review quotation only looked at the policy costs, not the residual climate damage costs after policy has been applied, whereas Martinich & Crimmins 2019 only look at the residual climate damage costs and not the policy costs.

The costs of any mitigation policy to combat climate change must include both the policy costs and the damage costs. But this is not the most fundamental problem.

The fundamental flaw in climate mitigation policy justifications

The estimated damage costs of climate change result from global emissions of greenhouse gases, which raise the average levels of atmospheric greenhouse gases which in turn raise global average temperatures. This rise in global average temperatures is what is supposed to create the damage costs.
By implication, the success of mitigation policies in reducing climate damage costs is measured in relation to the reduction in global emissions. But 24 annual COP meetings have failed to secure even vague policy intentions that would collectively stabilize emissions at current levels. From the UNEP Emissions Gap Report 2018, Figure ES.3 shows the gap between intentions and the emissions required to constrain global warming to 1.5°C and 2.0°C.

Current mitigation policies will, in aggregate, achieve very little. If a country were to impose additional policies, the marginal impact in reducing global emissions would be very small. By implication, any climate mitigation policy costs imposed by a country, or a sub-division of that country, will only result in very minor reductions in the future economic damage costs to that country. This is true even if the climate mitigation policies are the most economically efficient, getting the biggest reductions for a given amount of expenditure. As climate mitigation is net costly, current climate mitigation policies will necessarily impose burdens on the current generation, whilst doing far less to reduce the climate impacts on future generations in the policy area. Conversely, the elimination of costly policies will be net beneficial to that country. Given the global demands for climate mitigation, the politically best policy is to do as little as possible, whilst appearing to be as virtuous as possible.

Is there a way forward for climate policy?

A basic principle in considering climate mitigation is derived from Reinhold Niebuhr's Serenity Prayer

God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.

An amended prayer (or mantra) for policy-makers would be to change things for the better. I would not require perfect knowledge of the future, but a reasonable expectation that a policy will change the world for the better. If a policy is costly, the benefits should exceed the costs. Whether policy-makers are aiming to serve their own group or humanity as a whole, they should have the serenity to accept that there are costs and harms of policy. In this light consider a quote by Nobel Laureate Prof William Nordhaus from an article in the American Economic Review last year.

The reality is that most countries are on a business-as-usual (BAU) trajectory of minimal policies to reduce their emissions; they are taking noncooperative policies that are in their national interest, but far from ones which would represent a global cooperative policy.

Nordhaus agrees with the UNEP Emissions Gap Report. Based on the evidence of COP24 in Katowice, it is highly unlikely that most countries will do a sudden about-face, implementing policies that are clearly against their national interest. If this is incorrect, maybe someone can start by demonstrating to countries that rely on fossil fuel production for a major part of their national income – such as Russia, Saudi Arabia, Iran and other Middle Eastern countries – how leaving fossil fuels in the ground and embracing renewables is in their national interests. In the meantime, how many politicians will publicly accept that it is not in their power to reduce global emissions, yet continue implementing policies whose success requires that they be part of a collective effort that will reduce global emissions?

If climate change is going to cause future damages what are the other options?

Martinich & Crimmins 2019 have done some of the work in estimating the future costs of climate change for the United States. Insofar as these are accurate forecasts, actions can be taken to reduce those future risks. But those future costs are contingent on a whole series of assumptions. Most crucially, the large magnitude of the damage costs is usually contingent on "dumb economic actor" assumptions. That is, people have zero behavioral response to changing conditions over many decades. Two examples I looked at last year illustrate these dumb economic actor assumptions.

A Government report last summer claimed that unmitigated climate change would result in 7,000 excess heat deaths in the UK by the 2050s. The amount of warming involved was small. The underlying report was based on the coldest region of England and Wales only experiencing average summer temperatures in the 2050s on a par with those of London (the warmest region) today. Most of the excess deaths would be in the over-75s in hospitals and care homes. The "dumb actors" in this case are the health professionals caring for patients in an extreme heatwave in the 2050s in exactly the same way as they do today, even though temperatures would be slightly higher. Nobody would think to adapt practices by learning from places with hotter summers than the UK at present. That is, from the vast majority of countries in the world.

Last year a paper in Nature Plants went by the title "Decreases in global beer supply due to extreme drought and heat". I noted the paper made a whole series of dubious assumptions, including two "dumb actor" assumptions, to arrive at the conclusion that beer prices in some places could double due to global warming. One was that although in the agricultural models barley yields would shrink globally by 16% by 2100 compared to today (contingent on a rise in global average temperatures of over 3.0°C), in Montana and North Dakota yields could double. The lucky farmers in these areas would not try to increase output, nor would farmers faced with shrinking yields reduce output. Another was that large price discrepancies in a bottle of beer would open up over the next 80 years between adjacent countries. This includes between Britain and Ireland, despite most of the beer sold being produced by large brewing companies, often in plants in third countries. No one would have the wit to buy a few thousand bottles of beer in Britain and re-sell them at a huge profit in higher-priced Ireland.

If the prospective climate damage costs in Martinich & Crimmins 2019 are based on similar "dumb actor" assumptions, then any costly adaptation policies derived from the report might be largely unnecessary. People on the ground will have more effective, localized and efficient adaptation strategies. Generalized regulations and investments based on the models will fail on a cost-benefit basis.

Concluding comments

Martinich & Crimmins 2019 look at US climate damage costs under two scenarios, one with little or no climate mitigation policy, the other with considerable and successful climate mitigation. In the climate mitigation scenario they fail to add in the costs of the climate mitigation policies. More importantly, actual climate mitigation policies have only been enacted by a small minority of countries, so costs expended on mitigation will not be matched by significant reductions in future climate costs. Whilst in reality any climate mitigation policy is likely to lead to worse outcomes than doing nothing at all, the paper implies the opposite.
Further, the assumptions behind Martinich & Crimmins 2019 need to be carefully checked. If they include "dumb economic actor" assumptions then on this ground alone the long-term economic damage costs might be grossly over-estimated. There is a real risk that adaptation policies based on these climate damage projections will lead to worse outcomes than doing nothing.
Overall, if policy-makers want to make a positive difference to the world in combating climate change, they should acquire the wisdom to distinguish the areas where they can do net good from those where they can only do net harm. In the current environment, that will take an extreme level of courage. However, requiring such justifications is far less onerous than the rigorous testing and approval process that new medical treatments need to go through before being allowed into general circulation.

Kevin Marshall

Nobel Laureate William Nordhaus demonstrates that pursuing climate mitigation will make a nation worse off

Summary

Nobel Laureate Professor William Nordhaus shows that the optimal climate mitigation policy involves far less mitigation than the UNFCCC proposes: constraining warming by 2100 to 3.5°C instead of 2°C or less. But this optimal policy is based on a series of assumptions, including that policy is optimal and near universally applied. In the current situation, with most countries having no effective mitigation policies, climate mitigation policies within a country will likely make that country worse off, even though it would be better off if the same policies were near universally applied. Countries applying costly climate mitigation policies are making their people worse off.

Context

Last week Bjorn Lomborg tweeted a chart derived from Nordhaus's August 2018 paper in the American Economic Journal: Economic Policy.

The paper citation is

Nordhaus, William. 2018. "Projections and Uncertainties about Climate Change in an Era of Minimal Climate Policies." American Economic Journal: Economic Policy, 10 (3): 333-60.

The chart shows that the optimal climate mitigation policy, based upon minimization of the combined (a) projected costs of climate mitigation policy and (b) residual net costs from human-caused climate change, is much closer to the no-policy outcome of 4.1°C than to restraining warming to 2.5°C. By the assumptions of Nordhaus's model, greater constraint of warming can only be achieved through much greater policy costs. The abstract concludes

The study confirms past estimates of likely rapid climate change over the next century if major climate-change policies are not taken. It suggests that it is unlikely that nations can achieve the 2°C target of international agreements, even if ambitious policies are introduced in the near term. The required carbon price needed to achieve current targets has risen over time as policies have been delayed.

A statement whose implications are ignored

This study is based on mainstream projections of greenhouse gas emissions and the resultant warming. Prof Nordhaus is in the climate mainstream, not a climate agnostic like myself. Given this, I find the opening statement interesting. (My bold)

Climate change remains the central environmental issue of today. While the Paris Agreement on climate change of 2015 (UN 2015) has been ratified, it is limited to voluntary emissions reductions for major countries, and the United States has withdrawn and indeed is moving backward. No binding agreement for emissions reductions is currently in place following the expiration of the Kyoto Protocol in 2012. Countries have agreed on a target temperature limit of 2°C, but this is far removed from actual policies, and is probably infeasible, as will be seen below.
The reality is that most countries are on a business-as-usual (BAU) trajectory of minimal policies to reduce their emissions; they are taking noncooperative policies that are in their national interest, but far from ones which would represent a global cooperative policy.

Although there is a paper agreement to constrain emissions commensurate with 2°C of warming, most countries are doing nothing – or next to nothing – to control their emissions. The real world situation is completely different to assumptions made in the model. The implications of this are skirted over by Nordhaus, but will be explored below.

The major results at the beginning of the paper are
  • The estimate of the SCC has been revised upward by about 50 percent since the last full version in 2013.
  • The international target for climate change with a limit of 2°C appears to be infeasible with reasonably accessible technologies even with very ambitious abatement strategies.
  • A target of 2.5°C is technically feasible but would require extreme and virtually universal global policy measures in the near future.

SCC is the social cost of carbon. The conclusions about policy are not obtained by understating the projected costs of climate change. Yet the aim of limiting warming to 2°C appears infeasible. By implication, limiting warming further, such as to 1.5°C, should not be considered by rational policy-makers. Even a target of 2.5°C requires special conditions to be fulfilled, and is still less optimal than doing nothing. The conclusion from the paper, without going any further, is that achieving the aims of the Paris Climate Agreement will make the world a worse place than doing nothing: the combined costs of policy and any residual costs of climate change will be much greater than the projected costs of climate change.
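To make the shape of that trade-off concrete, here is a toy calculation. This is emphatically not Nordhaus's DICE model: the damage and abatement cost functions, and every parameter in them, are invented purely to mimic the structure described above (roughly quadratic damages, and highly convex abatement costs that rise steeply as the target is pushed below the no-policy outcome of 4.1°C).

```python
# Toy illustration of the policy-cost vs damage-cost trade-off -- NOT the DICE model.
# All functions and parameters are invented to mimic the shape of the argument:
# damages rise roughly with the square of warming, while abatement costs rise very
# steeply as the warming target is pushed down from the no-policy outcome.

def damages(t):
    """Residual climate damages as % of output (illustrative parameters)."""
    return 0.15 * t ** 2

def abatement_cost(t, t_bau=4.1):
    """Policy cost of holding warming to t degrees, as % of output (illustrative)."""
    return 0.0 if t >= t_bau else (t_bau - t) ** 3

targets = [1.5 + 0.1 * i for i in range(27)]   # warming targets from 1.5C to 4.1C
optimum = min(targets, key=lambda t: damages(t) + abatement_cost(t))

for t in (1.5, 2.0, 2.5, 3.0, 3.5, 4.1):
    total = damages(t) + abatement_cost(t)
    print(f"{t:.1f}C  damages {damages(t):5.2f}  policy {abatement_cost(t):6.2f}  total {total:6.2f}")
print(f"Lowest combined cost at around {optimum:.1f}C")
```

With these made-up numbers the combined cost is lowest at about 3.5°C, while the 2°C and 2.5°C targets cost considerably more in total than simply accepting the unmitigated damages, which is the shape of the result described above.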

Some assumptions

The outputs of a model are achieved by making a number of assumptions. When evaluating whether the model results are applicable to real-world mitigation policy, consideration needs to be given to whether those assumptions hold true, and to the impact on policy if they are violated. I have picked out some of the assumptions. The ones that are a direct or near-direct quote are in italics.

  1. Mitigation policies are optimal.
  2. Mitigation policies are almost universally applied in the near future.
  3. The abatement-cost function is highly convex, reflecting the sharp diminishing returns to reducing emissions.
  4. For the DICE model it is assumed that the rate of decarbonization going forward is −1.5 percent per year.
  5. The existence of a “backstop technology,” which is a technology that produces energy services with zero greenhouse gas (GHG) emissions.
  6. Assumed that there are no “negative emissions” technologies initially, but that negative emissions are available after 2150.
  7. Assumes that damages can be reasonably well approximated by a quadratic function of temperature change.
  8. Equilibrium climate sensitivity (ECS) is a mean warming of 3.1°C for an equilibrium CO2 doubling.

This list is far from exhaustive. For instance, it does not include assumptions about the discount rate, economic growth or emissions growth. However, the case against current climate mitigation policies, or proposed policies, can be made by consideration of the first four.

Implications of assumptions being violated

I am using a deliberately strong term for the assumptions not holding.

Clearly a policy is not optimal if it does not work, or even spends money in ways that increase emissions. More subtle is the use of sub-optimal policies. For instance, raising the cost of electricity is less regressive if the poor are compensated. As a result the emissions reductions are smaller, and the cost per tonne of CO2 mitigated rises. Or nuclear power is not favoured, so is replaced by a more expensive system of wind turbines and backup energy storage. These might be trivial issues if, in general, policy were focussed on the optimal policy of a universal carbon tax. No country is even close. Attempts to impose carbon taxes in France and Australia have proved deeply unpopular.

Given the current state of affairs described by Nordhaus in the introduction, the most violated assumption is that mitigation policy is near universally applied. Most countries have no effective climate mitigation policies, and very few have policies in place that are likely to result in anywhere near the huge global emission cuts required to achieve the 2°C warming limit. (The most recent estimate from the UNEP Emissions Gap Report 2018 is that global emissions need to be 25% lower in 2030 than in 2017.) Thus globally the costs of climate change will be close to the unmitigated 3% of GDP, with global policy costs being a small fraction of 1% of GDP. But a country that spends 1% of GDP on policy – even if that is optimal policy – will only see a minuscule reduction in its expected climate costs. Even the USA, with about one seventh of global emissions, efficiently spending 1% of output would on Nordhaus's assumptions expect future climate costs to fall by maybe 0.1% of output. The ratio of policy cost to avoided climate cost for a country acting on its own is quite different from that for the entire world acting collectively on similar policies. Assumption four, of a 1.5 percent per year reduction in global emissions, illustrates the point in a slightly different way. If the USA started cutting its emissions by an additional 1.5% a year (they are already falling without policy), global emissions would likely keep on increasing.
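The unilateral-action arithmetic can be sketched with round numbers. Everything here is illustrative: the 1% of GDP policy cost, the 1% of GDP of global damages avoided and the one-seventh emissions share are the rough figures used above, together with the simplifying assumption that the damages a policy avoids scale with the share of global emissions it actually removes.

```python
# Back-of-the-envelope sketch of unilateral mitigation -- illustrative round numbers only.
# Simplifying assumption: avoided future climate damages scale roughly in proportion
# to the share of global emissions that the policy actually removes.

global_policy_cost = 1.0      # cost of an optimal global policy, % of GDP
global_damage_avoided = 1.0   # future damages avoided if the whole world acts, % of GDP
usa_emissions_share = 1 / 7   # approximate US share of global emissions

# The USA alone, running the same policy at the same cost, removes only its own
# share of global emissions, so the damages avoided shrink roughly in proportion.
usa_policy_cost = global_policy_cost
usa_damage_avoided = global_damage_avoided * usa_emissions_share

print(f"Whole world acts: cost {global_policy_cost:.1f}% of GDP, "
      f"damages avoided {global_damage_avoided:.1f}% of GDP")
print(f"USA acts alone  : cost {usa_policy_cost:.1f}% of GDP, "
      f"damages avoided ~{usa_damage_avoided:.2f}% of GDP")
```

On these round numbers the country acting alone bears the full 1% policy cost for roughly a 0.1-0.15% reduction in its expected climate costs, which is the point made above.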

The third assumption is another that is sufficient on its own to undermine climate mitigation. The UK and some US states are pursuing what would be a less-than-2°C pathway if it were universally applied. That means they are committing to a highly convex policy cost curve (often made steeper by far-from-optimal policies) with virtually no benefits for future generations.

Best Policies under the model assumptions

The simplest alternative to climate mitigation policies would be to have no policies at all. However, if the climate change cost functions are a true representation, and given the current Paris Agreement, this is not a viable option for leaders less thick-skinned than President Trump, or those facing a majority who believe in climate change. Economic theory can provide some insights into the strategies to be employed. For instance, if the climate cost curve is a quadratic as in Nordhaus (or steeper – in Stern I believe it was at least a quartic) there are rapidly diminishing returns to mitigation policies in terms of costs mitigated. For a politician who wants to serve their country, the simplest strategies are to

  • Give the impression of doing something to appear virtuous
  • Incur as little cost as possible, especially those that are visible to the majority
  • Benefit special interest groups, especially those with climate activist participants
  • Get other countries to bear the real costs of mitigation.

This implies that many political leaders who want to serve the best interests of their countries need to adopt a strategy of showing they are doing one thing to appear virtuous, whilst in reality doing something quite different.

In countries dependent on extracting and exporting fossil fuels for a large part of their national income (e.g. the Gulf States, Russia, Kazakhstan, Turkmenistan etc.) there are different priorities and much higher marginal policy costs for global mitigation. In particular, if, as part of climate policies, other countries were to shut down existing fossil fuel extraction, or fail to develop new sources of supply to a significant extent, then market prices would rise, to the benefit of the other producers.

Conclusion

Using Nordhaus's model assumptions, if the world as a whole fulfilled the Paris Climate Agreement collectively with optimal policies, the world would be worse off than if it did nothing. In reality, most countries are pursuing little or no actual climate mitigation policy. Within this context, pursuing any costly climate mitigation policies will make a country worse off than doing nothing.

Assuming political leaders have the best interests of their country at heart, and regardless of whether they regard climate change as a problem, the optimal policy strategy is to impose as little costly policy as possible for the maximum appearance of being virtuous, whilst doing the utmost to get other countries to pursue costly mitigation policies.

Finally

I reached the conclusion that climate mitigation will always make a nation worse off, using neoclassical graphical analysis, in October 2013.

Kevin Marshall

Australian Beer Prices set to Double Due to Global Warming?

Earlier this week Nature Plants published a new paper Decreases in global beer supply due to extreme drought and heat

The Scientific American has an article "Trouble Brewing? Climate Change Closes In on Beer Drinkers" with the sub-title "Increasing droughts and heat waves could have a devastating effect on barley stocks—and beer prices". The Daily Mail headlines with "Worst news ever! Australian beer prices are set to DOUBLE because of global warming". All those climate deniers in Australia have denied future generations the ability to down a few cold beers with their barbecued steaks or tofu salads.

This research should be taken seriously, as it is by a crack team of experts across a number of disciplines and universities. Said Steven J Davis of the University of California at Irvine:

The world is facing many life-threatening impacts of climate change, so people having to spend a bit more to drink beer may seem trivial by comparison. But … not having a cool pint at the end of an increasingly common hot day just adds insult to injury.

Liking the odd beer or three, I am really concerned about this prospect, so I rented the paper for 48 hours to check it out. What a sensation it is. Here are a few impressions.

Layers of Models

From the Introduction, a series of models was used:

  1. Created an extreme events severity index for barley based on extremes in historical data for 1981-2010.
  2. Plugged this into five different Earth Systems models for the period 2010-2099, using different RCP scenarios, the most extreme of which shows over 5 times the warming of the 1981-2010 period. What is more, severe climate events are a non-linear function of temperature rise.
  3. Then model the impact of these severe weather events on crop yields in 34 World Regions using a “process-based crop model”.
  4. (W)e examine the effects of the resulting barley supply shocks on the supply and price of beer in each region using a global general equilibrium model (Global Trade Analysis Project model, GTAP).
  5. Finally, we compare the impacts of extreme events with the impact of changes in mean climate and test the sensitivity of our results to key sources of uncertainty, including extreme events of different severities, technology and parameter settings in the economic model.

What I found odd was they made no allowance for increasing demand for beer over a 90 year period, despite mentioning in the second sentence that

(G)lobal demand for resource-intensive animal products (meat and dairy) processed foods and alcoholic beverages will continue to grow with rising incomes.

Extreme events – severity and frequency

As stated in point 2, the paper uses different RCP scenarios. These featured prominently in the IPCC AR5 of 2013 and 2014. They go from RCP2.6, which is the most aggressive mitigation scenario, through to RCP8.5, the non-policy scenario, which projected around 4.5°C of warming from 1850-1870 through to 2100, or about 3.8°C of warming from 2010 to 2090.

Figure 1 has two charts. On the left it shows that extreme events will increase in intensity with temperature. RCP2.6 will do very little, but RCP8.5 would result by the end of the century in events 6 times as intense as today. The problem is that for up to 1.5°C of warming there appears to be no noticeable change whatsoever. That is, for about the same amount of warming as the world has experienced from 1850 to 2010 per HADCRUT4, there will be no change. Beyond that, things take off. How the models project well beyond known experience for a completely different scenario defeats me. It could be largely based on their modelling assumptions, which are in turn strongly tainted by their beliefs in CAGW. There is no reality check that their models are not falling apart, or reliant on arbitrary non-linear parameters.

The right-hand chart shows that extreme events are projected to increase in frequency as well. Under RCP2.6 there is a ~4% chance of an extreme event, rising to ~31% under RCP8.5. Again, there is an issue of projecting well beyond any known range.

Fig 2 : Average barley yield shocks during extreme events

The paper assumes that the current geographical distribution and area of barley cultivation is maintained. They have modelled the average yield change in 2099, relative to 1981-2010, on a grid with 0.5° x 0.5° resolution, to create four colorful world maps representing each of the four RCP emissions scenarios. At the equator, each grid cell is about 56 x 56 km, an area of 3,100 km2 or 1,200 square miles. Of course, nearer the poles the area diminishes significantly. This is quite a fine level of detail for projections based on 30 years of data applied to radically different circumstances 90 years in the future. Map a) is for RCP8.5. On average, yields are projected to be 17% down. As Paul Homewood showed in a post on the 17th, this projected yield fall should be put in the context of a doubling of yields per hectare since the 1960s.
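The grid-cell arithmetic above is easy to verify; the short sketch below assumes roughly 111 km per degree of latitude and scales the east-west extent by the cosine of latitude.

```python
import math

# Approximate size of a 0.5 x 0.5 degree grid cell at a given latitude.
KM_PER_DEGREE = 111.3   # roughly constant north-south; east-west shrinks towards the poles

def cell_area_km2(lat_deg, res_deg=0.5):
    north_south = res_deg * KM_PER_DEGREE
    east_west = res_deg * KM_PER_DEGREE * math.cos(math.radians(lat_deg))
    return north_south * east_west

equator = cell_area_km2(0.0)     # ~3,100 km2, about 1,200 square miles
mid_lat = cell_area_km2(47.0)    # an illustrative mid-latitude barley region
print(f"Equator      : {equator:,.0f} km2 ({equator * 0.3861:,.0f} sq miles)")
print(f"47 degrees N : {mid_lat:,.0f} km2")
```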

This increase in productivity has often been ascribed solely to improvements in seed varieties (see Norman Borlaug), mechanization and the use of fertilizers. These have undoubtedly had a large part to play in the productivity improvement. But also important is that agriculture has become more intensive. Forty years ago it was clear that there was a distinction between the intensive farming of Western Europe and the extensive farming of the North American prairies and the Russian steppes. It was not due to better soils or climate in Western Europe. This difference can be staggering. In the Soviet Union about 30% of agricultural output came from around 1% of the available land. These were the private plots on which workers on the state and collective farms could produce their own food and sell the surplus in local markets.

Looking at chart a in Figure 2, there are wide variations about this average global decrease of 17%.

In North America, Montana and North Dakota have areas where barley shocks during extreme years will lead to mean yield changes over 90% higher than normal, and the areas around them have yields >50% higher than normal. But go less than 1,000 km north into Canada to the Calgary/Saskatoon area and there are small decreases in yields.

In Eastern Bolivia – the part due North of Paraguay – there is the biggest patch of > 50% reductions in the world. Yet 500-1000 km away there is a North-South strip (probably just 56km wide) with less than a 5% change.

There is a similar picture in Russia. On the Kazakhstani border there are areas of >50% increases, but in a thinly populated band further north and west, going from around Kirov to southern Finland, there are massive decreases in yields.

Why, over the course of decades, those with increasing yields would not increase output, and those with decreasing yields would not switch to something else, defeats me. After all, if overall yields are decreasing due to frequent extreme weather events, the farmers would be losing money, and those farmers who do well when overall yields are down will be making extraordinary profits.

A Weird Economic Assumption

Building up to looking at costs, there is a strange assumption.

(A)nalysing the relative changes in shares of barley use, we find that in most case barley-to-beer shares shrink more than barley-to-livestock shares, showing that food commodities (in this case, animals fed on barley) will be prioritized over luxuries such as beer during extreme events years.

My knowledge of farming and beer is limited, but I believe that cattle can be fed on things other than barley, for instance grass, silage and sugar beet. Yet beers require precise quantities of barley and hops of certain grades.

Further, cattle feed is a large part of the cost of a kilo of beef or a litre of milk, but it takes only around 250-400g of malted barley to produce a litre of beer. The current wholesale price of malted barley is about £215 a tonne, or 5.4 to 8.6p a litre of beer. About the cheapest 4% alcohol lager I can find in a local supermarket is £3.29 for 10 x 250ml bottles, or £1.32 a litre. Taking off 20% VAT and excise duty leaves around 30p a litre for raw materials, manufacturing costs, packaging, manufacturer's margin, transportation, supermarket's overhead and supermarket's margin. For comparison, four pints (2.276 litres) of fresh milk costs £1.09 in the same supermarket, working out at 48p a litre. This carries no excise duty or VAT. It might have greater costs due to refrigeration, but I would suggest it costs more to produce, and that feed is far more than 5p a litre.
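Putting those approximate figures together shows how small a share of the retail price the barley is. The sketch below uses only the rough numbers quoted above, not audited industry costings.

```python
# Rough share of malted barley in the price of cheap lager, using the approximate
# figures quoted in the text above.

barley_price_per_tonne = 215.0       # GBP per tonne, wholesale malted barley (approx.)
barley_per_litre_kg = (0.25, 0.40)   # kg of malted barley per litre of beer (approx. range)
retail_per_litre = 3.29 / 2.5        # 10 x 250ml bottles at GBP 3.29 -> ~GBP 1.32 a litre
non_tax_cost = 0.30                  # GBP a litre left after VAT and excise duty (as estimated above)

for kg in barley_per_litre_kg:
    barley_cost = kg * barley_price_per_tonne / 1000   # GBP per litre of beer
    print(f"{kg * 1000:.0f}g of barley: {barley_cost * 100:.1f}p a litre, "
          f"{barley_cost / retail_per_litre:.0%} of the retail price, "
          f"{barley_cost / non_tax_cost:.0%} of the ~30p non-tax cost")
```

On these figures, even a doubling of the barley price would add well under 10p to a £1.32 litre of lager.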

I know that a reasonable 0.5 litre bottle of ale is £1.29 to £1.80 in the supermarkets I shop in, but it is the cheapest beers that will likely suffer the biggest percentage rise from an increase in raw material prices. Due to taxation and other costs, large changes in raw material prices will have very little impact on final retail prices. Even less so in pubs, where a British pint (568ml) sells at the equivalent of £4 to £7 a litre.

That is, the assumption is the opposite of what would happen in a free market. In the face of a shortage, farmers will substitute other forms of cattle feed for barley, whilst beer manufacturers will absorb the extra cost.

Disparity in Costs between Countries

The most bizarre claim in the article is contained in the central column of Figure 4, which looks at the projected increases in the cost of a 500 ml bottle of beer in US dollars. Chart h shows this for the most extreme scenario, RCP8.5.

I was very surprised that a global general equilibrium model would come up with such huge disparities in costs after 90 years. After all, my understanding is that these models use utility-maximizing consumers, profit-maximizing producers, perfect information and instantaneous adjustment. Clearly there is something very wrong with this model. So I decided to compare where I live in the UK with neighbouring Ireland.

In the UK and Ireland there are similarly high taxes on beer, with Ireland's being slightly higher. Both countries have lots of branches of the massive discount chain Aldi, which lists some products on its websites aldi.co.uk and aldi.ie. In Ireland a 500 ml can of Sainte Etienne lager is €1.09, or €2.18 a litre, equivalent to £1.92 a litre. In the UK it is £2.59 for 4 x 440ml cans, or £1.59 a litre. The lager is about 21% more in Ireland. But the tax difference should only be about 15% on a 5% beer (Sainte Etienne is 4.8%). Aldi are not making bigger profits in Ireland; they may just have higher costs there, or smaller margins on other items. It is also comparing a single can against a multipack. So pro-rata the £1.80 ($2.35) bottle of beer in the UK would be about $2.70 in Ireland. Under the RCP8.5 scenario, the models predict the bottle of beer to rise by $1.90 in the UK and $4.84 in Ireland. Strip out the excise duty and VAT and the price differential goes from zero to $2.20.

Now suppose you were a small beer manufacturer in England, Wales or Scotland. If beer was selling for $2.20 more in Ireland than in the UK, would you not want to stick 20,000 bottles in a container and ship it to Dublin?

If the researchers really understood the global brewing industry, they would realize that there are major brands sold across the world. Many are brewed in a number of countries to the same recipe. It is the barley that is shipped to the brewery, where equipment and techniques are identical to those in other parts of the world. These researchers seem to have failed to get away from their computer models to conduct field work in a few local bars.

What can be learnt from this?

When making projections well outside of any known range, the results must be sense-checked. Clearly, although the researchers have used an economic model, they have not understood the basics of economics. People are not dumb automatons waiting for some official to tell them to change their patterns of behavior in response to changing circumstances. They notice changes in the world around them and respond. A few seize the opportunities presented and can become quite wealthy as a result. Farmers have been astute enough to note mounting losses and change how and what they produce. There is also competition between regions. For example, in the 1960s Brazil produced over half the world's coffee. The major region for production in Brazil was centered around Londrina in the North-East of Parana state. Despite straddling the Tropic of Capricorn, every few years there would be a spring-time frost which would destroy most of the crop. By the 1990s most of the production had moved north to Minas Gerais, well out of the frost belt. The rich fertile soils around Londrina are now used for other crops, such as soya, cassava and mangoes. It was not out of human design that the movement occurred, but simply that the farmers in Minas Gerais could make bumper profits in the frost years.

The publication of this article shows a problem with peer review. Nature Plants is basically a biology journal. Reviewers are not likely to have specialist skills in climate models or economic theory, though those selected should have experience in agricultural models. If peer review is literally just that, it will fail anyway in an inter-disciplinary subject, where the participants do not have a general grounding in all the disciplines. In this paper it is not just economics, but knowledge of product costing as well. It is academic superiors from the specialisms that are required for review, not inter-disciplinary peers.

Kevin Marshall

 

Why can't I reconcile the emissions to achieve 1.5°C or 2°C of Warming?

Introduction

At heart I am a beancounter. That is, when presented with figures I like to understand how they are derived. When it comes to the claims about the quantity of GHG emissions required to exceed 2°C of warming, I cannot get even close unless I make a series of assumptions, some of which are far from robust. Applying the same set of assumptions, I cannot derive emissions consistent with restraining warming to 1.5°C.

Further, the combined impact of all the assumptions is to create a storyline that appears to me only as empirically valid as an infinite number of other storylines. These include a large number of plausible scenarios where much greater emissions can be emitted before 2°C of warming is reached, or where (based on alternative assumptions) even 2°C of irreversible warming is already in the pipeline.

Maybe an expert climate scientist will clearly show the errors of this climate sceptic, and use it as a means to convince the doubters of climate science.

What I will attempt here is something extremely unconventional in the world of climate. That is, I will try to state all the assumptions made, highlighting them clearly. Further, I will show my calculations and give clear references, so that anyone can easily follow the arguments.

Note – this is a long post. The key points are contained in the Conclusions.

The aim of constraining warming to 1.5 or 2°C

The Paris Climate Agreement was brought about by the UNFCCC. On its website it states:

The Paris Agreement central aim is to strengthen the global response to the threat of climate change by keeping a global temperature rise this century well below 2 degrees Celsius above pre-industrial levels and to pursue efforts to limit the temperature increase even further to 1.5 degrees Celsius. 

The Paris Agreement states in Article 2

1. This Agreement, in enhancing the implementation of the Convention, including its objective, aims to strengthen the global response to the threat of climate change, in the context of sustainable development and efforts to eradicate poverty, including by:

(a) Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;

Translating this aim into mitigation policy requires quantification of global emissions targets. The UNEP Emissions Gap Report 2017 has a graphic showing estimates of the emissions that can occur before the 1.5°C or 2°C warming levels are breached.

Figure 1 : Figure 3.1 from the UNEP Emissions Gap Report 2017

The emissions are of all greenhouse gases, expressed in billions of tonnes of CO2 equivalents. From 2010, the quantities of emissions before either 1.5°C or 2°C is breached are respectively about 600 GtCO2e and 1000 GtCO2e. It is these two figures that I cannot reconcile when using the same assumptions to calculate both. My failure to reconcile is not just a minor difference. Rather, on the same assumptions under which 1000 GtCO2e can be emitted before 2°C is breached, 1.5°C of warming is already in the pipeline. In establishing the problems I encounter, I will endeavor to clearly state the assumptions made and look at a number of examples.

 Initial assumptions

1 A doubling of CO2 will eventually lead to 3°C of rise in global average temperatures.

This despite the 2013 AR5 WG1 SPM stating on page 16

Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C

And stating in a footnote on the same page.

No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.

2 Achieving full equilibrium climate sensitivity (ECS) takes many decades.

This implies that at any point in the last few years, or any year in the future there will be warming in progress (WIP).

3 Including other greenhouse gases adds to warming impact of CO2.

Empirically, the IPCC’s Fifth Assessment Report based its calculations on 2010 when CO2 levels were 390 ppm. The AR5 WG3 SPM states in the last sentence on page 8

For comparison, the CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)

As with climate sensitivity, the assumption is the middle of an estimated range. In this case over one fifth of the range has the full impact of GHGs being less than the impact of CO2 on its own.

4 All the rise in global average temperature since the 1800s is due to the rise in GHGs.

5 An increase in GHG levels will eventually lead to warming unless action is taken to remove those GHGs from the atmosphere, generating negative emissions. 

These are restrictive assumptions made for ease of calculations.

Some calculations

First a calculation to derive the CO2 levels commensurate with 2°C of warming. I urge readers to replicate these for themselves.
From a Skeptical Science post by Dana1981 (Dana Nuccitelli) “Pre-1940 Warming Causes and Logic” I obtained a simple equation for a change in average temperature T for a given change in CO2 levels.

ΔT(CO2) = λ × 5.35 × ln(B/A)
where A = the CO2 level in year A (expressed in parts per million), and B = the CO2 level in year B.
I use λ = 0.809, so that if B = 2A, ΔT(CO2) = 3.00

Pre-industrial CO2 levels were 280ppm. 3°C of warming is generated by CO2 levels of 560 ppm, and 2°C of warming is when CO2 levels reach 444 ppm.
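For anyone who wants to replicate the arithmetic, the short sketch below applies the formula above with the same λ = 0.809 and the 280 ppm pre-industrial base. The 407 ppm (2017 Mauna Loa average) and 430 ppm CO2e (IPCC estimate for 2011) inputs are the figures used elsewhere in this post.

```python
import math

LAMBDA = 0.809     # chosen so that a doubling of CO2 gives 3.0C of eventual warming
BASE_PPM = 280.0   # assumed pre-industrial CO2 level, ppm

def eventual_warming(ppm):
    """Eventual warming in C for a given CO2 (or CO2e) level, per the formula above."""
    return LAMBDA * 5.35 * math.log(ppm / BASE_PPM)

def level_for(delta_t):
    """CO2 (or CO2e) level in ppm commensurate with a given eventual warming."""
    return BASE_PPM * math.exp(delta_t / (LAMBDA * 5.35))

for t in (1.5, 2.0, 3.0):
    print(f"{t:.1f}C of warming  ->  {level_for(t):.0f} ppm")

print(f"Eventual warming at 407 ppm CO2 (2017 Mauna Loa average): {eventual_warming(407):.2f}C")
print(f"Eventual warming at 430 ppm CO2e (IPCC estimate for 2011): {eventual_warming(430):.2f}C")
```

This reproduces the 444 ppm figure for 2°C and puts 1.5°C at about 396 ppm.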

From the Mauna Loa data, CO2 levels averaged 407 ppm in 2017. Given assumption (3), and further assuming the impact of other GHGs is unchanged, 2°C of warming would have been surpassed in around 2016, when CO2 levels averaged 404 ppm. The actual rise in global average temperatures from HADCRUT4 is about half that amount, hence the assumption that the impact of a rise in CO2 takes an inordinately long time for the actual warming to reveal itself. Even with the assumption that 100% of the warming since around 1800 is due to the increase in GHG levels, the warming in progress (WIP) is about the same as the revealed warming. Yet the Sks article argues that some of the early twentieth century warming was due to factors other than the rise in GHG levels.

This is the crux of the reconciliation problem. From this initial calculation, and based on the assumptions, the 2°C warming threshold has recently been breached, and by the same assumptions 1.5°C was likely breached in the 1990s. There are a lot of assumptions here, so I could have missed something or made an error. Below I go into some key examples that verify this initial conclusion. Then I look at how, by introducing a new assumption, it is claimed that 2°C of warming has not yet been reached.
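To make that crux explicit, the sketch below reuses the two functions above and, per assumption (3), treats the roughly 40 ppm gap between CO2 (about 390 ppm) and CO2-eq (about 430 ppm) around 2010/11 as a fixed allowance for the other GHGs. The Mauna Loa comparison figures are the ones quoted earlier.

```python
OTHER_GHG_OFFSET = 430 - 390    # ppm of CO2-eq attributed to non-CO2 GHGs, held constant

for target in (2.0, 1.5):
    co2_only = ppm_for_warming(target)           # CO2 level if CO2 were the only GHG
    with_ghgs = co2_only - OTHER_GHG_OFFSET      # CO2 level once other GHGs are allowed for
    print(target, round(co2_only), round(with_ghgs))

# 2.0 C -> ~444 ppm CO2-only, or ~404 ppm of CO2 with the allowance (the 2016 Mauna Loa average)
# 1.5 C -> ~396 ppm CO2-only, or ~356 ppm of CO2 with the allowance (a level passed in the 1990s)
```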

100 Months and Counting Campaign 2008

Trust, yet verify has a post We are Doomed!

This tracks through the Wayback Machine to look at the now defunct 100monthsandcounting.org campaign, sponsored by the left-wing New Economics Foundation. The archived “Technical Note” states that the 100 months ran from August 2008, making the end date November 2016. The choice of 100 months turns out to be spot-on when combined with the actual data for CO2 levels; the IPCC's 2014 central estimate of the CO2 equivalent of all GHG levels, based on 2010 GHG levels (and assuming other GHGs are not impacted); and the central estimate for Equilibrium Climate Sensitivity (ECS) used by the IPCC. That is, take 430 ppm CO2e, leaving about 14 ppm to go before the 444 ppm commensurate with 2°C of warming.
Maybe that was just a fluke, or were they giving a completely misleading forecast? The 100 Months and Counting Campaign was definitely not agreeing with the UNEP Emissions GAP Report 2017 in making the claim. But were they correctly interpreting what the climate consensus was saying at the time?
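A quick check of the arithmetic, continuing the earlier sketch; the month-counting convention (August 2008 as the first of the 100 months) follows the campaign's stated end date of November 2016.

```python
# August 2008 counted as month 1 of the 100 months.
start = 2008 * 12 + (8 - 1)                  # August 2008 as a month index
end_year, end_month = divmod(start + 99, 12)
print(end_year, end_month + 1)               # -> 2016 11, i.e. November 2016

# On the assumptions above, 2 C corresponds to ~444 ppm CO2-eq, or ~404 ppm of CO2
# once the ~40 ppm allowance for other GHGs is deducted -- the level that Mauna Loa
# CO2 averaged in 2016.
print(round(ppm_for_warming(2.0) - OTHER_GHG_OFFSET))    # ~404
```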

The 2006 Stern Review

The “Stern Review: The Economics of Climate Change” (archived access here) was commissioned to provide benefit-cost justification for what became the Climate Change Act 2008. From the Summary of Conclusions:

The costs of stabilising the climate are significant but manageable; delay would be dangerous and much more costly.

The risks of the worst impacts of climate change can be substantially reduced if greenhouse gas levels in the atmosphere can be stabilised between 450 and 550ppm CO2 equivalent (CO2e). The current level is 430ppm CO2e today, and it is rising at more than 2ppm each year. Stabilisation in this range would require emissions to be at least 25% below current levels by 2050, and perhaps much more.

Ultimately, stabilisation – at whatever level – requires that annual emissions be brought down to more than 80% below current levels. This is a major challenge, but sustained long-term action can achieve it at costs that are low in comparison to the risks of inaction. Central estimates of the annual costs of achieving stabilisation between 500 and 550ppm CO2e are around 1% of global GDP, if we start to take strong action now.

If we take assumption 1, that a doubling of CO2 levels will eventually lead to 3.0°C of warming, and a base CO2 level of 280 ppm, then the Stern Review is saying that the worst impacts can be avoided if the temperature rise is constrained to 2.1 – 2.9°C, but only in the range of 2.5 to 2.9°C does the 2006 mitigation cost estimate of 1% of GDP apply. It is not difficult to see why constraining warming to 2°C or lower would not be net beneficial. With GHG levels already at 430 ppm CO2e, and CO2 levels rising at over 2 ppm per annum, the 2°C warming level of 444 ppm (or the rounded 450 ppm) would have been exceeded well before any global reductions could be achieved.
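Reusing the warming() function from the first sketch, the Stern stabilisation levels translate into eventual warming as follows (the ppm figures are Stern's; the translation rests on assumption 1).

```python
for ppm in (450, 500, 550):
    print(ppm, round(warming(ppm), 1))
# 450 ppm -> ~2.1 C, 500 ppm -> ~2.5 C, 550 ppm -> ~2.9 C
# So the 450-550 ppm CO2e range implies roughly 2.1-2.9 C of eventual warming, and the
# 500-550 ppm range costed at ~1% of GDP implies roughly 2.5-2.9 C.
```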

There is a curiosity in the figures. When the Stern Review was published in 2006, estimated GHG levels were 430 ppm CO2e, as against CO2 levels for 2006 of 382 ppm. The IPCC AR5 states

For comparison, the CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)

In 2011, when CO2 levels at 392 ppm averaged 10 ppm higher than in 2006, estimated GHG levels were the same. This is a good example of why one should take note of uncertainty ranges.

IPCC AR4 Report Synthesis Report Table 5.1

A year before the 100 Months and Counting campaign, the IPCC produced its Fourth Assessment Synthesis Report. On page 67 of the 2007 Synthesis Report (pdf) there is Table 5.1 of emissions scenarios.

Figure 2 : Table 5.1. IPCC AR4 Synthesis Report Page 67 – Without Footnotes

I inputted the various CO2-eq concentrations into my amended version of Dana Nuccitelli's magic equation and compared the results to the calculated warming in Table 5.1.

Figure 3 : Magic Equation calculations of warming compared to Table 5.1. IPCC AR4 Synthesis Report

My calculations of warming are the same as those of the IPCC to one decimal place, except for the last two. Why these rounding differences? From a little fiddling in Excel, it would appear to me that the IPCC derived its warming results from 3°C per doubling with the coefficient calculated to two decimal places, whilst my version of the formula carries four decimal places.
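For anyone wanting to replicate the comparison, a sketch along these lines will do it. The concentration boundaries below are my transcription of the standard Table 5.1 categories, since the table is reproduced here only as an image; treat them as an assumption on my part.

```python
# Category boundaries (ppm CO2-eq) as transcribed from Table 5.1; warming per the formula above.
for ppm in (445, 490, 535, 590, 710, 855, 1130):
    print(ppm, round(warming(ppm), 1))
# The results agree with the table to one decimal place except at the two highest
# concentrations, where this formula rounds to values 0.1 C below the table's figures.
```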

Note the following

  • Other GHGs are translatable into CO2 equivalents. Once translated, they can be treated as if they were CO2.
  • There is no time period in this table. The 100 Months and Counting Campaign merely punched in the existing numbers and forecast when GHG levels would reach those commensurate with 2°C of warming.
  • There is no mention of a 1.5°C warming scenario. If constraining warming to 1.5°C did not seem credible in 2007, why should it be credible in 2014 or 2017, when CO2 levels are higher?

IPCC AR5 Report Highest Level Summary

I believe that the underlying estimates of the emissions that would lead to 1.5°C or 2°C of warming, used by the UNFCCC and UNEP, come from the IPCC Fifth Assessment Report (AR5), published in 2013/4. At this stage I introduce a couple of empirical assumptions from IPCC AR5.

6 Cut-off year for historical data is 2010 when CO2 levels were 390 ppm (compared to 280 ppm in pre-industrial times) and global average temperatures were about 0.8°C above pre-industrial times.

Using the magic equation above, and the 390 ppm CO2 levels, there is around 1.4°C of warming due from CO2. Given 0.8°C of revealed warming to 2010, the residual “warming-in-progress” was 0.6°C.
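A one-line check with the earlier functions; the 0.8°C of revealed warming to 2010 is the figure in assumption 6.

```python
eventual = warming(390)                              # eventual warming from the 2010 CO2 level alone
print(round(eventual, 1), round(eventual - 0.8, 1))  # ~1.4 C eventual, ~0.6 C warming-in-progress
```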

The highest level of summary in AR5 is a Presentation to summarize the central findings of the Summary for Policymakers of the Synthesis Report, which in turn brings together the three Working Group Assessment Reports. This Presentation can be found at the bottom right of the IPCC AR5 Synthesis Report webpage. Slide 33 of 35 (reproduced below as Figure 4) gives the key policy point: 1000 GtCO2 of emissions from 2011 onwards will lead to 2°C of warming. This is very approximate but concurs with the UNEP Emissions Gap Report.

Figure 4 : Slide 33 of 35 of the AR5 Synthesis Report Presentation.

Now for some calculations.

1900 GtCO2 raised CO2 levels by 110 ppm (390 − 280). 1 ppm = 17.3 GtCO2

1000 GtCO2 will raise CO2 levels by 60 ppm (450-390).  1 ppm = 16.7 GtCO2

Given the obvious roundings of the emissions figures, the numbers fall out quite nicely.

Last year I divided CDIAC CO2 emissions (from the Global Carbon Project) by Mauna Loa CO2 annual mean growth rates (data) to produce the following.

Figure 5 : CDIAC CO2 emissions estimates (multiplied by 3.664 to convert from carbon units to CO2 units) divided by Mauna Loa CO2 annual mean growth rates in ppm.

17 GtCO2 for a 1 ppm rise is about right for the last 50 years.

To raise CO2 levels from 390 to 450 ppm needs about 17 × (450 − 390) = 1020 GtCO2. Slide 33 is a good approximation of the CO2 emissions needed to raise CO2 levels by 60 ppm.
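The conversion factors implied by those figures, as a short sketch; the rounded 17 GtCO2 per ppm is the figure used in the rest of this post.

```python
# Implied GtCO2 per 1 ppm rise in atmospheric CO2.
past_factor = 1900 / (390 - 280)      # ~17.3 GtCO2 per ppm from historical emissions
future_factor = 1000 / (450 - 390)    # ~16.7 GtCO2 per ppm implied by the 1000 GtCO2 budget
budget_390_to_450 = 17 * (450 - 390)  # ~1020 GtCO2 to lift CO2 from 390 to 450 ppm at 17 GtCO2/ppm
print(round(past_factor, 1), round(future_factor, 1), budget_390_to_450)
```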

But there are issues

  • If ECS = 3.00 and it takes 17 GtCO2 of emissions to raise CO2 levels by 1 ppm, then it would take only 918 (17 × 54) GtCO2 to achieve 2°C of warming. Alternatively, if in future 1000 GtCO2 is assumed to achieve 2°C of warming, it will take 18.5 GtCO2 to raise CO2 levels by 1 ppm, as against 17 GtCO2 in the past. It is only by using 450 ppm as commensurate with 2°C of warming that past and future stack up. (These figures are checked in the sketch after this list.)
  • If ECS = 3, from CO2 alone 1.5°C would be achieved at 396 ppm, or a further 100 GtCO2 of emissions. This CO2 level was passed in 2013 or 2014.
  • The calculation falls apart if other GHGs are included. GHG levels are taken as equivalent to 430 ppm of CO2 in 2011. Therefore, with all GHGs considered, 2°C of warming would be achieved with 238 GtCO2e of emissions ((444 − 430) × 17), and 1.5°C of warming was likely passed in the 1990s.
  • If actual warming from pre-industrial times to 2010 was 0.8°C, ECS = 3, and the rise in all GHG levels was equivalent to a rise in CO2 from 280 to 430 ppm, then the residual “warming-in-progress” (WIP) was just over 1°C. That is, the WIP exceeds the total warming revealed over well more than a century. If the short-term temperature response is half or more of the full ECS value, it would imply that even nineteenth-century emissions have yet to have their full impact on global average temperatures.
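The bullet-point figures above can be checked with the earlier functions and the rounded 17 GtCO2 per ppm conversion:

```python
GT_PER_PPM = 17
co2_for_2C = round(ppm_for_warming(2.0))        # ~444 ppm for 2 C of eventual warming

print((co2_for_2C - 390) * GT_PER_PPM)          # 918 GtCO2 to reach 2 C if only CO2 is counted
print(round(1000 / (co2_for_2C - 390), 1))      # ~18.5 GtCO2/ppm implied by a 1000 GtCO2 budget
print((co2_for_2C - 430) * GT_PER_PPM)          # 238 GtCO2e once GHG levels are taken as 430 ppm CO2-eq
print(round(warming(430) - 0.8, 2))             # ~1.06 C of warming-in-progress at 2010
```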

What justification is there for effectively disregarding the impact of other greenhouse gas emissions when this was not done previously?

This offset is to be found in Section C – The Drivers of Climate Change – in the AR5 WG1 SPM, in particular the breakdown, with uncertainties, in Table SPM.5. Another story is how AR5 reached the very same conclusion as AR4 WG1 SPM page 4 on the impact of negative anthropogenic forcings, but with a different methodology, hugely different estimates of aerosols, and very different uncertainty bands. Further, these historical estimates are only for the period 1951-2010, whilst the starting date for the 1.5°C or 2°C of warming is 1850.

From this a further assumption is made when considering AR5.

7 The estimated historical impact of other GHG emissions (methane, nitrous oxide…) has been effectively offset by the cooling impacts of aerosols and precursors. It is assumed that this will carry forward into the future.

UNEP Emissions Gap Report 2014

Figure 1 above is figure 3.1 from the UNEP Emissions GAP Report 2017. The equivalent report from 2014 puts this 1000 GtCO2 of emissions in a clearer context. First a quotation with two accompanying footnotes.

As noted by the IPCC, scientists have determined that an increase in global temperature is proportional to the build-up of long-lasting greenhouse gases in the atmosphere, especially carbon dioxide. Based on this finding, they have estimated the maximum amount of carbon dioxide that could be emitted over time to the atmosphere and still stay within the 2 °C limit. This is called the carbon dioxide emissions budget because, if the world stays within this budget, it should be possible to stay within the 2 °C global warming limit. In the hypothetical case that carbon dioxide was the only human-made greenhouse gas, the IPCC estimated a total carbon dioxide budget of about 3 670 gigatonnes of carbon dioxide (Gt CO2 ) for a likely chance3 of staying within the 2 °C limit . Since emissions began rapidly growing in the late 19th century, the world has already emitted around 1 900 Gt CO2 and so has used up a large part of this budget. Moreover, human activities also result in emissions of a variety of other substances that have an impact on global warming and these substances also reduce the total available budget to about 2 900 Gt CO2 . This leaves less than about 1 000 Gt CO2 to “spend” in the future4 .

3 A likely chance denotes a greater than 66 per cent chance, as specified by the IPCC.

4 The Working Group III contribution to the IPCC AR5 reports that scenarios in its category which is consistent with limiting warming to below 2 °C have carbon dioxide budgets between 2011 and 2100 of about 630-1 180 GtCO2

The numbers do not fit unless the impact of other GHGs is ignored. As found from slide 33, there is 2900 GtCO2 to raise atmospheric CO2 levels by 170 ppm, of which 1900 GtCO2 has been emitted already. The additional marginal impact of other historical greenhouse gases, the 770 GtCO2 difference, is ignored. If those GHG emissions were part of historical emissions, as the statement implies, then that marginal impact would be equivalent to an additional 45 ppm (770/17) on top of the 390 ppm CO2 level. That is not far off the IPCC's estimated CO2-eq concentration in 2011 of 430 ppm (uncertainty range 340 to 520 ppm). But by the same measure, 3670 GtCO2 would increase CO2 levels by 216 ppm (3670/17), from 280 to 496 ppm. With ECS = 3, this would eventually lead to a temperature increase of almost 2.5°C.
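At the rounded 17 GtCO2 per ppm, the UNEP figures reconcile (or fail to) as follows; the budget numbers are those quoted in the passage above.

```python
GT_PER_PPM = 17
print(round(2900 / GT_PER_PPM))             # ~171 ppm, i.e. roughly the 170 ppm rise from 280 to 450 ppm
print(round((3670 - 2900) / GT_PER_PPM))    # ~45 ppm: the marginal impact of the other GHGs
print(round(3670 / GT_PER_PPM))             # ~216 ppm: 280 ppm rising to ~496 ppm on the CO2-only budget
print(round(warming(496), 1))               # ~2.5 C of eventual warming at 496 ppm with ECS = 3
```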

The equivalent graphic from the 2014 report is its figure ES.1, reproduced below as Figure 6.

Figure 6 : From the UNEP Emissions Gap Report 2014 showing two emissions pathways to constrain warming to 2°C by 2100.

Note that this graphic goes through to 2100; only uses the CO2 emissions; does not have quantities; and only looks at constraining temperatures to 2°C.  To achieve the target requires a period of negative emissions at the end of the century.

A new assumption is thus required to achieve emissions targets.

8 Achieving the 1.5°C or 2°C warming targets likely requires many years of net negative emissions at the end of the century.

A Lower Level Perspective from AR5

A simple pie chart does not seem to make sense. Maybe my conclusions are contradicted by the more detailed scenarios? The next level of detail is to be found in table SPM.1 on page 22 of the AR5 Synthesis Report – Summary for Policymakers.

Figure 7 : Table SPM.1 on Page 22 of AR5 Synthesis Report SPM, without notes. Also found as Table 3.1 on Page 83 of AR5 Synthesis Report 

The comment for <430 ppm (the level of 2010) is “Only a limited number of individual model studies have explored levels below 430 ppm CO2-eq.” Footnote j reads

In these scenarios, global CO2-eq emissions in 2050 are between 70 to 95% below 2010 emissions, and they are between 110 to 120% below 2010 emissions in 2100.

That is, net global emissions are negative in 2100. That is not something mentioned in the Paris Agreement, which only has pledges through to 2030, but it is consistent with Table ES.1 of the UNEP Emissions Gap Report 2014. The statement does not refer to a particular level below 430 ppm CO2-eq, which equates to 1.86°C of eventual warming. So how is 1.5°C of warming not impossible without massive negative emissions? In over 600 words of notes there is no indication. For that you need to go to the footnotes of the far more detailed Table 6.3 in AR5 WG3 Chapter 6 (Assessing Transformation Pathways – pdf), page 431. Footnote 7 (bold mine)

Temperature change is reported for the year 2100, which is not directly comparable to the equilibrium warming reported in WGIII AR4 (see Table 3.5; see also Section 6.3.2). For the 2100 temperature estimates, the transient climate response (TCR) is the most relevant system property.  The assumed 90% range of the TCR for MAGICC is 1.2–2.6 °C (median 1.8 °C). This compares to the 90% range of TCR between 1.2–2.4 °C for CMIP5 (WGI Section 9.7) and an assessed likely range of 1–2.5 °C from multiple lines of evidence reported in the WGI AR5 (Box 12.2 in Section 12.5).

The major reason that 1.5°C of warming is deemed not impossible (though still more unlikely than likely), despite CO2-equivalent levels that should produce 2°C+ of warming being around for decades, is that the full warming impact takes so long to filter through. Further, Table 6.3 puts peak CO2-eq levels for the 1.5-1.7°C scenarios at 465-530 ppm, or eventual warming of 2.2 to 2.8°C. Climate WIP is the difference. But in 2018 the WIP might be larger than all the revealed warming since 1870, and certainly larger than that since the mid-1970s.

Within AR5, when talking about constraining warming to 1.5°C or 2.0°C, it is only the warming estimated to be revealed by 2100 that counts. There is no indication of how much warming in progress (WIP) remains in 2100 under the various scenarios, so I cannot reconcile back the figures. It would appear that the 1.5°C figure relies upon the impact of GHGs on warming failing to come through for over 100 years, since (even netting off other GHGs against the negative impact of aerosols) by 2100 CO2 levels would have been above 400 ppm for over 85 years, and for most of those years significantly above that level.

Conclusions

The original aim of this post was to reconcile the emissions said to be sufficient to prevent 1.5°C or 2°C of warming being exceeded, through some calculations based on a series of restrictive assumptions.

  • ECS = 3.0°C, despite the IPCC no longer giving a best estimate because of a lack of agreement across the assessed studies. The likely range is 1.5°C to 4.5°C.
  • All the temperature rise since the 1800s is assumed to be due to rises in GHGs. There is evidence that this might not be the case.
  • Other GHGs are netted off against aerosols and precursors. Given that the “CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)” when CO2 levels were around 390 ppm, this assumption is far from robust.
  • Achieving full equilibrium takes many decades. So long, in fact, that the warming-in-progress (WIP) may currently exceed all the warming revealed over more than 150 years, even on the assumption that all of that revealed historical warming is due to rises in GHG levels.

Even with these assumptions, keeping warming within 1.5°C or 2°C seems to require two further assumptions that were not recognized a few years ago. The first is to assume net negative global emissions for many years at the end of the century. The second is to talk about the warming projected for 2100 rather than the warming that would eventually result from achieving full ECS.

The whole exercise appears to rest upon a pile of assumptions. Amending the assumptions one way means admitting that 1.5°C or 2°C of warming is already in the pipeline; amending them the other way means admitting that climate sensitivity is much lower. Yet there is such a large range of empirical assumptions to choose from that there could be a very large number of scenarios just as valid as the ones used in the UNEP Emissions Gap Report 2017.

Kevin Marshall

More Coal-Fired Power Stations in Asia

A lovely feature of the GWPF site is its extracts of articles related to all aspects of climate and related energy policies. Yesterday the GWPF extracted from an opinion piece in the Hong Kong-based South China Morning Post A new coal war frontier emerges as China and Japan compete for energy projects in Southeast Asia.
The GWPF’s summary:-

Southeast Asia’s appetite for coal has spurred a new geopolitical rivalry between China and Japan as the two countries race to provide high-efficiency, low-emission technology. More than 1,600 coal plants are scheduled to be built by Chinese corporations in over 62 countries. It will make China the world’s primary provider of high-efficiency, low-emission technology.

A summary point in the article is not entirely accurate. (Italics mine)

Because policymakers still regard coal as more affordable than renewables, Southeast Asia’s industrialisation continues to consume large amounts of it. To lift 630 million people out of poverty, advanced coal technologies are considered vital for the region’s continued development while allowing for a reduction in carbon emissions.

Replacing a less efficient coal-fired power station with one of the latest technology will reduce carbon (i.e. CO2) emissions per unit of electricity produced. In China, the efficiency savings from this replacement process may outstrip the growth in power supply from fossil fuels. But in the rest of Asia, the new coal-fired power stations will mostly be additional capacity in the coming decades, so will lead to an increase in CO2 emissions. It is this additional capacity that will be primarily responsible for driving the economic growth that will lift the poor out of extreme poverty.

The newer technologies are important for other types of emissions, namely the particulate emissions that have caused high levels of choking pollution and smog in many cities of China and India. By using the new technologies, other countries can avoid the worst excesses of this pollution, whilst still using a cheap fuel available from many different sources of supply. The thrust in China will likely be to replace the high-pollution power stations with new technologies, or to adapt them to reduce the emissions and increase efficiencies. Politically, it is a different way of raising living standards and quality of life than by increasing real disposable income per capita.

Kevin Marshall

 

Ocean Impact on Temperature Data and Temperature Homogenization

Pierre Gosselin’s notrickszone looks at a new paper.

Temperature trends with reduced impact of ocean air temperature – Frank Lansner and Jens Olaf Pepke Pedersen.

The paper’s abstract.

Temperature data 1900–2010 from meteorological stations across the world have been analyzed and it has been found that all land areas generally have two different valid temperature trends. Coastal stations and hill stations facing ocean winds are normally more warm-trended than the valley stations that are sheltered from dominant oceans winds.

Thus, we found that in any area with variation in the topography, we can divide the stations into the more warm trended ocean air-affected stations, and the more cold-trended ocean air-sheltered stations. We find that the distinction between ocean air-affected and ocean air-sheltered stations can be used to identify the influence of the oceans on land surface. We can then use this knowledge as a tool to better study climate variability on the land surface without the moderating effects of the ocean.

We find a lack of warming in the ocean air sheltered temperature data – with less impact of ocean temperature trends – after 1950. The lack of warming in the ocean air sheltered temperature trends after 1950 should be considered when evaluating the climatic effects of changes in the Earth’s atmospheric trace amounts of greenhouse gasses as well as variations in solar conditions.

More generally, the paper’s authors are saying that over fairly short distances temperature stations will show different climatic trends. This has a profound implication for temperature homogenization. From Venema et al 2012.

The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities (Conrad and Pollak, 1950). In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations. 

Lansner and Pedersen are, by implication, demonstrating that the principal assumption on which homogenization is based (that nearby temperature stations are exposed to almost the same climatic signal) is not valid. As a result, data homogenization will not only eliminate biases in the temperature data (such as measurement biases, the impacts of station moves, and the urban heat island effect where it affects a minority of stations) but will also adjust out actual climatic trends. Where the climatic trends are localized and not replicated in surrounding areas, they will be eliminated by homogenization. What I found in early 2015 (following the examples of Paul Homewood, Euan Mearns and others) is that there are examples from all over the world where the data suggest that nearby temperature stations are exposed to different climatic signals. Data homogenization will, therefore, cause quite weird and unstable results. A number of posts were summarized in my post Defining “Temperature Homogenisation”. Paul Matthews at Cliscep corroborated this in his post of February 2017 “Instability of GHCN Adjustment Algorithm“.
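As a toy illustration of the point (a minimal sketch of the relative-homogenization assumption, not the actual GHCN or Venema et al. algorithm), the code below gives a candidate station a genuinely weaker local trend than its neighbours. The candidate-minus-reference difference series then drifts, which a relative homogenization routine would read as an inhomogeneity to be adjusted away.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)

# Candidate station with a weak local trend; three neighbours with a stronger one.
candidate = 0.005 * (years - 1950) + rng.normal(0.0, 0.1, years.size)
neighbours = np.stack([0.02 * (years - 1950) + rng.normal(0.0, 0.1, years.size)
                       for _ in range(3)])
reference = neighbours.mean(axis=0)      # composite reference series

# Crude relative homogeneity check: look for drift in the difference series.
diff = candidate - reference
drift_per_decade = np.polyfit(years, diff, 1)[0] * 10
print(round(drift_per_decade, 3))        # non-zero drift, here a real climatic difference,
                                         # not a measurement artefact
```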

During my attempts to understand the data, I also found that those who support AGW theory not only do not question their assumptions but also have strong shared beliefs in what the data ought to look like. One of the most significant in this context is a Climategate email sent on Mon, 12 Oct 2009 by Kevin Trenberth to Michael Mann of Hockey Stick fame, and copied to Phil Jones of the Hadley centre, Thomas Karl of NOAA, Gavin Schmidt of NASA GISS, plus others.

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate. (emphasis mine)

Homogenizing data a number of times, and evaluating the unstable results in the context of strongly held beliefs, will bring the trends ever more into line with those beliefs. There is no requirement for some sort of conspiracy behind deliberate data manipulation for this pattern of adjustments to emerge. Indeed, a conspiracy in the sense of a group knowing the truth and deliberately perverting the evidence does not really apply. Another reason a conspiracy is not needed lies in the underlying purpose of homogenization: to allow a temperature station to be representative of the surrounding area. Without that, it would not be possible to compile an average for the surrounding area, from which the global average is constructed. It is this requirement, in the context of real climatic differences over relatively small areas, that I would suggest leads to the deletion of “erroneous” data and the infilling of estimated data elsewhere.

The gradual bringing of the temperature data sets into line with beliefs is most clearly shown in the NASA GISS temperature data adjustments. Climate4you produces regular updates of the adjustments since May 2008. Below is the March 2018 version.

The reduction of the 1910 to 1940 warming period (which is at odds with theory) and the increase in the post-1975 warming phase (which correlates with the rise in CO2) support my contention of the influence of beliefs.

Kevin Marshall

 

Scotland now to impose Minimum Pricing for Alcohol

This week the British Supreme Court cleared the way for the Alcohol (Minimum Pricing) (Scotland) Act 2012 to be enacted. The Scotch Whisky Association (SWA) had mounted a legal challenge to try to halt the price hike, which it said was ‘disproportionate’ and illegal under European law. (Daily Mail) The Act will mandate that retailers charge a minimum of 50p per unit of alcohol. This will only affect the price of alcohol in off-licences and supermarkets. In the pub, the price of a pint at 5% ABV is already much higher than the implied minimum price of £1.42. I went round three supermarkets – Asda, Sainsbury’s and Aldi – to see the biggest price hikes implied in the rise.

The extra profit is kept by the retailer, though gross profits may fall as sales volumes fall. Premium brands only fall below the minimum price in promotions. With the exception of discounter Aldi, the vast majority of shelf space is occupied by alcohol above the minimum price. Further, there is no escalator: the minimum price will stay the same for the six years that the legislation is in place. However, the Scottish Government claims that 51% of alcohol sold in the off-trade is priced at less than 50 pence per unit. The promotions have a big impact. The Scottish people will be deprived of these offers. Many will travel across the border to places like Carlisle and Berwick to acquire their cheap booze, or enterprising folks will break the law by making illegal sales. This could make booze more accessible to underage drinkers and bring them into regular contact with petty criminals. However, will it reduce the demand for booze? The Scottish Government website quotes Health Secretary Shona Robison.

“This is a historic and far-reaching judgment and a landmark moment in our ambition to turn around Scotland’s troubled relationship with alcohol.

“In a ruling of global significance, the UK Supreme Court has unanimously backed our pioneering and life-saving alcohol pricing policy.

“This has been a long journey and in the five years since the Act was passed, alcohol related deaths in Scotland have increased. With alcohol available for sale at just 18 pence a unit, that death toll remains unacceptably high.

“Given the clear and proven link between consumption and harm, minimum pricing is the most effective and efficient way to tackle the cheap, high strength alcohol that causes so much damage to so many families.

Is minimum pricing effective? Clearly, it will make some alcohol more expensive. But it must be remembered that the tax on alcohol is already very high. The cheapest booze on my list, per unit of alcohol, is the 3 litre box of Perry (pear cider) at £4.29. The excise duty is £40.38 per hectolitre. With VAT at 20%, tax is £1.92, or 45% of the retail price. The £16 bottles of spirits (including two well-known brands of Scottish Whisky) are at 40% alcohol. With excise duty at £28.74 per litre of pure alcohol, tax is £13.33, or 83% of the retail price. It is well known that demand for alcohol is highly inelastic with respect to price, so very large increases in price will make very little difference to demand. This is borne out by a graphic from a 2004 report, Alcohol Harm Reduction Strategy for England, of UK alcohol consumption over the last century.
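A hedged sketch of that arithmetic, assuming the VAT element is one sixth of the VAT-inclusive shelf price; the prices and duty rates are those quoted above, and the pint figure takes 568 ml at 5% ABV.

```python
# Implied minimum price of a pint at 5% ABV under a 50p-per-unit floor.
pint_ml, abv = 568, 0.05
units = pint_ml * abv / 10                    # one UK unit = 10 ml of pure alcohol
print(round(units * 0.50, 2))                 # ~GBP 1.42

# Tax share of a 3-litre box of Perry at GBP 4.29, duty GBP 40.38 per hectolitre.
price = 4.29
duty = 0.03 * 40.38                           # 3 litres = 0.03 hectolitres
vat = price / 6                               # VAT at 20% of the ex-VAT price = 1/6 of the shelf price
print(round(duty + vat, 2))                   # ~GBP 1.92-1.93 of tax
print(round(100 * (duty + vat) / price))      # ~45% of the retail price
```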

In the early part of the twentieth century, there was a sharp fall in alcohol consumption from 1900 to the 1930s. There was a steep drop during the First World War, but after the war the decline continued the pre-war trend. This coincided with a religious revival and the temperance movement, started in the nineteenth century by organisations such as the Salvation Army and the Methodists but taken up by other Christian denominations. In other words, it was a massive cultural change from the bottom up, where it became socially unacceptable for many even to touch alcohol. Conversely, the steep decline in religion in the post-war period was accompanied by a rise in alcohol consumption.

The minimum price for alcohol is a fiscal solution being proposed for cultural problems. The outcome of a minimum price will be monopoly profits for the supermarkets and the manufacturers of alcoholic drinks.

It is true that a lot of crime is committed by those intoxicated, that other social problems are caused, and that there are health issues. But the solution is not to increase the price of alcohol; the solution is to change people. The Revival of the early twentieth century (begun before the outbreak of the Great War in 1914) saw both a fall in alcohol consumption and a fall in crime levels that continued through the Great Depression. But it was not the lack of alcohol that reduced crime in the early twentieth century. Instead, both reductions had a common root in the Christian Faith.

The Scottish Government will no doubt see a fall in sales of alcohol. But this will not represent a real reduction in consumption, as cheaper booze will be imported from England, including Scottish Whisky. All they are doing is treating people as statistics to be dictated to, and manipulated by, their shallow moralistic notions.

Kevin Marshall

 

The Morning Star’s denial of the Venezuelan Dictatorship

Guido Fawkes has an excellent example of the hard left’s denial of realities that conflict with their beliefs. From the Daily Politics, this is Morning Star editor Ben Chacko saying that the UN Human Rights Watch report on Venezuela was one-sided.

The Human Rights report can be found here.

The recent demonstrations need to be put into context. There are two contexts that can be drawn upon. The Socialist side (with which many Socialists will disagree) is from the Morning Star’s piece of 25th August, The Bolivarian Revolution hangs in the balance.

They say

One of the world’s largest producers of oil, on which 95 per cent of its economy depends, the Bolivarian socialist government of Venezuela has, over the last 17 years, used its oil revenues to cut poverty by half and reduce extreme poverty to 5.4 per cent.

The government has built social housing; boosted literacy; provided free healthcare and free education from primary school to universities and introduced arts, music and cultural analysis programmes and many others targeting specific problems at the local level.

This sentence emphasises the hard-left bias.

The mainly middle-class protesters, most without jobs and income, accused President Nicolas Maduro of dictatorship and continued with their daily demonstrations and demands for a change of government. 

Folks without “jobs or income” are hardly middle-class, but might be former middle-class. They have been laid low by circumstances. Should they be blaming the Government or forces outside the Government’s control?

 

From Capx.co on 16th August – Socialism – not oil prices – is to blame for Venezuela’s woes. Also from upi.com on 17th February – Venezuela: 75% of the population lost 19 pounds amid crisis. This is the classic tale of socialism’s failure.

  • Government control of food supplies leads to shortages, which leads to rationing, which leads to more shortages and black market profiteering. This started in 2007 when oil prices were high, but not yet at the record high.
  • Inflation is rampant, potentially rising from 720% in 2016 to 1600% this year. This is one of the highest rates in the world.
  • The weight loss is due to food shortages. It is the poorest who suffer the most, though most of the population are in extreme poverty.
  • An oil-based economy needs to diversify. Venezuela has not. It needs to use high oil prices to invest in infrastructure. Instead, the Chavez regime expropriated oil production from successful private companies and handed it to government cronies. A graphic from Forbes illustrates the problem.

About a decade ago at the height of the oil price boom, Venezuela’s known oil reserves more than tripled, yet production fell. It now has the highest oil reserves of any country in the world.

  • Crime has soared, whilst people are going hungry.
  • Maybe a million children are missing school through hunger and lack of resources to run schools. Short-run “successes” based on expropriating the wealth of others have reversed to create a situation far worse than before Chavez came to power.
  • Oil prices are in real terms above the level they were from 1986 to 2003 (with the exception of a peak for the first Gulf War) and comparable to the peak reached in 1973 with the setting up of the OPEC Cartel and oil embargo.

The reality is that Socialism always fails. But there is always a hardcore in denial, coming up with empty excuses for failure and often blaming it on others. With the rise of Jeremy Corbyn (who receives a copy of the Morning Star daily), this hardcore has taken over the Labour Party. The example of Venezuela indicates the long-term consequences of their attaining power.

Kevin Marshall