Australian Beer Prices set to Double Due to Global Warming?

Earlier this week Nature Plants published a new paper, “Decreases in global beer supply due to extreme drought and heat”.

Scientific American has an article “Trouble Brewing? Climate Change Closes In on Beer Drinkers” with the sub-title “Increasing droughts and heat waves could have a devastating effect on barley stocks—and beer prices”. The Daily Mail headlines with “Worst news ever! Australian beer prices are set to DOUBLE because of global warming”. All those climate deniers in Australia have denied future generations the ability to down a few cold beers with their barbecued steaks or tofu salads.

This research should be taken seriously, as it is by a crack team of experts across a number of disciplines and universities. Said Steven J. Davis of the University of California, Irvine:

The world is facing many life-threatening impacts of climate change, so people having to spend a bit more to drink beer may seem trivial by comparison. But … not having a cool pint at the end of an increasingly common hot day just adds insult to injury.

Liking the odd beer or three, I am really concerned about this prospect, so I rented the paper for 48 hours to check it out. What a sensation it is. Here are a few impressions.

Layers of Models

From the Introduction, a series of models was used.

  1. Created an extreme events severity index for barley based on extremes in historical data for 1981-2010.
  2. Plugged this into five different Earth System models for the period 2010-2099, run against different RCP scenarios, the most extreme of which shows over 5 times the warming of the 1981-2010 period. What is more, severe climate events are a non-linear function of temperature rise.
  3. Then model the impact of these severe weather events on crop yields in 34 World Regions using a “process-based crop model”.
  4. (W)e examine the effects of the resulting barley supply shocks on the supply and price of beer in each region using a global general equilibrium model (Global Trade Analysis Project model, GTAP).
  5. Finally, we compare the impacts of extreme events with the impact of changes in mean climate and test the sensitivity of our results to key sources of uncertainty, including extreme events of different severities, technology and parameter settings in the economic model.

What I found odd was they made no allowance for increasing demand for beer over a 90 year period, despite mentioning in the second sentence that

(G)lobal demand for resource-intensive animal products (meat and dairy) processed foods and alcoholic beverages will continue to grow with rising incomes.

Extreme events – severity and frequency

As stated in point 2, the paper uses different RCP scenarios. These featured prominently in the IPCC AR5 of 2013 and 2014. They go from RCP2.6, which is the most aggressive mitigation scenario, through to RCP8.5, the non-policy scenario, which projects around 4.5°C of warming from 1850-1870 through to 2100, or about 3.8°C of warming from 2010 to 2090.

Figure 1 has two charts. On the left it shows that extreme events will increase in intensity with temperature. RCP2.6 will do very little, but RCP8.5 would result, by the end of the century, in events 6 times as intense as today. The problem is that for up to 1.5°C of warming there appears to be no noticeable change whatsoever. That is, for about the same amount of warming as the world has experienced from 1850-2010 per HADCRUT4, there will be no change. Beyond that, things take off. How the models empirically project well beyond known experience for a completely different scenario defeats me. It could be largely based on their modelling assumptions, which are in turn strongly tainted by their beliefs in CAGW. There is no reality check that their models are not falling apart, or reliant on arbitrary non-linear parameters.

The right-hand chart shows that extreme events are projected to increase in frequency as well. Under RCP2.6 there is a ~4% chance of an extreme event, rising to ~31% under RCP8.5. Again, there is an issue of projecting well beyond any known range.

Fig 2: Average barley yield shocks during extreme events

The paper assumes that the current geographical distribution and area of barley cultivation is maintained. They have modelled, for 2099 relative to the 1981-2010 base, a gridded average yield change at 0.5° x 0.5° resolution to create four colorful world maps, one for each of the four RCP emissions scenarios. At the equator, each grid cell is about 56 x 56 km for an area of 3100 km², or 1200 square miles. Of course, nearer the poles the area diminishes significantly. This is quite a fine level of detail for projections based on 30 years of data, applied to radically different circumstances 90 years in the future. Map a) shows the results for RCP8.5. On average, yields are projected to be 17% down. As Paul Homewood showed in a post on the 17th, this projected yield fall should be put in the context of a doubling of yields per hectare since the 1960s.
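
As a rough check on that grid-cell arithmetic, here is a minimal sketch (the figure of roughly 111 km per degree of latitude is a standard approximation, not something taken from the paper):

```python
import math

def grid_cell_area_km2(lat_deg, res_deg=0.5):
    """Approximate area of a res_deg x res_deg grid cell centred at latitude lat_deg."""
    km_per_deg_lat = 111.3                                     # roughly constant
    km_per_deg_lon = 111.3 * math.cos(math.radians(lat_deg))   # shrinks towards the poles
    return (res_deg * km_per_deg_lat) * (res_deg * km_per_deg_lon)

print(round(grid_cell_area_km2(0)))    # ~3100 km2 (about 1200 sq miles) at the equator
print(round(grid_cell_area_km2(60)))   # roughly half that at 60 degrees latitude
```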

This increase in productivity has often been solely ascribed to improvements in seed varieties (see Norman Borlaug), mechanization and use of fertilizers. These have undoubtedly had a large part to play in this productivity improvement. But also important is that agriculture has become more intensive. Forty years ago it was clear that there was a distinction between the intensive farming of Western Europe and the extensive farming of the North American prairies and the Russian steppes. It was not due to better soils or climate in Western Europe. This difference can be staggering. In the Soviet Union about 30% of agricultural output came from around 1% of the available land. These were the private plots on which workers on the state and collective farms could produce their own food and sell the surplus in local markets.

Looking at chart a in Figure 2, there are wide variations about this average global decrease of 17%.

In North America, Montana and North Dakota have areas where barley shocks during extreme years will lead to mean yield changes over 90% higher than normal, and the surrounding areas have >50% higher than normal. But go less than 1000 km north into Canada, to the Calgary/Saskatoon area, and there are small decreases in yields.

In Eastern Bolivia – the part due north of Paraguay – there is the biggest patch of >50% reductions in the world. Yet 500-1000 km away there is a north-south strip (probably just 56 km wide) with less than a 5% change.

There is a similar picture in Russia. On the Kazakhstani border there are areas of >50% increases, but in a thinly populated band further north and west, going from around Kirov to Southern Finland, there are massive decreases in yields.

Why, over the course of decades, those with increasing yields would not increase output, and those with decreasing yields would not switch to something else, defeats me. After all, if overall yields are decreasing due to frequent extreme weather events, farmers would be losing money, and those farmers who do well when overall yields are down would be making extraordinary profits.

A Weird Economic Assumption

Building up to looking at costs, there is a strange assumption.

(A)nalysing the relative changes in shares of barley use, we find that in most case barley-to-beer shares shrink more than barley-to-livestock shares, showing that food commodities (in this case, animals fed on barley) will be prioritized over luxuries such as beer during extreme events years.

My knowledge of farming and beer is limited, but I believe that cattle can be fed on things other than barley – for instance grass, silage and sugar beet. Yet beers require precise quantities of barley and hops of certain grades.

Further, cattle feed is a large part of the cost of a kilo of beef or a litre of milk. But it takes around 250-400g of malted barley to produce a litre of beer. The current wholesale price of malted barley is about £215 a tonne, or 5.4 to 8.6p a litre. About the cheapest 4% alcohol lager I can find in a local supermarket is £3.29 for 10 x 250ml bottles, or £1.32 a litre. Taking off 20% VAT and excise duty leaves around 30p a litre for raw materials, manufacturing costs, packaging, manufacturer’s margin, transportation, supermarket’s overhead and supermarket’s margin. For comparison, four pints (2.276 litres) of fresh milk costs £1.09 in the same supermarket, working out at 48p a litre. This carries no excise duty or VAT. It might have greater costs due to refrigeration, but I would suggest it costs more to produce, and that feed is far more than 5p a litre.
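
A minimal sketch of that arithmetic, using the figures quoted above. The beer duty rate of roughly 19p per litre per 1% ABV is my assumption for the then-current UK rate, not a figure from the paper:

```python
# Barley cost per litre of lager, from the figures quoted above
barley_price_per_kg = 215 / 1000                   # £215 a tonne of malted barley
for grams in (250, 400):
    cost = barley_price_per_kg * grams / 1000
    print(f"{grams} g of malted barley: {cost * 100:.1f}p per litre")   # 5.4p and 8.6p

# Residual (ex-tax) value of the cheapest supermarket lager
retail_per_litre = 3.29 / 2.5                      # 10 x 250 ml bottles for £3.29
ex_vat = retail_per_litre / 1.20                   # strip off 20% VAT
duty = 0.19 * 4                                    # ~19p per litre per 1% ABV, at 4% (assumed rate)
print(f"left for everything else: ~{(ex_vat - duty) * 100:.0f}p per litre")
# of the order of the ~30p quoted above, depending on the exact duty rate
```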

I know that a reasonable 0.5 litre bottle of ale is £1.29 to £1.80 a bottle in the supermarkets I shop in, but it is the cheapest beers that will likely suffer the biggest percentage rise from an increase in raw material prices. Due to taxation and other costs, large changes in raw material prices will have very little impact on final retail prices. Even less so in pubs, where a British pint (568ml) varies from a £4 to £7 a litre equivalent.

That is, the assumption is the opposite of what would happen in a free market. In the face of a shortage, farmers will replace barley with other forms of cattle feed, whilst beer manufacturers will absorb the extra cost of barley.

Disparity in Costs between Countries

The most bizarre claim in the article is contained in the central column of Figure 4, which looks at the projected increases in the cost of a 500 ml bottle of beer in US dollars. Chart h shows this for the most extreme RCP 8.5 model.

I was very surprised that a global general equilibrium model would come up with such huge disparities in costs after 90 years. After all, my understanding is that these models use utility-maximizing consumers, profit-maximizing producers, perfect information and instantaneous adjustment. Clearly there is something very wrong with this model. So I decided to compare where I live in the UK with neighbouring Ireland.

In the UK and Ireland there are similar high taxes on beer, with Ireland being slightly higher. Both countries have lots of branches of the massive discount chain Aldi, which lists some products on its websites aldi.co.uk and aldi.ie. In Ireland a 500 ml can of Sainte Etienne Lager is €1.09, or €2.18 a litre, or £1.92 a litre. In the UK it is £2.59 for 4 x 440ml cans, or £1.59 a litre. The lager is about 21% more in Ireland. But the tax difference should only be about 15% on a 5% beer (Sainte Etienne is 4.8%). Aldi are not making bigger profits in Ireland; they may just have higher costs there, or lower margins on other items. It is also comparing a single can against a multipack. So pro-rata the £1.80 ($2.35) bottle of beer in the UK would be about $2.70 in Ireland. Under the RCP 8.5 scenario, the models predict the bottle of beer to rise by $1.90 in the UK and $4.84 in Ireland. Strip out the excise duty and VAT and the price differential goes from zero to $2.20.

Now suppose you were a small beer manufacturer in England, Wales or Scotland. If beer was selling for $2.20 more in Ireland than in the UK, would you not want to stick 20,000 bottles in a container and ship it to Dublin?

If the researchers really understood the global brewing industry, they would realize that there are major brands sold across the world. Many are brewed in a number of countries to the same recipe. It is the barley that is shipped to the brewery, where equipment and techniques are identical to those in other parts of the world. The researchers seem to have failed to get away from their computer models to conduct field work in a few local bars.

What can be learnt from this?

When making projections well outside of any known range, the results must be sense-checked. Clearly, although the researchers have used an economic model, they have not understood the basics of economics. People are not dumb automatons waiting for some official to tell them to change their patterns of behavior in response to changing circumstances. They notice changes in the world around them and respond. A few seize the opportunities presented and can become quite wealthy as a result. Farmers have been astute enough to note mounting losses and change how and what they produce. There is also competition between regions. For example, in the 1960s Brazil produced over half the world’s coffee. The major region for production in Brazil was centered around Londrina in the North-East of Parana state. Despite straddling the Tropic of Capricorn, every few years there would be a spring-time frost which would destroy most of the crop. By the 1990s most of the production had moved north to Minas Gerais, well out of the frost belt. The rich fertile soils around Londrina are now used for other crops, such as soya, cassava and mangoes. It was not out of human design that the movement occurred, but simply that the farmers in Minas Gerais could make bumper profits in the frost years.

The publication of this article shows a problem with peer review. Nature Plants is basically a biology journal. Reviewers are not likely to have specialist skills in climate models or economic theory, though those selected should have experience with agricultural models. If peer review is literally that, it will fail anyway in an inter-disciplinary subject, where the participants do not have a general grounding in all the disciplines. In this paper it is not just economics, but knowledge of product costing as well. It is academic superiors from the specialisms that are required for review, not inter-disciplinary peers.

Kevin Marshall

 

Why can’t I reconcile the emissions to achieve 1.5C or 2C of Warming?

Introduction

At heart I am a beancounter. That is, when presented with figures, I like to understand how they are derived. When it comes to the claims about the quantity of GHG emissions that are required to exceed 2°C of warming I cannot get even close, unless I make a series of assumptions, some of which are far from robust. Applying the same set of assumptions, I cannot derive emissions consistent with restraining warming to 1.5°C.

Further, the combined impact of all the assumptions is to create a storyline that appears to me to be only as empirically valid as an infinite number of other storylines. These include a large number of plausible scenarios where much greater emissions can be emitted before 2°C of warming is reached, or where (based on alternative assumptions) 2°C of irreversible warming is already in the pipeline.

Maybe an expert climate scientist will clearly show the errors of this climate sceptic, and use it as a means to convince the doubters of climate science.

What I will attempt here is something extremely unconventional in the world of climate. That is, I will try to state all the assumptions made, highlighting them clearly. Further, I will show my calculations and give clear references, so that anyone can easily follow the arguments.

Note – this is a long post. The key points are contained in the Conclusions.

The aim of constraining warming to 1.5 or 2°C

The Paris Climate Agreement was brought about by the UNFCCC. On their website they state.

The Paris Agreement central aim is to strengthen the global response to the threat of climate change by keeping a global temperature rise this century well below 2 degrees Celsius above pre-industrial levels and to pursue efforts to limit the temperature increase even further to 1.5 degrees Celsius. 

The Paris Agreement states in Article 2

1. This Agreement, in enhancing the implementation of the Convention, including its objective, aims to strengthen the global response to the threat of climate change, in the context of sustainable development and efforts to eradicate
poverty, including by:

(a) Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;

Translating this aim into mitigation policy requires quantification of global emissions targets. The UNEP Emissions Gap Report 2017 has a graphic showing estimates of emissions before the 1.5°C or 2°C warming levels are breached.

Figure 1 : Figure 3.1 from the UNEP Emissions Gap Report 2017

The emissions are of all greenhouse gas emissions, expressed in billions of tonnes of CO2 equivalents. From 2010, the quantities of emissions before either 1.5°C or 2°C is breached are respectively about 600 GtCO2e and 1000 GtCO2e. It is these two figures that I cannot reconcile when using the same assumptions to calculate both. My failure to reconcile is not just a minor difference. Rather, on the same assumptions that 1000 GtCO2e can be emitted before 2°C is breached, 1.5°C is already in the pipeline. In establishing the problems I encounter I will endeavor to clearly state the assumptions made and look at a number of examples.

 Initial assumptions

1 A doubling of CO2 will eventually lead to 3°C of rise in global average temperatures.

This despite the 2013 AR5 WG1 SPM stating on page 16

Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C

And stating in a footnote on the same page.

No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.

2 Achieving full equilibrium climate sensitivity (ECS) takes many decades.

This implies that at any point in the last few years, or in any year in the future, there will be warming in progress (WIP).

3 Including other greenhouse gases adds to warming impact of CO2.

Empirically, the IPCC’s Fifth Assessment Report based its calculations on 2010 when CO2 levels were 390 ppm. The AR5 WG3 SPM states in the last sentence on page 8

For comparison, the CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)

As with climate sensitivity, the assumption is the middle of an estimated range. In this case over one fifth of the range has the full impact of GHGs being less than the impact of CO2 on its own.

4 All the rise in global average temperature since the 1800s is due to rise in GHGs. 

5 An increase in GHG levels will eventually lead to warming unless action is taken to remove those GHGs from the atmosphere, generating negative emissions. 

These are restrictive assumptions made for ease of calculations.

Some calculations

First a calculation to derive the CO2 levels commensurate with 2°C of warming. I urge readers to replicate these for themselves.
From a Skeptical Science post by Dana1981 (Dana Nuccitelli) “Pre-1940 Warming Causes and Logic” I obtained a simple equation for a change in average temperature T for a given change in CO2 levels.

ΔT(CO2) = λ × 5.35 × ln(B/A)
where A = CO2 level in year A (expressed in parts per million), and B = CO2 level in year B.
I use λ = 0.809, so that if B = 2A, ΔT(CO2) = 3.00

Pre-industrial CO2 levels were 280ppm. 3°C of warming is generated by CO2 levels of 560 ppm, and 2°C of warming is when CO2 levels reach 444 ppm.
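
For readers who want to replicate these numbers, here is a minimal sketch of the equation and its inverse (the function names are mine):

```python
import math

LAMBDA = 0.809                      # chosen so that a doubling of CO2 gives 3.00°C

def warming(ppm, base=280):
    """Eventual warming in °C for a CO2 level of ppm, relative to pre-industrial levels."""
    return LAMBDA * 5.35 * math.log(ppm / base)

def ppm_for(delta_t, base=280):
    """CO2 level commensurate with delta_t °C of eventual warming."""
    return base * math.exp(delta_t / (LAMBDA * 5.35))

print(round(warming(560), 2))       # 3.0  - a doubling from 280 ppm
print(round(ppm_for(2.0)))          # ~444 ppm for 2°C
print(round(ppm_for(1.5)))          # ~396 ppm for 1.5°C
```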

From the Mauna Loa data, CO2 levels averaged 407 ppm in 2017. Given assumption 3, and further assuming the impact of other GHGs is unchanged (adding the equivalent of about 40 ppm, the difference between the 430 ppm CO2-eq and 390 ppm CO2 figures for 2010/11), 2°C of warming would have been surpassed in around 2016, when CO2 levels averaged 404 ppm. The actual rise in global average temperatures, per HADCRUT4, is about half that amount; hence the assumption that the impact of a rise in CO2 takes an inordinately long time for the actual warming to reveal itself. Even with the assumption that 100% of the warming since around 1800 is due to the increase in GHG levels, warming in progress (WIP) is about the same as revealed warming. Yet the Sks article argues that some of the early twentieth century warming was due to factors other than the rise in GHG levels.

This is the crux of the reconciliation problem. From this initial calculation, and based on the assumptions, the 2°C warming threshold has recently been breached, and by the same assumptions 1.5°C was likely breached in the 1990s. There are a lot of assumptions here, so I could have missed something or made an error. Below I go into some key examples that verify this initial conclusion. Then I look at how, by introducing a new assumption, it is claimed that 2°C of warming is not yet reached.

100 Months and Counting Campaign 2008

Trust, yet verify has a post We are Doomed!

This tracks through the Wayback Machine to look at the now defunct 100monthsandcounting.org campaign, sponsored by the left-wing New Economics Foundation. The archived “Technical Note” states that the 100 months was from August 2008, making the end date November 2016. The choice of 100 months turns out to be spot-on with the actual data for CO2 levels; the central estimate of the CO2 equivalent of all GHG emissions by the IPCC in 2014 based on 2010 GHG levels (and assuming other GHGs are not impacted); and the central estimate for Equilibrium Climate Sensitivity (ECS) used by the IPCC. That is, take the 430 ppm CO2e and add the 14 ppm needed to reach the 444 ppm commensurate with 2°C of warming.
Maybe that was just a fluke, or were they giving a completely misleading forecast? The 100 Months and Counting Campaign was definitely not agreeing with the UNEP Emissions Gap Report 2017 in making its claim. But were they correctly interpreting what the climate consensus was saying at the time?

The 2006 Stern Review

The “Stern Review: The Economics of Climate Change” (archived access here) was commissioned to provide benefit-cost justification for what became the Climate Change Act 2008. From the Summary of Conclusions:

The costs of stabilising the climate are significant but manageable; delay would be dangerous and much more costly.

The risks of the worst impacts of climate change can be substantially reduced if greenhouse gas levels in the atmosphere can be stabilised between 450 and 550ppm CO2 equivalent (CO2e). The current level is 430ppm CO2e today, and it is rising at more than 2ppm each year. Stabilisation in this range would require emissions to be at least 25% below current levels by 2050, and perhaps much more.

Ultimately, stabilisation – at whatever level – requires that annual emissions be brought down to more than 80% below current levels. This is a major challenge, but sustained long-term action can achieve it at costs that are low in comparison to the risks of inaction. Central estimates of the annual costs of achieving stabilisation between 500 and 550ppm CO2e are around 1% of global GDP, if we start to take strong action now.

If we take assumption 1 that a doubling of CO2 levels will eventually lead to 3.0°C of warming and from a base CO2 level of 280ppm, then the Stern Review is saying that the worst impacts can be avoided if temperature rise is constrained to 2.1 – 2.9°C, but only in the range of 2.5 to 2.9°C does the mitigation cost estimate of 1% of GDP apply in 2006. It is not difficult to see why constraining warming to 2°C or lower would not be net beneficial. With GHG levels already at 430ppm CO2e, and CO2 levels rising at over 2ppm per annum, the 2°C of warming level of 444ppm (or the rounded 450ppm) would have been exceeded well before any global reductions could be achieved.
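
A quick check of that range, re-using the warming equation from the earlier sketch:

```python
import math

def warming(ppm, base=280):                  # same equation as before, ECS = 3.0
    return 0.809 * 5.35 * math.log(ppm / base)

for ppm_co2e in (450, 500, 550):
    print(ppm_co2e, "->", round(warming(ppm_co2e), 1))
# 450 -> 2.1, 500 -> 2.5, 550 -> 2.9 degrees of eventual warming
```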

There is a curiosity in the figures. When the Stern Review was published in 2006 estimated GHG levels were 430ppm CO2e, as against CO2 levels for 2006 of 382ppm. The IPCC AR5 states

For comparison, the CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)

In 2011, when CO2 levels averaged 10ppm higher than in 2006 at 392ppm, estimated GHG levels were the same. This is a good example of why one should take note of uncertainty ranges.

IPCC AR4 Report Synthesis Report Table 5.1

A year before the 100 Months and Counting campaign, the IPCC produced its Fourth Assessment Synthesis Report. On page 67 of the 2007 Synthesis Report (pdf) there is Table 5.1 of emissions scenarios.

Figure 2 : Table 5.1. IPCC AR4 Synthesis Report Page 67 – Without Footnotes

I inputted the various CO2-eq concentrations into my amended version of Dana Nuccitelli’s magic equation and compared the calculated warming to that in Table 5.1.

Figure 3 : Magic Equation calculations of warming compared to Table 5.1. IPCC AR4 Synthesis Report

My calculations of warming are the same as those of the IPCC to one decimal place except for the last two calculations. Why are there these rounding differences? From a little fiddling in Excel, it would appear to me that the IPCC worked from a warming per doubling of 3 calculated to two decimal places, whilst my version of the formula works to four decimal places.
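
A minimal sketch of the comparison behind Figure 3. The CO2-eq boundaries and equilibrium warming figures are those I read off Table 5.1, so readers should check them against the original:

```python
import math

def warming(ppm, base=280):                  # the magic equation, ECS = 3.0
    return 0.809 * 5.35 * math.log(ppm / base)

# (CO2-eq concentration in ppm, equilibrium warming in °C as given in Table 5.1)
table_5_1 = [(445, 2.0), (490, 2.4), (535, 2.8), (590, 3.2), (710, 4.0), (855, 4.9), (1130, 6.1)]

for ppm, ipcc_value in table_5_1:
    print(ppm, round(warming(ppm), 1), ipcc_value)
# Agreement to one decimal place except the last two rows (4.8 vs 4.9 and 6.0 vs 6.1)
```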

Note the following

  • Other GHGs are translatable into CO2 equivalents. Once translated, they can be treated as if they were CO2.
  • There is no time period in this table. The 100 Months and Counting Campaign merely punched in existing numbers and made a forecast of when GHG levels would reach those commensurate with 2°C of warming.
  • There is no mention of a 1.5°C warming scenario. If constraining warming to 1.5°C did not seem credible in 2007, why should it be credible in 2014 or 2017, when CO2 levels are higher?

IPCC AR5 Report Highest Level Summary

I believe that the underlying estimates of emissions to achieve the 1.5°C or 2°C of warming used by the UNFCCC and UNEP come from the IPCC Fifth Assessment Report (AR5), published in 2013/4. At this stage I introduce a couple of empirical assumptions from IPCC AR5.

6 Cut-off year for historical data is 2010 when CO2 levels were 390 ppm (compared to 280 ppm in pre-industrial times) and global average temperatures were about 0.8°C above pre-industrial times.

Using the magic equation above, and the 390 ppm CO2 levels, there is around 1.4°C of warming due from CO2. Given 0.8°C of revealed warming to 2010, the residual “warming-in-progress” was 0.6°C.

The highest level of summary in AR5 is a Presentation to summarize the central findings of the Summary for Policymakers of the Synthesis Report, which in turn brings together the three Working Group Assessment Reports. This Presentation can be found at the bottom right of the IPCC AR5 Synthesis Report webpage. Slide 33 of 35 (reproduced below as Figure 4) gives the key policy point: 1000 GtCO2 of emissions from 2011 onwards will lead to 2°C of warming. This is very approximate but concurs with the UNEP Emissions Gap Report.

Figure 4 : Slide 33 of 35 of the AR5 Synthesis Report Presentation.

Now for some calculations.

1900 GtCO2 raised CO2 levels by 110 ppm (390-280). 1 ppm = 17.3 GtCO2

1000 GtCO2 will raise CO2 levels by 60 ppm (450-390).  1 ppm = 16.7 GtCO2

Given the obvious roundings of the emissions figures, the numbers fall out quite nicely.

Last year I divided CDIAC CO2 emissions (from the Global Carbon Project) by Mauna Loa CO2 annual mean growth rates (data) to produce the following.

Figure 5 : CDIAC CO2 emissions estimates (multiplied by 3.664 to convert from carbon units to CO2 units) divided by Mauna Loa CO2 annual mean growth rates in ppm.

17 GtCO2 for a 1 ppm rise is about right for the last 50 years.

To raise CO2 levels from 390 to 450 ppm needs about 17 x (450-390) = 1020 GtCO2. Slide 33 is a good approximation of the CO2 emissions to raise CO2 levels by 60 ppm.
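
The conversion factors above, as a sketch:

```python
# Implied conversion factors between cumulative emissions and the rise in CO2 concentration
past = 1900 / (390 - 280)            # ~17.3 GtCO2 per ppm for historical emissions
future = 1000 / (450 - 390)          # ~16.7 GtCO2 per ppm implied by slide 33
print(round(past, 1), round(future, 1))

# Emissions needed to raise CO2 from 390 to 450 ppm at ~17 GtCO2 per ppm
print(17 * (450 - 390))              # 1020 GtCO2, close to the 1000 GtCO2 of slide 33
```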

But there are issues

  • If ECS = 3.00, and it takes 17 GtCO2 of emissions to raise CO2 levels by 1 ppm, then it is only 918 (17*54) GtCO2 to achieve 2°C of warming. Alternatively, if in future we assume 1000 GtCO2 to achieve 2°C of warming, it will take 18.5 GtCO2 to raise CO2 levels by 1 ppm, as against 17 GtCO2 in the past. It is only by using 450 ppm as commensurate with 2°C of warming that past and future stack up. (These calculations are sketched in the code after this list.)
  • If ECS = 3,  from CO2 alone 1.5°C would be achieved at 396 ppm or a further 100 GtCO2 of emissions. This CO2 level was passed in 2013 or 2014.
  • The calculation falls apart if other GHGs are included. GHG levels are assumed equivalent to 430 ppm CO2-eq in 2011. Therefore, with all GHGs considered, the 2°C of warming would be achieved with 238 GtCO2e of emissions ((444-430)*17), and the 1.5°C of warming was likely passed in the 1990s.
  • If actual warming from pre-industrial times to 2010 was 0.8°C, ECS = 3, and the rise in all GHG levels was equivalent to a rise in CO2 from 280 to 430 ppm, then the residual “warming-in-progress” (WIP) was just over 1°C. That is, the WIP exceeds the total revealed warming of well over a century. If the short-term temperature response is half or more of the value of full ECS, it would imply even nineteenth-century emissions are yet to have their full impact on global average temperatures.
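
A minimal sketch of the calculations in the bullet points above, combining the warming equation with the ~17 GtCO2 per ppm factor:

```python
import math

LAMBDA = 0.809                                     # ECS of 3.0 per doubling, as before
GT_PER_PPM = 17                                    # ~17 GtCO2 of emissions per 1 ppm rise

def warming(ppm, base=280):
    return LAMBDA * 5.35 * math.log(ppm / base)

def ppm_for(delta_t, base=280):
    return base * math.exp(delta_t / (LAMBDA * 5.35))

ppm_2c = round(ppm_for(2.0))                       # ~444 ppm for 2°C
print((ppm_2c - 390) * GT_PER_PPM)                 # 918 GtCO2 from CO2 alone, not 1000
print(round(ppm_for(1.5)))                         # ~396 ppm for 1.5°C, passed in 2013/14
print((ppm_2c - 430) * GT_PER_PPM)                 # 238 GtCO2e once other GHGs are included
print(round(warming(430) - 0.8, 1))                # ~1.1°C of warming-in-progress to 2010
```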

What justification is there for effectively disregarding the impact of other greenhouse emissions when it was not done previously?

This offset is to be found in Section C – The Drivers of Climate Change – in the AR5 WG1 SPM, in particular the breakdown, with uncertainties, in Table SPM.5. Another story is how AR5 reached the very same conclusion as AR4 WG1 SPM page 4 on the impact of negative anthropogenic forcings, but with a different methodology, hugely different estimates of aerosols, and very different uncertainty bands. Further, these historical estimates are only for the period 1951-2010, whilst the starting date for the 1.5°C or 2°C of warming is 1850.

From this a further assumption is made when considering AR5.

7 The estimated historical impact of other GHG emissions (Methane, Nitrous Oxide…) has been effectively offset by the cooling impacts of aerosols and precursors. It is assumed that this will carry forward into the future.

UNEP Emissions Gap Report 2014

Figure 1 above is figure 3.1 from the UNEP Emissions GAP Report 2017. The equivalent report from 2014 puts this 1000 GtCO2 of emissions in a clearer context. First a quotation with two accompanying footnotes.

As noted by the IPCC, scientists have determined that an increase in global temperature is proportional to the build-up of long-lasting greenhouse gases in the atmosphere, especially carbon dioxide. Based on this finding, they have estimated the maximum amount of carbon dioxide that could be emitted over time to the atmosphere and still stay within the 2 °C limit. This is called the carbon dioxide emissions budget because, if the world stays within this budget, it should be possible to stay within the 2 °C global warming limit. In the hypothetical case that carbon dioxide was the only human-made greenhouse gas, the IPCC estimated a total carbon dioxide budget of about 3 670 gigatonnes of carbon dioxide (Gt CO2 ) for a likely chance3 of staying within the 2 °C limit . Since emissions began rapidly growing in the late 19th century, the world has already emitted around 1 900 Gt CO2 and so has used up a large part of this budget. Moreover, human activities also result in emissions of a variety of other substances that have an impact on global warming and these substances also reduce the total available budget to about 2 900 Gt CO2 . This leaves less than about 1 000 Gt CO2 to “spend” in the future4 .

3 A likely chance denotes a greater than 66 per cent chance, as specified by the IPCC.

4 The Working Group III contribution to the IPCC AR5 reports that scenarios in its category which is consistent with limiting warming to below 2 °C have carbon dioxide budgets between 2011 and 2100 of about 630-1 180 GtCO2

The numbers do not fit unless the impact of other GHGs is ignored. As found from slide 33, there is 2900 GtCO2 to raise atmospheric CO2 levels by 170 ppm, of which 1900 GtCO2 has been emitted already. The additional marginal impact of other historical greenhouse gases, of 770 GtCO2, is ignored. If those GHG emissions were part of historical emissions as the statement implies, then that marginal impact would be equivalent to an additional 45 ppm (770/17) on top of the 390 ppm CO2 level. That is not far off the IPCC estimated CO2-eq concentration in 2011 of 430 ppm (uncertainty range 340 to 520 ppm). But by the same measure 3670 GtCO2 would increase CO2 levels by 216 ppm (3670/17), from 280 to 496 ppm. With ECS = 3, this would eventually lead to a temperature increase of almost 2.5°C.
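
Again as a sketch, using the ~17 GtCO2 per ppm factor and the warming equation:

```python
import math

GT_PER_PPM = 17

def warming(ppm, base=280):                        # the magic equation, ECS = 3.0
    return 0.809 * 5.35 * math.log(ppm / base)

print(round(2900 / GT_PER_PPM))                    # ~170 ppm: the CO2-only budget, 280 to ~450 ppm
print(round(770 / GT_PER_PPM))                     # ~45 ppm: the marginal impact of the other GHGs
print(round(3670 / GT_PER_PPM))                    # ~216 ppm: the full budget, 280 to ~496 ppm
print(round(warming(280 + 3670 / GT_PER_PPM), 1))  # ~2.5°C of eventual warming
```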

The equivalent graphic from the 2014 report is Figure ES.1, reproduced below as Figure 6.

Figure 6 : From the UNEP Emissions Gap Report 2014 showing two emissions pathways to constrain warming to 2°C by 2100.

Note that this graphic goes through to 2100; only uses the CO2 emissions; does not have quantities; and only looks at constraining temperatures to 2°C.  To achieve the target requires a period of negative emissions at the end of the century.

A new assumption is thus required to achieve emissions targets.

8 Achieving the 1.5°C or 2°C warming targets likely requires many years of net negative emissions at the end of the century.

A Lower Level Perspective from AR5

A simple pie chart does not seem to make sense. Maybe my conclusions are contradicted by the more detailed scenarios? The next level of detail is to be found in table SPM.1 on page 22 of the AR5 Synthesis Report – Summary for Policymakers.

Figure 7 : Table SPM.1 on Page 22 of AR5 Synthesis Report SPM, without notes. Also found as Table 3.1 on Page 83 of AR5 Synthesis Report 

The comment for <430 ppm (the level of 2010) is “Only a limited number of individual model studies have explored levels below 430 ppm CO2-eq.” Footnote j reads:

In these scenarios, global CO2-eq emissions in 2050 are between 70 to 95% below 2010 emissions, and they are between 110 to 120% below 2010 emissions in 2100.

That is, net global emissions are negative in 2100. Not something mentioned in the Paris Agreement, which only has pledges through to 2030. It is consistent with the UNEP Emissions GAP report 2014 Table ES.1. The statement does not refer to a particular level below 430 ppm CO2-eq, which equates to 1.86°C. So how is 1.5°C of warming not impossible without massive negative emissions? In over 600 words of notes there is no indication. For that you need to go to the footnotes to the far more detailed Table 6.3 AR5 WG3 Chapter 6 (Assessing Transformation Pathways – pdf) Page 431. Footnote 7 (Bold mine)

Temperature change is reported for the year 2100, which is not directly comparable to the equilibrium warming reported in WGIII AR4 (see Table 3.5; see also Section 6.3.2). For the 2100 temperature estimates, the transient climate response (TCR) is the most relevant system property.  The assumed 90% range of the TCR for MAGICC is 1.2–2.6 °C (median 1.8 °C). This compares to the 90% range of TCR between 1.2–2.4 °C for CMIP5 (WGI Section 9.7) and an assessed likely range of 1–2.5 °C from multiple lines of evidence reported in the WGI AR5 (Box 12.2 in Section 12.5).

The major reason that 1.5°C of warming is not impossible (though still more unlikely than likely), even with CO2-equivalent levels that should produce 2°C+ of warming being around for decades, is that the full warming impact takes so long to filter through. Further, Table 6.3 puts peak CO2-eq levels for the 1.5-1.7°C scenarios at 465-530 ppm, or eventual warming of 2.2 to 2.8°C. Climate WIP is the difference. But in 2018 WIP might be larger than all the revealed warming since 1870, and certainly since the mid-1970s.
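
Checking those peak levels with the same equation:

```python
import math

def warming(ppm, base=280):                        # ECS = 3.0 equation, as before
    return 0.809 * 5.35 * math.log(ppm / base)

for peak_ppm in (465, 530):
    print(peak_ppm, "->", round(warming(peak_ppm), 1))   # 2.2°C and 2.8°C of eventual warming
```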

Within AR5, when talking about constraining warming to 1.5°C or 2.0°C, it is only the warming which is estimated to be revealed in 2100. There is no indication of how much warming in progress (WIP) there is in 2100 under the various scenarios, therefore I cannot reconcile back the figures. However, it would appear that the 1.5°C figure relies upon the impact of GHGs on warming failing to come through for a period of over 100 years, as (even netting off other GHGs against the negative impact of aerosols) by 2100 CO2 levels would have been above 400 ppm for over 85 years, and for most of those years significantly above that level.

Conclusions

The original aim of this post was to reconcile the emissions sufficient to prevent 1.5°C or 2°C of warming being exceeded through some calculations based on a series of restrictive assumptions.

  • ECS = 3.0°C, despite the IPCC not giving a best estimate across the different studies. The likely range is 1.5°C to 4.5°C.
  • All the temperature rise since the 1800s is assumed due to rises in GHGs. There is evidence that this might not be the case.
  • Other GHGs are netted off against aerosols and precursors. Given that “CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)” when CO2 levels were around 390 ppm, this assumption is far from robust.
  • Achieving full equilibrium takes many decades. So long in fact that the warming-in-progress (WIP) may currently exceed all the revealed warming in over 150 years, even based on the assumption that all of that revealed historical warming is due to rises in GHG levels.

Even with these assumptions, keeping warming within 1.5°C or 2°C seems to require two further assumptions that were not recognized a few years ago. First is to assume net negative global emissions for many years at the end of the century. Second is to talk about projected warming in 2100 rather than the warming that results from achieving full ECS.

The whole exercise appears to rest upon a pile of assumptions. Amending the assumptions one way means admitting that 1.5°C or 2°C of warming is already in the pipeline; amending them the other way means admitting climate sensitivity is much lower. Yet there appears to be such a large range of empirical assumptions to choose from that there could be a very large number of scenarios that are equally as valid as the ones used in the UNEP Emissions Gap Report 2017.

Kevin Marshall

Milk yield losses down to heat stress

Last week, Wattsupwiththat posted “Climate Study: British Children Won’t Know What Milk Tastes Like”. Whilst I greatly admire Anthony Watts, I think this title entirely misses the point.
It refers to an article at the Conversation, “How climate change will affect dairy cows and milk production in the UK – new study”, by two authors at Aberystwyth University, West Wales. This in turn is a write-up of a PLOS One article published in May, “Spatially explicit estimation of heat stress-related impacts of climate change on the milk production of dairy cows in the United Kingdom“. The reason I disagree is that this paper shows that, even with very restrictive assumptions and large changes in temperature, the unmitigated costs of climate change are very small. The authors actually give some financial figures. Referring to the 2090s, the PLOS One abstract ends:-

In the absence of mitigation measures, estimated heat stress-related annual income loss for this region by the end of this century may reach £13.4M in average years and £33.8M in extreme years.

The introduction states

The value of UK milk production is around £4.6 billion per year, approximately 18% of gross agricultural economic output.

For the UK on average, Annual Milk Loss (AML) due to heat stress is projected to rise from 40 kg/cow to over 170 kg/cow. Based on current yields, that is from 0.5% to 1.8% in average years. The most extreme region is the south-east, where average AML is projected to rise from 80 kg/cow to over 320 kg/cow – from 1% to 4.2% in average years. That is, if UK dairy farmers totally ignore the issue of heat stress for decades, the industry could see average revenue losses from heat stress rise from around £23m to £85m. The financial losses are based on a constant price of £0.30 per litre.
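
A sketch of how those loss figures fall out of the percentages and the £4.6 billion production value quoted above:

```python
uk_milk_value = 4.6e9                              # £4.6 billion of annual UK milk output

for label, loss_share in (("current average year", 0.005), ("2090s average year", 0.018)):
    print(f"{label}: ~£{uk_milk_value * loss_share / 1e6:.0f}m")
# ~£23m now, rising to ~£83m (the paper quotes £85m) - well under 2% of output
```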

With modeled estimates over very long periods, it is worth checking the assumptions.

Price per litre of milk

The losses are based upon a constant price of £0.30 a litre. But prices can fluctuate according to market conditions. Data on annual average prices paid is available from AHDB Dairy, “a levy-funded, not-for-profit organisation working on behalf of Britain’s dairy farmers.” Each month since 2004, the annual average prices paid by dairies over a certain size are reported, available here – that is 35-55 dairies in any one month. I have taken the minimum and maximum prices reported in June each year, shown in Figure 1.

Even annual average milk prices fluctuate depending on market conditions. If milk production is reduced in summer months due to an unusual heat wave causing heat stress, ceteris paribus, prices will rise. It could be that a short-term reduction in supply would increase average farming profits if prices are not fixed. It is certainly not valid to assume fixed prices over many decades.

Dumb farmers

From the section in the paper “Milk loss estimation methods”:

It was assumed that temperature and relative humidity were the same for all systems, and that no mitigation practices were implemented. We also assumed that cattle were not significantly different from the current UK breed types, even though breeding for heat stress tolerance is one of the proposed measures to mitigate effects of climate change on dairy farms.

This paper is looking at over 70 years into the future. If heatwaves were increasing, so that yields were falling and cattle were suffering, is it valid to assume that farmers would ignore the problem? Would they not learn from areas elsewhere with more extreme summer heatwaves, such as central Europe? After all, in the last 70 years (since the late 1940s) breeding has increased milk yields phenomenally (from AHDB data, milk yields per cow have increased 15% from 2001/2 to 2016/7 alone), so a bit of breeding to cope with heatwaves should be a minor issue.

The Conversation article states the implausible assumptions in a concluding point.

These predictions assume that nothing is done to mitigate the problems of heat stress. But there are many parts of the world that are already much hotter than the UK where milk is produced, and much is known about what can be done to protect the welfare of the animals and minimise economic losses from heat stress. These range from simple adaptations, such as the providing shade, to installing fans and water misting systems.

Cattle breeding for increased heat tolerance is another potential, which could be beneficial for maintaining pasture-based systems. In addition, changing the location of farming operations is another practice used to address economic challenges worldwide.

What is not recognized here is that farmers in a competitive market have to adapt in the light of new information to stay in business. That is, the authors are telling farmers what they will already be fully aware of, to the extent that their farms conform to the average. Effectively assuming people are dumb, then telling them the obvious, is hardly going to get those people to take on board one’s viewpoints.

Certainty of global warming

The Conversation article states

Using 11 different climate projection models, and 18 different milk production models, we estimated potential milk loss from UK dairy cows as climate conditions change during the 21st century. Given this information, our final climate projection analysis suggests that average ambient temperatures in the UK will increase by up to about 3.5℃ by the end of the century.

This warming is consistent with the IPCC global average warming projections using the RCP8.5 non-mitigation policy scenario. There are two alternative, indeed opposite, perspectives that might lead rational decision-makers to think this quantity of warming is less than certain.

First, the mainstream media, where the message being put out is that the Paris Climate Agreement can constrain global warming to 2°C or 1.5°C above the levels of the mid-nineteenth century. With around 1°C of warming already, if it is still possible to constrain additional global warming to 0.5°C, why should one assume that 3.5°C of warming for the UK is more than a remote possibility in planning?

Second, one could look at the track record of global warming projections from the climate models. The real global warming scare kicked off with James Hansen’s testimony to Congress in 1988. Despite actual greenhouse gas emissions being closely aligned with the scenario of rapid emissions growth, actual global warming has been most closely aligned with the scenario that assumed the impact of GHG emissions was eliminated by 2000. Now, if farming decision-makers still want to believe that emissions are the major driver of global warming, they can find plenty of excuses for this failure linked from here. But rational decision-makers tend to look at the track record, and thus take consistently wrong projections with more than a pinch of salt.

Planning horizons

The Conversation article concludes

(W)e estimate that by 2100, heat stress-related annual income losses of average size dairy farms in the most affected regions may vary between £2,000-£6,000 and £6,000-£14,000 (in today’s value), in average and extreme years respectively. Armed with these figures, farmers need to begin planning for a hotter UK using cheaper, longer-term options such as planting trees or installing shaded areas.

This compares to the current UK average annual dairy farm business income of £80,000, according to the PLOS One article.

There are two sides to investment decision-making: potential benefits – in this case the avoidance of profit loss – netted against the potential costs. AHDB Dairy gives some figures for the average herd size in the UK. In 2017 it averaged 146 cows, almost double the 75 cows of 1996. In South East England, that is potentially £41-£96 a cow, if the average herd size there is the same as the UK average. If the costs rose in a linear fashion, that would be around 50p to just over a pound per cow per year in the most extremely affected region. But the PLOS One article states that costs will rise exponentially. That means there will be no business justification for even considering heat stress for the next few decades.

For that investment to be worthwhile, it would require the annual cost of mitigating heat stress to be less than these amounts. Most crucially, rational decision-makers apply some sort of NPV calculation to investments. This includes a discount rate. If most of the costs are to be incurred decades from now – beyond the working lives of the current generation of farmers – then there is no rational reason to take into account heat stress even if global warming is certain.
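
To illustrate the discounting point, here is a minimal sketch. The 5% discount rate is my illustrative assumption, not a figure from the paper; the loss is the £96-a-cow extreme-year figure for the worst-affected region, applied to a 146-cow herd:

```python
# Present value today of a heat-stress loss suffered in an extreme year in 2100
herd_size = 146                    # average UK herd size in 2017
loss_in_2100 = 96 * herd_size      # ~£14,000 for an average-sized farm in the worst region
discount_rate = 0.05               # illustrative discount rate (my assumption)
years_ahead = 82                   # 2018 to 2100

present_value = loss_in_2100 / (1 + discount_rate) ** years_ahead
print(round(present_value))        # ~£260 - trivial against ~£80,000 of annual farm income
```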

Summary

The paper “Spatially explicit estimation of heat stress-related impacts of climate change on the milk production of dairy cows in the United Kingdom” makes a number of assumptions to reach its headline conclusion of decreased milk yields due to heat stress by the end of the century. The assumption of constant prices defies the economic reality that prices fluctuate with changing supply. The assumption of dumb farmers defies the reality of a competitive market, where they have to respond to new information to stay in business. The assumption of 3.5°C of warming in the UK can be taken as unlikely, either from the belief that the Paris Climate Agreement will constrain further warming to 1°C or less, or from the inability of past climate projections to conform to the actual pattern of warming, which should give more than reasonable doubt that current projections are credible. Further, the authors seem to be unaware of the planning horizons of normal businesses. Where there will be no significant costs for decades, applying any sort of discount rate to potential investments will mean instant dismissal, by the current generation of farmers, of any consideration of heat stress issues at the end of the century.

Taking all these assumptions together makes one realize that it is quite dangerous for specialists in another field to take the long-range projections of climate models and apply them to their own areas, without also considering the economic and business realities.

Kevin Marshall 

Plan B Environmental Activists deservedly lose High Court battle over Carbon Target

Breaking News

From Belfast Telegraph & itv.com and Science Matters (my bold)

Lawyers for the charity previously argued the Government should have, in light of the current scientific consensus, gone further than its original target of reducing carbon levels by 2050 to 80% of those present in 1990.

They said the decision not to amend the 2050 target put the UK in breach of its international obligations under the Paris Agreement on Climate Change and was influenced by the Government’s belief that a “more ambitious target was not feasible”.

At a hearing on July 4, Jonathan Crow QC told the court: “The Secretary of State’s belief that he needs to have regard to what is feasible, rather than what is necessary, betrays a fundamental misunderstanding of the scheme of the 2008 Act and must be quashed.

“All of the individual claimants are deeply concerned about climate change.”

The barrister argued the Secretary of State’s “continuing refusal” to amend the 2050 target means the UK is playing “Russian roulette with two bullets, instead of one”.

But, refusing permission for a full hearing, Mr Justice Supperstone said Plan B Earth’s arguments were based on an “incorrect interpretation” of the Paris Agreement.

He said: “In my view the Secretary of State was plainly entitled … to refuse to change the 2050 target at the present time.

In a previous post I wrote that

Taking court action to compel Governments to enforce the Paris Climate Agreement is against the real spirit of that Agreement. Controlling global GHG emissions consistent with 2°C, or 1.5°C is only an aspiration, made unachievable by allowing developing countries to decide for themselves when to start reducing their emissions. ……. Governments wanting to both be players on the world stage and serve their countries give the appearance of taking action of controlling emissions, whilst in substance doing very little. This is the real spirit of the Paris Climate Agreement. To take court action to compel a change of policy action in the name of that Agreement should be struck off on that basis.

Now I would not claim Mr Justice Supperstone supports my particular interpretation of the Paris Agreement as an exercise in political maneuvering, allowing Governments to appear to be doing one thing whilst doing another. But we are both agreed that Plan B Earth’s arguments were based on an “incorrect interpretation” of the Paris Agreement.

The UNFCCC PDF of the Paris Agreement is here to check. Then check against my previous post, which argues that if the Government acted in the true spirit of the Paris Agreement, it would suspend the costly Climate Change Act 2008 and put efforts into being seen to be doing something about climate change. Why?

  • China was praised for joining the emissions party by proposing to stop increasing emissions by 2030.
  • Very few of the INDC emissions will make real large cuts in emissions.
  • The aggregate forecast impact of all the INDC submissions, if fully enacted, will see global emissions slightly higher than today in 2030, when, according to the UNEP Emissions Gap Report 2017, for the 1.5°C warming target they need to be 30% lower in just 12 years’ time. Paris Agreement Article 4.1 states something that is empirically incompatible with that aim.

In order to achieve the long-term temperature goal set out in Article 2, Parties aim to reach global peaking of greenhouse gas emissions as soon as possible, recognizing that peaking will take longer for developing country Parties,

  • The Paris Agreement allows “developing” countries to keep on increasing their emissions. With these countries accounting for about two-thirds of global emissions (and over 80% of the global population), 30% emissions cuts may not be achieved even if all the developed countries cut emissions to zero in 12 years.
  • Nowhere does the Paris Agreement recognize the many countries who rely on fossil fuels for a large part of their national income, for instance in the Middle East and Russia. Cutting emissions to near zero by mid-century would impoverish them within a generation. Yet, with the developing countries also relying on cheap fossil fuels to promote high levels of economic growth, for political stability and to meet the expectations of their people (e.g. Pakistan, Indonesia, India, Turkey), most of the world can carry on for decades whilst some enlightened Governments in the West damage the economic futures of their countries for appearances’ sake. Activists trying to dictate Government policy through the Courts in a supposedly democratic country ain’t going to change their minds.

Plan B have responded to the judgement. I find this statement interesting.

Tim Crosland, Director of Plan B and former government lawyer, said: ‘We are surprised and disappointed by this ruling and will be lodging an appeal.

‘We consider it clear and widely accepted that the current carbon target is not compatible with the Paris Agreement. Neither the government nor the Committee on Climate Change suggested during our correspondence with them prior to the claim that the target was compatible.

Indeed, it was only in January of this year that the Committee published a report accepting that the Paris Agreement was ‘likely to require’ a more ambitious 2050 target

What I find interesting is that the only point that a lawyer has for contradicting Mr Justice Supperstone’s statement – that Plan B Earth’s arguments were based on an “incorrect interpretation” of the Paris Agreement – is a reference to a report by the Committee on Climate Change. From the CCC website:

The Committee on Climate Change (the CCC) is an independent, statutory body established under the Climate Change Act 2008.

Our purpose is to advise the UK Government and Devolved Administrations on emissions targets and report to Parliament on progress made in reducing greenhouse gas emissions and preparing for climate change.

The Committee is set up for partisan aims and, from its latest report, appears to be quite zealous in fulfilling those aims. Even as a secondary source (to a document which is easy to read) it should be treated as tainted. But I would suggest that to really understand the aims of the Paris Agreement you need to read the original and put it in the context of the global empirical and political realities. From my experience, the climate enlightened will keep on arguing for ever, and get pretty affronted when anyone tries to confront their blinkered perspectives.

Kevin Marshall

Why Plan B’s Climate Court Action should be dismissed

Summary

Taking court action to compel Governments to enforce the Paris Climate Agreement is against the real spirit of that Agreement. Controlling global GHG emissions consistent with 2°C, or 1.5°C, is only an aspiration, made unachievable by allowing developing countries to decide for themselves when to start reducing their emissions. In the foreseeable future, the aggregate impact of emissions reduction policies will fail to even reduce global emissions. Therefore, costly emissions reductions policies will always end up being net harmful to the countries where they are imposed. Governments wanting to both be players on the world stage and serve their countries give the appearance of taking action of controlling emissions, whilst in substance doing very little. This is the real spirit of the Paris Climate Agreement. To take court action to compel a change of policy action in the name of that Agreement should be struck off on that basis. As an example, I use activist group Plan B’s case before the British courts to compel the British Government to make even deeper emissions cuts than those under the Climate Change Act 2008.

Plan B’s Case at the High court

Last week BBC’s environment analyst Roger Harrabin reported “Court action to save young from climate bill”.

The campaigners – known collectively as Plan B – argue that if the UK postpones emissions cuts, the next generation will be left to pick up the bill.

It is seeking permission from a judge to launch formal legal action.

The government has promised to review its climate commitments.

A spokesperson said it was committed to tackling emissions.

But Plan B believes ministers may breach the law if they don’t cut emissions deeper – in line with an international agreement made in Paris at the end of 2015 to restrict global temperature rise to as close to 1.5C as possible.

From an obscure website, CrowdJustice:

Plan B claim that the government is discriminating against the young by failing to cut emissions fast enough. During the hearing, they argued that the UK government’s current target of limiting global temperature rises to 2°C was not ambitious enough, and that the target ought to be lowered to 1.5°C, in line with the Paris Agreement that the UK ratified in 2015. Justice Supperstone postponed the decision until a later date.

Plan B on their own website state

Plan B is supporting the growing global movement of climate litigation, holding governments and corporations to account for climate harms, fighting for the future for all people, all animals and all life on earth.

What is the basis of discrimination?

The widely-accepted hypothesis is that unless global greenhouse gas (GHG) emissions are reduced to near zero in little more than a generation, global average temperature will rise more than 2°C above pre-industrial levels. A further hypothesis is that this in turn will cause catastrophic climate change. The case for policy action rests on both hypotheses being true. On that basis, failure to reduce global GHG emissions will imperil the young.

A further conjecture is that if all signatories to the Paris Agreement fulfil their commitments, this will be sufficient to prevent 1.5°C or 2°C of warming. There are a number of documents to consider.

First are the INDC submissions (i.e. Nation States’ communications of their intended nationally determined contributions), collected together at the UNFCCC website. Most are in English. To find a country’s submission I suggest clicking on the relevant letter of the alphabet.

Second, to prevent my readers being sent on a wild goose chase through small country submissions, some perspective is needed on the relative magnitudes of emissions. A clear secondary source (though based only on CO2 emissions) is BP Data Analysis Global CO2 Emissions 1965-2017. More data on GHG emissions are available from the EU Commission’s EDGAR Emissions data and the World Resources Institute CAIT Climate Data Explorer.

Third is the empirical scale of the policy issue. The UNEP Emissions Gap Report 2017 (pdf), published in October last year, is the latest attempt to estimate that scale. The key is the diagram reproduced below.

The total of all commitments will still see aggregate emissions rising into the future. That is, the aggregate impact of all the nationally determined contributions is for emissions to rise well into the future. So the response is somehow to persuade Nation States to strengthen their vague commitments to such an extent that the aggregate emissions pathway becomes sufficient to prevent 1.5°C or 2°C of warming.

The relevant way to do this ought to be through the Paris Agreement.

Fourth is the Adoption of the Paris Agreement itself, as held on the UNFCCC website (pdf).

 

Paris Agreement key points

I would draw readers to Article 2.1(a)

  • Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;

Article 2.2

  • This Agreement will be implemented to reflect equity and the principle of common but differentiated responsibilities and respective capabilities, in the light of different national circumstances.

My interpretation is that the required cumulative aggregate reduction will only be achieved if those countries that (in the light of their national circumstances) fail to follow the aggregate pathway are offset by other countries cutting their emissions by a greater amount. It is a numbers game. It is not just a case of compelling some countries to meet the 1.5°C pathway, but of compelling them to exceed it by some margin.

I would also draw readers to Article 4.1

In order to achieve the long-term temperature goal set out in Article 2, Parties aim to reach global peaking of greenhouse gas emissions as soon as possible, recognizing that peaking will take longer for developing country Parties, and to undertake rapid reductions thereafter in accordance with best available science, so as to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century, on the basis of equity, and in the context of sustainable development and efforts to eradicate poverty.

My reading is that any country defined as “developing” is committed only to an aim of reducing emissions after its emissions have peaked. When it chooses to do so depends on a number of criteria. There is no clear mechanism for deciding this, and no surrender of decision-making by countries to external bodies.

Implications of the Paris Agreement

Many developing countries are increasing their emissions. The agreement does not compel them to change course in the near future. Empirically, that means that to achieve the goals, the aggregate emission reductions of the countries cutting their emissions must be such that they cancel out the emissions increases in the developing countries. Using EDGAR figures for GHG emissions, and the Rio Declaration 1992 definition of developing countries (called Non-Annex countries), I estimate they accounted for 64% of global GHG emissions in 2012, the latest year available.

 

All other sources sum to 19 GtCO2e, the same as the emissions gap between the unconditional INDC case and the 1.5°C case. This presents a stark picture. Even if emissions from all other sources are eliminated by 2030, AND the developing countries do not increase their emissions to 2030, cumulative global emissions are very likely to exceed the levels consistent with the 1.5°C and the 2°C warming targets unless the developing countries reduce their emissions rapidly after 2030. That is, close down fairly new fossil fuel power stations; remove from the road millions of cars, lorries and buses; and rein in the aspirations of the emerging middle classes to improved lifestyles. The reality is quite the opposite. No new policies are on the horizon that would significantly reduce global GHG emissions, either from the developed countries in the next couple of years, or from the developing countries starting in just over a decade from now. Reading the comments in the INDC submissions (e.g. Indonesia, Pakistan, India), a major reason is that these governments are not willing to sacrifice the futures of their young by risking economic growth and political stability to cut their emissions. So rather than taking the UK Government to a UK court, Plan B should be persuading those Governments who do not share their views (most of them) of the greater importance of their case. After all, unlike proper pollution (such as smoke), it does not matter where GHG emissions are generated in relation to the people affected.
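
As a back-of-the-envelope sketch of that numbers game, using only the two figures quoted above (the 64% Non-Annex share and the 19 GtCO2e from all other sources):

```python
# Back-of-the-envelope sketch of the "numbers game", using only the two figures
# quoted in the text above; not a reproduction of the EDGAR dataset.
non_annex_share = 0.64   # estimated Non-Annex share of global GHG emissions, 2012
other_sources = 19.0     # GtCO2e from all other (Annex) sources

global_total = other_sources / (1 - non_annex_share)   # implied global total
non_annex = global_total - other_sources                # implied Non-Annex emissions

print(f"Implied global GHG emissions, 2012: {global_total:.0f} GtCO2e")  # ~53
print(f"Implied Non-Annex emissions, 2012:  {non_annex:.0f} GtCO2e")     # ~34
```

On these figures, even eliminating the 19 GtCO2e entirely leaves around 34 GtCO2e of Non-Annex emissions, before any growth to 2030.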

It gets worse. It could be argued that the countries most affected by mitigation policies are not the poorest, seeing economic growth and political stability smashed, but the fossil-fuel-dependent countries. McGlade and Ekins 2015 (The geographical distribution of fossil fuels unused when limiting global warming to 2°C) estimated that, to achieve even the 2°C target, 75% of proven reserves and 100% of new discoveries must be left in the ground. Using these global estimates and the BP estimates of proven fossil fuel reserves, I created the following apportionment by major countries.

 

The United States has the greatest proven fossil fuel reserves in terms of potential emissions. But if one looks at fossil fuel revenues relative to GDP, it is well down the league table. To achieve emission targets, countries such as Russia, Saudi Arabia, Kuwait, Turkmenistan, Iraq and Iran must all be persuaded to wind down their sales of fossil fuels long before the reserves are exhausted, or before markets in developing countries dry up. To do this in a generation would decimate their economies. However, given the increase in fossil fuel usage by developing countries, and the failure of developed countries to significantly reduce emissions through policy, this hardly seems a large risk.

However, this misses the point. The spirit of the Paris Agreement is not to cut emissions, but to be seen to be doing something about climate change. For instance, China was held up by the likes of President Obama for aiming both to peak its emissions by 2030 and to reduce emissions per unit of GDP. The USA and the EU did this decades ago, so China’s commitments are little more than a business-as-usual scenario. Many other countries’ emissions reduction “targets” are attainable without much actual policy. For example, Brazil’s commitment is to “reduce greenhouse gas emissions by 43% below 2005 levels in 2030.” It sounds impressive, until one reads this comment under “Fairness and Ambition”:

Brazil’s current actions in the global effort against climate change represent one of the largest undertakings by any single country to date, having reduced its emissions by 41% (GWP-100; IPCC SAR) in 2012 in relation to 2005 levels.

Brazil intends to reduce emissions by a further 2% compared to 2005 levels. Very few targets are more than soft targets relative to current or projected trends. Yet the outcome of COP21 Paris enabled headlines throughout the world to proclaim a deal had been reached “to limit global warming to “well below” 2C, aiming for 1.5C”. It enables most Governments to juggle being key players on a world stage, have alarmists congratulating them on doing their bit to save the planet, whilst making sure that serving the real needs of their countries is not greatly impeded. It is mostly win-win, as long as countries do not really believe that the targets are achievable. This is where Britain has failed. Under Tony Blair, when the fever of climate alarmism was at its height, backed up by the political spin of New Labour and a Conservative opposition wanting to ditch its unelectable image, Green activists wrote the Climate Change Act 2008, with its strict targets, and it was passed. Britain swallowed climate alarmism whole, and now, as a country that keeps its promises, is implementing useless and costly policies. But it has kept some form of moderation in policies until now. This is illustrated by a graphic from a Committee on Climate Change report last week, “Reducing UK emissions 2018 – Progress Report to Parliament” (pdf) (and referenced at cliscep).

Whilst emissions have come down in the power sector, they are flat in transport, industry and buildings. For young people, pushing real and deep reductions in these sectors means pushing up the costs of motoring (placing driving a car out of the reach of many), of industry (raising costs relative to other countries – especially the non-policy developing countries) and of buildings, in a country where planning laws make home-owning unaffordable for many and where the cost of renting is very high. On top of this, further savings in the power sector will be ever more costly as the law of diminishing returns sets in. Forcing more urgent policy actions will increase the financial and other burdens on the young people of today, but do virtually nothing to achieve the climate aspirations of the Paris Agreement, as Britain now accounts for less than 1% of global emissions. The Government could be forced out of political fudging to impose policies that will be net harmful to the young and future generations.

Plan B are using an extreme activist interpretation. As reported in Climate Home News after the postponement:

“The UK is not doing enough,” Tim Crosland, director of Plan B told Climate Home News. “The benchmark target is now out of place. We are arguing that it is a breach of human rights.”

The UK has committed to cut emissions by at least 80% of 1990 levels by 2050, with an aim to limit global temperature rise to 2C.

Under the 2008 Climate Change Act, the secretary can revise the target to reflect significant developments in climate change science or in international law or policy.

Plan B want to see the target lowered to be in line with 1.5C, the lower target of the Paris Agreement, which the UK ratified in 2016.

As stated, insofar as the Paris Climate Agreement is a major development of policy, it is one of appearing to do a lot whilst doing very little. By these terms, the stronger case is for repealing the Act, not strengthening its clauses. 

But what if I am wrong, and the Paris Agreement is not just an exercise in appearances? Then it should be recognized that developing countries will only start to reduce their emissions at some time in the future. By implication, for the world to meet the 1.5°C warming limit, developing countries should be pursuing an emissions reduction pathway much steeper than the 25% reduction between 2015 and 2030 implied in the Emissions Gap Report graphic. It should be at least 50%, and nearer 100%, in the next decade. Given that the Climate Change Act was brought in so that Britain could lead the world on climate change, Plan B should be looking for a 100% reduction by the end of the year.

Kevin Marshall

 

Hansen et al 1988 Global Warming Predictions 30 Years on

Last month marked the 30th anniversary of James Hansen’s Congressional Testimony that kicked off the attempts to control greenhouse gas emissions. The testimony was clearly an attempt, by linking human greenhouse gas emissions to dangerous global warming, to influence public policy. Unlike previous attempts (such as by then Senator Al Gore), Hansen’s testimony was hugely successful. But do the scientific projections that underpinned the testimony hold up against the actual data? The key part of that testimony was a graph from the Hansen et al 1988* paper Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model, reproduced below.


Figure 1: Hansen et al 1988 – Figure 3(a) in the Congressional Testimony

Note the language of the title of the paper. This is a forecast of global average temperatures contingent upon certain assumptions. The ambiguous part is the assumptions.

The assumptions of Hansen et al 1988

From the paper.

4. RADIATIVE FORCING IN SCENARIOS A, B AND C

4.1. Trace Gases

  We define three trace gas scenarios to provide an indication of how the predicted climate trend depends upon trace gas growth rates. Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages about 1.5% of current emissions, so the net greenhouse forcing increases exponentially. Scenario B has decreasing trace gas growth rates, such that the annual increase of the greenhouse climate forcing remains approximately constant at the present level. Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.

Scenario A is easy to replicate: each year emissions increase by 1.5% on the previous year. Scenario B assumes that emissions are growing and that policy takes time to be enacted, so a reduction is required to hold the greenhouse forcing increase at the then-current (1987 or 1988) level. Scenario C, one presumes, requires emissions low enough that trace gas levels stop increasing. As trace gas levels were increasing in 1988, and (from Scenario B) continuing emissions at the 1988 level would continue to increase atmospheric levels, emissions by the year 2000 would have to be considerably lower than in 1988. They might be above zero, as small amounts of emissions may not have an appreciable impact on atmospheric levels.

The graph formed Fig. 3. of James Hansen’s testimony to Congress. The caption to the graph repeats the assumptions.

Scenario A assumes continued growth rates of trace gas emissions typical of the past 20 years, i.e., about 1.5% yr-1 emission growth; scenario B has emission rates approximately fixed at current rates; scenario C drastically reduces trace gas emissions between 1990 and 2000.

Scenario B fixes annual emissions at the levels of the late 1980s, whilst Scenario C sees drastic emission reductions.

James Hansen in his speech gave a more succinct description.

We have considered cases ranging from business as usual, which is scenario A, to draconian emission cuts, scenario C, which would totally eliminate net trace gas growth by year 2000.

Note that the resultant warming from fixing emissions at the then-current rate (Scenario B) is much closer to that of Scenario A (emissions growth of +1.5% year-on-year) than to Scenario C, which stops global warming. Yet Scenario B would result from global policy being successfully implemented to stop the rise in global emissions.

Which Scenario most closely fits the Actual Data?

To understand which scenario most closely fits the data, we need to look at the trace gas emissions data. There are a number of sources, which give slightly different results. One source, and that which ought to be the most authoritative, is the IPCC Fifth Assessment Report WG3 Summary for Policymakers graphic SPM.1, reproduced in Figure 2.

 Figure 2 : AR5 WG3 SPM.1 Total annual anthropogenic GHG emissions (GtCO2eq/yr) by groups of gases 1970-2010. FOLU is Forestry and Other Land Use.

Note that in Figure 2 the other greenhouse gases – F-Gases, N2O and CH4 – are expressed in CO2 equivalents. It is very easy to see which of the three scenarios fits. The historical data up until 1988 shows increasing emissions. After that date emissions have continued to increase. Indeed there is some acceleration, stated on the graph comparing 2000-2010 (+2.2%/yr) with 1970-2000 (+1.3%/yr). In 2010 GHG emissions were not similar to those in the 1980s (about 35 GtCO2e) but much higher. By implication, Scenario C, which assumed draconian emissions cuts, is the furthest away from the reality of what has happened. Before considering how closely Scenario A compares to the temperature rise, the question is therefore how closely actual emissions growth compares to the +1.5%/yr of Scenario A.

From my own rough calculations, total GHG emissions from 1990 to 2010 rose about 29%, or 1.3% a year, compared to 41%, or 1.7% a year, in the period 1970 to 1990. Exponential growth of 1.3% is not far short of the 1.5%. The assumed 1.5% growth rate would have resulted in 2010 emissions of 51 GtCO2e instead of the 49 GtCO2e estimated, well within the margin of error. That is, actual trends over 20 years were pretty much the business-as-usual scenario. The narrower measure of CO2 emissions from fossil fuels and industrial sources rose from 1990 to 2010 by about 42%, or 1.8% a year, compared to 51%, or 2.0% a year, in the period 1970 to 1990 – above the Scenario A growth rate.
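
These rough growth rates are easy to check. The sketch below uses my approximate readings of the AR5 WG3 SPM.1 graphic (roughly 27, 38 and 49 GtCO2e for 1970, 1990 and 2010), so the figures are indicative only:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Approximate total GHG emissions (GtCO2e) read from the AR5 WG3 SPM.1 graphic
ghg_1970, ghg_1990, ghg_2010 = 27.0, 38.0, 49.0

print(f"GHG growth 1970-1990: {cagr(ghg_1970, ghg_1990, 20):.1%} per year")  # ~1.7%
print(f"GHG growth 1990-2010: {cagr(ghg_1990, ghg_2010, 20):.1%} per year")  # ~1.3%

# Compounding Scenario A's 1.5% per year from the 1990 level
print(f"2010 emissions at +1.5%/yr from 1990: {ghg_1990 * 1.015 ** 20:.0f} GtCO2e")  # ~51
```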

The breakdown is shown in Figure 3.

Figure 3 : Rough calculations of exponential emissions growth rates from AR5 WG1 SPM Figure SPM.1 

These figures are somewhat out of date. The UNEP Emissions Gap Report 2017 (pdf) estimated GHG emissions in 2016 at 51.9 GtCO2e. This represents a slowdown in emissions growth in recent years.

Figure 4 compares the actual decadal exponential growth trends in estimated GHG emissions (with a linear trend to the 51.9 GtCO2e of emissions in 2016 from the UNEP Emissions Gap Report 2017 (pdf)) with my interpretations of the scenario assumptions. That is, from 1990: Scenario A has 1.5% annual growth in emissions; Scenario B has emissions reducing from 38 to 35 GtCO2e (the 1987 level) in the 1990s and continuing at that level indefinitely; Scenario C has emissions reducing to 8 GtCO2e in the 1990s.

Figure 4 : Hansen et al 1988 emissions scenarios, starting in 1990, compared to actual trends from UNIPCC and UNEP data. Scenario A – 1.5% pa emissions growth; Scenario B – Linear decline in emissions from 38 GtCO2e in 1990 to 35 GtCO2e in 2000, constant thereafter; Scenario C – Linear decline  in emissions from 38 GtCO2e in 1990 to 8 GtCO2e in 2000, constant thereafter. 

This overstates the differences between A and B, as it is the cumulative emissions that matter. From my calculations, although Scenario B 2010 emissions are 68% of Scenario A, cumulative emissions for the period 1991-2010 are 80% of Scenario A.
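
For anyone wanting to check the cumulative comparison, here is a minimal sketch of how the Figure 4 scenario paths and the 68%/80% figures can be reproduced, using the 38 GtCO2e starting point for 1990 stated in the caption:

```python
# Sketch of the Scenario A and B emissions paths from Figure 4 and the
# cumulative comparison for 1991-2010 (1990 starting point of 38 GtCO2e).
years = range(1991, 2011)

# Scenario A: 1.5% annual growth from 38 GtCO2e in 1990
scen_a = [38.0 * 1.015 ** (y - 1990) for y in years]

# Scenario B: linear decline from 38 (1990) to 35 GtCO2e (2000), constant thereafter
scen_b = [max(38.0 - 0.3 * (y - 1990), 35.0) for y in years]

print(f"Scenario B 2010 emissions as a share of A: {scen_b[-1] / scen_a[-1]:.0%}")       # ~68%
print(f"Scenario B cumulative 1991-2010 as a share of A: {sum(scen_b) / sum(scen_a):.0%}")  # ~80%
```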

Looking at cumulative emissions is consistent with the claims from the various UN bodies that limiting global temperature rise to 1.5°C or 2.0°C of warming relative to some baseline is contingent on a certain volume of cumulative emissions not being exceeded. One of the most recent examples is the key graphic from the UNEP Emissions Gap Report 2017.

Figure 5 : Figure ES.2 from the UNEP Emissions Gap Report 2017, showing the projected emissions gap in 2030 relative to 1.5°C or 2.0°C warming targets. 

Warming forecasts against “Actual” temperature variation

Hansen’s testimony was a clear case of political advocacy. By making Scenario B constant the authors are making a bold policy statement. That is, to stop catastrophic global warming (and thus prevent potentially catastrophic changes to climate systems) requires draconian reductions in emissions. Simply maintaining emissions at the levels of the mid-1980s will make little difference. That is due to the forcing being related to the cumulative quantity of emissions.

Given that the data is not quite in line with Scenario A, if the theory is correct, then I would expect:-

  1. The warming trend to be somewhere between Scenario A and Scenario B. Most people accept that the equilibrium climate sensitivity (ECS) of the Hansen model – 4.2°C for a doubling of CO2 – was too high. The IPCC now uses 3°C for ECS. More recent research has it much lower still. However, although the rate of warming might be less, the pattern of warming over time should be similar.
  2. Average temperatures after 2010 to be significantly higher than in 1987.
  3. The rate of warming in the 1990s to be marginally lower than in the period 1970-1990, but still strongly positive.
  4. The rate of warming in the 2000s to be strongly positive and marginally higher than in the 1990s.

From the model’s Scenario C, there seems to be about a five-year lag between changes in emission rates and changes in temperatures. However, looking at the actual temperature data, there is quite a different warming pattern. Five years ago C3 Headlines had a post 2013: The NASA/Hansen Climate Model Prediction of Global Warming Vs. Climate Reality. The main graphic is in Figure 6.

Figure 6 : C3 Headlines – NASA Hansen Prediction Vs Reality

The first thing to note is that the scenario assumptions are incorrect. Not only are they labelled as CO2, rather than GHG, emissions, but they are all stated wrongly. Stating them correctly would show a greater contradiction between forecasts and reality. However, the scenario data appears to be reproduced correctly, and the actual graph appears to be in line with a graphic produced last month by Gavin Schmidt in his defense of Hansen’s predictions.

The data contradicts the forecasts. Although average temperatures are clearly higher than in 1987, they are not in line with the forecast of Scenario A, which is closest to the actual emissions trend. The rise is way below the roughly 70% of the Scenario A warming that would be implied by inputting the lower IPCC climate sensitivity, even after allowing for GHG emissions growth being fractionally below the 1.5% per annum of Scenario A. But the biggest problem is where the main divergence occurred. Rather than warming accelerating slightly in the 2000s (after a possible slowdown in the 1990s), there was no slowdown in the 1990s, but in the 2000s warming either collapsed to zero or massively reduced, depending on the data set used. This is in clear contradiction of the model. Unless there is an unambiguous and verifiable explanation (rather than a bunch of waffly and contradictory excuses), the model should be deemed to be wrong. There could be largely unknown natural factors, or random data noise, that could explain the discrepancy. But equally (and quite plausibly) those same factors could have contributed to the late twentieth century warming.
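
As a rough indication of the scaling I have in mind – a sketch only, assuming warming scales approximately in proportion to the equilibrium climate sensitivity, which is a first-order simplification:

```python
# Rough sketch: scaling Scenario A warming by the ratio of the IPCC central ECS
# estimate to the Hansen model's ECS. Assumes warming scales roughly in
# proportion to ECS, which is only a first-order approximation.
ecs_hansen = 4.2   # deg C per doubling of CO2, Hansen et al 1988 model
ecs_ipcc = 3.0     # deg C per doubling, IPCC central estimate

print(f"Expected warming as a share of Scenario A: about {ecs_ipcc / ecs_hansen:.0%}")  # ~71%
```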

This simple comparison has an important implication for policy. As there is no clear evidence linking most of the observed warming to GHG emissions, by implication there is no clear support for the belief that reducing GHG emissions will constrain future warming. In any case, reducing global GHG emissions is merely an aspiration. As the graphic in Figure 5 clearly demonstrates, over twenty months after the Paris Climate Agreement was signed there is still no prospect of aggregate GHG emissions falling through policy. Hansen et al 1988 is therefore a double failure: both as a scientific forecast and as a tool for policy advocacy in terms of reducing GHG emissions. If only its supporters would recognize this failure, the useless and costly climate policies could be dismantled.

Kevin Marshall

*Hansen, J., I. Fung, A. Lacis, D. Rind, S. Lebedeff, R. Ruedy, G. Russell, and P. Stone, 1988: Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model. J. Geophys. Res., 93, 9341-9364, doi:10.1029/JD093iD08p09341.

Does data coverage impact the HADCRUT4 and NASA GISS Temperature Anomalies?

Introduction

This post started with the title “HADCRUT4 and NASA GISS Temperature Anomalies – a Comparison by Latitude“. After deriving a global temperature anomaly from the HADCRUT4 gridded data, I was intending to compare the results with GISS’s anomalies by 8 latitude zones. However, this opened up an intriguing issue. Are global temperature anomalies impacted by a relative lack of data in earlier periods? This leads to a further issue of whether infilling of the data can be meaningful, and hence be considered to “improve” the global anomaly calculation.

A Global Temperature Anomaly from HADCRUT4 Gridded Data

In a previous post, I looked at the relative magnitudes of early twentieth century and post-1975 warming episodes. In the Hadley datasets, there is a clear divergence between the land and sea temperature data trends post-1980, a feature that is not present in the early warming episode. This is reproduced below as Figure 1.

Figure 1 : Graph of Hadley Centre 7 year moving average temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)

The question that needs to be answered is whether the anomalous post-1975 warming on the land is due to real divergence, or due to issues in the estimation of global average temperature anomaly.

In another post – The magnitude of Early Twentieth Century Warming relative to Post-1975 Warming – I looked at the NASA Gistemp data, which is usefully broken down into 8 Latitude Zones. A summary graph is shown in Figure 2.

Figure 2 : NASA Gistemp zonal anomalies and the global anomaly

This is more detail than the HADCRUT4 data, which is just presented as three zones: the Tropics, plus the Northern and Southern Hemispheres. However, the Hadley Centre, on their HADCRUT4 Data: download page, have, under HadCRUT4 Gridded data: additional fields, a file HadCRUT.4.6.0.0.median_ascii.zip. This contains monthly anomalies for 5° by 5° grid cells from 1850 to 2017. There are 36 bands of latitude and 72 bands of longitude. Over 2016 months, there are over 5.22 million grid cells, but only 2.51 million (48%) have data. From this data, I have constructed a global temperature anomaly. The major issue in the calculation is that the grid cells are of different areas. A grid cell nearest to the equator, at 0° to 5°, has about 23 times the area of a grid cell adjacent to the poles, at 85° to 90°. I used the appropriate weighting for each band of latitude.
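
The sketch below illustrates the area weighting I used. It assumes the monthly gridded anomalies have already been parsed from the ASCII file into a NumPy array of shape (months, 36 latitude bands, 72 longitude bands), with NaN for empty cells; the parsing step and the function name are my own, not part of the Hadley Centre files:

```python
import numpy as np

def global_anomaly(anoms):
    """Area-weighted annual global anomaly from gridded monthly anomalies.

    anoms: array of shape (months, 36, 72), NaN where a grid cell has no data;
    the number of months is assumed to be a whole number of years.
    """
    lat_edges = np.radians(np.arange(-90, 91, 5))
    # Cell area is proportional to the difference of sines of the band's latitude
    # edges; the 0-5 degree band cell is ~23 times the 85-90 degree band cell.
    band_weight = np.sin(lat_edges[1:]) - np.sin(lat_edges[:-1])          # (36,)
    weights = np.broadcast_to(band_weight[:, None], anoms.shape[1:])      # (36, 72)

    w = np.where(np.isnan(anoms), 0.0, weights)          # ignore empty cells
    monthly = np.nansum(anoms * w, axis=(1, 2)) / w.sum(axis=(1, 2))
    return monthly.reshape(-1, 12).mean(axis=1)          # annual means
```

The weights are simply the differences of sines of the latitude edges of each band, which is where the factor of about 23 between the equatorial and polar cells comes from.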

The question is whether I have calculated a global anomaly similar to the Hadley Centre’s. Figure 3 is a reconciliation between the published global anomaly mean (available from here) and my own.

Figure 3 : Reconciliation between HADCRUT4 published mean and calculated weighted average mean from the Gridded Data

Prior to 1910, my calculations are slightly below the HADCRUT 4 published data. The biggest differences are in 1956 and 1915. Overall the differences are insignificant and do not impact on the analysis.

I split the HADCRUT4 temperature data into eight zones of latitude on a similar basis to NASA Gistemp. Figure 4 presents the results on the same basis as Figure 2.

Figure 4 : Zonal surface temperature anomalies and the global anomaly calculated using the HADCRUT4 gridded data.

Visually, there are a number of differences between the Gistemp and HADCRUT4-derived zonal trends.

A potential problem with the global average calculation

The major reason for differences between HADCRUT4 & Gistemp is that the latter has infilled estimated data into areas where there is no data. Could this be a problem?

In Figure 5, I have shown the build-up in global coverage. That is, the percentage of 5° by 5° grid cells with an anomaly in the monthly data.

Figure 5 : Change in the percentage coverage of each zone in the HADCRUT4 gridded data.
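
A similar sketch for the coverage statistic behind Figure 5, under the same assumptions about the array of gridded anomalies; the zone boundaries passed in are illustrative rather than the exact Gistemp zone definitions:

```python
import numpy as np

def zone_coverage(anoms, lat_lo, lat_hi):
    """Annual percentage of grid cells with data in a latitude zone.

    anoms: array of shape (months, 36, 72), NaN where there is no data.
    lat_lo, lat_hi: zone boundaries in degrees latitude (e.g. 64, 90 for the Arctic).
    """
    centres = np.arange(-87.5, 90, 5)                  # latitude band centres
    bands = (centres >= lat_lo) & (centres < lat_hi)   # bands falling in the zone
    zone = anoms[:, bands, :]
    monthly = 100 * np.isfinite(zone).mean(axis=(1, 2))
    return monthly.reshape(-1, 12).mean(axis=1)        # annual average coverage (%)
```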

Figure 5 shows a build-up in data coverage during the late nineteenth and early twentieth centuries. The World Wars (1914-1918 & 1939-1945) had the biggest impact on Southern Hemisphere data collection. This is unsurprising when one considers that they were mostly fought in the Northern Hemisphere, and European powers withdrew resources from their far-flung Empires to protect the mother countries. The only zones with significantly less than 90% grid coverage in the post-1975 warming period are the Arctic and the region below 45S. That is around 19% of the global area.

Finally, comparing comparable zones in the Northern and Southern Hemispheres, the tropics seem to have comparable coverage, whilst for the polar, temperate and mid-latitude areas the Northern Hemisphere seems to have better coverage after 1910.

This variation in coverage can potentially lead to wide discrepancies between any calculated temperature anomalies and a theoretical anomaly based upon data in all the 5° by 5° grid cells. As an extreme example, with my own calculation, if just one of the 72 grid cells in a band of latitude had a figure, then an “average” for that month would have been calculated for a band right around the world, 555km (345 miles) from North to South. In the annual figures by zone, it only requires one of the 72 grid cells, in one of the months, in one of the bands of latitude, to have data to calculate an annual anomaly. For the tropics or the polar areas, that is just one in 4320 data points to create an anomaly. This issue will impact the early twentieth-century warming episode far more than the post-1975 one. Although I would expect the Hadley Centre to have done some data cleanup of the more egregious examples in their calculation, the lack of data in grid cells could potentially have quite random impacts, thus biasing the global temperature anomaly trends to an unknown, but significant, extent. How this could impact the results can be appreciated from an example using NASA GISS Global Maps.
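
Building on the same assumed array of gridded anomalies, a simple check on this sparse-cell problem is to count the populated cell-months behind each zonal annual figure and flag the years that rest on very few data points; the threshold of 50 is arbitrary:

```python
import numpy as np

def sparse_zone_years(anoms, lat_lo, lat_hi, first_year=1850, threshold=50):
    """List (year, populated cell-month count) where a zone-year rests on little data.

    anoms: array of shape (months, 36, 72) with NaN for empty cells.
    threshold: flag years with fewer than this many populated cell-months.
    """
    centres = np.arange(-87.5, 90, 5)
    bands = (centres >= lat_lo) & (centres < lat_hi)
    zone = anoms[:, bands, :]
    counts = np.isfinite(zone).sum(axis=(1, 2)).reshape(-1, 12).sum(axis=1)
    return [(first_year + i, int(c)) for i, c in enumerate(counts) if 0 < c < threshold]
```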

NASA GISS Global Maps Temperature Trends Example

NASA GISS Global Maps from GHCN v3 Data provide maps with the calculated change in average temperatures. I have run the maps to compare annual data for 1940 with a baseline of 1881-1910, capturing much of the early twentieth-century warming. I have run the maps at both the 1200km and 250km smoothing.

Figure 6 : NASA GISS Global anomaly Map and average anomaly by Latitude comparing 1940 with a baseline of 1881 to 1910 and a 1200km smoothing radius

Figure 7 : NASA GISS Global anomaly Map and average anomaly by Latitude comparing 1940 with a baseline of 1881 to 1910 and a 250km smoothing radius. 

With respect to the maps in figures 6 & 7

  • There is no apparent difference in the sea data between the 1200km and 250km smoothing radius, except in the polar regions with more cover in the former. The differences lie in the land area.
  • The grey areas with insufficient data all apply to the land or ocean areas in polar regions.
  • Figure 6, with 1200km smoothing, has most of the land infilled, whilst the 250km smoothing shows the lack of data coverage for much of South America, Africa, the Middle East, South-East Asia and Greenland.

Even with these land-based differences in coverage, it is clear from either map that at any latitude there are huge variations in calculated average temperature change. For instance, take 40N. This line of latitude is north of San Francisco on the US West Coast and clips Philadelphia on the East Coast. On the other side of the Atlantic, Madrid, Ankara and Beijing are at about 40N. There are significant points on the line of latitude with estimated warming greater than 1°C (e.g. California), whilst at the same time in Eastern Europe cooling may have exceeded 1°C in the period. More extreme is 60N (southern Alaska, Stockholm, St Petersburg), where the difference in temperature change along the line of latitude is over 3°C. This compares to a calculated global rise of 0.40°C.

This lack of data may have contributed (along with a faulty algorithm) to the differences in the zonal mean charts by latitude. The 1200km smoothing radius chart bears little relation to the 250km one. For instance:-

  •  1200km shows 1.5C warming at 45S, 250km about zero. 45S cuts through South Island, New Zealand.
  • From the equator to 45N, 1200km shows a rise from 0.5°C to over 2.0°C; 250km shows a drop from less than 0.5°C to near zero, then a rise to 0.2°C. At around 45N lie Ottawa, Maine, Bordeaux, Belgrade, Crimea and the most northerly point of Japan.

The differences in the NASA GISS maps, in a period when available data covered only around half of the 2592 5° by 5° grid cells, indicate huge differences in trends between different areas. As a consequence, trying to interpolate warming trends from one area to adjacent areas appears to give quite different results for trends by latitude, depending on the smoothing radius used.

Conclusions and Further Questions

The issue I originally focussed upon was the relative size of the early twentieth-century warming compared to the post-1975 warming. The greater amount of warming in the later period seemed to be due to the greater warming on land, which covers just 30% of the total global area. The sea surface warming phases appear to be pretty much the same.

The issue I then focussed upon was a data issue. The early twentieth century had much less data coverage than the period after 1975. Further, the Southern Hemisphere had worse data coverage than the Northern Hemisphere, except in the Tropics. This means that in my calculation of a global temperature anomaly from the HADCRUT4 gridded data (which in aggregate was very similar to the published HADCRUT4 anomaly), the average by latitude will not be comparing like with like in the two warming periods. In particular, in the early twentieth century, a calculation by latitude will not average right the way around the globe, but only over a limited selection of bands of longitude. On average this was about half, but there are massive variations. This would be alright if the changes in anomalies were roughly the same along each latitude. But an examination of NASA GISS global maps for a period covering the early twentieth-century warming phase reveals that trends in anomalies at the same latitude vary widely from place to place. This implies that there could be large, but unknown, biases in the data.

I do not believe the analysis ends here. There are a number of areas that I (or others) can try to explore.

  1. Does the NASA GISS infilling of the data get us closer to, or further away from, what a global temperature anomaly would look like with full data coverage? My guess, based on the extreme example of the Antarctica trends (discussed here), is that the infilling will move away from the more perfect trend. The data could show otherwise.
  2. Are the changes in data coverage on land more significant than the global average or less? Looking at CRUTEM4 data could resolve this question.
  3. Would anomalies based upon similar grid coverage after 1900 give different relative trend patterns to the published ones based on dissimilar grid coverage?

Whether I get the time to analyze these is another issue.

Finally, the problem of trends varying considerably and quite randomly across the globe is the same issue that I found with land data homogenisation, discussed here and here. To derive a temperature anomaly for a grid cell, it is necessary to make the data homogeneous. In standard homogenisation techniques, it is assumed that the underlying trends in an area are pretty much the same. Therefore, any differences in trend between adjacent temperature stations will be treated as the result of data imperfections. I found numerous examples where there were likely real differences in trend between adjacent temperature stations. Homogenisation will, therefore, eliminate real but local climatic trends. Averaging incomplete global data, where the missing data could contain unknown regional trends, may cause biases at a global scale.

Kevin Marshall

 

 

More Coal-Fired Power Stations in Asia

A lovely feature of the GWPF site is its extracts of articles related to all aspects of climate and related energy policies. Yesterday the GWPF extracted from an opinion piece in the Hong Kong-based South China Morning Post A new coal war frontier emerges as China and Japan compete for energy projects in Southeast Asia.
The GWPF’s summary:-

Southeast Asia’s appetite for coal has spurred a new geopolitical rivalry between China and Japan as the two countries race to provide high-efficiency, low-emission technology. More than 1,600 coal plants are scheduled to be built by Chinese corporations in over 62 countries. It will make China the world’s primary provider of high-efficiency, low-emission technology.

A summary point in the article is not entirely accurate. (Italics mine)

Because policymakers still regard coal as more affordable than renewables, Southeast Asia’s industrialisation continues to consume large amounts of it. To lift 630 million people out of poverty, advanced coal technologies are considered vital for the region’s continued development while allowing for a reduction in carbon emissions.

Replacing a less efficient coal-fired power station with one using the latest technology will reduce carbon (i.e. CO2) emissions per unit of electricity produced. In China, this replacement process may mean that efficiency savings outstrip the growth in power supply from fossil fuels. But in the rest of Asia, the new coal-fired power stations will be mostly additional capacity in the coming decades, so they will lead to an increase in CO2 emissions. It is this additional capacity that will be primarily responsible for driving the economic growth that will lift the poor out of extreme poverty.
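
As a rough illustration of the efficiency point only: the emission factor and plant efficiencies below are round-number assumptions of mine, not figures from the article:

```python
# Illustrative only: CO2 per MWh of electricity for an older subcritical plant
# versus a newer high-efficiency plant. The emission factor and efficiencies
# are assumed round numbers, not figures from the article.
EMISSION_FACTOR = 95.0   # kg CO2 per GJ of coal energy (assumed)
GJ_PER_MWH = 3.6         # 1 MWh = 3.6 GJ

def co2_per_mwh(thermal_efficiency):
    """Tonnes of CO2 emitted per MWh of electricity generated."""
    return GJ_PER_MWH / thermal_efficiency * EMISSION_FACTOR / 1000

old_plant = co2_per_mwh(0.35)   # assumed efficiency of an older subcritical plant
new_plant = co2_per_mwh(0.45)   # assumed efficiency of a newer high-efficiency plant
print(f"Old plant: {old_plant:.2f} tCO2/MWh, new plant: {new_plant:.2f} tCO2/MWh")
print(f"Reduction per unit of electricity: {1 - new_plant / old_plant:.0%}")
```

Per unit of electricity the saving is real, but where the new plants are additional capacity rather than replacements, total emissions still rise.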

The newer technologies are important for other types of emissions, namely the particulate emissions that have caused high levels of choking pollution and smog in many cities of China and India. By using the new technologies, other countries can avoid the worst excesses of this pollution, whilst still using a cheap fuel available from many different sources of supply. The thrust in China will likely be to replace the high-pollution power stations with new technologies, or to adapt them to reduce the emissions and increase efficiencies. Politically, it is a different way of raising living standards and quality of life than increasing real disposable income per capita.

Kevin Marshall

 

HADCRUT4, CRUTEM4 and HADSST3 Compared

In the previous post, I compared early twentieth-century warming with the post-1975 warming in the Berkeley Earth Global temperature anomaly. From a visual inspection of the graphs, I determined that the greater warming in the later period is due to more land-based warming, as the warming in the oceans (70% of the global area) was very much the same. The Berkeley Earth data ends in 2013, so does not include the impact of the strong El Niño event in the last three years.

The Global average temperature series page of the Met Office Hadley Centre Observation Datasets has the average annual temperature anomalies for CRUTEM4 (land-surface air temperature), HADSST3 (sea-surface temperature) and HADCRUT4 (combined). From these datasets, I have derived the graph in Figure 1.

Figure 1 : Graph of Hadley Centre annual temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)

  Comparing the early twentieth-century with 1975-2010,

  • Land warming is considerably greater in the later period.
  • Combined land and sea warming is slightly more in the later period.
  • Sea surface warming is slightly less in the later period.
  • In the early period, the surface anomalies for land and sea have very similar trends, whilst in the later period, the warming of the land is considerably greater than the sea surface warming.

The impact is more clearly shown with 7 year centred moving average figures in Figure 2.

Figure 2 : Graph of Hadley Centre 7 year moving average temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)
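
For anyone wanting to reproduce the smoothing, a 7-year centred moving average is a one-liner in pandas, assuming the annual anomalies have already been loaded into a Series indexed by year (the loading step is omitted):

```python
import pandas as pd

def centred_7yr_average(annual: pd.Series) -> pd.Series:
    """7-year centred moving average of a Series of annual anomalies indexed by year."""
    # window=7, center=True places each average against the middle (4th) year
    return annual.rolling(window=7, center=True).mean()
```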

This is not just a feature of the HADCRUT dataset. NOAA Global Surface Temperature Anomalies for land, ocean and combined show similar patterns. Figure 3 is on the same basis as Figure 2.

Figure 3 : Graph of NOAA 7 year moving average temperature anomalies for Land, Ocean and Combined.

The major common feature is that the estimated land temperature anomalies have shown a much greater warming trend than the sea surface anomalies since 1980, but no such divergence existed in the early twentieth century warming period. Given that the temperature data sets are far from complete in terms of coverage, and the data is of variable quality, is this divergence a reflection of the true average temperature anomalies, as would be derived from far more complete and accurate data? There are a number of alternative possibilities that need to be investigated to help determine (using beancounter terminology) whether the estimates are a true and fair reflection of the picture that more perfect data and techniques would provide. My list might be far from exhaustive.

  1. The sea-surface temperature set understates the post-1975 warming trend due to biases within the data set.
  2. The spatial distribution of data changed considerably over time. For instance, in recent decades more data has become available from the Arctic, a region with the largest temperature increases in both the early twentieth century and post-1975.
  3. Land data homogenization techniques may have suppressed differences in climate trends where data is sparser. Alternatively, due to relative differences in climatic trends between nearby locations increasing over time, the further back in time homogenization goes, the more accentuated these differences and therefore the greater the suppression of genuine climatic differences. These aspects I discussed here and here.
  4. There is deliberate manipulation of the data to exaggerate recent warming. Having looked at numerous examples three years ago, this is a perspective that I do not believe to have had any significant impact. However, simply believing something not to be the case, even with examples, does not mean that it is not there.
  5. Strong beliefs about how the data should look have, over time and multiple data adjustments created biases within the land temperature anomalies.

What I do believe is that an expert opinion as to whether this divergence between the land and sea surface anomalies is a “true and fair view” of the actual state of affairs can only be reached by a detailed examination of the data. Jumping to conclusions – which is evident from many people across the broad spectrum of opinions in the catastrophic anthropogenic global warming debate – will fall short of the most rounded opinion that can be gleaned from the data.

Kevin Marshall

 

The magnitude of Early Twentieth Century Warming relative to Post-1975 Warming

I was browsing the Berkeley Earth website and came across their estimate of global average temperature change. Reproduced as Figure 1.

Figure 1 – BEST Global Temperature anomaly

What clearly stands out is the 10-year moving average line. It clearly shows the warming in the early twentieth century (the period 1910 to 1940) being very similar to the warming from the mid-1970s to the end of the series, in both duration and magnitude. Maybe the later warming period is up to one-tenth of a degree Celsius greater than the earlier one. The period from 1850 to 1910 shows stasis or a little cooling, but with high variability. The period from the 1940s to the 1970s shows stasis or slight cooling, and low variability.

This is largely corroborated by HADCRUT4, or at least the version I downloaded in mid-2014.

Figure 2 – HADCRUT4 Global Temperature anomaly

HADCRUT4 estimates that the later warming period is about three-twentieths of a degree Celsius greater than the earlier period and that the recent warming is slightly less than the BEST data.

The reason for the close fit is obvious. 70% of the globe is ocean and for that BEST use the same HADSST dataset as HADCRUT4. Graphics of HADSST are a little hard to come by, but KevinC at skepticalscience usefully produced a comparison of the latest HADSST3 in 2012 with the previous version.

Figure 3  – HADSST Ocean Temperature anomaly from skepticalscience 

This shows the two periods having pretty much the same magnitudes of warming.

It is the land data where the differences lie. The BEST Global Land temperature trend is reproduced below.

Figure 4 – BEST Global Land Temperature anomaly

For BEST global land temperatures, the recent warming was much greater than the early twentieth-century warming. This, combined with the similar global trends, implies that the sea surface temperatures showed pretty much the same warming in the two periods. But if greenhouse gases were responsible for a significant part of global warming, then the warming for both land and sea would be greater after the mid-1970s than in the early twentieth century. Whilst there was a rise in GHG levels in the early twentieth century, it was less than in the period from 1945 to 1975, when there was no warming, and much less than in the post-1975 period, when CO2 levels rose massively. Whilst there can be alternative explanations for the early twentieth-century warming and the subsequent lack of warming for 30 years (during the post-WW2 economic boom, which led to a continual and accelerating rise in CO2 levels), without such explanations being clear and robust the attribution of post-1975 warming to rising GHG levels is undermined. It could be just unexplained natural variation.

However, as a preliminary to examining explanations of warming trends, as a beancounter, I believe it is first necessary to examine the robustness of the figures. In looking at temperature data in early 2015, one aspect that I found unsatisfactory with the NASA GISS temperature data was the zonal data. GISS usefully divide the data between 8 bands of latitude, which I have replicated as 7 year centred moving averages in Figure 5.

Figure 5 – NASA Gistemp zonal anomalies and the global anomaly

What is significant is that some of the zonal anomalies are far greater in magnitude than the global average anomaly.

The most southerly zone is 90S-64S, which is basically Antarctica, an area covering about 5% of the globe. I found it odd that there should be a temperature anomaly for the region from the 1880s, when there were no weather stations recording on the frozen continent until the mid-1950s. The nearest is Base Orcadas, located at 60.8S 44.7W, or about 350km north of 64S. I found that whilst the Base Orcadas temperature anomaly was extremely similar to the Antarctica zonal anomaly in the period until 1950, it was quite dissimilar in the period after.

Figure 6. Gistemp 64S-90S annual temperature anomaly compared to Base Orcadas GISS homogenised data.

NASA Gistemp has attempted to infill the missing temperature anomaly data by using the nearest data available. However, in this case, Base Orcadas appears to be climatically different from the average for Antarctica, and from the global average as well. The result is to effectively cancel out the impact of the massive warming in the Arctic on global average temperatures in the early twentieth century. A false assumption has effectively shrunk the early twentieth-century warming. The shrinkage will be small, but it undermines the claim that NASA GISS provides the best estimate of a global temperature anomaly given the limited data available.

Rather than saying that the whole exercise of comparing the two warming periods since 1900 is useless, I will instead attempt to evaluate how much the lack of data impacts on the anomalies. To this end, in a series of posts, I intend to look at the HADCRUT4 anomaly data. This will be a top-down approach, looking at monthly anomalies for 5° by 5° grid cells from 1850 to 2017, available from the Met Office Hadley Centre Observation Datasets. An advantage over previous analyses is the inclusion of anomalies for the 70% of the globe covered by ocean. The focus will be on the relative magnitudes of the early twentieth-century and post-1975 warming periods. At this point in time, I have no real idea of the conclusions that can be drawn from the analysis of the data.

Kevin Marshall