Australian Beer Prices set to Double Due to Global Warming?

Earlier this week Nature Plants published a new paper, “Decreases in global beer supply due to extreme drought and heat”.

The Scientific American has an article “Trouble Brewing? Climate Change Closes In on Beer Drinkers” with the sub-title “Increasing droughts and heat waves could have a devastating effect on barley stocks—and beer prices”. The Daily Mail headlines with “Worst news ever! Australian beer prices are set to DOUBLE because of global warming”. All those climate deniers in Australia have denied future generations the ability to down a few cold beers with their barbecued steaks (or tofu salads).

This research should be taken seriously, as it is by a crack team of experts across a number of disciplines and universities. Said Steven J Davis of the University of California at Irvine:

The world is facing many life-threatening impacts of climate change, so people having to spend a bit more to drink beer may seem trivial by comparison. But … not having a cool pint at the end of an increasingly common hot day just adds insult to injury.

Liking the odd beer or three, I am really concerned about this prospect, so I rented the paper for 48 hours to check it out. What a sensation it is. Here are a few impressions.

Layers of Models

From the Introduction, a series of models was used.

  1. Created an extreme events severity index for barley based on extremes in historical data for 1981-2010.
  2. Plugged this into five different Earth Systems models for the period 2010-2099. This was run against different RCP scenarios, the most extreme of which shows over 5 times the warming of the 1981-2010 period. What is more, severe climate events are a non-linear function of temperature rise.
  3. Then model the impact of these severe weather events on crop yields in 34 World Regions using a “process-based crop model”.
  4. (W)e examine the effects of the resulting barley supply shocks on the supply and price of beer in each region using a global general equilibrium model (Global Trade Analysis Project model, GTAP).
  5. Finally, we compare the impacts of extreme events with the impact of changes in mean climate and test the sensitivity of our results to key sources of uncertainty, including extreme events of different severities, technology and parameter settings in the economic model.

What I found odd was that they made no allowance for increasing demand for beer over a 90-year period, despite mentioning in the second sentence that

(G)lobal demand for resource-intensive animal products (meat and dairy) processed foods and alcoholic beverages will continue to grow with rising incomes.

Extreme events – severity and frequency

As stated in point 2, the paper uses different RCP scenarios. These featured prominently in the IPCC AR5 of 2013 and 2014. They go from RCP2.6, which is the most aggressive mitigation scenario, through to RCP8.5, the non-policy scenario, which projected around 4.5°C of warming from 1850-1870 through to 2100, or about 3.8°C of warming from 2010 to 2090.

Figure 1 has two charts. On the left it shows that extreme events will increase in intensity with temperature. RCP2.6 will do very little, but RCP8.5 would result by the end of the century in events 6 times as intense as today. The problem is that for up to 1.5°C there appears to be no noticeable change whatsoever. That is, for about the same amount of warming as the world has experienced from 1850-2010 per HADCRUT4, there will be no change. Beyond that, things take off. How the models empirically project well beyond known experience for a completely different scenario defeats me. It could be largely based on their modelling assumptions, which are in turn strongly tainted by their beliefs in CAGW. There is no reality check that their models are not falling apart, or reliant on arbitrary non-linear parameters.

The right-hand chart shows that extreme events are projected to increase in frequency as well. Under RCP2.6 there is a ~4% chance of an extreme event, rising to ~31% under RCP8.5. Again, there is an issue of projecting well beyond any known range.

Fig 2 average barley yield shocks during extreme events

The paper assumes that the current geographical distribution and area of barley cultivation is maintained. From the 1981-2010 baseline they have modelled a gridded average yield change for 2099 at 0.5° x 0.5° resolution, creating four colorful world maps representing each of the four RCP emissions scenarios. At the equator, each grid cell is about 56 x 56 km, an area of 3,100 km², or 1,200 square miles. Of course, nearer the poles the area diminishes significantly. This is quite a fine level of detail for projections based on 30 years of data applied to radically different circumstances 90 years in the future. Map a) is for RCP 8.5. On average, yields are projected to be 17% down. As Paul Homewood showed in a post on the 17th, this projected yield fall should be put in the context of a doubling of yields per hectare since the 1960s.
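As a quick sense check on the grid-cell arithmetic above, here is a minimal sketch (my own, not from the paper) converting a 0.5° x 0.5° cell into kilometres at different latitudes, using a spherical Earth approximation.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, spherical approximation

def cell_dimensions_km(lat_deg, res_deg=0.5):
    """Approximate east-west and north-south extent of a res_deg x res_deg grid cell."""
    ns = math.radians(res_deg) * EARTH_RADIUS_KM   # north-south extent, roughly constant
    ew = ns * math.cos(math.radians(lat_deg))      # east-west extent shrinks with latitude
    return ew, ns

for lat in (0, 45, 60):
    ew, ns = cell_dimensions_km(lat)
    print(f"lat {lat:>2}°: {ew:4.1f} km x {ns:4.1f} km ≈ {ew*ns:5.0f} km²")
# lat  0°: 55.6 km x 55.6 km ≈  3091 km²  (~1,200 square miles, as stated above)
# lat 45°: 39.3 km x 55.6 km ≈  2186 km²
# lat 60°: 27.8 km x 55.6 km ≈  1546 km²
```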

This increase in productivity has often been ascribed solely to improvements in seed varieties (see Norman Borlaug), mechanization and the use of fertilizers. These have undoubtedly had a large part to play in the productivity improvement. But also important is that agriculture has become more intensive. Forty years ago it was clear that there was a distinction between the intensive farming of Western Europe and the extensive farming of the North American prairies and the Russian steppes. It was not due to better soils or climate in Western Europe. This difference can be staggering. In the Soviet Union about 30% of agricultural output came from around 1% of the available land. These were the plots on which workers on the state and collective farms could produce their own food and sell the surplus in the local markets.

Looking at chart a in Figure 2, there are wide variations about this average global decrease of 17%.

In North America, Montana and North Dakota have areas where barley shocks during extreme years will lead to mean yield changes over 90% higher than normal, and the surrounding areas have >50% higher than normal. But go less than 1000 km north into Canada, to the Calgary/Saskatoon area, and there are small decreases in yields.

In Eastern Bolivia – the part due north of Paraguay – there is the biggest patch of >50% reductions in the world. Yet 500-1000 km away there is a north-south strip (probably just 56 km wide) with less than a 5% change.

There is a similar picture in Russia. On the Kazakhstani border there are areas of >50% increases, but in a thinly populated band further north and west, going from around Kirov to Southern Finland, there are massive decreases in yields.

Why, over the course of decades, those with increasing yields would not increase output, and those with decreasing yields would not switch to something else, defeats me. After all, if overall yields are decreasing due to frequent extreme weather events, the farmers would be losing money, while those farmers who do well when overall yields are down would be making extraordinary profits.

A Weird Economic Assumption

Building up to looking at costs, there is a strange assumption.

(A)nalysing the relative changes in shares of barley use, we find that in most case barley-to-beer shares shrink more than barley-to-livestock shares, showing that food commodities (in this case, animals fed on barley) will be prioritized over luxuries such as beer during extreme events years.

My knowledge of farming and beer is limited, but I believe that cattle can be fed on things other than barley – for instance grass, silage and sugar beet. Yet beers require precise quantities of barley and hops of certain grades.

Further, cattle feed is a large part of the cost of a kilo of beef or a litre of milk, but it takes only around 250-400g of malted barley to produce a litre of beer. The current wholesale price of malted barley is about £215 a tonne, or 5.4 to 8.6p a litre. About the cheapest 4% alcohol lager I can find in a local supermarket is £3.29 for 10 x 250ml bottles, or £1.32 a litre. Take off 20% VAT and excise duty and that leaves about 30p a litre for raw materials, manufacturing costs, packaging, manufacturer’s margin, transportation, supermarket’s overhead and supermarket’s margin. For comparison, four pints (2.276 litres) of fresh milk costs £1.09 in the same supermarket, working out at 48p a litre. This carries no excise duty or VAT. It might have greater costs due to refrigeration, but I would suggest it costs more to produce, and that feed is far more than 5p a litre.
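A minimal sketch of that back-of-envelope costing, using only the figures quoted in this post (250-400 g of malted barley per litre, £215 a tonne, £3.29 for 10 x 250 ml bottles):

```python
# Back-of-envelope beer costing, using the figures quoted in the post.
barley_price_per_tonne = 215.0            # £ per tonne of malted barley (wholesale)
for grams_per_litre in (250, 400):
    barley_cost = barley_price_per_tonne * grams_per_litre / 1_000_000   # £ per litre of beer
    print(f"{grams_per_litre} g/litre -> barley ≈ {barley_cost*100:.1f}p per litre")
# 250 g/litre -> barley ≈ 5.4p per litre
# 400 g/litre -> barley ≈ 8.6p per litre

retail_per_litre = 3.29 / (10 * 0.25)     # cheapest supermarket lager, £ per litre
print(f"cheapest lager ≈ £{retail_per_litre:.2f} per litre")    # ≈ £1.32
# Stripping off 20% VAT and excise duty leaves roughly 30p a litre (the post's estimate),
# so even a doubling of barley prices would add only around 5-9p to the cost of a litre.
```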

I know that a reasonable 0.5 litre bottle of ale is £1.29 to £1.80 in the supermarkets I shop in, but it is the cheapest beers that will likely suffer the biggest percentage rise from an increase in raw material prices. Due to taxation and other costs, large changes in raw material prices will have very little impact on final retail prices. Even less so in pubs, where a British pint (568ml) varies from the equivalent of £4 to £7 a litre.

That is, the assumption is the opposite of what would happen in a free market. In the face of a shortage, farmers will substitute other forms of cattle feed for barley, whilst beer manufacturers will absorb the extra cost.

Disparity in Costs between Countries

The most bizarre claim in the article is contained in the central column of Figure 4, which looks at the projected increases in the cost of a 500 ml bottle of beer in US dollars. Chart h shows this for the most extreme RCP 8.5 model.

I was very surprised that a global general equilibrium model would come up with such huge disparities in costs after 90 years. After all, my understanding is that these models use utility-maximizing consumers, profit-maximizing producers, perfect information and instantaneous adjustment. Clearly there is something very wrong with this model. So I decided to compare where I live in the UK with neighbouring Ireland.

In the UK and Ireland there are similarly high taxes on beer, with Ireland’s being slightly higher. Both countries have lots of branches of the massive discount chain Aldi, which lists some products on its websites aldi.co.uk and aldi.ie. In Ireland a 500 ml can of Saint Etienne Lager is €1.09, or €2.18 a litre, or £1.92 a litre. In the UK it is £2.59 for 4 x 440ml cans, or £1.59 a litre. The lager is about 21% more in Ireland, but the tax difference should only be about 15% on a 5% beer (Saint Etienne is 4.8%). Aldi are not making bigger profits in Ireland; they may just have higher costs there, or lower margins on other items. It is also comparing a single can against a multipack. So pro-rata the £1.80 ($2.35) bottle of beer in the UK would be about $2.70 in Ireland. Under the RCP 8.5 scenario, the models predict the bottle of beer to rise by $1.90 in the UK and $4.84 in Ireland. Strip out the excise duty and VAT and the price differential goes from zero to $2.20.
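The per-litre comparison can be laid out in a short sketch. The €/£ rate of about 0.88 is not independently sourced; it is simply the rate implied by the post’s own €2.18 to £1.92 conversion.

```python
# Per-litre comparison of the same lager in Ireland and the UK, using the post's figures.
eur_to_gbp = 0.88                      # assumption: rate implied by the €2.18 -> £1.92 conversion

ireland_per_litre_eur = 1.09 / 0.5     # €1.09 for a 500 ml can
ireland_per_litre_gbp = ireland_per_litre_eur * eur_to_gbp
uk_per_litre_gbp = 1.59                # UK multipack, per litre

print(f"Ireland: €{ireland_per_litre_eur:.2f} ≈ £{ireland_per_litre_gbp:.2f} per litre")  # ≈ £1.92
print(f"UK:      £{uk_per_litre_gbp:.2f} per litre")
print(f"Ireland premium ≈ {ireland_per_litre_gbp / uk_per_litre_gbp - 1:.0%}")            # ≈ 21%
```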

Now suppose you were a small beer manufacturer in England, Wales or Scotland. If beer was selling for $2.20 more in Ireland than in the UK, would you not want to stick 20,000 bottles in a container and ship it to Dublin?

If the researchers really understood the global brewing industry, they would realize that there are major brands sold across the world. Many are brewed in a number of countries to the same recipe. It is the barley that is shipped to the brewery, where equipment and techniques are identical with those in other parts of the world. These researchers seem to have failed to get away from their computer models to conduct field work in a few local bars.

What can be learnt from this?

When making projections well outside of any known range, the results must be sense-checked. Clearly, although the researchers have used an economic model, they have not understood the basics of economics. People are not dumb automatons waiting for some official to tell them to change their patterns of behavior in response to changing circumstances. They notice changes in the world around them and respond to them. A few seize the opportunities presented and can become quite wealthy as a result. Farmers have been astute enough to note mounting losses and change how and what they produce. There is also competition between regions. For example, in the 1960s Brazil produced over half the world’s coffee. The major region for production in Brazil was centered around Londrina in the North-East of Parana state. Despite straddling the Tropic of Capricorn, every few years there would be a spring-time frost which would destroy most of the crop. By the 1990s most of the production had moved north to Minas Gerais, well out of the frost belt. The rich fertile soils around Londrina are now used for other crops, such as soya, cassava and mangoes. It was not out of human design that the movement occurred, but simply that the farmers in Minas Gerais could make bumper profits in the frost years.

The publication of this article shows a problem of peer review. Nature Plants is basically a biology journal. Reviewers are not likely to have specialist skills in climate models or economic theory, though those selected should have experience in agricultural models. If peer review is literally that, it will fail anyway in an inter-disciplinary subject, where the participants do not have a general grounding in all the disciplines. In this paper it is not just economics, but knowledge of product costing as well. It is academic superiors from the specialisms that are required for review, not inter-disciplinary peers.

Kevin Marshall

 

Why can’t I reconcile the emissions to achieve 1.5°C or 2°C of Warming?

Introduction

At heart I am a beancounter. That is, when presented with figures, I like to understand how they are derived. When it comes to the claims about the quantity of GHG emissions that are required to exceed 2°C of warming I cannot get even close, unless I make a series of assumptions, some of which are far from robust. Applying the same set of assumptions, I cannot derive emissions consistent with restraining warming to 1.5°C.

Further, the combined impact of all the assumptions is to create a storyline that appears to me to be only as empirically valid as an infinite number of other storylines. This includes a large number of plausible scenarios where much greater emissions can be emitted before 2°C of warming is reached, or where (based on alternative assumptions) even 2°C of irreversible warming is already in the pipeline.

Maybe an expert climate scientist will clearly show the errors of this climate sceptic, and use it as a means to convince the doubters of climate science.

What I will attempt here is something extremely unconventional in the world of climate. That is I will try to state all the assumptions made by highlighting them clearly. Further, I will show my calculations and give clear references, so that anyone can easily follow the arguments.

Note – this is a long post. The key points are contained in the Conclusions.

The aim of constraining warming to 1.5 or 2°C

The Paris Climate Agreement was brought about by the UNFCCC. On their website they state.

The Paris Agreement central aim is to strengthen the global response to the threat of climate change by keeping a global temperature rise this century well below 2 degrees Celsius above pre-industrial levels and to pursue efforts to limit the temperature increase even further to 1.5 degrees Celsius. 

The Paris Agreement states in Article 2

1. This Agreement, in enhancing the implementation of the Convention, including its objective, aims to strengthen the global response to the threat of climate change, in the context of sustainable development and efforts to eradicate
poverty, including by:

(a) Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;

Translating this aim into mitigation policy requires quantification of global emissions targets. The UNEP Emissions Gap Report 2017 has a graphic showing estimates of the emissions that can occur before the 1.5°C or 2°C warming levels are breached.

Figure 1 : Figure 3.1 from the UNEP Emissions Gap Report 2017

The emissions are of all greenhouse gas emissions, expressed in billions of tonnes of CO2 equivalents. From 2010, the quantities of emissions before either 1.5°C or 2°C is breached are respectively about 600 GtCO2e and 1000 GtCO2e. It is these two figures that I cannot reconcile when using the same assumptions to calculate both. My failure to reconcile is not just a minor difference. Rather, on the same assumptions under which 1000 GtCO2e can be emitted before 2°C is breached, 1.5°C is already in the pipeline. In establishing the problems I encounter, I will endeavor to clearly state the assumptions made and look at a number of examples.

 Initial assumptions

1 A doubling of CO2 will eventually lead to 3°C of rise in global average temperatures.

This despite the 2013 AR5 WG1 SPM stating on page 16

Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C

And stating in a footnote on the same page.

No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.

2 Achieving full equilibrium climate sensitivity (ECS) takes many decades.

This implies that at any point in the last few years, or any year in the future there will be warming in progress (WIP).

3 Including other greenhouse gases adds to warming impact of CO2.

Empirically, the IPCC’s Fifth Assessment Report based its calculations on 2010 when CO2 levels were 390 ppm. The AR5 WG3 SPM states in the last sentence on page 8

For comparison, the CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)

As with climate sensitivity, the assumption is the middle of an estimated range. In this case over one fifth of the range has the full impact of GHGs being less than the impact of CO2 on its own.

4 All the rise in global average temperature since the 1800s is due to rise in GHGs. 

5 An increase in GHG levels will eventually lead to warming unless action is taken to remove those GHGs from the atmosphere, generating negative emissions. 

These are restrictive assumptions made for ease of calculations.

Some calculations

First a calculation to derive the CO2 levels commensurate with 2°C of warming. I urge readers to replicate these for themselves.
From a Skeptical Science post by Dana1981 (Dana Nuccitelli), “Pre-1940 Warming Causes and Logic”, I obtained a simple equation for the change in average temperature T for a given change in CO2 levels.

ΔTCO2 = λ x 5.35 x ln(B/A)
Where A = CO2 level in year A (expressed in parts per million), and B = CO2 level in year B.
I use λ = 0.809, so that if B = 2A, ΔTCO2 = 3.00

Pre-industrial CO2 levels were 280ppm. 3°C of warming is generated by CO2 levels of 560 ppm, and 2°C of warming is when CO2 levels reach 444 ppm.
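Those figures are easy to replicate. Here is a minimal sketch of the calculation, using only the formula and the λ value given above:

```python
import math

LAMBDA = 0.809   # chosen so that a doubling of CO2 eventually gives 3.00°C (assumption 1)

def warming(co2_b_ppm, co2_a_ppm=280.0):
    """ΔT = λ * 5.35 * ln(B/A), the formula quoted above."""
    return LAMBDA * 5.35 * math.log(co2_b_ppm / co2_a_ppm)

def co2_level_for(delta_t, co2_a_ppm=280.0):
    """Invert the formula: the CO2 level (ppm) commensurate with delta_t °C of eventual warming."""
    return co2_a_ppm * math.exp(delta_t / (LAMBDA * 5.35))

print(round(warming(560), 2))      # 3.0  -> a doubling from 280 ppm gives 3°C
print(round(co2_level_for(2.0)))   # 444 ppm commensurate with 2°C
print(round(co2_level_for(1.5)))   # 396 ppm commensurate with 1.5°C
```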

From the Mauna Loa data, CO2 levels averaged 407 ppm in 2017. Given assumption (3), and further assuming the impact of other GHGs is unchanged, 2°C of warming would have been surpassed in around 2016, when CO2 levels averaged 404 ppm. The actual rise in global average temperatures from HADCRUT4 is about half that amount, hence the assumption that the impact of a rise in CO2 takes an inordinately long time for the actual warming to reveal itself. Even with the assumption that 100% of the warming since around 1800 is due to the increase in GHG levels, warming in progress (WIP) is about the same as revealed warming. Yet the Sks article argues that some of the early twentieth century warming was due to factors other than the rise in GHG levels.

This is the crux of the reconciliation problem. From this initial calculation, and based on the assumptions, the 2°C warming threshold has recently been breached, and by the same assumptions 1.5°C was likely breached in the 1990s. There are a lot of assumptions here, so I could have missed something or made an error. Below I go into some key examples that verify this initial conclusion. Then I look at how, by introducing a new assumption, it is claimed that 2°C of warming is not yet reached.

100 Months and Counting Campaign 2008

Trust, yet verify has a post We are Doomed!

This tracks through the Wayback Machine to look at the now defunct 100monthsandcounting.org campaign, sponsored by the left-wing New Economics Foundation. The archived “Technical Note” states that the 100 months ran from August 2008, making the end date November 2016. The choice of 100 months turns out to be spot-on given the actual data for CO2 levels; the central estimate of the CO2 equivalent of all GHG emissions by the IPCC in 2014 based on 2010 GHG levels (and assuming other GHGs are not impacted); and the central estimate for Equilibrium Climate Sensitivity (ECS) used by the IPCC. That is, take 430 ppm CO2e and add the 14 ppm required to reach the 444 ppm commensurate with 2°C of warming.
Maybe that was just a fluke, or were they giving a completely misleading forecast? The 100 Months and Counting Campaign was definitely not agreeing with the UNEP Emissions Gap Report 2017 in making the claim. But were they correctly interpreting what the climate consensus was saying at the time?

The 2006 Stern Review

The “Stern Review: The Economics of Climate Change” (archived access here) was commissioned to provide benefit-cost justification for what became the Climate Change Act 2008. From the Summary of Conclusions:

The costs of stabilising the climate are significant but manageable; delay would be dangerous and much more costly.

The risks of the worst impacts of climate change can be substantially reduced if greenhouse gas levels in the atmosphere can be stabilised between 450 and 550ppm CO2 equivalent (CO2e). The current level is 430ppm CO2e today, and it is rising at more than 2ppm each year. Stabilisation in this range would require emissions to be at least 25% below current levels by 2050, and perhaps much more.

Ultimately, stabilisation – at whatever level – requires that annual emissions be brought down to more than 80% below current levels. This is a major challenge, but sustained long-term action can achieve it at costs that are low in comparison to the risks of inaction. Central estimates of the annual costs of achieving stabilisation between 500 and 550ppm CO2e are around 1% of global GDP, if we start to take strong action now.

If we take assumption 1, that a doubling of CO2 levels will eventually lead to 3.0°C of warming, and a base CO2 level of 280ppm, then the Stern Review is saying that the worst impacts can be avoided if the temperature rise is constrained to 2.1 – 2.9°C, but only in the range of 2.5 to 2.9°C does the mitigation cost estimate of 1% of GDP apply in 2006. It is not difficult to see why constraining warming to 2°C or lower would not be net beneficial. With GHG levels already at 430ppm CO2e, and CO2 levels rising at over 2ppm per annum, the 2°C warming level of 444ppm (or the rounded 450ppm) would have been exceeded well before any global reductions could be achieved.
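The 2.1 – 2.9°C range quoted above drops straight out of the same formula, applied to the Stern Review’s stabilisation levels (treating CO2e like CO2, per assumption 3):

```python
import math

LAMBDA = 0.809   # doubling of CO2 -> 3.00°C, as per assumption 1

def eventual_warming(co2e_ppm, base_ppm=280.0):
    return LAMBDA * 5.35 * math.log(co2e_ppm / base_ppm)

for level in (450, 500, 550):
    print(f"{level} ppm CO2e -> {eventual_warming(level):.1f}°C")
# 450 -> 2.1°C, 500 -> 2.5°C, 550 -> 2.9°C
```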

There is a curiosity in the figures. When the Stern Review was published in 2006 estimated GHG levels were 430ppm CO2e, as against CO2 levels for 2006 of 382ppm. The IPCC AR5 states

For comparison, the CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)

In 2011, when CO2 levels averaged 10ppm higher than in 2006 at 392ppm, estimated GHG levels were the same. This is a good example of why one should take note of uncertainty ranges.

IPCC AR4 Report Synthesis Report Table 5.1

A year before the 100 Months and Counting campaign, the IPCC produced its Fourth Climate Synthesis Report. In the 2007 Synthesis Report, on page 67 (pdf), there is Table 5.1 of emissions scenarios.

Figure 2 : Table 5.1. IPCC AR4 Synthesis Report Page 67 – Without Footnotes

I inputted the various CO2-eq concentrations into my amended version of Dana Nuccitelli’s magic equation and compared the results to the calculated warming in Table 5.1.

Figure 3 : Magic Equation calculations of warming compared to Table 5.1. IPCC AR4 Synthesis Report

My calculations of warming are the same as those of the IPCC to one decimal place, except for the last two calculations. Why are there these rounding differences? From a little fiddling in Excel, it would appear to me that the IPCC got its warming results using a sensitivity of 3 for a doubling calculated to two decimal places, whilst my version of the formula works to four decimal places.
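The comparison in Figure 3 can be reproduced with a short sketch. The CO2-eq values below (445 to 1130 ppm) are the category boundaries as I read them from Table 5.1; treat them as illustrative, since the table itself appears above only as an image.

```python
import math

LAMBDA = 0.809   # doubling of CO2-eq -> 3.00°C

def warming(co2e_ppm, base_ppm=280.0):
    return LAMBDA * 5.35 * math.log(co2e_ppm / base_ppm)

# CO2-eq category boundaries as read off Table 5.1 (illustrative values)
for ppm in (445, 490, 535, 590, 710, 855, 1130):
    print(f"{ppm:>4} ppm CO2-eq -> {warming(ppm):.2f}°C")
# 445 -> 2.01, 490 -> 2.42, 535 -> 2.80, 590 -> 3.23, 710 -> 4.03, 855 -> 4.83, 1130 -> 6.04
# i.e. the same as the table's 2.0-6.1°C range to one decimal place, apart from
# small rounding differences at the top end, as noted above.
```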

Note the following

  • That other GHGs are translatable into CO2 equivalents. Once translated, other GHGs can be treated as if they were CO2.
  • There is no time period in this table. The 100 Months and Counting Campaign merely punched in existing numbers and made a forecast ahead of the GHG levels that would reach the 2°C of warming.
  • No mention of a 1.5°C warming scenario. If constraining warming to 1.5°C did not seem credible in 2007, why should it be credible in 2014 or 2017, when CO2 levels are higher?

IPCC AR5 Report Highest Level Summary

I believe that the underlying estimates of emissions to achieve 1.5°C or 2°C of warming used by the UNFCCC and UNEP come from the UNIPCC Fifth Climate Assessment Report (AR5), published in 2013/4. At this stage I introduce a couple of empirical assumptions from IPCC AR5.

6 Cut-off year for historical data is 2010 when CO2 levels were 390 ppm (compared to 280 ppm in pre-industrial times) and global average temperatures were about 0.8°C above pre-industrial times.

Using the magic equation above, and the 390 ppm CO2 level, there is around 1.4°C of warming due from CO2. Given 0.8°C of revealed warming to 2010, the residual “warming-in-progress” was 0.6°C.
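In sketch form, assuming nothing beyond the magic equation and assumption 6:

```python
import math

LAMBDA = 0.809
eventual = LAMBDA * 5.35 * math.log(390 / 280)   # warming eventually due from CO2 at 390 ppm
revealed = 0.8                                   # warming revealed by 2010 (assumption 6)
print(round(eventual, 1), round(eventual - revealed, 1))   # 1.4°C due, 0.6°C still in progress
```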

The highest level of summary in AR5 is a Presentation to summarize the central findings of the Summary for Policymakers of the Synthesis Report, which in turn brings together the three Working Group Assessment Reports. This Presentation can be found at the bottom right of the IPCC AR5 Synthesis Report webpage. Slide 33 of 35 (reproduced below as Figure 4) gives the key policy point. 1000 GtCO2 of emissions from 2011 onwards will lead to 2°C. This is very approximate but concurs with the UNEP emissions gap report.

Figure 4 : Slide 33 of 35 of the AR5 Synthesis Report Presentation.

Now for some calculations.

1900 GtCO2 raised CO2 levels by 110 ppm (390-280). 1 ppm = 17.3 GtCO2

1000 GtCO2 will raise CO2 levels by 60 ppm (450-390).  1 ppm = 16.7 GtCO2

Given the obvious roundings of the emissions figures, the numbers fall out quite nicely.

Last year I divided CDIAC CO2 emissions (from the Global Carbon Project) by Mauna Loa CO2 annual mean growth rates (data) to produce the following.

Figure 5 : CDIAC CO2 emissions estimates (multiplied by 3.664 to convert from carbon units to CO2 units) divided by Mauna Loa CO2 annual mean growth rates in ppm.

17GtCO2 for a 1ppm rise is about right for the last 50 years.

To raise CO2 levels from 390 to 450 ppm needs about 17 x (450-390) = 1020 GtCO2. Slide 33 is a good approximation of the CO2 emissions to raise CO2 levels by 60 ppm.
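These conversion factors are simple enough to check in a couple of lines:

```python
# Emissions per ppm of atmospheric CO2 rise, using the figures above.
past = 1900 / (390 - 280)          # historical: ≈ 17.3 GtCO2 per ppm (280 -> 390 ppm)
future = 1000 / (450 - 390)        # implied by slide 33: ≈ 16.7 GtCO2 per ppm (390 -> 450 ppm)
print(round(past, 1), round(future, 1))

carbon_to_co2 = 44.01 / 12.011     # ≈ 3.664, converting GtC (CDIAC) into GtCO2
budget_390_to_450 = 17 * (450 - 390)
print(round(carbon_to_co2, 3), budget_390_to_450)    # 3.664, 1020 GtCO2
```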

But there are issues

  • If ECS = 3.00, and it takes 17 GtCO2 of emissions to raise CO2 levels by 1 ppm, then it is only 918 (17 x 54) GtCO2 to achieve 2°C of warming. Alternatively, if in future it is assumed that 1000 GtCO2 will achieve 2°C of warming, it will take 18.5 GtCO2 to raise CO2 levels by 1 ppm, as against 17 GtCO2 in the past. It is only by using 450 ppm as commensurate with 2°C of warming that past and future stack up.
  • If ECS = 3, from CO2 alone 1.5°C would be achieved at 396 ppm, or a further 100 GtCO2 of emissions. This CO2 level was passed in 2013 or 2014.
  • The calculation falls apart if other GHGs are included. Emissions are assumed equivalent to 430 ppm CO2-eq in 2011. Therefore, with all GHGs considered, 2°C of warming would be achieved with 238 GtCO2e of emissions ((444-430) x 17), and 1.5°C of warming was likely passed in the 1990s.
  • If actual warming since pre-industrial times to 2010 was 0.8°C, ECS = 3, and the rise in all GHG levels was equivalent to a rise in CO2 from 280 to 430 ppm, then the residual “warming-in-progress” (WIP) was just over 1°C. That is, the WIP exceeds the total revealed warming of well over a century. If the short-term temperature response is half or more of the full ECS, it would imply that even the nineteenth century emissions are yet to have their full impact on global average temperatures. (These checks are replicated in the sketch below.)
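A minimal sketch replicating those checks, based solely on the assumptions already stated (ECS = 3, 17 GtCO2 per ppm, 430 ppm CO2-eq in 2011, 0.8°C revealed by 2010):

```python
import math

LAMBDA = 0.809        # ECS = 3 for a doubling
GT_PER_PPM = 17.0     # GtCO2 per ppm rise, from the historical record above

def co2_level_for(delta_t, base_ppm=280.0):
    return base_ppm * math.exp(delta_t / (LAMBDA * 5.35))

# CO2-only budget from 390 ppm to the ~444 ppm commensurate with 2°C
print(round(GT_PER_PPM * (co2_level_for(2.0) - 390)))   # ≈ 926 GtCO2 (918 if 444 ppm is rounded down)

# With other GHGs at 430 ppm CO2-eq in 2011, only (444 - 430) ppm of headroom remains
print(round(GT_PER_PPM * (444 - 430)))                  # 238 GtCO2e

# Residual warming-in-progress if all GHGs are equivalent to 430 ppm and 0.8°C is revealed
wip = LAMBDA * 5.35 * math.log(430 / 280) - 0.8
print(round(wip, 2))                                    # ≈ 1.06°C of WIP
```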

What justification is there for effectively disregarding the impact of other greenhouse emissions when it was not done previously?

This offset is to be found in section C – The Drivers of Climate Change – in the AR5 WG1 SPM, in particular the breakdown, with uncertainties, in table SPM.5. Another story is how AR5 reached the very same conclusion as AR4 WG1 SPM page 4 on the impact of negative anthropogenic forcings, but with a different methodology, hugely different estimates of aerosols, and very different uncertainty bands. Further, these historical estimates are only for the period 1951-2010, whilst the starting date for the 1.5°C or 2°C of warming is 1850.

From this a further assumption is made when considering AR5.

7 The estimated historical impact of other GHG emissions (methane, nitrous oxide…) has been effectively offset by the cooling impacts of aerosols and precursors. It is assumed that this will carry forward into the future.

UNEP Emissions Gap Report 2014

Figure 1 above is figure 3.1 from the UNEP Emissions GAP Report 2017. The equivalent report from 2014 puts this 1000 GtCO2 of emissions in a clearer context. First a quotation with two accompanying footnotes.

As noted by the IPCC, scientists have determined that an increase in global temperature is proportional to the build-up of long-lasting greenhouse gases in the atmosphere, especially carbon dioxide. Based on this finding, they have estimated the maximum amount of carbon dioxide that could be emitted over time to the atmosphere and still stay within the 2 °C limit. This is called the carbon dioxide emissions budget because, if the world stays within this budget, it should be possible to stay within the 2 °C global warming limit. In the hypothetical case that carbon dioxide was the only human-made greenhouse gas, the IPCC estimated a total carbon dioxide budget of about 3 670 gigatonnes of carbon dioxide (Gt CO2 ) for a likely chance3 of staying within the 2 °C limit . Since emissions began rapidly growing in the late 19th century, the world has already emitted around 1 900 Gt CO2 and so has used up a large part of this budget. Moreover, human activities also result in emissions of a variety of other substances that have an impact on global warming and these substances also reduce the total available budget to about 2 900 Gt CO2 . This leaves less than about 1 000 Gt CO2 to “spend” in the future4 .

3 A likely chance denotes a greater than 66 per cent chance, as specified by the IPCC.

4 The Working Group III contribution to the IPCC AR5 reports that scenarios in its category which is consistent with limiting warming to below 2 °C have carbon dioxide budgets between 2011 and 2100 of about 630-1 180 GtCO2

The numbers do not fit unless the impact of other GHGs is ignored. As found from slide 33, there is 2900 GtCO2 to raise atmospheric CO2 levels by 170 ppm, of which 1900 GtCO2 has been emitted already. The additional marginal impact of other historical greenhouse gases of 770 GtCO2 is ignored. If those GHG emissions were part of historical emissions, as the statement implies, then that marginal impact would be equivalent to an additional 45 ppm (770/17) on top of the 390 ppm CO2 level. That is not far off the IPCC estimated CO2-eq concentration in 2011 of 430 ppm (uncertainty range 340 to 520 ppm). But by the same measure, 3670 GtCO2 would increase CO2 levels by 216 ppm (3670/17), from 280 to 496 ppm. With ECS = 3, this would eventually lead to a temperature increase of almost 2.5°C.
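Again in sketch form, using the 17 GtCO2 per ppm conversion derived earlier and the budget figures quoted from the report:

```python
import math

GT_PER_PPM = 17.0    # GtCO2 per ppm rise in atmospheric CO2, as derived earlier
LAMBDA = 0.809       # doubling of CO2 -> 3.00°C

# CO2-only budget of 3670 GtCO2 for a likely chance of staying within 2°C
ppm_rise = 3670 / GT_PER_PPM
print(round(ppm_rise), 280 + round(ppm_rise))           # ≈ 216 ppm, taking CO2 to ~496 ppm

# The "other substances" cut the budget from 3670 to 2900 GtCO2
other = 3670 - 2900
print(other, round(other / GT_PER_PPM))                 # 770 GtCO2 ≈ 45 ppm

# Eventual warming from 496 ppm of CO2 with ECS = 3
print(round(LAMBDA * 5.35 * math.log(496 / 280), 1))    # ≈ 2.5°C
```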

Figure 1 above is figure 3.1 from the UNEP Emissions Gap Report 2017. The equivalent figure from the 2014 report is ES.1, shown below as Figure 6.

Figure 6 : From the UNEP Emissions Gap Report 2014 showing two emissions pathways to constrain warming to 2°C by 2100.

Note that this graphic goes through to 2100; only uses the CO2 emissions; does not have quantities; and only looks at constraining temperatures to 2°C.  To achieve the target requires a period of negative emissions at the end of the century.

A new assumption is thus required to achieve emissions targets.

8 Achieving the 1.5°C or 2°C warming targets likely requires many years of net negative emissions at the end of the century.

A Lower Level Perspective from AR5

A simple pie chart does not seem to make sense. Maybe my conclusions are contradicted by the more detailed scenarios? The next level of detail is to be found in table SPM.1 on page 22 of the AR5 Synthesis Report – Summary for Policymakers.

Figure 7 : Table SPM.1 on Page 22 of AR5 Synthesis Report SPM, without notes. Also found as Table 3.1 on Page 83 of AR5 Synthesis Report 

The comment for <430 ppm (the level of 2010) is "Only a limited number of individual model studies have explored levels below 430 ppm CO2-eq. ” Footnote j reads

In these scenarios, global CO2-eq emissions in 2050 are between 70 to 95% below 2010 emissions, and they are between 110 to 120% below 2010 emissions in 2100.

That is, net global emissions are negative in 2100. This is not something mentioned in the Paris Agreement, which only has pledges through to 2030. It is consistent with the UNEP Emissions Gap report 2014 Table ES.1. The statement does not refer to a particular level below 430 ppm CO2-eq, which equates to 1.86°C of eventual warming. So how is 1.5°C of warming not impossible without massive negative emissions? In over 600 words of notes there is no indication. For that you need to go to the footnotes to the far more detailed Table 6.3 of AR5 WG3 Chapter 6 (Assessing Transformation Pathways – pdf), page 431. Footnote 7 (bold mine):

Temperature change is reported for the year 2100, which is not directly comparable to the equilibrium warming reported in WGIII AR4 (see Table 3.5; see also Section 6.3.2). For the 2100 temperature estimates, the transient climate response (TCR) is the most relevant system property.  The assumed 90% range of the TCR for MAGICC is 1.2–2.6 °C (median 1.8 °C). This compares to the 90% range of TCR between 1.2–2.4 °C for CMIP5 (WGI Section 9.7) and an assessed likely range of 1–2.5 °C from multiple lines of evidence reported in the WGI AR5 (Box 12.2 in Section 12.5).

The major reason that 1.5°C of warming is not impossible (but still more unlikely than likely), despite CO2-equivalent levels that should produce 2°C+ of warming being around for decades, is because the full warming impact takes so long to filter through. Further, Table 6.3 puts peak CO2-eq levels for the 1.5-1.7°C scenarios at 465-530 ppm, or eventual warming of 2.2 to 2.8°C. Climate WIP is the difference. But in 2018 WIP might be larger than all the revealed warming since 1870, and certainly since the mid-1970s.

Within AR5, when talking about constraining warming to 1.5°C or 2.0°C, it is only the warming estimated to be revealed in 2100 that is considered. There is no indication of how much warming in progress (WIP) there is in 2100 under the various scenarios, therefore I cannot reconcile back the figures. However, it would appear that the 1.5°C figure relies upon the impact of GHGs on warming failing to come through for a period of over 100 years, as (even netting off other GHGs against the negative impact of aerosols) by 2100 CO2 levels would have been above 400 ppm for over 85 years, and for most of those years significantly above that level.

Conclusions

The original aim of this post was to reconcile the emissions sufficient to prevent 1.5°C or 2°C of warming being exceeded through some calculations based on a series of restrictive assumptions.

  • ECS = 3.0°C, despite the IPCC not giving a best estimate across the different studies. The range is 1.5°C to 4.5°C.
  • All the temperature rise since the 1800s is assumed due to rises in GHGs. There is evidence that this might not be the case.
  • Other GHGs are netted off against aerosols and precursors. Given that “CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)” when CO2 levels were around 390 ppm, this assumption is far from robust.
  • Achieving full equilibrium takes many decades. So long in fact that the warming-in-progress (WIP) may currently exceed all the revealed warming in over 150 years, even based on the assumption that all of that revealed historical warming is due to rises in GHG levels.

Even with these assumptions, keeping warming within 1.5°C or 2°C seems to require two further assumptions that were not recognized a few years ago. First is to assume net negative global emissions for many years at the end of the century. Second is to talk about projected warming in 2100, rather than the warming that would eventually result from achieving full ECS.

The whole exercise appears to rest upon a pile of assumptions. Amending the assumptions one way means admitting that 1.5°C or 2°C of warming is already in the pipeline; amending them the other way means admitting climate sensitivity is much lower. Yet there appears to be a very large range of empirical assumptions to choose from, so there could be a very large number of scenarios that are equally as valid as the ones used in the UNEP Emissions Gap Report 2017.

Kevin Marshall

More Coal-Fired Power Stations in Asia

A lovely feature of the GWPF site is its extracts of articles related to all aspects of climate and related energy policies. Yesterday the GWPF extracted from an opinion piece in the Hong Kong-based South China Morning Post, “A new coal war frontier emerges as China and Japan compete for energy projects in Southeast Asia”.
The GWPF’s summary:-

Southeast Asia’s appetite for coal has spurred a new geopolitical rivalry between China and Japan as the two countries race to provide high-efficiency, low-emission technology. More than 1,600 coal plants are scheduled to be built by Chinese corporations in over 62 countries. It will make China the world’s primary provider of high-efficiency, low-emission technology.

A summary point in the article is not entirely accurate. (Italics mine)

Because policymakers still regard coal as more affordable than renewables, Southeast Asia’s industrialisation continues to consume large amounts of it. To lift 630 million people out of poverty, advanced coal technologies are considered vital for the region’s continued development while allowing for a reduction in carbon emissions.

Replacing a less efficient coal-fired power station with one of the latest technology will reduce carbon (i.e. CO2) emissions per unit of electricity produced. In China, this replacement process may mean efficiency savings outstrip the growth in power supply from fossil fuels. But in the rest of Asia, the new coal-fired power stations will mostly be additional capacity in the coming decades, so will lead to an increase in CO2 emissions. It is this additional capacity that will be primarily responsible for driving the economic growth that will lift the poor out of extreme poverty.

The newer technologies are important for other types of emissions, that is the particulate emissions that have caused high levels of choking pollution and smog in many cities of China and India. By using the new technologies, other countries can avoid the worst excesses of this pollution, whilst still using a cheap fuel available from many different sources of supply. The thrust in China will likely be to replace the high-pollution power stations with new technologies, or adapt them to reduce the emissions and increase efficiencies. Politically, it is a different way of raising living standards and quality of life than by increasing real disposable income per capita.

Kevin Marshall

 

Ocean Impact on Temperature Data and Temperature Homogenization

Pierre Gosselin’s notrickszone looks at a new paper.

Temperature trends with reduced impact of ocean air temperature – Frank Lansner and Jens Olaf Pepke Pedersen.

The paper’s abstract.

Temperature data 1900–2010 from meteorological stations across the world have been analyzed and it has been found that all land areas generally have two different valid temperature trends. Coastal stations and hill stations facing ocean winds are normally more warm-trended than the valley stations that are sheltered from dominant oceans winds.

Thus, we found that in any area with variation in the topography, we can divide the stations into the more warm trended ocean air-affected stations, and the more cold-trended ocean air-sheltered stations. We find that the distinction between ocean air-affected and ocean air-sheltered stations can be used to identify the influence of the oceans on land surface. We can then use this knowledge as a tool to better study climate variability on the land surface without the moderating effects of the ocean.

We find a lack of warming in the ocean air sheltered temperature data – with less impact of ocean temperature trends – after 1950. The lack of warming in the ocean air sheltered temperature trends after 1950 should be considered when evaluating the climatic effects of changes in the Earth’s atmospheric trace amounts of greenhouse gasses as well as variations in solar conditions.

More generally, the paper’s authors are saying that over fairly short distances temperature stations will show different climatic trends. This has a profound implication for temperature homogenization. From Venema et al 2012.

The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities (Conrad and Pollak, 1950). In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations. 

Lansner and Pedersen are, by implication, demonstrating that the principal assumption on which homogenization is based (that nearby temperature stations are exposed to almost the same climatic signal) is not valid. As a result, data homogenization will not only eliminate biases in the temperature data (such as measurement biases, the impacts of station moves and the urban heat island effect where it impacts a minority of stations) but will also adjust out actual climatic trends. Where the climatic trends are localized and not replicated in surrounding areas, they will be eliminated by homogenization. What I found in early 2015 (following the examples of Paul Homewood, Euan Mearns and others) is that there are examples from all over the world where the data suggests that nearby temperature stations are exposed to different climatic signals. Data homogenization will, therefore, cause quite weird and unstable results. A number of posts were summarized in my post Defining “Temperature Homogenisation”. Paul Matthews at Cliscep corroborated this in his post of February 2017, “Instability of GHCN Adjustment Algorithm”.
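To illustrate the point, here is a deliberately simplified toy sketch, not the actual GHCN pairwise algorithm. All the numbers are invented: an ocean-sheltered “valley” station with a flat trend is compared against a composite of three nearby ocean-affected stations warming at about 0.1°C per decade. The difference series acquires a persistent trend, which relative homogenization would read as an inhomogeneity to be corrected, even though it is, by construction, a genuine difference in climatic signal.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2011)

# Toy data (not real measurements): a flat-trending sheltered station and three
# ocean-affected neighbours warming at ~0.01°C per year, each with random noise.
candidate = rng.normal(0, 0.2, years.size)
neighbours = np.stack([0.01 * (years - 1900) + rng.normal(0, 0.2, years.size)
                       for _ in range(3)])

# Relative homogenization compares the candidate with a composite reference of neighbours.
reference = neighbours.mean(axis=0)
difference = candidate - reference

# A persistent trend in the difference series is flagged as an "inhomogeneity",
# even though here it reflects a real difference in local climate by construction.
trend_per_decade = np.polyfit(years, difference, 1)[0] * 10
print(f"candidate-minus-reference trend: {trend_per_decade:.2f}°C per decade")
```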

During my attempts to understand the data, I also found that those who support AGW theory not only do not question their assumptions but also have strong shared beliefs in what the data ought to look like. One of the most significant in this context is a Climategate email sent on Mon, 12 Oct 2009 by Kevin Trenberth to Michael Mann of Hockey Stick fame, and copied to Phil Jones of the Hadley centre, Thomas Karl of NOAA, Gavin Schmidt of NASA GISS, plus others.

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate. (emphasis mine)

Homogenizing data a number of times, and evaluating the unstable results in the context of strongly-held beliefs, will bring the trends ever more into line with those beliefs. There is no requirement for some sort of conspiracy behind deliberate data manipulation for this emerging pattern of adjustments. Indeed, a conspiracy in terms of a group knowing the truth and deliberately perverting that evidence does not really apply. Another reason the conspiracy does not apply is the underlying purpose of homogenization. It is to allow a temperature station to be representative of the surrounding area. Without that, it would not be possible to compile an average for the surrounding area, from which the global average is constructed. It is this requirement, in the context of real climatic differences over relatively small areas, that I would suggest leads to the deletion of “erroneous” data and the infilling of estimated data elsewhere.

The gradual bringing of the temperature data sets into line with beliefs is most clearly shown in the NASA GISS temperature data adjustments. Climate4you produces regular updates of the adjustments since May 2008. Below is the March 2018 version.

The reduction of the 1910 to 1940 warming period (which is at odds with theory) and the increase in the post-1975 warming phase (which correlates with the rise in CO2) supports my contention of the influence of beliefs.

Kevin Marshall

 

Scotland now to impose Minimum Pricing for Alcohol

This week the British Supreme Court cleared the way for the Alcohol (Minimum Pricing) (Scotland) Act 2012 to be enacted. The Scotch Whisky Association (SWA) had mounted a legal challenge to try to halt the price hike, which it said was ‘disproportionate’ and illegal under European law (Daily Mail). The Act will mandate that retailers charge a minimum of 50p per unit of alcohol. This will only affect the price of alcohol in off-licences and supermarkets. In the pub, the price of a pint with 5% ABV is already much higher than the implied minimum price of £1.42. I went round three supermarkets – Asda, Sainsbury’s and Aldi – to see the biggest price hikes implied in the rise.
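The implied £1.42 comes from the definition of a UK unit (10 ml of pure alcohol). A one-line check:

```python
# Implied minimum price of a British pint at 5% ABV under a 50p-per-unit floor.
pint_ml, abv, pence_per_unit = 568, 0.05, 50
units = pint_ml * abv / 10                    # a UK unit is 10 ml of pure alcohol
print(f"{units:.2f} units -> £{units * pence_per_unit / 100:.2f}")   # 2.84 units -> £1.42
```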

The extra profit is kept by the retailer, though gross profits may fall as sales volumes fall. Premium brands only fall below the minimum price in promotions. With the exception of discounter Aldi, the vast majority of shelf space is occupied by alcohol above the minimum price. Further, there is no escalator: the minimum price will stay the same for the six years that the legislation is in place. However, the Scottish Government claims that 51% of alcohol sold off-trade is less than 50 pence per unit. The promotions have a big impact. The Scottish people will be deprived of these offers. Many will travel across the border to places like Carlisle and Berwick to acquire their cheap booze. Or enterprising folks will break the law with illegal sales. This could make booze more accessible to underage drinkers and bring them into regular contact with petty criminals. However, will it reduce the demand for booze? The Scottish Government website quotes Health Secretary Shona Robison.

“This is a historic and far-reaching judgment and a landmark moment in our ambition to turn around Scotland’s troubled relationship with alcohol.

“In a ruling of global significance, the UK Supreme Court has unanimously backed our pioneering and life-saving alcohol pricing policy.

“This has been a long journey and in the five years since the Act was passed, alcohol related deaths in Scotland have increased. With alcohol available for sale at just 18 pence a unit, that death toll remains unacceptably high.

“Given the clear and proven link between consumption and harm, minimum pricing is the most effective and efficient way to tackle the cheap, high strength alcohol that causes so much damage to so many families.

Is minimum pricing effective? Clearly, it will make some alcohol more expensive. But it must be remembered that the tax on alcohol is already very high. The cheapest booze on my list, per unit of alcohol, is the 3 litre box of Perry (pear cider) at £4.29. The excise duty is £40.38 per hectolitre. With VAT at 20%, tax is £1.92, or 45% of the retail price. The £16 bottles of spirits (including two well-known brands of Scotch Whisky) are at 40% alcohol. With excise duty at £28.74 per litre of pure alcohol, tax is £13.33, or 83% of the retail price. It is well known that demand for alcohol is highly inelastic with respect to price, so very large increases in price will make very little difference to demand. This is borne out by a graphic of UK alcohol consumption over the last century from a 2004 report, Alcohol Harm Reduction Strategy for England.
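The perry figure is straightforward to verify from the duty rate and VAT quoted above (the post rounds the total tax to £1.92):

```python
# Tax share of the £4.29 three-litre box of perry, using the rates quoted in the post.
retail = 4.29
duty = 40.38 / 100 * 3       # £40.38 per hectolitre of perry, for 3 litres
vat = retail / 6             # 20% VAT is one sixth of a VAT-inclusive price
tax = duty + vat
print(f"duty £{duty:.2f}, total tax £{tax:.2f} ({tax / retail:.0%} of the retail price)")
# duty £1.21, total tax £1.93 (45% of the retail price)
```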

In the early part of the twentieth century, there was a sharp fall in alcohol consumption from 1900 to the 1930s. There was a sharp drop in the First World War, but after the war the decline continued the pre-war trend. This coincided with a religious revival and the temperance movement, started in the nineteenth century by organisations such as the Salvation Army and the Methodists, but taken up by other Christian denominations. In other words, it was a massive cultural change from the bottom up, where it became socially unacceptable for many even to touch alcohol. Conversely, the steep decline in religion in the post-war period was accompanied by a rise in alcohol consumption.

The minimum price for alcohol is a fiscal solution being proposed for cultural problems. The outcome of a minimum price will be monopoly profits for the supermarkets and the manufacturers of alcoholic drinks.

It is true that a lot of crime is committed by those intoxicated, that other social problems are caused and that there are health issues. But the solution is not to increase the price of alcohol. The solution is to change people. The Revival of the early twentieth century (begun before the outbreak of the Great War in 1914) saw both a fall in alcohol consumption and a fall in crime levels that continued through the Great Depression. But it was not the lack of alcohol that reduced crime in the early twentieth century. Instead, both reductions had a common root in the Christian Faith.

The Scottish Government will no doubt see a fall in sales of alcohol. But this will not represent the reduction in consumption, as cheaper booze will be imported from England, including Scotch Whisky. All that they are doing is treating people as statistics to be dictated to, and manipulated by, their shallow moralistic notions.

Kevin Marshall

 

The Morning Star’s denial of the Venezuelan Dictatorship

Guido Fawkes has an excellent example of the hard left’s denial of realities that conflict with their beliefs. From the Daily Politics, this is Morning Star editor Ben Chacko saying that the UN Human Rights Watch report on Venezuela was one-sided.

The Human Rights report can be found here.

The recent demonstrations need to be put into context. There are two contexts that can be drawn upon. The Socialist Side (with which many Socialists will disagree) is from Morning Star’s piece of 25th August The Bolivarian Revolution hangs in the balance.

They say

One of the world’s largest producers of oil, on which 95 per cent of its economy depends, the Bolivarian socialist government of Venezuela has, over the last 17 years, used its oil revenues to cut poverty by half and reduce extreme poverty to 5.4 per cent.

The government has built social housing; boosted literacy; provided free healthcare and free education from primary school to universities and introduced arts, music and cultural analysis programmes and many others targeting specific problems at the local level.

This sentence emphasises the hard-left bias.

The mainly middle-class protesters, most without jobs and income, accused President Nicolas Maduro of dictatorship and continued with their daily demonstrations and demands for a change of government. 

Folks without “jobs or income” are hardly middle-class, but might be former middle-class. They have been laid low by circumstances. Should they be blaming the Government or forces outside the Government’s control?

 

From Capx.co on 16th August – Socialism – not oil prices – is to blame for Venezuela’s woes. Also from upi.com on 17th February – Venezuela: 75% of the population lost 19 pounds amid crisis. This is the classic tale of socialism’s failure.

  • Government control of food supplies leads to shortages, which leads to rationing, which leads to more shortages and black market profiteering. This started in 2007 when oil prices were high, but not yet at the record high.
  • Inflation is rampant, potentially rising from 720% in 2016 to 1600% this year. This is one of the highest rates in the world.
  • The weight loss is due to food shortages. It is the poorest who suffer the most, though most of the population are in extreme poverty.
  • An oil-based economy needs to diversify. Venezuela has not. It needed to use high oil prices to invest in infrastructure. Instead, the Chavez regime expropriated oil production from successful private companies and handed it to Government cronies. A graphic from Forbes illustrates the problem.

About a decade ago at the height of the oil price boom, Venezuela’s known oil reserves more than tripled, yet production fell. It now has the highest oil reserves of any country in the world.

  • Crime has soared, whilst people are going hungry.
  • Maybe a million children are missing school through hunger and lack of resources to run schools. Short-run “successes” based on expropriating the wealth of others have reversed to create a situation far worse than before Chavez came to power.
  • Oil prices are in real terms above the level they were from 1986 to 2003 (with the exception of a peak for the first Gulf War) and comparable to the peak reached in 1973 with the setting up of the OPEC Cartel and oil embargo.

The reality is that Socialism always fails. But there is always a hardcore in denial, always coming up with empty excuses for failure, often blaming it on others. With the rise of Jeremy Corbyn (who receives a copy of the Morning Star daily), this hardcore has taken over the Labour Party. The example of Venezuela indicates the long-term consequences of their attaining power.

Kevin Marshall

Forest Trump – stupid is as stupid does

Last Tuesday’s BBC climate propaganda piece for the day was “‘Donald Trump forest’ climate change project gains momentum”.

A campaign to plant trees to compensate for the impact of President Trump’s climate policies has 120,000 pledges.
The project was started by campaigners upset at what they call the president’s “ignorance” on climate science.
Trump Forest allows people either to plant locally or pay for trees in a number of poorer countries.
Mr Trump says staying in the climate pact will damage the US economy, cost jobs and give a competitive advantage to countries such as India and China.
The organisers say they need to plant an area the size of Kentucky to offset the Trump effect.

Trump Forest website (motto Where ignorance grows trees) explains

Breathe easy, Mr President.

 US President Donald Trump doesn’t believe in the science of human-caused climate change. He wants to ignore one of the greatest threats to healthy life on Earth.

Trump wants to bring back coal despite scientists telling us we cannot afford to burn it, and despite economists telling us there’s more money to be made and more jobs available in renewable energy.

So we’re planting a forest to soak up the extra greenhouse gases Trump plans to put into our atmosphere.

We’re planting a global forest to offset Trump’s monumental stupidity.

The claim that Trump wants to “bring back coal”, or just rescind the policies to phase it out, is a question that can be answered by the empirical evidence. The BP Statistical Review of World Energy 2016 has estimates of coal consumption by country, measured in millions of tonnes of oil equivalent. For the USA I have created a graph.

US coal consumption in 2015 was 31% below its peak level, but it is far from being phased out. Further, the fall in consumption is not primarily down to government policy, but to switching to cleaner and cheaper shale gas. Add the two together in terms of millions of tonnes of oil equivalent, and consumption of the two fossil fuels has hardly changed in 20 years.

Natural Gas is not only cleaner, in terms of far fewer particulates emitted when burnt, it has the added benefit, for climate alarmists, of having around half the CO2 emissions. As a result, net emissions have been falling.

However, global warming is allegedly the result of rising levels of greenhouse gases, which in turn are mostly the result of increasing fossil fuel emissions. How does the falling consumption of coal in the USA compare to the global picture? Again the BP estimates give a fairly clear answer.

In 1965 the USA accounted for 20.8% of global coal consumption, and other rich OECD countries 42.3%. Fifty years later the shares had fallen to 10.3% and 15.2% respectively, yet combined OECD consumption had still increased by 11%. The lesson from this is that reducing global GHG emissions requires that developing countries reduce their emissions. China, which now accounts for just over 50% of global coal consumption, has committed to peak its emissions by 2030. India, whose coal consumption exceeded that of the USA for the first time in 2015, has no such commitment. With a population similar to China’s, India’s fast economic growth will lead to fairly high rates of increase in coal consumption over the next few years. Into the distant future, the rest of the world, with around half the global population, is likely to see significant increases in coal consumption.
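Those shares are enough to pin down roughly how much global coal consumption grew over the fifty years. A minimal sketch of the arithmetic, using only the percentages quoted above from the BP data:

```python
# Minimal sketch: the OECD share of global coal consumption fell from about
# 63.1% (20.8% USA + 42.3% other OECD) in 1965 to about 25.5% (10.3% + 15.2%)
# in 2015, yet OECD consumption itself still rose by ~11%. That is only
# possible if global consumption grew substantially.

share_1965 = 0.208 + 0.423   # OECD share of global coal consumption, 1965
share_2015 = 0.103 + 0.152   # OECD share, 2015
oecd_growth = 1.11           # OECD consumption in 2015 relative to 1965

# If global consumption grew by factor g, then share_2015 * g = share_1965 * oecd_growth
g = share_1965 * oecd_growth / share_2015
print(f"Implied growth in global coal consumption 1965-2015: {g:.2f}x")  # ~2.7x
```

An implied near-tripling of global consumption is why a falling OECD share is perfectly consistent with the OECD still burning slightly more coal than it did in 1965.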

The switch from coal to shale gas is a major reason why total USA emissions have been falling, as evidenced in this graph from the USA INDC Submission.

The 2025 target is a bit of a cheat. Most of the reduction would have been achieved without any policy; in fact, about one-third had already been achieved by 2013.

Trump Forest have a science page to explain the thinking behind the scheme. It states

If executed in its entirety the Clean Power Plan would prevent approximately 650 million tons of carbon dioxide from reaching the atmosphere over the next 8 years. Along with other actions, including tailpipe regulations (which Trump has also moved to eliminate), the United States was steering toward meeting its target for the Paris Agreement.

Also

The Paris Agreement, negotiated in the French capital in December 2015 was agreed to by over 190 nations. It is the first time the global community has agreed to address climate change by striving to keep the average global temperature increase below 2°C.

So how does the 650 MtCO2e over 8 years measure up against the commitments of the global community, in the context of “striving to keep the average global temperature increase below 2°C”?

The UNFCCC produced a useful graphic, summarizing all the INDC submissions.


Without the 650 MtCO2e claimed reduction from the US Clean Power Plan, if fully implemented, global emissions would be just over 1% higher. Rather than global emissions being about 12.5% above the 2°C warming path, they might be 14%. In other words, even if a doubling of CO2 (or equivalent) will lead to 3°C of warming, and such warming will have catastrophic consequences (despite the lack of strong, let alone incontrovertible, evidence), the US Clean Power Plan would make no noticeable difference to climate change, using the figures presented by the UNFCCC.

It gets worse. On the science page, Trump Forest has the following graphic, lifted from Climate Interactive.

I have looked at Climate Interactive’s figures before. At least on their figures from December 2015, they claimed that future per capita emissions in the USA would rise without policy, whereas per capita emissions have been falling since the 1973 oil crisis. It was the same with the EU, except that their per capita emissions have been falling since 1980. For China and Russia, per capita emissions are shown rising through the roof. It is as though, without the guiding hand of the green apostles, governments will deliberately and wastefully burn ever-increasing amounts of fossil fuels rather than promote the welfare of their nations. This is a graphic I produced from the Climate Interactive C-ROADS software (version v4.026v.071) RCP8.5 baseline scenario and the built-in population forecasts.

China is the most globally significant. Despite a forecast decline in population to 1.00 billion in 2100, its GHG emissions are forecast to peak at nearly 43 GtCO2e in 2090. That compares with 49 GtCO2e from over 7 billion people globally in 2010. Conversely, the non-policy developing countries (which do not want to play the game by committing to emissions reductions) are forecast to do disastrously economically and hence have very low emissions growth. That includes India, 50+ African nations, Pakistan, Bangladesh, the Philippines, Vietnam, Indonesia, Saudi Arabia, Iran, Iraq etc.
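A quick sketch of the per capita figures implied by those numbers shows how implausible the baseline is. This uses only the figures quoted above, nothing further from C-ROADS:

```python
# Implied per capita emissions from the C-ROADS RCP8.5 baseline figures quoted above.
china_2090_gt = 43.0       # GtCO2e forecast for China in 2090 (baseline scenario)
china_2090_pop = 1.00e9    # forecast Chinese population in 2090

world_2010_gt = 49.0       # global GtCO2e in 2010
world_2010_pop = 7.0e9     # global population in 2010

china_per_capita = china_2090_gt * 1e9 / china_2090_pop   # ~43 tCO2e per head
world_per_capita = world_2010_gt * 1e9 / world_2010_pop   # ~7 tCO2e per head

print(f"China 2090 (baseline): {china_per_capita:.0f} tCO2e per head")
print(f"World average 2010:    {world_per_capita:.0f} tCO2e per head")
```

On that baseline, the average Chinese person in 2090 would be emitting roughly six times the 2010 global average per head.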

The mere act of countries signing a bit of paper produces most of the claimed drop in emissions. The 650 MtCO2e claimed reduction from the US Clean Power Plan shrinks considerably if the marginal impact of the policy is measured, rather than the difference between an unreasonable forecast and an objective.

It gets worse. The elimination of cheap sources of energy, along with the plethora of regulations, makes energy more expensive. Apart from directly harming the living standards of households, this will increase energy costs to business, especially the high-energy-using industries such as steel, aluminum, and bulk chemicals. US industries will be placed at a competitive disadvantage to their competitors in non-policy emerging economies, so some of the US emissions savings from the policy will be offset by emissions increases elsewhere. There are no built-in safeguards to stop this happening.

It gets worse. Emerging economies not only have lower labour productivity, they also use energy less efficiently per unit of output. Further, countries like China and India have a larger element of coal in the energy mix than the USA. For these reasons, an unintended consequence of reducing emissions in the USA (and other developed countries) by shifting production overseas could be a net increase in global emissions. Virtue signaling achieves the opposite of its intentions.

However, the real world must not be allowed to confront the anointed in their evangelical zeal to save the world from Donald Trump. They might have to accept that the policies they virtue-signal about are both wrong and, if fully implemented, will cause great net harm. That would seriously hurt their feelings. As in the 1994 film Forrest Gump, the lesson is that the really stupid people are not those with naturally low IQs, but those with intelligence who do stupid things. This is what Forest Trump’s backers have achieved.


Kevin Marshall

 

Results of Sea-Level Rise Extremism in Miami-Dade

A couple of years ago I posted* in response to a post on sea level rise at The Conversation

A senior geology professor in Miami, who also chairs the science committee for the Miami-Dade Climate Change Advisory Task Force, has views on future sea level rise that are far more extreme than the available evidence supports.

My Summary started

The claim by Professor Wanless at the Conversation that sea levels could rise by 1.25 to 2m by 2100 is way too extreme. It is based on top-slicing the estimates of a NOAA 2012 report. The top-end estimates were not included by the UNIPCC in its AR5 report of September 2013. In fact, the UNIPCC report stated it had low confidence in estimates of sea level rise above its top-end figure of 0.82m.

The Task Force has now concluded. The Miami-Dade Climate Change website states

The Sea Level Rise Task Force, formed in July 2013, developed several important recommendations, which are being implemented in Miami-Dade County. The Task Force reviewed relevant data and prior studies and reports regarding the potential impact of sea level rise on public services and facilities, real estate, water and other ecological resources, and property and infrastructure.

The Introduction to the extensive report states (with conversions into mm inserted):-

Since reliable record keeping began over 100 years ago at the tide gauge in Key West, the average sea level has risen approximately 228 millimeters (or 9 inches). This rise has been primarily due to thermal expansion (as warmer water occupies more volume) and to melting from glaciers and ice sheets. Over the next century, the rate of sea level rise is very likely to accelerate due to increased melting from land-based ice sheets, in particular Greenland. Recognizing the need for clear, consistent, and local information about future sea level rise projections, The Southeast Florida Regional Climate Change Compact developed the “Unified Sea Level Rise Projection for Southeast Florida”. The updated projection, published in 2015, was developed by a panel of well-respected and informed scientists using the most recent and best available data. The projection (Figure 1) estimates that the region can expect to see average sea levels 6 to 10 inches (150 to 255 mm) higher by 2030 than they were in 1992, 14 to 34 inches (355 to 860 mm) higher by 2060, and 31 to 81 inches higher (790 to 2060 mm) by 2100. There is a more certain estimate for near-term changes and a greater uncertainty for estimates at the end of this century. This change in average sea levels will amplify the risks of storm surge and nuisance flooding.

This implies a massive acceleration in the rate of sea level rise. In the last couple of years the rate has indeed nudged up: the NOAA data now shows a rate of 237 mm a century, up from 228 mm when the report was written. That is likely a blip, and well within the margin of error.

To see how much sea level rise will have to accelerate to meet the forecasts, I will assume that from 1992 to 2015 sea levels rose by 60 mm (2.4 inches), or about 2.6 mm a year.

From 2016 to 2030 sea levels will need to rise by 6 to 13 mm a year on average, or roughly two to five times the current rate.

From 2016 to 2060 sea levels will need to rise by 6.5 to 18 mm a year on average, or roughly two and a half to seven times the current rate.

From 2016 to 2100 sea levels will need to rise by 8.5 to 23.5 mm a year on average, or roughly three to nine times the current rate.
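The arithmetic behind those three statements is simple to check. Here is a minimal sketch, assuming (as above) a 60 mm rise from 1992 to 2015 at the Key West gauge, and taking the Compact’s projection bands in mm above 1992 levels:

```python
# Required average rates of sea level rise to hit the Compact's projections,
# assuming a 60 mm rise from 1992 to 2015 (about 2.6 mm a year), as in the post.

MM_PER_INCH = 25.4
current_rate = 60 / (2015 - 1992)          # ~2.6 mm/year

projections = {                            # year: (low, high) in mm above 1992 levels
    2030: (150, 255),
    2060: (355, 860),
    2100: (790, 2060),
}

for year, (low, high) in projections.items():
    years_left = year - 2015
    lo_rate = (low - 60) / years_left      # mm/year needed to reach the low end
    hi_rate = (high - 60) / years_left     # mm/year needed to reach the high end
    print(f"{year}: {lo_rate:4.1f} to {hi_rate:4.1f} mm/yr "
          f"({lo_rate / current_rate:.1f}x to {hi_rate / current_rate:.1f}x the current rate)")

# Continuing at the current rate gives the straight-line further rise by 2100
straight_line_inches = current_rate * (2100 - 2015) / MM_PER_INCH
print(f"Straight-line rise 2015-2100: {straight_line_inches:.1f} inches")   # ~8.7 inches
```

The final line gives the straight-line extrapolation referred to in the next paragraph.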

The impact of Professor Wanless on the Committee’s output is clear to see. A straight-line forecast would be an 8 to 9 inch sea level rise by 2100. Many of the recommendations for planning will instead be based on a rise of 2 foot 6 inches to 6 foot 6 inches. Any reasonable person should take a tape measure to the Miami-Dade area – which is very low-lying – and imagine the difference between a dyke 12 inches high and a dyke seven feet high along the Miami sea front.

Alternatively, imagine the effect on property prices in Miami-Dade (population 2.6 million) and in neighbouring Broward and Palm Beach counties (3.1 million) if people really swallowed this whole. The tiny community of Fairbourne (724 people) in West Wales has had its properties made virtually valueless by a Welsh Government report and the alarmist reporting by the BBC.

*Thanks to Paul Homewood for a reminder to update my earlier post in his look at false alarmism on sea level rise in the Thames Estuary.

Kevin Marshall

UK votes for divorce from EU

The unexpected has happened. Despite the efforts of most of the British political establishment, the UK has voted by a narrow margin to leave the European Union. It should be viewed as a divorce which the interested parties had tried to prevent. As with a divorce, there need to be deep breaths all round to accept the coming dissolution. And, as with a divorce where children are involved, Britain and the EU need to work constructively to achieve the best futures for all.
British politicians need to reflect as well. Maybe two-thirds of MPs supported Remain. Many were in line with their constituents, especially in London, Scotland and the M4 corridor, where Prime Minister David Cameron’s constituency lies. But most of the North of England, particularly the Labour heartlands, voted to Leave. MPs have to clearly state that they accept the result and will join in obtaining the best futures for Britain and the countries of Europe. Those who cannot accept this should recognize they have no future in public service and resign from leading roles in politics.

Kevin Marshall

Britain Stronger in Europe Letter

I received a campaign letter from Britain Stronger in Europe today headed

RE: THE FACTS YOU NEED TO KNOW ABOUT EUROPE AND THE EU REFERENDUM

Putting the “RE:” in front is a bit presumptuous. It is not a reply to my request. However, I believe in looking at both sides of the argument, so here is my analysis. First the main points in the body of the letter:-

  1. JOBS – Over 3 million UK jobs are linked to our exports to the EU.
  2. BUSINESSES – Over 200,000 UK Businesses trade with the EU, helping them create jobs in the UK.
  3. FAMILY FINANCES – Leaving the EU will cost the average UK household at least £850 a year, and potentially as much as £1,700, according to research released by the London School of Economics.
  4. PRICES – Being in Europe means lower prices in UK shops, saving the average UK household over £350 a year. If we left Europe, your weekly shop would cost more.
  5. BENEFITS vs COSTS – For every £1 we put into the EU, we get almost £10 back through increased trade, investment, jobs, growth and lower prices.
  6. INVESTMENT – The UK gets £66 million of investment every day from EU countries – that’s more than we pay to be a member of the EU.

The first two points are facts, but only show part of the picture. The UK not only exports to the EU, but also imports. Indeed there is a net deficit with the EU, and a large deficit in goods. It is only due to a net surplus in services – mostly in financial services based in the City of London – that the trade deficit is not larger. The ONS provides a useful graphic illustrating both the declining share of exports to the EU, and the increasing deficit, reproduced below.

No one in the UK is suggesting that Brexit would mean a decline in trade, and it would be counter-productive for the EU not to reach a trade agreement with an independent UK when the EU has this large surplus.

The impact on FAMILY FINANCES is based upon work by the Centre for Economic Performance, an LSE-affiliated organisation. There are both a general paper and a technical paper to back up the claims. They are modelled estimates of the future, not facts. The modelled costs assume Britain exits the European Union without any trade agreements, despite such agreements being in the economic interests of both the UK and the EU. The report also performs a sleight of hand in estimating the contributions the UK would make post-Brexit. From page 18 of the technical paper:

We assume that the UK would keep contributing 83% of the current per capita contribution as Norway does in order to remain in the single market (House of Commons, 2013). This leads to a fiscal saving of about 0.09%.

The table at the foot of report page 22 (pdf page 28) gives the breakdown of the estimate, based on 2011 figures. The Norway figures are gross and contain a fixed-cost element; the UK economy is about six times the size of Norway’s, so it would not end up spending nearly as much per capita even on the same basis (see the sketch below). The UK figures, by contrast, are net: the UK pays into the EU roughly twice as much as it gets out. Ever since joining the Common Market in 1973, Britain has been the biggest loser in terms of net contributions, despite the rebates that Mrs Thatcher secured with much effort in the 1980s.
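To illustrate the fixed-cost point, here is a deliberately simplified sketch with invented round numbers – they are not taken from the CEP paper or the House of Commons figures – showing that a contribution with a fixed element costs a larger economy less per head, even at the same GDP per head:

```python
# Hypothetical illustration of the fixed-cost point above. All numbers are
# invented for the sketch; the only point is that spreading a fixed element
# over a bigger economy lowers the per-capita cost.

fixed_cost = 150.0      # fixed element of a Norway-style contribution, currency m/year (assumed)
variable_rate = 0.10    # variable element, % of GDP (assumed)

def per_capita_cost(gdp_bn, population_m):
    """Total contribution divided by population (currency per head)."""
    total_m = fixed_cost + variable_rate / 100 * gdp_bn * 1000  # in currency millions
    return total_m / population_m                               # millions / million people = per head

# Same GDP per head in both cases; only the overall size differs.
print(per_capita_cost(gdp_bn=350, population_m=5))         # Norway-sized economy -> 100 per head
print(per_capita_cost(gdp_bn=6 * 350, population_m=6 * 5)) # six times the size   -> 75 per head
```

The point is simply that scaling Norway’s gross per-capita contribution up to the UK overstates what the UK would actually pay on the same basis.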

The source of the PRICES claim is again the Centre for Economic Performance, but again with no direct reference. I assume it is from the same report, and forms part of the modelled forecast costs.

The BENEFITS vs COSTS statement is not comparing like with like. The alleged benefits to the UK are not all due to being a member of a club; many are a consequence of being an open economy trading with its neighbours. A true BENEFITS vs COSTS comparison would set future scenarios of Brexit against future scenarios of Remain. Leading economist Patrick Minford has published a paper for the Institute of Economic Affairs which finds a net benefit in leaving, particularly when likely future economic growth is taken into account.

The INVESTMENT point is just part of the BENEFITS vs COSTS statement. So, as with the PRICES point, it is making one point into two.

In summary, Britain Stronger in Europe claims I need to know six facts relevant to the referendum decision, but actually fails to provide a single one. The genuine facts are not solely due to the UK being a member of the European Union, whilst the relevant statements are opinions based on modelled future scenarios that are unlikely to happen. The choice is between various possible future scenarios inside the European Union and possible future scenarios outside it. The case for Remain should be proclaiming the achievements of the European Union in making a positive difference to the lives of the 500 million people in its 28 member states, along with future pathways in which it will build on those achievements. The utter lack of these arguments, in my opinion, is the strongest argument for voting to leave.

Kevin Marshall

 

Copy of letter from Britain Stronger in Europe