Study on UK Wind and Solar potential fails on costs

Oxford University’s Smith School of Enterprise and the Environment in August published a report, “Could Britain’s energy demand be met entirely by wind and solar?”, a short briefing, “Wind and solar power could significantly exceed Britain’s energy needs”, and a press release here. Being a (slightly) manic beancounter, I will review the underlying assumptions, particularly the costs.

Summary Points

  • Projected power demand is likely too high, as demand will fall as energy becomes more expensive.
  • The report assumes massively increased load factors for wind turbines. A lot of this increase comes from using benchmarks contingent on technological advances.
  • The theoretical scaling up of UK wind power is implausible: 3.8x for onshore wind, 9.4x for fixed offshore and >4000x for floating offshore wind, all to be achieved in less than 27 years.
  • The most recent cost of capital figures are from 2018, well before the recent steep rises in interest rates. The claim of falling discount rates is false.
  • Current wind turbine capacity is still majority land-based, with a tiny fraction floating offshore. A shift in the mix to more expensive technologies leads to an 83% increase in average levelised costs. Even with the improbable load factor increases, the average levelised cost increase is still 37%.
  • The biggest cost rise is from the need to store days’ worth of electricity. The annual cost could be greater than the 2023/24 NHS budget.
  • The authors have not factored in the considerable risks of diminishing marginal returns.

Demand Estimates

The briefing summary states

299 TWh/year is an average of 34 GW, compared with 30 GW average demand in 2022 at grid.iamkate.com. I have no quibble with this value. But what is the five-fold increase by 2050 made up of?

From page 7 of the full report.

So 2050 maximum energy demand will be slightly lower than today? For wind (comprising 78% of potential renewables output) the report reviews the estimates in Table 1, reproduced below as Figure 1.

Figure 1: Table 1 from page 10 of the working paper

The study has quite high estimates of output compared to previous studies, but things have moved on. These are of course outputs per year. If the wind turbines operated at 100% capacity, 24 hours a day, 365.25 days a year, the required capacity would be 265.5 GW, made up of 23.5 GW for onshore, 64 GW for fixed offshore and 178 GW for floating offshore. In my opinion 1500 TWh is very much on the high side, as demand will fall as energy becomes far more expensive. Car use will fall, as will energy use in domestic heating when considerably cheaper domestic gas is abandoned.
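For anyone wanting to check the conversions, here is a minimal Python sketch of the arithmetic. It only converts annual output in TWh into continuous GW and back; the 45% blended load factor in the last line is purely illustrative, not a figure from the report.

```python
HOURS_PER_YEAR = 365.25 * 24   # 8,766 hours

def average_gw(twh_per_year):
    """Continuous average power (GW) equivalent to an annual output in TWh."""
    return twh_per_year * 1000 / HOURS_PER_YEAR

def nameplate_gw(twh_per_year, load_factor):
    """Installed capacity (GW) needed to deliver an annual output at a given load factor."""
    return average_gw(twh_per_year) / load_factor

print(round(average_gw(299)))           # ~34 GW: current average demand quoted in the briefing
print(round(average_gw(1500)))          # ~171 GW: average demand at 1500 TWh/year
print(round(nameplate_gw(1500, 0.45)))  # ~380 GW: illustrative wind capacity at a 45% blended load factor
```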

Wind Turbine Load Factors

Wind turbines don’t operate at anything like 100% of capacity, and the report does not assume they do. But it does assume load factors of 35% for onshore and 55% for offshore. Currently floating offshore capacity is insignificant, so all offshore wind can be treated together. The UK Government produces quarterly data on renewables, including load factors. In 2022 these averaged about 28% for onshore wind (17.6% in Q3 to 37.6% in Q1) and 41% for offshore wind (25.9% in Q3 to 51.5% in Q4). This data, shown in four charts in Figure 2, does not seem to show an improving trend in load factors.

Figure 2 : Four charts illustrating UK wind load capacities and total capacities
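The gap between the 2022 outturn and the report’s assumed load factors can be expressed as an output uplift per GW of installed capacity. A short sketch of that calculation follows; the resulting 25% and 34% uplifts reappear in the cost comparison later.

```python
# Output uplift implied by the report's assumed load factors over the 2022 outturn
outturn_2022 = {"onshore": 0.28, "offshore": 0.41}  # annual averages from the UK quarterly data
assumed = {"onshore": 0.35, "offshore": 0.55}       # load factors assumed in the report

for tech in outturn_2022:
    uplift = assumed[tech] / outturn_2022[tech] - 1
    print(f"{tech}: {uplift:.0%} more output per GW of capacity")
# onshore: 25% more, offshore: 34% more
```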

The difference is that the report uses benchmark standards, rather than extrapolating from existing experience. See footnote 19 on page 15. The first reference cited is a 2019 DNV study for the UK Department for Business, Energy & Industrial Strategy. The title – “Potential to improve Load Factor of offshore wind farms in the UK to 2035” – should give a clue as to why benchmark figures might be inappropriate for calculating future average loads, especially when the report discusses new technologies and much larger turbines being used, whilst also assuming some load factor improvements from reduced downtime for maintenance.

Scaling up

The report states on page 10

From the UK Government quarterly data on renewables, these are the figures for Q3 2022. Q1 2023 gives 15.2 GW onshore and 14.1 GW offshore. This offshore value was almost entirely fixed; current floating offshore capacity is 78 MW (0.078 GW). This implies that to reach the report’s objective for 2050 of 1500 TWh, onshore wind needs to increase 3.8 times, fixed offshore wind 9.4 times and floating offshore wind over 4000 times. Could diminishing returns, in both output capacities and costs per unit of capacity, set in with this massive scaling up? Or maintenance problems from rapidly installing floating wind turbines of a size much greater than anything currently in service? On the other hand, the report notes that Scotland has higher average wind speeds than “Wales or Britain”, by which I suspect they mean that Scotland has higher average wind speeds than the rest of the UK. If so, they could be assuming a good proportion of the floating wind turbines will be located off Scotland, where wind speeds are higher and therefore the sea more treacherous. This map of just 19 GW of proposed floating wind turbines is indicative.
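To give a feel for the build rates involved, here is a sketch using the current capacities above. The 2050 figures are my own illustrative targets, chosen simply to reproduce the multiples quoted in the text; the report’s actual capacity figures may differ.

```python
# Current capacities (GW) from the UK quarterly data, with illustrative 2050 targets (assumptions)
current = {"onshore": 15.2, "fixed offshore": 14.1, "floating offshore": 0.078}
target_2050 = {"onshore": 58, "fixed offshore": 133, "floating offshore": 315}

years_remaining = 2050 - 2023
for tech, now in current.items():
    multiple = target_2050[tech] / now
    build_rate = (target_2050[tech] - now) / years_remaining
    print(f"{tech}: {multiple:,.0f}x current capacity, ~{build_rate:.1f} GW added per year")
```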

Cost of Capital

On page 36 the report states

These rates are indeed found in “Table 2.7: Technology-specific hurdle rates provided by Europe Economics”. My quibble is not that they are 2018 rates as such, but that during 2008-2020 interest rates were at historically low levels. A 2023 paper should recognise that interest rates have leapt globally since then. In the UK, the base rate has risen from 0.1% in 2020 to 5.25% at the beginning of August 2023. This will surely affect the discount rates in use.

Wind turbine mix

Costs of wind turbines vary from project to project. However, the location determines the scale of costs. It is usually cheaper to put up a wind turbine on land than to fix it to a sea bed and then run a cable to land. That in turn is cheaper than anchoring a floating turbine in water too deep for a fixed foundation. If so, moving from land-based to floating offshore will increase average costs. For this comparison I will use some 2021 levelised costs of energy for wind turbines from the US National Renewable Energy Laboratory (NREL).

Figure 3 : Page 6 of the NREL presentation 2021 Cost of Wind Energy Review

The levelised costs are $34/MWh for land-based, $78/MWh for fixed offshore, and $133/MWh for floating offshore. Based on the 2022 outputs, the UK weighted average levelised cost was about $60/MWh. On the same basis, the report’s weighted average levelised cost for 2050 is about $110/MWh. But allowing for the 25% load factor improvement for onshore and 34% for offshore brings the average levelised cost down to $82/MWh. So the different mix of wind turbine types leads to an 83% average cost increase, but efficiency improvements bring this down to 37%. Given the use of benchmarks discussed above, it would be reasonable to assume that the prospective mix variance cost increase is over 50%, ceteris paribus.
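The weighted averages can be reproduced approximately as follows. The 2050 capacity split is my own reconstruction from the scale-up multiples above, and the weights are proportional to capacity × load factor, so treat the sketch as illustrating the mechanics rather than the report’s exact numbers.

```python
lcoe = {"onshore": 34, "fixed offshore": 78, "floating offshore": 133}   # NREL 2021, $/MWh

def weighted_lcoe(capacity_gw, load_factor, cost):
    output = {t: capacity_gw[t] * load_factor[t] for t in capacity_gw}   # proportional to annual TWh
    return sum(output[t] * cost[t] for t in output) / sum(output.values())

# 2022: UK capacities and outturn load factors quoted earlier (tiny floating fleet treated like fixed)
cost_2022 = weighted_lcoe({"onshore": 15.2, "fixed offshore": 14.1, "floating offshore": 0.078},
                          {"onshore": 0.28, "fixed offshore": 0.41, "floating offshore": 0.41}, lcoe)

# 2050: capacities scaled up roughly 3.8x / 9.4x / 4000x, with the report's assumed load factors
cap_2050 = {"onshore": 58, "fixed offshore": 133, "floating offshore": 315}
lf_2050 = {"onshore": 0.35, "fixed offshore": 0.55, "floating offshore": 0.55}
cost_2050 = weighted_lcoe(cap_2050, lf_2050, lcoe)

# Same mix, with each LCOE reduced by the 25% / 34% output uplifts calculated earlier
lcoe_improved = {"onshore": 34 / 1.25, "fixed offshore": 78 / 1.34, "floating offshore": 133 / 1.34}
cost_2050_improved = weighted_lcoe(cap_2050, lf_2050, lcoe_improved)

print(round(cost_2022), round(cost_2050), round(cost_2050_improved))   # ~60, ~110, ~82 $/MWh
print(f"+{cost_2050/cost_2022 - 1:.0%} from the mix alone, "
      f"+{cost_2050_improved/cost_2022 - 1:.0%} after load factor gains")
```

On this reconstruction the increases come out at about 85% and 38%, close to the 83% and 37% quoted above; the small differences are down to rounding in the assumed weights.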

Levelised costs from the USA may translate poorly to the future UK, which may have different cost structures. Rather than speculating, it is worth understanding why the levelised cost of floating wind turbines is 70% more than that of fixed offshore turbines, and 290% more (almost four times) that of onshore turbines. To this end I have broken the levelised costs down into their component parts.

Figure 4 : NREL Levelised Costs of Wind 2021 Component Breakdown. A) Breakdown of total costs B) Breakdown of “Other Capex” in chart A

Observations

  • Financial costs are NOT the costs of borrowing on the original investment. The biggest element is cost contingency, followed by commissioning costs. Therefore, I assume that the likely long-term rise in interest rates will impact the whole levelised cost.
  • Costs of turbines are a small part of the difference in costs.
  • Unsurprisingly, operating costs, including maintenance, are significantly higher out at sea than on land. Similarly for assembly & installation and for electrical infrastructure.
  • My big surprise is how much greater the cost of foundations is for a floating wind turbine than for a fixed offshore wind turbine. This needs further investigation. In the North Sea there is plenty of experience of floating massive objects such as oil rigs, so the technology is not completely new.

What about the batteries?

The above issues may be trivial compared to the issue of “battery” storage for when 100% of electricity comes from renewables, for when the sun don’t shine and the wind don’t blow. This is particularly true in the UK, where there can be a few days of no wind, or even a few weeks of well below average wind. Interconnectors will help somewhat, but it is likely that neighbouring countries could be experiencing similar weather systems, so might not have any spare. This requires considerable storage of electricity. How much will depend on the excess renewables capacity, the variability of weather systems relative to demand, and the acceptable risk of blackouts, or of leaving less essential users with limited or no power. As a ballpark estimate, I will assume 10 days of winter storage. 1500 TWh of annual usage gives an average demand of 171 GW. In winter this might be 200 GW, which over 10 days is 48,000 GWh, or 48 million MWh. The problem is how much this would cost.

In April 2023 a 30 MWh storage system was announced costing £11 million. This was followed in May by a 99 MWh system costing £30 million. These respectively cost £367,000 and £303,000 per MWh. I will assume there will be considerable cost savings in scaling this up, with a cost of £100,000 per MWh. Multiplying this by 48,000,000 gives a cost estimate of £4.8 trillion, or nearly twice the 2022 UK GDP of £2.5 trillion. If one assumes a 25-year life for these storage facilities, this gives a more “modest” £192 billion annual cost. If this is divided by an annual usage of 1500 TWh, it comes out at a cost of 12.8p per kWh. These costs could be higher if interest rates are higher. The £192 billion is more than the 2023/24 NHS budget.
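The storage arithmetic, following the assumptions above, is set out in this short sketch; the £100,000 per MWh figure is the assumed cost after scale economies, and financing costs are ignored, as in the text.

```python
# Ballpark storage sizing and cost, following the assumptions in the text
winter_demand_gw = 200
days_of_storage = 10
storage_mwh = winter_demand_gw * 1000 * 24 * days_of_storage   # 48,000,000 MWh

cost_per_mwh = 100_000                    # assumed after scale economies (recent projects: £303k-£367k/MWh)
capital_cost = storage_mwh * cost_per_mwh # £4.8 trillion
annual_cost = capital_cost / 25           # £192 billion a year over an assumed 25-year life

annual_demand_kwh = 1500 * 1e9            # 1500 TWh expressed in kWh
print(f"capital £{capital_cost/1e12:.1f}tn, annual £{annual_cost/1e9:.0f}bn, "
      f"{annual_cost/annual_demand_kwh*100:.1f}p per kWh")
# capital £4.8tn, annual £192bn, 12.8p per kWh
```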

This storage requirement could be conservative. On the other hand, if overall energy demand is much lower due to energy being unaffordable, it could be somewhat less. Without fossil fuel backup, there will be a compromise between the cost of energy storage and rationing, with the risk of blackouts.

Calculating the risks

The approach of putting out a report with grandiose claims based on a number of assumptions, then expecting the public to accept those claims as gospel, is just not good enough. There are risks that need to be quantified. Then, as a project progresses, these risks can be managed, so the desired objectives are achieved in a timely manner using the least resources possible. These are things that ought to be rigorously reviewed before a project is adopted, learning from past experience and drawing on professionals in a number of disciplines. As noted above, there are a number of assumptions made where there are risks of cost overruns and/or shortfalls in claimed delivery. However, the biggest risks come from the law of diminishing marginal returns, a concept that has been understood for over 200 years. For offshore wind the optimal sites will be chosen first. Subsequent sites for a given technology will become more expensive per unit of output. There is also the technical issue of increased numbers of wind turbines having a braking effect on wind speeds, especially under stable conditions.

Concluding Comments

Technically, the answer to the question “could Britain’s energy demand be met entirely by wind and solar?” is in the affirmative, but not nearly so positively as the Smith School makes out. There are underlying technical assumptions that will likely not be borne out by further investigation. However, in terms of costs and reliable power output, the answer is strongly in the negative. This is an example of where rigorous review is needed before accepting policy proposals into the public arena. After all, the broader justification of contributing towards preventing “dangerous climate change” cannot be upheld, as an active global net zero policy does not exist. Therefore, the only justification is on the basis of being net beneficial to the UK. From the above analysis, this is certainly not the case.

Key Error in Climate Policy Illustrated

A good example of the key logical error in climate policy justifications is provided by a question posed in a Los Angeles Times article and repeated by Prof. Roger Pielke Jnr on Twitter. This error completely undermines the case for cutting greenhouse gas emissions.

The question is

What’s more important: Keeping the lights on 24 hours a day, 365 days a year, or solving the climate crisis?

It looks to be a trade-off question. But is it a real trade-off?

Before going further I will make some key assumptions for the purposes of this exercise. This is simply to focus in on the key issue.

  1. There is an increasing human-caused climate crisis that will only get much worse, unless…
  2. Human greenhouse gas (GHG) emissions are cut to zero in the next few decades.
  3. The only costs of solving the climate crisis to the people of California are the few blackouts every year. This will remain fixed into the future. So, for this exercise, I shall assume that the fact that California’s electricity costs are substantially higher than the US national average has nothing to do with any particular state climate-related policies.
  4. The relevant greenhouse gases are well-mixed in the atmosphere. Thus the emissions of California do not sit in a cloud forever above the sunshine state, but are evenly dispersed over the whole of the earth’s atmosphere.
  5. Global GHG emissions are the aggregate emissions of all nation states (plus international emissions from sea and air). The United States’ GHG emissions are the aggregate emissions of all its member states.

Let us put the blackouts in context. The State of California has a helpful graphic showing a breakdown of the state GHG emissions.

Figure 1: California’s greenhouse gas emissions in 2020 broken out by economic sector

Electricity production, including imports, accounts for just 16% of California’s GHG emissions, or about 60 MMtCO2e. In 2020 global GHG emissions were just over 50,000 MMtCO2e. So replacing existing electricity production from fossil fuels with renewables will cut global emissions by 0.12%. Eliminating all of California’s GHG emissions from every other source as well would cut global emissions by 0.74%. So California alone cannot solve the climate crisis. There is no direct trade-off, but rather enduring the blackouts (or other costs) for a very marginal impact on climate change for the people of California. These tiny benefits will of course be shared by the 7,960 million people who do not live in California.
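The shares are easy to verify; a minimal sketch, where the state total of roughly 375 MMtCO2e is my own back-calculation from the 16% figure:

```python
# California's contribution to global GHG emissions, using the 2020 figures quoted above
ca_electricity = 60                 # MMtCO2e, electricity including imports (~16% of the state total)
ca_total = ca_electricity / 0.16    # ~375 MMtCO2e (back-calculated assumption)
world_total = 50_000                # MMtCO2e, global GHG emissions in 2020

print(f"electricity: {ca_electricity / world_total:.2%} of global emissions")  # ~0.12%
print(f"all sectors: {ca_total / world_total:.2%} of global emissions")        # ~0.75%, close to the 0.74% above
```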

More generally, the error is in assuming that the world follows the “leaders” on climate change. Effectively, the rest of the world is assumed to think as the climate consensus does. An example is from the UK in March 2007, when the then Environment Minister David Miliband was promoting a Climate Bill that later became the Climate Change Act 2008.

In the last 16 years under the UNFCCC COP process there have been concerted efforts to get all countries to come “onboard”, so that the combined impact of local and country-level sacrifices produces the total benefit of stopping climate change. Has this laudable aim been achieved?

I will just go back to 2015, despite the United Nations Framework Convention on Climate Change Treaty (that set up the UNFCCC body) entering into force in March 1994. In preparation for COP 21 Paris most countries submitted “Intended Nationally Determined Contributions” (INDCs). The submissions outlined what post-2020 climate actions they intended to take under a new international agreement, now called the Paris Agreement. On the 1st November 2015 the UNFCCC produced a Synthesis Report of the aggregate impact of the INDCs submitted up to 1st October. The key chart is reproduced below.

Figure 2 : Summary results on the aggregate effect of INDCs to 1st November 2015.

The aggregate impact is for emissions still to rise through to 2030, with no commitments made thereafter. COP21 Paris failed in its objective of producing a plan to reduce global emissions, as was admitted in the ADOPTION OF THE PARIS AGREEMENT communique of 12/12/2015.

  17. Notes with concern that the estimated aggregate greenhouse gas emission levels in 2025 and 2030 resulting from the intended nationally determined contributions do not fall within least-cost 2 ˚C scenarios but rather lead to a projected level of 55 gigatonnes in 2030, and also notes that much greater emission reduction efforts will be required than those associated with the intended nationally determined contributions in order to hold the increase in the global average temperature to below 2 ˚C above pre-industrial levels by reducing emissions to 40 gigatonnes or to 1.5 ˚C above pre-industrial levels by reducing to a level to be identified in the special report referred to in paragraph 21 below;

Paragraph 21 states

  21. Invites the Intergovernmental Panel on Climate Change to provide a special report in 2018 on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways;

The request led, 32 months later, to the scary IPCC SR1.5 of 2018. The annual COP meetings have also been pushing very hard for massive changes. Has this worked?

Figure 3 : Fig ES.3 from UNEP Emissions Gap Report 2022 demonstrating that global emissions have not yet peaked

The answer from Fig ES.3 of the UNEP Emissions Gap Report 2022 executive summary is a clear negative. The chart, reproduced above as Figure 3, shows that no significant changes have been made to the commitments since 2015, in that aggregate global emissions will still be higher in 2030 than in 2015. Indeed the main estimate for emissions in 2030 is now 58 GtCO2e, up from the 55 GtCO2e projected in 2015. Attempts to control global emissions, and hence the climate, have failed.

Thus, in the context of the above assumptions, the question for the people of California becomes:

What’s more important: Keeping a useless policy that is causing blackouts, or not?

To help clarify the point, there is a useful analogy with medicine.

If a treatment is not working, but causing harm to the patient, should you cease treatment?

In medicine, as in climate policy, whether or not the diagnosis was correct is irrelevant. Morally it is wrong to administer useless and harmful policies / treatments. However, there will be strong resistance to any form of recognition of the reality that climate mitigation has failed.

Although the failure to reduce emissions at the global level is more than sufficient to nullify any justification for emissions reductions at sub-global levels, there are many other reasons that would further improve the case for a rational policy-maker to completely abandon all climate mitigation policies.

What would constitute AGW being a major problem?

Ron Clutz has an excellent post, this time reporting on A Critical Framework for Climate Change. In the post Ron looks at the Karoly/Tamblyn–Happer Dialogue on Global Warming at Best Schools, particularly at Happer’s major statement. In my opinion these dialogues are extremely useful, as (to use an old-fashioned British term) the antagonists are forced by skilled opponents to look at the issues on a level playing field. With the back and forth of the dialogue, the relative strengths and weaknesses are exposed. This enables those on the outside to compare and contrast for themselves. Further, as such dialogues never resolve anything completely, they can point to new paths to develop understanding.

Ron reprints two flow charts. Whilst the idea of setting out the issues in this way is extremely useful, I have issues with the detail.

 

In particular, on the scientific question “Is it a major problem?”, I do not think the “No” answers are correct.
If there were no MWP, Roman warming or Bronze Age warming, then this would be circumstantial evidence for the current warming being human-caused. If there have been three past warming phases, at about 1000, 2000 and 3000 years ago, then this is strong circumstantial evidence that the current warming is at least in part due to unknown natural or random factors. Without any past warming phases at all, it would point to the distinctive uniqueness of the current warming, but that still does not necessarily mean it is a major net problem. There could be benefits as well as adverse consequences to warming. But the existence of previous warming phases in many studies, and the fact that it can only be claimed through flawed statistics that the majority of warming since 1950 is human-caused (when there was some net warming for at least 100 years before that), suggest a demonstrable marginal impact of human causes far less than 100% of total warming. Further, there are issues with:

(a) the quantity of emissions of a trace gas required to raise atmospheric levels by a unit amount

(b) the amount of warming from a doubling of the trace gas – climate sensitivity

(c) the time taken for rises in a trace gas to raise temperatures.

As these are all extremely difficult to measure, there is a huge range of equally valid answers. This is an example of the underdetermination of scientific theory.

At the heart of the underdetermination of scientific theory by evidence is the simple idea that the evidence available to us at a given time may be insufficient to determine what beliefs we should hold in response to it.

But even if significant human-caused warming can be established, this does not constitute a major problem. Take sea-level rise. If it could be established that human-caused warming was leading to sea level rise, this may not, on a human scale, be a major problem. At current rates sea levels measured from satellites are rising on average by 3.5mm per year. The average adjusted rise from the tide gauges is less than that – and the individual tide gauges show little or no acceleration in the last century. But even if that rate accelerated to three or four times that level, it would not be catastrophic in terms of human planning timescales.

The real costs to humans are expressed in values. The really large costs of climate change are based not so much on the physical side as on implausible assumptions about the lack of human responses to ongoing changes in the environment. In economics, the neoclassical assumptions of utility maximisation and profit maximisation are replaced by the dumb actor assumption.

An extreme example I found last year: in Britain it was projected that unmitigated global warming could lead to 7000 excess heatwave deaths in the 2050s compared to today. The projection was that most of these deaths would be among the over-75s in hospitals and care homes. The assumption was that medical professionals and care workers would carry on treating those in their care in the same way as currently, oblivious to increasing suffering and death rates.

Another extreme example from last year was an article in Nature Plants (a biology journal), Decreases in global beer supply due to extreme drought and heat. There were at least two examples of the dumb actor assumption. First was the failure of farmers to adjust output according to changing yields and hence profits. For instance, in southern Canada (Calgary / Edmonton) barley yields under the most extreme warming scenario were projected to fall by around 10-20% by the end of the century. But in parts of Montana and North Dakota – just a few hundred miles south – they would almost double. It was assumed that farmers would continue producing at the same rates regardless, with Canadian farmers making losses and those in the northern USA making massive windfall profits. The second was in retail. For instance, the price of a 500ml bottle of beer was projected to increase under the most extreme scenario by $4.84 in Ireland, compared to $1.90 in neighbouring Britain. Given that most of the beer sold comes from the same breweries; that current retail prices in the UK and Ireland are comparable (in Ireland higher taxes mean prices up to 15% higher); that a 500ml bottle costs about $2.00-$2.50 in the UK; and that there are no significant trade barriers, there is plenty of scope, with even a $1.00 differential, for an entrepreneur to purchase a lorry load of beer in the UK and ship it over the Irish Sea.

On the other hand nearly all of the short-term forecasts of an emerging major problem have turned out to be false, or highly extreme. Examples are:

  • Himalayan Glaciers will disappear by 2035
  • Up to 50% reductions in crop yields in some African Countries by 2020
  • Arctic essentially ice-free in the summer of 2013
  • Children in the UK not knowing what snow is a few years after 2000
  • In the UK after 2009, global warming will result in milder and wetter summers

Another example of the distinction between a mere difference and a major problem is the February weather. Last week the UK experienced some record high daytime maximum temperatures of 15-20C. It was not a problem. In fact, accompanied by very little wind and clear skies, it was extremely pleasant for most people. Typical weather for the month is light rain, clouds and occasional gales. Children on half-term holidays were able to play outside, and when back in school last week many lessons were diverted to the outdoors. Over in Los Angeles, average highs were 61F (16C) compared to the February average of 68F (20C). This has created issues for the elderly in staying warm, but better skiing conditions in the mountains. More a difference than a major problem.

So in summary, for AGW to be a major problem it is far from sufficient to establish that most of the global warming is human caused. It is necessary to establish that the impact of that warming is net harmful on a global scale.

Kevin Marshall

 

Two false claims on climate change by the IPPR

An IPPR report, This is a crisis: Facing up to the age of environmental breakdown, published yesterday, within a few hours received criticism from Paul Homewood at notalotofpeopleknowthat, Paul Matthews at cliscep and Andrew Montford at The GWPF. It is based on an April 2018 paper by billionaire Jeremy Grantham. Two major issues that I want to cover in this post are contained in a passage on page 13.

Climate Change : Average global surface temperature increases have accelerated, from an average of 0.007 °C per year from 1900–1950 to 0.025 °C from 1998–2016 (Grantham 2018). ……. Since 1950, the number of floods across the world has increased by 15 times, extreme temperature events by 20 times, and wildfires sevenfold (GMO analysis of EM-DAT 2018).

These two items are lifted from an April 2018 paper The Race of Our Lives Revisited by British investor Jeremy Grantham CBE. I will deal with each in turn.

Warming acceleration

The claim concerning how warming has accelerated comes from Exhibit 2 of The Race of Our Lives Revisited.

The claimed Gistemp trends are as follows

1900 to 1958  – 0.007 °C/year

1958 to 2016  – 0.015 °C/year

1998 to 2016  – 0.025 °C/year

Using the Skeptical Science trend calculator for Gistemp I get the following figures.

1900 to 1958  – 0.066 ±0.024 °C/decade

1958 to 2016  – 0.150 ±0.022 °C/decade

1998 to 2016  – 0.139 ±0.112 °C/decade

That is odd. Warming rates seem to be slightly lower for 1998-2016 compared to 1958-2016, not higher. This is how Grantham may have derived the incorrect 1998-2016 figure.

For 1998-2016 the range of uncertainty is 0.003 to 0.025 °C/year.

It would appear that the 1900 to 1958 & 1958 to 2016 warming rates are as from the trend calculator, whilst the 1998 to 2016 warming rate of 0.025 °C/year is at the top end of the 2σ uncertainty range.
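A quick check of that explanation, converting the trend calculator’s per-decade figures (central value ± 2σ) into per-year rates:

```python
# GISTEMP trends quoted above: central value and 2-sigma half-width, in deg C per decade
trends = {
    "1900-1958": (0.066, 0.024),
    "1958-2016": (0.150, 0.022),
    "1998-2016": (0.139, 0.112),
}
for period, (central, two_sigma) in trends.items():
    low, mid, high = (central - two_sigma) / 10, central / 10, (central + two_sigma) / 10
    print(f"{period}: {low:.3f} to {high:.3f} deg C/yr (central {mid:.3f})")
# 1998-2016 gives a central 0.014 deg C/yr, but an upper bound of 0.025 deg C/yr,
# which matches Grantham's quoted figure.
```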

Credit for spotting this plausible explanation should go to Mike Jackson.

Increase in climate-related disasters since 1950

The IPPR report states

Since 1950, the number of floods across the world has increased by 15 times, extreme temperature events by 20 times, and wildfires sevenfold

The source is Exhibit 7 of The Race of Our Lives Revisited.

The 15 times “Floods” increase is for 2001-2017 compared to 1950-1966.
The 20 times “Extreme Temperature Events” increase is for 1996-2017 compared to 1950-1972.
The 7 times “Wildfires” increase is for 1984-2017 compared to 1950-1983.

Am I alone in thinking there is something a bit odd in the statement about the increases being from 1950? Grantham is comparing different time periods, yet the IPPR makes it appear that the starting point is a single year.

But is the increase in the data replicated in reality?

Last year I downloaded all the EM-DAT – The International Disasters Database – data from 1900 to the present day. I have classified their disaster types into four categories.

Over 40% are the “climate”-related disaster types from Grantham’s analysis. Note that this lists the number of “occurrences” in a year. If, within a country in a year there is more than one occurrence of a disaster type, they are lumped together.

I have split the number of occurrences in the four categories by decade. The 2010s figures cover only 8.5 years.

“Climate” disasters have increased in the database. Allowing for 8.5 years in the current decade, compared to 1900-1949, “Climate” disasters are 65 times more frequent. Similarly, epidemics are 47 times more frequent, geological events 16 times and “other” disasters 34 times.

Is this based on reality, or just vastly improved reporting of disasters from the 1980s? The real impacts are indicated by the numbers of reported deaths.

The number of reported disaster deaths has decreased massively compared to the early twentieth century in all four categories, despite the number of reported disasters increasing many times. Allowing for 8.5 years in the current decade, compared to 1900-1949, “Climate” disaster deaths are down 84%. Similarly, epidemic deaths are down by 98% and “other” disaster deaths by 97%. Geological disaster deaths are, however, up by 27%. The reported 272,431 deaths in the 2010s that I have classified under “Geology” include the estimated 222,570 deaths in the 2010 Haitian earthquake.

If one looks at the death rate per reported occurrence, “Climate” disaster death rates have declined by 97.7% between 1900-1949 and the 2010s. Due to the increase in reporting, and the more than doubling of the world population, this decline is most likely understated. 

The Rôle of Progressives in Climate Mitigation

The IPPR describes itself as The Progressive Policy Think Tank. From the evidence of the two issues above, they have not actually thought about what they are saying. Rather, they have just copied the highly misleading data from Jeremy Grantham. There appears to be no real climate crisis emerging when one examines the available data properly. The death rate from extreme weather-related events has declined by at least 97.7% between the first half of the twentieth century and the current decade. This is a very important point for policy. Humans have adapted to the current climate conditions, just as they have reduced the impact of infectious diseases and are increasingly adapting to the impacts of earthquakes and tsunamis. If the climate becomes more extreme, or sea level rise accelerates significantly, humans will adapt as well.

There is a curious symmetry here between the perceived catastrophic problem and the perceived efficacy of the solution, that is, for governments to reduce global emissions to zero. The theory is that rising human emissions, mostly from the burning of fossil fuels, are going to cause dangerous climate change. Global emissions involve 7,600 million people in nearly 200 countries. Whatever the UK does, with less than 1% of the global population and less than 1% of global emissions, makes virtually no difference to global emissions.

Globally, there are two major reasons that reducing global emissions will fail.

First is that developing countries, with 80%+ of the global population and 65% of emissions, are specifically exempted from any obligation to reduce their emissions (see Paris Agreement Articles 2.1(a), 2.2 and 4.1). Based on the evidence of the UNEP Emissions Gap Report 2018, and of the COP24 Katowice meeting in December, there is no change of heart in prospect.

Second is that the reserves of fossil fuels, both proven and estimated, are considerable and spread over many countries. Reducing global emissions to zero in a generation would mean leaving in the ground fossil fuels that provide a significant part of government revenue in countries such as Russia, Iran, Saudi Arabia and Turkmenistan. Keeping some fossil fuels in the ground in the UK, Canada, Australia or the United States will increase global prices and thus production elsewhere.

The IPPR is promoting costly and ideological policies in the UK that will have virtually zero benefit for future generations in terms of climate catastrophes averted. In my book such policies are both regressive and authoritarian, based on a failure to understand the distinction between the real, very marginal impacts of policy and the theoretical total impacts.

If the IPPR, or even the climate academics, gave proper thought to the issue, they would conclude that the correct response would be to predict more accurately the type, timing, magnitude and location of future climate catastrophes. This information would help people on the ground adapt to those circumstances. In the absence of that information, the best way of adapting to a changing climate is the same way people have always adapted to extreme events, whether weather-related or geological: through sustained long-term economic growth, in its initial stages promoted by cheap and reliable energy sources. If there is a real environmental breakdown on its way, the Progressives, with their false claims and exaggerations, will be best kept well away from the scene. Their ideological beliefs render them incapable of getting a rounded perspective on the issues and the damage their policies will cause.

Kevin Marshall

East Antarctica Glacial Melting through the filter of BBC reporting

An indication of how little solid evidence there is for catastrophic anthropogenic global warming comes from a BBC story carried during the COP24 Katowice conference in December. It carried the headline “East Antarctica’s glaciers are stirring” and began:

Nasa says it has detected the first signs of significant melting in a swathe of glaciers in East Antarctica.

The region has long been considered stable and unaffected by some of the more dramatic changes occurring elsewhere on the continent.

But satellites have now shown that ice streams running into the ocean along one-eighth of the eastern coastline have thinned and sped up.

If this trend continues, it has consequences for future sea levels.

There is enough ice in the drainage basins in this sector of Antarctica to raise the height of the global oceans by 28m – if it were all to melt out.

Reading this excerpt one could draw the conclusion that the drainage basins on “one-eighth of the eastern coastline” have sufficient ice to raise sea levels by 28m. But that is not the case, as the melting of all of Antarctica would only raise sea levels by about 60m. The map reproduced from NASA’s own website is copied below.

The study area is nowhere near a third or more of Antarctica. Further, although it might be one-eighth of the eastern coastline, it is far less than the full coastline of East Antarctica, which makes up two-thirds or more of the continent.

NASA does not mention the 28m of potential sea level rise in its article, only 3 metres from the disappearance of the Totten Glacier. So how large is this catchment area? From a Washington Post article in 2015 there is a map.

The upper reaches of the catchment area may include Vostok Station, known for being the location of the lowest reliably measured natural temperature on Earth of −89.2 °C (−128.6 °F). The highest temperature recorded there in over 60 years is −14.0 °C. In other words, what is being suggested is that a slight increase in ocean current temperatures will cause, through gravity, the slippage into the ocean of glaciers hundreds of miles long, covering an area ten times the Totten Glacier catchment.

The Guardian article of 11th December also does not mention the potential 28m of sea level rise. This looks to be an insertion by the BBC, making the NASA research appear orders of magnitude more significant than it really is.

The BBC’s audio interview with Dr Catherine Walker gives some clarification of the magnitude of the detected changes. At 2.30 there is a question on the scale of the changes.

Physically the fastest changing one is Vincennes Bay which is why we were looking at that one. And, for instance, in 2017 they changed average about .5 meters a year. So that is pretty small.

Losing 0.5 metres out of hundreds of thousands of metres of length is not very significant. It just shows the accuracy of the measurements. Dr Walker then goes on to relate this to Fleming Glacier in West Antarctica, which is losing about 8 metres a year. The interview continues:-

Q. But the point is that compared to 2008 there is definitely an acceleration here.
A. Yes. We have shown that looking at 2008 and today they have increased their rate of mass loss by 5 times.
Q. So it is not actually a large signal is it? How do we describe this then. Is this East Antarctica waking up? Is it going to become a West Antarctica here in a few decades time or something?
A. I think its hard, but East Antarctica given how cold it is, and it still does have that layer insulating it from warm Antarctic circumpolar current … that really eats away at West Antarctica. We’ve seen it get up under Totten, so of you know, but it is not continuous you know. Every so often it comes up and (…….) a little bit.

There is acceleration detected over a decade, but the disappearance of the glacier would take tens or hundreds of thousands of years.

Walker goes on to say that for the small changes to increase further

you would have to change the Antarctic circumpolar current significantly. But the fact that you are seeing these subtle changes I guess you could say Antartica is waking up.
We are seeing these smaller glaciers – which couldn’t be seen before – see them also respond to the oceans. So the oceans are warming enough now to make a real difference in these small glaciers.

This last take-away point – about glaciers smaller than Totten – is not related to the earlier comments. It is not ocean warming but movements in the warm Antarctic circumpolar current that seem to impact on West Antarctica and this small section of the East Antarctic coast. That implies a heat transfer from elsewhere could be the cause as much as additional heat.

This account misses out on another possible cause of the much higher rates of glacier movement in West Antarctica. It might be just a spooky coincidence, but the areas of most rapid melt seem to have volcanoes beneath them.

Yet even these small movements in glaciers should be looked at in the context of the net change in ice mass. Is the mass loss from moving glaciers offset by snow accumulation?
In June 2018 Jay Zwally claimed his 2015 paper showing net mass gain in Antarctica is confirmed in a forthcoming study. It is contentious (as is anything that contradicts the consensus). But the mainstream estimate of 7.6 mm of sea-level rise over 25 years is just 0.30 mm a year. It is in East Antarctica that the difference lies.

From the Daily Caller

Zwally’s 2015 study said an isostatic adjustment of 1.6 millimeters was needed to bring satellite “gravimetry and altimetry” measurements into agreement with one another.

Shepherd’s paper cites Zwally’s 2015 study several times, but only estimates eastern Antarctic mass gains to be 5 gigatons a year — yet this estimate comes with a margin of error of 46 gigatons.

Zwally, on the other hand, claims ice sheet growth is anywhere from 50 gigatons to 200 gigatons a year.

To put this in perspective, the Shepherd study has a central estimate of 2,720 billion tonnes of ice loss over 25 years, out of a total of about 26,500,000 billion tonnes. That is a 0.01% reduction.
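Two quick checks on those magnitudes, using the commonly quoted conversion of roughly 360 Gt of ice per millimetre of global sea level (an assumption on my part, not a figure from the studies):

```python
ice_loss_gt = 2720          # Shepherd et al. central estimate over 25 years
total_ice_gt = 26_500_000   # approximate Antarctic ice sheet mass
gt_per_mm_slr = 360         # assumed conversion: ~360 Gt of ice per mm of sea level

print(f"fraction of ice sheet lost: {ice_loss_gt / total_ice_gt:.3%}")          # ~0.010%
print(f"sea level contribution: {ice_loss_gt / gt_per_mm_slr / 25:.2f} mm/yr")  # ~0.30 mm/yr
```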

As a beancounter I prefer any study that attempts to reconcile and understand differing data sets. It is by looking at differences (whether between data sets, time periods, hypothesis or forecast and empirical reality, word definitions, etc.) that one can gain a greater understanding of a subject, or at least start to map out the limits of one’s understanding.

On the measure of reconciliation, I would tend towards the Zwally estimates with the isostatic adjustment. But the differences are so small in relation to the data issues that one can only say there is more than reasonable doubt about the claim that Antarctica lost mass in the last 25 years. The data issues are most clearly shown by Figure 6 of Zwally et al 2015, reproduced below.

Each colour band is for 25mm per annum, whereas the isostatic adjustment is 1.6mm pa. In the later period the vast majority of Antarctica is shown as gaining ice, nearly all at 0-50mm pa. The greatest ice loss from 1992 to 2008 is from West Antarctica and around the Totten Glacier in East Antarctica. This contradicts the BBC headline “East Antarctica’s glaciers are stirring”, but not the detail of the article nor the NASA headline “More glaciers in East Antarctica are waking up”.

Concluding Comments

There are a number of concluding statements that can be made about the BBC article, along with the context of the NASA study.

  1. The implied suggestion by the BBC that recent glacier loss over a decade in part of East Antarctica could be a portent to 28m of sea level rise is gross alarmism. 
  2. The BBC’s headline “East Antarctica’s glaciers are stirring” implies the melt is new in area, but the article makes clear this is not the case. 
  3. There is no evidence put forward in the BBC, or elsewhere, to demonstrate that glacier melt in Antarctica is due to increased global ocean heat content or due to average surface temperature increase. Most, or all, could be down to shifts in ocean currents and volcanic activity. 
  4. Most, or all, of any ice loss from glaciers to the oceans will be offset by ice gain elsewhere. There are likely more areas gaining ice than losing it, and overall in Antarctica there could be a net gain of ice.
  5. Although satellites can perform measurements with increasing accuracy, especially of glacier retreat and movement, the net changes in ice mass are so small that adjustment and modelling assumptions for East Antarctica can make the difference between net gain and net loss.

The NASA study of some of East Antarctica’s glaciers has to be understood in the context of when it was published. It was during the COP24 conference to control global emissions, with the supposed aim of saving the world from potentially dangerous human-caused climate change. The BBC dressed up the study to make it appear a signal of this danger, when it was a trivial, localized and likely natural example of climate variation. The prominence given to such a study indicates the lack of strong evidence for a big problem that could justify costly emissions reduction policies.

Kevin Marshall

Natural Variability in Alaskan Glacier Advances and Retreats

One issue with global warming is discerning how much of that warming is human-caused. Global temperature data is only available since 1850, and that data might contain biases, some recognized (like the urban heat island effect) and others maybe less so. Going further back is notoriously difficult, with proxies for temperature having to be used. Given that (a) recent warming in the Arctic has been significantly greater than warming at other latitudes (see here) and (b) the prominence given a few years ago to the impact of melting ice sheets, the retreat of Arctic glaciers ought to be a useful proxy. I was reminded of this by yesterday’s Microsoft screensaver of Johns Hopkins Glacier and inlet in Glacier Bay National Park, Alaska.

The caption caught my eye

By 1879, when John Muir arrived here, he noticed that the huge glacier had retreated and the bay was now clogged with multiple smaller glaciers.
I did a quick search for more information on this retreat. At the National Park Service website, there are four images of the estimated glacier extent.
The glacier advanced from 1680 to 1750, retreated dramatically in the next 130 years to 1880, and then retreated less dramatically in the last 130+ years. This does not fit the picture of unprecedented global warming since 1950.

The National Park Service has more detail on the glacial history of the area, with four maps of the estimated glacial extent.

The glacial advance after 1680 enveloped a village of some early peoples. This is something new to me. Previous estimates of glacier movement in Glacier Bay have only been of the retreat. For instance, this map from a 2012 WUWT article shows the approximate retreat extents, not the earlier advance. Is this recently discovered information?

I have marked up the Johns Hopkins Glacier, where the current front is about 50 miles from the glacier extent in 1750.
The National Park Service has a more detailed map of Glacier Bay, with estimated positions of the glacier terminus at various dates. From this map the greatest measured retreat of Johns Hopkins Glacier was in 1929. By 1966 it had advanced over a mile, and the current terminus is slightly in front of the 1966 terminus. This is an exception to the other glaciers in Glacier Bay, which are still retreating, but at a slower rate than in the nineteenth century.

As human-caused warming is supposed to have occurred predominantly after 1950, the glacial advance and retreat patterns of the extensive Glacier Bay area do not appear to conform to that signal.

A cross check is from the Berkeley Earth temperature anomaly for Anchorage.

Whilst it might explain minor glacial advances from 1929 to 1966, it does not explain the more significant glacial retreat in the nineteenth century, nor the lack of significant glacial retreat since the 1970s.

Kevin Marshall

UNEP Emissions Gap Report 2018 Part 1 – The BBC Response

Over the past year I have referred a number of times to the UNEP Emissions Gap Report 2017. The successor 2018 EGR (ninth in the series) has now been published. This is the first in a series of short posts looking at the issues with the report. First up is an issue with the reporting by the BBC.
On the 27th Matt McGrath posted an article Climate change: CO2 emissions rising for first time in four years.
The sub-heading gave the real thrust of the article.

Global efforts to tackle climate change are way off track says the UN, as it details the first rise in CO2 emissions in four years.

Much of the rest of the article gives a fair view of EGR18.  But there is a misleading figure. Under “No peaking?” the article has a figure titled

“Number of countries that have pledged to cap emissions by decade and percentage of emissions covered”.

In the report Figure 2.1 states

Number of countries that have peaked or are committed to peaking their emissions, by decade (aggregate) and percentage of global emissions covered (aggregate).

The shortened BBC caption fails to recognize that countries in the past peaked their emissions unintentionally. In looking at Climate Interactive’s bogus emissions projections at the end of 2015, I found that, collectively, the current EU28 countries peaked their emissions in 1980. In the USA emissions per capita peaked in 1973, and any increases since then have been less than the rise in population. Yet Climate Interactive’s RCP8.5 non-policy projection, apportioned by country, assumed that:

(a) Emissions per capita would start to increase again in the EU and USA after falling for decades

(b) In China and Russia emissions per capita would increase for decades to levels many times that of any country.

(c) In India and African countries emissions per capita would hardly change through to 2100, on the back of stalled economic growth. For India, the projected drop in economic growth was so severe that, as of Dec 30th 2015, to achieve the projection the Indian economy would have needed to shrink by over 20% before Jan 1st 2016.

Revising the CO2 emissions projections (about 75% of the GHG emissions EGR18 refers to) would have largely explained the difference between the resultant 4.5°C of warming in 2100 from the BAU scenario of all GHG emissions and the 3.5°C consequential on the INDC submissions. I produced a short summary of more reasonable projections in December 2015.

Note that EGR18 now states the fully implemented INDC submissions will achieve 3.2°C of warming in 2100 instead of 3.5°C that CI was claiming three years ago.

The distinction between outcomes consequential on economic activity and those resulting from the deliberate design of policy is important if one wants to distinguish between commitments that inflict economic pain on their citizens (e.g. the UK) and commitments that are almost entirely diplomatic hot air (the vast majority). The BBC fails to make the distinction both historically and for the future, whilst EGR18 fails only with reference to the future.

The conclusion is that the BBC should correct its misreporting, and the UN should start distinguishing between hot air and substantive policies that could cut emissions. But that would mean recognizing that climate mitigation is not just useless, but net harmful to every nation that enacts policies making deep cuts in actual emissions.

Kevin Marshall

Why can’t I reconcile the emissions to achieve 1.5C or 2C of Warming?

Introduction

At heart I am a beancounter. That is, when presented with figures I like to understand how they are derived. When it comes to the claims about the quantity of GHG emissions required to exceed 2°C of warming, I cannot get even close, unless I make a series of assumptions, some of which are far from robust. Applying the same set of assumptions, I cannot derive emissions consistent with restraining warming to 1.5°C.

Further, the combined impact of all the assumptions is to create a storyline that appears to me only as empirically valid as an infinite number of other storylines. These include a large number of plausible scenarios where much greater emissions can be emitted before 2°C of warming is reached, or where (based on alternative assumptions) even 2°C of irreversible warming is already in the pipeline.

Maybe an expert climate scientist will clearly show the errors of this climate sceptic, and use it as a means to convince the doubters of climate science.

What I will attempt here is something extremely unconventional in the world of climate. That is, I will try to state all the assumptions made, highlighting them clearly. Further, I will show my calculations and give clear references, so that anyone can easily follow the arguments.

Note – this is a long post. The key points are contained in the Conclusions.

The aim of constraining warming to 1.5 or 2°C

The Paris Climate Agreement was brought about by the UNFCCC. On their website they state.

The Paris Agreement central aim is to strengthen the global response to the threat of climate change by keeping a global temperature rise this century well below 2 degrees Celsius above pre-industrial levels and to pursue efforts to limit the temperature increase even further to 1.5 degrees Celsius. 

The Paris Agreement states in Article 2

1. This Agreement, in enhancing the implementation of the Convention, including its objective, aims to strengthen the global response to the threat of climate change, in the context of sustainable development and efforts to eradicate poverty, including by:

(a) Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;

Translating this aim into mitigation policy requires quantification of global emissions targets. The UNEP Emissions Gap Report 2017 has a graphic showing estimates of the emissions before the 1.5°C or 2°C warming levels are breached.

Figure 1 : Figure 3.1 from the UNEP Emissions Gap Report 2017

The emissions are all greenhouse gas emissions, expressed in billions of tonnes of CO2 equivalent. From 2010, the quantities of emissions before either 1.5°C or 2°C is breached are respectively about 600 GtCO2e and 1000 GtCO2e. It is these two figures that I cannot reconcile when using the same assumptions to calculate both. My failure to reconcile is not just a minor difference. Rather, on the same assumptions under which 1000 GtCO2e can be emitted before 2°C is breached, 1.5°C is already in the pipeline. In establishing the problems I encounter, I will endeavour to state clearly the assumptions made and look at a number of examples.

Initial assumptions

1 A doubling of CO2 will eventually lead to 3°C of rise in global average temperatures.

This despite the 2013 AR5 WG1 SPM stating on page 16

Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C

And stating in a footnote on the same page.

No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.

2 Achieving full equilibrium climate sensitivity (ECS) takes many decades.

This implies that at any point in the last few years, or any year in the future there will be warming in progress (WIP).

3 Including other greenhouse gases adds to warming impact of CO2.

Empirically, the IPCC’s Fifth Assessment Report based its calculations on 2010 when CO2 levels were 390 ppm. The AR5 WG3 SPM states in the last sentence on page 8

For comparison, the CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)

As with climate sensitivity, the assumption is the middle of an estimated range. In this case over one fifth of the range has the full impact of GHGs being less than the impact of CO2 on its own.

4 All the rise in global average temperature since the 1800s is due to rise in GHGs. 

5 An increase in GHG levels will eventually lead to warming unless action is taken to remove those GHGs from the atmosphere, generating negative emissions. 

These are restrictive assumptions made for ease of calculations.

Some calculations

First a calculation to derive the CO2 levels commensurate with 2°C of warming. I urge readers to replicate these for themselves.
From a Skeptical Science post by Dana1981 (Dana Nuccitelli) “Pre-1940 Warming Causes and Logic” I obtained a simple equation for a change in average temperature T for a given change in CO2 levels.

ΔT(CO2) = λ × 5.35 × ln(B/A)
where A = the CO2 level in year A (expressed in parts per million) and B = the CO2 level in year B.
I use λ = 0.809, so that if B = 2A, ΔT(CO2) = 3.00°C.

Pre-industrial CO2 levels were 280ppm. 3°C of warming is generated by CO2 levels of 560 ppm, and 2°C of warming is when CO2 levels reach 444 ppm.

From the Mauna Loa CO2 data, CO2 levels averaged 407 ppm in 2017. Given assumption (3), and further assuming the impact of other GHGs is unchanged, 2°C of warming would have been surpassed around 2016, when CO2 levels averaged 404 ppm. The actual rise in global average temperatures from HADCRUT4 is about half that amount, hence the assumption that the impact of a rise in CO2 takes an inordinately long time for the actual warming to reveal itself. Even with the assumption that 100% of the warming since around 1800 is due to the increase in GHG levels, the warming in progress (WIP) is about the same as the revealed warming. Yet the Sks article argues that some of the early twentieth century warming was due to factors other than the rise in GHG levels.
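To make the arithmetic easy to replicate, here is the formula above in a few lines of Python. The ~40 ppm allowance for non-CO2 GHGs is my own working assumption, taken from AR5’s 430 ppm CO2e estimate for 2011 against roughly 390 ppm of CO2.

```python
import math

LAMBDA = 0.809          # sensitivity parameter used above
F = 5.35                # coefficient in the forcing formula
PRE_INDUSTRIAL = 280.0  # ppm

def warming(co2e_ppm):
    """Eventual warming (deg C) for a given CO2-equivalent level."""
    return LAMBDA * F * math.log(co2e_ppm / PRE_INDUSTRIAL)

def level_for(delta_t):
    """CO2-equivalent level (ppm) at which the given warming is eventually reached."""
    return PRE_INDUSTRIAL * math.exp(delta_t / (LAMBDA * F))

print(round(level_for(3.0)), round(level_for(2.0)), round(level_for(1.5)))  # 560, 444, 396 ppm

other_ghg_offset = 430 - 390                       # assumed ~40 ppm for non-CO2 GHGs
print(round(warming(404 + other_ghg_offset), 2))   # ~2.0 deg C at the 2016 CO2 average of 404 ppm
```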

This is the crux of the reconciliation problem. From this initial calculation, and based on the assumptions, the 2°C warming threshold has recently been breached, and by the same assumptions 1.5°C was likely breached in the 1990s. There are a lot of assumptions here, so I could have missed something or made an error. Below I go into some key examples that verify this initial conclusion. Then I look at how, by introducing a new assumption, it is claimed that 2°C of warming is not yet reached.

100 Months and Counting Campaign 2008

Trust, yet verify has a post We are Doomed!

This tracks through the Wayback Machine to look at the now defunct 100monthsandcounting.org campaign, sponsored by the left-wing New Economics Foundation. The archived “Technical Note” states that the 100 months ran from August 2008, making the end date November 2016. The choice of 100 months turns out to be spot-on when combined with the actual data for CO2 levels; the IPCC’s 2014 central estimate of the CO2 equivalent of all GHGs, based on 2010 GHG levels (and assuming other GHGs are not impacted); and the central estimate for Equilibrium Climate Sensitivity (ECS) used by the IPCC. That is, start from 430 ppm CO2e, just 14 ppm short of the 444 ppm commensurate with 2°C of warming.
Maybe that was just a fluke, or were they giving a completely misleading forecast? The 100 Months and Counting Campaign was definitely not agreeing with the UNEP Emissions GAP Report 2017 in making the claim. But were they correctly interpreting what the climate consensus was saying at the time?

The 2006 Stern Review

The “Stern Review: The Economics of Climate Change” (archived access here) was commissioned to provide benefit-cost justification for what became the Climate Change Act 2008. From the Summary of Conclusions

The costs of stabilising the climate are significant but manageable; delay would be dangerous and much more costly.

The risks of the worst impacts of climate change can be substantially reduced if greenhouse gas levels in the atmosphere can be stabilised between 450 and 550ppm CO2 equivalent (CO2e). The current level is 430ppm CO2e today, and it is rising at more than 2ppm each year. Stabilisation in this range would require emissions to be at least 25% below current levels by 2050, and perhaps much more.

Ultimately, stabilisation – at whatever level – requires that annual emissions be brought down to more than 80% below current levels. This is a major challenge, but sustained long-term action can achieve it at costs that are low in comparison to the risks of inaction. Central estimates of the annual costs of achieving stabilisation between 500 and 550ppm CO2e are around 1% of global GDP, if we start to take strong action now.

If we take assumption 1, that a doubling of CO2 levels will eventually lead to 3.0°C of warming, and a base CO2 level of 280 ppm, then the Stern Review is saying that the worst impacts can be avoided if the temperature rise is constrained to 2.1-2.9°C, but only in the range of 2.5 to 2.9°C did the 2006 mitigation cost estimate of 1% of GDP apply. It is not difficult to see why constraining warming to 2°C or lower would not have been net beneficial. With GHG levels already at 430ppm CO2e, and CO2 levels rising at over 2ppm per annum, the 444ppm commensurate with 2°C of warming (or the rounded 450ppm) would have been exceeded well before any global reductions could be achieved.
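
As a cross-check on the Stern Review range, a short script using the same formula as above (ECS = 3, 280 ppm pre-industrial base):

```python
import math

def warming(ppm_co2e, base=280, ecs=3.0):
    """Eventual warming (degC) for a given CO2-equivalent level."""
    return ecs * math.log(ppm_co2e / base) / math.log(2)

for level in (450, 500, 550):
    print(level, round(warming(level), 1))
# 450 -> 2.1, 500 -> 2.5, 550 -> 2.9 degC of eventual warming
```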

There is a curiosity in the figures. When the Stern Review was published in 2006 estimated GHG levels were 430ppm CO2e, as against CO2 levels for 2006 of 382ppm. The IPCC AR5 states

For comparison, the CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)

In 2011, when CO2 levels averaged 10ppm higher than in 2006 at 392ppm, estimated GHG levels were the same. This is a good example of why one should take note of uncertainty ranges.

IPCC AR4 Report Synthesis Report Table 5.1

A year before the 100 Months and Counting campaign, the IPCC produced its Fourth Assessment Synthesis Report. On page 67 of the 2007 Synthesis Report (pdf) there is Table 5.1 of emissions scenarios.

Figure 2 : Table 5.1. IPCC AR4 Synthesis Report Page 67 – Without Footnotes

I inputted the various CO2-eq concentrations into my amended version of Dana Nuccitelli’s magic equation and compared the results to the calculated warming in Table 5.1

Figure 3 : Magic Equation calculations of warming compared to Table 5.1. IPCC AR4 Synthesis Report

My calculations of warming are the same as the IPCC’s to one decimal place, except for the last two. Why are there these rounding differences? From a little fiddling in Excel, it would appear that the IPCC derived the warming results from a climate sensitivity of 3°C per doubling calculated to two decimal places, whilst my version of the formula works to four decimal places.
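
A sketch of that comparison, using the lower bounds of the CO2-eq categories as I read them from Table 5.1 (treat the boundary values as illustrative):

```python
import math

def warming(ppm_co2e, base=280, ecs=3.0):
    return ecs * math.log(ppm_co2e / base) / math.log(2)

# Lower bounds of the CO2-eq categories in Table 5.1 (ppm), as I read them:
for co2e in (445, 490, 535, 590, 710, 855, 1130):
    print(co2e, round(warming(co2e), 1))
# 445 -> 2.0, 490 -> 2.4, 535 -> 2.8, 590 -> 3.2, 710 -> 4.0,
# 855 -> 4.8 and 1130 -> 6.0 (against 4.9 and 6.1 in the table,
# i.e. the rounding difference discussed above)
```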

Note the following

  • That other GHGs are translatable into CO2 equivalents. Once translated, they can be treated as if they were CO2.
  • There is no time period in this table. The 100 Months and Counting Campaign merely took the existing numbers and forecast when GHG levels would reach those commensurate with 2°C of warming.
  • No mention of a 1.5°C warming scenario. If constraining warming to 1.5°C did not seem credible in 2007, why should it be credible in 2014 or 2017, when CO2 levels are higher?

IPCC AR5 Report Highest Level Summary

I believe that the underlying estimates of emissions to achieve the 1.5°C or 2°C of warming limits used by the UNFCCC and UNEP come from the IPCC Fifth Assessment Report (AR5), published in 2013/4. At this stage I introduce a couple of empirical assumptions from IPCC AR5.

6 Cut-off year for historical data is 2010 when CO2 levels were 390 ppm (compared to 280 ppm in pre-industrial times) and global average temperatures were about 0.8°C above pre-industrial times.

Using the magic equation above, and the 390 ppm CO2 levels, there is around 1.4°C of warming due from CO2. Given 0.8°C of revealed warming to 2010, the residual “warming-in-progress” was 0.6°C.
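
The same formula gives the warming-in-progress implied by this assumption:

```python
import math
eventual = 3.0 * math.log(390 / 280) / math.log(2)   # eventual warming from 390 ppm CO2
print(round(eventual, 1), round(eventual - 0.8, 1))  # ~1.4 degC eventual, ~0.6 degC WIP
```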

The highest level of summary in AR5 is a Presentation to summarize the central findings of the Summary for Policymakers of the Synthesis Report, which in turn brings together the three Working Group Assessment Reports. This Presentation can be found at the bottom right of the IPCC AR5 Synthesis Report webpage. Slide 33 of 35 (reproduced below as Figure 4) gives the key policy point. 1000 GtCO2 of emissions from 2011 onwards will lead to 2°C. This is very approximate but concurs with the UNEP emissions gap report.

Figure 4 : Slide 33 of 35 of the AR5 Synthesis Report Presentation.

Now for some calculations.

1900 GtCO2 raised CO2 levels by 110 ppm (390-280). 1 ppm = 17.3 GtCO2

1000 GtCO2 will raise CO2 levels by 60 ppm (450-390).  1 ppm = 16.7 GtCO2

Given the obvious roundings of the emissions figures, the numbers fall out quite nicely.

Last year I divided CDIAC CO2 emissions (from the Global Carbon Project) by Mauna Loa CO2 annual mean growth rates (data) to produce the following.

Figure 5 : CDIAC CO2 emissions estimates (multiplied by 3.664 to convert from carbon units to CO2 units) divided by Mauna Loa CO2 annual mean growth rates in ppm.

17GtCO2 for a 1ppm rise is about right for the last 50 years.

To raise CO2 levels from 390 to 450 ppm needs about 17 x (450-390) = 1020 GtCO2. Slide 33 is a good approximation of the CO2 emissions to raise CO2 levels by 60 ppm.
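
The same ratios in a few lines of Python, using the rounded figures above:

```python
# GtCO2 needed per 1 ppm rise in CO2, from the rounded figures above.
print(round(1900 / (390 - 280), 1))  # ~17.3 GtCO2 per ppm for the 1900 GtCO2 already emitted
print(round(1000 / (450 - 390), 1))  # ~16.7 GtCO2 per ppm for the 1000 GtCO2 budget
print(17 * (450 - 390))              # ~1020 GtCO2 to raise CO2 from 390 to 450 ppm
```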

But there are issues

  • If ECS = 3.00, and it takes 17 GtCO2 of emissions to raise CO2 levels by 1 ppm, then only 918 (17 × 54) GtCO2 is needed to achieve 2°C of warming. Alternatively, if in future 1000 GtCO2 is assumed to achieve 2°C of warming, it will take 18.5 GtCO2 to raise CO2 levels by 1 ppm, as against 17 GtCO2 in the past. It is only by using 450 ppm as commensurate with 2°C of warming that past and future stack up (see the sketch after this list).
  • If ECS = 3, from CO2 alone 1.5°C would be achieved at 396 ppm, or a further 100 GtCO2 of emissions. This CO2 level was passed in 2013 or 2014.
  • The calculation falls apart if other GHGs are included. GHG levels are assumed equivalent to 430 ppm CO2e in 2011. Therefore, with all GHGs considered, 2°C of warming would be achieved with 238 GtCO2e of emissions ((444-430) × 17), and 1.5°C of warming was likely passed in the 1990s.
  • If actual warming since pre-industrial times to 2010 was 0.8°C, ECS = 3, and the rise in all GHG levels was equivalent to a rise in CO2 from 280 to 430 ppm, then the residual “warming-in-progress” (WIP) was just over 1°C. That is, the WIP exceeds the total revealed warming of well over a century. If the short-term temperature response is half or more of the full ECS value, it would imply that even nineteenth-century emissions are yet to have their full impact on global average temperatures.
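
A sketch of the arithmetic behind these bullet points, on the stated assumptions (17 GtCO2 per ppm, ECS = 3, 444 ppm commensurate with 2°C, 430 ppm CO2-eq in 2011):

```python
import math

print(17 * (444 - 390))              # 918 GtCO2 from CO2 alone to reach the 444 ppm / 2 degC level
print(round(1000 / (444 - 390), 1))  # 18.5 GtCO2 per ppm implied by a 1000 GtCO2 budget
print(round(280 * 2 ** (1.5 / 3)))   # ~396 ppm commensurate with 1.5 degC from CO2 alone
print(17 * (444 - 430))              # 238 GtCO2e left if all GHGs (430 ppm CO2-eq in 2011) count
wip = 3.0 * math.log(430 / 280) / math.log(2) - 0.8
print(round(wip, 1))                 # ~1.1 degC of warming-in-progress at 2010
```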

What justification is there for effectively disregarding the impact of other greenhouse gas emissions when this was not done previously?

This offset is to be found in Section C – The Drivers of Climate Change – of the AR5 WG1 SPM, in particular in the breakdown, with uncertainties, in Table SPM.5. Another story is how AR5 reached the very same conclusion as AR4 WG1 SPM page 4 on the impact of negative anthropogenic forcings, but with a different methodology, hugely different estimates of aerosols and very different uncertainty bands. Further, these historical estimates are only for the period 1951-2010, whilst the starting date for 1.5°C or 2°C is 1850.

From this a further assumption is made when considering AR5.

7 The estimated historical impact of other GHG emissions (methane, nitrous oxide…) has been effectively offset by the cooling impacts of aerosols and precursors. It is assumed that this will carry forward into the future.

UNEP Emissions Gap Report 2014

Figure 1 above is figure 3.1 from the UNEP Emissions GAP Report 2017. The equivalent report from 2014 puts this 1000 GtCO2 of emissions in a clearer context. First a quotation with two accompanying footnotes.

As noted by the IPCC, scientists have determined that an increase in global temperature is proportional to the build-up of long-lasting greenhouse gases in the atmosphere, especially carbon dioxide. Based on this finding, they have estimated the maximum amount of carbon dioxide that could be emitted over time to the atmosphere and still stay within the 2 °C limit. This is called the carbon dioxide emissions budget because, if the world stays within this budget, it should be possible to stay within the 2 °C global warming limit. In the hypothetical case that carbon dioxide was the only human-made greenhouse gas, the IPCC estimated a total carbon dioxide budget of about 3 670 gigatonnes of carbon dioxide (Gt CO2 ) for a likely chance3 of staying within the 2 °C limit . Since emissions began rapidly growing in the late 19th century, the world has already emitted around 1 900 Gt CO2 and so has used up a large part of this budget. Moreover, human activities also result in emissions of a variety of other substances that have an impact on global warming and these substances also reduce the total available budget to about 2 900 Gt CO2 . This leaves less than about 1 000 Gt CO2 to “spend” in the future4 .

3 A likely chance denotes a greater than 66 per cent chance, as specified by the IPCC.

4 The Working Group III contribution to the IPCC AR5 reports that scenarios in its category which is consistent with limiting warming to below 2 °C have carbon dioxide budgets between 2011 and 2100 of about 630-1 180 GtCO2

The numbers do not fit unless the impact of other GHGs is ignored. As found from slide 33, there is 2900 GtCO2 to raise atmospheric CO2 levels by 170 ppm, of which 1900 GtCO2 has been emitted already. The additional marginal impact of other historical greenhouse gases of 770 GtCO2 is ignored. If those GHG emissions were part of historical emissions, as the statement implies, then that marginal impact would be equivalent to an additional 45 ppm (770/17) on top of the 390 ppm CO2 level. That is not far off the IPCC estimated CO2-eq concentration in 2011 of 430 ppm (uncertainty range 340 to 520 ppm). But by the same measure 3670 GtCO2e would increase CO2 levels by 216 ppm (3670/17), from 280 to 496 ppm. With ECS = 3, this would eventually lead to a temperature increase of almost 2.5°C.
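
The reconciliation attempt in a short script, again assuming roughly 17 GtCO2 per ppm and ECS = 3:

```python
import math

print(round(2900 / 17))        # ~171 ppm: the full 2900 GtCO2 budget raises CO2 by roughly 170 ppm
print(round(770 / 17))         # ~45 ppm: the marginal 770 GtCO2 for other substances
print(390 + round(770 / 17))   # ~435 ppm, close to the 430 ppm CO2-eq estimate for 2011
co2e = 280 + 3670 / 17         # ~496 ppm if the CO2-only budget of 3670 GtCO2 were all emitted
print(round(co2e), round(3.0 * math.log(co2e / 280) / math.log(2), 1))  # ~496 ppm -> ~2.5 degC
```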

The equivalent graphic from the 2014 report is Figure ES.1, reproduced below as Figure 6.

Figure 6 : From the UNEP Emissions Gap Report 2014 showing two emissions pathways to constrain warming to 2°C by 2100.

Note that this graphic goes through to 2100; uses only the CO2 emissions; does not have quantities; and only looks at constraining temperatures to 2°C. Achieving the target requires a period of negative emissions at the end of the century.

A new assumption is thus required to achieve emissions targets.

8 Achieving the 1.5°C or 2°C warming targets likely requires many years of net negative emissions at the end of the century.

A Lower Level Perspective from AR5

A simple pie chart does not seem to make sense. Maybe my conclusions are contradicted by the more detailed scenarios? The next level of detail is to be found in table SPM.1 on page 22 of the AR5 Synthesis Report – Summary for Policymakers.

Figure 7 : Table SPM.1 on Page 22 of AR5 Synthesis Report SPM, without notes. Also found as Table 3.1 on Page 83 of AR5 Synthesis Report 

The comment for <430 ppm (the level of 2010) is “Only a limited number of individual model studies have explored levels below 430 ppm CO2-eq.” Footnote j reads

In these scenarios, global CO2-eq emissions in 2050 are between 70 to 95% below 2010 emissions, and they are between 110 to 120% below 2010 emissions in 2100.

That is, net global emissions are negative in 2100. This is not something mentioned in the Paris Agreement, which only has pledges through to 2030. It is, however, consistent with the UNEP Emissions GAP Report 2014 Table ES.1. The statement does not refer to a particular level below 430 ppm CO2-eq, which equates to 1.86°C of eventual warming. So how is constraining warming to 1.5°C not impossible without massive negative emissions? In over 600 words of notes there is no indication. For that you need to go to the footnotes to the far more detailed Table 6.3 of AR5 WG3 Chapter 6 (Assessing Transformation Pathways – pdf), page 431. Footnote 7 (bold mine)

Temperature change is reported for the year 2100, which is not directly comparable to the equilibrium warming reported in WGIII AR4 (see Table 3.5; see also Section 6.3.2). For the 2100 temperature estimates, the transient climate response (TCR) is the most relevant system property.  The assumed 90% range of the TCR for MAGICC is 1.2–2.6 °C (median 1.8 °C). This compares to the 90% range of TCR between 1.2–2.4 °C for CMIP5 (WGI Section 9.7) and an assessed likely range of 1–2.5 °C from multiple lines of evidence reported in the WGI AR5 (Box 12.2 in Section 12.5).

The major reason that 1.5°C of warming is deemed not impossible (though still more unlikely than likely), despite CO2-equivalent levels that should produce 2°C+ of warming being around for decades, is that the full warming impact takes so long to filter through. Further, Table 6.3 puts peak CO2-eq levels for the 1.5-1.7°C scenarios at 465-530 ppm, or eventual warming of 2.2 to 2.8°C. Climate WIP is the difference. But in 2018 the WIP might be larger than all the revealed warming since 1870, and certainly since the mid-1970s.
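
Applying the same ECS = 3 formula to those peak levels:

```python
import math
for peak in (465, 530):   # Table 6.3 peak CO2-eq levels for the 1.5-1.7 degC scenarios
    print(peak, round(3.0 * math.log(peak / 280) / math.log(2), 1))
# 465 -> 2.2, 530 -> 2.8 degC of eventual warming; the gap to the
# reported 2100 warming is the warming-in-progress
```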

Within AR5, when talking about constraining warming to 1.5°C or 2.0°C, it is only the warming estimated to be revealed in 2100 that is considered. There is no indication of how much warming in progress (WIP) there is in 2100 under the various scenarios, therefore I cannot reconcile back the figures. However, it would appear that the 1.5°C figure relies upon the impact of GHGs on warming failing to come through for a period of over 100 years, as (even netting off other GHGs against the negative impact of aerosols) by 2100 CO2 levels would have been above 400 ppm for over 85 years, and for most of those years significantly above that level.

Conclusions

The original aim of this post was to reconcile the emissions sufficient to prevent 1.5°C or 2°C of warming being exceeded through some calculations based on a series of restrictive assumptions.

  • ECS = 3.0°C, despite the IPCC not giving a best estimate across the different studies. The range is 1.5°C to 4.5°C.
  • All the temperature rise since the 1800s is assumed to be due to rises in GHGs. There is evidence that this might not be the case.
  • Other GHGs are netted off against aerosols and precursors. Given that “CO2-eq concentration in 2011 is estimated to be 430 ppm (uncertainty range 340 to 520 ppm)” when CO2 levels were around 390 ppm, this assumption is far from robust.
  • Achieving full equilibrium takes many decades. So long in fact that the warming-in-progress (WIP) may currently exceed all the revealed warming in over 150 years, even based on the assumption that all of that revealed historical warming is due to rises in GHG levels.

Even with these assumptions, keeping warming within 1.5°C or 2°C seems to require two further assumptions that were not recognized a few years ago. First is to assume net negative global emissions for many years at the end of the century. Second is to talk about projected warming in 2100, rather than the warming that eventually results from achieving full ECS.

The whole exercise appears to rest upon a pile of assumptions. Amending the assumptions one way means admitting that 1.5°C or 2°C of warming is already in the pipeline; amending them the other way means admitting that climate sensitivity is much lower. Yet there is such a large range of empirical assumptions to choose from that a very large number of scenarios could be constructed, each as valid as the ones used in the UNEP Emissions Gap Report 2017.

Kevin Marshall

Increasing Extreme Weather Events?

Over at Cliscep, Ben Pile posted Misleading Figures Behind the New Climate Economy. Ben looked at the figures behind the recent New Climate Economy Report from the Global Commission on the Economy and Climate, which claims to be

… a major international initiative to examine how countries can achieve economic growth while dealing with the risks posed by climate change. The Commission comprises former heads of government and finance ministers and leaders in the fields of economics and business, and was commissioned by seven countries – Colombia, Ethiopia, Indonesia, Norway, South Korea, Sweden and the United Kingdom – as an independent initiative to report to the international community.

In this post I will briefly look at Figure 1 from the report, re-posted by Ben Pile.

Fig 1 – Global Occurrences of Extreme Weather Events from New Economy Climate Report

Clearly these graphs seem to demonstrate a rapidly worsening situation. However, I am also aware of a report a few years ago authored by Indur Goklany, and published by The Global Warming Policy Foundation  – GLOBAL DEATH TOLL FROM EXTREME WEATHER EVENTS DECLINING

Figure 2 : From Goklany 2010 – Global Death and Death Rates Due to Extreme Weather Events, 1900–2008. Source: Goklany (2009), based on EM-DAT (2009), McEvedy and Jones (1978), and WRI (2009).

 

Note that The International Disaster Database is EM-DAT; the website is here to check. Clearly these show two very different pictures of events. The climate consensus (or climate alarmist) position is that climate change is getting much worse. The climate sceptic (or climate denier) position is that human-caused climate change is somewhat exaggerated. Is one side outright lying, or is there some truth in both sides?

Indur Goklany recognizes the issue in his report. His Figure 2 I reproduce below as Figure 3.

Figure 3: Average Number of Extreme Weather Events per Year by Decade, 1900–2008.  Source: Goklany (2009), based on EM-DAT (2009).

I am from a management accounting background. That means that I check my figures. This evening I registered at the EM-DAT website and downloaded the figures to verify the data. The website looks at all sorts of disaster information, not just climate information. It collates

Figure 4 : No of Climatic Occurrences per decade from EM-DAT. Note that 2010-2016 pro rata is similar to 2000-2009

The updated figures through to 2016 show that, pro rata, occurrences of climate-related events in the current decade are similar to the last decade. If one is concerned about the human impacts, deaths are more relevant.

Figure 5 : No of Climatic Deaths per decade from EM-DAT. Note that 2010-2016 pro rata is similar to 2000-2009

This shows unprecedented flood deaths in the 1930s. Of the 163218 flood deaths in 6 occurrences, 142000 were due to a flood in China in 1935. Wikipedia’s Ten deadliest natural disasters since 1900 lists at No.8 1935 Yangtze river flood, with 145000 dead. At No.1 is 1931 China floods with 1-4 million deaths. EM-DAT has not registered this disaster.

The decade 1970-1979 was extreme for deaths from storms. 300000 deaths were due to a Bangladesh storm in 1970. Wikipedia’s Ten deadliest natural disasters since 1900 lists at No.2 1970 Bhola cyclone, with ≥500,000.

The decade 1990-1999 had a high flood death toll. Bangladesh 1991 stands out with 138987 dead. Wikipedia No.10 is 1991 Bangladesh cyclone with 138866 dead.

In the decade 2000-2009 EM-DAT records the Myanmar storm of 2008 with 138366 dead. If Wikipedia had a top 11 deadliest natural disasters since 1900, then Cyclone Nargis of 2 May 2008 could have made the list. With the BBC’s estimate of 200000 dead it would have qualified, but with the Red Cross estimate of 84500, Cyclone Nargis may not have made the top 20.

This leaves a clear issue of data. The International Disaster Database will accept occurrences of disasters according to clear criteria. For the past 20-30 years disasters have been clearly recorded. The build-up of a tropical cyclone / hurricane is monitored by satellites, and film crews are on hand to televise across the world pictures of damaged buildings, dead bodies, and victims lamenting the loss of homes. As I write, Hurricane Florence is about to pound the Carolinas, and evacuations have been ordered. The Bhola Cyclone of 1970 was no doubt more ferocious and impacted a far greater number of people. But the primary reason for the extreme death toll in 1970 Bangladesh was lack of warning and a lack of evacuation places. Even in the Wizard of Oz, set in the 1930s United States, most families had a storm cellar to shelter from a tornado. In the extreme poverty of 1970 Bangladesh there was nothing. Now, after decades of moderate growth and some rudimentary warning systems, it is unlikely that a similar storm would cause even a tenth of the death toll.

Even more significant is that even if (as I hope) Hurricane Florence causes no deaths and limited property damage, it will be sufficiently documented to qualify for an entry in the International Disaster Database. But the quality of evidence for the 1931 China floods, occurring in a civil war between the Communists and the Kuomintang forces, would be insufficient to qualify for entry. This is why one must be circumspect in interpreting this sort of data over periods when the quality and availability of data varies significantly. The issue I have is not with EM-DAT, but with those who misinterpret the data for an ideological purpose.

Kevin Marshall

Excess Deaths from 2018 Summer Heatwaves

Last month I looked at the claims by the UK Environmental Audit Committee warning of 7,000 heat-related deaths in the 2050s, finding they were the result of making a number of untenable assumptions. Even if the forecast turned out to be true, cold deaths would still be more than five times the hot deaths. With the hottest summer since 1976, it is not surprising that there have been efforts to show there are excess heat deaths.

On the 6th August, The Daily Express headlined UK heatwave turns KILLER: 1,000 more people die this summer than average as temps soar.

Deaths were up in all seven weeks from June 2 to July 20, which saw temperatures reach as high as 95F (35C).

A total of 955 people more than the average have died in England and Wales since the summer began, according to the Office for National Statistics (ONS).

On the 3rd August the Guardian posted Deaths rose 650 above average during UK heatwave – with older people most at risk.

The height of the heatwave was from 25 June to 9 July, according to the Met Office, a run of 15 consecutive days with temperatures above 28C. The deaths registered during the weeks covering this period were 663 higher than the average for the same weeks over the previous five years, a Guardian analysis of data from the Office of National Statistics shows.

Note the Guardian’s lower figure was from a shorter time period.

I like to put figures in context, so I looked up the ONS Dataset:Deaths registered monthly in England and Wales

There they have detailed data from 2006 to July 2018. Estimating the excess deaths from these figures requires taking account of other factors. However, some indication can be gleaned from taking the variation from the average. In July 2018 there were 40,624 recorded deaths, as against an average of 38,987 deaths in July for the years 2006-2018. There were therefore 1,637 more deaths than average. I have charted the variation from average for each year.
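
For anyone wanting to repeat the exercise, a sketch of the variation-from-average calculation is below, assuming the ONS data has been saved locally as a CSV with year, month and deaths columns (the file name and layout are my own, not the ONS's):

```python
import pandas as pd

df = pd.read_csv("ons_monthly_deaths.csv")          # monthly registered deaths, 2006 to July 2018
monthly_avg = df.groupby("month")["deaths"].mean()  # average deaths for each calendar month
df["variation"] = df["deaths"] - df["month"].map(monthly_avg)

july = df[df["month"] == 7]
print(july[["year", "deaths", "variation"]])
# e.g. July 2018: 40,624 deaths against a 2006-2018 July average of 38,987,
# a variation of about +1,637
```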

There were above average deaths in July 2018, but there were similar figures in the same month in 2014 and 2015. Maybe the mean July temperatures from the Central England Temperature Record show a similar variation?

Not really. July 2006 had high mean temperatures and average deaths, whilst 2015 had low mean temperatures and higher than average deaths.

There is a further element to consider. Every month so far this year has had higher than average deaths. Below I have graphed the variation by month.

January is many times more significant than July. In the first seven months of this year there were 30,000 more deaths recorded than the January-July average for 2006 to 2018. But is this primarily due to the cold start to the year followed by a barbecue summer? Looking at the variations from the average of around 300,000 deaths for the January to July period, it does not seem this is the case.

Looking at individual months, if extreme temperatures alone caused excess deaths I would expect an even bigger peak in January 2010, when there was record cold, than this year. In January 2010 there were 48,363 recorded deaths, against 64,157 in January 2018 and a 2006-2018 average of 52,383. Clearly there is a large seasonal element to deaths, as the average for July of 39,091 is three-quarters of the January level. But discerning the temperature-related element is extremely tricky, and any estimate of excess deaths to a precise number should be treated with extreme caution.
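
Putting those figures in a few lines of Python:

```python
jan_avg, jul_avg = 52383, 39091          # 2006-2018 averages for January and July deaths
print(round(jul_avg / jan_avg, 2))       # ~0.75: July deaths are about three-quarters of January's
print(48363 - jan_avg, 64157 - jan_avg)  # Jan 2010 (record cold): ~4,020 below average;
                                         # Jan 2018: ~11,774 above average
```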

Kevin Marshall