Hansen et al 1988 Global Warming Predictions 30 Years on

Last month marked the 30th anniversary of James Hansen’s Congressional testimony that kicked off the attempts to control greenhouse gas emissions. The testimony was clearly an attempt, by linking human greenhouse emissions to dangerous global warming, to influence public policy. Unlike previous attempts (such as by then Senator Al Gore), Hansen’s testimony was hugely successful. But do the scientific projections that underpinned the testimony hold up against the actual data? The key part of that testimony was a graph from the Hansen et al 1988* Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model, reproduced below.


Figure 1: Hansen et al 1988 – Figure 3(a) in the Congressional Testimony

Note the language of the title of the paper. This is a forecast of global average temperatures contingent upon certain assumptions. The ambiguous part is the assumptions.

The assumptions of Hansen et al. 1988

From the paper.

4. RADIATIVE FORCING IN SCENARIOS A, B AND C

4.1. Trace Gases

  We define three trace gas scenarios to provide an indication of how the predicted climate trend depends upon trace gas growth rates. Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages about 1.5% of current emissions, so the net greenhouse forcing increases exponentially. Scenario B has decreasing trace gas growth rates, such that the annual increase of the greenhouse climate forcing remains approximately constant at the present level. Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.

Scenario A is easy to replicate: each year, emissions increase by 1.5% on the previous year. Scenario B recognises that emissions are growing and that policy takes time to be enacted, so to bring annual emissions down to the then-current level (of 1987 or 1988), some reduction is required during the 1990s. Scenario C, one presumes, requires emissions low enough that trace gas levels stop increasing. As trace gas levels were increasing in 1988, and (from Scenario B) continuing emissions at the 1988 level would have continued to increase atmospheric levels, emissions would have to be considerably lower than in 1988 by the year 2000. They might still be above zero, as small amounts of emissions may not have an appreciable impact on atmospheric levels.

The graph formed Fig. 3 of James Hansen’s testimony to Congress. The caption to the graph repeats the assumptions.

Scenario A assumes continued growth rates of trace gas emissions typical of the past 20 years, i.e., about 1.5% yr-1 emission growth; scenario B has emission rates approximately fixed at current rates; scenario C drastically reduces trace gas emissions between 1990 and 2000.

That is, Scenario B fixes annual emissions at the levels of the late 1980s, whilst Scenario C sees drastic emission reductions.

James Hansen in his speech gave a more succinct description.

We have considered cases ranging from business as usual, which is scenario A, to draconian emission cuts, scenario C, which would totally eliminate net trace gas growth by year 2000.

Note that the resultant warming from fixing emissions at the then-current level (Scenario B) is much closer in warming impacts to Scenario A (emissions growth of +1.5% year-on-year) than to Scenario C, which stops global warming. Yet Scenario B would result from global policy being successfully implemented to stop the rise in global emissions.

Which Scenario most closely fits the Actual Data?

To understand which scenario most closely fits the data, we need to look at the trace gas emissions data. There are a number of sources, which give slightly different results. One source, and the one which ought to be the most authoritative, is the IPCC Fifth Assessment Report WG3 Summary for Policymakers graphic SPM.1, reproduced in Figure 2.

 Figure 2 : AR5 WG3 SPM.1 Total annual anthropogenic GHG emissions (GtCO2eq/yr) by groups of gases 1970-2010. FOLU is Forestry and Other Land Use.

Note that in Figure 2 the other greenhouse gases – F-Gases, N2O and CH4 – are expressed in CO2 equivalents. It is very easy to see which of the three scenarios fits. The historical data up until 1988 shows increasing emissions. After that date, emissions have continued to increase. Indeed there is some acceleration, stated on the graph comparing 2000-2010 (+2.2%/yr) with 1970-2000 (+1.3%/yr). In 2010 GHG emissions were not similar to those in the 1980s (about 35 GtCO2e) but much higher. By implication, Scenario C, which assumed draconian emissions cuts, is the furthest away from the reality of what has happened. Before considering how closely Scenario A compares to temperature rise, the question is therefore how closely the actual growth in emissions compares to the +1.5%/yr assumed in Scenario A.

From my own rough calculations, total GHG emissions from 1990 to 2010 rose about 29%, or 1.3% a year, compared to 41%, or 1.7% a year, in the period 1970 to 1990. Exponential growth of 1.3% is not far short of the 1.5%. The assumed 1.5% growth rate would have resulted in 2010 emissions of 51 GtCO2e instead of the 49 GtCO2e estimated, well within the margin of error. That is, actual trends over 20 years were pretty much the business-as-usual scenario. The narrower measure of CO2 emissions from fossil fuels and industrial sources rose about 42%, or 1.8% a year, from 1990 to 2010, compared to 51%, or 2.0% a year, in the period 1970 to 1990 – above the Scenario A growth rate.

The breakdown is shown in Figure 3.

Figure 3 : Rough calculations of exponential emissions growth rates from AR5 WG3 Figure SPM.1
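For anyone wanting to check these rough figures, below is a minimal sketch in Python. The 1970, 1990 and 2010 totals of roughly 27, 38 and 49 GtCO2e are my own approximate readings from the AR5 graphic, so treat the output as illustrative only.

    # Rough check of the compound annual growth rates quoted above.
    def cagr(start, end, years):
        """Compound annual growth rate, as a percentage."""
        return ((end / start) ** (1 / years) - 1) * 100

    ghg_1970, ghg_1990, ghg_2010 = 27.0, 38.0, 49.0     # GtCO2e, approximate readings
    print(round(cagr(ghg_1990, ghg_2010, 20), 1))       # ~1.3% a year over 1990-2010
    print(round(cagr(ghg_1970, ghg_1990, 20), 1))       # ~1.7% a year over 1970-1990
    print(round(ghg_1990 * 1.015 ** 20))                # Scenario A's 1.5% growth gives ~51 GtCO2e by 2010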

These figures are somewhat out of date. The UNEP Emissions Gap Report 2017 (pdf) estimated GHG emissions in 2016 at 51.9 GtCO2e. This represents a slowdown in emissions growth in recent years.

Figure 4 compares the actual decadal exponential growth trends in estimated GHG emissions (with a linear trend to the 51.9 GtCO2e of emissions in 2016 from the UNEP Emissions Gap Report 2017 (pdf)) with my interpretations of the scenario assumptions. That is, from 1990, Scenario A has 1.5% annual growth in emissions; Scenario B has emissions reducing from 38 to 35 GtCO2e (the level of 1987) during the 1990s and continuing at that level indefinitely; and Scenario C has emissions reducing to 8 GtCO2e during the 1990s.

Figure 4 : Hansen et al 1988 emissions scenarios, starting in 1990, compared to actual trends from UNIPCC and UNEP data. Scenario A – 1.5% pa emissions growth; Scenario B – Linear decline in emissions from 38 GtCO2e in 1990 to 35 GtCO2e in 2000, constant thereafter; Scenario C – Linear decline  in emissions from 38 GtCO2e in 1990 to 8 GtCO2e in 2000, constant thereafter. 

This overstates the differences between A and B, as it is the cumulative emissions that matter. From my calculations, although in Scenario B 2010 emissions are 68% of Scenario A, cumulative emissions for the period 1991-2010 are 80% of those in Scenario A.
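As a check on that comparison, here is a minimal Python sketch of my interpretation of the two emission paths, starting from the assumed 38 GtCO2e in 1990; the 0.3 GtCO2e-a-year decline in Scenario B during the 1990s is simply the straight line from 38 down to 35.

    # Scenario A: 1.5% compound growth. Scenario B: linear decline to 35 GtCO2e by 2000, then flat.
    years = range(1991, 2011)
    scenario_a = [38 * 1.015 ** (y - 1990) for y in years]
    scenario_b = [38 - 0.3 * (y - 1990) if y <= 2000 else 35.0 for y in years]
    print(round(scenario_b[-1] / scenario_a[-1] * 100))      # 2010 emissions: B is ~68% of A
    print(round(sum(scenario_b) / sum(scenario_a) * 100))    # cumulative 1991-2010: B is ~80% of A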

Looking at cumulative emissions is consistent with the claims from the various UN bodies that limiting global temperature rise to 1.5°C or 2.0°C of warming relative to some reference point is contingent on a certain volume of emissions not being exceeded. One of the most recent examples is the key graphic from the UNEP Emissions Gap Report 2017, reproduced in Figure 5.

Figure 5 : Figure ES.2 from the UNEP Emissions Gap Report 2017, showing the projected emissions gap in 2030 relative to 1.5°C or 2.0°C warming targets. 

Warming forecasts against “Actual” temperature variation

Hansen’s testimony was a clear case of political advocacy. By having Scenario B hold emissions constant, the authors were making a bold policy statement. That is, to stop catastrophic global warming (and thus prevent potentially catastrophic changes to climate systems) requires draconian reductions in emissions. Simply maintaining emissions at the levels of the mid-1980s will make little difference. That is because the forcing is related to the cumulative quantity of emissions.

Given that the data is not quite in line with Scenario A, if the theory is correct, then I would expect:-

  1. The warming trend to be somewhere between Scenario A and Scenario B. Most people accept that the equilibrium climate sensitivity (ECS) of the Hansen model, at 4.2ºC for a doubling of CO2, was too high. The IPCC now uses 3ºC for ECS. More recent research puts it much lower still. However, although the rate of warming might be less, the pattern of warming over time should be similar.
  2. Average temperatures after 2010 to be significantly higher than in 1987.
  3. The rate of warming in the 1990s to be marginally lower than in the period 1970-1990, but still strongly positive.
  4. The rate of warming in the 2000s to be strongly positive and marginally higher than in the 1990s.

From Scenario C of the model, there seems to be about a five-year lag between changes in emission rates and changes in temperatures. However, looking at the actual temperature data, there is quite a different warming pattern. Five years ago C3 Headlines had a post 2013: The NASA/Hansen Climate Model Prediction of Global Warming Vs. Climate Reality. The main graphic is in Figure 6.

Figure 6 : C3 Headlines – NASA Hansen Prediction Vs Reality

The first thing to note is that the scenario assumptions are incorrect. Not only are they labelled as CO2, not GHG, emissions, but they are all stated wrongly. Stating them correctly would show a greater contradiction between forecasts and reality. However, the scenario data appears to be reproduced correctly, and the actual graph appears to be in line with a graphic produced last month by Gavin Schmidt in his defense of Hansen’s predictions.

The data contradicts the forecasts. Although average temperatures are clearly higher than in 1987, they are not in line with the forecast of Scenario A, which is closest to the actual emissions trends. The rise is well below 70% of the model forecast, the figure implied by inputting the lower IPCC climate sensitivity and allowing for GHG emissions growth being fractionally below the 1.5% per annum of Scenario A. But the biggest problem is where the main divergence occurred. Rather than warming accelerating slightly in the 2000s (after a possible slowdown in the 1990s), there was no slowdown in the 1990s, but warming in the 2000s either collapsed to zero or massively reduced, depending on which data set is used. This is in clear contradiction of the model. Unless there is an unambiguous and verifiable explanation (rather than a bunch of waffly and contradictory excuses), the model should be deemed to be wrong. There could be largely unknown natural factors or random data noise that could explain the discrepancy. But equally (and quite plausibly) those same factors could have contributed to the late twentieth century warming.

This simple comparison has an important implication for policy. As there is no clear evidence to link most of the observed warming to GHG emissions, by implication there is no clear support for the belief that reducing GHG emissions will constrain future warming. But reducing global GHG emissions is merely an aspiration. As the graphic in Figure 5 clearly demonstrates, over twenty months after the Paris Climate Agreement was signed there is still no prospect of aggregate GHG emissions falling through policy. Hansen et al. 1988 is therefore a double failure: both as a scientific forecast and as a tool for policy advocacy in terms of reducing GHG emissions. If only the supporters would realize this failure, then the useless and costly climate policies could be dismantled.

Kevin Marshall

*Hansen, J., I. Fung, A. Lacis, D. Rind, S. Lebedeff, R. Ruedy, G. Russell, and P. Stone, 1988: Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model. J. Geophys. Res., 93, 9341-9364, doi:10.1029/JD093iD08p09341.

Charles Moore nearly gets Climate Change Politics post Paris Agreement

Charles Moore of the Telegraph has long been one of the towering figures of the mainstream media. In Donald Trump has the courage and wit to look at ‘green’ hysteria and say: no deal (see also at GWPF, Notalotofpeopleknowthat and Tallbloke) he understands not only the impact of Trump withdrawing from the climate agreement on future global emissions, but recognizes that two other major developed countries – Germany and Japan – whilst committed to reducing their emissions and spending lots of money on renewables, are also investing heavily in coal. So without climate policy, the United States is reducing its emissions, but with climate commitments, Japan and Germany are increasing theirs. However, there is one slight inaccuracy in Charles Moore’s account. He states

As for “Paris”, this is failing, chiefly for the reason that poorer countries won’t decarbonise unless richer ones pay them stupendous sums.

It is worse than this. Many of the poorer countries have not said they will decarbonize. Rather, they have said that they will use the money to reduce emissions relative to a business-as-usual scenario.

Take Pakistan’s INDC. In 2015 they estimated emissions were 405 MtCO2e, up from 182 in 1994. As a result of ambitious planned economic growth, they forecast a BAU of 1603 MtCO2e in 2030. However, they can reduce that by 20% with about $40 billion in finance. That is, with $40bn, average annual emissions growth from 2015-2030 will still be twice that of 1994-2015. Plus Pakistan would like $7-$14bn pa for adaptation to climate change. The INDC Table 7 summarizes the figures.
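A minimal Python sketch of that Pakistan calculation, using only the INDC figures quoted above, is below; the implied growth rates are my own derivation, not figures stated in the INDC.

    # Implied compound annual growth rates from Pakistan's INDC figures.
    def cagr(start, end, years):
        return ((end / start) ** (1 / years) - 1) * 100

    historical = cagr(182, 405, 21)            # 1994-2015: ~3.9% a year
    conditional_2030 = 1603 * 0.8              # the 20% cut against the 1603 MtCO2e BAU
    future = cagr(405, conditional_2030, 15)   # 2015-2030 with the cut: ~8.0% a year
    print(round(historical, 1), round(future, 1))   # future growth roughly twice the historical rate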

Or Bangladesh’s INDC. Estimated BAU increase in emissions from 2011 to 2030 is 264%. They will unconditionally cut this by 5% and conditionally by a further 15%. The BAU is 7.75% annual emissions growth, cut to 7.5% unconditionally and 6% with lots of finance. The INDC Table 7 summarizes the figures.

I do not blame either country for taking such an approach, or the many others adopting similar strategies. They are basically saying that they will do nothing that impedes trying to raise living standards through high levels of sustained economic growth. They will play the climate change game, so long as nobody demands that Governments compromise on serving the best interests of their peoples. If only the Governments of the so-called developed nations would play similar games, rather than impose useless burdens on the people they are supposed to be serving.

There is another category of countries that will not undertake to reduce their emissions – the OPEC members. Saudi Arabia, Iran, Venezuela, Kuwait, UAE and Qatar have all made submissions. Only Iran gives a figure. It will unilaterally cut emissions by 4% against BAU. With the removal of “unjust sanctions” and some financial assistance and technology transfer, its conditional offer would be much more. But nowhere is the BAU scenario stated in figures. The reason these OPEC countries will not play ball is quite obvious. To achieve the IPCC objective of constraining warming to 2°C, according to McGlade and Ekins 2015 (The geographical distribution of fossil fuels unused when limiting global warming to 2°C), would mean leaving 75% of proven reserves of fossil fuels in the ground and all of the unproven reserves. I did an approximate breakdown by major countries last year, using the BP Statistical Review of World Energy 2016.

It does not take a genius to work out that meeting the 2°C climate mitigation target would shut down a major part of the economies of fossil fuel producing countries in about two decades. No-one has proposed either compensating them, or finding alternatives.

But the climate alarmist community are too caught up in their Groupthink to notice the obvious huge harms that implementing global climate mitigation policies would entail.

Kevin Marshall

Does data coverage impact the HADCRUT4 and NASA GISS Temperature Anomalies?

Introduction

This post started with the title “HADCRUT4 and NASA GISS Temperature Anomalies – a Comparison by Latitude“. After deriving a global temperature anomaly from the HADCRUT4 gridded data, I was intending to compare the results with GISS’s anomalies by 8 latitude zones. However, this opened up an intriguing issue. Are global temperature anomalies impacted by a relative lack of data in earlier periods? This leads to a further issue of whether infilling of the data can be meaningful, and hence be considered to “improve” the global anomaly calculation.

A Global Temperature Anomaly from HADCRUT4 Gridded Data

In a previous post, I looked at the relative magnitudes of early twentieth century and post-1975 warming episodes. In the Hadley datasets, there is a clear divergence between the land and sea temperature data trends post-1980, a feature that is not present in the early warming episode. This is reproduced below as Figure 1.

Figure 1 : Graph of Hadley Centre 7 year moving average temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)

The question that needs to be answered is whether the anomalous post-1975 warming on the land is due to real divergence, or due to issues in the estimation of global average temperature anomaly.

In another post – The magnitude of Early Twentieth Century Warming relative to Post-1975 Warming – I looked at the NASA Gistemp data, which is usefully broken down into 8 Latitude Zones. A summary graph is shown in Figure 2.

Figure 2 : NASA Gistemp zonal anomalies and the global anomaly

This is more detail than the HADCRUT4 data, which is just presented as three zones: the Tropics, the Northern Hemisphere and the Southern Hemisphere. However, the Hadley Centre, on their HADCRUT4 Data: download page, have, under HadCRUT4 Gridded data: additional fields, a file HadCRUT.4.6.0.0.median_ascii.zip. This contains monthly anomalies for 5° by 5° grid cells from 1850 to 2017. There are 36 zones of latitude and 72 zones of longitude. Over 2016 months, there are over 5.22 million grid cells, but only 2.51 million (48%) have data. From this data, I have constructed a global temperature anomaly. The major issue in the calculation is that the grid cells are of different areas. A grid cell nearest to the equator, at 0° to 5°, has about 23 times the area of a grid cell adjacent to the poles, at 85° to 90°. I used the appropriate weighting for each band of latitude.
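A minimal Python sketch of that area weighting is below. The weights are the fraction of the Earth’s surface in each 5° latitude band, and only cells with data contribute to the average; the anomaly values in the example are placeholders, not the real HADCRUT4 data.

    # Area-weighted global mean from a 36 x 72 grid of anomalies, with np.nan marking missing cells.
    import numpy as np

    def band_weights():
        edges = np.radians(np.arange(-90, 91, 5))
        return np.sin(edges[1:]) - np.sin(edges[:-1])    # proportional to the area of each band

    def global_anomaly(grid):
        w = np.repeat(band_weights()[:, None], 72, axis=1)
        has_data = ~np.isnan(grid)
        return np.sum(np.where(has_data, grid, 0.0) * w) / np.sum(w * has_data)

    w = band_weights()
    print(round(w[18] / w[35], 1))        # the 0-5 degree band has ~23 times the area of the 85-90 band

    example = np.full((36, 72), np.nan)
    example[10:26, :] = 0.5               # pretend only the middle latitudes have data
    print(round(global_anomaly(example), 2))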

The question is whether I have calculated a global anomaly similar to the Hadley Centre’s. Figure 3 is a reconciliation between the published global anomaly mean (available from here) and my own.

Figure 3 : Reconciliation between HADCRUT4 published mean and calculated weighted average mean from the Gridded Data

Prior to 1910, my calculations are slightly below the HADCRUT4 published data. The biggest differences are in 1956 and 1915. Overall the differences are insignificant and do not impact on the analysis.

I split the HADCRUT4 temperature data into eight zones of latitude on a similar basis to NASA Gistemp. Figure 4 presents the results on the same basis as Figure 2.

Figure 4 : Zonal surface temperature anomalies and the global anomaly calculated using the HADCRUT4 gridded data.

Visually, there are a number of differences between the Gistemp and HADCRUT4-derived zonal trends.

A potential problem with the global average calculation

The major reason for differences between HADCRUT4 & Gistemp is that the latter has infilled estimated data into areas where there is no data. Could this be a problem?

In Figure 5, I have shown the build-up in global coverage. That is the percentage of 5° by 5° grid cells with an anomaly in the monthly data.

Figure 5 : Change in the percentage coverage of each zone in the HADCRUT4 gridded data.

Figure 5 shows a build-up in data coverage during the late nineteenth and early twentieth centuries. The World Wars (1914-1918 & 1939-1945) had the biggest impact on Southern Hemisphere data collection. This is unsurprising when one considers that they were mostly fought in the Northern Hemisphere, and European powers withdrew resources from their far-flung empires to protect the mother countries. The only zones with significantly less than 90% grid coverage in the post-1975 warming period are the Arctic and the region below 45S. That is around 19% of the global area.

Finally, comparing comparable zones in the Northern and Southern Hemispheres, the tropics seem to have comparable coverage, whilst for the polar, temperate and mid-latitude areas the Northern Hemisphere seems to have better coverage after 1910.

This variation in coverage can potentially lead to wide discrepancies between any calculated temperature anomalies and a theoretical anomaly based upon data in all the 5° by 5° grid cells. As an extreme example, with my own calculation, if just one of the 72 grid cells in a band of latitude had a figure, then an “average” would have been calculated for that month for a band 555km (345 miles) from North to South running right around the world. In the annual figures by zone, it only requires one of the 72 grid cells, in one of the months, in one of the bands of latitude to have data to calculate an annual anomaly. For the tropics or the polar areas, that is just one in 4320 data points to create an anomaly. This issue will impact the early twentieth-century warming episode far more than the post-1975 one. Although I would expect the Hadley Centre to have done some data cleanup of the more egregious examples in their calculation, the lack of data in grid cells could potentially have quite random impacts, biasing the global temperature anomaly trends to an unknown, but significant, extent. How this could play out can be appreciated from an example using NASA GISS Global Maps.

NASA GISS Global Maps Temperature Trends Example

NASA GISS Global Maps from GHCN v3 Data provide maps with the calculated change in average temperatures. I have run the maps to compare annual data for 1940 with a baseline of 1881-1910, capturing much of the early twentieth-century warming. I have run the maps at both the 1200km and 250km smoothing.

Figure 6 : NASA GISS Global anomaly Map and average anomaly by Latitude comparing 1940 with a baseline of 1881 to 1910 and a 1200km smoothing radius

Figure 7 : NASA GISS Global anomaly Map and average anomaly by Latitude comparing 1940 with a baseline of 1881 to 1910 and a 250km smoothing radius. 

With respect to the maps in figures 6 & 7

  • There is no apparent difference in the sea data between the 1200km and 250km smoothing radius, except in the polar regions with more cover in the former. The differences lie in the land area.
  • The grey areas with insufficient data all apply to the land or ocean areas in polar regions.
  • Figure 6, with 1200km smoothing, has most of the land infilled, whilst the 250km smoothing shows the lack of data coverage for much of South America, Africa, the Middle East, South-East Asia and Greenland.

Even with these land-based differences in coverage, it is clear from either map that at any latitude there are huge variations in calculated average temperature change. For instance, take 40N. This line of latitude is north of San Francisco on the West Coast of the USA and clips Philadelphia on the East Coast. On the other side of the Atlantic, Madrid, Ankara and Beijing are at about 40N. There are significant points on the line of latitude with estimated warming greater than 1°C (e.g. California), whilst at the same time in Eastern Europe cooling may have exceeded 1°C in the period. More extreme is 60N (Southern Alaska, Stockholm, St Petersburg), where the difference in temperature change along the line of latitude is over 3°C. This compares to a calculated global rise of 0.40°C.

This lack of data may have contributed (along with a faulty algorithm) to the differences in the zonal mean charts by latitude. The 1200km smoothing radius chart bears little relation to the 250km smoothing radius chart. For instance:-

  • 1200km shows 1.5°C warming at 45S, 250km about zero. 45S cuts through the South Island of New Zealand.
  • From the equator to 45N, 1200km shows a rise from 0.5°C to over 2.0°C; 250km shows a drop from less than 0.5°C to near zero, then a rise to 0.2°C. At around 45N lie Ottawa, Maine, Bordeaux, Belgrade, Crimea and the most northern point of Japan.

The differences in the NASA GISS maps, in a period when available data covered only around half of the 2592 5° by 5° grid cells, indicate quite huge differences in trends between different areas. As a consequence, trying to interpolate warming trends from one area to adjacent areas appears to give quite different results in terms of trends by latitude.

Conclusions and Further Questions

The issue I originally focussed upon was the relative size of the early twentieth-century warming compared to the post-1975 warming. The greater amount of warming in the later period seemed to be due to the greater warming on land, covering just 30% of the total global area. The sea temperature warming phases appear to be pretty much the same.

The issue that I focussed upon was a data issue. The early twentieth century had much less data coverage than after 1975. Further, the Southern Hemisphere had worse data coverage than the Northern Hemisphere, except in the Tropics. This means that in my calculation of a global temperature anomaly from the HADCRUT4 gridded data (which in aggregate was very similar to the published HADCRUT4 anomaly) the average by latitude will not be comparing like with like in the two warming periods. In particular, in the early twentieth-century, a calculation by latitude will not average right the way around the globe, but only on a limited selection of bands of longitude. On average this was about half, but there are massive variations. This would be alright if the changes in anomalies were roughly the same over time by latitude. But an examination of NASA GISS global maps for a period covering the early twentieth-century warming phase reveals that trends in anomalies at the same latitude are quite different over time. This implies that there could be large, but unknown, biases in the data.

I do not believe the analysis ends here. There are a number of areas that I (or others) can try to explore.

  1. Does the NASA GISS infilling of the data get us closer to, or further away from, what a global temperature anomaly would look like with full data coverage? My guess, based on the extreme example of Antarctica trends (discussed here), is that the infilling will move away from the more perfect trend. The data could show otherwise.
  2. Are the changes in data coverage on land more significant than the global average or less? Looking at CRUTEM4 data could resolve this question.
  3. Would anomalies based upon similar grid coverage after 1900 give different relative trend patterns to the published ones based on dissimilar grid coverage?

Whether I get the time to analyze these is another issue.

Finally, the problem of trends varying considerably and quite randomly across the globe is the same issue that I found with land data homogenisation, discussed here and here. To derive a temperature anomaly for a grid cell, it is necessary to make the data homogeneous. In standard homogenisation techniques, it is assumed that the underlying trends in an area are pretty much the same. Therefore, any differences in trend between adjacent temperature stations will be treated as a result of data imperfections. I found numerous examples where there were likely real differences in trend between adjacent temperature stations. Homogenisation will, therefore, eliminate real but local climatic trends. Averaging incomplete global data where missing data could contain regional but unknown trends may cause biases at a global scale.

Kevin Marshall

 

 

More Coal-Fired Power Stations in Asia

A lovely feature of the GWPF site is its extracts of articles related to all aspects of climate and related energy policies. Yesterday the GWPF extracted from an opinion piece in the Hong Kong-based South China Morning Post A new coal war frontier emerges as China and Japan compete for energy projects in Southeast Asia.
The GWPF’s summary:-

Southeast Asia’s appetite for coal has spurred a new geopolitical rivalry between China and Japan as the two countries race to provide high-efficiency, low-emission technology. More than 1,600 coal plants are scheduled to be built by Chinese corporations in over 62 countries. It will make China the world’s primary provider of high-efficiency, low-emission technology.

A summary point in the article is not entirely accurate. (Italics mine)

Because policymakers still regard coal as more affordable than renewables, Southeast Asia’s industrialisation continues to consume large amounts of it. To lift 630 million people out of poverty, advanced coal technologies are considered vital for the region’s continued development while allowing for a reduction in carbon emissions.

Replacing a less efficient coal-fired power station with one of the latest technology will reduce carbon (i.e. CO2) emissions per unit of electricity produced. In China, this replacement process may mean efficiency savings outstrip the growth in power supply from fossil fuels. But in the rest of Asia, the new coal-fired power stations will be mostly additional capacity in the coming decades, so will lead to an increase in CO2 emissions. It is this additional capacity that will be primarily responsible for driving the economic growth that will lift the poor out of extreme poverty.

The newer technologies are important for other types of emissions, namely the particulate emissions that have caused high levels of choking pollution and smog in many cities of China and India. By using the new technologies, other countries can avoid the worst excesses of this pollution, whilst still using a cheap fuel available from many different sources of supply. The thrust in China will likely be to replace the high-pollution power stations with new technologies, or adapt them to reduce the emissions and increase efficiencies. Politically, it is a different way of raising living standards and quality of life than by increasing real disposable income per capita.

Kevin Marshall

 

HADCRUT4, CRUTEM4 and HADSST3 Compared

In the previous post, I compared early twentieth-century warming with the post-1975 warming in the Berkeley Earth Global temperature anomaly. From a visual inspection of the graphs, I determined that the greater warming in the later period is due to more land-based warming, as the warming in the oceans (70% of the global area) was very much the same. The Berkeley Earth data ends in 2013, so does not include the impact of the strong El Niño event in the last three years.

The Global average temperature series page of the Met Office Hadley Centre Observation Datasets has the average annual temperature anomalies for CRUTEM4 (land-surface air temperature), HADSST3 (sea-surface temperature) and HADCRUT4 (combined). From these datasets, I have derived the graph in Figure 1.

Figure 1 : Graph of Hadley Centre annual temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)

  Comparing the early twentieth-century with 1975-2010,

  • Land warming is considerably greater in the later period.
  • Combined land and sea warming is slightly more in the later period.
  • Sea surface warming is slightly less in the later period.
  • In the early period, the surface anomalies for land and sea have very similar trends, whilst in the later period, the warming of the land is considerably greater than the sea surface warming.

The impact is more clearly shown with 7 year centred moving average figures in Figure 2.

Figure 2 : Graph of Hadley Centre 7 year moving average temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)
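For reference, below is a minimal Python sketch of the 7 year centred moving average used in Figures 2 and 3: each year’s value is the mean of itself and the three years either side. The series in the example is a placeholder, not the actual Hadley Centre anomalies.

    # 7-year centred moving average of an annual anomaly series.
    def centred_moving_average(series, window=7):
        half = window // 2
        return [sum(series[i - half:i + half + 1]) / window
                for i in range(half, len(series) - half)]

    anomalies = [0.1, 0.2, 0.15, 0.3, 0.25, 0.4, 0.35, 0.5, 0.45, 0.6]   # placeholder values
    print([round(x, 2) for x in centred_moving_average(anomalies)])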

This is not just a feature of the HADCRUT dataset. NOAA Global Surface Temperature Anomalies for land, ocean and combined show similar patterns. Figure 3 is on the same basis as Figure 2.

Figure 3 : Graph of NOAA 7 year moving average temperature anomalies for Land, Ocean and Combined.

The major common feature is that the estimated land temperature anomalies have shown a much greater warming trend than the sea surface anomalies since 1980, but no such divergence existed in the early twentieth century warming period. Given that the temperature data sets are far from complete in terms of coverage, and the data is of variable quality, is this divergence a reflection of the true average temperature anomalies, as would be derived from far more complete and accurate data? There are a number of alternative possibilities that need to be investigated to help determine (using beancounter terminology) whether the estimates are a true and fair reflection of the picture that more perfect data and techniques would provide. My list might be far from exhaustive.

  1. The sea-surface temperature set understates the post-1975 warming trend due to biases within the data set.
  2. The spatial distribution of data changed considerably over time. For instance, in recent decades more data has become available from the Arctic, a region with the largest temperature increases in both the early twentieth century and post-1975.
  3. Land data homogenization techniques may have suppressed differences in climate trends where data is sparser. Alternatively, due to relative differences in climatic trends between nearby locations increasing over time, the further back in time homogenization goes, the more accentuated these differences and therefore the greater the suppression of genuine climatic differences. These aspects I discussed here and here.
  4. There is deliberate manipulation of the data to exaggerate recent warming. Having looked at numerous examples three years ago, this is a perspective that I do not believe to have had any significant impact. However, simply believing something not to be the case, even with examples, does not mean that it is not there.
  5. Strong beliefs about how the data should look have, over time and multiple data adjustments, created biases within the land temperature anomalies.

What I do believe is that an expert opinion as to whether this divergence between the land and sea surface anomalies is a “true and fair view” of the actual state of affairs can only be reached by a detailed examination of the data. Jumping to conclusions – which is evident from many people across the broad spectrum of opinions in the catastrophic anthropogenic global warming debate – will fall short of the most rounded opinion that can be gleaned from the data.

Kevin Marshall

 

The magnitude of Early Twentieth Century Warming relative to Post-1975 Warming

I was browsing the Berkeley Earth website and came across their estimate of global average temperature change, reproduced as Figure 1.

Figure 1 – BEST Global Temperature anomaly

What clearly stands out is the 10-year moving average line. It clearly shows warming in the early twentieth century (the period 1910 to 1940) being very similar to the warming from the mid-1970s to the end of the series, in both duration and magnitude. Maybe the later warming period is up to one-tenth of a degree Celsius greater than the earlier one. The period from 1850 to 1910 shows stasis or a little cooling, but with high variability. The period from the 1940s to the 1970s shows stasis or slight cooling, and low variability.

This is largely corroborated by HADCRUT4, or at least the version I downloaded in mid-2014.

Figure 2 – HADCRUT4 Global Temperature anomaly

HADCRUT4 estimates that the later warming period is about three-twentieths of a degree Celsius greater than the earlier period and that the recent warming is slightly less than the BEST data.

The reason for the close fit is obvious. 70% of the globe is ocean and for that BEST use the same HADSST dataset as HADCRUT4. Graphics of HADSST are a little hard to come by, but KevinC at skepticalscience usefully produced a comparison of the latest HADSST3 in 2012 with the previous version.

Figure 3  – HADSST Ocean Temperature anomaly from skepticalscience 

This shows the two periods having pretty much the same magnitudes of warming.

It is the land data where the differences lie. The BEST Global Land temperature trend is reproduced below.

Figure 4 – BEST Global Land Temperature anomaly

For BEST global land temperatures, the recent warming was much greater than the early twentieth-century warming, whilst the sea surface temperatures showed pretty much the same warming in the two periods. But if greenhouse gases were responsible for a significant part of global warming, then the warming for both land and sea would be greater after the mid-1970s than in the early twentieth century. Whilst there was a rise in GHG levels in the early twentieth century, it was less than in the period from 1945 to 1975, when there was no warming, and much less than in the post-1975 period, when CO2 levels rose massively. Whilst there can be alternative explanations for the early twentieth-century warming and the subsequent lack of warming for 30 years (when the post-WW2 economic boom led to a continual and accelerating rise in CO2 levels), without such explanations being clear and robust the attribution of post-1975 warming to rising GHG levels is undermined. It could be just unexplained natural variation.

However, as a preliminary to examining explanations of warming trends, as a beancounter, I believe it is first necessary to examine the robustness of the figures. In looking at temperature data in early 2015, one aspect that I found unsatisfactory with the NASA GISS temperature data was the zonal data. GISS usefully divide the data between 8 bands of latitude, which I have replicated as 7 year centred moving averages in Figure 5.

Figure 5 – NASA Gistemp zonal anomalies and the global anomaly

What is significant is that some of the zonal anomalies are far greater in magnitude than the global anomaly.

The most southerly zone is 90S-64S, which is basically Antarctica, an area covering just under 5% of the globe. I found it odd that there should be a temperature anomaly for the region from the 1880s, when there were no weather stations recording on the frozen continent until the mid-1950s. The nearest is Base Orcadas, located at 60.8 S 44.7 W, or about 350km north of 64 S. I found that whilst the Base Orcadas temperature anomaly was extremely similar to the Antarctica zonal anomaly in the period until 1950, it was quite dissimilar in the period after.

Figure 6. Gistemp 64S-90S annual temperature anomaly compared to Base Orcadas GISS homogenised data.

NASA Gistemp has attempted to infill the missing temperature anomaly data by using the nearest data available. However, in this case, Base Orcadas appears to be climatically different from the average anomalies for Antarctica, and from the global average as well. The result of this is to effectively cancel out the impact of the massive warming in the Arctic on global average temperatures in the early twentieth century. A false assumption has effectively shrunk the early twentieth-century warming. The shrinkage will be small, but it undermines the claim of NASA GISS to be the best estimate of a global temperature anomaly given the limited data available.

Rather than saying that the whole exercise of determining a valid comparison of the two warming periods since 1900 is useless, I will instead attempt to evaluate how much the lack of data impacts on the anomalies. To this end, in a series of posts, I intend to look at the HADCRUT4 anomaly data. This will be a top-down approach, looking at monthly anomalies for 5° by 5° grid cells from 1850 to 2017, available from the Met Office Hadley Centre Observation Datasets. An advantage over previous analyses is the inclusion of anomalies for the 70% of the globe covered by ocean. The focus will be on the relative magnitudes of the early twentieth-century and post-1975 warming periods. At this point in time, I have no real idea of the conclusions that can be drawn from the analysis of the data.

Kevin Marshall

 

 

Climate Alarmist Bob Ward’s poor analysis of Research Data

After Christopher Booker’s excellent new Report for the GWPF “Global Warming: A Case Study In Groupthink” was published on 20th February, Bob Ward (Policy and Communications Director at the Grantham Research Institute on Climate Change and the Environment at the LSE) typed a rebuttal article “Do male climate change ‘sceptics’ have a problem with women?“. Ward commenced the article with a highly misleading statement.

On 20 February, the Global Warming Policy Foundation launched a new pamphlet at the House of Lords, attacking the mainstream media for not giving more coverage to climate change ‘sceptics’.

I will leave it to the reader to judge for themselves how misleading the statement is by reading the report, or alternatively reading his summary at Capx.co.

At Cliscep (reproduced at WUWT), Jaime Jessop has looked into Ward’s distractive claims about the GWPF gender bias. This comment by Ward particularly caught my eye.

A tracking survey commissioned by the Department for Business, Energy and Industrial Strategy showed that, in March 2017, 7.6% answered “I don’t think there is such a thing as climate change” or “Climate change is caused entirely caused by natural processes”, when asked for their views. Among men the figure was 8.1%, while for women it was 7.1%.

I looked at the Tracking Survey. It is interesting that the Summary of Key Findings contains no mention of gender bias, nor of beliefs on climate change. It is only in the Wave 21 full dataset spreadsheet that you find the results of question 22.

Q22. Thinking about the causes of climate change, which, if any, of the following best describes your opinion?
[INVERT ORDER OF RESPONSES 1-5]
1. Climate change is entirely caused by natural processes
2. Climate change is mainly caused by natural processes
3. Climate change is partly caused by natural processes and partly caused by human activity
4. Climate change is mainly caused by human activity
5. Climate change is entirely caused by human activity
6. I don’t think there is such a thing as climate change.
7. Don’t know
8. No opinion

Note that the first option presented to the questionee is 5, then 4, then 3, then 2, then 1. There may, therefore, be an inbuilt bias in overstating the support for Climate Change being attributed to human activity. But the data is clearly presented, so a quick pivot table was able to check Ward’s results.

The sample was of 2180 – 1090 females and 1090 males. Adding the responses to “I don’t think there is such a thing as climate change” and “Climate change is entirely caused by natural processes”, I get 7.16% for females – (37+41)/1090 – and 8.17% for males – (46+43)/1090. Clearly, Bob Ward has failed to remember what he was taught in high school about roundings.
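For clarity, a minimal Python sketch of that raw, unweighted calculation is below; the counts are those quoted above from the Wave 21 spreadsheet, and the weighted figures in the next paragraph would need the per-respondent weights from the same spreadsheet.

    # Raw (unweighted) proportions of climate-sceptic responses by gender.
    female_sceptic = (37 + 41) / 1090 * 100
    male_sceptic = (46 + 43) / 1090 * 100
    print(round(female_sceptic, 2), round(male_sceptic, 2))   # 7.16 and 8.17, which round to 7.2 and 8.2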

Another problem is that this is raw data. The opinion pollsters have taken time and care to adjust for various demographic factors by adding a weighting to each line. On this basis, Ward should have reported 6.7% for females, 7.6% for males and 7.1% overall.

More importantly, if males tend to be more sceptical of climate change than females, then they should be less alarmist than females. But the data says something different. Of the weighted responses, among those who opted for the most extreme alarmist option, “Climate change is entirely caused by human activity“, 12.5% were female and 14.5% were male. Very fractionally, at the extreme, men are proportionally more alarmist than women, just as they are more sceptical. More importantly, men are slightly more extreme in their opinions on climate change (for or against) than women.

The middle ground is the response to “Climate change is partly caused by natural processes and partly caused by human activity“. The weighted response was 44.5% female and 40.7% male, confirming that men are more extreme in their views than women.

There is a further finding that can be drawn. The projections by the IPCC for future unmitigated global warming assume that all, or the vast majority of, global warming since 1850 is human-caused. Less than 41.6% of British women and 43.2% of British men agree with this assumption that justifies climate mitigation policies.

Below are my summaries. My results are easily replicated for those with an intermediate level of proficiency in Excel.

Learning Note

The most important lesson for understanding data is to analyse that data from different perspectives, against different hypotheses. Bob Ward’s claim of a male gender bias towards climate scepticism in an opinion survey, upon a slightly broader analysis, becomes one where British males are slightly more extreme and forthright in their views than British females, whether for or against. This has parallels to my conclusion when looking at the 2013 US study The Role of Conspiracist Ideation and Worldviews in Predicting Rejection of Science – Stephan Lewandowsky, Gilles E. Gignac, Klaus Oberauer. Here I found that, rather than supporting the paper’s finding that conspiracist ideation is “associated with the rejection of all scientific propositions tested”, the data strongly indicated that people with strong opinions on one subject, whether for or against, tend to have strong opinions on other subjects, whether for or against. Like with any bias of perspective (ideological, religious, gender, race, social class, national, football team affiliation etc.) the way to counter bias is to concentrate on the data. Opinion polls are a poor starting point, but at least they may report on perspectives outside of one’s own immediate belief systems.

Kevin Marshall

“We’re going to miss the 2°C Warming target” study and IPCC AR5 WG3 Chapter 6

WUWT had a post on 22nd January

Study: we’re going to miss (and overshoot) the 2°C warming target

This comment (from a University of Southampton pre-publication news release) needs some explanation to relate it to IPCC AR5.

Through their projections, Dr Goodwin and Professor Williams advise that cumulative carbon emissions needed to remain below 195-205 PgC (from the start of 2017) to deliver a likely chance of meeting the 1.5°C warming target while a 2°C warming target requires emissions to remain below 395-455 PgC.

The PgC is petagrams of carbon. For small weights, one normally uses grams. For larger weights one uses kilograms. For still larger weights one uses tonnes. Under the Imperial measurement system, one uses ounces, pounds and tons. So one petagram is a billion (or giga) tonnes.
Following the IPCC’s convention, GHG emissions are expressed in units of CO2, not carbon. Other GHGs are expressed in CO2e. So 1 PgC = 3.664 GtCO2e.

So the emissions from the start of 2017 are 715-750 GtCO2e for 1.5°C of warming and 1447-1667 GtCO2e for 2°C of warming. To make this comparable to IPCC AR5 (specifically to Table 6.3 from IPCC AR5 WG3 Chapter 6, p431), one needs to adjust for two things: the IPCC’s projections are from 5 years earlier, and they are for CO2 emissions only, about 75% of GHG emissions.
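A minimal Python sketch of the unit conversion is below: budgets in PgC are multiplied by 3.664 (the ratio of the molecular weight of CO2 to that of carbon) to give gigatonnes of CO2.

    # Convert carbon budgets in PgC to GtCO2.
    def pgc_to_gtco2(pgc):
        return pgc * 3.664

    print([round(pgc_to_gtco2(x)) for x in (195, 205)])   # ~714-751, close to the 715-750 quoted above
    print([round(pgc_to_gtco2(x)) for x in (395, 455)])   # ~1447-1667, matching the range for 2°C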

The IPCC’s projections of CO2 emissions are 630-1180 GtCO2 for 1.5-1.7°C of warming and 960-1550 GtCO2 for 1.7-2.1°C of warming.

With GHG emissions roughly 50 GtCO2e a year and CO2 emissions roughly 40 GtCO2 a year, the IPCC’s figures, updated to the start of 2017 and expressed in GtCO2e, are 570-1300 GtCO2e for 1.5-1.7°C of warming and 1010-1800 GtCO2e for 1.7-2.1°C of warming.
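Below is a minimal Python sketch of my reading of that adjustment: deduct roughly five years of CO2 emissions at 40 GtCO2 a year from the IPCC’s 2011-based CO2-only budgets, then divide by 0.75 to express the remainder as all-GHG budgets in CO2e. The exact method the IPCC or the paper’s authors would use may differ.

    # Update the IPCC CO2-only budgets to the start of 2017 and convert to all-GHG CO2e.
    def adjust(ipcc_low, ipcc_high, years_elapsed=5, co2_per_year=40, co2_share=0.75):
        low = (ipcc_low - years_elapsed * co2_per_year) / co2_share
        high = (ipcc_high - years_elapsed * co2_per_year) / co2_share
        return round(low, -1), round(high, -1)

    print(adjust(630, 1180))   # roughly (570, 1310) GtCO2e for 1.5-1.7°C
    print(adjust(960, 1550))   # roughly (1010, 1800) GtCO2e for 1.7-2.1°C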

Taking the mid-points of the IPCC’s and the Goodwin-Williams figures, the new projections are saying that, at current emissions levels, 1.5°C will be breached four years earlier, and 2°C will be breached one year later. But the mid-points of the IPCC ranges are 1.6°C and 1.9°C, so it makes no real difference whatsoever. The Goodwin-Williams figures just narrow the ranges and use different units of measure.

But there is still a major problem. Consider this mega table 6.3 reproduced, at lower quality, below.

Notice that Column A is for CO2 equivalent concentration in 2100 (ppm CO2eq). Current CO2 levels are around 405 ppm, but GHG levels are around 450 ppm CO2eq. Then notice columns G and H, with a joint heading of Concentration (ppm). Column G is for CO2 levels in 2100 and Column H is for CO2 equivalent levels. Note also that for the first few rows of data, Column H is greater than Column A, implying that sometime this century peak CO2-equivalent levels will be higher than at the end of the century, and (subject to the response period of the climate system to changes in greenhouse gas levels) average global temperatures could (subject to the models being correct) exceed the projected 2100 levels. How much, though?

Using a magic equation from the Skeptical Science blog (after correcting it so that a doubling of CO2 converts to exactly 3°C of warming), I assume that all changes in CO2 levels instantly translate into average temperature changes. Further, I assume that other greenhouse gases are irrelevant to the warming calculation, and peak CO2 concentrations are calculated from peak GHG, 2100 GHG, and 2100 CO2 concentrations. I derived the following table.
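A minimal Python sketch of the formula behind that table is below: 3°C per doubling of CO2, applied instantly, with pre-industrial CO2 taken as 280 ppm. The 480 ppm peak value in the example is purely illustrative, not a figure taken from Table 6.3.

    # Implied warming from a CO2 (or CO2eq) concentration, at 3°C per doubling, applied instantly.
    from math import log2

    def implied_warming(co2_ppm, pre_industrial=280.0, sensitivity=3.0):
        return sensitivity * log2(co2_ppm / pre_industrial)

    print(round(implied_warming(450), 1))   # ~2.1°C implied by the ~450 ppm CO2eq of today
    print(round(implied_warming(480), 1))   # ~2.3°C implied by an illustrative 480 ppm peak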

The 1.5°C warming scenario is actually 1.5-1.7°C warming in 2100, with a mid-point of 1.6°C. The peak implied temperatures are about 2°C.

The 2°C warming scenario is actually 1.7-2.1°C warming in 2100, with a mid-point of 1.9°C. The peak implied temperatures are about 2.3°C, with 2.0°C of warming in 2100 implying about 2.4°C peak temperature rise.

So when the IPCC talk about constraining temperature rise, it is about projected temperature rise in 2100, not about stopping global average temperature rise breaching 1.5°C or 2°C barriers.

Now consider the following statement from the University of Southampton pre-publication news release, emphasis mine.

“Immediate action is required to develop a carbon-neutral or carbon-negative future or, alternatively, prepare adaptation strategies for the effects of a warmer climate,” said Dr Goodwin, Lecturer in Oceanography and Climate at Southampton. “Our latest research uses a combination of a model and historical data to constrain estimates of how long we have until 1.5°C or 2°C warming occurs. We’ve narrowed the uncertainty in surface warming projections by generating thousands of climate simulations that each closely match observational records for nine key climate metrics, including warming and ocean heat content.”

Professor Williams, Chair in Ocean Sciences at Liverpool, added: “This study is important by providing a narrower window of how much carbon we may emit before reaching 1.5°C or 2°C warming. There is a real need to take action now in developing and adopting the new technologies to move to a more carbon-efficient or carbon-neutral future as we only have a limited window before reaching these warming targets.” This work is particularly timely given the work this year of the Intergovernmental Panel on Climate Change (IPCC) to develop a Special Report on the Impacts of global warming of 1.5°C above pre-industrial levels.

Summary

The basic difference between IPCC AR5 Chapter 6 Table 6.3 and the new paper is the misleading message that various emissions policy scenarios will prevent warming breaching either 1.5°C or 2°C, when the IPCC scenarios are clear that this is the 2100 warming level. The IPCC scenarios imply that before 2100 warming could peak at around 1.75°C and 2.4°C respectively. My calculations can be validated by assuming (a) a doubling of CO2 gives 3°C of warming, (b) other GHGs are irrelevant, and (c) there is no significant lag between the rise in CO2 level and the rise in global average temperature.

Kevin Marshall

 

Is China leading the way on climate mitigation?

At the Conversation is an article on China’s lead in renewable energy.
China wants to dominate the world’s green energy markets – here’s why is by University of Sheffield academic Chris G Pope. The article starts:-

If there is to be an effective response to climate change, it will probably emanate from China. The geopolitical motivations are clear. Renewable energy is increasingly inevitable, and those that dominate the markets in these new technologies will likely have the most influence over the development patterns of the future. As other major powers find themselves in climate denial or atrophy, China may well boost its power and status by becoming the global energy leader of tomorrow.

The effective response ought to be put into the global context. At the end of October UNEP produced its Emissions Gap Report 2017, just in time for the COP23 meeting in Bonn. The key figure on the aimed-for constraint of warming to 1.5°C to 2°C from pre-industrial levels – an “effective policy response” – is Figure ES.2, reproduced below.

An “effective response” by any one country means at least reducing its emissions substantially by 2030 compared with now, at the start of 2018. To be a world leader in response to climate change requires reducing emissions in the next 12 years by more than the required global average of 20-30%.

Climate Action Tracker – which, unlike myself, strongly promotes climate mitigation – rates China’s overall policies as Highly Insufficient in terms of limiting warming to 1.5°C to 2°C. The reason is that they forecast that, on the basis of current policies, emissions in China will increase in the next few years, instead of rapidly decreasing.

So why has Chris Pope got China’s policy so radically wrong? After all, I accept the following statement.

Today, five of the world’s six top solar-module manufacturers, five of the largest wind turbine manufacturers, and six of the ten major car manufacturers committed to electrification are all Chinese-owned. Meanwhile, China is dominant in the lithium sector – think: batteries, electric vehicles and so on – and a global leader in smart grid investment and other renewable energy technologies.

Reducing net emissions means not just having lots of wind turbines, hydro schemes, solar farms and electric cars. It means those renewable forms of energy replacing CO2-emitting energy sources. The problem is that renewables are adding to total energy production, alongside fossil fuels. The principal source of China’s energy for electricity and heating is coal. The Global Coal Plant Tracker at endcoal.org has some useful statistics. In terms of coal-fired power stations, China now has 922 GW of coal-fired power stations operating (47% of the global total), with a further 153 GW “Announced + Pre-permit + Permitted” (28%) and 147 GW under construction (56%). Further, from 2006 to mid-2017, China’s newly operating coal plants had a capacity of 667 GW, fully 70% of the global total. Endcoal.org estimates that coal-fired power stations account for 72% of global GHG emissions from the energy sector, with the energy sector contributing 41% of global GHG emissions. With China’s coal-fired power stations accounting for 47% of the global total, and assuming similar capacity utilization, China’s coal-fired power stations account for 13-14% of global GHG emissions, or 7 GtCO2e of around 52 GtCO2e. It does not stop there. Many homes in China use coal for domestic heating; there is a massive coal-to-liquids program (which may not be currently operating due to the low oil price); manufacturers (such as metal refiners) burn it directly; and recently there are reports of producing gas from coal. So why would China pursue a massive renewables program?
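A minimal Python sketch of that rough estimate is below; the shares are the endcoal.org figures quoted above, and the similar-capacity-utilisation assumption is mine.

    # Rough share of global GHG emissions attributable to China's coal-fired power stations.
    coal_share_of_energy_sector = 0.72    # coal power's share of energy-sector emissions (endcoal.org)
    energy_sector_share_of_ghg = 0.41     # energy sector's share of global GHG emissions (endcoal.org)
    china_share_of_coal_capacity = 0.47   # 922 GW of the global operating fleet
    global_ghg = 52.0                     # GtCO2e a year, approximate

    china_coal_share = coal_share_of_energy_sector * energy_sector_share_of_ghg * china_share_of_coal_capacity
    print(round(china_coal_share * 100, 1))          # ~13.9% of global GHG emissions
    print(round(china_coal_share * global_ghg, 1))   # ~7.2 GtCO2e a year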

Possible reasons for the Chinese “pro-climate” policies

First, is for strategic energy reasons. I believe that China does not want to be dependent on world oil price fluctuations, which could harm economic growth. China, therefore, builds massive hydro schemes, despite them being damaging to the environment and sometimes displacing hundreds of thousands of people. China also pursues coal-to-liquids programs, alongside promoting solar and wind farms. Although duplicating effort, it means that if oil prices suffer another hike, China is more immune from the impact than it would otherwise be.

Second, is an over-riding policy of a fast increase in perceived living standards. For over 20 years China managed average growth rates of up to 10% per annum, increasing average incomes by up to eight times, and moving hundreds of millions of people out of grinding poverty. Now that economic growth is slowing (to rates still fast by Western standards), the raising of perceived living standards is being achieved by other means. One such method is to reduce the particulate pollution, particularly in the cities. The recent heavy-handed banning of coal burning in cities (with people freezing this winter) is one example. Another is the push for electric cars, with the electricity mostly coming from distant coal-fired power stations. In terms of reducing CO2 emissions, electric cars do not make sense, but they do make sense in densely-populated areas with an emerging middle class wanting independent means of travel.

Third, is the push to dominate areas of manufacturing. With many countries pursuing hopeless renewables policies, the market for wind turbines and solar panels is set to increase. The “rare earths” required for wind turbine magnets, such as neodymium, are produced in large quantities in China, such as in highly polluted Baotou. With lithium (required for batteries), China might currently be only the world’s third largest producer – and some way behind Australia and Chile – but its reserves are the world’s second largest and sufficient on their own to supply current global demand for decades. With raw material supplies and low, secure energy costs from coal, along with still relatively low labour costs, China is well-placed to dominate these higher added-value manufacturing areas.

Concluding Comments

The wider evidence shows that an effective response to climate change is not emanating from China. The current energy policies are dominated, and will continue to be dominated, by coal. This will far outweigh any apparent reductions in emissions from the manufacturing of renewables. Rather, the growth of renewables should be viewed in the context of promoting the continued rapid and secure increase in living standards for the Chinese people, whether in per capita income or in standards of the local environment.

Kevin Marshall

 

NOAA Future Aridity against Al Gore’s C20th Precipitation Graphic

Paul Homewood has taken a look at an article in yesterday’s Daily Mail – A quarter of the world could become a DESERT if global warming increases by just 2ºC.

The article states

Aridity is a measure of the dryness of the land surface, obtained from combining precipitation and evaporation.  

‘Aridification would emerge over 20 to 30 per cent of the world’s land surface by the time the global temperature change reaches 2ºC (3.6ºF)’, said Dr Manoj Joshi from the University of East Anglia’s School of Environmental Sciences and one of the study’s co-authors.  

The research team studied projections from 27 global climate models and identified areas of the world where aridity will substantially change.  

The most affected areas are parts of South East Asia, Southern Europe, Southern Africa, Central America and Southern Australia.

Now, Al Gore’s authoritative book An Inconvenient Truth contains statements first about extreme flooding, and then about aridity (pages 108-113). The reason flooding comes first is shown in a graphic of twentieth-century changes in precipitation on pages 114 & 115.

This graphic shows that, overall, the amount of precipitation has increased globally in the last century by almost 20%.

 However, the effects of climate change on precipitation is not uniform. Precipitation in the 20th century increased overall, as expected with global warming, but in some regions precipitation actually decreased.

The blue dots mark the areas with increased precipitation, the orange dots those with decreases. The larger the dot, the larger the change. So, according to Nobel Laureate Al Gore, increased precipitation should be far more common than increased aridity. If all warming is attributed to human-caused climate change (as the book seems to imply) then over a third of the dangerous 2ºC of warming occurred in the 20th century. Therefore there should be considerable coherence between the recently arid areas and the future arid areas.

The Daily Mail reproduces a map from the UEA, showing the high-risk areas.

There are a couple of areas with big differences.

Southern Australia

In the 20th century, much of Australia saw increased precipitation. Within the next two or three decades, the UEA projects it getting considerably more arid. Could this change in forecast be the result of the extreme drought that broke in 2012 with extreme flooding? Certainly, the pictures of empty reservoirs taken a few years ago, alongside claims that they would likely never refill, show the predictions to have been false.

One such reservoir is Lake Eildon in Victoria. Below is a graphic of capacity levels in selected years. It is possible to compare other years by following the historical water levels for EILDON link.

Similarly, in the same post, I linked to a statement by re-insurer Munich Re stating that increased forest fires in Southern Australia were due to human activity. Not to “anthropogenic climate change”, but to discarded fag ends, shards of glass and (most importantly) fires that were deliberately started.

Northern Africa

The UEA makes no claims about increased aridity in Northern Africa, particularly with respect to the southern and northern fringes of the Sahara. Increasing desertification of the Sahara used to be claimed as a major consequence of climate change. In the year following Al Gore’s movie and book, the UNIPCC produced its Fourth Climate Assessment Report. The Working Group II report, Chapter 9 (pg 448) on Africa, made the following claim.

In other countries, additional risks that could be exacerbated by climate change include greater erosion, deficiencies in yields from rain-fed agriculture of up to 50% during the 2000-2020 period, and reductions in crop growth period (Agoumi, 2003).

Richard North took a detailed look at the background of this claim in 2010. The other African countries were Morocco, Algeria and Tunisia. Agoumi 2003 compiled three reports, only one of which – Morocco – had anything near a 50% claim. Yet Morocco seems, from Al Gore’s graphic, to have had a modest increase in rainfall over the last century.

Conclusion

The UEA’s latest doom-laden prophesy of increased aridity flies in the face of the accepted wisdom that human-caused global warming will result in increased precipitation. In two major areas (Southern Australia and Northern Africa), increased aridity is at odds with the changes in precipitation claimed to have occurred in the 20th century by Al Gore in An Inconvenient Truth. Yet over a third of the dangerous 2ºC warming limit occurred in the last century.

Kevin Marshall