Does data coverage impact the HADCRUT4 and NASA GISS Temperature Anomalies?

Introduction

This post started with the title “HADCRUT4 and NASA GISS Temperature Anomalies – a Comparison by Latitude“.  After deriving a global temperature anomaly from the HADCRUT4 gridded data, I was intending to compare the results with GISS’s anomalies by 8 latitude zones. However, this opened up an intriguing issue. Are global temperature anomalies impacted by a relative lack of data in earlier periods? This leads to a further issue of whether infilling of the data can be meaningful, and hence be considered to “improve” the global anomaly calculation.

A Global Temperature Anomaly from HADCRUT4 Gridded Data

In a previous post, I looked at the relative magnitudes of early twentieth century and post-1975 warming episodes. In the Hadley datasets, there is a clear divergence between the land and sea temperature data trends post-1980, a feature that is not present in the early warming episode. This is reproduced below as Figure 1.

Figure 1 : Graph of Hadley Centre 7 year moving average temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)

The question that needs to be answered is whether the anomalous post-1975 warming on the land is due to real divergence, or due to issues in the estimation of global average temperature anomaly.

In another post – The magnitude of Early Twentieth Century Warming relative to Post-1975 Warming – I looked at the NASA Gistemp data, which is usefully broken down into 8 Latitude Zones. A summary graph is shown in Figure 2.

Figure 2 : NASA Gistemp zonal anomalies and the global anomaly

This is more detailed than the HADCRUT4 data, which is presented as just three zones: the Tropics, plus the Northern and Southern Hemispheres. However, the Hadley Centre, on their HADCRUT4 Data: download page, have, under HadCRUT4 Gridded data: additional fields, a file HadCRUT.4.6.0.0.median_ascii.zip. This contains monthly anomalies for 5° by 5° grid cells from 1850 to 2017. There are 36 zones of latitude and 72 zones of longitude. Over 2016 months, there are over 5.22 million grid cells, but only 2.51 million (48%) have data. From this data, I have constructed a global temperature anomaly. The major issue in the calculation is that the grid cells are of different areas. A grid cell nearest the equator, at 0° to 5°, has about 23 times the area of a grid cell adjacent to the poles, at 85° to 90°. I used the appropriate weighting for each band of latitude.
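
The weighting can be sketched in a few lines of Python. This is my own illustrative code, not the Hadley Centre's; the 36 x 72 array layout and the use of NaN for empty cells are my assumptions.

```python
import numpy as np

# The area of a latitude band between lat1 and lat2 is proportional to
# sin(lat2) - sin(lat1), so each of the 36 five-degree bands gets that weight.
def band_weights():
    edges = np.radians(np.arange(-90, 95, 5))        # 37 band edges, -90..90
    return np.sin(edges[1:]) - np.sin(edges[:-1])    # 36 band weights

def global_anomaly(grid):
    """Area-weighted mean of a 36 x 72 anomaly grid; NaN marks empty cells."""
    w = np.repeat(band_weights()[:, None], 72, axis=1)
    mask = ~np.isnan(grid)
    return np.nansum(np.where(mask, grid, 0.0) * w) / w[mask].sum()

# The 0-5 degree band has roughly 23 times the area of the 85-90 degree band.
w = band_weights()
equator_to_pole_ratio = w[18] / w[35]
```

Note that the denominator only sums the weights of cells that actually have data, which is what makes the patchy early coverage matter so much.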

The question is whether I have calculated a global anomaly similar to the Hadley Centre's. Figure 3 is a reconciliation between the published global anomaly mean (available from here) and my own.

Figure 3 : Reconciliation between HADCRUT4 published mean and calculated weighted average mean from the Gridded Data

Prior to 1910, my calculations are slightly below the HADCRUT4 published data. The biggest differences are in 1956 and 1915. Overall, the differences are insignificant and do not impact the analysis.

I split the HADCRUT4 temperature data into eight zones of latitude on a similar basis to NASA Gistemp. Figure 4 presents the results on the same basis as Figure 2.

Figure 4 : Zonal surface temperature anomalies and the global anomaly calculated using the HADCRUT4 gridded data.

Visually, there are a number of differences between the Gistemp and HADCRUT4-derived zonal trends.

A potential problem with the global average calculation

The major reason for differences between HADCRUT4 & Gistemp is that the latter has infilled estimated data into areas where there is no data. Could this be a problem?

In Figure 5, I have shown the build-up in global coverage, that is, the percentage of 5° by 5° grid cells with an anomaly in the monthly data.

Figure 5 : Change in the percentage coverage of each zone in the HADCRUT4 gridded data.

Figure 5 shows a build-up in data coverage during the late nineteenth and early twentieth centuries. The World Wars (1914-1918 & 1939-1945) had the biggest impact on the Southern Hemisphere data collection. This is unsurprising when one considers that both wars were mostly fought in the Northern Hemisphere, and European powers withdrew resources from their far-flung Empires to protect the mother countries. The only zones with significantly less than 90% grid coverage in the post-1975 warming period are the Arctic and the region below 45S. That is around 19% of the global area.
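
The coverage statistic behind Figure 5 is straightforward to compute. A minimal sketch, assuming the anomalies sit in a (months, 36, 72) array with NaN for empty cells (the array layout is my assumption, not the HADCRUT4 file format):

```python
import numpy as np

# Percentage of the 2592 grid cells with data, month by month.
def coverage_pct(grid_months):
    flat = grid_months.reshape(grid_months.shape[0], -1)
    return 100.0 * np.mean(~np.isnan(flat), axis=1)

# e.g. a month with data in the northern half of the grid only:
month = np.full((1, 36, 72), np.nan)
month[0, 18:, :] = 0.0   # bands from the equator to 90N filled
```

The same calculation restricted to the rows for one band of latitude gives the zonal coverage series plotted in the figure.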

Finally, comparing equivalent zones in the Northern and Southern Hemispheres, the tropics have similar coverage, whilst for the polar, temperate and mid-latitude areas the Northern Hemisphere has better coverage after 1910.

This variation in coverage can potentially lead to wide discrepancies between any calculated temperature anomalies and a theoretical anomaly based upon data in all the 5° by 5° grid cells. As an extreme example, in my own calculation, if just one of the 72 grid cells in a band of latitude had a figure, then an “average” would have been calculated for that month for a band running right around the world, 555km (345 miles) from north to south. In the annual figures by zone, it only requires one of the 72 grid cells, in one of the months, in one of the bands of latitude, to have data to calculate an annual anomaly. For the tropics or the polar areas, that is just one in 4320 data points to create an anomaly. This issue will impact the early twentieth-century warming episode far more than the post-1975 one. Although I would expect the Hadley Centre to have done some data cleanup of the more egregious examples in their calculation, the lack of data in grid cells could have quite random impacts, biasing the global temperature anomaly trends to an unknown, but significant, extent. How this could play out can be appreciated from an example using NASA GISS Global Maps.
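
The effect is easy to demonstrate with a toy zonal average: a NaN-skipping mean happily returns a "band average" built from a single cell.

```python
import numpy as np

# One 5-degree latitude band: 72 longitude cells, only one with data.
band = np.full(72, np.nan)
band[10] = 2.3                   # a single anomalous reading
zonal_mean = np.nanmean(band)    # the lone cell speaks for the whole band
```

Here `zonal_mean` is simply the one reading, 2.3, yet it enters any further averaging with the full weight of the band.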

NASA GISS Global Maps Temperature Trends Example

NASA GISS Global Maps from GHCN v3 Data provide maps with the calculated change in average temperatures. I have run the maps comparing annual data for 1940 with a baseline of 1881-1910, capturing much of the early twentieth-century warming, at both the 1200km and 250km smoothing radii.

Figure 6 : NASA GISS Global anomaly Map and average anomaly by Latitude comparing 1940 with a baseline of 1881 to 1910 and a 1200km smoothing radius

Figure 7 : NASA GISS Global anomaly Map and average anomaly by Latitude comparing 1940 with a baseline of 1881 to 1910 and a 250km smoothing radius. 

With respect to the maps in figures 6 & 7

  • There is no apparent difference in the sea data between the 1200km and 250km smoothing radii, except in the polar regions, where the former has more coverage. The differences lie in the land areas.
  • The grey areas with insufficient data all apply to the land or ocean areas in polar regions.
  • Figure 6, with 1200km smoothing, has most of the land infilled, whilst the 250km smoothing shows the lack of data coverage for much of South America, Africa, the Middle East, South-East Asia and Greenland.

Even with these land-based differences in coverage, it is clear from either map that at any latitude there are huge variations in calculated average temperature change. For instance, take 40N. This line of latitude is north of San Francisco on the West Coast of the USA and clips Philadelphia on the East Coast. On the other side of the Atlantic, Madrid, Ankara and Beijing are at about 40N. There are significant points on the line of latitude with estimated warming greater than 1C (e.g. California), whilst at the same time in Eastern Europe cooling may have exceeded 1C in the period. More extreme still, at 60N (Southern Alaska, Stockholm, St Petersburg) the difference in temperature change along the line of latitude is over 3C. This compares to a calculated global rise of 0.40C.

This lack of data may have contributed (along with a faulty algorithm) to the differences in the Zonal mean charts by latitude. The 1200km smoothing radius chart bears little relation to the 250km one. For instance:-

  • 1200km shows 1.5C of warming at 45S, 250km about zero. 45S cuts through South Island, New Zealand.
  • From the equator to 45N, 1200km shows a rise from 0.5C to over 2.0C; 250km shows a drop from less than 0.5C to near zero, then a rise to 0.2C. At around 45N lie Ottawa, Maine, Bordeaux, Belgrade, Crimea and the most northern point of Japan.

The differences in the NASA GISS Maps, in a period when available data covered only around half the 2592 5° by 5° grid cells, indicate huge differences in trends between different areas. As a consequence, interpolating warming trends from one area into adjacent areas appears to give quite different results in terms of trends by latitude.
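
To see why the two radii give such different maps, consider a simplified version of radius-limited smoothing. This is a sketch only: GISS's actual scheme differs in detail, and the linear distance weighting and function names here are my own assumptions.

```python
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points, in km."""
    la1, lo1 = map(math.radians, p)
    la2, lo2 = map(math.radians, q)
    a = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def smoothed_value(target, stations, radius_km):
    """Weight each station linearly down to zero at radius_km.
    stations: list of ((lat, lon), anomaly). None = grey 'no data' area."""
    num = den = 0.0
    for loc, anom in stations:
        d = haversine_km(target, loc)
        if d < radius_km:
            num += (1.0 - d / radius_km) * anom
            den += 1.0 - d / radius_km
    return num / den if den else None
```

A station roughly 1000km away influences the target point at the 1200km radius but not at 250km, which is why the 250km map goes grey over data-sparse land while the 1200km map is infilled.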

Conclusions and Further Questions

The issue I originally focussed upon was the relative size of the early twentieth-century warming compared to the post-1975 warming. The greater amount of warming in the later period seemed to be due to the greater warming on land, which covers just 30% of the total global area. The sea temperature warming phases appear to be pretty much the same.

The issue that I then focussed upon was a data issue. The early twentieth century had much less data coverage than the period after 1975. Further, the Southern Hemisphere had worse data coverage than the Northern Hemisphere, except in the Tropics. This means that in my calculation of a global temperature anomaly from the HADCRUT4 gridded data (which in aggregate was very similar to the published HADCRUT4 anomaly) the average by latitude will not be comparing like with like in the two warming periods. In particular, in the early twentieth century, a calculation by latitude will not average right the way around the globe, but only over a limited selection of bands of longitude. On average this was about half, but there are massive variations. This would not matter if the changes in anomalies over time were roughly the same along each band of latitude. But an examination of NASA GISS global maps for a period covering the early twentieth-century warming phase reveals that trends in anomalies at the same latitude are quite different. This implies that there could be large, but unknown, biases in the data.

I do not believe the analysis ends here. There are a number of areas that I (or others) can try to explore.

  1. Does the NASA GISS infilling of the data get us closer to, or further away from, what a global temperature anomaly would look like with full data coverage? My guess, based on the extreme example of Antarctica trends (discussed here), is that the infilling will move away from the more perfect trend. The data could show otherwise.
  2. Are the changes in data coverage on land more significant than the global average or less? Looking at CRUTEM4 data could resolve this question.
  3. Would anomalies based upon similar grid coverage after 1900 give different relative trend patterns to the published ones based on dissimilar grid coverage?

Whether I get the time to analyze these is another issue.

Finally, the problem of trends varying considerably and quite randomly across the globe is the same issue that I found with land data homogenisation, discussed here and here. To derive a temperature anomaly for a grid cell, it is necessary to make the data homogeneous. In standard homogenisation techniques, it is assumed that the underlying trends in an area are pretty much the same. Therefore, any differences in trend between adjacent temperature stations will be the result of data imperfections. I found numerous examples where there were likely real differences in trend between adjacent temperature stations. Homogenisation will, therefore, eliminate real but local climatic trends. Averaging incomplete global data, where the missing data could contain unknown regional trends, may cause biases at a global scale.

Kevin Marshall


More Coal-Fired Power Stations in Asia

A lovely feature of the GWPF site is its extracts of articles related to all aspects of climate and related energy policies. Yesterday the GWPF extracted from an opinion piece in the Hong Kong-based South China Morning Post A new coal war frontier emerges as China and Japan compete for energy projects in Southeast Asia.
The GWPF’s summary:-

Southeast Asia’s appetite for coal has spurred a new geopolitical rivalry between China and Japan as the two countries race to provide high-efficiency, low-emission technology. More than 1,600 coal plants are scheduled to be built by Chinese corporations in over 62 countries. It will make China the world’s primary provider of high-efficiency, low-emission technology.

A summary point in the article is not entirely accurate. (Italics mine)

Because policymakers still regard coal as more affordable than renewables, Southeast Asia’s industrialisation continues to consume large amounts of it. To lift 630 million people out of poverty, advanced coal technologies are considered vital for the region’s continued development while allowing for a reduction in carbon emissions.

Replacing a less efficient coal-fired power station with one using the latest technology will reduce carbon (i.e. CO2) emissions per unit of electricity produced. In China, the efficiency savings from this replacement process may outstrip the growth in power supply from fossil fuels. But in the rest of Asia, the new coal-fired power stations will mostly be additional capacity in the coming decades, and so will lead to an increase in CO2 emissions. It is this additional capacity that will be primarily responsible for driving the economic growth that will lift the poor out of extreme poverty.

The newer technologies are more important for other types of emissions, namely the particle emissions that have caused high levels of choking pollution and smog in many cities of China and India. By using the new technologies, other countries can avoid the worst excesses of this pollution, whilst still using a cheap fuel available from many different sources of supply. The thrust in China will likely be to replace the high-pollution power stations with new technologies, or to adapt them to reduce the emissions and increase efficiencies. Politically, it is a different way of raising living standards and quality of life than increasing real disposable income per capita.

Kevin Marshall


HADCRUT4, CRUTEM4 and HADSST3 Compared

In the previous post, I compared early twentieth-century warming with the post-1975 warming in the Berkeley Earth Global temperature anomaly. From a visual inspection of the graphs, I determined that the greater warming in the later period is due to more land-based warming, as the warming in the oceans (70% of the global area) was very much the same. The Berkeley Earth data ends in 2013, so does not include the impact of the strong El Niño event in the last three years.

The Global average temperature series page of the Met Office Hadley Centre Observation Datasets has the average annual temperature anomalies for CRUTEM4 (land-surface air temperature), HADSST3 (sea-surface temperature) and HADCRUT4 (combined). From these datasets, I have derived the graph in Figure 1.

Figure 1 : Graph of Hadley Centre annual temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)

  Comparing the early twentieth-century with 1975-2010,

  • Land warming is considerably greater in the later period.
  • Combined land and sea warming is slightly more in the later period.
  • Sea surface warming is slightly less in the later period.
  • In the early period, the surface anomalies for land and sea have very similar trends, whilst in the later period, the warming of the land is considerably greater than the sea surface warming.

The impact is more clearly shown with 7 year centred moving average figures in Figure 2.

Figure 2 : Graph of Hadley Centre 7 year moving average temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)
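
The 7 year centred moving average used in these graphs can be sketched as follows (my own minimal implementation, not the code behind the published figures):

```python
import numpy as np

# Centred moving average: each year gets the mean of itself and the
# (window - 1) / 2 years either side; the end years are left undefined.
def centred_ma(values, window=7):
    values = np.asarray(values, dtype=float)
    half = window // 2
    out = np.full(len(values), np.nan)
    for i in range(half, len(values) - half):
        out[i] = values[i - half:i + half + 1].mean()
    return out
```

Because the average is centred, the smoothed series necessarily loses three years at each end, which is why such plots stop short of the latest data.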

This is not just a feature of the HADCRUT dataset. NOAA Global Surface Temperature Anomalies for land, ocean and combined show similar patterns. Figure 3 is on the same basis as Figure 2.

Figure 3 : Graph of NOAA 7 year moving average temperature anomalies for Land, Ocean and Combined.

The major common feature is that the estimated land temperature anomalies have shown a much greater warming trend than the sea surface anomalies since 1980, but no such divergence existed in the early twentieth century warming period. Given that the temperature data sets are far from complete in terms of coverage, and the data is of variable quality, is this divergence a reflection of the true average temperature anomalies? There are a number of alternative possibilities that need to be investigated to help determine (using beancounter terminology) whether the estimates are a true and fair reflection of the picture that more perfect data and techniques would provide. My list might be far from exhaustive.

  1. The sea-surface temperature set understates the post-1975 warming trend due to biases within the data set.
  2. The spatial distribution of data changed considerably over time. For instance, in recent decades more data has become available from the Arctic, a region with the largest temperature increases in both the early twentieth century and post-1975.
  3. Land data homogenization techniques may have suppressed differences in climate trends where data is sparser. Alternatively, due to relative differences in climatic trends between nearby locations increasing over time, the further back in time homogenization goes, the more accentuated these differences and therefore the greater the suppression of genuine climatic differences. These aspects I discussed here and here.
  4. There is deliberate manipulation of the data to exaggerate recent warming. Having looked at numerous examples three years ago, I do not believe this to have had any significant impact. However, simply believing something not to be the case, even after looking at examples, does not mean that it is not there.
  5. Strong beliefs about how the data should look have, over time and through multiple data adjustments, created biases within the land temperature anomalies.

What I do believe is that an expert opinion as to whether this divergence between the land and sea surface anomalies is a “true and fair view” of the actual state of affairs can only be reached by a detailed examination of the data. Jumping to conclusions – something evident from many people across the broad spectrum of opinion in the catastrophic anthropogenic global warming debate – will fall short of the most rounded opinion that can be gleaned from the data.

Kevin Marshall


The magnitude of Early Twentieth Century Warming relative to Post-1975 Warming

I was browsing the Berkeley Earth website and came across their estimate of global average temperature change. Reproduced as Figure 1.

Figure 1 – BEST Global Temperature anomaly

What clearly stands out is the 10-year moving average line. It shows the warming in the early twentieth century (the period 1910 to 1940) to be very similar to the warming from the mid-1970s to the end of the series, in both duration and magnitude. Maybe the later warming period is up to one-tenth of a degree Celsius greater than the earlier one. The period from 1850 to 1910 shows stasis or a little cooling, but with high variability. The period from the 1940s to the 1970s shows stasis or slight cooling, and low variability.

This is largely corroborated by HADCRUT4, or at least the version I downloaded in mid-2014.

Figure 2 – HADCRUT4 Global Temperature anomaly

HADCRUT4 estimates that the later warming period is about three-twentieths of a degree Celsius greater than the earlier period, and that the recent warming is slightly less than in the BEST data.
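
Comparisons of this sort can be put on a consistent footing by fitting least-squares trends over the two periods. A sketch with an invented anomaly series (the data below is a placeholder, not HADCRUT4 or BEST values):

```python
import numpy as np

def period_trend(years, anoms, start, end):
    """Least-squares warming trend over [start, end], in degrees C per decade."""
    sel = (years >= start) & (years <= end)
    return 10.0 * np.polyfit(years[sel], anoms[sel], 1)[0]

years = np.arange(1850, 2018)
anoms = 0.005 * (years - 1850)   # placeholder series: a steady 0.05 C/decade

early = period_trend(years, anoms, 1910, 1940)
late = period_trend(years, anoms, 1975, 2010)
```

On real data, the difference `late - early` would quantify the gap the text describes in tenths and twentieths of a degree.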

The reason for the close fit is obvious. 70% of the globe is ocean and for that BEST use the same HADSST dataset as HADCRUT4. Graphics of HADSST are a little hard to come by, but KevinC at skepticalscience usefully produced a comparison of the latest HADSST3 in 2012 with the previous version.

Figure 3  – HADSST Ocean Temperature anomaly from skepticalscience 

This shows the two periods having pretty much the same magnitudes of warming.

It is the land data where the differences lie. The BEST Global Land temperature trend is reproduced below.

Figure 4 – BEST Global Land Temperature anomaly

For BEST global land temperatures, the recent warming was much greater than the early twentieth-century warming, whilst the sea surface temperatures showed pretty much the same warming in the two periods. But if greenhouse gases were responsible for a significant part of global warming, then the warming for both land and sea would be greater after the mid-1970s than in the early twentieth century. Whilst there was a rise in GHG levels in the early twentieth century, it was less than in the period from 1945 to 1975, when there was no warming, and much less than post-1975, when CO2 levels rose massively. There may be alternative explanations for the early twentieth-century warming and the subsequent 30-year lack of warming (when the post-WW2 economic boom led to a continual and accelerating rise in CO2 levels), but without such explanations being clear and robust, the attribution of post-1975 warming to rising GHG levels is undermined. It could be just unexplained natural variation.

However, as a preliminary to examining explanations of warming trends, as a beancounter, I believe it is first necessary to examine the robustness of the figures. In looking at temperature data in early 2015, one aspect that I found unsatisfactory with the NASA GISS temperature data was the zonal data. GISS usefully divide the data between 8 bands of latitude, which I have replicated as 7 year centred moving averages in Figure 5.

Figure 5 – NASA Gistemp zonal anomalies and the global anomaly

What is significant is that some of the zonal anomalies are far greater in magnitude than the global anomaly.

The most southerly is for 90S-64S, which is basically Antarctica, an area covering just under 5% of the globe. I found it odd that there should be a temperature anomaly for the region from the 1880s, when there were no weather stations recording on the frozen continent until the mid-1950s. The nearest is Base Orcadas, located at 60.8S 44.7W, about 350km north of 64S. I found that whilst the Base Orcadas temperature anomaly was extremely similar to the Antarctica zonal anomaly in the period until 1950, it was quite dissimilar in the period after.

Figure 6. Gistemp 64S-90S annual temperature anomaly compared to Base Orcadas GISS homogenised data.

NASA Gistemp has attempted to infill the missing temperature anomaly data by using the nearest data available. However, in this case, Base Orcadas appears to be climatically different from the average anomalies for Antarctica, and from the global average as well. The effect is to cancel out the impact of the massive warming in the Arctic on global average temperatures in the early twentieth century. A false assumption has effectively shrunk the early twentieth-century warming. The shrinkage will be small, but it undermines the claim that NASA GISS is the best estimate of a global temperature anomaly given the limited data available.

Rather than concluding that the whole exercise of making a valid comparison of the two warming periods since 1900 is useless, I will instead attempt to evaluate how much the lack of data impacts the anomalies. To this end, in a series of posts, I intend to look at the HADCRUT4 anomaly data. This will be a top-down approach, looking at monthly anomalies for 5° by 5° grid cells from 1850 to 2017, available from the Met Office Hadley Centre Observation Datasets. An advantage over previous analyses is the inclusion of anomalies for the 70% of the globe covered by ocean. The focus will be on the relative magnitudes of the early twentieth-century and post-1975 warming periods. At this point in time, I have no real idea of the conclusions that can be drawn from the analysis of the data.

Kevin Marshall


Ocean Impact on Temperature Data and Temperature Homogenization

Pierre Gosselin’s notrickszone looks at a new paper.

Temperature trends with reduced impact of ocean air temperature – Frank Lansner and Jens Olaf Pepke Pedersen.

The paper’s abstract.

Temperature data 1900–2010 from meteorological stations across the world have been analyzed and it has been found that all land areas generally have two different valid temperature trends. Coastal stations and hill stations facing ocean winds are normally more warm-trended than the valley stations that are sheltered from dominant oceans winds.

Thus, we found that in any area with variation in the topography, we can divide the stations into the more warm trended ocean air-affected stations, and the more cold-trended ocean air-sheltered stations. We find that the distinction between ocean air-affected and ocean air-sheltered stations can be used to identify the influence of the oceans on land surface. We can then use this knowledge as a tool to better study climate variability on the land surface without the moderating effects of the ocean.

We find a lack of warming in the ocean air sheltered temperature data – with less impact of ocean temperature trends – after 1950. The lack of warming in the ocean air sheltered temperature trends after 1950 should be considered when evaluating the climatic effects of changes in the Earth’s atmospheric trace amounts of greenhouse gasses as well as variations in solar conditions.

More generally, the paper’s authors are saying that over fairly short distances temperature stations will show different climatic trends. This has a profound implication for temperature homogenization. From Venema et al 2012.

The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities (Conrad and Pollak, 1950). In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations. 

Lansner and Pedersen are, by implication, demonstrating that the principal assumption on which homogenization is based (that nearby temperature stations are exposed to almost the same climatic signal) is not valid. As a result, data homogenization will not only eliminate biases in the temperature data (such as measurement biases, the impacts of station moves, and the urban heat island effect where it affects a minority of stations) but will also adjust out actual climatic trends. Where the climatic trends are localized and not replicated in surrounding areas, they will be eliminated by homogenization. What I found in early 2015 (following the examples of Paul Homewood, Euan Mearns and others) is that there are examples from all over the world where the data suggests that nearby temperature stations are exposed to different climatic signals. Data homogenization will, therefore, produce quite weird and unstable results. A number of posts were summarized in my post Defining “Temperature Homogenisation”. Paul Matthews at Cliscep corroborated this in his post of February 2017 “Instability of GHCN Adjustment Algorithm“.
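
The pairwise logic described in the Venema et al. quote can be caricatured in a few lines: form the candidate-minus-reference difference series and look for a step. The point at issue is that a genuine local climate shift produces the same signature as a station move. This is a toy illustration under my own assumptions, not any operational algorithm.

```python
import numpy as np

def step_in_difference(candidate, reference, threshold=0.5):
    """Scan the difference series for its largest mean-shift breakpoint;
    return (index, jump) if the jump exceeds threshold, else None."""
    diff = np.asarray(candidate, float) - np.asarray(reference, float)
    # crude single-breakpoint scan: split where the half-means differ most
    best = max(range(2, len(diff) - 1),
               key=lambda k: abs(diff[:k].mean() - diff[k:].mean()))
    jump = diff[best:].mean() - diff[:best].mean()
    return (best, jump) if abs(jump) > threshold else None
```

Homogenization would then adjust the candidate series by the detected jump, regardless of whether the break is an instrumental artefact or a real local trend change.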

During my attempts to understand the data, I also found that those who support AGW theory not only do not question their assumptions but also have strong shared beliefs in what the data ought to look like. One of the most significant in this context is a Climategate email sent on Mon, 12 Oct 2009 by Kevin Trenberth to Michael Mann of Hockey Stick fame, and copied to Phil Jones of the Hadley centre, Thomas Karl of NOAA, Gavin Schmidt of NASA GISS, plus others.

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate. (emphasis mine)

Homogenizing data a number of times, and evaluating the unstable results in the context of strongly-held beliefs, will bring the trends ever more into line with those beliefs. There is no requirement for some sort of conspiracy behind this emerging pattern of adjustments. Indeed, a conspiracy, in terms of a group knowing the truth and deliberately perverting the evidence, does not really apply. Another reason the conspiracy idea does not apply is the underlying purpose of homogenization: to allow a temperature station to be representative of the surrounding area. Without that, it would not be possible to compile an average for the surrounding area, from which the global average is constructed. It is this requirement, in the context of real climatic differences over relatively small areas, that I would suggest leads to the deletion of “erroneous” data and the infilling of estimated data elsewhere.

The gradual bringing of the temperature data sets into line with beliefs is most clearly shown in the NASA GISS temperature data adjustments. Climate4you produces regular updates of the adjustments since May 2008. Below is the March 2018 version.

The reduction of the 1910 to 1940 warming period (which is at odds with theory) and the increase in the post-1975 warming phase (which correlates with the rise in CO2) supports my contention of the influence of beliefs.

Kevin Marshall


Climate Alarmist Bob Ward’s poor analysis of Research Data

After Christopher Booker’s excellent new Report for the GWPF “Global Warming: A Case Study In Groupthink” was published on 20th February, Bob Ward (Policy and Communications Director at the Grantham Research Institute on Climate Change and the Environment at the LSE) typed a rebuttal article “Do male climate change ‘sceptics’ have a problem with women?“. Ward commenced the article with a highly misleading statement.

On 20 February, the Global Warming Policy Foundation launched a new pamphlet at the House of Lords, attacking the mainstream media for not giving more coverage to climate change ‘sceptics’.

I will leave it to the reader to judge for themselves how misleading the statement is by reading the report, or alternatively reading his summary at Capx.co.

At Cliscep (reproduced at WUWT), Jaime Jessop has looked into Ward’s distractive claims about the GWPF gender bias. This comment by Ward particularly caught my eye.

A tracking survey commissioned by the Department for Business, Energy and Industrial Strategy showed that, in March 2017, 7.6% answered “I don’t think there is such a thing as climate change” or “Climate change is caused entirely caused by natural processes”, when asked for their views. Among men the figure was 8.1%, while for women it was 7.1%.

I looked at the Tracking Survey. It is interesting that the Summary of Key Findings contains no mention of gender bias, nor of beliefs on climate change. It is only in the Wave 21 full dataset spreadsheet that you find the results of question 22.

Q22. Thinking about the causes of climate change, which, if any, of the following best describes your opinion?
[INVERT ORDER OF RESPONSES 1-5]
1. Climate change is entirely caused by natural processes
2. Climate change is mainly caused by natural processes
3. Climate change is partly caused by natural processes and partly caused by human activity
4. Climate change is mainly caused by human activity
5. Climate change is entirely caused by human activity
6. I don’t think there is such a thing as climate change.
7. Don’t know
8. No opinion

Note that the first option presented to the respondent is 5, then 4, then 3, then 2, then 1. There may, therefore, be an inbuilt bias towards overstating the support for climate change being attributed to human activity. But the data is clearly presented, so a quick pivot table was able to check Ward’s results.

The sample was 2180 – 1090 females and 1090 males. Adding the responses to “I don’t think there is such a thing as climate change” and “Climate change is entirely caused by natural processes”, I get 7.16% for females – (37+41)/1090 – and 8.17% for males – (46+43)/1090. Clearly, Bob Ward has failed to remember what he was taught in high school about rounding.

Another problem is that this is raw data. The opinion pollsters have taken time and care to adjust for various demographic factors by adding a weighting to each line. On this basis, Ward should have reported 6.7% for females, 7.6% for males and 7.1% overall.
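The raw percentages are easily checked. Below is a minimal Python sketch using the response counts quoted above; the weighted figures additionally require the per-respondent weights from the Wave 21 spreadsheet, so they are not reproduced here.

```python
# Raw Q22 response counts from the Wave 21 dataset, 1090 respondents per gender:
# (count answering "no such thing", count answering "entirely natural processes")
counts = {"female": (37, 41), "male": (46, 43)}
SAMPLE_PER_GENDER = 1090

for gender, (no_such_thing, entirely_natural) in counts.items():
    pct = 100 * (no_such_thing + entirely_natural) / SAMPLE_PER_GENDER
    print(f"{gender}: {pct:.2f}%")  # female: 7.16%, male: 8.17%
```

Rounded to one decimal place these are 7.2% and 8.2%, not the 7.1% and 8.1% Ward reported.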

More importantly, if males tend to be more sceptical of climate change than females, one would expect them also to be less alarmist. But the data says something different. Of the weighted responses, among those who opted for the opposite extreme – “Climate change is entirely caused by human activity” – 12.5% were female and 14.5% were male. Very fractionally, at the extreme, men are proportionately more alarmist than women, as well as more sceptical. More importantly, men are slightly more extreme in their opinions on climate change (for or against) than women.

The middle ground is the response to “Climate change is partly caused by natural processes and partly caused by human activity“. The weighted response was 44.5% female and 40.7% male, confirming that men are more extreme in their views than women.

There is a further finding that can be drawn. The projections by the IPCC for future unmitigated global warming assume that all, or the vast majority, of global warming since 1850 is human-caused. Only 41.6% of British women and 43.2% of British men agree with this assumption that justifies climate mitigation policies.

Below are my summaries. My results are easily replicated for those with an intermediate level of proficiency in Excel.

Learning Note

The most important lesson for understanding data is to analyse it from different perspectives, against different hypotheses. Bob Ward’s claim of a male gender bias towards climate scepticism in an opinion survey, upon a slightly broader analysis, becomes one where British males are slightly more extreme and forthright in their views than British females, whether for or against. This has parallels to my conclusion when looking at the 2013 US study The Role of Conspiracist Ideation and Worldviews in Predicting Rejection of Science – Stephan Lewandowsky, Gilles E. Gignac, Klaus Oberauer. There I found that, rather than conspiracist ideation being “associated with the rejection of all scientific propositions tested” as the paper claimed, the data strongly indicated that people with strong opinions on one subject, whether for or against, tend to have strong opinions on other subjects, whether for or against. As with any bias of perspective (ideological, religious, gender, race, social class, national, football team affiliation etc.), the way to counter bias is to concentrate on the data. Opinion polls are a poor starting point, but at least they may report on perspectives outside of one’s own immediate belief systems.

Kevin Marshall

“We’re going to miss the 2°C Warming target” study and IPCC AR5 WG3 Chapter 6

WUWT had a post on 22nd January

Study: we’re going to miss (and overshoot) the 2°C warming target

This comment (from a University of Southampton pre-publication news release) needs some explanation to relate it to IPCC AR5.

Through their projections, Dr Goodwin and Professor Williams advise that cumulative carbon emissions needed to remain below 195-205 PgC (from the start of 2017) to deliver a likely chance of meeting the 1.5°C warming target while a 2°C warming target requires emissions to remain below 395-455 PgC.

The PgC is petagrams of carbon. For small weights one normally uses grams, for larger weights kilograms, and for still larger weights tonnes (under the Imperial system: ounces, pounds and tons). One petagram (10^15 grams) is a billion tonnes, or one gigatonne.
Following the IPCC’s convention, GHG emissions are expressed in units of CO2, not carbon, with other GHGs expressed in CO2e. Since a tonne of carbon becomes 44.01/12.011 tonnes of CO2, 1 PgC = 3.664 GtCO2.
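As a sketch of the conversion (the 3.664 factor is simply the molar-mass ratio of CO2 to carbon):

```python
# Convert a carbon budget in PgC to GtCO2 via the molar-mass ratio CO2 : C.
C_TO_CO2 = 44.01 / 12.011  # ≈ 3.664

def pgc_to_gtco2(pgc: float) -> float:
    """1 PgC = 1 GtC, so multiply by the mass ratio to get GtCO2."""
    return pgc * C_TO_CO2

# Goodwin-Williams budgets from the start of 2017:
print(round(pgc_to_gtco2(195)), round(pgc_to_gtco2(205)))  # 715 751  (1.5°C)
print(round(pgc_to_gtco2(395)), round(pgc_to_gtco2(455)))  # 1447 1667 (2°C)
```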

So the emissions from the start of 2017 are 715-750 GtCO2e for 1.5°C of warming and 1447-1667 GtCO2e for 2°C of warming. To make these comparable to IPCC AR5 (specifically to table 6.3 from IPCC AR5 WG3 Chapter 6, p431), one needs to adjust for two things: the IPCC’s projections start five years earlier, and they cover CO2 emissions only, which are about 75% of GHG emissions.

The IPCC’s projections of CO2 emissions are 630-1180 GtCO2 for 1.5-1.7°C of warming and 960-1550 GtCO2 for 1.7-2.1°C of warming.

With GHG emissions running at roughly 50 GtCO2e a year and CO2 emissions at 40 GtCO2 a year, the IPCC’s figures, updated to the start of 2017 and expressed in GtCO2e, become 570-1300 GtCO2e for 1.5-1.7°C of warming and 1010-1800 GtCO2e for 1.7-2.1°C of warming.
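The adjustment can be sketched as follows, assuming 40 GtCO2 a year of CO2 emissions over the five intervening years and CO2 at about 75% of total GHG emissions:

```python
# Update an IPCC AR5 CO2-only budget (from 2012) to a total-GHG budget
# running from the start of 2017.
def update_budget(lo, hi, years=5, co2_per_year=40, co2_share=0.75):
    """Subtract the intervening years' CO2 emissions, then scale up to all GHGs."""
    to_ghg = lambda x: (x - years * co2_per_year) / co2_share
    return round(to_ghg(lo), -1), round(to_ghg(hi), -1)

print(update_budget(630, 1180))  # 1.5-1.7°C scenario: roughly 570-1300 GtCO2e
print(update_budget(960, 1550))  # 1.7-2.1°C scenario: roughly 1010-1800 GtCO2e
```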

Taking the mid-points of the IPCC’s and the Goodwin-Williams figures, the new projections are saying that at current emissions levels, 1.5°C will be breached four years earlier, and 2°C will be breached one year later. But the mid-points are 1.6°C and 1.9°C, so it makes no real difference whatsoever. The Goodwin-Williams figures just narrow the ranges and use different units of measure.

But there is still a major problem. Consider this mega table 6.3 reproduced, at lower quality, below.

Notice that Column A is for CO2-equivalent concentration in 2100 (ppm CO2eq). Current CO2 levels are around 405 ppm, but GHG levels are around 450 ppm CO2eq. Then notice columns G and H, with the joint heading Concentration (ppm): Column G is for CO2 levels in 2100 and Column H is for CO2-equivalent levels. Note also that for the first few rows of data Column H is greater than Column A, implying that sometime this century peak CO2-equivalent levels will be higher than at the end of the century, and that (subject to the response period of the climate system to changes in greenhouse gas levels, and to the models being correct) average global temperatures could exceed the projected 2100 levels. By how much, though?

I took a “magic equation” from the Skeptical Science blog and (after rescaling it so that a doubling of CO2 gives exactly 3°C of warming) assumed that all changes in CO2 levels instantly translate into average temperature changes. Further, I assumed that other greenhouse gases are irrelevant to the warming calculation, and that peak CO2 concentrations can be calculated from peak GHG, 2100 GHG, and 2100 CO2 concentrations. I derived the following table.
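A minimal Python sketch of that calculation, assuming a pre-industrial concentration of 280 ppm. The “magic equation” is of the form ΔT = λ·5.35·ln(C/C0); rescaled so a doubling gives exactly 3°C, it becomes ΔT = 3·log2(C/C0):

```python
import math

C0 = 280.0  # assumed pre-industrial CO2 concentration, ppm

def equilibrium_warming(ppm: float) -> float:
    """Warming in °C for a given CO2(eq) concentration, with 3°C per doubling."""
    return 3 * math.log2(ppm / C0)

print(round(equilibrium_warming(405), 2))  # current CO2 level: ~1.6°C
print(round(equilibrium_warming(450), 2))  # current GHG level in CO2eq: ~2.05°C
```

Feeding the peak CO2eq concentrations from Columns A and H of Table 6.3 into the same function gives the peak implied temperatures discussed below.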

The 1.5°C warming scenario is actually 1.5-1.7°C warming in 2100, with a mid-point of 1.6°C. The peak implied temperatures are about 2°C.

The 2°C warming scenario is actually 1.7-2.1°C warming in 2100, with a mid-point of 1.9°C. The peak implied temperatures are about 2.3°C, with 2.0°C of warming in 2100 implying about 2.4°C peak temperature rise.

So when the IPCC talk about constraining temperature rise, it is about projected temperature rise in 2100, not about stopping global average temperature rise breaching 1.5°C or 2°C barriers.

Now consider the following statement from the University of Southampton pre-publication news release, emphasis mine.

“Immediate action is required to develop a carbon-neutral or carbon-negative future or, alternatively, prepare adaptation strategies for the effects of a warmer climate,” said Dr Goodwin, Lecturer in Oceanography and Climate at Southampton. “Our latest research uses a combination of a model and historical data to constrain estimates of how long we have until 1.5°C or 2°C warming occurs. We’ve narrowed the uncertainty in surface warming projections by generating thousands of climate simulations that each closely match observational records for nine key climate metrics, including warming and ocean heat content.”

Professor Williams, Chair in Ocean Sciences at Liverpool, added: “This study is important by providing a narrower window of how much carbon we may emit before reaching 1.5°C or 2°C warming. There is a real need to take action now in developing and adopting the new technologies to move to a more carbon-efficient or carbon-neutral future as we only have a limited window before reaching these warming targets.” This work is particularly timely given the work this year of the Intergovernmental Panel on Climate Change (IPCC) to develop a Special Report on the Impacts of global warming of 1.5°C above pre-industrial levels.

Summary

The basic difference between IPCC AR5 Chapter 6 Table 6.3 and the new paper is the misleading message that various emissions policy scenarios will prevent warming breaching either 1.5°C or 2°C, when the IPCC scenarios are clear that these are 2100 warming levels. The IPCC scenarios imply that before 2100 warming could peak at around 1.75°C or 2.4°C respectively. My calculations can be validated by assuming that (a) a doubling of CO2 gives 3°C of warming, (b) other GHGs are irrelevant, and (c) there is no significant lag between the rise in CO2 level and the rise in global average temperature.

Kevin Marshall

Is China leading the way on climate mitigation?

At The Conversation is an article on China’s lead in renewable energy: “China wants to dominate the world’s green energy markets – here’s why”, by University of Sheffield academic Chris G Pope. The article starts:-

If there is to be an effective response to climate change, it will probably emanate from China. The geopolitical motivations are clear. Renewable energy is increasingly inevitable, and those that dominate the markets in these new technologies will likely have the most influence over the development patterns of the future. As other major powers find themselves in climate denial or atrophy, China may well boost its power and status by becoming the global energy leader of tomorrow.

An effective response ought to be put into the global context. At the end of October, UNEP produced its Emissions Gap Report 2017, just in time for the COP23 meeting in Bonn. The key figure on the aimed-for constraint of warming to 1.5°C to 2°C above pre-industrial levels – an “effective policy response” – is E5.2, reproduced below.

An “effective response” by any one country means at least reducing its emissions substantially by 2030 compared with now, at the start of 2018. To be a world leader in the response to climate change requires reducing emissions over the next 12 years by more than the required global average of 20-30%.

Climate Action Tracker – which, unlike myself, strongly promotes climate mitigation – rates China’s overall policies as Highly Insufficient in terms of limiting warming to 1.5°C to 2°C. The reason is that, on the basis of current policies, it forecasts that China’s emissions will increase in the next few years, instead of rapidly decreasing.

So why has Chris Pope got China’s policy so radically wrong? After all, I accept the following statement.

Today, five of the world’s six top solar-module manufacturers, five of the largest wind turbine manufacturers, and six of the ten major car manufacturers committed to electrification are all Chinese-owned. Meanwhile, China is dominant in the lithium sector – think: batteries, electric vehicles and so on – and a global leader in smart grid investment and other renewable energy technologies.

Reducing net emissions means not just having lots of wind turbines, hydro schemes, solar farms and electric cars. It means those renewable forms of energy replacing CO2-emitting energy sources. The problem is that renewables are adding to total energy production, alongside fossil fuels. The principal source of China’s energy for electricity and heating is coal. The Global Coal Plant Tracker at endcoal.org has some useful statistics. In terms of coal-fired power stations, China now has 922 GW of capacity operating (47% of the global total), with a further 153 GW “Announced + Pre-permit + Permitted” (28%) and 147 GW under construction (56%). Further, from 2006 to mid-2017, China’s newly operating coal plants had a capacity of 667 GW, fully 70% of the global total. Endcoal.org estimates that coal-fired power stations account for 72% of global GHG emissions from the energy sector, with the energy sector contributing 41% of global GHG emissions. With China’s coal-fired power stations accounting for 47% of the global total, and assuming similar capacity utilization, China’s coal-fired power stations account for 13-14% of global GHG emissions, or around 7 GtCO2e of around 52 GtCO2e. It does not stop there. Many homes in China use coal for domestic heating; there is a massive coal-to-liquids program (which may not be currently operating due to the low oil price); manufacturers (such as metal refiners) burn coal directly; and recently there have been reports of producing gas from coal. So why would China pursue a massive renewables program?
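The 13-14% figure can be reproduced from the shares quoted above – a back-of-envelope sketch that assumes similar capacity utilization of coal plants worldwide:

```python
# Rough share of global GHG emissions from China's coal-fired power stations,
# using the endcoal.org figures quoted in the text.
china_coal_share = 0.47    # China's share of global coal-power capacity
coal_of_energy   = 0.72    # coal power's share of energy-sector GHG emissions
energy_of_global = 0.41    # energy sector's share of global GHG emissions
GLOBAL_GHG       = 52      # total global emissions, GtCO2e per year

share = china_coal_share * coal_of_energy * energy_of_global
print(f"{share:.1%} of global GHG emissions")      # ≈ 13.9%
print(f"≈ {share * GLOBAL_GHG:.1f} GtCO2e/year")   # ≈ 7.2 GtCO2e
```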

Possible reasons for the Chinese “pro-climate” policies

First is strategic energy security. I believe that China does not want to be dependent on world oil price fluctuations, which could harm economic growth. China therefore builds massive hydro schemes, despite them being damaging to the environment and sometimes displacing hundreds of thousands of people. China also pursues coal-to-liquids programs, alongside promoting solar and wind farms. Although this duplicates effort, it means that if oil prices suffer another hike, China is more immune from the impact than it would otherwise be.

Second is an over-riding policy of fast increases in perceived living standards. For over 20 years China managed average growth rates of up to 10% per annum, increasing average incomes by up to eight times and moving hundreds of millions of people out of grinding poverty. Now that economic growth is slowing (to rates still fast by Western standards), the raising of perceived living standards is being achieved by other means. One such method is to reduce particulate pollution, particularly in the cities. The recent heavy-handed banning of coal burning in cities (with people freezing this winter) is one example. Another is the push for electric cars, with the electricity mostly coming from distant coal-fired power stations. In terms of reducing CO2 emissions, electric cars do not make sense, but they do make sense in densely-populated areas with an emerging middle class wanting independent means of travel.

Third is the push to dominate areas of manufacturing. With many countries pursuing hopeless renewables policies, the market for wind turbines and solar panels is set to increase. The “rare earths” required for wind turbine magnets, such as neodymium, are produced in large quantities in China, for instance in highly polluted Baotou. With lithium (required for batteries), China may currently be only the world’s third-largest producer – some way behind Australia and Chile – but its reserves are the world’s second largest and sufficient on their own to supply current global demand for decades. With raw material supplies and low, secure energy costs from coal, along with still relatively low labour costs, China is well placed to dominate these higher value-added manufacturing areas.

Concluding Comments

The wider evidence shows that an effective response to climate change is not emanating from China. The current energy policies are dominated, and will continue to be dominated, by coal. This will far outweigh any apparent reductions in emissions from the manufacturing of renewables. Rather, the growth of renewables should be viewed in the context of promoting the continued rapid and secure increase in living standards for the Chinese people, whether in per capita income or in standards of the local environment.

Kevin Marshall

 

Climate Public Nuisance as a justification for Climate Mitigation

Ron Clutz, at his Science Matters blog, has a penchant for producing some interesting articles that open up new areas outside of the mainstream of either climate alarmism or climate scepticism, some of which may challenge my own perspectives. With Critical climate intelligence for jurists and others, Ron has done this again.
There is a lot of ground covered here, and I am sure that it just touches on a few of the many issues. The first area covered is the tort of Public Nuisance, explained by legal scholar Richard O. Faulk. This touches upon areas that I have dealt with recently, particularly this section. (bold mine)

Generally in tort cases involving public nuisance, there is a term, which we all know from negligence cases and other torts, called proximate causation. In proximate causation, there is a “but for” test: but for the defendant’s activity, would the injury have happened? Can we say that climate change would not have happened if these power plants, these isolated five power plants, were not emitting greenhouse gases? If they completely stopped, would we still have global warming? If you shut them down completely and have them completely dismantled, would we still have global warming? 

I think Faulk then goes off track when he states:

Is it really their emissions that are causing this, or is it the other billions and billions of things on the planet that caused global warming—such as volcanoes? Such as gases being naturally released through earth actions, through off-gassing?

Is it the refinery down in Texas instead? Is it the elephant on the grasses in Africa? Is it my cows on my ranch in Texas who emit methane every day from their digestive systems? How can we characterize the public utilities’ actions as “but for” causes or “substantial contributions?” So far, the courts haven’t even reached these issues on the merits.

A necessary (but not sufficient) condition for adverse human-caused global warming to be abated is that most, if not all, human GHG emissions must be stopped. Unlike with smoke particulates, where elimination in the local area makes a big difference, GHGs are well-mixed. So shutting down a coal-fired power station in Oak Creek will have the same impact on future climate change for the people of South-East Wisconsin as shutting down a similar coal-fired power station in Boxburg, Ningxia, Mpumalanga, Kolkata or Porto do Pecém. That impact is in the range of zero, or insignificantly different from zero, depending on your perspective on CAGW.

Proximate causation is a term that I should have used to counter the Minnesota valve-turners’ climate necessity defense. As I noted in that post, to reduce global emissions by the amount desired by the UNIPCC – constraining future emissions to well below 1000 GtCO2e – requires not only reducing the demand for fossil fuels and other sources of GHG emissions, but also requires many countries dependent on the supply of fossil fuels for a large part of their national incomes to leave at least 75% of known fossil fuel reserves in the ground.

An example of proximate causation is to be found in the post of 27 December, Judge delivers crushing blow to Washington Clean Air Rule. Governor Inslee called the legislation “the nation’s first Clean Air Rule, to cap and reduce carbon pollution.” But the legislation will only reduce so-called carbon pollution if the reduction is greater than the net increase in other areas of the world. That will not happen, as neither demand nor supply is covered by global agreements with sufficient aggregate impact.

Kevin Marshall

Thomas Fuller on polar-bear-gate at Cliscep

This is an extended version of a comment made at Thomas Fuller’s cliscep article Okay, just one more post on polar-bear-gate… I promise…

There are three things highlighted in the post and the comments that illustrate the Polar Bear smear paper as being a rich resource towards understanding the worst of climate alarmism.

First is from Alan Kendall @ 28 Dec 17 at 9:35 am

But what Harvey et al. ignores is that Susan Crockford meticulously quotes from the “approved canon of polar bear research” and exhorts her readers to read it (making an offer to provide copies of papers difficult to obtain). She provides an entree into that canon- an entree obviously used by many and probably to the fury of polar bear “experts”.

This is spot on about Susan Crockford and is, in my opinion, what proper academics should be aiming at. To assess an area where widely different perspectives are possible, I was taught that it is necessary to read and evaluate the original documents. Climate alarmists in general, and this paper in particular, evaluate in relation to collective opinion, as opposed to more objective criteria. In the paper, “science” is about support for a partly fictional consensus, and “denial” is seeking to undermine that fiction. On polar bears this is clearly stated in relation to the two groups of blogs.

We found a clear separation between the 45 science-based blogs and the 45 science-denier blogs. The two groups took diametrically opposite positions on the “scientific uncertainty” frame—specifically regarding the threats posed by AGW to polar bears and their Arctic-ice habitat. Scientific blogs provided convincing evidence that AGW poses a threat to both, whereas most denier blogs did not.

A key element is to frame statements in terms of polar extremes.

Second is the extremely selective use of data (or selective analysis methods) to enable the desired conclusion to be reached. Thomas Fuller has clearly pointed this out in the article, and restated it in the comments, with respect to WUWT:

Harvey and his 13 co-authors state that WUWT overwhelmingly links to Crockford. I have shown that this is not the case.

Selective use of data (or selective analysis methods) is common in climate alarmism. For instance:

  • The original MBH 98 Hockey-Stick graph used out-of-date temperature series, or tree-ring proxies such as at Gaspe in Canada, that were not replicated by later samples.
  • Other temperature reconstructions. Remember Keith Briffa’s Yamal reconstruction, which relied on one tree for the post-1990 reconstructions? (see here and here)
  • Lewandowsky et al “Moon Hoax” paper. Just 10 out of 1145 survey respondents supported the “NASA faked the Moon Landings” conspiracy theory. Of these just 2 dogmatically rejected “climate”. These two faked/scam/rogue respondents 860 & 889 supported every conspiracy theory, underpinning many of the correlations.
  • Smoothing out the pause in warming in Risbey, Lewandowsky et al 2014 “Well-estimated global surface warming in climate projections selected for ENSO phase”. In The Lewandowsky Smooth, I replicated the key features of the temperature graph in Excel, showing how no warming for a decade in Hadcrut4 was made to appear as if there was hardly a cessation of warming.

Third, is to frame the argument in terms of polar extremes. Richard S J Tol @ 28 Dec 17 at 7:13 am

And somehow the information in those 83 posts was turned into a short sequence of zeros and ones.

Not only are there, on many issues, a vast number of possible intermediate positions (the middle ground); there are other dimensions as well. One is the strength of evidential support for a particular perspective: there could be little or no persuasive evidence. Another is whether there is support for alternative perspectives. For instance, although sea ice data is lacking for the early twentieth-century warming, average temperature data is available for the Arctic. NASA Gistemp (despite its clear biases) has estimates for 64N-90N.

The temperature data seems to indicate clearly that it is unlikely that all of the decline in Arctic sea ice from 1979 can be attributed to AGW. From the 1880s to 1940 there was Arctic warming of a similar magnitude to that from 1979 to 2010, with cooling in between. Yet the rate of increase in GHG levels was greater in 1975-2010 than in 1945-1975, which was in turn greater than in the decades before.

Kevin Marshall