Does data coverage impact the HADCRUT4 and NASA GISS Temperature Anomalies?

Introduction

This post started with the title “HADCRUT4 and NASA GISS Temperature Anomalies – a Comparison by Latitude”.  After deriving a global temperature anomaly from the HADCRUT4 gridded data, I was intending to compare the results with GISS’s anomalies by 8 latitude zones. However, this opened up an intriguing issue. Are global temperature anomalies impacted by a relative lack of data in earlier periods? This leads to a further issue of whether infilling of the data can be meaningful, and hence be considered to “improve” the global anomaly calculation.

A Global Temperature Anomaly from HADCRUT4 Gridded Data

In a previous post, I looked at the relative magnitudes of early twentieth century and post-1975 warming episodes. In the Hadley datasets, there is a clear divergence between the land and sea temperature data trends post-1980, a feature that is not present in the early warming episode. This is reproduced below as Figure 1.

Figure 1 : Graph of Hadley Centre 7 year moving average temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)

The question that needs to be answered is whether the anomalous post-1975 warming on the land is due to real divergence, or due to issues in the estimation of global average temperature anomaly.

In another post – The magnitude of Early Twentieth Century Warming relative to Post-1975 Warming – I looked at the NASA Gistemp data, which is usefully broken down into 8 Latitude Zones. A summary graph is shown in Figure 2.

Figure 2 : NASA Gistemp zonal anomalies and the global anomaly

This is more detail than the HADCRUT4 data, which is presented as just three zones: the Tropics, plus the Northern and Southern Hemispheres. However, the Hadley Centre, on their HADCRUT4 Data: download page, have, under HadCRUT4 Gridded data: additional fields, a file HadCRUT.4.6.0.0.median_ascii.zip. This contains monthly anomalies for 5° by 5° grid cells from 1850 to 2017. There are 36 zones of latitude and 72 zones of longitude. Over 2016 months, there are over 5.22 million grid cells, but only 2.51 million (48%) have data. From this data, I have constructed a global temperature anomaly. The major issue in the calculation is that the grid cells are of different areas. A grid cell nearest to the equator at 0° to 5° has about 23 times the area of a grid cell adjacent to the poles at 85° to 90°. I used the appropriate area weighting for each band of latitude.
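For anyone wanting to replicate the area weighting, the sketch below shows the calculation for one month of the gridded data. It assumes the ASCII file has already been parsed into a 36 x 72 array of anomalies with NaN for empty cells; the array name and the parsing step are my own, not part of the Hadley download.

    import numpy as np

    def global_mean_anomaly(month_grid):
        """Area-weighted mean of one month's 36 x 72 grid of anomalies (NaN = no data).

        month_grid is a hypothetical numpy array with latitude bands running from
        90S-85S (row 0) to 85N-90N (row 35) and 72 longitude cells per band.
        """
        lat_edges = np.arange(-90, 91, 5)
        # The area of a 5-degree band is proportional to the difference of the sines
        # of its edge latitudes; the 0-5 degree band is about 23 times the 85-90 band.
        band_weight = np.sin(np.radians(lat_edges[1:])) - np.sin(np.radians(lat_edges[:-1]))
        weights = np.repeat(band_weight[:, None], 72, axis=1)
        mask = ~np.isnan(month_grid)
        if not mask.any():
            return np.nan
        # Weighted average over the cells that actually have data.
        return np.nansum(month_grid * weights) / weights[mask].sum()

The monthly global means can then be averaged into annual figures and compared with the published HADCRUT4 series, which is the reconciliation shown in Figure 3.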

The question is whether I have calculated a global anomaly similar to the Hadley Centre’s. Figure 3 is a reconciliation between the published global anomaly mean (available from here) and my own.

Figure 3 : Reconciliation between HADCRUT4 published mean and calculated weighted average mean from the Gridded Data

Prior to 1910, my calculations are slightly below the HADCRUT4 published data. The biggest differences are in 1956 and 1915. Overall the differences are insignificant and do not affect the analysis.

I split the HADCRUT4 temperature data into eight zones of latitude on a similar basis to NASA Gistemp. Figure 4 presents the results on the same basis as Figure 2.

Figure 4 : Zonal surface temperature anomalies and the global anomaly calculated using the HADCRUT4 gridded data.

Visually, there are a number of differences between the Gistemp and HADCRUT4-derived zonal trends.

A potential problem with the global average calculation

The major reason for differences between HADCRUT4 & Gistemp is that the latter has infilled estimated data into areas where there is no data. Could this be a problem?

In Figure 5, I have shown the build-up in global coverage. That is the percentage of 5° by 5° grid cells with an anomaly in the monthly data.

Figure 5 : Change in the percentage coverage of each zone in the HADCRUT4 gridded data.

Figure 5 shows a build-up in data coverage during the late nineteenth and early twentieth centuries. The World Wars (1914-1918 & 1939-1945) had the biggest impact on Southern Hemisphere data collection. This is unsurprising when one considers that the wars were mostly fought in the Northern Hemisphere, and European powers withdrew resources from their far-flung Empires to protect the mother countries. The only zones with significantly less than 90% grid coverage in the post-1975 warming period are the Arctic and the region below 45S. That is around 19% of the global area.

Finally, comparing corresponding zones in the Northern and Southern Hemispheres, the tropics seem to have comparable coverage, whilst for the polar, temperate and mid-latitude areas the Northern Hemisphere seems to have better coverage after 1910.
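The coverage percentages behind Figure 5 can be derived from the same gridded array. Below is a minimal sketch; the zone boundaries are the ones I have assumed in order to match the Gistemp zones, so treat them as illustrative.

    import numpy as np

    # anoms: hypothetical array of shape (n_months, 36, 72), as in the earlier sketch,
    # with NaN marking grid cells that have no anomaly for that month.
    zone_edges = [-90, -64, -44, -24, 0, 24, 44, 64, 90]   # assumed zone boundaries
    lats = np.arange(-87.5, 90, 5)                          # band mid-points

    def zone_coverage(anoms, year_index):
        """Percentage of cell-months with data in each zone for one year."""
        year = anoms[year_index * 12:(year_index + 1) * 12]
        coverage = {}
        for lo, hi in zip(zone_edges[:-1], zone_edges[1:]):
            bands = (lats > lo) & (lats < hi)       # latitude bands falling in this zone
            cells = year[:, bands, :]               # 12 months x bands x 72 longitudes
            coverage[(lo, hi)] = 100 * np.count_nonzero(~np.isnan(cells)) / cells.size
        return coverage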

This variation in coverage can potentially lead to wide discrepancies between any calculated temperature anomalies and a theoretical anomaly based upon data in all the 5° by 5° grid cells. As an extreme example, in my own calculation, if just one of the 72 grid cells in a band of latitude had a figure, then an “average” would have been calculated for that month for a band running right around the world and 555km (345 miles) from North to South. In the annual figures by zone, it only requires one of the 72 grid cells, in one of the months, in one of the bands of latitude to have data to calculate an annual anomaly. For the tropics or the polar areas, that is just one in 4320 data points (5 bands x 72 cells x 12 months) to create an anomaly. This issue will impact the early twentieth-century warming episode far more than the post-1975 one. Although I would expect the Hadley Centre to have done some data cleanup of the more egregious examples in their calculation, the lack of data in grid cells could potentially have quite random impacts, biasing the global temperature anomaly trends to an unknown, but significant extent. How this could impact the results can be appreciated from an example using NASA GISS Global Maps.
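To make the point concrete, here is a sketch of an annual zonal anomaly with an optional minimum-coverage rule. With min_cells=1, a single reporting cell-month out of the 4320 possible is enough to produce a zonal figure for the year, which is effectively what my calculation allows; the function and threshold are illustrative, not what the Hadley Centre does.

    import numpy as np

    def annual_zone_anomaly(anoms, year_index, bands, min_cells=1):
        """Annual anomaly for one zone from the hypothetical (n_months, 36, 72) array.

        bands is a boolean mask of the latitude bands in the zone. Returns NaN unless
        at least min_cells of the 12 x n_bands x 72 cell-months contain data.
        """
        year = anoms[year_index * 12:(year_index + 1) * 12, bands, :]
        if np.count_nonzero(~np.isnan(year)) < min_cells:
            return np.nan
        return np.nanmean(year)   # unweighted within the zone, for simplicity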

NASA GISS Global Maps Temperature Trends Example

NASA GISS Global Maps from GHCN v3 Data provide maps with the calculated change in average temperatures. I have run the maps to compare annual data for 1940 with a baseline of 1881-1910, capturing much of the early twentieth-century warming. I have run the maps at both 1200km and 250km smoothing radii.

Figure 6 : NASA GISS Global anomaly Map and average anomaly by Latitude comparing 1940 with a baseline of 1881 to 1910 and a 1200km smoothing radius

Figure 7 : NASA GISS Global anomaly Map and average anomaly by Latitude comparing 1940 with a baseline of 1881 to 1910 and a 250km smoothing radius. 

With respect to the maps in figures 6 & 7

  • There is no apparent difference in the sea data between the 1200km and 250km smoothing radii, except in the polar regions, where the former has more coverage. The differences lie in the land areas.
  • The grey areas with insufficient data all apply to the land or ocean areas in polar regions.
  • Figure 6, with 1200km smoothing, has most of the land infilled, whilst the 250km smoothing shows the lack of data coverage for much of South America, Africa, the Middle East, South-East Asia and Greenland.

Even with these land-based differences in coverage, it is clear from either map that at any latitude there are huge variations in calculated average temperature change. For instance, take 40N. This line of latitude passes north of San Francisco on the US West Coast and clips Philadelphia on the East Coast. On the other side of the Atlantic, Madrid, Ankara and Beijing are at about 40N. There are significant points on the line of latitude with estimated warming greater than 1°C (e.g. California), whilst at the same time in Eastern Europe cooling may have exceeded 1°C in the period. More extreme is 60N (southern Alaska, Stockholm, St Petersburg), where the difference in temperature change along the line of latitude is over 3°C. This compares to a calculated global rise of 0.40°C.

This lack of data may have contributed (along with a faulty algorithm) to the differences in the Zonal Mean charts by latitude. The 1200km smoothing radius chart bears little relation to the 250km one. For instance:-

  • 1200km shows 1.5°C warming at 45S, 250km about zero. 45S cuts through the South Island of New Zealand.
  • From the equator to 45N, 1200km shows a rise from 0.5°C to over 2.0°C, whilst 250km shows a drop from less than 0.5°C to near zero, then a rise to 0.2°C. At around 45N lie Ottawa, Maine, Bordeaux, Belgrade, Crimea and the northernmost point of Japan.

The differences in the NASA GISS Maps, in a period when available data covered only around half of the 2592 5° by 5° grid cells, indicate huge differences in trends between different areas. As a consequence, trying to interpolate warming trends from one area to adjacent areas appears to give quite different results in terms of trends by latitude.

Conclusions and Further Questions

The issue I originally focussed upon was the size of the early twentieth-century warming relative to the post-1975 warming. The greater amount of warming in the later period seemed to be due to the greater warming on land, which covers just 30% of the total global area. The sea surface warming phases appear to be pretty much the same.

The issue that I then focussed upon was a data issue. The early twentieth century had much less data coverage than the period after 1975. Further, the Southern Hemisphere had worse data coverage than the Northern Hemisphere, except in the Tropics. This means that in my calculation of a global temperature anomaly from the HADCRUT4 gridded data (which in aggregate was very similar to the published HADCRUT4 anomaly) the averages by latitude will not be comparing like with like in the two warming periods. In particular, in the early twentieth century, a calculation by latitude will not average right the way around the globe, but only over a limited selection of bands of longitude. On average this was about half, but there are massive variations. This would be alright if the changes in anomalies were roughly the same over time by latitude. But an examination of NASA GISS global maps for a period covering the early twentieth-century warming phase reveals that trends in anomalies at different places on the same latitude are quite different. This implies that there could be large, but unknown, biases in the data.

I do not believe the analysis ends here. There are a number of areas that I (or others) can try to explore.

  1. Does the NASA GISS infilling of the data get us closer to, or further away from, what a global temperature anomaly would look like with full data coverage? My guess, based on the extreme example of Antarctica trends (discussed here), is that the infilling will move away from the more perfect trend. The data could show otherwise.
  2. Are the changes in data coverage on land more significant than the global average or less? Looking at CRUTEM4 data could resolve this question.
  3. Would anomalies based upon similar grid coverage after 1900 give different relative trend patterns to the published ones based on dissimilar grid coverage?

Whether I get the time to analyze these is another issue.

Finally, the problem of trends varying considerably and quite randomly across the globe is the same issue that I found with land data homogenisation, discussed here and here. To derive a temperature anomaly for a grid cell, it is necessary to make the data homogeneous. In standard homogenisation techniques, it is assumed that the underlying trends in an area are pretty much the same. Therefore, any differences in trend between adjacent temperature stations will be treated as a result of data imperfections. I found numerous examples where there were likely real differences in trend between adjacent temperature stations. Homogenisation will, therefore, eliminate real but local climatic trends. Averaging incomplete global data, where the missing areas could contain unknown regional trends, may cause biases at a global scale.

Kevin Marshall

 

 

HADCRUT4, CRUTEM4 and HADSST3 Compared

In the previous post, I compared early twentieth-century warming with the post-1975 warming in the Berkeley Earth Global temperature anomaly. From a visual inspection of the graphs, I determined that the greater warming in the later period is due to more land-based warming, as the warming in the oceans (70% of the global area) was very much the same. The Berkeley Earth data ends in 2013, so does not include the impact of the strong El Niño event in the last three years.

The Global average temperature series page of the Met Office Hadley Centre Observation Datasets has the average annual temperature anomalies for CRUTEM4 (land-surface air temperature), HADSST3 (sea-surface temperature) and HADCRUT4 (combined). From these datasets, I have derived the graph in Figure 1.

Figure 1 : Graph of Hadley Centre annual temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)

Comparing the early twentieth century with 1975-2010:

  • Land warming is considerably greater in the later period.
  • Combined land and sea warming is slightly more in the later period.
  • Sea surface warming is slightly less in the later period.
  • In the early period, the surface anomalies for land and sea have very similar trends, whilst in the later period, the warming of the land is considerably greater than the sea surface warming.

The impact is more clearly shown with 7 year centred moving average figures in Figure 2.

Figure 2 : Graph of Hadley Centre 7 year moving average temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)
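For anyone reproducing these charts, the smoothing is simply a 7 year centred moving average of the annual anomalies. A minimal sketch, with a hypothetical file name and column layout for the downloaded Hadley series:

    import pandas as pd

    # Hypothetical local copy of the annual anomalies from the Hadley Centre page,
    # with columns Year, CRUTEM4, HADSST3 and HADCRUT4 (name and layout assumed).
    df = pd.read_csv("hadley_annual_anomalies.csv", index_col="Year")

    # 7 year centred moving average: each value is the mean of the year itself and
    # the three years either side, so three years are lost at each end of the series.
    smoothed = df.rolling(window=7, center=True).mean()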

This is not just a feature of the HADCRUT dataset. NOAA Global Surface Temperature Anomalies for land, ocean and combined show similar patterns. Figure 3 is on the same basis as Figure 2.

Figure 3 : Graph of NOAA 7 year moving average temperature anomalies for Land, Ocean and Combined.

The major common feature is that the estimated land temperature anomalies have shown a much greater warming trend than the sea surface anomalies since 1980, but no such divergence existed in the early twentieth century warming period. Given that the temperature data sets are far from complete in terms of coverage, and the data is of variable quality, is this divergence a reflection of the true average temperature anomalies that would be derived from far more complete and accurate data? There are a number of alternative possibilities that need to be investigated to help determine (using beancounter terminology) whether the estimates are a true and fair reflection of the perspective that more perfect data and techniques would provide. My list might be far from exhaustive.

  1. The sea-surface temperature set understates the post-1975 warming trend due to biases within the data set.
  2. The spatial distribution of data changed considerably over time. For instance, in recent decades more data has become available from the Arctic, a region with the largest temperature increases in both the early twentieth century and post-1975.
  3. Land data homogenization techniques may have suppressed differences in climate trends where data is sparser. Alternatively, if relative differences in climatic trends between nearby locations increase over time, then the further back in time homogenization goes, the more accentuated these differences and therefore the greater the suppression of genuine climatic differences. These aspects I discussed here and here.
  4. There is deliberate manipulation of the data to exaggerate recent warming. Having looked at numerous examples three years ago, this is a factor that I do not believe has had any significant impact. However, simply believing something not to be the case, even with examples, does not mean that it is not there.
  5. Strong beliefs about how the data should look have, over time and through multiple data adjustments, created biases within the land temperature anomalies.

What I do believe is that an expert opinion as to whether this divergence between the land and sea surface anomalies is a “true and fair view” of the actual state of affairs can only be reached by a detailed examination of the data. Jumping to conclusions – which is evident from many people across the broad spectrum of opinions in the catastrophic anthropogenic global warming debate – will fall short of the most rounded opinion that can be gleaned from the data.

Kevin Marshall

 

Ocean Impact on Temperature Data and Temperature Homogenization

Pierre Gosselin’s notrickszone looks at a new paper.

Temperature trends with reduced impact of ocean air temperature – Frank Lansner and Jens Olaf Pepke Pedersen.

The paper’s abstract.

Temperature data 1900–2010 from meteorological stations across the world have been analyzed and it has been found that all land areas generally have two different valid temperature trends. Coastal stations and hill stations facing ocean winds are normally more warm-trended than the valley stations that are sheltered from dominant oceans winds.

Thus, we found that in any area with variation in the topography, we can divide the stations into the more warm trended ocean air-affected stations, and the more cold-trended ocean air-sheltered stations. We find that the distinction between ocean air-affected and ocean air-sheltered stations can be used to identify the influence of the oceans on land surface. We can then use this knowledge as a tool to better study climate variability on the land surface without the moderating effects of the ocean.

We find a lack of warming in the ocean air sheltered temperature data – with less impact of ocean temperature trends – after 1950. The lack of warming in the ocean air sheltered temperature trends after 1950 should be considered when evaluating the climatic effects of changes in the Earth’s atmospheric trace amounts of greenhouse gasses as well as variations in solar conditions.

More generally, the paper’s authors are saying that over fairly short distances temperature stations will show different climatic trends. This has a profound implication for temperature homogenization. From Venema et al 2012.

The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities (Conrad and Pollak, 1950). In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations. 

Lansner and Pedersen are, by implication, demonstrating that the principal assumption on which homogenization is based (that nearby temperature stations are exposed to almost the same climatic signal) is not valid. As a result, data homogenization will not only eliminate biases in the temperature data (such as measurement biases, impacts of station moves and the urban heat island effect where it impacts a minority of stations) but will also adjust out actual climatic trends. Where the climatic trends are localized and not replicated in surrounding areas, they will be eliminated by homogenization. What I found in early 2015 (following the examples of Paul Homewood, Euan Mearns and others) is that there are examples from all over the world where the data suggests that nearby temperature stations are exposed to different climatic signals. Data homogenization will, therefore, cause quite weird and unstable results. A number of posts were summarized in my post Defining “Temperature Homogenisation”.  Paul Matthews at Cliscep corroborated this in his post of February 2017 “Instability of GHCN Adjustment Algorithm“.
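To see why that assumption matters, below is a minimal sketch of the relative approach described by Venema et al: the candidate series is compared with a composite of nearby stations, and a jump in the difference series is read as an artificial inhomogeneity. The station arrays and the crude break measure are mine, for illustration only.

    import numpy as np

    def difference_series(candidate, neighbours):
        """Candidate annual temperatures minus a composite (mean) of nearby stations."""
        reference = np.mean(neighbours, axis=0)
        return np.asarray(candidate) - reference

    def largest_step(diff):
        """Crude breakpoint measure: biggest shift in the mean of the difference
        series either side of a candidate break year (assumes a reasonably long series)."""
        steps = [abs(diff[:i].mean() - diff[i:].mean()) for i in range(3, len(diff) - 3)]
        return max(steps) if steps else 0.0

    # If nearby stations really share the candidate's climate signal, diff is near-flat
    # noise and a large step marks a station move or instrument change. If they do not,
    # a genuine local trend produces the same signature and gets adjusted out as well.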

During my attempts to understand the data, I also found that those who support AGW theory not only do not question their assumptions but also have strong shared beliefs in what the data ought to look like. One of the most significant in this context is a Climategate email sent on Mon, 12 Oct 2009 by Kevin Trenberth to Michael Mann of Hockey Stick fame, and copied to Phil Jones of the Hadley Centre, Thomas Karl of NOAA, Gavin Schmidt of NASA GISS, plus others.

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate. (emphasis mine)

Homogenizing data a number of times, and evaluating the unstable results in the context of strongly-held beliefs, will bring the trends ever more into line with those beliefs. There is no requirement for some sort of conspiracy of deliberate data manipulation behind this emerging pattern of adjustments. Indeed a conspiracy, in terms of a group knowing the truth and deliberately perverting that evidence, does not really apply. Another reason for the conspiracy not applying is the underlying purpose of homogenization. It is to allow a temperature station to be representative of the surrounding area. Without that, it would not be possible to compile an average for the surrounding area, from which the global average is constructed. It is this requirement, in the context of real climatic differences over relatively small areas, that I would suggest leads to the deletion of “erroneous” data and the infilling of estimated data elsewhere.

The gradual bringing of the temperature data sets into line with beliefs is most clearly shown in the NASA GISS temperature data adjustments. Climate4you produces regular updates of the adjustments made since May 2008. Below is the March 2018 version.

The reduction of the 1910 to 1940 warming period (which is at odds with theory) and the increase in the post-1975 warming phase (which correlates with the rise in CO2) support my contention about the influence of beliefs.

Kevin Marshall

 

President Trump's Tweet on record cold in New York and Temperature Data

As record-breaking winter weather grips the North-Eastern USA (and much of Canada as well), President Donald Trump has caused quite a stir with his latest Tweet.

There is nothing new in the current President’s tweets causing controversy. This hard-hitting one highlights a point of real significance for AGW theory. After decades of human-caused global warming, record cold temperatures are more significant than record warm temperatures. Record cold can be accommodated within the AGW paradigm by claiming greater variability in climate resulting from the warming. This would be a portent of the whole climate system being thrown into chaos once some tipping point had been breached. But that would also require that warm records are
(a) far more numerous than cold records and
(b) that many new warm records outstrip the old records of a few decades ago by a greater amount than the rise in average temperatures in that area.
I will illustrate with three temperature data sets I looked at a couple of years ago – Reykjavík in Iceland, and Isfjord Radio and Svalbard Airport on Svalbard.

Suppose there had been an extremely high and an extremely low temperature in 2009 in Reykjavík. For the extremely high temperature to be a new record, it would only have to be nominally higher than a record set in 1940, as the unadjusted average anomaly data is much the same for the two periods. If the previous record had been set in, say, 1990, a new high record would only be confirmation of a more extreme climate if it was at least 1°C higher than the previous record. Conversely, a new cold record in 2009 could be up to 1°C higher than a 1990 low record and still count as evidence of greater climate extremes. Similarly, in the case of Svalbard Airport, new warm records in 2008 or 2009 would need to be over 4°C higher than records set around 1980, and new cold records could be up to 4°C higher than records set around 1980, to count as effective new warm and cold records.
By rebasing in terms of unadjusted anomaly data (and looking at monthly data) a very large number of possible records could be generated from one temperature station. With thousands of temperature stations with long records, it is possible to generate a huge number of “records” to analyze whether temperatures are becoming more extreme. But absolute cold records should be few and far between. However, if relative cold records outstrip relative warm records, then there are questions to be asked of the average data. Similarly, if there were a lack of absolute records, or a decreasing frequency of relative records, then the beliefs in impending climate chaos would be undermined.
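A sketch of the sort of mining I have in mind, for a single station's monthly data. The rebasing removes the shift in the average, so warm and cold records are counted on a like-for-like basis; the base period and the array layout are my own assumptions.

    import numpy as np

    def count_relative_records(monthly_temps, years, base=(1901, 1930)):
        """Count new warm and cold records in anomaly terms for one station.

        monthly_temps: array of shape (n_years, 12); years: matching array of years.
        Anomalies are taken against each calendar month's mean over the base period,
        so a "record" means exceeding all previous anomalies, not absolute temperatures.
        """
        temps = np.asarray(monthly_temps, dtype=float)
        years = np.asarray(years)
        in_base = (years >= base[0]) & (years <= base[1])
        anoms = temps - temps[in_base].mean(axis=0)      # per-month climatology removed
        warm = cold = 0
        for m in range(12):
            series = anoms[:, m]
            hi = lo = series[0]
            for value in series[1:]:
                if value > hi:
                    warm, hi = warm + 1, value
                elif value < lo:
                    cold, lo = cold + 1, value
        return warm, cold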

I would not want to jump ahead with the conclusions. The most important element is to mine the temperature data and then analyze the results in multiple ways. There are likely to be surprises that could enhance understanding of climate in quite novel ways.

Kevin Marshall

Evidence for the Stupidest Paper Ever

Judith Curry tweeted a few days ago

This is absolutely the stupidest paper I have ever seen published.

What might cause Judith Curry to make such a statement about Internet Blogs, Polar Bears, and Climate-Change Denial by Proxy? Below are some notes that illustrate what might be considered stupidity.

Warmest years are not sufficient evidence of a warming trend

The US National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) both recently reported that 2016 was the warmest year on record (Potter et al. 2016), followed by 2015 and 2014. Currently, 2017 is on track to be the second warmest year after 2016. 

The theory is that rising greenhouse gas levels are leading to warming. The major greenhouse gas is CO2, supposedly accounting for about 75% of the impact. There should, therefore, be a clear relationship between the rising CO2 levels and rising temperatures. The form that the relationship should take is that an accelerating rise in CO2 levels will lead to an accelerating rate of increase in global average temperatures. Earlier this year I graphed the rate of change in CO2 levels from the Mauna Loa data.

The trend over nearly sixty years should be an accelerating trend. Depending on which temperature dataset you use, around the turn of the century warming either stopped or dramatically slowed until 2014. A strong El Nino caused a sharp spike in the last two or three years. The data contradicts the theory in the very period when the signal should be strongest.

Only the stupid would see record global average temperatures (which were rising well before the rise in CO2 was significant) as strong evidence of human influence when a little understanding of theory would show the data contradicts that influence.

Misrepresentation of Consensus Studies

The vast majority of scientists agree that most of the warming since the Industrial Revolution is explained by rising atmospheric greenhouse gas (GHG) concentrations (Doran and Zimmerman 2009, Cook et al. 2013, Stenhouse et al. 2014, Carlton et al 2015, Verheggen et al. 2015), 

Doran and Zimmerman 2009 asked two questions

1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?

2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

Believing that human activity is a significant contributing factor to rising global temperatures does not mean one believes the majority of warming is due to rising GHG concentrations. Only the stupid would fail to see the difference. Further, the results were from a subset of all scientists, namely geoscientists. The reported 97% consensus was from just 79 responses, a small subset of the total 3146 responses. Read the original to find out why.

The abstract to Cook et al. 2013 begins

We analyze the evolution of the scientific consensus on anthropogenic global warming (AGW) in the peer-reviewed scientific literature, examining 11 944 climate abstracts from 1991–2011 matching the topics ‘global climate change’ or ‘global warming’. We find that 66.4% of abstracts expressed no position on AGW, 32.6% endorsed AGW, 0.7% rejected AGW and 0.3% were uncertain about the cause of global warming. Among abstracts expressing a position on AGW, 97.1% endorsed the consensus position that humans are causing global warming. 

Expressing a position does not mean a belief. It could be an assumption. The papers were not necessarily by scientists, but merely by authors of academic papers that involved the topics ‘global climate change’ or ‘global warming’. Jose Duarte listed some of the papers that were included in the survey, along with looking at some that were left out. It shows a high level of stupidity to use these flawed surveys as supporting the statement “The vast majority of scientists agree that most of the warming since the Industrial Revolution is explained by rising atmospheric greenhouse gas (GHG) concentrations”.

Belief is not Scientific Evidence

The most recent edition of the climate bible from the UNIPCC states (AR5 WG1 Ch10 Page 869)

It is extremely likely that human activities caused more than half of the observed increase in GMST from 1951 to 2010.

Misrepresenting surveys about beliefs is necessary because the real world data, even when that data is a deeply flawed statistic, does not support the belief that “most of the warming since the Industrial Revolution is explained by rising atmospheric greenhouse gas (GHG) concentrations”.

Even if the survey data supported the statement, the authors are substituting banal statements about beliefs for empirically-based scientific statements. This is the opposite direction to achieving science-based understanding. 

The false Consensus Gap

The article states

This chasm between public opinion and scientific agreement on AGW is now commonly referred to as the consensus gap (Lewandowsky et al. 2013)

Later it is stated, in relation to sceptical blogs

Despite the growing evidence in support of AGW, these blogs continue to aggressively deny the causes and/or the projected effects of AGW and to personally attack scientists who publish peer-reviewed research in the field with the aim of fomenting doubt to maintain the consensus gap.

There is no reference that tracks the growing evidence in support of AGW. From WUWT (and other blogs) there has been a lot of debunking of the claimed signs of climate apocalypse, such as

  • Malaria increasing as a result of warming
  • Accelerating polar ice melt / sea level rise
  • Disappearing snows of Kilimanjaro due to warming
  • Kiribati and the Maldives disappearing due to sea level rise
  • Mass species extinction
  • Himalayan glaciers disappearing
  • The surface temperature record being a true and fair estimate of real warming
  • Climate models consistently over-estimating warming

To the extent that a consensus gap exists, it is between the consensus beliefs of the climate alarmist community and actual data. Scientific support for claims about the real world comes from conjectures being verified, not from the volume of publications about the subject.

Arctic Sea Ice Decline and threats to Polar Bear Populations

The authors conjecture (with references) with respect to Polar Bears that

Because they can reliably catch their main prey, seals (Stirling and Derocher 2012, Rode et al. 2015), only from the surface of the sea ice, the ongoing decline in the seasonal extent and thickness of their sea-ice habitat (Amstrup et al. 2010, Snape and Forster 2014, Ding et al. 2017) is the most important threat to polar bears’ long-term survival.

That seems plausible enough. Now for the evidence to support the conjecture.

Although the effects of warming on some polar-bear subpopulations are not yet documented and other subpopulations are apparently still faring well, the fundamental relationship between polar-bear welfare and sea-ice availability is well established, and unmitigated AGW assures that all polar bears ultimately will be negatively affected. 

There is a tacit admission that the existing evidence contradicts the theory. There is data showing a declining trend in sea ice for over 35 years, yet in that time the various polar bear populations have been growing significantly, not just “faring well“. Surely there should be a decline by now in the peripheral Arctic areas where the sea ice has disappeared? The only historical evidence of decline is this comment, made in criticism of Susan Crockford’s work.

For example, when alleging sea ice recovered after 2012, Crockford downplayed the contribution of sea-ice loss to polar-bear population declines in the Beaufort Sea.

There is no reference to this claim, so readers cannot check if the claim is supported. But 2012 was an outlier year, with record lows in the Summer minimum sea ice extent due to unusually fierce storms in August. Losses of polar bears due to random & extreme weather events are not part of any long-term decline in sea ice.

Concluding Comments

The stupid errors made include

  • Making a superficial point from the data to support a conjecture, when deeper understanding contradicts it. This is the case with the conjecture that rising GHG levels are the main cause of recent warming.
  • Clear misrepresentation of opinion surveys.
  • Even if the opinion surveys were correctly interpreted, the use of opinion to support scientific conjectures, as opposed to looking at statistical tests of actual data or estimates, should appear stupid from a scientific perspective.
  • Claims of a consensus gap between consensus and sceptic views, when the real gap is between consensus opinion and actual data.
  • The claim that polar bear populations will decline as sea ice declines is contradicted by the historical data. There is no recognition of this contradiction.

I believe the Harvey et al. paper gives some lessons for climatologists in particular and academics in general.

First is that claims crucial to the argument need to be substantiated. That substantiation needs to be more than referencing others who have made the same claims before.

Second is that points drawn from referenced articles should be accurately represented.

Third is to recognize that scientific papers need to first reference actual data and estimates, not opinions. It is by comparing current opinions with the real world that opportunities for the advancement of understanding arise.

Fourth is that any academic discipline should aim to move from conjectures to empirically-based verifiable statements.

I have only picked out some of the more obvious of the stupid points. The question that needs to be asked is why such stupidity should have been agreed upon by 14 academics and then have passed peer review.

Kevin Marshall

How the “greater than 50% of warming since 1950 is human caused” claim is deeply flawed

Over at Cliscep, Jaime Jessop has rather jokingly raised a central claim of the IPCC Fifth Assessment Report, after someone on Twitter had accused her of not being a real person.

So here’s the deal: Michael Tobis convinces me, on here, that the IPCC attribution statement is scientifically sound and it is beyond reasonable doubt that more than half of the warming post 1950 is indeed caused by emissions, and I will post a photo verifying my actual existence as a real person.

The Report states (AR5 WG1 Ch10 Page 869)

It is extremely likely that human activities caused more than half of the observed increase in GMST from 1951 to 2010.

This “extremely likely” is at the 95% confidence level and includes all human causes. The more specific quote on human greenhouse gas emissions is from page 878, section “10.2.4 Single-Step and Multi-Step Attribution and the Role of the Null Hypothesis”.

Attribution results are typically expressed in terms of conventional ‘frequentist’ confidence intervals or results of hypothesis tests: when it is reported that the response to anthropogenic GHG increase is very likely greater than half the total observed warming, it means that the null hypothesis that the GHG-induced warming is less than half the total can be rejected with the data available at the 10% significance level.

It is a much more circumspect message than the “human influence on the climate system is clear” announcements of WG1 four years ago.  In describing attribution studies, the section states

Overall conclusions can only be as robust as the least certain link in the multi-step procedure.

There are a number of candidates for “least certain link” in terms of empirical estimates. In general, if the estimates are made with reference to the other estimates, or biased by theory/beliefs, then the statistical test is invalidated. This includes the surface temperature data.

Further, if the models have been optimised to fit the surface temperature data, then the >50% is an absolute maximum, whilst the real figure, based on perfect information, is likely to be less than that.

Most of all are the possibilities of unknown unknowns. For instance, the suggestion that non-human causes could explain pretty much all the post-1950 warming can be inferred from some paleoclimate studies. This Greenland ice core reconstruction (graphic from climate4you) shows warming in the distant past around as great as, or greater than, the current warming. The timing of a warm cycle is not too far out either.

In the context of Jaime’s challenge, there is more than reasonable doubt in the IPCC attribution statement, even if a statistical confidence of 90% (GHG emissions) or 95% (all human causes) were acceptable as persuasive evidence.

There is a further problem with the statement. Human greenhouse gas emissions are meant to account for all the current warming, not just over 50%. If the full impact of a doubling of CO2 is eventually 3°C of warming, then the 1960-2010 CO2 rise from 317ppm to 390ppm alone will eventually produce 0.9°C of warming. Possibly 1.2°C of warming from all sources. This graphic from AR5 WG1 Ch10 shows the issues.

The orange line of anthropogenic forcing accounts for nearly 100% of all the measured warming post-1960 of around 0.8°C – shown by the large dots. Yet this is only about 60% of the warming expected from the GHG rises if a doubling of CO2 will produce 3°C of warming. The issue is with the cluster of dots at the right of the graph, representing the pause, or slowdown, in warming around the turn of the century. I have produced a couple of charts that illustrate the problem.

In the first graph, the long-term impact on temperatures of the CO2 rise from 2003-2012 is 2.5 times that from 1953-1962. Similarly, from the second graph, the long-term impact on temperatures of the CO2 rise from 2000-2009 is 2.6 times that from 1950-1959. It is a darn funny lagged response if the rate of temperature rise can significantly slow down when the alleged dominant element causing it to rise accelerates. It could be explained by rising GHG emissions being a minor element in the temperature rise, with natural factors causing some of the warming in the 1976-1998 period, then reversing and causing cooling in the last few years.
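The arithmetic behind these figures is just the logarithmic relationship: if a doubling of CO2 eventually gives S degrees of warming, the eventual warming from a rise from C0 to C1 ppm is S x log2(C1/C0). A sketch, using the 3°C-per-doubling sensitivity assumed above; the decade-end CO2 values to feed in are whatever Mauna Loa (or pre-1958 estimates) give for the chosen periods.

    import math

    def eventual_warming(c0_ppm, c1_ppm, sensitivity=3.0):
        """Equilibrium warming (deg C) for a CO2 rise from c0_ppm to c1_ppm,
        assuming `sensitivity` degrees of warming per doubling of CO2."""
        return sensitivity * math.log2(c1_ppm / c0_ppm)

    print(round(eventual_warming(317, 390), 2))   # the 1960-2010 rise: roughly 0.9 C

    # The decade comparisons above use the same formula: the ratio
    # eventual_warming(start2, end2) / eventual_warming(start1, end1)
    # for the CO2 levels at the ends of each ten-year period.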

Kevin Marshall

 

 

Climate Delusions 2 – Use of Linear Warming Trends to defend Human-caused Warming

This post is part of a planned series about climate delusions. These are short pieces on where the climate alarmists are either deluding themselves, or deluding others, about the evidence to support the global warming hypothesis; the likely implications for changing the climate; the consequential implications of changing / changed climate; or associated policies to either mitigate or adapt to the harms. I will also make suggestions of ways to avoid the delusions.

In the previous post I looked at how the claim that the Karl et al 2015 paper was a pause-buster required falsely showing a linear trend in the data. In particular it required the selection of the 1950-1999 period for comparison with the twenty-first century warming. Comparison with the previous 25 years would show a marked decrease in the rate of warming. Now consider again the claims made in the summary.

Newly corrected and updated global surface temperature data from NOAA’s NCEI do not support the notion of a global warming “hiatus.”  Our new analysis now shows that the trend over the period 1950–1999, a time widely agreed as having significant anthropogenic global warming, is 0.113°C decade−1 , which is virtually indistinguishable from the trend over the period 2000–2014 (0.116°C decade−1 ). …..there is no discernable (statistical or otherwise) decrease in the rate of warming between the second half of the 20th century and the first 15 years of the 21st century.

…..

…..the IPCC’s statement of 2 years ago—that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years”—is no longer valid.

The “pause-buster” linear warming trend needs to be put into context. In terms of timing, the Karl re-evaluation of the global temperature data was published in the run-up to the COP21 Paris meeting, which aimed to get global agreement on reducing global greenhouse gas emissions to near zero by the end of the century. Having a consensus of the world’s leading climate experts admitting that warming had stalled would have strongly implied that there was no big problem to be dealt with. But is demonstrating a linear warming trend – even if it could be done without the use of grossly misleading statements like those in the Karl paper – sufficient to show that warming is caused by greenhouse gas emissions?

The IPCC estimates that about three-quarters of all greenhouse gas emissions are of carbon dioxide. The BBC recently produced a graphic of the emission types, reproduced as Figure 1.

 

There is a strong similarity between the rise in CO2 emissions and the rise in CO2 levels. Although I will not demonstrate this here, the emissions data estimates are available from CDIAC, where my claim can be verified. The issue arises with the rate of increase in CO2 levels. The full Mauna Loa CO2 record shows a marked increase in CO2 levels since the end of the 1950s, as reproduced in Figure 2.

What is not so clear from the graph is that the rate of rise is increasing. In fact in the 1960s CO2 increased on average by less than 1ppm per annum, whereas in the last few years it has exceeded 2ppm per annum. But the supposed eventual impact of the rise in CO2 is usually expressed in terms of a doubling. That implies that if CO2 rises at a constant percentage rate, and the full impact is near instantaneous, then the rate of warming produced from CO2 alone will be linear. In Figure 3 I have shown the percentage annual increase in CO2 levels.
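Figure 3 can be reproduced directly from the Mauna Loa annual mean series; a minimal sketch, with a hypothetical file name and column layout for the NOAA download:

    import pandas as pd

    # Hypothetical local copy of the Mauna Loa annual mean CO2 file,
    # with columns "year" and "mean" (ppm); the name and layout are assumptions.
    co2 = pd.read_csv("co2_annmean_mlo.csv", index_col="year")

    co2["ppm_increase"] = co2["mean"].diff()                # year-on-year rise in ppm
    co2["pct_increase"] = 100 * co2["mean"].pct_change()    # the same rise as a percentage
    # If CO2 rose at a constant percentage rate, pct_increase would be flat and the
    # warming produced from CO2 alone would accrue linearly; Figure 3 tests that.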

Of note from the graph

  • In every year of the record the CO2 level has increased.
  • The warming impact of the rise in CO2 post 2000 was twice that of the 1960s.
  • There was a marked slowdown in the rate of rise of CO2 in the 1990s, but it was below the long-term average for only a few years.
  • After 1998, CO2 growth rates increased to a level greater than for any previous period.

The empirical data of Mauna Loa CO2 levels shows what should be an increasing impact on average temperatures. The marked slowdown, or pause, in global warming post 2000 is therefore inconsistent with CO2 having a dominant, or even a major, role in producing that warming. Quoting a linear rate of warming over the whole period means deluding both oneself and others about the empirical failure of the theory.

Possible Objections

You fail to isolate the short-term and long-term effects of CO2 on temperature.

Reply: The lagged, long-term effects would have to be both larger and negative for a long period to account for the divergence. There has so far been no successful and clear modelling, just a number of attempts that amount to excuses.

Natural variations could account for the slowdown.

Reply: Equally, natural variations could account for much, if not all, of the average temperature rise in preceding decades. Non-verifiable constructs that contradict real-world evidence are for those who delude themselves or others.  Further, if natural factors can be a stronger influence on global average temperature change for more than a decade than human-caused factors, then this is a tacit admission that human-caused factors are not a dominant influence on global average temperature change.

Kevin Marshall

 

Climate Delusions 1 – Karl et al 2015 propaganda

This is the first in a planned series on climate delusions. These are short pieces on where the climate alarmists are either deluding themselves, or deluding others, about the evidence to support the global warming hypothesis; the likely implications for changing the climate; the consequential implications of changing / changed climate; or associated policies to either mitigate or adapt to the harms. I will also make suggestions of ways to avoid the delusions.

Why is the Karl et al 2015 paper, Possible artifacts of data biases in the recent global surface warming hiatus, proclaimed to be the pause-buster?

The concluding comments to the paper give the following boast.

Newly corrected and updated global surface temperature data from NOAA’s NCEI do not support the notion of a global warming “hiatus.”  …..there is no discernable (statistical or otherwise) decrease in the rate of warming between the second half of the 20th century and the first 15 years of the 21st century. Our new analysis now shows that the trend over the period 1950–1999, a time widely agreed as having significant anthropogenic global warming (1), is 0.113°C decade−1 , which is virtually indistinguishable from the trend over the period 2000–2014 (0.116°C decade−1 ). Even starting a trend calculation with 1998, the extremely warm El Niño year that is often used as the beginning of the “hiatus,” our global temperature trend (1998–2014) is 0.106°C decade−1 —and we know that is an underestimate because of incomplete coverage over the Arctic. Indeed, according to our new analysis, the IPCC’s statement of 2 years ago—that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years”—is no longer valid.

An opinion piece in Science, Much-touted global warming pause never happened, basically repeats these claims.

In their paper, Karl’s team sums up the combined effect of additional land temperature stations, corrected commercial ship temperature data, and corrected ship-to-buoy calibrations. The group estimates that the world warmed at a rate of 0.086°C per decade between 1998 and 2012—more than twice the IPCC’s estimate of about 0.039°C per decade. The new estimate, the researchers note, is much closer to the rate of 0.113°C per decade estimated for 1950 to 1999. And for the period from 2000 to 2014, the new analysis suggests a warming rate of 0.116°C per decade—slightly higher than the 20th century rate. “What you see is that the slowdown just goes away,” Karl says.

The Skeptical Science temperature trend data gives very similar results. 1950-1999 gives a linear trend of 0.113°C per decade against 0.112°C per decade, and 2000-2014 gives 0.097°C per decade against 0.116°C per decade. There is no real sign of a slowdown.

However, looking at any temperature anomaly chart, whether Karl, NASA Gistemp, or HADCRUT4, it is clear that the period 1950-1975 showed little or no warming, whilst the last quarter of the twentieth century showed significant warming. This is confirmed by the Sks trend calculator figures in Figure 1.
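The sensitivity to the chosen period is easy to check with a least-squares trend over any start and end year of an annual anomaly series. A minimal sketch; the years and anoms arrays stand for whichever annual data set is being tested.

    import numpy as np

    def trend_per_decade(years, anoms, start, end):
        """Least-squares linear trend (deg C per decade) over [start, end] inclusive."""
        years = np.asarray(years)
        anoms = np.asarray(anoms, dtype=float)
        sel = (years >= start) & (years <= end)
        slope_per_year = np.polyfit(years[sel], anoms[sel], 1)[0]
        return 10 * slope_per_year

    # Comparing, say, trend_per_decade(years, anoms, 1950, 1999),
    # trend_per_decade(years, anoms, 1976, 1999) and trend_per_decade(years, anoms, 2000, 2014)
    # shows how much the "no slowdown" conclusion depends on the earlier period chosen.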

What can be clearly seen is that the claim of no slowdown in the twenty-first century compared with previous years is dependent on the selection of the period. To repeat the Karl et al. concluding claim.

Indeed, according to our new analysis, the IPCC’s statement of 2 years ago—that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years”—is no longer valid.

The period 1976-2014 is in the middle of the IPCC’s “past 30 to 60 years” range, and from the Sks temperature trend calculator the trend is 0.160°C per decade. That trend is significantly higher than 0.097°C per decade, so a slowdown has taken place. Any remotely competent peer review would have checked the most startling claim. The comparative figures from HADCRUT4 are shown in Figure 2.

With the HADCRUT4 temperature trends it is not so easy to claim that there is no significant slowdown. But the full claim in the Karl et al paper to be a pause-buster can only be made by a combination of recalculating the temperature anomaly figures and selecting the 1950-1999 period for comparison with the twenty-first century warming. It is the latter part that makes the “pause-buster” claims a delusion.

Kevin Marshall

 

Warming Bias in Temperature Data due to Consensus Belief not Conspiracy

In a Cliscep article Science: One Damned Adjustment After Another? Geoff Chambers wrote:-

So is the theory of catastrophic climate change a conspiracy? According to the strict dictionary definition, it is, in that the people concerned clearly conferred together to do something wrong – namely introduce a consistent bias in the scientific research, and then cover it up.

This was in response to the latest David Rose article in the Mail on Sunday, about claims that the infamous Karl et al 2015 paper breached America’s National Oceanic and Atmospheric Administration’s (NOAA) own rules on scientific integrity.

I would counter this claim about conspiracy in respect of the temperature records, even on the strict dictionary definition. Still less does it conform to a conspiracy theory in the sense of some group, with a grasp of what they believe to be the real truth, acting together to provide an alternative to that truth, or to divert attention and resources away from an understanding of that truth, like an internet troll. A clue as to why this is the case comes from one of the most notorious Climategate emails, sent by Kevin Trenberth to Michael Mann on Mon, 12 Oct 2009 and copied to most of the leading academics in the “team” (including Thomas R. Karl).

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate.

It is the first sentence that is commonly quoted, but it is the last part that is the most relevant for temperature anomalies. There are inevitably a number of homogenisation runs to get a single set of anomalies. For example, the Reykjavik temperature data was (a) adjusted by the Iceland Met Office by standard procedures to allow for known local biases, (b) adjusted for GHCNv2 (the “raw data”), (c) adjusted again in GHCNv3, and (d) homogenized by NASA to be included in Gistemp.

There are steps that I have missed. Certainly Gistemp homogenizes the data quite frequently as new sets of data come in. As Paul Matthews notes, adjustments are unstable. Although one data set might on average be pretty much the same as previous ones, there will be quite large anomalies thrown out every time the algorithms are re-run for new data. What is more, due to the nature of the computer algorithms, there is no audit trail, therefore the adjustments are largely unexplainable with reference to the data before, let alone with reference to the original thermometer readings. So how does one know whether the adjustments are reasonable or not, except through a belief in how the results ought to look? In the case of climatologists like Kevin Trenberth and Thomas R. Karl, variations that show warmer than the previous run will be more readily accepted as correct than variations that show cooler. That is, they will find reasons why a particular temperature data set now shows higher warming than before, but will reject as outliers results that show less warming than before. It is the same when choosing techniques, or adjusting for biases in the data. This is exacerbated when a number of different bodies with similar belief systems try to seek a consensus of results, as Zeke Hausfather alludes to in his article at the CarbonBrief. Rather than verifying results against the real world, temperature data is made to conform to the opinions of others with similar beliefs about the world.

Kevin Marshall

How strong is the Consensus Evidence for human-caused global warming?

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Richard Feynman – 1964 Lecture on the Scientific Method

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

The Debunking Handbook 2011 – John Cook and Stephan Lewandowsky

My previous post looked at the attacks on David Rose for daring to suggest that the rapid fall in global land temperatures following the El Nino event was strong evidence that the record highs in global temperatures were not due to human greenhouse gas emissions. The technique used was to look at long-term linear trends. The main problems with this argument were
(a) according to AGW theory warming rates from CO2 alone should be accelerating and at a higher rate than the estimated linear warming rates from HADCRUT4.
(b) HADCRUT4 shows warming stopped from 2002 to 2014, yet in theory the warming from CO2 should have accelerated.

Now there are at least two ways to view my arguments. First is to look at Feynman’s approach. The climatologists and associated academics attacking journalist David Rose chose to do so from the perspective of a very blurred specification of AGW theory. That is, human emissions will cause greenhouse gas levels to rise, which will cause global average temperatures to rise. Global average temperatures have clearly risen in all long-term (>40 year) data sets, so the theory is confirmed. On a rising trend, with large variations due to natural variability, any new records will be primarily “human-caused”. But making the theory and data slightly less vague reveals an opposite conclusion. Around the turn of the century the annual percentage increase in CO2 emissions went from 0.4% to 0.5% a year (figure 1), which should have led to an acceleration in the rate of warming. In reality warming stalled.

The reaction was to come up with a load of ad hoc excuses. The Hockey Schtick blog had reached 66 separate excuses for the “pause” by November 2014, from the peer-reviewed to a comment in the UK Parliament.  This could be because climate is highly complex, with many variables, where whether each contributes can only be guessed at, let alone the magnitude of each factor and the interrelationships between all the factors. So how do you tell which statements are valid information and which are misinformation? I agree with Cook and Lewandowsky that misinformation is pernicious, and difficult to get rid of once it becomes entrenched. So how does one distinguish between the good information and the bad, misleading or even pernicious?

The Lewandowsky / Cook answer is to follow the consensus of opinion. But what is the consensus of opinion? In climate one variation is to follow a small subset of academics in the area who answer in the affirmative to

1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?

2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

The problem is that the first question is just reading a graph and the second is a belief statement with no precision. Anthropogenic global warming has been a hot topic for over 25 years now. Yet these two very vague empirically-based questions, forming the foundations of the subject, should be able to be formulated more precisely. On the second, it is a case of having pretty clear and unambiguous estimates as to the percentage of warming, so far, that is human caused. On that, the consensus of leading experts is unable to say whether it is 50% or 200% of the warming so far. (There are meant to be time lags and factors like aerosols that might suppress the warming). This from the 2013 UNIPCC AR5 WG1 SPM section D3:-

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

The IPCC, encapsulating the state-of-the-art knowledge, cannot provide firm evidence in the form of a percentage, or even a fairly broad range, even with over 60 years of data to work on.  It is even worse than it appears. The extremely likely phrase is a Bayesian probability statement. Ron Clutz’s simple definition from earlier this year was:-

Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.

For the IPCC to claim that their statement is extremely likely, at the fifth attempt, they should be able to show some sort of progress in updating their beliefs in the light of new evidence. That would mean narrowing the estimate of the magnitude of impact of a doubling of CO2 on global average temperatures. As Clive Best documented in a cliscep comment in October, the IPCC reports from 1990 to 2013 failed to change the estimated range of 1.5°C to 4.5°C. Looking up Climate Sensitivity in Wikipedia we get the origin of the range estimate.

A committee on anthropogenic global warming convened in 1979 by the National Academy of Sciences and chaired by Jule Charney estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available; one, due to Syukuro Manabe, exhibited a climate sensitivity of 2 °C, the other, due to James E. Hansen, exhibited a climate sensitivity of 4 °C. “According to Manabe, Charney chose 0.5 °C as a not-unreasonable margin of error, subtracted it from Manabe’s number, and added it to Hansen’s. Thus was born the 1.5 °C-to-4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since…

It is revealing that the quote is under the subheading Consensus Estimates. The climate community have collectively failed to update the original beliefs, based on a very rough estimate. The emphasis on referring to consensus beliefs about the world, rather than looking outward for evidence in the real world, I would suggest is the primary reason for this failure. Yet such community-based beliefs completely undermine the integrity of the Bayesian estimates, making their use in statements about climate clear misinformation in Cook and Lewandowsky’s use of the term. What is more, those in the climate community who look primarily to these consensus beliefs rather than the data of the real world will endeavour to dismiss the evidence, or make up ad hoc excuses, or smear those who try to disagree. A caricature of these perspectives with respect to global average temperature anomalies is available in the form of a flickering widget at John Cook’s skepticalscience website. This purports to show the difference between “realist” consensus and “contrarian” non-consensus views. Figure 2 is a screenshot of the consensus views, interpreting warming as a linear trend. Figure 3 is a screenshot of the non-consensus or contrarian views. These are supposed to interpret warming as a series of short, disconnected periods of no warming. Over time, each period just happens to be at a higher level than the previous one. There are a number of things that this indicates.

(a) The “realist” view is of a linear trend throughout any data series. Yet the period from around 1940 to 1975 has no warming, or slight cooling, depending on the data set. Therefore any linear trend line starting earlier than 1970-1975 and ending in 2015 will show a lower rate of warming than one starting in the mid-1970s. This would be consistent with the rate of CO2 increase rising over time, as shown in figure 1. But shortening the period, again ending in 2015, once the period becomes less than 30 years, the warming trend will also decrease. This contradicts the theory, unless ad hoc excuses are used, as shown in my previous post using the HADCRUT4 data set.

(b) Those who agree with the consensus are called “Realists”, despite looking inwards towards common beliefs. Those who disagree are labelled “Contrarians”. This is not inaccurate when there is a dogmatic consensus. But it is utterly false to lump all those who disagree together as holding the same views, especially when no examples are provided of those who hold such views.

(c) The linear trend appears a more plausible fit than the series of “contrarian” lines. By implication, those who disagree with the consensus are viewed as having a distinctly more blinkered and distorted perspective than those who follow the consensus. Yet even using the Gistemp data set (which gives the greatest support to the consensus views) there is a clear break in the linear trend. The less partisan HADCRUT4 data shows an even greater break.

Those who spot the obvious – that around the turn of the century warming stopped or slowed down, when in theory it should have accelerated – are given a clear choice. They can conform to the scientific consensus, denying the discrepancy between theory and data. Or they can act as scientists, denying the false and empirically empty scientific consensus, receiving the full weight of all the false and career-damaging opprobrium that accompanies it.

Figure 2 : Screenshot of the Skeptical Science widget showing the consensus (“realist”) interpretation of warming as a linear trend

 

 

Figure 3 : Screenshot of the Skeptical Science widget showing the non-consensus (“contrarian”) interpretation of warming as a series of disconnected periods of no warming

Kevin Marshall