Climate Delusions 2 – Use of Linear Warming Trends to defend Human-caused Warming

This post is part of a planned series about climate delusions. These are short pieces examining where climate alarmists are either deluding themselves, or deluding others, about the evidence to support the global warming hypothesis; the likely implications of a changing climate; the consequential implications of a changing or changed climate; or the associated policies to either mitigate or adapt to the harms. Where possible, I will make suggestions of ways to avoid the delusions.

In the previous post I looked at how, for the Karl et al 2015 paper to be a pause-buster, it required falsely showing a linear trend in the data. In particular it required the selection of the 1950-1999 period for comparison with the twenty-first century warming. Comparison with the previous 25 years would show a marked decrease in the rate of warming. Now consider again the claims made in the summary.

Newly corrected and updated global surface temperature data from NOAA’s NCEI do not support the notion of a global warming “hiatus.”  Our new analysis now shows that the trend over the period 1950–1999, a time widely agreed as having significant anthropogenic global warming, is 0.113°C decade−1 , which is virtually indistinguishable from the trend over the period 2000–2014 (0.116°C decade−1 ). …..there is no discernable (statistical or otherwise) decrease in the rate of warming between the second half of the 20th century and the first 15 years of the 21st century.


…..the IPCC’s statement of 2 years ago—that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years”—is no longer valid.

The “pause-buster” linear warming trend needs to be put into context. In terms of timing, the Karl re-evaluation of the global temperature data was published in the run-up to the COP21 Paris meeting, which aimed to get global agreement on reducing greenhouse gas emissions to near zero by the end of the century. Having a consensus of the world’s leading climate experts admitting that warming had stalled would have strongly implied that there was no big problem to be dealt with. But is demonstrating a linear warming trend – even if it could be done without the use of grossly misleading statements like those in the Karl paper – sufficient to show that warming is caused by greenhouse gas emissions?

The IPCC estimates that about three-quarters of all greenhouse gas emissions are of carbon dioxide. The BBC recently made a graphic of the emission types, reproduced as Figure 1.


There is a strong similarity between the rise in CO2 emissions and the rise in CO2 levels. Although I will not demonstrate this here, the emissions data estimates are available from CDIAC, where my claim can be verified. The issue arises with the rate of increase in CO2 levels. The full Mauna Loa CO2 record shows a marked increase in CO2 levels since the end of the 1950s, as reproduced in Figure 2.

What is not so clear is that the rate of rise is increasing. In fact in the 1960s CO2 increased on average by less than 1ppm per annum, whereas in the last few years it has exceeded 2ppm per annum. But the supposed eventual impact of a rise in CO2 comes through a doubling relationship. That implies that if CO2 rises at a constant percentage rate, and the full impact is near instantaneous, then the rate of warming produced from CO2 alone will be linear. In Figure 3 I have shown the percentage annual increase in CO2 levels.

Of note from the graph:

  • In every year of the record the CO2 level has increased.
  • The warming impact of the rise in CO2 post 2000 was twice that of the 1960s.
  • There was a marked slowdown in the rate of rise in CO2 in the 1990s, but it was only for a few years below the long term average.
  • After 1998 CO2 growth rates increased to a level greater than for any previous period.
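The growth rates behind these observations are easy to check. Below is a minimal sketch using a handful of illustrative annual-mean values in the style of the Mauna Loa record (the exact ppm figures are placeholders, not the actual dataset); it computes the compound annual growth rate of CO2 between sample years.

```python
# Illustrative annual mean CO2 levels (ppm), in the style of the
# Mauna Loa record; the exact values are placeholders, not the dataset.
co2 = {1959: 316.0, 1969: 324.6, 1979: 336.8,
       1989: 353.0, 1999: 368.3, 2009: 387.4, 2015: 400.8}

years = sorted(co2)
for y0, y1 in zip(years, years[1:]):
    # compound annual growth rate between the two sample years
    rate = (co2[y1] / co2[y0]) ** (1 / (y1 - y0)) - 1
    print(f"{y0}-{y1}: {rate * 100:.2f}% per annum")
```

On these figures the rate roughly doubles, from under 0.3% a year in the 1960s to over 0.5% a year recently, consistent with the observations above.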

The empirical data of Mauna Loa CO2 levels shows what should be an increasing impact on average temperatures. The marked slowdown, or pause, in global warming post 2000 is therefore inconsistent with CO2 having a dominant, or even a major, role in producing that warming. Quoting a linear rate of warming over the whole period means people are deluding both themselves and others about the empirical failure of the theory.

Possible Objections

You fail to isolate the short-term and long-term effects of CO2 on temperature.

Reply: The lagged, long-term effects would have to be both larger and negative for a long period to account for the divergence. There has so far been no successful and clear modelling, just a number of attempts that amount to excuses.

Natural variations could account for the slowdown.

Reply: Equally, natural variations could account for much, if not all, of the average temperature rise of the preceding decades. Non-verifiable constructs that contradict real-world evidence are for those who delude themselves or others. Further, if natural factors can be a stronger influence on global average temperature change for more than a decade than human-caused factors, then this is a tacit admission that human-caused factors are not a dominant influence on global average temperature change.

Kevin Marshall


Climate Delusions 1 – Karl et al 2015 propaganda

This is the first in a planned series about climate delusions. These are short pieces examining where climate alarmists are either deluding themselves, or deluding others, about the evidence to support the global warming hypothesis; the likely implications of a changing climate; the consequential implications of a changing or changed climate; or the associated policies to either mitigate or adapt to the harms. Where possible, I will make suggestions of ways to avoid the delusions.

Why is the Karl et al 2015 paper, Possible artifacts of data biases in the recent global surface warming hiatus, proclaimed to be the pause-buster?

The concluding comments to the paper give the following boast:

Newly corrected and updated global surface temperature data from NOAA’s NCEI do not support the notion of a global warming “hiatus.”  …..there is no discernable (statistical or otherwise) decrease in the rate of warming between the second half of the 20th century and the first 15 years of the 21st century. Our new analysis now shows that the trend over the period 1950–1999, a time widely agreed as having significant anthropogenic global warming (1), is 0.113°C decade−1 , which is virtually indistinguishable from the trend over the period 2000–2014 (0.116°C decade−1 ). Even starting a trend calculation with 1998, the extremely warm El Niño year that is often used as the beginning of the “hiatus,” our global temperature trend (1998–2014) is 0.106°C decade−1 —and we know that is an underestimate because of incomplete coverage over the Arctic. Indeed, according to our new analysis, the IPCC’s statement of 2 years ago—that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years”—is no longer valid.

An opinion piece in Science, Much-touted global warming pause never happened, basically repeats these claims.

In their paper, Karl’s team sums up the combined effect of additional land temperature stations, corrected commercial ship temperature data, and corrected ship-to-buoy calibrations. The group estimates that the world warmed at a rate of 0.086°C per decade between 1998 and 2012—more than twice the IPCC’s estimate of about 0.039°C per decade. The new estimate, the researchers note, is much closer to the rate of 0.113°C per decade estimated for 1950 to 1999. And for the period from 2000 to 2014, the new analysis suggests a warming rate of 0.116°C per decade—slightly higher than the 20th century rate. “What you see is that the slowdown just goes away,” Karl says.

The Skeptical Science temperature trend data gives very similar results: for 1950-1999, Karl’s 0.113°C decade−1 against the calculator’s 0.112°C decade−1; for 2000-2014, the calculator’s 0.097°C decade−1 against Karl’s 0.116°C decade−1. On that comparison there is no real sign of a slowdown.
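The trend figures quoted here and throughout are ordinary least-squares slopes. A minimal sketch of the calculation (numpy assumed; the anomaly series below is a made-up, perfectly linear one, not HADCRUT4):

```python
import numpy as np

def decadal_trend(years, anomalies):
    """Ordinary least-squares slope, expressed in degrees C per decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return slope * 10

# Made-up series warming at exactly 0.01 C/year from 1950 to 1999
years = np.arange(1950, 2000)
anoms = -0.3 + 0.01 * (years - 1950)
print(round(decadal_trend(years, anoms), 3))  # -> 0.1
```

The choice of start and end year is the whole game: the calculation never changes, only the period fed into it.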

However, looking at any temperature anomaly chart, whether Karl, NASA Gistemp, or HADCRUT4, it is clear that the period 1950-1975 showed little or no warming, whilst the last quarter of the twentieth century showed significant warming. This is confirmed by the Sks trend calculator figures in Figure 1.

What can be clearly seen is that the claim of no slowdown in the twenty-first century compared with previous years is dependent on the selection of the period. To repeat the Karl et al. concluding claim:

Indeed, according to our new analysis, the IPCC’s statement of 2 years ago—that the global surface temperature “has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years”—is no longer valid.

The period 1976-2014 is in the middle of the range, and from the Sks trend calculator gives 0.160°C decade−1. That trend is significantly higher than the 0.097°C decade−1 for 2000-2014, so a slowdown has taken place. Any remotely competent peer review would have checked the paper’s most startling claim. The comparative figures from HADCRUT4 are shown in Figure 2.

With the HADCRUT4 temperature trend it is not so easy to claim that there is no significant slowdown. But the full claim in the Karl et al paper to be a pause-buster can only be made by a combination of recalculating the temperature anomaly figures and selecting the 1950-1999 period for comparison with the twenty-first century warming. It is the latter part that makes the “pause-buster” claims a delusion.

Kevin Marshall


Climate Experts Attacking a Journalist by Misinformation on Global Warming


Journalist David Rose was attacked for pointing out in a Daily Mail article that the strong El Nino event, which resulted in record temperatures, was reversing rapidly. He claimed the record highs may not be down to human emissions. The Climate Feedback attack article claimed that the El Nino event did not affect the long-term human-caused trend. My analysis shows:

  • CO2 levels have been rising at increasing rates since 1950.
  • In theory this should translate into warming at increasing rates – that is, a non-linear warming rate.
  • HADCRUT4 temperature data shows warming stopped in 2002, only resuming with the El Nino event in 2015 and 2016.
  • At the central climate sensitivity estimate, whereby a doubling of CO2 leads to 3°C of global warming, HADCRUT4 was already falling short of theoretical warming in 2000. This is without the impact of other greenhouse gases.
  • Putting linear trend lines through the last 35 to 65 years of data shows very little impact from El Nino, but it reduces the apparent divergence between theoretical human-caused warming and the temperature data without eliminating it.

Claiming that the large El Nino does not affect long-term linear trends is correct. But a linear trend describes warming neither in theory nor in the leading temperature set. To say, as experts in their field, that the long-term warming trend is even principally human-caused needs a lot of circumspection. This is lacking in the attack article.



Journalist David Rose recently wrote a couple of articles in the Daily Mail on the plummeting global average temperatures.
The first on 26th November was under the headline

Stunning new data indicates El Nino drove record highs in global temperatures suggesting rise may not be down to man-made emissions

With the summary

• Global average temperatures over land have plummeted by more than 1C
• Comes amid mounting evidence run of record temperatures about to end
• The fall, revealed by Nasa satellites, has been caused by the end of El Nino

Rose’s second article used the Met Office’s HADCRUT4 data set, whereas the first used satellite data. Rose was a little more circumspect when he said:

El Nino is not caused by greenhouse gases and has nothing to do with climate change. It is true that the massive 2015-16 El Nino – probably the strongest ever seen – took place against a steady warming trend, most of which scientists believe has been caused by human emissions.

But when El Nino was triggering new records earlier this year, some downplayed its effects. For example, the Met Office said it contributed ‘only a few hundredths of a degree’ to the record heat. The size of the current fall suggests that this minimised its impact.

There was a massive reaction to the first article, as discussed by Jaime Jessop at Cliscep. She particularly noted that earlier in the year there were articles on the dramatically higher temperature record of 2015, such as in a Guardian article in January. There was also a follow-up video conversation between David Rose and Dr David Whitehouse of the GWPF commenting on the reactions. One key feature of the reactions was the claim that the El Nino effect contributed just a few hundredths of a degree to the global warming trend. I find the Climate Feedback article particularly interesting, as it emphasizes trend over short-run blips. Some examples:

Zeke Hausfather, Research Scientist, Berkeley Earth:
In reality, 2014, 2015, and 2016 have been the three warmest years on record not just because of a large El Niño, but primarily because of a long-term warming trend driven by human emissions of greenhouse gases.

Kyle Armour, Assistant Professor, University of Washington:
It is well known that global temperature falls after receiving a temporary boost from El Niño. The author cherry-picks the slight cooling at the end of the current El Niño to suggest that the long-term global warming trend has ended. It has not.

1. Recent record global surface temperatures are primarily the result of the long-term, human-caused warming trend. A smaller boost from El Niño conditions has helped set new records in 2015 and 2016.


2. The article makes its case by relying only on cherry-picked data from specific datasets on short periods.

To understand what was said, I will try to take the broader perspective – that is, to see whether the evidence points conclusively to a single long-term warming trend being primarily human-caused. This will point to the real reason (or reasons) for downplaying the impact of an extreme El Nino event on record global average temperatures. There are a number of steps in this process.

Firstly, to look at the data of rising CO2 levels. Secondly, to relate that to the predicted global average temperature rise, and then to the expected warming trends. Thirdly, to compare those trends to the trends in the actual HADCRUT4 estimates, taking note of the consequences of including other greenhouse gases. Fourthly, to put the calculated trends in the context of the statements made above.


1. The recent data of rising CO2 levels
CO2 accounts for a significant majority of the alleged warming from increases in greenhouse gas levels. Since 1958 (when accurate measurements started to be taken at Mauna Loa), CO2 levels have risen significantly. Whilst I could produce a simple graph of either the CO2 level rising from 316 to 401 ppm in 2015, or the year-on-year increases in CO2 rising from 0.8ppm in the 1960s to over 2ppm in the last few years, Figure 1 is more illustrative.

CO2 is not just rising, but the rate of rise has been increasing as well, from 0.25% a year in the 1960s to over 0.50% a year in the current century.


2. Rising CO2 should be causing accelerating temperature rises

The impact of CO2 on temperatures is not linear, but is believed to approximate to a fixed temperature rise for each doubling of CO2 levels. That means if CO2 levels were rising arithmetically, the impact on the rate of warming would fall over time. If CO2 levels were rising by the same percentage amount year-on-year, then the consequential rate of warming would be constant over time. But Figure 1 shows that the percentage rise in CO2 has increased over the last two-thirds of a century. The best way to evaluate the combination of CO2 increasing at an accelerating rate and a diminishing impact of each unit rise on warming is to crunch some numbers. The central estimate used by the IPCC is that a doubling of CO2 levels will result in an eventual rise of 3°C in global average temperatures. Dana1981 at Skepticalscience used a formula that produces a rise of 2.967°C for any doubling. After adjusting the formula, plugging the Mauna Loa annual average CO2 levels into it produces Figure 2.

In computing the data I estimated the level of CO2 in 1949 (based roughly on CO2 estimates from Law Dome ice core data) and then assumed a linear increase through the 1950s. Another assumption was that the full impact of the CO2 rise on temperatures would take place in the year following that rise.
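The number-crunching described can be sketched in a few lines. The 2.967°C-per-doubling figure is the one quoted in the text; the baseline CO2 level and growth rate below are illustrative, and the immediate-response assumption mirrors the one-year-lag simplification.

```python
import math

SENS_PER_DOUBLING = 2.967  # degrees C per doubling of CO2, as in the text

def equilibrium_warming(c, c0):
    """Warming for a CO2 rise from c0 to c, assuming the full
    (equilibrium) response arrives almost immediately."""
    return SENS_PER_DOUBLING * math.log2(c / c0)

# A constant percentage growth rate implies a constant (linear) warming rate:
c0, growth = 316.0, 0.00396   # 0.396% a year, the steady rate used in the text
levels = [c0 * (1 + growth) ** n for n in range(5)]
steps = [equilibrium_warming(b, a) for a, b in zip(levels, levels[1:])]
# every annual step adds the same warming, about 0.017 degrees C
```

Because the annual warming increment depends only on the ratio of successive CO2 levels, a fixed percentage rise gives identical increments every year; an accelerating percentage rise gives growing increments.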

The annual CO2 induced temperature change is highly variable, corresponding to the fluctuations in annual CO2 rise. The 11 year average – placed at the end of the series to give an indication of the lagged impact that CO2 is supposed to have on temperatures – shows the acceleration in the expected rate of CO2-induced warming from the acceleration in rate of increase in CO2 levels. Most critically there is some acceleration in warming around the turn of the century.

I have also included the impact of a linear trend (calculated by simply dividing the total CO2 increase in the period by the number of years), along with a steady increase of 0.396% a year, which produces a constant rate of temperature rise.

Figure 3 puts the calculations into the context of the current issue.

This gives the expected linear temperature trends from various start dates up until 2014 and 2016, assuming a one-year lag in the impact of changes in CO2 levels on temperatures. These are the same sort of linear trends that the climate experts used in criticizing David Rose. Extending the period by two more years produces very little difference – about 0.054°C of temperature rise, and an increase in trend of less than 0.01°C per decade. More importantly, the rate of temperature rise from CO2 alone should be accelerating.


3. HADCRUT4 warming

How does one compare this to the actual temperature data? A major issue is that there is a very indeterminate lag between the rise in CO2 levels and the rise in average temperature. Another issue is that CO2 is not the only greenhouse gas. The more minor greenhouse gases may have different patterns of increase over the last few decades. These would change the trends of the resultant warming, but their impact should be additional to the warming caused by CO2. That is, in the long term, CO2 warming should account for less than the total observed.
There is no need to do one's own calculations of trends from the surface temperature data. The Skeptical Science website has a trend calculator, where one can just plug in the values. Figure 4 shows an example of the graph, in which the dataset currently ends in an El Nino peak.

The trend results for HADCRUT4 are shown in Figure 5 for periods up to 2014 and 2016 and compared to the CO2 induced warming.

There are a number of things to observe from the trend data.

The most visual difference between the two tables is that the first has a pause in global warming after 2002, whilst the second has a warming trend. This is attributable to the impact of El Nino. The experts are also right in that it makes very little difference to the long-term trend: if the long term is over 40 years, then it is like adding 0.04°C per century to that long-term trend.

But there is far more within the tables than these observations. Concentrate first on the three “Trend in °C/decade” columns. The first is the CO2 warming impact from Figure 3. For a given end year, the shorter the period the higher the warming trend. Next to this are the Skeptical Science trends for the HADCRUT4 data set. Start Year 1960 has a higher trend than Start Year 1950, and Start Year 1970 has a higher trend than Start Year 1960. But then each later Start Year has a lower trend than the previous Start Years. There is one exception: the period 2010 to 2016 has a much higher trend than any other period – a consequence of the extreme El Nino event. Excluding this, there are now over three decades in which the actual warming trend has been diverging from the theory.

The third of the “Trend in °C/decade” columns is simply the difference between the HADCRUT4 temperature trend and the expected trend from rising CO2 levels. If a doubling of CO2 levels did produce around 3°C of warming, and other greenhouse gases were also contributing to warming, then one would expect CO2 to eventually explain less than the observed warming – that is, the variance would be positive. Instead, CO2 levels accelerated, actual warming stalled, and the negative variance increased.


4. Putting the claims into context

Compare David Rose

Stunning new data indicates El Nino drove record highs in global temperatures suggesting rise may not be down to man-made emissions

With Climate Feedback KEY TAKE-AWAY

1. Recent record global surface temperatures are primarily the result of the long-term, human-caused warming trend. A smaller boost from El Niño conditions has helped set new records in 2015 and 2016.

The HADCRUT4 temperature data shows that there had been no warming for over a decade, following a warming trend. This is in direct contradiction to theory, which would predict that CO2-based warming would be at a higher rate than previously. Given that the record temperatures following this hiatus came as part of a naturally-occurring El Nino event, it is fair to say that record highs in global temperatures ….. may not be down to man-made emissions.

The so-called long-term warming trend encompasses both the late twentieth century warming and the twenty-first century hiatus. As the latter flatly contradicts theory, it is incorrect to describe the long-term warming trend as “human-caused”. There needs to be a more circumspect description, such as: the vast majority of academics working in climate-related areas believe that the long-term (last 50+ years) warming is mostly “human-caused”. This would be in line with the first bullet point from the UNIPCC AR5 WG1 SPM section D3:

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

When the IPCC’s summary opinion and the actual data are taken into account, Zeke Hausfather’s comment that the records “are primarily because of a long-term warming trend driven by human emissions of greenhouse gases” is dogmatic.

Now consider what David Rose said in the second article

El Nino is not caused by greenhouse gases and has nothing to do with climate change. It is true that the massive 2015-16 El Nino – probably the strongest ever seen – took place against a steady warming trend, most of which scientists believe has been caused by human emissions.

Compare this to Kyle Armour’s statement about the first article.

It is well known that global temperature falls after receiving a temporary boost from El Niño. The author cherry-picks the slight cooling at the end of the current El Niño to suggest that the long-term global warming trend has ended. It has not.

This time Rose seems to have responded to the pressure by stating that there is a long-term warming trend, despite the data clearly showing that this is untrue, except in the vaguest sense. The data does not show a single warming trend. Going back to the Skeptical Science trends, we can break down the data from 1950 into four periods.

1950-1976 -0.014 ±0.072 °C/decade (2σ)

1976-2002 0.180 ±0.068 °C/decade (2σ)

2002-2014 -0.014 ±0.166 °C/decade (2σ)

2014-2016 1.889 ±1.882 °C/decade (2σ)
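The ±2σ ranges above are from the Skeptical Science calculator, which corrects for autocorrelation in the residuals. A naive sketch of the same calculation (numpy assumed, made-up data, and no autocorrelation correction, so its intervals come out narrower than the quoted ones):

```python
import numpy as np

def trend_with_2sigma(years, anoms):
    """OLS slope in degrees C/decade with a naive 2-sigma range.
    The Skeptical Science calculator also corrects for autocorrelation,
    which widens the interval; this sketch omits that step."""
    x = np.asarray(years, float)
    y = np.asarray(anoms, float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(resid @ resid / (len(x) - 2) / ((x - x.mean()) @ (x - x.mean())))
    return slope * 10, 2 * se * 10

# Made-up flat series with small alternating noise, in the style of 2002-2014
years = np.arange(2002, 2015)
anoms = 0.45 + 0.01 * np.array([1, -1] * 6 + [1])
slope, ci = trend_with_2sigma(years, anoms)   # slope ~ 0, ci > 0
```

Note how short, flat periods produce near-zero slopes with wide uncertainty, exactly the pattern in the 2002-2014 line above.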

There was warming for about a quarter of a century sandwiched between two periods of no warming. At the end is an uptick. Only very loosely can anyone speak of a long-term warming trend in the data. But basic theory hypothesises a continuous, non-linear, warming trend. Journalists can be excused for failing to make the distinctions. As non-experts they will reference opinion that appears sensibly expressed, especially when the alleged experts in the field are united in using such language. But those in academia, who should have a demonstrable understanding of theory and data, should be more circumspect in their statements when speaking as experts in their field. (Kyle Armour’s comment is an extreme example of what happens when academics completely suspend drawing on their expertise.) This is particularly true when there are strong divergences between the theory and the data. The consequence is plain to see: expert academic opinion tries to bring the real world into line with the theory by authoritative but banal statements about trends.

Kevin Marshall

Temperature Homogenization at Puerto Casado


The temperature homogenizations for the Paraguay data within both the BEST and UHCN/Gistemp surface temperature data sets point to a potential flaw within the temperature homogenization process. It removes real, but localized, temperature variations, creating incorrect temperature trends. In the case of Paraguay from 1955 to 1980, a cooling trend is turned into a warming trend. Whether this biases the overall temperature anomalies, or our understanding of climate variation, remains to be explored.


A small place in mid-Paraguay, on the Brazil/Paraguay border, has become the centre of the argument on temperature homogenizations.

For instance here is Dr Kevin Cowtan, of the Department of Chemistry at the University of York, explaining the BEST adjustments at Puerto Casado.

Cowtan explains at 6.40

In a previous video we looked at a station in Paraguay, Puerto Casado. Here is the Berkeley Earth data for that station. Again the difference between the station record and the regional average shows very clear jumps. In this case there are documented station moves corresponding to the two jumps. There may be another small change here that wasn’t picked up. The picture for this station is actually fairly clear.

The first of these “jumps” was a fall in the late 1960s of about 1°C. Figure 1 expands the section of the Berkeley Earth graph from the video to emphasise this change.

Figure 1 – Berkeley Earth temperature anomaly graph for Puerto Casado, with expanded section showing the fall in temperature against the estimated mean station bias.

The station move is after the fall in temperature.

Shub Niggareth looked at the metadata on the actual station move, concluding:


That is, the evidence for the station move was vague. The major evidence was the fall in temperatures. Alternative evidence is that there were a number of other stations in the area exhibiting similar patterns.

But maybe there was some unknown measurement bias (to use Steven Mosher’s term) that would make this data stand out from the rest? I have previously looked at eight temperature stations in Paraguay with respect to the NASA Gistemp and UHCN adjustments. The BEST adjustments for the stations, along with another in Paul Homewood’s original post, are summarized in Figure 2 for the late 1960s and early 1970s. All eight have similar downward adjustments that I estimate as being between 0.8°C and 1.2°C. The first six have a single adjustment; Asuncion Airport and San Juan Bautista have multiple adjustments in the period. Pedro Juan CA was of very poor data quality due to many gaps (see the GHCNv2 graph of the raw data), hence its exclusion.


Station             GHCN Location     Break Type      Break Year
                    23.4 S, 57.3 W
                    27.3 S, 55.8 W
                    22.0 S, 60.6 W
                    26.9 S, 58.3 W
Puerto Casado       22.3 S, 57.9 W    Station Move
San Juan Baut       26.7 S, 57.1 W
Asuncion Aero       25.3 S, 57.6 W    Station Move
                                      Station Move
San Juan Bautista   25.8 S, 56.3 W    Station Move
Pedro Juan CA       22.6 S, 55.6 W                    3 in 1970s


Figure 2 – Temperature stations used in previous post on Paraguayan Temperature Homogenisations


Why would both BEST and UHCN remove a consistent pattern covering an area of around 200,000 km2? The first reason, as Roger Andrews has found, is that the temperature fall was confined to Paraguay. The second reason is suggested by the UHCNv2 raw data1 shown in Figure 3.

Figure 3 – UHCNv2 “raw data” mean annual temperature anomalies for eight Paraguayan temperature stations, with mean of 1970-1979=0.

There was an average temperature fall across these eight temperature stations of about half a degree from 1967 to 1970, and over one degree by the mid-1970s. But it was not at the same time at each station. The consistency is only shown by the periods before and after, when the data sets do not diverge. Any homogenisation program would see that, for some year or month, each station’s readings were out of line with all the other data sets. Now maybe it was simply data noise, or maybe there was some unknown change, but it is clearly present in the data. Temperature homogenisation should just smooth this out. Instead it cools the past. Figure 4 shows the average change resulting from the UHCN and NASA GISS homogenisations.

Figure 4 – UHCNv2 “raw data” and NASA GISS Homogenized average temperature anomalies, with the net adjustment.

A cooling trend for the period 1955-1980 has been turned into a warming trend due to the flaw in homogenization procedures.
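To see why such a pattern gets flagged, here is a toy version of the difference-series breakpoint test that homogenisation algorithms rely on. This is an illustration of the general idea only, with invented numbers, not the actual NOAA or BEST code.

```python
import numpy as np

def step_change_year(station, regional, years):
    """Locate the largest mean shift in the station-minus-regional
    difference series - a toy breakpoint test."""
    diff = np.asarray(station, float) - np.asarray(regional, float)
    best_year, best_size = None, 0.0
    for i in range(2, len(diff) - 2):
        size = abs(diff[:i].mean() - diff[i:].mean())
        if size > best_size:
            best_year, best_size = years[i], size
    return best_year, best_size

years = list(range(1960, 1980))
regional = [0.0] * 20                      # flat regional expectation
station = [0.5] * 8 + [-0.5] * 12          # genuine local 1 degree fall in 1968
year, size = step_change_year(station, regional, years)
print(year, round(float(size), 2))         # -> 1968 1.0
```

To such a test, a real localized shift is indistinguishable from a station move; and if neighbouring stations fell at slightly different times, each one in turn stands out against the regional average and gets “corrected”, which is the behaviour described above.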

The Paraguayan data on its own does not impact the global land surface temperature, as it covers a tiny area. Further, it might be an isolated incident, or be offset by instances of understating the warming trend. But what if there are small microclimates that are only picked up by one or two temperature stations? Consider Figure 5, which looks at the BEST adjustments for Encarnacion, one of the eight Paraguayan stations.

Figure 5 – BEST adjustment for Encarnacion.

There is the empirical break in 1968 from the table above, but also empirical breaks in 1981 and 1991 that look to be exactly opposite. What Berkeley Earth call the “estimated station mean bias” is a result of actual deviations in the real data. Homogenisation eliminates much of the richness and diversity in the real-world data. The question is whether this happens consistently. First we need to understand the term “temperature homogenization“.

Kevin Marshall


  1. The UHCNv2 “raw” data is more accurately pre-homogenized data. That is the raw data with some adjustments.

Understanding GISS Temperature Adjustments

A couple of weeks ago something struck me as odd. Paul Homewood had been going on about all sorts of systematic temperature adjustments, showing clearly that the past has been cooled between the UHCN “raw data” and the GISS Homogenised data used in the data sets. When I looked at eight stations in Paraguay, at Reykjavik and at two stations on Spitzbergen, I was able to corroborate this result. Yet Euan Mearns has looked at groups of stations in central Australia and Iceland, in both cases finding no warming trend between the raw and adjusted temperature data. I thought that Mearns must be wrong, so when he published on 26 stations in Southern Africa1, I set out to evaluate those results, to find the flaw. I have been unable to fully reconcile the differences, but the notes I have made on the Southern African stations may enable a greater understanding of temperature adjustments. What I do find is that clear trends in the data across a wide area have been largely removed, bringing the data into line with Southern Hemisphere trends. The most important point to remember is that looking at data in different ways can lead to different conclusions.

Net difference and temperature adjustments

I downloaded three lots of data – raw, GHCNv3 and GISS Homogenised (GISS H) – then replicated Mearns’ method of calculating temperature anomalies. Using 5-year moving averages, in Chart 1 I have mapped the trends in the three data sets.
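The method as described is just anomaly calculation against a base-period mean, followed by a 5-year moving average. A sketch (numpy assumed; the station values and base period below are invented for illustration, since the post does not state Mearns’ exact base period):

```python
import numpy as np

def anomalies(temps, years, base):
    """Anomalies relative to the mean over the base period (inclusive)."""
    temps = np.asarray(temps, float)
    in_base = [base[0] <= y <= base[1] for y in years]
    return temps - temps[in_base].mean()

def moving_average(series, window=5):
    """Centred moving average; trims (window - 1) points in total."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

years = list(range(1960, 1970))
temps = [20.1, 20.3, 20.0, 20.2, 20.4, 20.1, 20.3, 20.5, 20.2, 20.4]
anoms = anomalies(temps, years, base=(1960, 1969))
smooth = moving_average(anoms)   # 6 points survive the 5-year window
```

Running the same two steps over the raw, GHCNv3 and GISS H series for each station is all that is needed to reproduce curves of the kind in Chart 1.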

There is a large divergence prior to 1900, but for the twentieth century the warming trend is not excessively increased. Further, the warming trend from around 1900 is about half of that in the GISTEMP Southern Hemisphere or global anomalies. Looked at in this way, Mearns would appear to have a point. But there has been considerable downward adjustment of the early twentieth-century warming, so Homewood’s claim of cooling the past is also substantiated. This might be the more important aspect, as the adjusted data makes the warming since the mid-1970s appear unusual.

Another feature is that the GHCNv3 data is very close to the GISS Homogenised data. So looking at the GISS H data used in the creation of the temperature data sets is very much the same as looking at the GHCNv3 data that forms the source data for GISS.

But why not mention the pre-1900 data where the divergence is huge?

The number of stations gives a clue in Chart 2.

It was only in the late 1890s that there are more than five stations of raw data. The first year in which there are more data points retained than removed is 1909 (5 against 4).

Removed data would appear to have a role in the homogenisation process. But is it material? Chart 3 graphs five-year moving averages of raw data anomalies, split between the raw data removed and that retained in GISS H, along with the average for all 26 stations.

Where there are a large number of data points, the deletions do not materially affect the larger picture, but do remove some of the extreme “anomalies” from the data set. But where there is very little data available the impact is much larger. That is particularly the case prior to 1910. After 1910, any data deletions pale into insignificance next to the adjustments.

The Adjustments

I plotted the average difference between the raw data and the adjusted data, along with the max and min values, in Chart 4.
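The statistics behind Chart 4 can be reproduced along the following lines. This is a sketch only, with invented numbers, and it assumes the adjustment is taken as homogenised minus raw:

```python
import pandas as pd

def adjustment_stats(raw: pd.DataFrame, homog: pd.DataFrame) -> pd.DataFrame:
    """Per-year adjustment (homogenised minus raw) across stations:
    the mean, max and min, skipping years where a station has no data."""
    diff = homog - raw  # pandas aligns on year index and station columns
    return pd.DataFrame({
        "mean": diff.mean(axis=1),
        "max": diff.max(axis=1),
        "min": diff.min(axis=1),
    })

# Invented two-station example; s2 has no data in 1951
raw = pd.DataFrame({"s1": [10.0, 10.2], "s2": [11.0, None]}, index=[1950, 1951])
homog = pd.DataFrame({"s1": [9.0, 10.2], "s2": [11.5, None]}, index=[1950, 1951])
stats = adjustment_stats(raw, homog)
```

Missing years are simply skipped in each statistic, which matters for the sparse early decades of this data set.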

The max and min of net adjustments are consistent with Euan Mearns’ graph “safrica_deltaT” when flipped upside down and back to front. It shows one difficulty of comparing adjusted data, where the whole series has been shifted. For instance, the maximum figures are dominated by Windhoek, which I looked at a couple of weeks ago. Between the raw data and the GISS Homogenised there was a 3.6°C uniform increase. There were a number of other, lesser differences that I have listed in note 3. Chart 5 shows the impact that adjusting for these shifts has on both the range of the adjustments and the pattern of the average adjustments.

Comparing this with the average variance between the raw data and the GISS Homogenised shows the closer fit of the adjustments to the variance. Please note the difference in scale on Chart 6 from the above!

The earlier period has by far the most deletions of data, hence the poorer fit between the average adjustment and the average variance. After 1945, the consistent pattern of the average adjustment being slightly higher than the average variance is probably due more to a light-touch approach to adjustment corrections than to other data deletions. There might be other reasons for the lack of fit as well, such as the impact of differing lengths of data sets on the anomaly calculations.

Update 15/03/15

Of note is that the adjustments in the early 1890s and around 1930 are about three times the size of the change in trend. This might be partly due to zero net adjustments in 1903 and partly due to the small downward adjustments post-2000.

The consequences of the adjustments

It should be remembered that GISS use this data to create the GISTEMP surface temperature anomalies. In Chart 7 I have amended Chart 1 to include Southern Hemisphere annual mean data on the same basis as the raw data and GISS H.

It seems fairly clear that the homogenisation process has succeeded in bringing the Southern Africa data sets into line with the wider data sets. Whether the early twentieth-century warming and mid-century cooling are outliers that have been correctly cleansed is a subject for further study.

What has struck me in doing this analysis is that looking at individual surface temperature stations becomes nonsensical, as they are effectively grid reference points. Thus comparing the station moves for Reykjavik with the adjustments will not achieve anything. The implications of this insight will have to wait for another day.

Kevin Marshall


1. 26 Data sets

The temperature stations, with the periods for the raw data, are below.

Station | Latitude | Longitude | Population | Raw data period
– | 17.9 S | 31.1 E | – | 1897 – 2011
– | 28.8 S | 24.8 E | – | 1897 – 2011
– | 19.4 S | 29.8 E | – | 1898 – 1970
– | 20.1 S | 28.6 E | – | 1897 – 2011
– | 19.8 S | 34.9 E | – | 1913 – 1991
– | 14.4 S | 28.5 E | – | 1925 – 2011
– | 17.8 S | 25.8 E | – | 1918 – 2010
– | 15.2 S | 23.1 E | < 10,000 | 1923 – 2010
– | 11.8 S | 24.4 E | < 10,000 | 1923 – 1970
– | 13.0 S | 28.6 E | – | 1923 – 1981
Capetown Safr | 33.9 S | 18.5 E | – | 1880 – 2011
– | 31.5 S | 19.8 E | < 10,000 | 1941 – 2011
East London | 33.0 S | 27.8 E | – | 1940 – 2011
– | 22.6 S | 17.1 E | – | 1921 – 1991
– | 26.5 S | 18.1 E | – | 1931 – 2010
– | 29.1 S | 26.3 E | – | 1943 – 2011
De Aar | 30.6 S | 24.0 E | – | 1940 – 2011
– | 31.9 S | 26.9 E | – | 1940 – 1991
– | 26.4 S | 29.5 E | – | 1940 – 1991
– | 18.8 S | 47.5 E | – | 1889 – 2011
– | 18.1 S | 49.4 E | – | 1951 – 2011
Porto Amelia | 13.0 S | 40.5 E | < 10,000 | 1947 – 1991
– | 26.7 S | 27.1 E | – | 1940 – 1991
– | 6.2 S | 39.2 E | – | 1880 – 1960
– | 5.1 S | 32.8 E | – | 1893 – 2011
Dar Es Salaam | 6.9 S | 39.2 E | – | 1895 – 2011

2. Temperature trends

To calculate the trends I used the OLS method, both from the formula and using the EXCEL “LINEST” function, getting the same answer each time. If you are able please check my calculations. The GISTEMP Southern Hemisphere and global data can be accessed direct from the NASA GISS website. The GISTEMP trends are from the skepticalscience trends tool. My figures are:-
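For anyone checking without Excel, the LINEST slope is just the textbook OLS formula. A sketch in Python, with an invented series warming at exactly 0.01°C per year:

```python
import numpy as np

def ols_trend(years, temps):
    """OLS warming trend in degrees C per decade, equivalent to the
    slope returned by Excel's LINEST function."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(temps, dtype=float)
    # slope = sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
    slope = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    # Cross-check against numpy's least-squares fit
    assert np.isclose(slope, np.polyfit(x, y, 1)[0])
    return slope * 10  # per year -> per decade

years = np.arange(1950, 2000)
temps = 14.0 + 0.01 * (years - 1950)
trend = ols_trend(years, temps)  # approximately 0.1 per decade
```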

3. Adjustments to the Adjustments


Station | Recent adjustment | Other adjustment | Other Period
– | – | – | Mid-70s + inter-war
Dar Es Salaam | – | – | About 1999-2002

Windhoek Temperature adjustments

At Euan Mearns’ blog I made reference to my findings, posted in full last night, that the Isfjord Radio weather station had adjustments that varied between +4.0°C in 1917 and -1.7°C in the 1950s. I challenged anyone to find bigger adjustments than that. Euan came back with the example of Windhoek in Namibia, claiming 5°C of adjustments between the “raw” and GISS homogenised data.

I cry foul, as the adjustments are throughout the data set.

That is, the whole of the data set has been adjusted up by about 4°C!

However, comparing the “raw” with the GISS homogenised data using 5-year moving averages (alongside the net adjustments), there are some interesting features.

The overall temperatures have been adjusted up by around 4°C, but

  • From the start of the record in 1920 to 1939 the cooling has been retained, if not slightly amplified.
  • The warming from 1938 to 1947 of 1.5°C has been erased by a combination of deleting the 1940 to 1944 data and reducing the 1945-1948 adjustments by 1.4°C.
  • The 1945-1948 adjustments, along with random adjustments and deletion of data, mostly remove the near 1.5°C of cooling from the late 1940s to the mid-1950s and the slight rebound through to the early 1960s.
  • The early 1970s cooling and the warming to the end of the series in the mid-1980s are largely untouched.

The overall adjustments leave a peculiar picture that cannot be explained by a homogenisation algorithm. The cooling in the 1920s offsets the global trend. Deletion of data and the adjustments in the data counter the peak of warming in the early 1940s in the global data. Natural variations in the raw data between the late 1940s and 1970 appear to have been removed, then the slight early 1970s cooling and the subsequent warming in the raw data left alone. However, the raw data shows average temperatures in the 1980s to be around 0.8°C higher than in the early 1920s. The adjustments seem to have removed this.

This removal of the warming trend tends to disprove something else. There appears to be no clever conspiracy with a secret set of true figures. Rather, there are a lot of people dipping in to adjust already-adjusted data to their view of the world, but nobody really questioning the results. They have totally lost sight of what the real data actually is. Had they compared the final adjusted data with the raw data, they would have realised that the adjustments had eliminated a warming trend of over 1°C per century.

Kevin Marshall

Is there a Homogenisation Bias in Paraguay’s Temperature Data?

Last month Paul Homewood at Notalotofpeopleknowthat looked at the temperature data for Paraguay. His original aim was to explain the GISS claims of 2014 being the hottest year.

One of the regions that has contributed to GISS’ “hottest ever year” is South America, particularly Brazil, Paraguay and the northern part of Argentina. In reality, much of this is fabricated, as they have no stations anywhere near much of this area…

….there does appear to be a warm patch covering Paraguay and its close environs. However, when we look more closely, we find things are not quite as they seem.

In “Massive Tampering With Temperatures In South America“, Homewood looked at the “three genuinely rural stations in Paraguay that are currently operating – Puerto Casado, Mariscal and San Juan.” A few days later, in “All Of Paraguay’s Temperature Record Has Been Tampered With“, he looked at the remaining six stations.

After identifying that all of the three rural stations currently operational in Paraguay had had huge warming adjustments made to their data since the 1950’s, I tended to assume that they had been homogenised against some of the nearby urban stations. Ones like Asuncion Airport, which shows steady warming since the mid 20thC. When I went back to check the raw data, it turns out all of the urban sites had been tampered with in just the same way as the rural ones.

What Homewood does not do is to check the data behind the graphs, to quantify the extent of the adjustment. This is the aim of the current post.

Warning – This post includes a lot of graphs to explain how I obtained my results.

Homewood uses comparisons of two graphs, and he helpfully provides the links to both. The raw GHCN data + USHCN corrections are available here, up until 2011 only. The current data, after the GISS homogeneity adjustment, is available here.

For all nine data sets I downloaded both the raw and the homogenised data. By simple subtraction I found the differences. In any one year they are mostly the same for each month, but for clarity I selected a single month – October, the month of my wife’s birthday.
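The subtraction can be sketched as below, assuming monthly values indexed by year with month-name columns (a layout chosen for illustration) and taking the adjustment as homogenised minus raw. The raw values are invented, but the resulting October adjustments match the Encarnacion figures quoted below:

```python
import pandas as pd

def october_adjustments(raw: pd.DataFrame, homog: pd.DataFrame) -> pd.Series:
    """Adjustment (homogenised minus raw) for October of each year."""
    return (homog - raw)["Oct"]

# Encarnacion-style example: an adjustment of -1.3 in 1967 and +0.1 in 1968
raw = pd.DataFrame({"Oct": [20.0, 20.5]}, index=[1967, 1968])
homog = pd.DataFrame({"Oct": [18.7, 20.6]}, index=[1967, 1968])
adj = october_adjustments(raw, homog)
```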

For the Encarnacion (27.3 S,55.8 W) data sets the adjustments are as follows.

In 1967 the adjustment was -1.3°C, in 1968 +0.1°C. There is cooling of the past.

The average adjustments for all nine data sets are as follows.

This pattern is broadly consistent across all data sets. These are the maximum and minimum adjustments.

However, this issue is clouded by the special adjustments required for the Pedro Juan CA data set. The raw data set has been patched together from four separate files.

Removing it does not affect the average picture.

But it does affect the maximum and minimum adjustments. This shows the consistency in the adjustment pattern.

The data sets are incomplete. Before 1941 there is only one data set – Asuncion Aero. The count for October each year is as follows.

In recent years there are huge gaps in the data, but for the late 1960s when the massive switch in adjustments took place, there are six or seven pairs of raw and adjusted data.

Paul Homewood’s allegation that the past has been cooled is confirmed. However, this does not give a full understanding of the impact on the reported data. To assist, for the full-year mean data, I have created temperature anomalies based on the average anomaly in that year.

The raw data shows a significant cooling of up to 1°C in the late 1960s. If anything, there has been over-compensation in the adjustments. Since 1970, any warming in the adjusted data has come through further adjustments.

Is this evidence of a conspiracy to “hide a decline” in Paraguayan temperatures? I think not. My alternative hypothesis is that this decline, consistent across a number of thermometers, is unexpected. Anybody looking at just one of these data sets recently would assume that a step change in 40-year-old data from a distant third-world country is bound to be incorrect. (Shub has a valid point.) That change goes against the known warming trend for over a century from the global temperature data sets and the near-stationary temperatures of 1950-1975. More importantly, cooling goes against the “known” major driver of recent temperature change – rises in greenhouse gas levels. Do you trust some likely ropey instrument data, or your accumulated knowledge of the world? The clear answer is that the instruments are wrong. Homogenisation is then not to local instruments in the surrounding areas, but to the established expert wisdom about the world. The consequent adjustment cools past temperatures by one degree. The twentieth-century warming is enhanced as a consequence of not believing what the instruments are telling you. The problem is that this step change is replicated over a number of stations. Paul Homewood has shown that it probably extends into Bolivia as well.

But what happens if the converse occurs? What if there is a step rise in some ropey data set from the 1970s or 1980s? It might be large, but not inconsistent with what is known about the world. It is unlikely to be adjusted downwards. So if there have been local or regional step changes in average temperature over time, both up and down, the impact will be to increase the rate of warming if the data analysts believe that the world is warming and human beings are the cause of it.

Further analysis is required to determine the extent of the problem – but not from this unpaid blogger giving up my weekends and evenings.

Kevin Marshall

All first-time comments are moderated. Please also use the comments as a point of contact, stating clearly that this is the case, and I will not click the publish button, subject to the comment not being abusive. I welcome other points of view, though I may give a robust answer.

AndThenTheresPhysics on Paraguayan Temperature Data

The blog andthentheresphysics is a particularly dogmatic and extremist website. Most of the time it provides extremely partisan opinion pieces on climate science, but last week the anonymous blogger had a post, “Puerto Casado”, concerning an article in the Telegraph about Paraguayan temperatures by Christopher Booker. I posted the following comment:

The post only looks at one station in isolation, and does not reference the original source of the claims.

Paul Homewood at notalotofpeopleknowthat looked at all three available rural stations in Paraguay. The data from Mariscal and San Juan Bautista/Misiones had the same pattern of homogenization adjustments as Puerto Casado. That is, cooling of the past, so that instead of the raw data showing the 1960s being warmer than today, it was cooler.

Using his accountancy mindset, Homewood then (after Booker’s article was published) checked the six available urban sites in Paraguay. His conclusion was that

warming adjustments have taken place at every single, currently operational site in Paraguay.

Then he looked at all 14 available stations in neighbouring Bolivia. His conclusion

At every station, bar one, we find the ….. past is cooled and the present warmed.

(The exception was La Paz, where the cooling trend in the raw data had been reduced.)

Homogenization of data means correcting for biases. For a 580,000 square mile area of central South America, strong adjustment biases would appear to have been introduced in a single direction.

Homewood references every single site. Anyone can easily check my summary by searching the following:-

Jan-20 Massive Tampering With Temperatures In South America

Jan-26 All Of Paraguay’s Temperature Record Has Been Tampered With

Jan-30 Cooling The Past In Bolivia

My comment did not contain the hyperlinks or italics. It has been deleted without passing through moderation. The only part of the moderation policy that I believe I fall foul of is the last:

This blog is also turning out to be both more time consuming and more stressful than anticipated. Some moderation may be based purely on whether or not I/we can face dealing with how a particular comment thread is evolving. This is not a public service and so, in general, any moderation decision is final.

The counter-argument from ATTP is

If you look again at the information for this station the trend before adjustments was -1.37°C per century, after quality control it was -0.89°C per century, and after adjusting for the station moves was +1.36°C per century. Also, if you consider the same region for the same months, the trend is +1.37°C per century, and for the country for the same months it is +1.28°C per century. So, not only can one justify the adjustments, the result of the adjustments is consistent with what would be expected for that region and for the country.

Paul Homewood has investigated all the other stations in Paraguay and in neighbouring Bolivia and found similar ad hoc adjustments. It completely undermines ATTP’s arguments. This anonymous individual is wrong. Rather than face dealing with being wrong, ATTP has deleted my comment. He is entitled to his beliefs, and in a free society can proselytize to his heart’s content. But there are boundaries. One of them is in suppressing evidence that undermines the justification for costly and harmful public policies. That is, policies that are harming the poor here in Britain, but (more importantly) can only be remotely successful by destroying the prospect of increasing living standards for over half the world’s population. Paul Homewood and others are increasingly uncovering similar biases in the temperature record in other parts of the world. The underlying data for the global surface temperature sets is in need of a proper, independent audit, to determine the extent of the biases within it. But when the accusation is made that the Paraguayan temperature data set is corrupted, people will point to ATTP’s blog post as evidence that there is but a single instance, and that instance has been debunked. Another boundary concerns a value that many in the criminal justice system also hold dear. The more emotive the subject, the further all concerned must go out of their way to compare and contrast the arguments. That way, the influence of our very human prejudices will be minimized. Again, independent audits will help eliminate this. If ATTP thinks he has all the answers then he will not be afraid to encourage people to look at both sides, evaluate by independent standards, and make up their own minds.

Kevin Marshall

Comment ATTP 310115

Instances of biases in the temperature sets

This will be added to when I get time.

Paul Homewood on San Diego data 30-01-15

Shub Niggareth looks into the Puerto Casado story 29-01-15

Paul Homewood on Reykjavik, Iceland 30-01-15

Jennifer Marohasy letter on Australian data 15-01-15

Update 01-02-15

I have invited a response from ATTP, by posting #comment-46021.


You have deleted two of my comments in the last 24 hours that meet all of your moderation criteria except one – that you cannot face dealing with a challenge. That is your prerogative. However, the first comment, (now posted on my blog) I believe completely undermines your argument. Paul Homewood has shown that the Puerto Casado dataset homogenization did not make it consistent with neighbouring non-homogenized surface temperature stations, but that all the Paraguayan and neighbouring Bolivian surface temperature stations were “homogenized” in the same way. That is, rather than eliminating the biases that local factors can create, the homogenizations, by people far removed from the local situations, effectively corrupted the data set, in a way that fits the data to their view of reality.

I might be wrong in this. But based on your arguments so far I believe that my analysis is better than yours. I also believe that who has the better argument will only be resolved by an independent audit of the adjustments. If you are on the side of truth you would welcome that, just as a prosecutor would welcome the chance to prove their case in court, or a pharmaceutical company would welcome independent testing of their new wonder-drug that could save millions of lives. Even if I am wrong, I will be glad at being refuted by superior arguments, as I will know that to refute my claims will require you to up your game. Humanity will be served by my challenging a weak case and making it stronger. You have generated over 500 comments to your post, so an appeal for help via email should generate some response. If that does not work there are many well-funded organisations that I am sure will rush to your assistance.

There are at least seven options I think you can take.

  1. Ignore me, and pretend nothing has happened. Bad idea. I will start analysing your posts, as you did with Wattsupwiththat, only rather than your pea-shooters firing blanks, I have the heavy artillery with HE shells.
  2. Do an attack post – like desmogblog or Bob Ward of the Grantham Institute might do. Bad idea, I will take that as perverting or suppressing the evidence, and things will get rather rough. After all, I am but a (slightly) manic ex-beancounter, and you have the consensus of science on your side, so why should sending in the PR thugs be necessary unless you are on the losing side?
  3. Get together a response that genuinely ups the game. Win or lose you will have served humanity as I and others will have to rebut you. Engage and all will gain through greater understanding.
  4. Admit that there are other valid points of view. A start would be to release this comment, which will get posted on my blog anyway. I quite accept that you cannot come up with a rebuttal at the drop-of-a-hat. A simple comment that a response will be made sometime this year is fine by me.
  5. Also call for a truly independent audit of the surface temperature set. It could be for your own reasons, and if truly independent, I will support it. If a whitewash, like the enquiries that Gordon Brown ordered into Climategate, an audit will do more harm than good.
  6. Close down your blog and do something else instead. You choose to be anonymous, and I respect that. Walking away is easy.
  7. Admit that you got this one wrong. You will take some flack, but not from me.

Showing Warming when it has Stopped

There has been no statistically significant warming for at least 15 years. Yet some people, like commentator “Michael the Realist”, who is currently trolling Joanne Nova’s blog, are claiming otherwise. For instance

Again look at the following graph.

Now let me explain it to the nth degree.
# The long term trend over the whole period is obviously up.
# The long term trend has pauses and dips due to natural variations but the trend is unchanged.
# The current period is at the top of the trend.
# 2001 to 2010 is the hottest decade on the record despite a preponderance of natural cooling trends. (globally, ocean, land and both hemispheres)
# Hotter than the previous decade of 1991 to 2000 with its preponderance of natural warming events.
# Every decade bar one has been hotter than the previous decade since 1901.

Please explain why the above is true if not AGW with proof.

State of the climate 2012

The three highlighted comments are the ones that this posting addresses.

Using decadal average temperature changes to cover up the standstill.

The latest way to avoid the truth that warming has stopped for 15 years or more is by using decadal averages. This can be illustrated with an approximate model of the data. Assume constant average temperatures from 1960 to 1975, a linear warming of 0.6°C from 1976 to 1998, followed by a further standstill.

The decadal averages are

So, instead of 24 years of warming, there are 4 consecutive decades that are each warmer than the last. The 2000s are warmer than the 1990s simply because there was warming in the 1990s. It is political spin, relying on an ignorance of basic statistics, that is needed to make such claims.
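The arithmetic of that toy model can be checked directly. A sketch, treating the temperatures as anomalies relative to the pre-1976 level:

```python
import numpy as np

def model_temp(year: int) -> float:
    """Approximate model from the post: flat to 1975, a linear 0.6 degree
    rise over 1976-1998, then a standstill."""
    if year <= 1975:
        return 0.0
    if year >= 1998:
        return 0.6
    return 0.6 * (year - 1975) / (1998 - 1975)

# Decadal averages: each decade comes out warmer than the last,
# even though the warming stopped in 1998.
decadal = {d: float(np.mean([model_temp(y) for y in range(d, d + 10)]))
           for d in (1960, 1970, 1980, 1990, 2000)}
```

Because the 1990s mix warming years with standstill years, their average sits below the 2000s average even with zero warming after 1998 – which is exactly the presentational trick described above.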