aTTP falsely attacks Bjorn Lomborg’s “Impact of Current Climate Proposals” Paper

The following is a comment to be posted at Bishop Hill, responding to another attempt by blogger ….andThenThere’sPhysics to undermine the work of Bjorn Lomborg. The previous attempt was discussed here. This post includes a number of links, as well as a couple of illustrative screen captures at the foot of the post.

aTTP’s comment is

In fact, you should read Joe Romm’s post about this. He’s showing that the INDCs are likely to lead to around 3.5C which I think is relative to something like the 1860-1880 mean. This is very similar to the MIT’s 3.7, and quite a bit lower than the RCP8.5 of around 4.5C. So, yes, we all know that the INDCs are not going to do as much as some might like, but the impact is likely to be a good deal greater than that implied by Lomborg who has essentially assumed that we get to 2030 and then simply give up.

Nov 11, 2015 at 9:31 AM | …and Then There’s Physics

My Comment

aTTP at 9.31 refers to Joe Romm’s blog post of Nov 3 “Misleading U.N. Report Confuses Media On Paris Climate Talks“. Romm uses Climate Interactive’s Climate Scoreboard Tool to show that the INDC submissions (if fully implemented) will result in 3.5°C of warming, as against 4.5°C in the non-policy “No Action” Scenario. This gap of 1.0°C is six times the maximum impact of 0.17°C claimed in Lomborg’s new paper. Who is right? What struck me first was that Romm’s first graph, copied straight from Climate Interactive, seems to have a very large estimate for emissions in the “No Action” Scenario. Downloading the underlying data, I find the “No Action” global emissions in 2100 are 139.3 GtCO2e, compared with about 110 GtCO2e in Figure SPM5(a) of the AR5 Synthesis Report for the RCP8.5 high-emissions scenario. But it is the breakdown per country or region that matters.

For the USA, emissions without action are forecast to rise by 40% from 2010 to 2030, in contrast to a rise of just 9% in the period 1990 to 2010. In reality, even without policy, it is likely that emissions will fall and will be no higher in 2100 than in 2010. The “No Action” scenario therefore overestimates US emissions by 2-3 GtCO2e in 2030 and by about 7-8 GtCO2e in 2100.

For China the overestimation is even greater. Emissions will peak during the next decade as China fully industrializes, just as emissions peaked in most European countries in the 1970s and 1980s. Climate Interactive assumes that China’s emissions will peak at 43 GtCO2e in 2090, whereas other estimates put the peak at around 16-17 GtCO2e before 2030.

Together, the overestimates for the US and China account for over half of the 55-60 GtCO2e difference in 2100 emissions between the “No Action” and “Current INDC” scenarios. A very old IT term applies here – GIGO. If aTTP had actually checked the underlying assumptions he would realise that Romm’s rebuttal of Lomborg, based on China’s emissions assumptions (and repeated on his own blog), is as false as claiming that the availability of free condoms is why population peaks.
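To make the arithmetic explicit, here is a minimal sketch using only the figures quoted above. Treating China’s 2100 overestimate as the difference between the assumed 43 GtCO2e peak and the 16-17 GtCO2e alternative estimates is my own simplifying assumption:

```python
# Rough check of the "over half" claim, using only the figures quoted
# in this comment (all GtCO2e); not a fresh download of the Climate
# Interactive data.

scenario_gap_2100 = (55, 60)   # "No Action" minus "Current INDC" in 2100

us_overestimate = (7, 8)                  # excess over a flat-from-2010 US path
china_overestimate = (43 - 17, 43 - 16)   # CI peak of 43 vs a 16-17 peak

low = us_overestimate[0] + china_overestimate[0]    # 7 + 26 = 33
high = us_overestimate[1] + china_overestimate[1]   # 8 + 27 = 35

print(f"US + China overestimate: {low}-{high} GtCO2e")
print(f"Share of the scenario gap: {low / scenario_gap_2100[1]:.0%} to "
      f"{high / scenario_gap_2100[0]:.0%}")   # ~55% to ~64%
```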

Links posted at https://manicbeancounter.com/2015/11/11/attp-falsely-attacks-bjorn-lomborgs-impact-of-current-climate-proposals-paper/

Kevin Marshall

 

Figures referred to (but not referenced) in the comment above

Figure 1: Climate Interactive’s graph, referenced by Joe Romm.


Figure 2: Reproduction of Figure SPM5(a) from Page 9 of the AR5 Synthesis Report.

 

Update – posted the following to ATTP’s blog



 

Defining “Temperature Homogenisation”

Summary

The standard definition of temperature homogenisation is of a process that cleanses the temperature data of measurement biases, to leave only variations caused by real climatic or weather variations. This is at odds with the GHCN & GISS adjustments, which delete some data and add in other data as part of the homogenisation process. A more general definition is to make the data more homogeneous, for the purposes of creating regional and global average temperatures. This is only compatible with the standard definition if one assumes that there are no real differences in climatic trends within the homogenisation area. From various studies it is clear that there are cases where this assumption does not hold. The likely impacts include:-

  • Homogenised data for a particular temperature station will not be the cleansed data for that location. Instead it becomes a grid reference point, encompassing data from the surrounding area.
  • The density of temperature stations will affect the degree to which homogenisation smooths out real climatic fluctuations.

Whether this failure of the assumption is limited to a number of isolated instances with near zero impact on global temperature anomalies is an empirical matter that will be the subject of my next post.

Introduction

A common feature of many concepts involved with climatology, the associated policies and sociological analyses of non-believers, is a failure to clearly understand the terms used. In the past few months it has become evident to me that this failure of understanding extends to the term temperature homogenisation. In this post I look at the ambiguity of the standard definition against the actual practice of homogenising temperature data.

The Ambiguity of the Homogenisation Definition

The World Meteorological Organisation in its 2004 Guidelines on Climate Metadata and Homogenization1 wrote this explanation.

Climate data can provide a great deal of information about the atmospheric environment that impacts almost all aspects of human endeavour. For example, these data have been used to determine where to build homes by calculating the return periods of large floods, whether the length of the frost-free growing season in a region is increasing or decreasing, and the potential variability in demand for heating fuels. However, for these and other long-term climate analyses –particularly climate change analyses– to be accurate, the climate data used must be as homogeneous as possible. A homogeneous climate time series is defined as one where variations are caused only by variations in climate.

Unfortunately, most long-term climatological time series have been affected by a number of nonclimatic factors that make these data unrepresentative of the actual climate variation occurring over time. These factors include changes in: instruments, observing practices, station locations, formulae used to calculate means, and station environment. Some changes cause sharp discontinuities while other changes, particularly change in the environment around the station, can cause gradual biases in the data. All of these inhomogeneities can bias a time series and lead to misinterpretations of the studied climate. It is important, therefore, to remove the inhomogeneities or at least determine the possible error they may cause.

That is, temperature homogenisation is necessary to isolate and remove what Steven Mosher has termed measurement biases2 from the real climate signal. But how does this isolation occur?

Venema et al 20123 states the issue more succinctly.

The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities (Conrad and Pollak, 1950). In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations. (Italics mine)

Blogger …and Then There’s Physics (ATTP) partly recognizes these issues may exist in his stab at explaining temperature homogenisation4.

So, it all sounds easy. The problem is, we didn’t do this and – since we don’t have a time machine – we can’t go back and do it again properly. What we have is data from different countries and regions, of different qualities, covering different time periods, and with different amounts of accompanying information. It’s all we have, and we can’t do anything about this. What one has to do is look at the data for each site and see if there’s anything that doesn’t look right. We don’t expect the typical/average temperature at a given location at a given time of day to suddenly change. There’s no climatic reason why this should happen. Therefore, we’d expect the temperature data for a particular site to be continuous. If there is some discontinuity, you need to consider what to do. Ideally you look through the records to see if something happened. Maybe the sensor was moved. Maybe it was changed. Maybe the time of observation changed. If so, you can be confident that this explains the discontinuity, and so you adjust the data to make it continuous.

What if there isn’t a full record, or you can’t find any reason why the data may have been influenced by something non-climatic? Do you just leave it as is? Well, no, that would be silly. We don’t know of any climatic influence that can suddenly cause typical temperatures at a given location to suddenly increase or decrease. It’s much more likely that something non-climatic has influenced the data and, hence, the sensible thing to do is to adjust it to make the data continuous. (Italics mine)
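Mechanically, what these passages describe can be reduced to three steps: form a difference series between a candidate station and a neighbour, search it for a step change, and shift one segment to remove the step. The sketch below illustrates that principle only; the invented data, the single-breakpoint search and the noise levels are my own assumptions, not the actual GHCN pairwise algorithm.

```python
import numpy as np

def relative_homogenise(candidate, reference):
    """Find the single largest step in the candidate-minus-reference
    difference series and remove it by shifting the earlier segment.
    Illustrates the principle only; real pairwise algorithms test many
    breakpoints against many neighbours, with significance tests."""
    diff = candidate - reference
    best_k, best_step = None, 0.0
    for k in range(2, len(diff) - 2):          # candidate breakpoints
        step = diff[k:].mean() - diff[:k].mean()
        if abs(step) > abs(best_step):
            best_k, best_step = k, step
    adjusted = candidate.copy()
    adjusted[:best_k] += best_step             # align earlier segment
    return adjusted, best_k, best_step

# Invented annual anomalies: a shared climate signal, plus a -0.8C bias
# in the candidate before year 30 (e.g. an undocumented station move).
rng = np.random.default_rng(0)
signal = np.linspace(0.0, 0.5, 60)
reference = signal + rng.normal(0, 0.1, 60)
candidate = signal + rng.normal(0, 0.1, 60)
candidate[:30] -= 0.8

adjusted, k, step = relative_homogenise(candidate, reference)
print(f"breakpoint found at year {k}, step removed: {step:+.2f}C")
```

Note that the procedure cannot tell whether the step it removes is a measurement bias or a real, localised climatic shift; it removes it either way. That is the crux of what follows.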

The assumption that nearby temperature stations have the same (or very similar) climatic signal would, if true, mean that homogenisation cleanses the data of the impurities of measurement biases. But only a cursory glance is given to the data. For instance, when Kevin Cowtan gave an explanation of the fall in average temperatures at Puerto Casado, neither he, nor anyone else, checked whether the explanation stacked up beyond confirming that there had been a documented station move at roughly that time. Yet the station move is at the end of the drop in temperatures, and a few minutes of checking would have confirmed that other nearby stations exhibit very similar temperature falls5. If you have a preconceived view of how the data should be, then a superficial explanation that conforms to that preconception will be sufficient. If you accept the authority of experts over personally checking for yourself, then the claim by experts that there is not a problem is sufficient. Those with no experience of checking the outputs following processing of complex data will not appreciate the issues involved.

However, this definition of homogenisation appears to be different from that used by GHCN and NASA GISS. When Euan Mearns looked at temperature adjustments in the Southern Hemisphere and in the Arctic6, he found numerous examples in the GHCN and GISS homogenisations of infilling of some missing data and, to a greater extent, deletion of huge chunks of temperature data. For example, this graphic is from Mearns’ spreadsheet of adjustments between GHCNv2 (raw data + adjustments) and GHCNv3 (homogenised data) for 25 stations in Southern South America. The yellow cells are where V2 data exist but V3 data do not; the green cells are where V3 data exist but V2 data do not.

Definition of temperature homogenisation

A more general definition, one that encompasses the GHCN / GISS adjustments, is of broadly making the data homogeneous. It is not done by simply blending the data together and smoothing it out. Homogenisation also adjusts anomalous data as a result of pairwise comparisons between local temperature stations, or, in the case of extreme differences, the GHCN / GISS process deletes the most anomalous data. This is a much looser and broader process than the homogenisation of milk, or putting some food through a blender.

I cover the definition in more depth in the appendix.

The Consequences of Making Data Homogeneous

Cleansing the data in order to make it more homogeneous involves a distinction that many miss. It relies on the strong assumption that there are no climatic differences between the temperature stations in the homogenisation area.

Homogenisation is aimed at adjusting for measurement biases, to give a climatic reading for the location of the temperature station that is a closer approximation to what the reading would have been without those biases. Under the strong assumption, making the data homogeneous is identical to removing the non-climatic inhomogeneities. Cleansed of these measurement biases, the temperature data is then both the average temperature readings that would have been generated had the temperature station been free of biases, and representative of the surrounding area. This latter aspect is necessary to build up a global temperature anomaly, which is constructed by dividing the surface into a grid. Homogenisation, in the sense of making the data more homogeneous by blending, is an inappropriate term. All that is happening is adjusting for anomalies within the data through comparisons with local temperature stations (the GHCN / GISS method) or with an expected regional average (the Berkeley Earth method).
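To illustrate the gridding step, here is a minimal sketch; the station values, the 5° cell size and the cosine weighting are illustrative assumptions, not the actual GISS or Berkeley Earth procedure.

```python
import math

# Hypothetical homogenised station anomalies: (latitude, longitude, C).
stations = [(52.1, -1.3, 0.42), (51.8, -0.9, 0.38),
            (-23.4, -57.2, 0.05), (-22.9, -58.0, 0.11)]

CELL = 5.0  # grid cell size in degrees

def cell_of(lat, lon):
    """Index of the CELL x CELL degree grid cell containing a point."""
    return (math.floor(lat / CELL), math.floor(lon / CELL))

# Step 1: average the homogenised stations within each grid cell.
cells = {}
for lat, lon, anom in stations:
    cells.setdefault(cell_of(lat, lon), []).append(anom)

# Step 2: combine cell averages, weighting by the cosine of the cell's
# central latitude, since cells cover less area towards the poles.
num = den = 0.0
for (ilat, _), anoms in cells.items():
    weight = math.cos(math.radians((ilat + 0.5) * CELL))
    num += weight * sum(anoms) / len(anoms)
    den += weight

print(f"area-weighted mean anomaly: {num / den:+.2f}C")  # ~ +0.21C
```

The point of the sketch is that the gridded average only means what it claims to mean if each homogenised station genuinely represents its cell.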

But if the strong assumption does not hold, homogenisation will adjust these climate differences, and will to some extent fail to eliminate the measurement biases. Homogenisation is in fact made more necessary if movements in average temperatures are not the same and the spread of temperature data is spatially uneven. Then homogenisation needs to not only remove the anomalous data, but also make specific locations more representative of the surrounding area. This enables any imposed grid structure to create an estimated average for that area through averaging the homogenized temperature data sets within the grid area. As a consequence, the homogenised data for a temperature station will cease to be a closer approximation to what the thermometers would have read free of any measurement biases. As homogenisation is calculated by comparisons of temperature stations beyond those immediately adjacent, there will be, to some extent, influences of climatic changes beyond the local temperature stations. The consequences of climatic differences within the homogenisation area include the following.

  • The homogenised temperature data for a location could appear largely unrelated to the original data or to the data adjusted for known biases. This could explain the homogenised Reykjavik temperature, where Trausti Jonsson of the Icelandic Met Office, who had been working with the data for decades, could not understand the GHCN/GISS adjustments7.
  • The greater the density of temperature stations in relation to the climatic variations, the less that climatic variations will impact on the homogenisations, and the greater will be the removal of actual measurement biases. Climate variations are unlikely to be much of an issue with the Western European and United States data. But on the vast majority of the earth’s surface, whether land or sea, coverage is much sparser.
  • If the climatic variation at a location is of different magnitude to that of other locations in the homogenisation area, but over the same time periods and in the same direction, then the data trends will be largely retained. For instance, in Svalbard the warming trends of the early twentieth century and from the late 1970s were much greater than elsewhere, so were adjusted downwards8.
  • If there are differences in the rate of temperature change, or in the time periods of similar changes, then any “anomalous” data due to climatic differences at the location will be eliminated or severely adjusted, on the same basis as “anomalous” data due to measurement biases. For instance, in a large part of Paraguay average temperatures fell by around 1°C at the end of the 1960s. Because this phenomenon did not occur in the surrounding areas, both the GHCN and Berkeley Earth homogenisation processes adjusted out this trend. As a consequence of this adjustment, a mid-twentieth century cooling in the area was effectively adjusted out of the data9.
  • If a large proportion of temperature stations in a particular area have consistent measurement biases, then homogenisation will retain those biases, as it will not appear anomalous within the data. For instance, much of the extreme warming post 1950 in South Korea is likely to have been as a result of urbanization10.

Other Comments

Homogenisation is just part of the process of adjusting data for the twin purposes of attempting to correct for biases and of building regional and global temperature anomalies. It cannot, for instance, correct for time of observation biases (TOBS). This needs to be done prior to homogenisation. Neither will homogenisation build a global temperature anomaly. Extrapolating from the limited data coverage is a further process, whether for fixed temperature stations on land or for the ship measurements used to calculate the ocean surface temperature anomalies. This extrapolation has further difficulties. For instance, in a previous post11 I covered a potential issue with the Gistemp proxy data for Antarctica prior to permanent bases being established on the continent in the 1950s. Making the data homogeneous is but the middle part of a wider process.

Homogenisation is a complex process. The Venema et al 20123 paper on the benchmarking of homogenisation algorithms demonstrates that different algorithms produce significantly different results. What is clear from the original posts on the subject by Paul Homewood and the more detailed studies by Euan Mearns and Roger Andrews at Energy Matters, is that the whole process of going from the raw monthly temperature readings to the final global land surface average trends has thrown up some peculiarities. In order to determine whether they are isolated instances that have near zero impact on the overall picture, or point to more systematic biases that result from the points made above, it is necessary to understand the data available in relation to the overall global picture. That will be the subject of my next post.

Kevin Marshall

Notes

  1. GUIDELINES ON CLIMATE METADATA AND HOMOGENIZATION by Enric Aguilar, Inge Auer, Manola Brunet, Thomas C. Peterson and Jon Wieringa
  2. Steven Mosher – Guest post : Skeptics demand adjustments 09.02.2015
  3. Venema et al 2012 – Venema, V. K. C., Mestre, O., Aguilar, E., Auer, I., Guijarro, J. A., Domonkos, P., Vertacnik, G., Szentimrey, T., Stepanek, P., Zahradnicek, P., Viarre, J., Müller-Westermeier, G., Lakatos, M., Williams, C. N., Menne, M. J., Lindau, R., Rasol, D., Rustemeier, E., Kolokythas, K., Marinova, T., Andresen, L., Acquaotta, F., Fratianni, S., Cheval, S., Klancar, M., Brunetti, M., Gruber, C., Prohom Duran, M., Likso, T., Esteban, P., and Brandsma, T.: Benchmarking homogenization algorithms for monthly data, Clim. Past, 8, 89-115, doi:10.5194/cp-8-89-2012, 2012.
  4. …and Then There’s Physics – Temperature homogenisation 01.02.2015
  5. See my post Temperature Homogenization at Puerto Casado 03.05.2015
  6. For example

    The Hunt For Global Warming: Southern Hemisphere Summary

    Record Arctic Warmth – in 1937

  7. See my post Reykjavik Temperature Adjustments – a comparison 23.02.2015
  8. See my post RealClimate’s Mis-directions on Arctic Temperatures 03.03.2015
  9. See my post Is there a Homogenisation Bias in Paraguay’s Temperature Data? 02.08.2015
  10. NOT A LOT OF PEOPLE KNOW THAT (Paul Homewood) – UHI In South Korea Ignored By GISS 14.02.2015

Appendix – Definition of Temperature Homogenisation

When discussing temperature homogenisations, nobody asks what the term actually means. In my house we consume homogenised milk. This is the same as the pasteurized milk I drank as a child except for one aspect. As a child I used to compete with my siblings to be the first to open a new pint bottle, as it had the cream on top. The milk now does not have this cream, as it is blended in, or homogenized, with the rest of the milk. Temperature homogenizations are different, involving changes to figures, along with (at least with the GHCN/GISS data) filling the gaps in some places and removing data in others1.

But rather than note the differences, it is better to consult an authoritative source. From Dictionary.com, the definitions of homogenize are:-

verb (used with object), homogenized, homogenizing.

  1. to form by blending unlike elements; make homogeneous.
  2. to prepare an emulsion, as by reducing the size of the fat globules in (milk or cream) in order to distribute them equally throughout.
  3. to make uniform or similar, as in composition or function:

    to homogenize school systems.

  4. Metallurgy. to subject (metal) to high temperature to ensure uniform diffusion of components.

Applying the dictionary definitions, data homogenization in science is about blending various elements together to make them uniform. It is not about additions to or subtractions from the data set, or adjusting the data. This is particularly true in chemistry.

For GHCN and NASA GISS, temperature data homogenization involves removing or adjusting elements in the data that are markedly dissimilar from the rest. It can also mean infilling data that was never measured. The verb homogenize does not fit the processes at work here. This has led some, like Paul Homewood, to refer to the process as data tampering or worse. A better idea is to look further at the dictionary.

Again from Dictionary.com, the first two definitions of the adjective homogeneous are:-

  1. composed of parts or elements that are all of the same kind; not heterogeneous:

a homogeneous population.

  2. of the same kind or nature; essentially alike.

I would suggest that temperature homogenization is a loose term for describing the process of making the data more homogeneous, that is, for smoothing out the data in some way. A false analogy is when I make a vegetable soup. After cooking I end up with a stock containing lumps of potato, carrot, leeks etc. I put it through the blender to get an even consistency. I end up with the same weight of soup before and after. A similar process of getting out the same as was put in is clearly not what is happening to temperatures. The aim of making the data homogeneous is both to remove anomalous data and to blend the data together.

ATTP on Lomborg’s Australian Funding

Blogger …and then there’s physics (ATTP) joins in the hullabaloo about Bjorn Lomborg’s Consensus Centre getting A$4m of funding to set up a branch at the University of Western Australia. He says

However, ignoring that Lomborg appears to have a rather tenuous grasp on the basics of climate science, my main issue with what he says is its simplicity. Take all the problems in the world, determine some kind of priority ordering, and then start at the top and work your way down – climate change, obviously, being well down the list. It’s as if Lomborg doesn’t realise that the world is a complex place and that many of the problems we face are related. We can’t necessarily solve something if we don’t also try to address many of the other issues at the same time. It’s this kind of simplistic linear thinking – and that some seem to take it seriously – that irritates me most.

The comment about climatology is just a lead-in. ATTP is expressing a normative view about the interrelationship of problems, along with beliefs about the solution. What he is rejecting as simplistic is the method of identifying the interrelated issues separately, understanding the relative size of the problems along with the effectiveness and availability of possible solutions, and then prioritizing them.

This errant notion is exacerbated when ATTP implies that Lomborg has personally received the funding. Lomborg heads up the Copenhagen Consensus Centre, and it is the Centre that has received the funding to set up a branch in Australia. This description is from their website

We work with some of the world’s top economists (including 7 Nobel Laureates) to research and publish the smartest solutions to global challenges. Through social, economic and environmental benefit-cost research, we show policymakers and philanthropists how to do the most good for each dollar spent.

It is about bringing together some of the best minds available to understand the problems of the world, and then to persuade those who are able to do something about the issues. It is not Lomborg’s personal views that are represented here, but people with different views and from different specialisms coming together to argue and debate. Anyone who has properly studied economics will soon learn that there is a whole range of different views, many of them plausible. Some glimpse that economic systems are highly interrelated in ways that cannot be remotely specified, leading to the conclusion that any attempt to create a computer model of an economic system will be a highly distorted simplification. At a more basic level they will have learnt that in the real world there are 200 separate countries, all with different priorities. In many there is a whole range of voiced opinions about what the priorities should be at national, regional and local levels. To address all these interrelated issues together would require the modeller to be omniscient and omnipresent. To actually enact the modeller’s preferred policies over seven billion people would require a level of omnipotence that Stalin could only dream of.

This lack of understanding of economics and policy making is symptomatic of those who believe in climate science. They fail to realize that models are only an attempted abstraction of the real world. Academic economists have long recognized the abstract nature of the subject, along with the presence of strong beliefs about it. As a result, in the last century many drew upon the rapidly developing philosophy of science to distinguish whether theories were imparting knowledge about the world or merely confirming beliefs. The most influential by some distance was Milton Friedman. In his seminal essay The Methodology of Positive Economics he suggested the way round this problem was to develop bold yet simple predictions from the theory that, despite being unlikely, nevertheless come true. I would suggest that you do not need to be too dogmatic in the application. The bold predictions do not need to be right 100% of the time, but an entire research programme should establish a good track record over a sustained period. In climatology the bold predictions, which would show a large and increasing problem, have been almost uniformly wrong. For instance:-

  • The rate of melting of the polar ice caps has not accelerated.
  • The rate of sea level rise has not accelerated in the era of satellite measurements.
  • Arctic sea ice did not disappear in the summer of 2013.
  • Hurricanes did not get worse following Katrina. Instead there followed the quietest period on record.
  • Snow has not become a thing of the past in England, nor in Germany.

Other examples have been compiled by Pierre Gosselin at Notrickszone, as part of his list of climate scandals.

Maybe it is different in climatology. The standard response is that the reliability of the models is based on the strength of the consensus supporting them. This view is not proclaimed by ATTP. Instead, from the name, it would appear he believes the reliability can be obtained from the basic physics. I have not done any physics since high school and have forgotten most of what I learnt. So in discerning what is reality in that area I have to rely on the opinions of physicists themselves. One of the greatest physicists since Einstein was Richard Feynman. He said, fifty years ago, in a lecture on the Scientific Method

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Climate models, like economic models, will always be vague. This is not due to being poorly expressed (though they often are) but due to the nature of the subject. Short of rejecting climate models as utter nonsense, I would suggest the major way of evaluating whether they say something distinctive about the real world is their predictive ability. But a consequence of theories always being vague in both economics and climate is that you will not be able to use the models as a forecasting tool. As Freeman Dyson (who narrowly missed sharing a Nobel Prize with Feynman) recently said of climate models:-

These climate models are excellent tools for understanding climate, but that they are very bad tools for predicting climate. The reason is simple – that they are models which have very few of the factors that may be important, so you can vary one thing at a time ……. to see what happens – particularly carbon dioxide. But there are a whole lot of things that they leave out. ….. The real world is far more complicated than the models.

This implies that when ATTP criticizes somebody else’s work that uses a simple model, or a third person’s commentary on it, he is likely criticizing them for looking at a highly complex issue in another way. Whether his way is better, worse or just different we have no way of knowing. All we can infer from his total rejection of the ideas of experts in a field of which he lacks even a basic understanding is that he has no basis of knowing either.

To be fair, I have not looked at the earlier part of ATTP’s article. For instance he says:-

If you want to read a defense of Lomborg, you could read Roger Pielke Jr’s. Roger’s article makes the perfectly reasonable suggestion that we shouldn’t demonise academics, but fails to acknowledge that Lomborg is not an academic by any standard definition…….

The place to look for a “standard definition” of a word is a dictionary. The noun definitions are

noun

8. a student or teacher at a college or university.

9. a person who is academic in background, attitudes, methods, etc.:

He was by temperament an academic, concerned with books and the arts.

10. (initial capital letter) a person who supports or advocates the Platonic school of philosophy.

This is Bjorn Lomborg’s biography from the Copenhagen Consensus website:-

Dr. Bjorn Lomborg is Director of the Copenhagen Consensus Center and Adjunct Professor at University of Western Australia and Visiting Professor at Copenhagen Business School. He researches the smartest ways to help the world, for which he was named one of TIME magazine’s 100 most influential people in the world. His numerous books include The Skeptical Environmentalist, Cool It, How to Spend $75 Billion to Make the World a Better Place and The Nobel Laureates’ Guide to the Smartest Targets for the World 2016-2030.

Lomborg meets both definitions 8 & 9, which seem to be pretty standard. As with John Cook and William Connolley defining the word sceptic, it would appear that ATTP rejects the authority of those who write the dictionary. Or, more accurately, does not even bother to look. As with rejecting the authority of those who understand economics, it suggests ATTP uses the authority of his own dogmatic beliefs as the standard by which to evaluate others.

Kevin Marshall

The Propaganda methods of ….and Then There’s Physics on Temperature Homogenisation

There has been a rash of blog articles about temperature homogenisations that is challenging the credibility of the NASA GISS temperature data. This has led to attempts by the anonymous blogger andthentheresphysics (ATTP) to crudely deflect from the issues identified. It is a propagandist’s trick of turning people’s perspectives. Instead of a dispute about some scientific data, ATTP turns the affair into a dispute between those with authority and expertise in scientific analysis and a few crackpot conspiracy theorists.

The issues on temperature homogenisation are to do with the raw surface temperature data and the adjustments made to remove anomalies or biases within the data. “Homogenisation” is a term used for the process of adjusting the anomalous data into line with that from the surrounding data.

The blog articles can be split into three categories. The primary articles are those that make direct reference to the raw data set and the surrounding adjustments. The secondary articles refer to the primary articles, and comment upon them. The tertiary articles are directed at the secondary articles, making little or no reference to the primary articles. I perceive the two ATTP articles as fitting into the scheme below.

Primary Articles

The source of complaints about temperature homogenisations is Paul Homewood at his blog notalotofpeopleknowthat. The source of the data is NASA’s Goddard Institute for Space Studies (GISS) database. For any weather station GISS provide nice graphs of the temperature data. The current data, after GISS homogeneity adjustment, is available here, and the raw GHCN data + USHCN corrections is available here, up until 2011 only. Homewood’s primary analysis was to show the raw and adjusted data side by side.

20/01/15 Massive Tampering With Temperatures In South America

This looked at all three available rural stations in Paraguay. The data from all three – Puerto Casado, Mariscal and San Juan Bautista/Misiones – had the same pattern of homogenization adjustments. That is, cooling of the past, so that instead of the raw data showing the 1960s as warmer than today, it was cooler. What could they have been homogenized to?

26/01/15 All Of Paraguay’s Temperature Record Has Been Tampered With

This checked the six available urban sites in Paraguay. Homewood’s conclusion was that

warming adjustments have taken place at every single, currently operational site in Paraguay.

How can homogenization adjustments all go the same way? There is no valid reason for making such adjustments, as there is no reference point for them.

29/01/15 Temperature Adjustments Around The World

Homewood details other examples from Southern Greenland, Iceland, Northern Russia, California, Central Australia and South-West Ireland. Instead of comparing the raw with the adjusted data, he compared the old adjusted data with the recent data. Adjustment decisions are changing over time, making the adjusted data sets give even more pronounced warming trends.

30/01/15 Cooling The Past In Bolivia

Then he looked at all 14 available stations in neighbouring Bolivia. His conclusion

At every station, bar one, we find the ….. past is cooled and the present warmed.

(The exception was La Paz, where the cooling trend in the raw data had been reduced.)
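Taking the posts above together gives 23 stations (9 in Paraguay, 14 in Bolivia), of which all but one were adjusted to cool the past. One crude way to gauge how surprising that is: assume, purely for illustration, that an unbiased adjustment process is equally likely to cool or warm the past at each station, and that stations are independent (they are not – homogenisation compares neighbours), then ask how likely 22 or more same-direction adjustments out of 23 would be.

```python
from math import comb

# Crude gauge of surprise under an illustrative null hypothesis: an
# unbiased process cools or warms the past with equal probability at
# each station, independently (a simplification - homogenisation
# compares neighbours, so adjustments are not truly independent).
n, k = 23, 22   # 9 Paraguayan + 14 Bolivian stations; all but one cooled
p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(>= {k} of {n} cooling the past) = {p:.2e}")  # ~ 2.9e-06
```

Under those stated assumptions such a one-sided pattern would be vanishingly unlikely; the real question is whether the shared method, rather than chance, produces it.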

Why choose Paraguay in the first place? In the first post, Homewood explains that within a NOAA temperature map for the period 1981-2010 there appeared to be a warming hotspot around Paraguay. Being a former accountant, he checked the underlying data to see if the hotspot existed in the data. Finding an anomaly in one area, he checked more widely.

The other primary articles are

26/01/15 Kevin Cowtan NOAA Paraguay Data

This YouTube video was made in response to Christopher Booker’s article in the Telegraph, a secondary source of data. Cowtan assumes Booker is the primary source, and that he is criticizing NOAA data. A screen shot of the first paragraph shows both assumptions are untrue.

Further, if you read down the article, Cowtan’s highlighting of the data from one weather station is also misleading. Booker points to three stations; Cowtan illustrates just one.

Despite this, it still ranks as a primary source, as there are direct references to the temperature data and the adjustments. They are not GISS adjustments, but might be the same.

29/01/15 Shub Niggurath – The Puerto Casado Story

Shub looked at the station moves. He found that the metadata for the station is a mess, so there is no actual evidence of the location changing. But, Shub reasons, the fact that there was a step change in the data meant that it moved, and the fact that it moved meant there was a change. Shub is a primary source as he looks at the reason for the adjustment.

 

Secondary Articles

The three secondary articles by Christopher Booker, James Delingpole and BishopHill are just the connectors in this story.

 

Tertiary articles of “…and Then There’s Physics”

25/01/15 Puerto Casado

This looked solely at Booker’s article. It starts

Christopher Booker has a new article in the The Telegraph called Climategate, the sequel: How we are STILL being tricked with flawed data on global warming. The title alone should be enough to convince anyone sensible that it isn’t really worth reading. I, however, not being sensible, read it and then called Booker an idiot on Twitter. It was suggested that rather than insulting him, I should show where he was wrong. Okay, this isn’t really right, as there’s only so much time and effort available, and it isn’t really worth spending it rebutting Booker’s nonsense.

However, thanks to a tweet from Ed Hawkins, it turns out that it is really easy to do. Booker shows data from a site in Paraguay (Puerto Casado) in which the data was adjusted from a trend of -1.37°C per century to +1.36°C per century. Shock, horror, a conspiracy?

 

ATTP is highlighting an article, but is strongly discouraging anybody from reading it. That is why the referral is a red line in the graphic above. He then says he is not going to provide a rebuttal. ATTP is true to his word and does not provide one. Basically he is saying “Don’t look at that rubbish, look at the real authority“. But he is wrong for a number of reasons.

  1. ATTP provides misdirection to an alternative data source. Booker quite clearly states that the source of the data is the NASA GISS temperature set. ATTP cites Berkeley Earth.
  2. Booker clearly states that there are three rural temperature stations, spatially spread, that show similar results. ATTP’s argument that a single site was homogenized with the others in the vicinity falls over.
  3. This was further undermined by Paul Homewood’s posting on the same day on the other 6 available sites in Paraguay, all giving similar adjustments.
  4. It was further undermined by Paul Homewood’s posting on 30th January on all 14 sites in Bolivia.

The story is not of a wizened old hack making some extremist claims without any foundation, but of a retired accountant seeing an anomaly, and exploring it. In audit, if there is an issue then you keep exploring it until you can bottom it out. Paul Homewood has found an issue, found it is extensive, but is still far from finding the full extent or depth. ATTP, when confronted by my summary of the 23 stations that corroborate each other, chose to delete it. He has now issued an update.

Update 4/2/2015 : It’s come to my attention that some are claiming that this post is misleading my readers. I’m not quite sure why, but it appears to be related to me not having given proper credit for the information that Christopher Booker used in his article. I had thought that linking to his article would allow people to establish that for themselves, but – just to be clear – the idiotic, conspiracy-laden, nonsense originates from someone called Paul Homewood, and not from Christopher Booker himself. Okay, everyone happy now? ☺

ATTP cannot accept that he is wrong. He has totally misrepresented the arguments. When confronted with alternative evidence he resorts to vitriolic claims. If someone is on the side of truth and science, they will encourage people to compare and contrast the evidence. He seems to have forgotten the advice about what to do when in a hole…..

01/02/15 Temperature homogenisation

ATTP’s article on Temperature Homogenisation starts

Amazing as it may seem, the whole tampering with temperature data conspiracy has managed to rear its ugly head once again. James Delingpole has a rather silly article that even Bishop Hill calls interesting (although, to be fair, I have a suspicion that in “skeptic” land, interesting sometimes means “I know this is complete bollocks, but I can’t bring myself to actually say so”). All of Delingpole’s evidence seems to come from “skeptic” bloggers, whose lack of understanding of climate science seems – in my experience – to be only surpassed by their lack of understanding of the concept of censorship ☺.

ATTP starts with a presumption of being on the side of truth, with no fault possible on his side. Any objections are due to a conscious effort to deceive. The theory of cock-up, or of people simply not checking their data, does not seem to have occurred to him. Then there is a link to Delingpole’s secondary article, but calling it “silly” again deters readers from looking for themselves. If they did, the readers would be presented with flashing images of all the “before” and “after” GISS graphs from Paraguay, along with links to the 6 global sites and Shub’s claims that there is a lack of evidence for the Puerto Casado site being moved. Delingpole was not able to include the more recent evidence from Bolivia, which further corroborates the story.

He then makes a tangential reference to his deleting my previous comments, though I never once used the term “censorship”, nor did I tag the article “climate censorship”, as I have done to some others. Like on basic physics, ATTP claims to have a superior understanding of censorship.

There are then some misdirects.

  • The long explanation of temperature homogenisation makes some good points. But what it does not do is explain that the size and direction of any adjustment is an opinion, and as such can be wrong. It is a misdirection to say that the secondary sources are against any adjustments. They are against adjustments that create biases within the data.
  • Quoting Richard Betts’s comment on Booker’s article about negative adjustments in sea temperature data is a misdirection, as Booker (a secondary source) was talking about Paraguay, a land-locked country.
  • Referring to Cowtan’s alternative analysis is another misdirect, as pointed out above. Upon reflection, ATTP may find it a tad embarrassing to have this as his major source of authority.

Conclusions

When I studied economics, many lecturers said that if you want to properly understand an argument or debate you need to look at the primary sources, and then compare and contrast the arguments. Although the secondary sources were useful background, particularly in a contentious issue, it is the primary sources on all sides that enable a rounded understanding. Personally, being challenged by viewpoints that I disagreed with enhanced my overall understanding of the subject.

ATTP has managed to turn this on its head. He uses methods akin to those of the crudest propagandists of the last century. They started from deeply prejudiced positions; attacked an opponent’s integrity and intelligence; and then deflected away to what they wanted to say. They never gave the slightest hint that one side might be at fault, or any acknowledgement that the other might have a valid point. For ATTP, and similar modern propagandists, rather than having a debate about the quality of evidence and science, it becomes a war of words between “deniers“, “idiots” and “conspiracy theorists” on one side, and the basic physics and the overwhelming evidence that supports that science on the other.

If there is any substance to these allegations concerning temperature adjustments, then for dogmatists like ATTP they become a severe challenge to their view of the world. If temperature records have systematic adjustment biases then climate science loses its grip on reality. The climate models cease to be about understanding the real world, and instead conform to people’s flawed opinions about it.

The only way to properly understand the allegations is to examine the evidence. That is to look at the data behind the graphs Homewood presents. I have now done that for the nine Paraguayan weather stations. The story behind that will have to await another day. However, although I find Paul Homewood’s claims of systematic biases in the homogenisation process to be substantiated, I do not believe that it points to a conspiracy (in terms of a conscious and co-ordinated attempt to deceive) on the part of climate researchers.

AndThenTheresPhysics on Paraguayan Temperature Data

The blog andthentheresphysics is a particularly dogmatic and extremist website. Most of the time it provides extremely partisan opinion pieces on climate science, but last week the anonymous blogger had a post “Puerto Casado” concerning an article by Christopher Booker in the Telegraph about Paraguayan temperature data. I posted the following comment

The post only looks at one station in isolation, and does not reference the original source of the claims.

Paul Homewood at notalotofpeopleknowthat looked at all three available rural stations in Paraguay. The data from Mariscal and San Juan Bautista/Misiones had the same pattern of homogenization adjustments as Puerto Casado. That is, cooling of the past, so that instead of the raw data showing the 1960s as warmer than today, it was cooler.

Using his accountancy mindset, Homewood then (after Booker’s article was published) checked the six available urban sites in Paraguay. His conclusion was that

warming adjustments have taken place at every single, currently operational site in Paraguay.

Then he looked at all 14 available stations in neighbouring Bolivia. His conclusion

At every station, bar one, we find the ….. past is cooled and the present warmed.

(The exception was La Paz, where the cooling trend in the raw data had been reduced.)

Homogenization of data means correcting for biases. For a 580,000 sq mile area of Central South America, it would appear that strong adjustment biases have been introduced in a single direction.

Homewood references every single site. Anyone can easily debunk my summary by searching the following:-

Jan-20 Massive Tampering With Temperatures In South America

Jan-26 All Of Paraguay’s Temperature Record Has Been Tampered With

Jan-30 Cooling The Past In Bolivia

My comment did not contain the hyperlinks or italics. It was deleted without passing through moderation. The only part of the moderation policy that I believe I fall foul of is the last.

This blog is also turning out to be both more time consuming and more stressful than anticipated. Some moderation may be based purely on whether or not I/we can face dealing with how a particular comment thread is evolving. This is not a public service and so, in general, any moderation decision is final.

The counter-argument from ATTP is

If you look again at the information for this station the trend before adjustments was -1.37°C per century, after quality control it was -0.89°C per century, and after adjusting for the station moves was +1.36°C per century. Also, if you consider the same region for the same months, the trend is +1.37°C per century, and for the country for the same months it is +1.28°C per century. So, not only can one justify the adjustments, the result of the adjustments is consistent with what would be expected for that region and for the country.
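For reference, the trends quoted on both sides are ordinary least-squares slopes scaled to 100 years. A minimal sketch with invented data follows; the numbers merely mimic the -1.37°C per century raw figure, and this is not the GISS calculation itself.

```python
import numpy as np

def trend_per_century(years, temps):
    """Least-squares slope of temperature against year, in C per century."""
    slope = np.polyfit(years, temps, 1)[0]   # C per year
    return slope * 100.0

# Invented annual means with a built-in cooling of -0.0137 C/year.
rng = np.random.default_rng(1)
years = np.arange(1951, 2011)
temps = 25.0 - 0.0137 * (years - years[0]) + rng.normal(0, 0.3, years.size)

print(f"trend: {trend_per_century(years, temps):+.2f}C per century")
```

The dispute, in other words, is not over how a trend is computed, but over whether the adjustments that reverse its sign are justified.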

Paul Homewood has investigated all the other stations in Paraguay and in neighbouring Bolivia and found similar ad hoc adjustments. It completely undermines ATTP’s arguments. This anonymous individual is wrong. Rather than face the fact that he is wrong, ATTP has deleted my comment. He is entitled to his beliefs, and in a free society can proselytize to his heart’s content. But there are boundaries. One of them is in suppressing evidence that undermines the justification for costly and harmful public policies – policies that are harming the poor here in Britain and (more importantly) can only be remotely successful by destroying the prospect of increasing living standards for over half the world’s population.

Paul Homewood and others are increasingly uncovering similar biases in the temperature record in other parts of the world. The underlying data for the global surface temperature sets is in need of a proper, independent audit, to determine the extent of the biases within it. But when the accusation is made that the Paraguayan temperature data set is corrupted, people will point to ATTP’s blog post as evidence that there is but a single instance, and that instance has been debunked.

Another boundary is a value that many in the criminal justice system also hold dear. The more emotive the subject, the further all concerned must go out of their way to compare and contrast the arguments. That way, the influence of our very human prejudices will be minimized. Again, independent audits will help eliminate this. If ATTP thinks he has all the answers then he will not be afraid to encourage people to look at both sides, evaluate by independent standards, and make up their own minds.

Kevin Marshall

Comment to ATTP 31-01-15

Instances of biases in the temperature sets

This will be added to when I get time.

Paul Homewood on San Diego data 30-01-15

Shub Niggurath looks into the Puerto Casado story 29-01-15

Paul Homewood on Reykjavik, Iceland 30-01-15

Jennifer Marohasy letter on Australian data 15-01-15

Update 01-02-15

I have invited a response from ATTP, by posting #comment-46021.

ATTP

You have deleted two of my comments in the last 24 hours that meet all of your moderation criteria except one – that you cannot face dealing with a challenge. That is your prerogative. However, the first comment, (now posted on my blog) I believe completely undermines your argument. Paul Homewood has shown that the Puerto Casado dataset homogenization did not make it consistent with neighbouring non-homogenized surface temperature stations, but that all the Paraguayan and neighbouring Bolivian surface temperature stations were “homogenized” in the same way. That is, rather than eliminating the biases that local factors can create, the homogenizations, by people far removed from the local situations, effectively corrupted the data set, in a way that fits reality to the data.

I might be wrong in this. But based on your arguments so far I believe that my analysis is better than yours. I also believe that who has the better argument will only be resolved by an independent audit of the adjustments. If you are on the side of truth you would welcome that, just as a prosecutor would welcome the chance to prove their case in court, or a pharmaceutical company would welcome independent testing of their new wonder-drug that could save millions of lives. Even if I am wrong, I will be glad to be refuted by superior arguments, as I will know that refuting my claims will require you to up your game. Humanity will be served by my challenging a weak case and making it stronger. You have generated over 500 comments to your post, so an appeal for help via email should generate some response. If that does not work, there are many well-funded organisations that I am sure will rush to your assistance.

There are at least seven options I think you can take.

  1. Ignore me, and pretend nothing has happened. Bad idea. I will start analysing your posts, as you did with Wattsupwiththat, only rather than your pea-shooters firing blanks, I have the heavy artillery with HE shells.
  2. Do an attack post – like desmogblog or Bob Ward of the Grantham Institute might do. Bad idea. I will take that as perverting or suppressing the evidence, and things will get rather rough. After all, I am but a (slightly) manic ex-beancounter, and you have the consensus of science on your side, so why should sending in the PR thugs be necessary unless you are on the losing side?
  3. Get together a response that genuinely ups the game. Win or lose you will have served humanity as I and others will have to rebut you. Engage and all will gain through greater understanding.
  4. Admit that there are other valid points of view. A start would be to release this comment, which will get posted on my blog anyway. I quite accept that you cannot come up with a rebuttal at the drop of a hat. A simple comment that a response will be made sometime this year is fine by me.
  5. Also call for a truly independent audit of the surface temperature set. It could be for your own reasons, and if truly independent, I will support it. If a whitewash, like the enquiries that Gordon Brown ordered into Climategate, an audit will do more harm than good.
  6. Close down your blog and do something else instead. You choose to be anonymous, and I respect that. Walking away is easy.
  7. Admit that you got this one wrong. You will take some flack, but not from me.