Warming Bias in Temperature Data due to Consensus Belief, not Conspiracy

In a Cliscep article, “Science: One Damned Adjustment After Another?”, Geoff Chambers wrote:

So is the theory of catastrophic climate change a conspiracy? According to the strict dictionary definition, it is, in that the people concerned clearly conferred together to do something wrong – namely introduce a consistent bias in the scientific research, and then cover it up.

This was in response to the latest David Rose article in the Mail on Sunday, about claims that the infamous Karl et al 2015 paper breached the scientific integrity rules of America’s National Oceanic and Atmospheric Administration (NOAA).

I would counter this claim about conspiracy in respect of temperature records, even under the strict dictionary definition. Still less does it conform to a conspiracy theory in the sense of some group who, grasping what they believe to be the real truth, act together to provide an alternative to that truth, or to divert attention and resources away from it, like an internet troll. A clue as to why this is the case comes from one of the most notorious Climategate emails, sent by Kevin Trenberth to Michael Mann on Mon, 12 Oct 2009 and copied to most of the leading academics in the “team” (including Thomas R. Karl).

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate.

It is the first sentence that is most commonly quoted, but it is the last part that is most relevant for temperature anomalies. There are inevitably a number of homogenisation runs to get a single set of anomalies. For example, the Reykjavik temperature data was (a) adjusted by the Iceland Met Office by standard procedures to allow for known local biases; (b) adjusted for GHCNv2 (the “raw data”); (c) adjusted again in GHCNv3; and (d) homogenized by NASA to be included in Gistemp.
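
Purely to illustrate this layering (the station values and adjustment sizes below are invented, not the actual Reykjavik figures), the chain of adjustments can be pictured as successive passes over the same series:

```python
# Illustrative sketch only: invented numbers, not the real Reykjavik adjustments.
# Each stage re-adjusts the output of the previous one, so the published
# anomalies can end up a long way from the original thermometer readings.

raw_readings = [4.1, 3.8, 4.5, 5.0, 4.7]  # hypothetical annual means, deg C

def met_office_adjust(series):
    # (a) National met office corrects for known local biases,
    #     e.g. a station move (a flat -0.2 C shift, purely for illustration).
    return [t - 0.2 for t in series]

def ghcn_v2_adjust(series):
    # (b) The GHCNv2 "raw data" applies its own corrections.
    return [t + 0.1 for t in series]

def ghcn_v3_adjust(series):
    # (c) GHCNv3 re-homogenises, here warming later years relative to earlier ones.
    return [t + 0.05 * i for i, t in enumerate(series)]

def gistemp_homogenise(series):
    # (d) NASA homogenises again before inclusion in Gistemp
    #     (here, damping deviations from the series mean slightly).
    mean = sum(series) / len(series)
    return [mean + 0.9 * (t - mean) for t in series]

series = raw_readings
for stage in (met_office_adjust, ghcn_v2_adjust, ghcn_v3_adjust, gistemp_homogenise):
    series = stage(series)

print([round(t, 2) for t in series])  # only this final product feeds the anomalies
```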

There are steps that I have missed. Certainly Gistemp homogenizes the data quite frequently as new sets of data arrive. As Paul Matthews notes, adjustments are unstable. Although one data set might on average be pretty much the same as previous ones, quite large differences in individual anomalies will be thrown up every time the algorithms are re-run with new data. What is more, due to the nature of the computer algorithms, there is no audit trail, so the adjustments are largely unexplainable with reference to the previous data set, let alone with reference to the original thermometer readings. So how does one know whether the adjustments are reasonable or not, except through a belief in how the results ought to look?

In the case of climatologists like Kevin Trenberth and Thomas R. Karl, variations that show warmer than the previous run will be more readily accepted as correct than variations that show cooler. That is, they will find reasons why a particular temperature data set now shows greater warming than before, but will reject as outliers results that show less warming than before. It is the same when choosing techniques, or when adjusting for biases in the data. This is exacerbated when a number of different bodies with similar belief systems seek a consensus of results, as Zeke Hausfather alludes to in his article at the CarbonBrief. Rather than verifying results against the real world, temperature data sets are made to conform to the opinions of others with similar beliefs about the world.
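
A toy simulation, with invented parameters, of how such an asymmetric acceptance rule can ratchet a derived trend upward even when the algorithm’s run-to-run noise is perfectly symmetric:

```python
# Toy simulation (my construction, invented parameters): each re-run of a
# homogenisation algorithm perturbs the derived warming trend by symmetric
# random noise, but warmer results are accepted more readily than cooler ones.
import random

random.seed(42)
true_trend = 0.10        # hypothetical underlying trend, deg C per decade
accepted = true_trend    # trend from the first accepted run

for _ in range(50):
    candidate = true_trend + random.gauss(0, 0.03)  # symmetric noise
    if candidate >= accepted:
        accepted = candidate        # a warmer run is taken as an improvement
    elif random.random() < 0.2:
        accepted = candidate        # a cooler run is only occasionally kept

print(f"underlying trend:             {true_trend:.3f} C/decade")
print(f"accepted trend after re-runs: {accepted:.3f} C/decade")  # tends to sit higher
```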

Kevin Marshall

2 Comments

  1. I don’t disagree with anything you say here. After the passage you quote I said:

    “Which is not to say that Mann, Jones, Schmidt, Briffa, Amman, et al. ever got together and decided that the theory of catastrophic global warming must be defended at all costs. It is enough that a number of scientists converged on a scientific hypothesis, sold it to the politicians and other interest groups, and that a vast number of people now have a stake in it…”

    I leave open the explanation for their convergence. Our critics can only imagine One Explanation to Rule Them All, i.e. a conspiracy, which is the idea they accuse us of promulgating.

    An alternative explanation which works but which no-one dares utter (least of all people like me who dropped science after A-level) is that the people involved are a bit thick. Or to put it more politely, they have forgotten the scientific method, which involves subjecting one’s own ideas to critical examination. What kind of person can say with a straight face, as you quote Trenberth: “the data are surely wrong.” Of course they are. The data are always “wrong” in the sense of being inadequate for the needs of the searcher after absolute truth. You do what you can with what you have. And if you can’t do much you say so.

    The internet created Climategate because it created a system where scientists could communicate without going through secretaries. Can you imagine Jones dictating: “Take a letter Miss Sligo: “Dear Mike, Please destroy this letter…”

    • manicbeancounter / 13/02/2017

      Geoff,
      Thanks for your comment. Maybe I tend to jump a bit hard on any suggestion of conspiracy in relation to temperature data. That is because it is a very easy answer to say that the manipulation of the temperature data is deliberate. Like fiddling the accounts in a large business, deliberate manipulation would leave distinctive signs. With respect to adjustments, it is not about the scientific method either. Temperature data sets are there to provide the data by which to test theories. Rather, there is the untested assumption that temperature trends in a particular area are the same, so adjustments cleanse the data of impurities, such as those from poor readings and poor siting of thermometers. Nobody from NOAA, NASA or CRU checks the outputs to see whether they are meaningful at the local level. If they did, they would find that global average temperature is not a very empirically meaningful concept.
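
      A minimal sketch (my construction, with invented numbers) of that untested assumption: a station reading that strays from its neighbours is pulled back toward them, so a genuinely local divergence is cleansed away rather than investigated.

      ```python
      # Minimal sketch of the regional-coherence assumption (invented numbers).
      # A reading that strays from the neighbouring stations is treated as an
      # impurity and replaced, whether or not it was a genuine local event.
      target     = [5.0, 5.1, 4.2, 5.2, 5.3]   # hypothetical annual means; year 3 dips
      neighbours = [
          [5.1, 5.2, 5.3, 5.3, 5.4],
          [4.9, 5.0, 5.1, 5.2, 5.2],
      ]

      adjusted = []
      for year, reading in enumerate(target):
          regional = sum(n[year] for n in neighbours) / len(neighbours)
          if abs(reading - regional) > 0.5:    # strays too far from the neighbours...
              adjusted.append(regional)        # ...so it is homogenised away
          else:
              adjusted.append(reading)

      print(adjusted)  # the local dip in year 3 has vanished
      ```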
      This is where I entirely agree with you.

      The data are always “wrong” in the sense of being inadequate for the needs of the searcher after absolute truth. You do what you can with what you have. And if you can’t do much you say so.

      Another way of putting the point is that there are limits to what can be extracted from the data. If nobody checks, and you follow Trenberth’s belief, then Ronald Coase’s reputed maxim comes true:

      “If you torture the data long enough, it will confess.”

      The Karl et al 2015 paper is only work in progress on the confession. They needed a sleight of hand to claim that warming had accelerated post 2000. That was to compare 2000-2012 with the 1950-1999 period: 1950-1975 showed no warming or slight cooling, whilst 1975-1999 showed marked warming.
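
      A hedged arithmetic sketch of that comparison, with invented trend figures rather than the values in the paper, treating the 1950-1999 trend as roughly the average of its two halves:

      ```python
      # Invented figures for illustration, not the values in Karl et al 2015.
      flat_years, warm_years = 25, 25        # 1950-1975 and 1975-1999, approximately
      flat_trend, warm_trend = 0.00, 0.16    # deg C per decade, hypothetical

      # A single 1950-1999 trend blends the flat half with the warming half.
      baseline = (flat_years * flat_trend + warm_years * warm_trend) / (flat_years + warm_years)
      recent = 0.10                          # hypothetical 2000-2012 trend

      print(f"1950-1999 blended trend: {baseline:.2f} C/decade")  # 0.08
      print(f"2000-2012 trend:         {recent:.2f} C/decade")    # 0.10
      # Against the blended baseline, 0.10 looks like acceleration, even
      # though it is slower than the 0.16 of 1975-1999.
      ```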

      At university I studied economics, where there are issues of data cleansing akin to those in temperature data. On the econometrics course, after weeks spent on cleansing techniques, the lecturer said that once cleansed, the data may have artificial biases, and any statistical tests will be meaningless. But if you don’t cleanse the data, the data may still be meaningless. All you can do is take a stab and (with a lot of integrity) aim at something better than you started out with.
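
      A small illustrative sketch of the lecturer’s point (my own example, not from the course): a cleansing rule shaped by a prior belief can manufacture an apparently significant effect out of pure noise.

      ```python
      # Illustrative only: "cleansing" pure noise with a one-sided rule biases the mean.
      import math
      import random
      import statistics

      random.seed(1)
      noise = [random.gauss(0, 1) for _ in range(200)]   # no real effect at all

      def t_stat(sample):
          mean = statistics.mean(sample)
          se = statistics.stdev(sample) / math.sqrt(len(sample))
          return mean / se

      # Cleansing rule driven by a prior belief: very low readings are
      # assumed to come from faulty instruments and are dropped.
      cleansed = [x for x in noise if x > -1.5]

      print(f"t-statistic, raw data:      {t_stat(noise):.2f}")     # should be modest
      print(f"t-statistic, cleansed data: {t_stat(cleansed):.2f}")  # pushed upward
      ```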
