The Morality of Lying and Exaggerating for Climate Science

In the Guardian today, James Garvey argues that the actions of Peter Gleick in lying to obtain documents from the Heartland Institute could be justified in the interests of the wider good. He says:

The documents, if authentic, show that Heartland takes money – in secret – from people who have something to gain by the idea that climate science is uncertain, and then spread that idea with enthusiasm. Do I actually need to say this in 2012? There is no controversy in the scientific community about Heartland’s target: the fact of warming and the human role played in it.

What Heartland is doing is harmful, because it gets in the way of public consensus and action. Was Gleick right to lie to expose Heartland and maybe stop it from causing further delay to action on climate change?

There are some issues with this statement:

  • The most important strategy document is almost certainly fake. Peter Gleick was accused of being the source of the leak by Steven Mosher, because this document was in his distinctive style of writing, including grammatical errors. Gleick denies he wrote the fake document, but now admits to (the lesser crime of) obtaining the other documents by deception.
  • The following statement is ambiguous

    There is no controversy in the scientific community about Heartland’s target: the fact of warming and the human role played in it

    It can mean at least three things. First, that the “scientific community” agrees on what Heartland’s target is (so there must be a straw poll somewhere). Second, that the scientific community believes in anthropogenic global warming, in which case there must be a definition of who is in the “scientific community” and who is out. The “97% of scientists believe” figure came from a small subset of all scientists in the climate field, who were asked two very trivial questions, so the degree of belief is not in the predicted level of catastrophe that would justify drastic action. Third, that the human role played in global warming is a fact. The statement that global average surface temperatures are higher than they were 50, 100, 150, or 400 years ago is incontrovertible (though the actual amount is debatable), but the human role is a subject of wide controversy. They are two separate facts, so the human role is just a belief of the 97% of the 1.6% who answered two trivial questions, which was just over 30% of those who received questionnaires. Whatever the ambiguities in the statement, it does not rely on scientific evidence, as there is plenty of controversy about the anthropogenic contribution, due to a lack of incontrovertible scientific evidence.

  • If the scientific consensus was created by a minority and maintained by “outing” any who voiced concerns, with activists seeking to annul their funding, then that “consensus” opinion should be viewed with a little bit of scepticism.
  • The statement “What Heartland is doing is harmful, because it gets in the way of public consensus and action.” is a potential moral minefield. If 90% of the population decide that it is alright to persecute a peaceful minority would that be alright? If 90% of the population strongly believe that potential terrorists should be held without trial and tortured, would that be alright?

But leaving these issues aside, the problem with telling lies, or exaggerating, is when you are found out. Once you have lost people’s trust, it is very hard to regain that trust. Dale Carnegie in “How To Win Friends And Influence People” made this very point. 
However, from a purely utilitarian point of view it might be permissible to mislead a suspected criminal in order to find the evidence, as it is not that person’s trust that you want to maintain. The wider public will generally think well of you if you get a criminal off the streets. But if it is to marginalise your opponents, it will backfire if the wider public then perceive that you cannot be trusted. This is especially true when much of the case for climate change is based on trust in scientists to report accurately on a complex subject.

The reasons that there is growing distrust in the scientific consensus are multiple:

  • Michael Mann’s hockey stick studies were based on cherry-picked data and biased weightings that favoured the individual studies showing hockey sticks over those that did not, and the favoured studies have all been overturned.
  • The UN IPCC 2007 report did not live up to the projected image in a number of areas. The Himalayan Glaciers episode is only the tip of the non-melting iceberg. It is full of partisan analysis and the exclusion of contrary science.
  • The Climategate email hack also showed that the public image of certainty held by a wide number of scientists is nothing of the sort. The core group are highly partisan, and have made strenuous efforts to exclude contrary views from the journals.

Finally, please remember that activists have got every major scientific body, including the Royal Society, to make proclamations in favour of Global Warming Alarmism. If public funding of science is seen to go to those who lie and exaggerate, then there will be increased distrust in all areas of science. These activist scientists are risking more than their own reputations.

 

Climate Change Damage Impacts – A story embellished at every retelling

Willis Eschenbach has a posting on a recent paper on climate change damage impacts. This is my comment, with hyperlinks and tables.

My first reaction was “Oi– they have copied my idea!”

Well the damage function at least!

https://manicbeancounter.wordpress.com/2011/02/11/climate-change-policy-in-perspective-%E2%80%93-part-1-of-4/

Actually, this can be worked out from the claims of the Stern Review or AR4. Try looking at the table in AR4 of “Examples of impacts associated with global average temperature change” and you will get the idea.

A simpler, but more visual, perspective is gained from a slide produced for the launch of the Stern Review.

More seriously, Willis, this is worse than you thought. The paper claims that unlikely but high-impact events should be considered. The argument is that the likelihood and impacts of potential catastrophes are both higher than previously thought. The paper then states:

“Various tipping points can be envisaged (Lenton et al., 2008; Kriegler et al., 2009), which would lead to severe sudden damages. Furthermore, the consequent political or community responses could be even more serious.”

Both of these papers are available online at PNAS. The Lenton paper consisted of a group of academics specialising in catastrophic tipping points getting together for a retreat in Berlin. They concluded that these tipping points needed to include “political time horizons”, “ethical time horizons”, and cases where “a significant number of people care about the fate of (a) component”. That is, there is a host of non-scientific reasons for exaggerating the extent and the likelihood of potential events.

The Kriegler paper says “We have elicited subjective probability intervals for the occurrence of such major changes under global warming from 43 scientists.” Is anybody willing to assess whether the subjective probability intervals might deviate from objective probability intervals, and in which direction?

So the “Climate Change damage impacts” paper takes two embellished tipping points papers and adds “…the consequent political or community responses could be even more serious.”

There is something else you need to add into the probability equation. The paper assumes the central estimate of temperature rise from a doubling of CO2 levels is 2.8 degrees centigrade. This is only possible as a result of strong positive feedbacks. Many will have seen the recent discussions at Climateaudit and wattsupwiththat about the Spencer & Braswell, Lindzen & Choi and Dessler papers. Even if Dessler is given the benefit of the doubt on this, the evidence for strong positive feedbacks is very weak indeed.

In conclusion, the most charitable view is that this paper takes an exaggerated view (both magnitude and likelihood) of a couple of papers with exaggerated views (both magnitude and likelihood), all subject to the occurrence of a temperature rise for which there is no robust empirical evidence.

Philip Morris’s FOI is in the Public Interest

The BBC gave headline news today to an FOI request by Philip Morris about Government-funded research. The Guardian and Telegraph joined in as well. This is a comment left at Bishop Hill.

There are some legitimate reasons why a cigarette company (and the general public) might want to know more details of a research study. This is Government-funded research to justify legislation, without counter-studies for balance. Bear in mind also that the study was of 6,000 young people, whom the professors believe to be highly impressionable by marketing.

1.    Were the questions neutral and held in a neutral venue?

2.    Did the resulting peer-reviewed article draw conclusions that the data substantiate? Are they statistically significant?

3.    Can other conclusions be drawn from the data?

It should be borne in mind by those who jump to conclusions that

a)    The two professors who did the study have PhDs in marketing and in social policy.

b)    The study is not about the health effects of smoking. It is about justifying compulsory neutral packaging for cigarettes.

c)    This particular study is very difficult to find on the internet, and is not listed on either of their websites amongst the publications. One has a list of over eighty.

One of the Professors was co-author of a similar study (only with adults), which got an unfavourable review in the Guardian. This time the sample size was 43, divided into 3 distinct groups.


http://www.guardian.co.uk/education/2011/may/30/smokers-health-warnings-cigarette-packets

The level of research into the harm smoking can cause is considerable and of high quality. The original British Doctors Study, which confirmed the links with both lung cancer and coronary thrombosis, was ground-breaking statistically. That does not mean that all the policy research is of a similar quality.

http://en.wikipedia.org/wiki/British_Doctors_Study

It is my belief that Government social policy should aim at the net improvement of society. That implies that in funding research into social policy there is a duty of care to ensure balance, and that conclusions are robust. There are very legitimate reasons to believe that this line of research falls short.

Evangelical Christians and Climate Change Skepticism

Wm Briggs reports on a “forthcoming Bulletin of the American Meteorological Society paper “Making the climate a part of the human world”, in which University of British Columbia geographer Simon Donner argues that religion is the cause of global warming denial. ” (Pre-publication copy here)

Simon Donner’s Views

Donner’s Summary is

“Ongoing public uncertainty about climate change is rooted in a perceived conflict between the scientific evidence for a human role in the climate and a common belief that the weather and climate are controlled by higher powers.”

This is backed up by a number of studies of religions, both ancient and primitive, from various parts of the world, including Fiji and Papua New Guinea. I can find no reference to the major religions of Islam, Hinduism or Buddhism. There is only one biblical reference, from the Old Testament book of Job, and none from the New Testament – the stories about Jesus and his disciples. Neither is there a distinction between Catholicism and Protestantism, nor a split between evangelical and liberal Protestants.

The Religious Sceptics in USA

The majority of the religious sceptics in the USA are the Protestant Evangelicals. Their type of Christianity is centred on biblical study, both individually and corporately, to perceive the revealed word of God and the interpretation for current circumstances. There are the specialists – the ordained pastors – who provide interpretations through sermons. However, this is just the lead for personal study and reflection.

Collectively, these evangelicals are not a unified body theologically. For instance, a quick comparison of the Southern Baptist Convention and the Assemblies of God websites will quickly demonstrate the point. Nor are there strong ecumenical links between the major churches, as found in Britain.

This bible-based view of Christianity comes directly from the Reformation. In medieval Europe the Bible was handwritten and only available in Latin. With most people illiterate, reading of the Bible was limited to a few dedicated scholars, with interpretation highly centralised and strictly controlled. Any deviation was treated as heresy, often punishable by death. A combination of the advent of printing and translation into the vernacular suddenly made the word of God accessible to a much wider population. It soon became evident that the established religious orthodoxy was, in many places, unsupported from the sacred text and in some cases fundamentally at odds with that text. It was this need to study that changed public worship so dramatically, with teaching replacing the Mass as the centrepiece.

Politically, access to the Bible democratised understanding and the questioning of authority and centralised power. This gave a scholarly impetus to the development of modern science, and also the Liberal political philosophy of John Locke and the Scottish Enlightenment that in turn heavily influenced the Founding Fathers.

An Alternative Thesis

Evangelicals have as their primary resource the Bible and the interpretation of God’s purpose from within their local congregation. Your average church member will have quite a detailed knowledge of the Bible, being able to quote much of the primary doctrine and some major passages. Generally they also “cherish and defend religious liberty, and deny the right of any secular or religious authority to impose a confession of faith upon a church or body of churches” (Southern Baptist Convention). The scepticism towards climate change comes from its presentation. It comes across as a core doctrine that is agreed upon by a consensus of leading scientists. Yet the truth cannot be perceived by the lay person; it is only revealed, by impenetrable computer models, to scientific experts. Any deviation from, or questioning of, core doctrine is treated with contempt and as heresy. Yet the high scientific standards that these experts are supposed to follow have been found wanting. There are two areas where this is demonstrated most clearly.

First, the poster hockey stick of a decade ago – showing global temperatures dramatically higher than at any time in the last millennium – was investigated by Steve McIntyre. He showed the results arose from a number of elements, including cherry-picking data; giving undue weighting to favourable results; excluding some unfavourable data points; and failing to apply proper statistical tests. A book charting this episode is found here, and my comparison of an exchange following a savage book review is here.

Second is the Climategate email release, which showed that the core scientists were a fairly small group, that they viewed the science as far from settled, and that they adhered to lower standards of scholarship than was the public perception.

The Inferences from the Donner Paper

Donner has either little understanding of mainstream Christianity in the USA, or he deliberately misrepresents what it stands for. In so doing, he not only completely misses the point of why religious Americans are sceptical but does so in such a way as to make them more antagonistic. That peer review should allow through a paper that clearly lacks a proper argument to support its thesis shows a failure of that process. That a person with no qualifications or prior publishing record in the fields of sociology or theology should be allowed to publish on the subject in a journal specialising in the weather shows how far climate science is straying beyond its area of competency. For Christians who are unsure of the global warming arguments, clear evidence of a climate scientist not knowing what he is talking about will make them more sceptical. They will be more likely to accept the sceptics’ comments that the science is flawed, whether the theory, the computer models or the statistics.

Outflanking Al Gore & other alarmists

At Wattsupwiththat there is a proposal to build a database by

Find(ing) every false, misleading, scary, idiotic, non-scientific statement they have made in the past twenty years. Create an index by name with pages listing those statement with links to the source. Keep it factual. Let their own words come back to haunt them.

My comment was

A database of all the exaggerations, errors and false prophesies will on its own do no good. No matter how extensive, thorough and rigorous, it will be dismissed as having been compiled by serial deniers funded by big oil. Getting a fair hearing in the MSM will be impossible. In the coming battle the alarmists have decided the field of battle and have impenetrable armour.

To be brief, two analogies need to be brought to the fore.

First is the legal analogy. If there is a case for CAGW, it must be demonstrated by primary, empirical evidence, and that evidence must be tested by opponents. It is not the parts that may be true – such as that lots more CO2 will cause some warming – but the whole chain that matters: that there is sufficient CO2 to cause some warming, which will be magnified by positive feedbacks to cause even greater warming, and that this substantive warming will destabilise the planet’s weather systems in a highly negative way. The counter-argument is two-fold: that many of the dire, immediate forecasts have been highly exaggerated and, more importantly, that the compound uncertainties have been vastly underestimated. That the case is weak is shown by the prominence given to what is hearsay evidence, such as the consensus, the proclamations of groups of scientists, or the image of the hockey stick. In some cases, it has been tantamount to jury-tampering.

Second is the medical analogy. A medical doctor, in prescribing a painful and potentially harmful course of treatment, should at least have a strong professionally-based expectation that post-treatment the patient will be better off than if nothing was done. The very qualities that make politicians electable – being able to build coalitions by fudging, projecting an image, and undermining opponents by polarizing views – make them patently unfit for driving through and micro-managing effective policy to reduce CO2. They will of necessity overstate the benefits and massively understate the costs, whether financial or in human suffering. They will not admit that the problem is beyond their capabilities, nor that errors have been made. The problem is even worse in powerful dictatorships than in democracies.

I have tried to suggest a method (for those who are familiar with microeconomics) of appraising the IPCC/Stern case for containing CO2 here.

https://manicbeancounter.wordpress.com/2011/02/11/climate-change-policy-in-perspective-%E2%80%93-part-1-of-4/

Also, why there is no effective, global political solution possible.

https://manicbeancounter.wordpress.com/2011/02/13/climate-change-in-perspective-%E2%80%93-part-2-of-4-the-mitigation-curve/

What is missing is why the costs of global warming have been grossly exaggerated.

Show Warming After it Has Stopped Part 2

Last week I posted on how Myles Allen had pulled off a trick to show warming in the 21st century after that trend had stopped in 1998. According to David Middleton at Watts up with That, the BBC’s Richard Black is using a similar decadal comparison to show that warming has continued. There are two problems with Richard Black’s claim that the GWPF are cherry-picking the data. First, that an employee of the UK state broadcaster should choose to use a foreign temperature record over the UK one. Second, why the switch to decadal comparisons, when annual figures have long been the IPCC norm?

Let me break this down with two graphs. As with the previous posting, I see no scientific reason why the starting point for the earth’s orbit of the sun has to be 1st January. I therefore include all 12 month moving averages. That is Jan-Dec, Feb-Jan, Mar-Feb etc. I have also included three lines in my analysis: first the NASA GISTEMP; second the HADCRUT3; and third the difference between the two.
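For anyone who wants to replicate the construction, a rough sketch in Python follows, using invented monthly anomaly figures in place of the real GISTEMP and HADCRUT3 series (the function and variable names are mine, not from any dataset):

```python
# Sketch of the rolling-average comparison described above, using made-up
# monthly anomalies in place of the real GISTEMP and HADCRUT3 figures.

def rolling_12m(monthly):
    """All 12-month moving averages (Jan-Dec, Feb-Jan, Mar-Feb, ...)."""
    return [sum(monthly[i:i + 12]) / 12.0 for i in range(len(monthly) - 11)]

# Hypothetical anomalies (degrees C) for 24 months of two series.
gistemp = [0.30 + 0.01 * i for i in range(24)]
hadcrut3 = [0.25 + 0.008 * i for i in range(24)]

giss_avg = rolling_12m(gistemp)
had_avg = rolling_12m(hadcrut3)
# The third line in the graphs: the difference between the two averages.
difference = [g - h for g, h in zip(giss_avg, had_avg)]

print(len(giss_avg))  # 13 - a 24-month record yields 13 overlapping windows
print(round(difference[0], 3))
```

The point of keeping every start month is that nothing in the physics privileges the Jan-Dec window; any of the twelve phases is an equally valid 12-month average.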

The first graph shows the decadal change in the NASA GISS figures that Richard Black is talking about. Sure enough, the only period where the 12 month average temperature anomaly is lower than a decade before is in 2008. Using the HADCRUT3 data reveals a similar pattern, but the negative period is much longer. If the HADCRUT3 decadal change is subtracted from the GISTEMP, there is shown to be a greater decadal warming trend in the NASA figures than in the UK ones. This might suggest the reason for Richard Black’s preference for foreign data over that paid for by the UK taxpayer.

The second graph shows the 12 month moving average data – and clearly shows the reasons both for using decadal temperature changes over annual, and foreign data over British. From 1988 to 1997, there was no real warming trend once the Pinatubo cooling up to 1995 is removed. However, the NASA anomaly seems to be around twice as volatile as the Hadley. But in 1998 the position reverses. The natural 1998 El Nino effect is twice as large according to the British scientists as it is according to Dr Hansen and his team. Post 1998 the story diverges. According to NASA, the warming resumes on an upward trend. According to the Hadley scientists, the 1998 El Nino causes a step change in average temperatures and the warming stops. As a result the NASA GISS warming trend is mirrored by its divergence from the more established and sober British series.

Showing Warming after it has Stopped

Bishop Hill points to an article by Myles Allen that

“examines how predictions he made in 2000 compare to outturn. The match between prediction and outturn is striking…..”

Bishop Hill points out that this uses HADCRUT decadal data. Maybe a quick examination of the figures will reveal something? Using the HADCRUT3 data, here are the figures for the last five decades.

This shows that the decadal rate of warming has been rising at a pretty constant rate for the last three decades. So all those sceptics who claim that global warming has stopped must have got it wrong then?

Let us examine the data a bit more closely.

The blue line is the Hadcrut annual anomaly figures from 1965 to 2010. The smoother red line is the 10 year average anomaly, starting with the 1956-1965 average and finishing with the 2001-2010 average. The decadal averages are highlighted by the red triangles.

The blue line would indicate to me that there was a warming trend from 1976 to 1998, and that since then it has stopped. This is borne out by the 10 year moving average, but (due to the averaging) the plateau arrives five years later. The story from the decadal figures is different, simply due to timing.

So what scientific basis is there for using the decadal average? Annual data seems reasonable, as it is the time for the earth to make one orbit of the sun. But the calendar is fixed where it is because 1500 years ago Dionysius Exiguus devised a calendar with a mistaken estimate of the birth (or conception) of Jesus Christ as Year 1, and we count in base 10 possibly due to the number of fingers we have. Both are human artefacts. Further, the data is actually held in months, so it is only due to the Christian calendar that we go from January to December. This means that, of the 120 possible periods for decadal averages, Myles Allen has picked one out of cultural prejudice; and in choosing decadal averages at all, he shows a very human bias rather than a real-world basis for selection.

How does this affect the analysis of the performance of the models? The global temperature averages showed a sharp uptick in 1998. Therefore, if the models had simply predicted a continuation of the trend of the previous twenty years, they would have been quite accurate. In fact the prediction was higher than the outturn, so the models overestimated. It is only by exploiting the arbitrary construct of decadal data that the difference appears insignificant. Drop to a 5 year moving average, and you will get a bigger divergence. Wait a couple of years, and you will get a bigger divergence. Use annual figures and you will get a bigger divergence. The result is not robust.
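The timing effect can be illustrated with a toy calculation (the numbers below are synthetic, not real HADCRUT data): a series that warms steadily to 1998, steps up, and then stays flat will still show a rise between its last two decadal averages, even though the annual figures have plateaued.

```python
# Illustration, with invented numbers, of how decadal averages can keep
# rising after annual warming has stopped.

years = list(range(1981, 2011))
# 0.02 C/yr warming up to 1998, then a step to a flat plateau of 0.45 C.
annual = [0.02 * (y - 1981) if y <= 1998 else 0.45 for y in years]

decade_1 = sum(annual[10:20]) / 10  # mean anomaly, 1991-2000
decade_2 = sum(annual[20:30]) / 10  # mean anomaly, 2001-2010

print(round(decade_2 - decade_1, 3))   # positive, despite the plateau
print(max(annual[18:]) - min(annual[18:]))  # the annual data is flat from 1999
```

The second decade sits entirely on the plateau, but the first decade straddles the step, so the decadal comparison manufactures an apparent continuing trend out of a one-off timing accident.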

Interpreters of Interpreters to the nth degree

James Delingpole has attracted some ire for saying he is an “interpreter of interpreters”. I commented on Bishop Hill’s Blog

Wasn’t the original hockey stick paper an “interpreter of interpretations”? That is, it gathered together a selection of data studies of past climate proxies and tried to give an interpretation – with some elements of bias. The IPCC, liking this paper’s conclusion, then interpreted it as being definitive, despite its conclusions being contrary to many other studies. Learned societies, not least the Royal Society, then interpreted this as being the final argument, being the opinion of 2500 leading scientists. With learned pronouncements from the leading scientific organizations, the BBC, Guardian etc. interpret that the science is settled, so the subject is closed. James Delingpole, in putting himself forward as a second-tier interpreter, might be over-reaching himself in the ranking. However, he actually considers the arguments, unlike those who rely on multi-layered interpretations.

But more important than how lowly a person is in the interpretation chain is the reliability of that opinion compared with the ultimate reality being interpreted. Scientific enquiry must positively endeavour to free itself from biases. That was part of Popper’s injunction to make hypotheses capable of falsification. But climate science does the opposite.

In the Hockey Stick studies you will find (see “The Hockey Stick Illusion”):

  • Positive efforts to choose the limited number of data interpretations that suit the conclusion desired (with some having their own strong biases).
  • Giving these favourable studies undue statistical weight over those that come to no, or contrary, conclusions.
  • Choosing the statistical tests that give favourable results.
  • A clique of people providing similar results through using similar methods around a core group of papers.
  • Peer review being used as a means of peer pressure in promoting favourable comments and papers, whilst obstructing contrary views.

The IPCC has been set up to act as a biased interpreter. It is there to argue the case for action on global warming climate change, not to arrive at a balanced opinion on the science.

The bias towards interpretation in one direction exists at every level of science and opinion.

  • Funding of research is based on conformity.
  • Pressure groups exist to “out” the non-conformists, like the McCarthyists of two generations ago.
  • There is also pressure on scientific organizations to declare unequivocal support.
  • There is severe censure and libelous statements made against any who dissent.

     

So, however much Delingpole may provide interpretations of interpretations without reading all the original literature, his opinions might be more valuable than those of prestigious scientists who conform.


 

Denialists become Superfluous

OR Who Needs Enemies when you have Friends like These

Climate Psychology is a blog specialising in mirror-posting articles from one side of the climate change argument, but with more lurid titles. The direct inference is that the truth of the science is so blatantly apparent that any criticism must be by the deluded, the deranged, or those in the pay of some sinister forces. One such mirror posting is of Tamino’s “Hockey Stick Delusion” at RealClimate, under the title “Tamino debunks the junk science of Montford and McIntyre for the umpteenth time — the hockey stick is still sticking around”.

Anyone who looks at both sides of the argument – who properly compares and contrasts each point made – will see that Tamino fails to address the points made. As I said in an earlier posting:

 Look at

1. Who gives the fullest answers?

2. Which side evades the points, or attempts sleight of hand?

3. How are contrary or neutral points treated? Clue – look at how Judith Curry (who is trying to remain neutral) is treated. Further, look at how contrary opinions are treated.

4. Finally who are the real deniers in all of this?

This leaning on psychology is nothing new. It was used by the KGB to punish dissenters without trial (see also here and here). The recent publication of a statistical analysis of the Hockey Stick by McShane and Wyner again shows on which side of the debate the delusional mostly reside. The greatest irony is that the blog carries the following quote:-

“Because the truth is that promoting science isn’t just about providing resources — it’s about protecting free and open inquiry. It’s about ensuring that facts and evidence are never twisted or obscured by politics or ideology.” Barack Obama

Tamino v. Montford – A Sense-Check

Clarification – This post is an attempt to say two things – but badly.

First, a simplistic verification of a global temperature reconstruction is to cross-check against local temperature reconstructions from around the world. These, on average, strongly contradict the hockey stick.

Second, Tamino’s claim is essentially McIntyre has just been taking pot-shots at sound science. Instead McIntyre has looked at all the steps in making a reconstruction, and found all wanting.

So what of a neutral lay-person trying to compare Montford’s Hockey Stick Illusion and Tamino’s debunking? From my accountancy experience, it is normal to try to get a sense-check: what is the expected result? If the actual is different from the expected, then the difference needs to be reconciled. MBH98, MBH99, and the subsequent reconstructions in the book completely overturned perceived thinking, so there needs to be a sense-check to make sure the results are valid.

The sense-check for the global temperature reconstructions can come from localized reconstructions from around the world, to see if the global reconstruction replicates the typical pattern. A website, CO2science.org, documents peer-reviewed articles estimating temperatures in the medieval warm period. Of those that have a temperature estimate, those that agree with the hockey stick – that temperatures were lower than today – are out-numbered 5 to 1 by those that say temperatures were higher in the MWP. The mean, median, and mode values are that temperatures were about 0.75°C warmer than today. The weaker, qualitative, studies have a similar picture: those that suggest temperatures in the MWP were similar to or lower than today are outnumbered more than 4 to 1 by those that suggest temperatures were higher. So when the more scientific, global, reconstructions come up with a novel, contrary, result, there needs to be a full reconciliation to explain why. Without such an explanation, McIntyre’s multi-layered* finding that the global reconstructions are critically flawed stands.
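A toy version of this tally, with invented estimates standing in for the real figures catalogued at CO2science.org, shows the shape of the sense-check:

```python
# Sense-check tally sketch. The estimates below are made up for illustration;
# the real figures come from the studies catalogued at CO2science.org.
from statistics import median

# Hypothetical MWP-minus-today temperature estimates (degrees C).
estimates = [1.0, 0.8, -0.2, 0.75, 1.2, 0.5, 0.9, 0.75, -0.1, 0.75, 0.6, 0.8]

warmer = [e for e in estimates if e > 0]          # MWP warmer than today
cooler_or_same = [e for e in estimates if e <= 0]  # agree with the hockey stick

print(len(warmer), len(cooler_or_same))  # here 10 vs 2, a 5-to-1 ratio
print(median(estimates))                 # central estimate of MWP warmth
```

The idea is simply that a global reconstruction which contradicts the great majority of local studies needs reconciling, exactly as an unexpected balance does in accountancy.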

*McIntyre’s findings are multi-layered, including:

a)      Hockey Stick shapes were given undue weighting by the short-centering of the PC analysis. For instance, McIntyre calculated that Sheep Mountain had 390 times the weighting of Mayberry Slough (p113-114). Of the 112 original proxies in MBH98, just 13 had a hockey stick shape. Tamino does not counter this, only looking at the 22 longer hockey stick series, made up of individual series, such as Gaspé, along with regional combinations such as NOAMERPC1.

b)      Dodgy data and infilling. Looking across the columns of data, McIntyre noticed identical data in adjacent columns, as though infilling had taken place. (p78-81)

c)      Many of these series were based on old data. If Mann had used the most recent data available in 1998, could the final Hockey Stick have been less pronounced? (p83-84)

d)      Some of the most important original proxies were flawed.

    1. Gaspé had better data, but it was unpublished (p174). It also had an alternative proxy with better data in Alaska. (More here)
    2. Sheep Mountain had updated proxies that fail to show an HS (p357-361).
    3. The Graybill bristlecone series had a number of flaws (e.g. p121-125 & p353-357).

e)      The failure of alternative reconstructions. (Chapter 10).

f)        There was considerable evidence of biases in the data selection within the proxies (along with small sample sizes); in the selection of the proxies for the reconstruction; and in the short-centring, which gave rise to hockey sticks on random data 99% of the time. Given this, any correlation statistic was rendered largely meaningless. McIntyre did not explore this. However, Montford provides evidence that the verification statistic used was highly irregular in disciplines outside climate science (e.g. p156-164). Latest – McIntyre shows evidence suggesting that the verification statistic was cherry-picked.

That is, the selection of data in the proxies, the proxy selection, the bias from short-centering, and the selection of the verification statistic are all different levels in establishing a reconstruction, and all are shown by McIntyre to have failed.

For a different take – which side pursues scientific understanding, see the follow-up https://manicbeancounter.wordpress.com/2010/07/27/the-hockey-stick-and-climate-science/