How strong is the Consensus Evidence for human-caused global warming?

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Richard Feynman – 1964 Lecture on the Scientific Method

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

The Debunking Handbook 2011 – John Cook and Stephan Lewandowsky

My previous post looked at the attacks on David Rose for daring to suggest that the rapid fall in global land temperatures after the El Niño event was strong evidence that the record highs in global temperatures were not due to human greenhouse gas emissions. The technique used against him was to look at long-term linear trends. The main problems with this argument were
(a) according to AGW theory warming rates from CO2 alone should be accelerating and at a higher rate than the estimated linear warming rates from HADCRUT4.
(b) HADCRUT4 shows warming stopped from 2002 to 2014, yet in theory the warming from CO2 should have accelerated.

Now there are at least two ways to view my arguments. First is to look at Feynman’s approach. The climatologists and associated academics attacking journalist David Rose chose to do so from a very blurred specification of AGW theory. That is, human emissions will cause greenhouse gas levels to rise, which will cause global average temperatures to rise. Global average temperatures clearly have risen in all long-term (>40 year) data sets, so the theory is confirmed. On a rising trend, with large variations due to natural variability, any new records will then be primarily “human-caused”. But making the theory and data slightly less vague reveals the opposite conclusion. Around the turn of the century the annual percentage increase in CO2 emissions went from 0.4% to 0.5% a year (figure 1), which should have led to an acceleration in the rate of warming. In reality warming stalled.

The reaction was to come up with a load of ad hoc excuses. Hockey Schtick blog had reached 66 separate excuses for the “pause” by November 2014, ranging from the peer-reviewed to a comment in the UK Parliament. This could be because climate is highly complex, with many variables; the presence of each contributing factor can only be guessed at, let alone its magnitude and its interrelationships with the other factors. So how do you tell which statements are valid information and which are misinformation? I agree with Cook and Lewandowsky that misinformation is pernicious, and difficult to get rid of once it becomes entrenched. So how does one distinguish between the good information and the bad, misleading or even pernicious?

The Lewandowsky / Cook answer is to follow the consensus of opinion. But what is the consensus of opinion? In climate one variation is to follow a small subset of academics in the area who answer in the affirmative to

1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?

2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

Problem is that the first question is just reading a graph, and the second is a belief statement with no precision. Anthropogenic global warming has been a hot topic for over 25 years now. Yet these two very vague, empirically-based questions, forming the foundations of the subject, should be capable of more precise formulation. On the second, that would mean having pretty clear and unambiguous estimates of the percentage of the warming, so far, that is human-caused. On that, the consensus of leading experts is unable to say whether it is 50% or 200% of the warming so far. (There are meant to be time lags and factors like aerosols that might suppress the warming.) This from the 2013 UNIPCC AR5 WG1 SPM section D3:-

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

The IPCC, encapsulating the state-of-the-art knowledge, cannot provide firm evidence in the form of a percentage, or even a fairly broad range, even with over 60 years of data to work on. It is even worse than it appears. The “extremely likely” phrase is a Bayesian probability statement. Ron Clutz’s simple definition from earlier this year was:-

Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.

For the IPCC to claim, at the fifth attempt, that their statement is extremely likely, they should be able to show some sort of progress in updating their beliefs in response to new evidence. That would mean narrowing the estimate of the magnitude of impact of a doubling of CO2 on global average temperatures. As Clive Best documented in a cliscep comment in October, the IPCC reports from 1990 to 2013 failed to narrow the estimated range of 1.5°C to 4.5°C. Looking up Climate Sensitivity in Wikipedia we get the origin of the range estimate.
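Clutz’s one-line description can be made concrete with a toy calculation. This is my own illustration, not the IPCC’s actual method, and the prior and likelihood numbers are invented: a discrete initial belief over candidate climate sensitivities is multiplied by a likelihood from “new evidence” and renormalised.

```python
# Toy Bayesian update: initial belief + new evidence -> new and improved belief.
# All numbers are invented for illustration; this is not the IPCC's method.
import numpy as np

sensitivities = np.array([1.5, 2.5, 3.5, 4.5])   # candidate sensitivity values, in degC
prior = np.array([0.25, 0.25, 0.25, 0.25])       # flat initial belief
likelihood = np.array([0.10, 0.40, 0.40, 0.10])  # hypothetical fit of each value to new data

posterior = prior * likelihood
posterior /= posterior.sum()   # renormalise so the beliefs sum to 1

print(posterior)   # belief concentrates on the middle values
```

The point of the sketch is that each round of evidence should narrow the spread of belief; five assessment reports with an unchanged 1.5°C to 4.5°C range show no such narrowing.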

A committee on anthropogenic global warming convened in 1979 by the National Academy of Sciences and chaired by Jule Charney estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available; one, due to Syukuro Manabe, exhibited a climate sensitivity of 2 °C, the other, due to James E. Hansen, exhibited a climate sensitivity of 4 °C. “According to Manabe, Charney chose 0.5 °C as a not-unreasonable margin of error, subtracted it from Manabe’s number, and added it to Hansen’s. Thus was born the 1.5 °C-to-4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since…

It is revealing that this quote is under the subheading Consensus Estimates. The climate community has collectively failed to update the original beliefs, which were based on a very rough estimate. The emphasis on referring to consensus beliefs, rather than looking outward for evidence in the real world, is, I would suggest, the primary reason for this failure. Yet such community-based beliefs completely undermine the integrity of the Bayesian estimates, making their use in statements about climate clear misinformation in Cook and Lewandowsky’s sense of the term. What is more, those in the climate community who look primarily to these consensus beliefs, rather than to the data of the real world, will endeavour to dismiss the evidence, make up ad hoc excuses, or smear those who try to disagree.

A caricature of these perspectives with respect to global average temperature anomalies is available in the form of a flickering widget at John Cook’s skepticalscience website. This purports to show the difference between “realist” consensus and “contrarian” non-consensus views. Figure 2 is a screenshot of the consensus views, interpreting warming as a linear trend. Figure 3 is a screenshot of the non-consensus or contrarian views, which are supposed to interpret warming as a series of short, disconnected periods of no warming, each of which just happens to be at a higher level than the previous one. There are a number of things that this indicates.

(a) The “realist” view is of a linear trend throughout any data series. Yet the period from around 1940 to 1975 shows no warming, or slight cooling, depending on the data set. Therefore any linear trend line starting earlier than 1970 to 1975 and ending in 2015 will show a lower rate of warming than one starting in that period. This would be consistent with the rate of CO2 increasing over time, as shown in figure 1. But shorten the period, again ending in 2015, to less than 30 years and the warming trend also decreases. This contradicts the theory, unless ad hoc excuses are used, as shown in my previous post using the HADCRUT4 data set.

(b) Those who agree with the consensus are called “Realists”, despite looking inwards towards common beliefs, while those who disagree are labelled “Contrarians”. This is not inaccurate when there is a dogmatic consensus. But it is utterly false to lump all those who disagree together as holding the same views, especially when no examples are provided of anyone who actually holds such views.

(c) The linear trend appears a more plausible fit than the series of “contrarian” lines. By implication, those who disagree with the consensus are portrayed as having a distinctly more blinkered and distorted perspective than those who follow it. Yet even in the gistemp data set (which gives the greatest support to the consensus views) there is a clear break in the linear trend. The less partisan HADCRUT4 data shows an even greater break.

Those who spot the obvious – that around the turn of the century warming stopped or slowed down, when in theory it should have accelerated – are given a clear choice. They can conform to the scientific consensus, denying the discrepancy between theory and data. Or they can act as scientists, rejecting the false and empirically empty consensus, and receive the full weight of the career-damaging opprobrium that accompanies doing so.

Figure 2: fig2-sks-realists – screenshot of the “realist” consensus view

Figure 3: fig3-sks-contras – screenshot of the “contrarian” view

Kevin Marshall

 

Notes on John Cook’s presentation at Bristol University

On Friday 19th September John Cook talked on “Dogma vs. consensus: Letting the evidence speak on climate change” at Bristol University. He was introduced by Stephan Lewandowsky, who is now a professor there. The slides are available at Skepticalscience. Here are some notes on the talk, along with brief comments.

The global warming hypothesis

John Cook started by asking people whether they could define the greenhouse effect, as a way of detecting whether people know what they are talking about. However, he did not then apply this criterion in evaluating consensus views.

He stated that there is no single cause of global warming (other causes including solar and volcanic), but that there is a major or principal one. From then on Cook proceeded as if there were a single cause. There was no evidence given for the relative size of each cause of global warming. Nor was there any consideration of the implications if AGW accounted for half or less of the warming rather than the entirety of it.

He stated that there are multiple lines of evidence for AGW actually operating, but made no mention of the quality of that evidence, or of contrary evidence that the pattern of warming does not fit the models.

Cook et. al 97% scientific consensus paper

Cook then went on to talk about his 97% consensus paper, and showed the Barack Obama tweet.

In the Q&A John Cook admitted to two things. First, the paper only dealt with declared belief in the broadest, most banal, form of the global warming hypothesis. That is, greenhouse gas levels are increasing and there is some warming as a consequence. Second, the survey included papers that were outside the realm of climate science1, quite possibly written by people without a science degree. The Barack Obama tweet account seems to have got the wrong impression.

This should be seen in the light of a comment about why consensus is important.

Communicating consensus isn’t about proving climate change. It addresses a public misconception about expert opinion.

John Cook has spectacularly failed on his own terms.

Fake Experts

Cook pointed to a petition of a few years ago signed by over 31,000 American scientists, opposing the Kyoto Treaty on the basis that it would harm the environment, hinder the advance of science and technology and damage the health and welfare of mankind. They also signed to say that there was no convincing evidence of catastrophic global warming.

He calls these people “fake experts” because they are “scientists”, but not “climate scientists”. But, as we have seen, neither were all the authors of papers in his climate consensus survey.

If scientists from other areas are “fake experts” on climate science, then this equally applies to those making statements in support of the “climate consensus”. That means all the statements by various scientific bodies outside of the field of “climate” are equally worthless. Even more worthless are proclamations by political activists and politicians.

But most of all, John Cook is not a “climate expert” either, as his degree is in physics.

Four Hiroshima Bombs and a Zillion Kitten Sneezes

As an encore, Cook had a short presentation on global warming. There were no hockey sticks showing the last thousand years of warming, or even a NASA Gistemp global surface temperature anomaly graph for the last century. The new graph is the earth’s cumulative heat energy accumulation since 1970, broken down into components. It was a bit like the UNIPCC’s graph below from AR5 WG1 Chapter 3. However, I do not remember the uncertainty bands being on Cook’s version.

Seeing that, I whispered to my neighbour “Four Hiroshima Bombs”. Lo and behold the next slide mentioned them. Not a great prediction on my part, as skepticalscience.com has a little widget. But an alternative variant of this was a zillion kitten sneezes a second, or some such preposterous figure. The next slide was a cute picture of a kitten. Cook seems to be parodying his work.

The Escalator with cherries on top

The last slide was of Cook’s “Escalator” graph, or at least the “Skeptics” view. The special feature for the evening was a pair of cherries in the top left, to emphasise that “skeptics” cherry-pick the evidence.

It was left flickering away for the last 15 minutes.
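The two readings that the “Escalator” contrasts can be sketched numerically. The anomaly series below is synthetic (a small trend plus random noise), so this only illustrates the two fitting choices, not any actual data set: one OLS line through everything, versus flat averages over short sub-periods.

```python
# Sketch of the "Escalator" contrast on a synthetic anomaly series:
# a single linear trend versus flat means over successive short windows.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2016)
anoms = 0.015 * (years - 1970) + rng.normal(0, 0.1, years.size)  # invented data

# One reading: a single OLS trend through the whole series (degrees per year)
slope, intercept = np.polyfit(years, anoms, 1)

# The other reading: flat averages over successive 8-year windows
window = 8
steps = [anoms[i:i + window].mean() for i in range(0, years.size, window)]

print(round(slope, 4))
print([round(s, 2) for s in steps])
```

Both descriptions are fitted to exactly the same numbers; which looks more plausible depends on what the underlying series actually did, which is why the choice of period matters so much in the trend arguments above.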

 

My thoughts on the presentation

Some of the genuine sceptics who left the room were seething, although they hung around and chatted.

But having reviewed my notes and the slides my view is different. John Cook started the presentation by trying to establish his expert authority on the global warming hypothesis. Then he let slip that he does not believe all global warming is from rising greenhouse gas levels. The centrepiece was the 97.4% scientific consensus paper where he was lead author. But, as Cook himself admitted, the survey looked for support for the most banal form of global warming, and the surveyed papers were not all written by climate scientists. Yet Barack Obama is enacting policy based on the false impression of a scientific consensus of dangerous warming.

Then in dissing an alternative viewpoint from actual scientists, Cook has implicitly undermined years of hard campaigning and entryism by green activists in getting nearly every scientific body in the world to make propaganda statements in support of the catastrophic global warming hypothesis and the necessity of immediate action to save the planet. Cook then parodied his own “four Hiroshima bombs a second” widget, before finishing off with a flickering gross misrepresentation of the sceptics, a number of whom were in the room listening politely.

The final question was from someone who asked why nearly all the questions were coming from sceptics, when the vast majority of the people in the room were in support of the “science”. At the end there was polite applause, and the room quickly emptied. I think the answer to the lack of questions was the embarrassment people felt. If John Cook is now the leading edge of climate alarmism, then the game is up.

Kevin Marshall

Notes

  1. This was in response to a question from blogger Katabasis, who pointed out some papers that were clearly not climate science, I believe using Jose Duarte’s list.

Michael Mann’s bias on Hockey Sticks

Two major gripes of mine with the “Climate Consensus” are their making unsubstantiated claims from authority, and a total failure to acknowledge when one of their own makes stupid, alarmist comments that contradict the peer-reviewed consensus.

An example is from Professor Michael Mann commenting on his specialist subject of temperature reconstructions of the past for a Skeptical science “97% Consensus” spin-off campaign.


I will break this statement down.

“There are now dozens of hockey sticks and they all come to the same basic conclusion”

His view is that warming is unprecedented, shown by dozens of hockey sticks that replicate his famous graph in the UNIPCC Third Assessment Report of 2001.

Rather than look at the broader picture of warming being unprecedented on any time scale1, I will concentrate on this one-thousand-year period. If a global reconstruction shows a hockey stick, then (without strong reasoned arguments to the contrary) one would expect the vast majority of temperature reconstructions from actual sites, by various methods, also to show hockey sticks.

CO2Science.com, in their Medieval Warm Period Project, have catalogued loads of these reconstructions from all over the world. They split them into two categories – quantitative and qualitative – according to the differentials in the average temperature estimates between the peak of the medieval warm period and now.

It would seem to me that Mann is contradicted by the evidence of dozens of studies, but corroborated by only a few. Mann’s statement of dozens of hockey sticks reaching the same basic conclusion ignores the considerable evidence to the contrary.

“The recent warming does appear to be unprecedented as far back as we can go”

Maybe, as Mann and his fellow “scientists” like to claim, the people behind this website are in “denial” of the science. Maybe they have just cherry-picked a few studies from a much greater number of reconstructions. So let us look at the evidence the SkS team provide. After all, it is they who are running the show. Under their article on the medieval warm period, there is the following graph of more recent climate reconstructions.


It would seem the “Mann EIV” reconstruction in green does not show a hockey stick, but flat (or gently rising) temperatures from 500-1000 AD; falling temperatures to around 1800; then an uptick starting decades before the major rise in CO2 levels post 1945. The twentieth century rise in temperatures appears to be about half the 0.7°C recorded by the thermometers, leading one to suspect that reconstructions understate past fluctuations in temperature as well. The later Ljungqvist reconstruction shows a more pronounced medieval warm period and a much earlier start of the current warming phase, in around 1700. This is in agreement with the Moberg and Hegerl reconstructions. Further, the Moberg reconstruction has a small decline in temperatures post 1950.

Even worse, the graphic was from the Pages2K site. On temperature reconstructions of the last two millennia Pages2K state:-

Despite significant progress over the last few decades, we still do not sufficiently understand the precise sequence of changes related to regional climate forcings, internal variability, system feedbacks, and the responses of surface climate, land-cover, and bio- and hydro-sphere.

Furthermore, at the decadal-to-centennial timescale we do not understand how sensitive the climate is to changes in solar activity, frequency of volcanic eruptions, greenhouse gas and aerosol concentration, and land cover.

So Michael Mann’s statement of warming being unprecedented is contradicted by peer-reviewed science. Skeptical Science published this statement even though it was falsified by Mann’s own published research and that of others.

“But even if we didn’t have that evidence, we would still know that humans are warming the planet, changing the climate and that represent a threat if we don’t do something about it”

There is no corroborating evidence for the climate models from temperature reconstructions. In fact, empirical data suggests that the models may be attributing to human causes temperature increases that are naturally caused, for reasons not fully understood. So the “knowing” must be assumed to come from belief, just as the threat, and the ability of the seven billion “us” to counter that threat, are beliefs as well.

Kevin Marshall

 

Notes

  1. The emergence from the Younger Dryas cooling period 11,500 years ago was at least 10 times the warming of the past 100 years, and was maybe in a period of less than 300 years. See WUWT article here, or the emerging story on the causes here.

NASA corrects errors in the GISTEMP data

In estimating global average temperatures there are a number of different measures to choose from. The UNIPCC tends to favour the British Hadley Centre HADCRUT data. Many of those who believe in the anthropogenic global warming hypothesis have a propensity to believe in the alternative NASA Goddard Institute for Space Studies data. Sceptics criticize GISTEMP due to its continual changes, often in the direction of supporting climate alarmism.

I had downloaded both sets of annual data in April 2011, and also last week. In comparing the two sets of data I noticed something remarkable. Over the last three years the two data sets have converged. The two most significant areas of convergence are in the early twentieth century warming phase (roughly 1910-1944) and the period 1998 to 2010. This convergence is mostly GISTEMP coming into line with HADCRUT. In doing so, it now diverges more from the rise in CO2.

In April 2011 I downloaded the HADCRUT3 data, along with GISTEMP. The GISTEMP data carries the same name, but the Hadley Centre has now replaced HADCRUT3 with HADCRUT4. With just three years between the two downloads, one would expect the four sets of data to be broadly in agreement. To check this I plotted the annual average anomaly figures below.

The GISTEMP 2011 annual mean data (in light blue) appears to be an outlier of the four data sets. This is especially so for the periods 1890-1940 and post 2000.

To emphasise this, I found the difference between data sets, then plotted the five-year centred moving average of the data.
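The smoothing step is straightforward to reproduce. A minimal sketch using pandas, with invented stand-in values rather than the actual GISTEMP/HADCRUT downloads:

```python
# Difference between two annual anomaly series, smoothed with a
# five-year centred moving average. The values are invented stand-ins.
import pandas as pd

years = range(2000, 2011)
gistemp = pd.Series([0.40, 0.54, 0.62, 0.61, 0.53, 0.67,
                     0.63, 0.66, 0.54, 0.64, 0.71], index=years)
hadcrut = pd.Series([0.29, 0.44, 0.50, 0.51, 0.45, 0.54,
                     0.50, 0.48, 0.39, 0.50, 0.56], index=years)

diff = gistemp - hadcrut
# Centred 5-year window: the first and last two years have no value
smoothed = diff.rolling(window=5, center=True).mean()

print(smoothed.round(3))
```

The centred window means each plotted point averages the two years either side of it, which is why the smoothed divergence lines start and end short of the raw data.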

The light green dotted line shows the divergence in data sets three years ago. From 1890 to 1910 the divergence goes from zero to 0.3 degrees. It reduces to almost zero in the early 1940s, increases to 1950, then reduces until the late 1960s. From 2000 to 2010 the divergence increases markedly. The current difference, shown by the dark green dotted line, shows much greater similarity. The spike around 1910 has disappeared, as has the divergence in the last decade. These changes are due more to changes in GISTEMP (solid blue line) than in HADCRUT (solid orange).

To see these changes more clearly, I applied OLS to the warming periods. I took the start of each period as the lowest year at its beginning, and the end point as the peak. The results for the early twentieth century were as follows:-
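The trend calculation here is ordinary least squares on the annual anomalies between the chosen start and end years. A minimal sketch with an invented anomaly series (not the actual data sets), expressing the slope in degrees per century:

```python
# OLS trend over one warming phase, from the low year at the start
# to the peak year. The anomaly series is invented for illustration.
import numpy as np

years = np.arange(1911, 1945)                                   # low year to peak year
anoms = -0.45 + 0.013 * (years - 1911) + 0.03 * np.sin(years)   # synthetic anomalies

slope_per_year, _ = np.polyfit(years, anoms, 1)  # least-squares fit of degree 1
warming_rate = slope_per_year * 100              # degrees per century

print(round(warming_rate, 2))
```

Because the fitted slope depends on the chosen start and end years, the comparisons below only make sense when the four data sets agree on where each warming cycle begins and peaks, which is exactly where GISTEMP 2011 differs.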

GISTEMP 2011 is the clear outlier for three reasons. First, it has the most inconsistent measured warming, just 60-70% of the other figures. Second, its beginning low point is the most inconsistent. Third, it is the only data set not to have 1944 as the peak of the warming cycle. The anomalies are below.

There were no such issues of start and end of the late twentieth century warming periods, shown below.

There is a great deal of conformity between these data sets. This is not the case for 1998-2010.

The GISTEMP 2011 figures seemed oblivious to the sharp deceleration in warming that occurred post 1998, which was also showing in satellite data. This has now been corrected in the latest figures.

The combined warming from 1976 to 2010 reported by the four data sets is as follows.

GISTEMP 2011 is the clear outlier here, this time being the highest of the four data sets. Different messages from the two warming periods can be gleaned by looking across the four data sets.

GISTEMP 2011 gives the impression of accelerating warming, consistent with the rise in atmospheric CO2 levels. HADCRUT3 suggests that rising CO2 has little influence on temperature, at least without another warming element being demonstrated that was present in the early part of the twentieth century but not in the latter part. The current data sets lean more towards HADCRUT3 2011 than GISTEMP 2011. Along with the clear pause from 1944 to 1976, this could explain why the early twentieth century warming is not examined too closely by the climate alarmists. The exception is DANA1981 at Skepticalscience.com, who tries to account for it by natural factors. As that analysis is three years old, it would be interesting to see an update based on more recent data.

What is strongly apparent from recent changes is that the GISTEMP global surface temperature record contained errors, or inferior methods, that have now been corrected. That does not necessarily mean that it is a more accurate representation of the real world, but that it is more consistent with the British data sets, and less consistent with strong forms of the global warming hypothesis.

Kevin Marshall

How Skeptical Science maintains the 97% Consensus fallacy

Richard Tol has at last published a rebuttal of the Cook et al 97% consensus paper. So naturally Skeptical Science, run by John Cook, publishes a rebuttal by Dana Nuccitelli, cross-posted at the Guardian’s Climate Consensus – the 97% column. I strongly believe in comparing and contrasting different points of view, and winning an argument on its merits. Here are some techniques Dana1981 employs that run counter to that view: discouraging the reader from looking at the other side by failing to link to opposing views, denigrating the opponents, and distorting the arguments.

Refusing to acknowledge the opponent’s credentials

Dana says

…… economist and Global Warming Policy Foundation advisor Richard Tol

These are extracts from Tol’s own biography, with my underlines

Richard S.J. Tol is a Professor at the Department of Economics, University of Sussex and the Professor of the Economics of Climate Change…. Vrije Universiteit, Amsterdam. Formerly, he was a Research Professor (in), Dublin, the Michael Otto Professor of Sustainability and Global Change at Hamburg University …..He has had visiting appointments at ……. University of Victoria, British Colombia (&)University College London, and at the Princeton Environmental Institute and the Department of Economics…….. He is ranked among the top 100 economists in the world, and has over 200 publications in learned journals (with 100+ co-authors), 3 books, 5 major reports, 37 book chapters, and many minor publications. He specialises in the economics of energy, environment, and climate, and is interested in integrated assessment modelling. He is an editor for Energy Economics, and an associate editor of economics the e-journal. He is advisor and referee of national and international policy and research. He is an author (contributing, lead, principal and convening) of Working Groups I, II and III of the Intergovernmental Panel on Climate Change…..

Dana and Cook can’t even get close – so they hide it.

Refusing to link to the Global Warming Policy Foundation

There is a link to the words. It goes to a desmogblog article which begins with the words

The Global Warming Policy Foundation (GWPF) is a United Kingdom think tank founded by climate change denialist Nigel Lawson.

The description on the GWPF’s own website is

We are an all-party and non-party think tank and a registered educational charity which, while open-minded on the contested science of global warming, is deeply concerned about the costs and other implications of many of the policies currently being advocated.

Failing to allow readers to understand the alternative view for themselves

The Guardian does not link to Tol’s article. The SkS article links only to the peer-reviewed paper, which costs $19.95. Bishop Hill’s blog, by contrast, links to Tol’s own blog, where he discusses the article in layman’s terms. There is also a 3 minute presentation video, created by the paper’s publishers, in which Tol explains the findings.

Distorted evidence on data access

Dana says

The crux of Tol’s paper is that he would have conducted a survey of the climate literature in a slightly different way than our approach. He’s certainly welcome to do just that – as soon as we published our paper, we also launched a webpage to make it as easy as possible for anyone to read the same scientific abstracts that we looked at and test the consensus for themselves.

Tol says

So I asked for the data to run some tests myself. I got a fraction, and over the course of the next four months I got a bit more – but still less than half of all data are available for inspection. Now Cook’s university is sending legal threats to a researcher who found yet another chunk of data.

The mysterious, threatened researcher

The researcher is Brandon Shollenberger.

Dana says

In addition to making several basic errors, Tol cited numerous denialist and GWPF blog posts, including several about material stolen from our team’s private discussion forum during a hacking.

Brandon gives a description of how he obtained the data at “wanna be hackers?“. It was not hacking, in the sense of by-passing passwords and other security, but following the links left around on unprotected sites. What is more, he used methods similar to those used before to get access to a “secret” discussion forum. This forum included some disturbing Photoshop images, including this one of John Cook, complete with insignia of the SkS website.

A glowing endorsement of counter critiques

Dana says

An anonymous individual has also published an elegant analysis showing that Tol’s method will decrease the consensus no matter what data are put into it. In other words, his 91% consensus result is an artifact of his flawed methodology.

So it must be right then, and also the last word?

Failing to look at the counter-counter critique

Dana, like other fellow believers, does not look at the rebuttal.

Bishop Hill says

This has prompted a remarkably rapid response from an anonymous author here, which says that Tol has it all wrong. If I understand it correctly, Tol has corrected Cook’s results. The critic claims to have worked back from Tol’s results to what should have been Cook’s original results and got a nonsense result, thus demonstrating that Tol’s method is nonsense.

Tol’s reply today is equally quickfire, and says that his critic, whom he has dubbed “Junior”, has not used the correct data at all.

Junior did not reconstruct the [matrix] T that I used. This is unfortunate as my T is online…

Junior thus made an error and blamed it on me.

Demonstration of climate science as a belief system

This is my personal view, not Tol’s, nor SkS’s.

Tol in his short presentation, includes this slide as a better categorization of the reviewed papers.

My take on these figures is that 8% give an explicit endorsement, and two-thirds take no position. Taking out the 7970 with no position gives 98.0%. Looking at just those 1010 that take an explicit position gives a “97.6% consensus”.

I accept Jesus as my Lord and Saviour, but I would declare as bunkum any similar survey that scanned New Testament theology peer-reviewed journals to demonstrate the divinity of Christ from the position taken by the authors. People study theology because they are fellow Christians. Atheists or agnostics reject it out of hand. Many scholars are employed by theological colleges that exist to train people for ministry. Theological journals would be unlikely to accept articles that openly questioned the central tenets of Christianity. If they did, many seminaries (but not many Universities) would not subscribe to the publication. In the case of climatology, publishing a paper openly critical of climatology gets a similar reaction to publishing views that some gay people might be so by choice, rather than by discovering their true nature, or that Vladimir Putin’s annexation of Crimea is not dissimilar to Hitler’s annexation of the Sudetenland in 1938.

The lack of internal disagreement, and the reactions to objections, I would interpret as signs that “climate science” is an alternative belief system. People with a superior understanding of their subject area have nothing to fear from allowing comparison with alternative and inferior views.

 Kevin Marshall

 

 

The Role of Pivot Tables in Understanding Lewandowsky, Oberauer & Gignac 2012

Summary

Descriptive statistics, particularly in the form of pivot tables, bridge the gap between public pronouncements and the high-level statistical analysis that can only be performed by specialists. In empirically-based scientific papers, data analysis by spreadsheet enables robust questions to be asked by the non-specialist and the expert reviewer alike. In relation to Lewandowsky et al. 2012, it highlights the gulf between the robust public claims and the actual opinion-poll results on which they are based.

Introduction

In a blog post “Drilling into Noise” on 17 September, Stephan Lewandowsky (along with co-author Klaus Oberauer) makes an interesting comment

The science of statistics is all about differentiating signal from noise. This exercise is far from trivial: Although there is enough computing power in today’s laptops to churn out very sophisticated analyses, it is easily overlooked that data analysis is also a cognitive activity.

Numerical skills alone are often insufficient to understand a data set—indeed, number-crunching ability that’s unaccompanied by informed judgment can often do more harm than good.

This fact frequently becomes apparent in the climate arena, where the ability to use pivot tables in Excel or to do a simple linear regressions is often over-interpreted as deep statistical competence.

Now let me put this in context.

    The science of statistics is all about differentiating signal from noise. This exercise is far from trivial:

A more typical definition of statistics is

Statistics is the study of how to collect, organize, analyze, and interpret numerical information from data.

So Lewandowsky and Oberauer appear to have a narrow and elitist interpretation.

“it is easily overlooked that data analysis is also a cognitive activity.”

Lewandowsky and Oberauer are cognitive scientists. They are merely claiming that this is within their area of competence.

Numerical skills alone are often insufficient to understand a data set—indeed, number-crunching ability that’s unaccompanied by informed judgment can often do more harm than good.

Agreed – but that implies that what follows should demonstrate something unique that can only be gained by higher-level or “scientific” analysis.

This fact frequently becomes apparent in the climate arena, where the ability to use pivot tables in Excel or to do a simple linear regressions is often over-interpreted as deep statistical competence.

I have not found pivot tables used before to analyse data in the climate arena, nor simple linear regressions passed off as deep statistical competence. The heavyweight statistical analysis from those who dispute the science has centred on one person – Steve McIntyre. In fact, to my knowledge, the first instance of pivot tables being presented as primary analysis by sceptics was when I published my own analysis.

I would quite agree that pivot tables are not a replacement for deep statistical analysis, but they have a role. My analysis using pivot tables, published on 1st September, brought out a number of things which I identified independently and which are not brought out in the original paper. These I present below. Then I will suggest how the reporting in the mainstream media might have been somewhat different had the journalists seen the pivot-table summaries. Finally, I will make some suggestions as to how low-level statistical analysis can contribute alongside more “scientific” statistics.

Analysis using pivot tables

How Many Sceptics?

When I first glanced through the paper at the end of July, I wrote

It was an internet based survey, with links posted on 8 “pro-science” blogs. Five skeptic blogs were approached. As such, one would expect that “pro-science” responses would far outweigh “denialist” responses. I cannot find the split.

On obtaining the data, this was what I first looked at. In the posting I looked at the four climate-science questions, classifying respondents into acceptors and rejectors (“denialists”) of the science.


Or summarising into 3 categories


Those who dogmatically rejected every question were outnumbered more than 10 to 1 by those who dogmatically accepted every question. Those who accept the science comprise three-quarters of the respondents. Most people would consider this material to a paper analysing those who reject the science.

NASA faked the moon landing|Therefore (Climate) Science is a Hoax

This is the beginning of the title of the paper. Pivot tables are great for analysing this. Set the row labels to “Climate Science Belief”, the columns to CYMoon, and under “∑ values” enter a count of any other column of values.

After a bit of formatting, and three more columns of simple formulas, I got this.


Just 10 out of 1145 respondents agree that NASA faked the moon landings. (I was not the first to publish this result – Anthony Watts pipped me by a few hours.)
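A pivot table that counts respondents by two classifications is, at bottom, just a tally of (row label, column label) pairs. The sketch below uses invented responses, not the actual survey data, to show the idea.

```python
from collections import Counter

# Hypothetical survey rows: (climate-science belief, CYMoon response).
# Responses run 1 (strongly disagree) to 4 (strongly agree); these six
# rows are invented for illustration, not Lewandowsky's data.
rows = [
    ("acceptor", 1), ("acceptor", 1), ("acceptor", 2),
    ("rejector", 1), ("rejector", 4), ("acceptor", 1),
]

# A pivot table with row labels, column labels and "count of values"
# is simply a tally of (row, column) pairs.
pivot = Counter(rows)
print(pivot[("rejector", 4)])  # rejectors who strongly agree with CYMoon -> 1
```

The same tally, laid out as a grid with belief down the side and CYMoon response along the top, is exactly what Excel produces.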

Strictly, this is a claim relating the “climate change” conspiracy theory CYClimChange to CYMoon, so I did this table as well.


Of the 64 who strongly accept the climate-change conspiracy theory, just 2 also strongly accept CYMoon. Worse, the title runs the other way round: the sample of those who believe NASA faked the moon landings is just 10, far too small to base a prediction on. Worse still, you could get the wrong result because of the next issue.

Identification of scam responses

One test was to look at the average agreement with each of 12 conspiracy theories that were independent of the climate area. So I rounded each respondent’s average response to the nearest whole number, and then did a pivot table.


I believe I was the first to identify publicly the two that averaged 4 on the conspiracy theories and rejected the climate science. These are the two that Steve McIntyre has dubbed “super-scammers”.
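The scam-response test described above can be sketched in a few lines. The two respondent rows are invented for illustration; the scoring scale (1 = strongly disagree to 4 = strongly agree) follows the survey.

```python
# Sketch of the scam-response test: average each respondent's answers
# to the 12 non-climate conspiracy items and round to the nearest
# whole number. A "super-scammer" averages near 4 (strong agreement
# with every conspiracy theory). Both rows below are invented.
typical = [1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 2]
scammer = [4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 4]

def rounded_average(responses):
    """Average the 1-4 responses and round to the nearest whole number."""
    return round(sum(responses) / len(responses))

print(rounded_average(typical))  # 1
print(rounded_average(scammer))  # 4
```

Feeding the rounded averages into a pivot table, with climate-science belief as the row labels, is what flags the anomalous respondents.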

The biggest conclusion that I see is that the vast majority of respondents, no matter what their views on climate, don’t have much time for conspiracy theories. In fact, if you take out the two super-scammers, the most sceptical bunch of all are the group that dogmatically reject the climate science.

This is confirmed if you take the average conspiracy score for each group.


Taking out the two super-scammers brings the average for the dogmatic rejectors from 1.63 to 1.49. With such small numbers, one or two outliers can have an impact on the data.
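The effect of those two outliers can be checked with simple arithmetic. The group size of 36 used below is back-calculated so that the two quoted averages (1.63 and 1.49) are mutually consistent; it is an assumption for illustration, not a figure taken from the data.

```python
# Back-of-envelope check of the outlier effect: removing two
# respondents who average 4.0 from a small group shifts the group
# average noticeably. The group size of 36 is an assumed
# back-calculation from the two quoted averages.
n = 36                    # assumed size of the dogmatic-rejector group
avg_with = 1.63           # quoted average including the super-scammers
superscammer_score = 4.0  # each super-scammer averaged 4 on the items

avg_without = (n * avg_with - 2 * superscammer_score) / (n - 2)
print(round(avg_without, 2))  # 1.49
```

With a group this small, two extreme responses move the average by nearly a tenth of a point, which is the point being made above.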

Measuring up against the public perception

There were two major newspaper articles that promoted the paper.

The Guardian article on 27th July started

Are climate sceptics more likely to be conspiracy theorists?

New research finds that sceptics also tend to support conspiracy theories such as the moon landing being faked

Even a paper such as the Guardian, which prints all sorts of extremist dogma in denigrating sceptics, would have thought twice about publishing that comment if they had been presented with the tables.

The Telegraph article of 28th August included

“NASA faked the moon landing – Therefore (Climate) Science is a Hoax: An Anatomy of the Motivated Rejection of Science”, was based on a survey of more than 1000 visitors to blogs dedicated to discussion of climate change.

An astute reporter, on the basis of my pivot tables, could reasonably ask Professor Lewandowsky how it follows from just 10 respondents who support the idea that “NASA faked the moon landing” that you can make any sort of prediction about beliefs about climate. The questionnaires were placed on climate blogs, not conspiracy-based blogs, so surely any prediction should be framed the other way round?

It also included

The lead researcher, Professor Stephan Lewandowsky, from the University of Western Australia, said conspiracy theories are the “antithesis to scientific thinking” and those who believe them are more likely to reject the scientific consensus that humans contribute to climate change.

“Science is about weeding out bad ideas,” he told The Daily Telegraph. “With conspiracy theories, you start out with a theory and stick to it no matter what the evidence. So it is not that surprising that conspiracy theorists would not accept scientific propositions … If the scientific evidence is overwhelming and you don’t like the conclusion, you have to find a way to reject those findings.”

An astute reporter, on the basis of my pivot tables, could reasonably ask why Professor Lewandowsky is still sticking to his hypothesis when such small numbers support the wacky conspiracy theories. They might then ask a supplementary question: given that there were 15 questions on conspiracy theories (14 with results reported) and just 5 on free markets, was not the original purpose to establish the conspiracy-theory hypothesis, with the political-orientation hypothesis secondary?

Suggestions for the Role of Low Level Statistical Analysis

In summary, whilst I quite agree that spreadsheet analysis using pivot tables is not a replacement for deep statistical analysis, there are a number of ways in which it can be a powerful aid.

Firstly, it is a quick way of getting a sense of what the data is suggesting. Pivot tables enable a quick visual summary in lots of different ways. It may need additional classifications, such as my acceptors / rejectors. It also needs thought, and an almost manic sense of trial and error.

Second, it can give a quick comparison to what is being derived from the higher level statistics or modelling. For scientists it is a way of reviewing the data, to make sure that they have the most important points, and have not gone up blind alleys. For non-scientists (and for those scientists reviewing the work of others) it is a way of quickly getting a sense of whether the conclusions are substantiated by the empirical evidence.

Thirdly, and most importantly, it is a means of communicating to the wider public. It provides a bridge between the mainstream media and the scientists. If climate scientists want to win the trust of the wider public, then they need to relate their work in more intelligible terms, capable of being cross-examined. Instead we have the high-level models and then a lot of shouting about how wrong and biased any criticisms are. That leads to a lot of scientists, including Lewandowsky, who are totally incapable of perceiving that they could be wrong, or that there could be even a modicum of truth in what the critics say. This denial is best summarised in the graphic displayed in the Lewandowsky and Oberauer posting of the “skeptics” view on recent warming trends. It is a total misrepresentation, used as a means of avoiding intelligent discussion.

 

Kevin Marshall

Lewandowsky et al. 2012 MOTIVATED REJECTION OF SCIENCE – Part 5 the Missing Links

Jo Nova has now provided the first full list of the survey questions used for the Lewandowsky, Oberauer & Gignac paper, along with a well-written summary. However, there are a number of elements that need to be emphasised

  1. If “climate denial” is on a par with “holocaust” or “smoking” denial, why not start by referencing the clearest statement of the evidence, rather than past opinion surveys? That is, if direct evidence is available, why resort to hearsay evidence?
  2. But if opinion surveys are used, then they should at least be good ones. But the primary references are Anderegg, Prall, Harold, & Schneider, 2010 (Most climate scientists believe in what they do) and Doran & Zimmerman, 2009 (97% of climate scientists = 75/77 cut from >3000 responses).
  3. Even so, surely the association with NASA Moon Landings was correct? After all, the title is “NASA faked the moon landing|Therefore (Climate) Science is a Hoax: An Anatomy of the Motivated Rejection of Science.”
    Not when 93% of all respondents gave it a firm thumbs down.
  4. When Lewandowsky says over 1100 responses, and only talks about those who “reject the science”, it surely implies that all (or at least the vast majority) of responses were from the people he is attacking? Actually, around 15% of responses were from sceptics, in terms of answers to the four “climate science” questions. Professional polling organisations in the UK routinely state such figures, but a scientific journal seems not to have insisted.
  5. There are loads of conspiracy theories. But one of the most popular in recent years is something like “Climate denial only exists as a serious force due to significant funding by oil and tobacco interests.” Lewandowsky and his junior partners cannot have missed that one.

The basic psychology behind this can be found in “The Debunking Handbook” on the front page of the skepticalscience website. Here is the justification for lying, ad hominem attacks and continued government grants to a failed research programme. They know the truth, and are claiming a monopoly of that truth. But to legitimately claim a monopoly it is necessary to show the corollary: that every person who disagrees with you is wrong on everything. In empirical sciences this leaves no gap for different interpretations of the same data; no gap for the unexplained; no gap for hypotheses or assumptions to be falsified; and no gap for new data contradicting old data or forecasts. Lewandowsky’s opinion poll applies the approach of “The Debunking Handbook” to justify one version of climate science retaining its monopoly, by showing that opponents are a load of undesirable nutters. His claimed monopoly is not just full of gaps. Like past claimants to the throne of dictators of truth, he is more wrong than his detractors.

But if you do not have a monopoly of the truth in climate science, what is the alternative? What if there is a potential future threat, which is very real, but for which there is very little firm evidence? A tentative proposal will be the subject of my next posting.

Michael Mann’s narrow definition of “Skepticism”

Climate scientist Michael Mann continues his dogged defence of the climate consensus at ThinkProgress.

Consider the following statement

Make no mistake: Skepticism is fundamental to good science. Whenever a conclusion is drawn or a proposition is made, the demand that it stand up to scrutiny is the self-correcting machinery that drives us towards a better understanding of the way the world works. In this sense, every scientist should be a skeptic. Good science responds to good faith challenges, and to contradictory evidence that is presented, and climate-change science should be no different.

The statement above is at first beguiling, and its spirit is something that many would agree with, although “good faith challenges” allows for discrimination against people you disagree with. However, it is his meaning of “scepticism” that I want to take issue with here.

Mann’s definition is most clearly expressed by John Cook of “Skeptical Science”, but is also supported by (amongst others) Tamino of the “Open Mind” blog. The clearest expression is in the article “Are you a genuine skeptic or a climate denier?”

Genuine skeptics consider all the evidence in their search for the truth. Deniers, on the other hand, refuse to accept any evidence that conflicts with their pre-determined views.

Compare this with a more established source of word definitions – the Oxford English Dictionary. I don’t have the full 20 volume edition, but I think my 1983 book club edition of the Shorter OED will do well enough. There are a number of definitions of “sceptic” on page 1900.

Definition 1 pertains to a school of philosophy after the Greek Pyrrho, which doubts the possibility of knowledge of any kind.

Definition 2 is someone who doubts the validity of knowledge claims in a particular area of inquiry. This includes, but is not confined to, the natural sciences. In the climate area an example is the Climate Realists, like Tallbloke, who doubt the greenhouse-gas theory.

Definition 2.1 “one who maintains a doubting attitude with reference to a particular question or statement“. The OED has this as the popular definition.

Definition 3 is one who doubts the truth of Christianity. An older definition, not applicable here.

Definition 4 is one who is seeking the truth. That is “an inquirer who has not arrived at definite convictions“. This is only occasionally used, at least in the late 20th century.

Cook’s definition is at odds with all the definitions in the dictionary: there is nothing there about how much evidence a genuine skeptic must consider. Indeed, by his own definition Cook is not a skeptic. More seriously, Cook is disagreeing with the experts in their field. According to Cook’s definition, a skeptic is someone who formerly had a doubting attitude, as in 2.1, but has now fallen into line. The philosophers become a school of deniers. (Desmogblog will no doubt now unearth evidence that they were in the pay of big olive-oil producers.) Someone who still doubts the truth of climate change will not have “considered all the evidence” yet; for those who have read the evidence, this category ceases to exist. The doubter of Christianity is irrelevant, whilst the seeker of truth is merely someone behind the curve.

But then who do you believe on the definition of “skeptic / sceptic”? A consensus of the world’s leading experts, or a group of dogmatic people using language for partisan purposes?

NB. I use sceptic with a “c” to denote the expert definition, and with a “k” to denote the partisan definition. However, I quite realise the use of “c” was probably a result of King George III trying to separate Britain from the revolting colonies by means of a common language.

With respect to Dr Mann, I may have got him totally wrong. Maybe he does not realise that skepticalscience.com is based on a misrepresentation. If Dr Mann (or a nominated associate) would like to clarify that he follows expert opinion, I will be more than happy to distance him from the polemicists who allegedly support him.

Antarctic Ice Melt at the dogmatic “Skeptical Science”

Have just posted the following at BishopHill (who has been looking at review comments at skepticalscience.com)

The comments are not the major issue with skepticalscience. It is the analysis. It picks from the peer-reviewed data to give the most alarmist spin, often ignoring the more rounded, more recent and less alarmist articles or data. (a pattern familiar to those who have read the Hockey Stick Illusion)

On Antarctic ice melt, this is certainly the case. SkS relies on a single author – Velicogna (two papers, 2006 & 2009) – to substantiate the claim that Antarctic ice is not just melting, but that the melt is accelerating. The 2009 paper looked at only six years of data. Yet less than two months ago a paper was published that looked at a much longer period, at various studies (including Velicogna) and at different ways of estimating. It concluded that there may be some ice loss, but no acceleration. Anthony Watts summarises this paper quite nicely at http://wattsupwiththat.com/2011/07/27/antarctic-ice-%E2%80%93-more-accurate-estimates/

Watts’s article also links to the original article. Do not take the word of a (slightly) manic beancounter. Do the comparison and you will find that the SkS is anything but sceptical and far from scientific.

I would suggest that this is not an isolated incident either. I have found at least two more. Perhaps others could have a delve?

Would anyone like some suggestions where to start comparing SkS with other (more rounded) viewpoints?

  1. Why has it not warmed since 1998? SkS says it is because the oceans have been warming instead. But their data stops in 2003 – just when we started to get far more accurate data from some fancy buoys (search wattsupwiththat). Ocean heat has stabilised since then. The air is not warming, neither are the oceans, so the alarmists have to go beyond the measurable.
  2. The economic case for carbon pricing has been made – if you take economic models as reality, ignore the contrary arguments and, most importantly, ignore the public-policy problems. I explain in theoretical terms here. Alternatively, read books by Roger Pielke Jnr (The Climate Fix), Nigel Lawson (An Appeal to Reason), or Tim Worstall (Chasing Rainbows) for a better understanding of why the policies will necessarily fail.

Will add others when I come across them.


Another example of Censorship of Skeptics

The blog Zone5 (written by an environmentalist who is thoughtfully sceptical of global warming) has had an article taken down from what has been one of the more moderate pro-CAGW blogs. I left the following comment

The removal of your article is another small example of what you were writing about. Any attempt to offer counter arguments, or to criticize, is being shut down. This is true of blog comments or of peer-reviewed papers. But enough of the negative. Your article made some excellent points, particularly on Al Gore’s movie

First he misrepresents the science by claiming we are facing near certain doom, then he completely downplays the kind of changes we would have to make to prevent catastrophe if we accept the worst case scenario.

It is the crux of what I consider to be the problem of the climate-change agenda. I believe there is quite strong science to back up the claim that a doubling of CO2 will cause about one degree of warming. Maybe the climate models are right, and this effect will be doubled or more by cloud feedbacks (though the virulence with which scientific papers suggesting otherwise have been attacked, while a similarly weak rebuttal suggesting the opposite was greatly praised, suggests this is an Achilles heel). However, your comment on Al Gore’s film neatly summarises the issue in general. The potential effects of climate change are over-estimated in two ways: magnitude and likelihood. The most important magnitude is time. For instance, the potential sea-level rise is treated as if it would be metres per year – so fast that large areas of land would be swamped before the harvest could be brought in. But even if global temperatures rose by five degrees in a generation (very unlikely), the resultant sea-level rise would be sufficiently slow to relocate homes and agriculture, or to build dykes. People’s ability to adapt rapidly to changes is remarkable, as emigrants from Britain to Australia (or from Asia to Britain) can testify, yet this is vastly underplayed.

The downplaying of effective policy issues is, if anything, even worse. It is assumed that with a little extra tax, everybody will switch to electric cars or bicycles, and plug a few draughts to cut heating bills by 90%. All this until we get a technological breakthrough in a few years allowing super-abundant, near-costless, carbon-free power. If Britain (or the EU) takes the lead, then everybody else will follow. No problem about over-running on costs, or pursuing the wrong type of green energy. No concern that a million or more families will enter fuel poverty every year, whilst the country still falls far behind on its emissions-reduction targets.

The overplay of risks / underplay of policy costs was put in a more sophisticated way in the Stern Review. I have attempted to analyse this at

https://manicbeancounter.wordpress.com/2011/02/11/climate-change-policy-in-perspective-%E2%80%93-part-1-of-4/

Please continue to encourage people to think for themselves and compare the various perspectives.

Is this another example of shutting down any sort of dissent, like the increasing dogmatism & extremism of Skeptical Science? (see here, here and here).