John Cook undermining democracy through misinformation

It seems that John Cook was posting comments in 2011 under the pseudonym Lubos Motl. The year before, the physicist and blogger Luboš Motl had posted a rebuttal of Cook's then 104 Global Warming & Climate Change Myths. When someone counters your beliefs point for point, most people would naturally feel some anger. Taking the online identity of Motl is potentially more than identity theft: it can be viewed as an attempt to damage the reputation of someone you oppose.

However, there is a wider issue here. In 2011 John Cook co-authored, with Stephan Lewandowsky, The Debunking Handbook, which is still featured prominently on skepticalscience.com. This short tract starts with the following paragraphs:-

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

A common misconception about myths is the notion that removing its influence is as simple as packing more information into people’s heads. This approach assumes that public misperceptions are due to a lack of knowledge and that the solution is more information – in science communication, it’s known as the “information deficit model”. But that model is wrong: people don’t process information as simply as a hard drive downloading data.

If Cook was indeed using the pseudonym Lubos Motl then he was knowingly putting misinformation into the public arena in a malicious form. If he misrepresented Motl's beliefs, then the public may not know whom to trust. Targeted against one effective critic, it could trash their reputation. At a wider scale, it could allow morally and scientifically inferior views to gain prominence over superior viewpoints. If the alarmist beliefs were superior, it would not be necessary to misrepresent alternative opinions. Open debate would soon reveal which side had the better views. But in debating and disputing, all sides would sharpen their arguments. What would quickly disappear is the reliance on opinion surveys and the rewriting of dictionaries. Instead, proper academics would be distinguishing quality, relevant evidence from dogmatic statements based on junk sociology and psychology. They would start defining the boundaries of expertise between the basic physics, computer modelling, results analysis, public policy-making, policy implementation, economics, ethics and the philosophy of science. They might then start to draw on the understanding that has been achieved in these subject areas.

Kevin Marshall

Economic v Climate Models

Luboš Motl has a polemical look at the supposed refutation of a sceptic's arguments. This is an extended version of my comment.

Might I offer an alternative view of item 30 – economic v climate models?

Economic models are different from climate models. They try to model empirical generalisations and (with a bit of theory and a lot of opinion) to forecast future trends. They tend to be best over the short term, when things are pretty much the same from one year to the next. The consensus of forecasts is pretty useless at predicting discontinuities in trends, such as the credit crunch. At their best, their forecasts are little better than the dumb forecast that next period will be the same as last period. In general, the accuracy of economic forecasts is inversely proportional to their utility.
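As a rough illustration of that "dumb forecast" benchmark, here is a minimal sketch; all the series and forecast values below are invented for illustration, not real data.

```python
# A sketch of the "dumb forecast" benchmark: predict that next period
# equals last period, and compare mean absolute error (MAE) against a
# hypothetical model's forecasts. All numbers here are invented.
import numpy as np

actuals  = np.array([2.1, 2.3, 2.2, 2.5, 2.4, 1.0, 0.8, 1.2])  # e.g. % growth
model_fc = np.array([2.2, 2.2, 2.4, 2.4, 2.5, 2.3, 1.1, 1.0])  # hypothetical

naive_fc = np.empty_like(actuals)
naive_fc[0] = actuals[0]       # no prior period for the first observation
naive_fc[1:] = actuals[:-1]    # "next period the same as last period"

def mae(forecast: np.ndarray) -> float:
    return float(np.mean(np.abs(forecast - actuals)))

print(f"model MAE: {mae(model_fc):.2f}, naive MAE: {mae(naive_fc):.2f}")
# Both miss the discontinuity (the drop from 2.4 to 1.0) - consensus
# forecasts rarely call turning points in advance.
```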

Climate models are somewhat different, according to Dr MacCracken.

“In physical systems, we do have a theory—make a change and there will be a response in largely understandable and calculatable ways. Models don’t replace theory; their very structure is based on our theoretical understanding, which is why they are called theoretical models. All that the computers do is to very rapidly make the calculations in accord with their theoretical underpinnings, doing so much, much faster than scientists could with pencil and paper.”

The good doctor omits to mention some other factors. It might be the case that climate scientists have captured all the major components of the climate system (though clouds are a significant problem), but he omits the question of measurement. The interplay of complex factors can produce unpredictable outcomes, depending on timing and extent as well as on the theory. The climate models, though similar in theory and scope, come up with widely different forecasts. Even this variation is probably limited by sense-checking the outcomes and making ad hoc adjustments. If the models are basically correct, then major turning points should be capable of being predicted. The post-1998 stasis in temperatures, the post-2003 stability in sea temperatures and the decline in hurricanes after Katrina are all indicators that the models are overly sensitive. The novelties that the models do predict tend not to appear, while the novelties that do occur are not predicted.

If it is the case that climate models still boldly proclaim a divergence from trend, whilst economic models are much more modest in their claims, is this not an indicator of the climate models' superiority? It would be, if one could discount the various economic incentives. Economic models are funded by competing institutions, some private sector and some public sector. For most figures there is forecast verification monthly (e.g. inflation, jobs) or quarterly (growth). If a model were consistently an outlier it would lose standing, as the forecasts are evaluated against each other. If it were more accurate, the press would quote it, which is good name placement for the organisation. For the global warming forecasts, there is not even annual verification. The incentive is either to conform, or to provide more extreme ("it is worse than we thought") prognostications. If the model projections basically said "chill out, it ain't that bad, man", the authors would be ostracised and called deniers. At a minimum the academics would lose status, and ultimately they would lose out on the bigger research grants.

(A more extreme example is a major earthquake forecast. "There will not be one today" is a very accurate prediction. In the case of the Tokyo area over the last 100 years it would have been wrong only twice in roughly 36,500 days – an error rate of less than 1 in 10,000.)
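A quick back-of-the-envelope check of that arithmetic, taking the two failures in a century from the example above:

```python
# Daily "no major earthquake today" forecast for Tokyo, wrong twice in
# roughly a century (the counts are from the example in the text).
days = 100 * 365.25
wrong = 2
print(f"error rate: {wrong / days:.6f}, about 1 in {days / wrong:,.0f} days")
# ~0.000055, i.e. roughly 1 failure in 18,000 days - better than 1 in 10,000.
```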

A note on HADCRUT3 v GISSTEMP

Have just posted to WUWT the following on global temperature anomalies:-

Thanks Luboš for a well-thought-out article, nicely summarised by

“The “error of the measurement” of the warming trend is 3 times larger than the result!”

One of the implications of this wide variability, and of the concentration of temperature measurements in a small proportion of the land mass (with very little from the oceans, which cover 70% of the globe), is that one must be very careful in the interpretation of the data. Even if the surface stations were totally representative and uniformly accurate (no UHI) and the raw data properly adjusted (remember Darwin, Australia on this blog?), there are still normative judgements to be made to arrive at a figure.
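To illustrate what an "error of the measurement" of a warming trend means, here is a minimal sketch estimating a linear trend and its standard error by ordinary least squares on a synthetic noisy anomaly series; the trend and noise levels are invented assumptions, not HADCRUT3 or GISSTEMP values.

```python
# Estimating a warming trend and its standard error by ordinary least
# squares on a synthetic monthly anomaly series. The trend and noise
# levels are illustrative assumptions, not real HADCRUT3/GISSTEMP values.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(120.0)            # ten years of monthly data
true_trend = 0.005 / 12              # 0.005 C/year, in C/month
anomalies = true_trend * months + rng.normal(0.0, 0.2, months.size)

X = np.column_stack([np.ones_like(months), months])
coef, *_ = np.linalg.lstsq(X, anomalies, rcond=None)
resid = anomalies - X @ coef
sigma2 = resid @ resid / (months.size - 2)    # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)         # covariance of coefficients
slope, slope_se = coef[1], np.sqrt(cov[1, 1])

print(f"trend: {slope * 12:.4f} C/yr +/- {slope_se * 12:.4f} C/yr")
# With noise of this size over a short period, the standard error can
# exceed the estimated trend itself - the situation Motl describes.
```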

I have done some (much cruder) analysis comparing HADCRUT3 to GISSTEMP for the period 1880 to 2010, which helps illustrate these judgemental decisions.

1. The temperature series agree on the large fluctuations, with the exception of the post-1945 cooling – it happens 2 or 3 years later and more slowly in GISSTEMP.

2. One would expect greater agreement in more recent years. But since 1997 the difference in temperature anomalies has widened by nearly 0.3°C – GISSTEMP showing rapid warming and HADCRUT3 showing none.

3. If you take the absolute change in anomaly from month to month and average it from 1880 to 2010, GISSTEMP is nearly double HADCRUT3 – 0.15 degrees v 0.08. This divergence in volatility narrowed from 1880 to the middle of last century, when GISSTEMP was around 40% more volatile than HADCRUT3, but since then the relative volatility has increased. The figures for the last five years are about 0.12 and 0.05 degrees respectively. That is, GISSTEMP is around 120% more volatile than HADCRUT3 (a sketch of this volatility measure follows below).
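The volatility measure in point 3 is just the mean absolute month-to-month change in the anomaly series. A minimal sketch, using random stand-ins with different noise levels rather than the real HADCRUT3 and GISSTEMP data (loading the actual series is assumed to be done elsewhere):

```python
# Mean absolute month-to-month change, the volatility measure in point 3.
# `hadcrut3` and `gisstemp` below are random placeholders for monthly
# anomaly data (degrees C) covering 1880-2010, not the real datasets.
import numpy as np

def mean_abs_monthly_change(anomalies: np.ndarray) -> float:
    """Average of |change in anomaly| between consecutive months."""
    return float(np.mean(np.abs(np.diff(anomalies))))

rng = np.random.default_rng(1)
n_months = 131 * 12                       # 1880-2010 inclusive
hadcrut3 = rng.normal(0, 0.06, n_months)  # smoother stand-in series
gisstemp = rng.normal(0, 0.11, n_months)  # noisier stand-in series

v_had = mean_abs_monthly_change(hadcrut3)
v_gis = mean_abs_monthly_change(gisstemp)
print(f"HADCRUT3: {v_had:.3f} C, GISSTEMP: {v_gis:.3f} C, "
      f"ratio: {v_gis / v_had:.2f}")
```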

This all indicates that there must be greater clarity in the figures. We need the temperature indices to be compiled by qualified independent statisticians, not by those who majored in another subject. This is particularly true of the major measure of global warming, where there is more than a modicum of partisanship.

These graphs help illustrate the points made. Please note that I use overlapping moving averages, so they are for illustrative purposes only.
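For reference, an overlapping moving average simply averages a sliding window of consecutive months, so adjacent smoothed points share most of their data and are not independent – hence "illustrative purposes only". A minimal sketch; the 12-month window is an assumption:

```python
# Overlapping (rolling) moving average: each output point averages a
# window of consecutive months, and successive windows overlap, so the
# smoothed points are not independent. The 12-month window is assumed.
import numpy as np

def rolling_mean(series: np.ndarray, window: int = 12) -> np.ndarray:
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

smoothed = rolling_mean(np.random.default_rng(2).normal(size=240))
print(smoothed[:3])   # first three smoothed values
```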

NB. Luboš Motl’s article was cross-posted from his blog here