Gergis 2012 Mark 2 – Hurdles to overcome

Bishop Hill reported yesterday on the withdrawn Gergis paper that

The authors are currently reviewing the data and methods. The revised paper will be re-submitted to the Journal of Climate by the end of July and it will be sent out for peer review again.

It is worth collating the long list of criticisms that have been made of the paper. There are a lot of hurdles to overcome before Gergis et al 2012 can qualify for the status of a scientific paper.

My own, quite basic, points are:-

  1. Too few proxies for such a large area. Just 27 for > 5% of the globe.
  2. Even then, 6 are well outside the area.
  3. Of these six, Gergis’s table makes it appear 3 are inside the area. My analysis is below.


  4. Despite the huge area, there are significant clusters – with massive differences between proxies at the same or nearby sites.
  5. There are no proxies from the sub-continental land mass of Australia.
  6. The Palmyra proxy needs to be removed because (a) it has errant readings, (b) it fails the ‘t’ test, and (c) it is > 2000 km outside the area, in the Northern Hemisphere.
  7. Without Palmyra the medieval period becomes the warmest of the millennium. But with just two tree ring proxies, one at 42° S and the other at 43° S representing a range from 0° S to 50° S, this is hardly reliable. See the sum of proxies by year. Palmyra is the coral proxy in the 12th, 14th and 15th centuries.


On top of this are Steve McIntyre’s (with assistance from JeanS and RomanM) more fundamental criticisms:-

  1. The filtering method of Gergis excluded the high-quality Law Dome series, but included the lower-quality Vostok data and the Oroko tree ring proxy. McIntyre notes that Jones and Mann 2003 rejected Oroko but included Law Dome, on different criteria.
  2. Gergis’s screening correlations were incorrectly calculated. JeanS calculated them properly: only 6 out of 27 proxies passed. (NB none of the six proxies outside the area passed.)


  3. Gergis initially screened 62 proxies. Given that the screening included 21 proxies that should not have passed, should it also have included some of the 35 excluded proxies? We do not know, as Gergis has refused to reveal these excluded proxies.
  4. Screening creates a bias in favour of the desired result if the correlation is with a short period of the data. RomanM states the issues succinctly here. My more colloquial take is this: if the proxies (to some extent) randomly show C20th warming or not, then you will accept only proxies with a C20th uptick. If the proxies show previous fluctuations (to some extent) randomly and (to some extent) independently of the C20th uptick, then those previous fluctuations will be understated. Only a minor amount of randomness is needed to produce this bias, given that a major conclusion was

    The average reconstructed temperature anomaly in Australasia during A.D. 1238-1267, the warmest 30-year pre-instrumental period, is 0.09°C (±0.19°C) below 1961-1990 levels.
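The screening bias in point 4 can be illustrated with a toy simulation, entirely on made-up data (the proxy counts and threshold below are illustrative, not the actual Gergis numbers): pseudo-proxies of pure white noise are screened for correlation with a rising “instrumental” series, and the average of the survivors acquires a spurious late uptick while earlier variation averages away to nothing.

```python
import numpy as np

rng = np.random.default_rng(42)
n_proxies, n_years, cal = 500, 1000, 100  # last 100 "years" are instrumental

# Pseudo-proxies of pure white noise: by construction there is no signal.
proxies = rng.normal(size=(n_proxies, n_years))
instrumental = np.linspace(0.0, 1.0, cal)  # a steadily rising modern record

# Screen: keep only the proxies that correlate with the modern uptick.
r = np.array([np.corrcoef(p[-cal:], instrumental)[0, 1] for p in proxies])
passed = proxies[r > 0.2]

# The reconstruction from the screened proxies shows a spurious uptick.
recon = passed.mean(axis=0)
uptick = recon[-cal // 2:].mean() - recon[-cal:-cal // 2].mean()
print(f"{len(passed)} of {n_proxies} noise series passed screening")
print(f"pre-instrumental mean: {recon[:-cal].mean():+.3f}")
print(f"late-vs-early calibration difference: {uptick:+.3f}")
```

The pre-instrumental part of the “reconstruction” flattens towards zero, while the calibration period shows a marked rise, even though every input series is noise.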

UPDATE 03/08/12

The end of July submissions date seems to have slipped to the end of September.

Palmyra Atoll Coral Proxy in Gergis et al 2012

There is a lot of discussion on Bishop Hill (here and here) and Climate Audit of a new paper in Journal of Climate “Evidence of unusual late 20th century warming from an Australasian temperature reconstruction spanning the last millennium“, with lead author, Dr Joëlle Gergis. The reconstruction was based upon 27 climate proxies, one of which was a coral proxy from Palmyra Atoll.

There are two issues with this study.

Location

The study is a “temperature reconstruction for the combined land and oceanic region of Australasia (0°S-50°S, 110°E-180°E)“. The study lists Palmyra Atoll as being at 6° S, 162° E, so within the study area. Wikipedia has the location at 5°52′ N, 162°06′ W, over 2100 km (1300 miles) outside the study area. On a similar basis, Rarotonga in the Cook Islands (for which there are two separate coral proxy studies) is listed as being at 21° S, 160° E, again well within the study area. Wikipedia has the location at 21°14′ S, 159°47′ W, about 2000 km (1250 miles) outside the study area. The error appears to have arisen from a table with columns headed “Lon (°E)” and “Lat (°S)”. Along with the two ice core studies from Vostok Station, Antarctica (over 3100 km, 1900 miles south of 50° S), 5 of the 27 proxies are significantly outside the region.
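A back-of-the-envelope great-circle calculation, taking Wikipedia’s coordinates for Palmyra and (as a simplifying assumption) the nearest corner of the study box at (0°, 180°E), gives a figure consistent with the distance quoted above:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

# Palmyra Atoll per Wikipedia: 5 deg 52 min N, 162 deg 06 min W
palmyra = (5 + 52 / 60, -(162 + 6 / 60))
# Nearest corner of the study area (0S-50S, 110E-180E) is (0, 180E)
corner = (0.0, 180.0)
print(f"roughly {haversine_km(*palmyra, *corner):.0f} km outside the study area")
```

The result comes out at roughly 2100 km; the exact figure depends on which boundary point one measures to, but on any reasonable basis the atoll is well outside the stated region.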

Temperature Reconstruction

The Palmyra Atoll reconstruction is one of just three reconstructions with any data before 1430. From the abstract, a conclusion was

The average reconstructed temperature anomaly in Australasia during A.D. 1238-1267, the warmest 30-year pre-instrumental period, is 0.09°C (±0.19°C) below 1961-1990 levels.

From the proxy matrix I have plotted the data.


This indicates a massive change in late twentieth century temperatures, with 1996 being the most extreme on record.

The other two data sets with pre-1430 data are tree ring proxies from Mount Read, Tasmania and Oroko, New Zealand. These I have plotted with a 30-year moving average, with the data point at the last year.
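A trailing moving average of this kind can be sketched as follows; the series below is random illustrative data, not the actual Mount Read or Oroko values:

```python
import numpy as np

# Illustrative annual proxy series (random data standing in for a proxy).
rng = np.random.default_rng(1)
years = np.arange(1000, 2001)
values = rng.normal(0.0, 1.0, years.size)

# 30-year trailing moving average: each average is plotted against the
# last year of its window, as described in the text.
window = 30
ma = np.convolve(values, np.ones(window) / window, mode="valid")
ma_years = years[window - 1:]  # first complete window ends in 1029

print(ma_years[0], ma_years[-1])
```

With `mode="valid"`, only complete 30-year windows are averaged, so the smoothed series starts 29 years after the raw data does.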


There is something not right with the Palmyra Atoll proxy. The late 20th century trend is far too extreme. In the next posting I will compare to some other coral data sets.

George Monbiot’s narrow definition of “charlatan”

Bishop Hill quotes George Monbiot

I define a charlatan as someone who won’t show you his records. This looks to me like a good [example]: http://t.co/5hDF57sI

Personally, I believe that for word definitions one should use a consensus of the leading experts in the field. My Shorter OED has the following definition that is more apt.

An empiric who pretends to wonderful knowledge or secrets.

Like John Cook’s definition of “skeptic“, Monbiot’s definition is narrower and partisan. Monbiot was referring to maverick weather forecaster Piers Corbyn. If someone has a “black box” that performs well under independent scrutiny, then they are a charlatan under Monbiot’s definition, but not the OED’s. This could include the following.

  • A software manufacturer who does not reveal their computer code.
  • A pharmaceutical company that keeps secret the formulation of their wonder drug.
  • A soft drink manufacturer, who keeps their formulation secret. For instance Irn-Bru®.

The problem is that these examples have a common feature (one that Piers Corbyn would claim to share to some extent): they have predictive effects that are replicated time and time again. For a soft drink, that effect might just be the taste. Climate science cannot very well replicate the past, and predictions from climate models have failed to come about, even given their huge range of possible scenarios. This is an important point for any independent evaluation. The availability of the data or records matters not one iota; it is what these black boxes say about the real world that matters. I would claim that as empirical climate science becomes more sophisticated, no one person will be able to replicate a climate model. Publishing all the data and code, as Steve McIntyre would like, will make as much difference as publishing all the data and components of a mobile phone: nobody will be able to replicate it. But it is possible to judge a scientific paper on what it says about the real world, either through predictions or independent statistical measures of data analysis.

A Climate Change / Global Warming Spectrum

In politics, most people’s views can be placed on a spectrum; when it comes to climate change / global warming there is no such spectrum. The views are often polarized, particularly by those who believe in a future climate catastrophe. This is an initial attempt at a grid aimed at clarifying the issues. Your constructive advice is sought on how it might be improved.

When there are contentious or politicized issues, a spectrum of opinions emerges wherever there is free discussion of ideas. This is true in politics and in the Christian religion. In both, there is not just a one-dimensional spectrum of ideas, but multi-dimensional perspectives. For instance, in politics it has been argued that the left-right spectrum should be split into economic and moral issues. The United States Libertarian Party has had a simple survey running since 1995. A more comprehensive (but still American-orientated) survey is the Political Spectrum Quiz.

Another idea comes from Greg Craven, who did a series of zany YouTube videos on climate change, such as “The Most Terrifying Video You’ll Ever See” and “How it all ends“. He claimed that for the mass of non-scientists it was best to take a risk-based approach, grading the science on the credibility of those who made the claims. One objection to his analysis was that it was based on polar extremes: either the worst climate catastrophe imaginable, or it is all a hoax. I proposed that there was a spectrum of possible outcomes, with apocalyptic catastrophe at one extreme and the null outcome at the other. Basically, there is a spectrum of views.

For this spectrum, the possible scenarios are from the null outcome on the left, rising to a huge climate catastrophe on the right.

Craven’s argument was to consider either 0 or 1000, whereas I claimed that the UNIPCC scenarios (representing the “consensus” of climate scientists) allow for a fair range of outcomes. I have provided a log scale, as this puts clear distance between someone who believes in a low risk of extreme catastrophe and someone who says there is no risk at all. For instance, if someone believes that there is a 1% chance of the worst case (1000), a 9% chance of a loss of 100 and a 90% chance of a loss of 10, then their score would be 0.01*1000 + 0.09*100 + 0.90*10 = 28. In other words, for that person, especially if they are risk averse, there is still a very significant issue that should justify serious consideration of some type of global policy action.
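The expected-loss arithmetic above can be checked in a couple of lines; the probabilities and loss values are the illustrative ones from the text, not a claim about actual climate risk:

```python
# Expected-loss score for the illustrative probabilities in the text:
# a 1% chance of the worst case (1000), 9% of a loss of 100, 90% of 10.
outcomes = [(0.01, 1000), (0.09, 100), (0.90, 10)]
score = sum(p * loss for p, loss in outcomes)
print(round(score, 2))  # 28.0
```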

But this measure of the prospective level of climate catastrophe needs to be based upon something. That something is scientific evidence, not people’s intuitions or gut feelings. If we imagine that the uncertainties can be measured as risks (as neoclassical economists do), then the worst-case scenario can only be attained if there is near-certain, unambiguous scientific evidence in support of that prediction. If the evidence is statistically weak, gives highly variable results depending on methodology or data sets, or is only tangential to the prediction, then a risk weighting lower than 1 will need to be ascribed. For an overall picture, we need to ascribe a weighting to the body of evidence. I propose a traffic light system. In outline, green is for an overwhelming body of evidence, red is for no proper evidence whatsoever, and amber is for some weak evidence. Something along the following lines:-

Basically, an unambiguous case for impending global catastrophe must have a substantial body of strong scientific evidence to substantiate that case, with little or no contrary evidence. I will develop on another day the analogy with evidence presented to a criminal court by the prosecution. However, for the present, an analogy that is relevant is that this conclusion is only reached once the evidence fails to fall over under independent cross-examination.

This gives us a grid with the magnitude of the climate catastrophe on the X axis and the scientific case on the Y axis. The grid, with my first opinion of where various groups are placed, is given below. I know it is controversial – the whole aim is to get people to start thinking constructively about their views.

Alarmist blogs (for instance Skeptical Science and Desmogblog) have an extreme black-and-white worldview in which they are always right and anyone who disagrees is the polar opposite. “Deniers” is a bogeyman construct of their making.

If one reads the detail of the UNIPCC AR4 report, the “Consensus” of climate scientists allows for some uncertainties, and for scenarios which are not so catastrophic.

The more Sceptical Scientists, such as Richard Lindzen, Roger Pielke Snr and Roy Spencer, view increasing greenhouse gases as a serious issue for study. However, they view the evidence as being both much weaker than the “consensus” and pointing to a much less alarming future.

The most popular sceptic blogs, such as Wattsupwiththat, Climate Audit and Bishop Hill, I characterise as having a position of “the stronger the evidence, the weaker the relevance“. That is, they allow for a considerable spread of views, but neither dismiss the rise in CO2 as of no consequence nor claim that the available evidence is strong.

Finally, there are the Climate Realists, such as Joanne Nova and the British Climate Realists website. They occupy a similar position to the “deniers”, but from a much more substantial basis. They can see little or no evidence of catastrophe, but huge amounts of exaggeration dressed up as science.

What are your opinions? What position do you think you lie on the grid? Is there an alternative (and more informative) way of characterizing the different positions?

Philip Morris’s FOI is in the Public Interest

The BBC gave headline news today about an FOI request by Philip Morris concerning Government-funded research. The Guardian and Telegraph joined in as well. This is a comment left at Bishop Hill.

There are some legitimate reasons why a cigarette company (and the general public) might want to know more details of a research study. This is Government-funded research to justify legislation, without counter-studies for balance. It should also be borne in mind that the study was of 6,000 young people, whom the professors believe are highly impressionable by marketing.

1.    Were the questions neutral and held in a neutral venue?

2.    Did the resulting peer-reviewed article draw conclusions that the data substantiates? Are they statistically significant?

3.    Can other conclusions be drawn by the data?

It should be borne in mind by those who jump to conclusions that

a)    The two professors who did the study have PhDs in marketing and in social policy.

b)    The study is not about the health effects of smoking. It is about justifying compulsory neutral packaging for cigarettes.

c)    This particular study is very difficult to find on the internet, and is not listed on either of their websites amongst the publications. One has a list of over eighty.

One of the professors was co-author of a similar study (only with adults), which got an unfavourable review in the Guardian. That time the sample size was 43, divided into 3 distinct groups.


http://www.guardian.co.uk/education/2011/may/30/smokers-health-warnings-cigarette-packets

The level of research into the harm smoking can cause is considerable and of high quality. The original British Doctors Study, which confirmed the link between smoking and both lung cancer and coronary thrombosis, was statistically ground-breaking. That does not mean that all the policy research is of a similar quality.

http://en.wikipedia.org/wiki/British_Doctors_Study

It is my belief that Government social policy should aim at the net improvement of society. That implies that in funding research into social policy there is a duty of care to ensure balance, and that conclusions are robust. There are very legitimate reasons to believe that this line of research falls short.