Friends of the Earth still perverting the evidence for fracking harms

Yesterday, the Advertising Standards Authority at long last managed to informally resolve the complaints about a misleading leaflet produced by Friends of the Earth Trust and Friends of the Earth Ltd. This is no fault of the ASA. Rather, FoE tried to defend the indefensible, drawing out the process much as they try to draw out planning inquiries. From the Guardian:

“After many attempts by Friends of the Earth to delay this decision, the charity’s admission that all of the claims it made, that we complained about, were false should hopefully put a stop to it misleading the UK public on fracking,” said Francis Egan, chief executive of Cuadrilla. …..

According to the BBC

Friends of the Earth (FOE) must not repeat misleading claims it made in an anti-fracking leaflet, the advertising watchdog has said.

The fundraising flyer claimed fracking chemicals could pollute drinking water and cause cancer and implied the process increases rates of asthma.

The charity “agreed not to repeat the claims,” the Advertising Standards Authority (ASA) said.

All pretty clear. Yet, as the BBC reports, the eco-worriers refuse to be told that they are misleading the public.

Donna Hume, a campaigner for the environmental charity, said it would “continue to campaign against fracking” because it was “inherently risky for the environment”.


Ms Hume said Cuadrilla “started this process to distract from the real issues about fracking” and was trying to “shut down opposition”.

“It hasn’t worked though. What’s happened instead is that the ASA has dropped the case without ruling,” she said.

“We continue to campaign against fracking, alongside local people, because the process of exploring for and extracting shale gas is inherently risky for the environment, this is why fracking is banned or put on hold in so many countries.”

Donna Hume was just acting as a mouthpiece for FoE, which issued a misleading statement about the case. They stated:

Last year fracking company Cuadrilla complained to the Advertising Standards Authority about one of our anti fracking leaflets.

But after more than a year, the complaint has been closed without a ruling.

The scientific evidence that fracking can cause harm to people and the environment keeps stacking up. Friends of the Earth is not alone in pointing out the risks of fracking, to the climate, to public health, of water contamination, and to the natural environment.

ASA Chief Executive Guy Parker took the unusual step of setting the record straight.

But amidst the reports, the public comments by the parties involved and the social media chatter, there’s a risk that the facts become obscured.

So let me be clear. We told Friends of the Earth that based on the evidence we’d seen, claims it made in its anti-fracking leaflet or claims with the same meaning cannot be repeated, and asked for an assurance that they wouldn’t be. Friends of the Earth gave us an assurance to that effect. Unless the evidence changes, that means it mustn’t repeat in ads claims about the effects of fracking on the health of local populations, drinking water or property prices.

Friends of the Earth has said we “dropped the case”. That’s not an accurate reflection of what’s happened. We thoroughly investigated the complaints we received and closed the case on receipt of the above assurance. Because of that, we decided against publishing a formal ruling, but plainly that’s not the same thing as “dropping the case”. Crucially, the claims under the microscope mustn’t reappear in ads, unless the evidence changes. Dropped cases don’t have that outcome.

The ASA, which tries to be impartial and objective, had to make this unusual statement to combat FoE's deliberate misinformation. So what is the scientific evidence that FoE claims? This is from the false statement that the ASA was forced to rebut.

The risks of fracking

In April 2016, a major peer-reviewed study by research institute PSE Healthy Energy was published in academic journal PLOS ONE, which assessed 685 pieces of peer-reviewed scientific literature from around the world over 2009-2015 and found:

  • “84% of public health studies contain findings that indicate public health hazards, elevated risks, or adverse health outcomes”

  • “69% of water quality studies contain findings that indicate potential, positive association, or actual incidence of water contamination”

  • “87% of air quality studies contain findings that indicate elevated air pollutant emissions and/or atmospheric concentrations”

I suggest readers actually read what is said. Hundreds of studies have failed to identify, beyond reasonable doubt, a significant and large risk to human health. If any single study did establish this, it would be world news. What is offered instead is hearsay that would be dismissed by a criminal court in the UK. One clue lies in what the PLOS ONE journal omits from its submission criteria, something that is normal in traditional journals: that submissions should have something novel to say about the subject area. As an online journal it does not have to pay its way by subscriptions; authors usually pay a fee of $1,495 prior to publication.

But this still leaves the biggest piece of misinformation that FoE harps on about, but which was not included in the ruling. Below are the BBC's two pictures of the leaflet.



It is the issue of climate change that goes unchallenged. Yet it is the most pernicious and misleading claim of the lot. If fracking goes ahead in the UK it will make not a jot of difference to the climate. According to the EU EDGAR data the UK emitted just 1.1% of global GHG emissions in 2012. That proportion is falling principally because emissions are rising in other countries, and it will continue to fall as emissions in developing countries rise as those countries develop. That is China, India, the rest of South East Asia and 50+ African nations. These developing countries, which are exempt from any obligation to constrain emissions under the 1992 Rio Declaration, have 80% of the global population and accounted for over 100% of emissions growth between 1990 and 2012. I have summarized the EDGAR data below.


So who does the FoE speak for when they say “We could trigger catastrophic global temperature increases if we burn shale in addition to other fossil fuels“?

They do not speak for the USA, where shale gas has replaced coal, where total emissions have fallen as a result, and where real pollutants have fallen too. There the bonus has been hundreds of thousands of extra jobs. They do not speak for China, where half (well, 53%) of the global increase in GHG emissions between 1990 and 2012 occurred. They cannot speak for Britain: if fracking triggers massive falls in energy costs as in the USA (and geologically the North of England Bowland shale deposits look to be much thicker than the US deposits, so potentially cheaper to extract), then industry could be attracted back to the UK from countries like China with much higher emissions per unit of output.

Even worse, FoE do not speak for the British people. In promoting renewables, they are encouraging higher energy prices, which have led to increasing fuel poverty and increased winter deaths among the elderly. On the other hand, the claims of climate catastrophism from human emissions look far-fetched when global average temperature rises have stalled this century, when according to theory they should have increased at an accelerating rate.

Kevin Marshall

Update 7th Jan 11am

Ron Clutz has posted a good summary of the initial ruling, along with pointing to a blog run by retired minister Rev. Michael Roberts, who was one of the two private individuals (along with gas exploration company Cuadrilla) who made the complaint to ASA.

The Rev. Roberts has a very detailed post of 4th January giving an extensive background history of FoE's misinformation campaign against shale gas exploration in the Fylde. There is one link I think he should amend. The post finishes with what I believe to be a true statement.

Leaflet omits main reason for opposition is Climate change

The link is just to a series of posts on fracking in Lancashire. One of them is

Lancashire fracking inquiry: 3 reasons fracking must be stopped

The first reason is climate change. But rather than relate emissions to catastrophic global warming, they point to how allowing the development of fossil fuels squares with Government commitments made in the Climate Change Act 2008 and the Paris Agreement. FoE presents its unbalanced case in much fuller detail in the mis-named Fracking Facts.

Update 2 7th Jan 2pm

I have rechecked the post Cuadrilla’s leaflet complaint is closed without a ruling, while evidence of fracking risks grows, where Friends of the Earth activist Tony Bosworth makes the grossly misleading statement that the ASA closed the case without a ruling. The claim is still there, but with no acknowledgement of the undertaking that FoE gave to the ASA. FoE misled the public in order to gain donations, and now tries to hide the information from its supporters by misinformation. Below is a screenshot of the beginning of the article.

How strong is the Consensus Evidence for human-caused global warming?

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Richard Feynman – 1964 Lecture on the Scientific Method

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

The Debunking Handbook 2011 – John Cook and Stephan Lewandowsky

My previous post looked at the attacks on David Rose for daring to suggest that the rapid fall in global land temperatures at the end of the El Nino event was strong evidence that the record highs in global temperatures were not due to human greenhouse gas emissions. The attackers' technique was to look at long-term linear trends. The main problems with this argument were
(a) according to AGW theory warming rates from CO2 alone should be accelerating and at a higher rate than the estimated linear warming rates from HADCRUT4.
(b) HADCRUT4 shows warming stopped from 2002 to 2014, yet in theory the warming from CO2 should have accelerated.

Now there are at least two ways to view my arguments. The first is to take Feynman's approach. The climatologists and associated academics attacking journalist David Rose chose to do so from the perspective of a very blurred specification of AGW theory. That is: human emissions will cause greenhouse gas levels to rise, which will cause global average temperatures to rise. Global average temperatures clearly have risen in all long-term (>40-year) data sets, so the theory is confirmed. On a rising trend, with large variations due to natural variability, any new records will be primarily “human-caused”. But making the theory and data slightly less vague reveals the opposite conclusion. Around the turn of the century the annual percentage increase in CO2 levels went from 0.4% to 0.5% a year (figure 1), which should have led to an acceleration in the rate of warming. In reality warming stalled.

The reaction was to come up with a load of ad hoc excuses. The Hockey Schtick blog had reached 66 separate excuses for the “pause” by November 2014, ranging from the peer-reviewed to a comment in the UK Parliament. This could be because climate is highly complex, with many variables; whether each contributes can only be guessed at, let alone the magnitude of each factor and the interrelationships between them. So how do you tell which statements are valid information and which are misinformation? I agree with Cook and Lewandowsky that misinformation is pernicious, and difficult to get rid of once it becomes entrenched. So how does one distinguish the good information from the bad, misleading or even pernicious?

The Lewandowsky / Cook answer is to follow the consensus of opinion. But what is the consensus of opinion? In climate one variation is to follow a small subset of academics in the area who answer in the affirmative to

1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?

2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

The problem is that the first question is just reading a graph, and the second could be a belief statement with no precision. Anthropogenic global warming has been a hot topic for over 25 years now, so these two very vague, empirically-based questions, forming the foundations of the subject, should by now be capable of more precise formulation. On the second, there should be pretty clear and unambiguous estimates of the percentage of the warming so far that is human-caused. Yet the consensus of leading experts is unable to say whether it is 50% or 200% of the warming so far. (There are meant to be time lags and factors like aerosols that might suppress the warming.) This from the 2013 UNIPCC AR5 WG1 SPM, section D3:-

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

The IPCC, encapsulating the state-of-the-art knowledge, cannot provide firm evidence in the form of a percentage, or even a fairly broad range, despite having over 60 years of data to work on. It is even worse than it appears. The “extremely likely” phrase is a Bayesian probability statement. Ron Clutz’s simple definition from earlier this year was:-

Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.
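That one-line description can be made concrete with a toy calculation. The numbers below are entirely hypothetical and have nothing to do with real climate evidence; the point is only the mechanics of the update:

```python
# Toy Bayesian update in the spirit of Clutz's one-liner. Two candidate
# sensitivity values, an initial 50/50 belief, and a piece of "evidence"
# whose likelihood (hypothetical) favours the lower value.
prior = {2.0: 0.5, 4.0: 0.5}         # initial belief over sensitivity (C per doubling)
likelihood = {2.0: 0.7, 4.0: 0.3}    # hypothetical probability of the evidence

unnorm = {s: prior[s] * likelihood[s] for s in prior}
total = sum(unnorm.values())
posterior = {s: p / total for s, p in unnorm.items()}
print(posterior)  # {2.0: 0.7, 4.0: 0.3} -- belief has shifted toward 2.0
```

The "new and improved belief" is narrower than the prior. The complaint in what follows is that, over five assessment reports, no such narrowing has occurred.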

For the IPCC to claim, at the fifth attempt, that their statement is extremely likely, they should be able to show some sort of progress in updating their beliefs in the light of new evidence. That would mean narrowing the estimate of the magnitude of impact of a doubling of CO2 on global average temperatures. As Clive Best documented in a cliscep comment in October, the IPCC reports from 1990 to 2013 failed to narrow the estimated range of 1.5°C to 4.5°C. Looking up Climate Sensitivity in Wikipedia, we get the origin of the range estimate.

A committee on anthropogenic global warming convened in 1979 by the National Academy of Sciences and chaired by Jule Charney estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available; one, due to Syukuro Manabe, exhibited a climate sensitivity of 2 °C, the other, due to James E. Hansen, exhibited a climate sensitivity of 4 °C. “According to Manabe, Charney chose 0.5 °C as a not-unreasonable margin of error, subtracted it from Manabe’s number, and added it to Hansen’s. Thus was born the 1.5 °C-to-4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since…

It is revealing that the quote is under the subheading Consensus Estimates. The climate community have collectively failed to update the original beliefs, which were based on a very rough estimate. The emphasis on referring to consensus beliefs about the world, rather than looking outward for evidence in the real world, is, I would suggest, the primary reason for this failure. Yet such community-based beliefs completely undermine the integrity of the Bayesian estimates, making their use in statements about climate clear misinformation in Cook and Lewandowsky’s sense of the term. What is more, those in the climate community who look primarily to these consensus beliefs rather than to real-world data will endeavour to dismiss the evidence, or make up ad hoc excuses, or smear those who try to disagree. A caricature of these perspectives with respect to global average temperature anomalies is available in the form of a flickering widget at John Cook’s skepticalscience website. This purports to show the difference between “realist” consensus and “contrarian” non-consensus views. Figure 2 is a screenshot of the consensus view, interpreting warming as a linear trend. Figure 3 is a screenshot of the non-consensus or contrarian view, which is supposed to interpret warming as a series of short, disconnected periods of no warming, each of which just happens to be at a higher level than the previous one. There are a number of things that this indicates.

(a) The “realist” view is of a linear trend throughout any data series. Yet the period from around 1940 to 1975 shows no warming, or slight cooling, depending on the data set. Therefore any linear trend line starting before 1970 to 1975 and ending in 2015 will show a lower rate of warming. That would be consistent with the rate of CO2 increase rising over time, as shown in figure 1. But shorten the period, again ending in 2015, and once the period becomes less than 30 years the warming trend also decreases. This contradicts the theory, unless ad hoc excuses are used, as shown in my previous post using the HADCRUT4 data set.

(b) Those who agree with the consensus are called “Realists”, despite looking inwards towards common beliefs. Those who disagree are labelled “Contrarians”. The label is not inaccurate when there is a dogmatic consensus. But it is utterly false to lump together all those who disagree as holding the same views, especially when no examples are provided of anyone who actually holds such views.

(c) The linear trend appears a more plausible fit than the series of “contrarian” lines. By implication, those who disagree with the consensus are viewed as having a distinctly more blinkered and distorted perspective than those who follow it. Yet even in the GISTEMP data set (which gives the greatest support to the consensus view) there is a clear break in the linear trend. The less partisan HADCRUT4 data shows an even greater break.

Those who spot the obvious – that around the turn of the century warming stopped or slowed down, when in theory it should have accelerated – are given a clear choice. They can conform to the scientific consensus, denying the discrepancy between theory and data. Or they can act as scientists, denying the false and empirically empty scientific consensus, receiving the full weight of all the false and career-damaging opprobrium that accompanies it.





Kevin Marshall


Climate Experts Attacking a Journalist by Misinformation on Global Warming


Journalist David Rose was attacked for pointing out in a Daily Mail article that the strong El Nino event, which resulted in record temperatures, was reversing rapidly. He claimed the record highs may not be down to human emissions. The Climate Feedback attack article claimed that the El Nino event did not affect the long-term human-caused trend. My analysis shows

  • CO2 levels have been rising at increasing rates since 1950.
  • In theory this should translate into warming at increasing rates. That is a non-linear warming rate.
  • HADCRUT4 temperature data shows warming stopped in 2002, only resuming with the El Nino event in 2015 and 2016.
  • At the central climate sensitivity estimate, whereby a doubling of CO2 leads to 3°C of global warming, HADCRUT4 was already falling short of theoretical warming in 2000. This is without the impact of other greenhouse gases.
  • Putting linear trend lines through the last 35 to 65 years of data shows very little impact of El Nino, but it reduces the apparent divergence between theoretical human-caused warming and the temperature data without eliminating it.

Claiming that the large El Nino does not affect long-term linear trends is correct. But a linear trend describes warming neither in theory nor in the leading temperature data set. To say, as experts in the field, that the long-term warming trend is even principally human-caused needs a lot of circumspection. This is lacking in the attack article.



Journalist David Rose recently wrote a couple of articles in the Daily Mail on the plummeting global average temperatures.
The first on 26th November was under the headline

Stunning new data indicates El Nino drove record highs in global temperatures suggesting rise may not be down to man-made emissions

With the summary

• Global average temperatures over land have plummeted by more than 1C
• Comes amid mounting evidence run of record temperatures about to end
• The fall, revealed by Nasa satellites, has been caused by the end of El Nino

Rose’s second article used the Met Office’s HADCRUT4 data set, whereas the first used satellite data. Rose was a little more circumspect when he said:

El Nino is not caused by greenhouse gases and has nothing to do with climate change. It is true that the massive 2015-16 El Nino – probably the strongest ever seen – took place against a steady warming trend, most of which scientists believe has been caused by human emissions.

But when El Nino was triggering new records earlier this year, some downplayed its effects. For example, the Met Office said it contributed ‘only a few hundredths of a degree’ to the record heat. The size of the current fall suggests that this minimised its impact.

There was a massive reaction to the first article, as discussed by Jaime Jessop at Cliscep. She particularly noted that earlier in the year there had been articles on the dramatically higher temperature record of 2015, such as a Guardian article in January. There was also a follow-up video conversation between David Rose and Dr David Whitehouse of the GWPF commenting on the reactions. One key feature of the reactions was the claim that the contribution of the El Nino effect to the global warming trend was just a few hundredths of a degree. I find the Climate Feedback article particularly interesting, as it emphasizes trend over short-run blips. Some examples:

Zeke Hausfather, Research Scientist, Berkeley Earth:
In reality, 2014, 2015, and 2016 have been the three warmest years on record not just because of a large El Niño, but primarily because of a long-term warming trend driven by human emissions of greenhouse gases.

Kyle Armour, Assistant Professor, University of Washington:
It is well known that global temperature falls after receiving a temporary boost from El Niño. The author cherry-picks the slight cooling at the end of the current El Niño to suggest that the long-term global warming trend has ended. It has not.

1. Recent record global surface temperatures are primarily the result of the long-term, human-caused warming trend. A smaller boost from El Niño conditions has helped set new records in 2015 and 2016.


2. The article makes its case by relying only on cherry-picked data from specific datasets on short periods.

To understand what was said, I will try to take the broader perspective. That is, to see whether the evidence points conclusively to a single long-term warming trend being primarily human-caused. This will point to the real reason (or reasons) for downplaying the impact of an extreme El Nino event on record global average temperatures. There are a number of steps in this process.

Firstly to look at the data of rising CO2 levels. Secondly to relate that to predicted global average temperature rise, and then expected warming trends. Thirdly to compare those trends to global data trends using the actual estimates of HADCRUT4, taking note of the consequences of including other greenhouse gases. Fourthly to put the calculated trends in the context of the statements made above.


1. The recent data of rising CO2 levels
CO2 accounts for a significant majority of the alleged warming from increases in greenhouse gas levels. Since 1958, when accurate measures started to be taken at Mauna Loa, CO2 levels have risen significantly. Whilst I could produce a simple graph of either the CO2 level rising from 316 ppm to 401 ppm in 2015, or the year-on-year increases rising from 0.8 ppm in the 1960s to over 2 ppm in the last few years, Figure 1 is more illustrative.

CO2 is not just rising, but the rate of rise has been increasing as well, from 0.25% a year in the 1960s to over 0.50% a year in the current century.
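The average rate over the whole period can be checked in a couple of lines. A minimal sketch, using the Mauna Loa figures quoted above (the `cagr` helper is my own, for illustration):

```python
# Compound annual growth rate of CO2 between two Mauna Loa readings
# (316 ppm in 1958, 401 ppm in 2015, as quoted in the text).
def cagr(start_ppm, end_ppm, years):
    """Compound annual growth rate, as a percentage."""
    return ((end_ppm / start_ppm) ** (1.0 / years) - 1.0) * 100.0

rate = cagr(316.0, 401.0, 2015 - 1958)
print(f"{rate:.2f}% per year")  # about 0.42%, between the 1960s and current-century rates
```

The whole-period average sits between the two endpoint rates, which is exactly what an accelerating growth rate implies.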


2. Rising CO2 should be causing accelerating temperature rises

The impact of CO2 on temperatures is not linear, but is believed to approximate to a fixed temperature rise for each doubling of CO2 levels. That means that if CO2 levels were rising arithmetically, the impact on the rate of warming would fall over time. If CO2 levels were rising by the same percentage amount year-on-year, the consequential rate of warming would be constant over time. But figure 1 shows that the percentage rise in CO2 has increased over the last two-thirds of a century. The best way to evaluate the combination of CO2 increasing at an accelerating rate and a diminishing impact of each unit rise on warming is to crunch some numbers. The central estimate used by the IPCC is that a doubling of CO2 levels will result in an eventual rise of 3°C in global average temperatures. Dana1981 at Skepticalscience used a formula that produces a rise of 2.967°C for any doubling. After adjusting the formula, plugging in the Mauna Loa annual average CO2 levels produces Figure 2.
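The logarithmic relationship can be sketched numerically. I am assuming here that the formula takes the common form warming = λ × 5.35 × ln(C/C0) with λ = 0.8°C per W/m², since that combination reproduces the quoted 2.967°C per doubling; treat the exact form as an assumption rather than a statement of what Dana1981 used:

```python
import math

# Assumed logarithmic CO2-warming relation: lam * 5.35 * ln(C/C0),
# with lam = 0.8 C per W/m2, which gives roughly 2.967 C per doubling.
def co2_warming(c_ppm, c0_ppm, lam=0.8):
    """Equilibrium warming (C) for a rise in CO2 from c0_ppm to c_ppm."""
    return lam * 5.35 * math.log(c_ppm / c0_ppm)

per_doubling = co2_warming(2.0, 1.0)   # any doubling gives the same rise
print(round(per_doubling, 3))          # 2.967

# A constant percentage rise gives a constant warming increment, whatever the base:
low = co2_warming(316.0 * 1.005, 316.0)   # one year of 0.5% growth from 316 ppm
high = co2_warming(400.0 * 1.005, 400.0)  # the same percentage growth from 400 ppm
print(round(low, 4) == round(high, 4))    # True
```

The second print illustrates the point made above: constant percentage growth in CO2 implies a constant rate of warming, so an accelerating percentage growth implies an accelerating rate of warming.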

In computing the data I estimated the level of CO2 in 1949 (based roughly on CO2 estimates from Law Dome ice core data) and then assumed a linear increase through the 1950s. Another assumption was that the full impact of the CO2 rise on temperatures would take place in the year following that rise.

The annual CO2 induced temperature change is highly variable, corresponding to the fluctuations in annual CO2 rise. The 11 year average – placed at the end of the series to give an indication of the lagged impact that CO2 is supposed to have on temperatures – shows the acceleration in the expected rate of CO2-induced warming from the acceleration in rate of increase in CO2 levels. Most critically there is some acceleration in warming around the turn of the century.

I have also included the impact of a linear trend (by simply dividing the total CO2 increase in the period by the number of years), along with a steady increase of 0.396% a year, producing a constant rate of temperature rise.

Figure 3 puts the calculations into the context of the current issue.

This gives the expected linear temperature trends from various start dates up until 2014 and 2016, assuming a one-year lag in the impact of changes in CO2 levels on temperatures. These are the same sort of linear trends that the climate experts used in criticizing David Rose. Extending the period by two more years produces very little difference – about 0.054°C of temperature rise, and an increase in trend of less than 0.01°C per decade. More importantly, the rate of temperature rise from CO2 alone should be accelerating.


3. HADCRUT4 warming

How does one compare this to the actual temperature data? A major issue is that there is a very indeterminate lag between the rise in CO2 levels and the rise in average temperature. Another issue is that CO2 is not the only greenhouse gas, and the more minor greenhouse gases may have had different patterns of increase in the last few decades. These would change the trends of the resultant warming, but their impact should be additional to the warming caused by CO2. That is, in the long term, CO2 warming should account for less than the total observed.
There is no need to do actual calculations of trends from the surface temperature data. The Skeptical Science website has a trend calculator, where one can just plug in the values. Figure 4 shows an example of the graph, which shows that the dataset currently ends in an El Nino peak.

The trend results for HADCRUT4 are shown in Figure 5 for periods up to 2014 and 2016 and compared to the CO2 induced warming.

There are a number of things to observe from the trend data.

The most visual difference between the two tables is that the first has a pause in global warming after 2002, whilst the second has a warming trend. This is attributable to the impact of El Nino. These experts are also right that it makes very little difference to the long-term trend: if the long term is over 40 years, it is like adding 0.04°C per century to that long-term trend.

But there is far more within the tables than this observation. Concentrate first on the three “Trend in °C/decade” columns. The first is the CO2 warming impact from figure 3. For a given end year, the shorter the period, the higher the warming trend. Next to this are the Skeptical Science trends for the HADCRUT4 data set. Start Year 1960 has a higher trend than Start Year 1950, and Start Year 1970 has a higher trend than Start Year 1960. But each later Start Year then has a lower trend than the previous Start Years. There is one exception: the period 2010 to 2016 has a much higher trend than any other period – a consequence of the extreme El Nino event. Excluding this, there are now over three decades in which the actual warming trend has been diverging from the theory.

The third of the “Trend in °C/decade” columns is simply the difference between the HADCRUT4 temperature trend and the expected trend from rising CO2 levels. If a doubling of CO2 levels did produce around 3°C of warming, and other greenhouse gases were also contributing to warming, then one would expect CO2 to eventually explain less than the observed warming. That is, the variance would be positive. But CO2 levels accelerated while actual warming stalled, increasing the negative variance.


4. Putting the claims into context

Compare David Rose

Stunning new data indicates El Nino drove record highs in global temperatures suggesting rise may not be down to man-made emissions

With Climate Feedback KEY TAKE-AWAY

1. Recent record global surface temperatures are primarily the result of the long-term, human-caused warming trend. A smaller boost from El Niño conditions has helped set new records in 2015 and 2016.

The HADCRUT4 temperature data shows that there had been no warming for over a decade, following a warming trend. This is in direct contradiction to theory, which would predict that CO2-based warming would be at a higher rate than previously. Given that the record temperatures following this hiatus came as part of a naturally-occurring El Nino event, it is fair to say that record highs in global temperatures ….. may not be down to man-made emissions.

The so-called long-term warming trend encompasses both the late twentieth-century warming and the twenty-first-century hiatus. As the latter flatly contradicts theory, it is incorrect to describe the long-term warming trend as “human-caused”. There needs to be a more circumspect description, such as: the vast majority of academics working in climate-related areas believe that the long-term (last 50+ years) warming is mostly “human-caused”. This would be in line with the first bullet point from the UNIPCC AR5 WG1 SPM section D3:-

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

When the IPCC’s summary opinion, and the actual data are taken into account Zeke Hausfather’s comment that the records “are primarily because of a long-term warming trend driven by human emissions of greenhouse gases” is dogmatic.

Now consider what David Rose said in the second article

El Nino is not caused by greenhouse gases and has nothing to do with climate change. It is true that the massive 2015-16 El Nino – probably the strongest ever seen – took place against a steady warming trend, most of which scientists believe has been caused by human emissions.

Compare this to Kyle Armour’s statement about the first article.

It is well known that global temperature falls after receiving a temporary boost from El Niño. The author cherry-picks the slight cooling at the end of the current El Niño to suggest that the long-term global warming trend has ended. It has not.

This time Rose seems to have responded to the pressure by stating that there is a long-term warming trend, despite the data clearly showing that this is untrue except in the vaguest sense. The data does not show a single warming trend. Going back to the Skeptical Science trends, we can break down the data from 1950 into four periods.

1950-1976 -0.014 ±0.072 °C/decade (2σ)

1976-2002 0.180 ±0.068 °C/decade (2σ)

2002-2014 -0.014 ±0.166 °C/decade (2σ)

2014-2016 1.889 ±1.882 °C/decade (2σ)
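
Trends of this kind can be reproduced from annual anomalies with an ordinary least-squares fit. Below is a minimal sketch on synthetic data; note that the Skeptical Science trend calculator additionally inflates the uncertainty for autocorrelation, which this simple version omits.

```python
import numpy as np

def decadal_trend(years, anomalies):
    """OLS slope in deg C/decade with a naive 2-sigma uncertainty.
    (No autocorrelation correction, unlike the SkS trend calculator.)"""
    x = np.asarray(years, dtype=float)
    y = np.asarray(anomalies, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # Standard error of the slope from the residual variance
    se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
    return slope * 10, 2 * se * 10

# Synthetic series loosely mimicking 1976-2002: 0.018 deg C/yr plus noise
rng = np.random.default_rng(0)
yrs = np.arange(1976, 2003)
temps = 0.018 * (yrs - 1976) + rng.normal(0.0, 0.1, len(yrs))
trend, two_sigma = decadal_trend(yrs, temps)
print(f"{trend:.3f} +/-{two_sigma:.3f} C/decade (2 sigma)")
```

The wide 2σ ranges on the short 2002-2014 and 2014-2016 periods fall out of the same arithmetic: fewer points and a smaller spread of years make the denominator small and the standard error large.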

There was warming for about a quarter of a century sandwiched between two periods of no warming, with an uptick at the end. Only very loosely can anyone speak of a long-term warming trend in the data, yet basic theory hypothesizes a continuous, non-linear warming trend. Journalists can be excused for failing to make the distinctions. As non-experts they will reference opinion that appears sensibly expressed, especially when the alleged experts in the field are united in using such language. But those in academia, who should have a demonstrable understanding of theory and data, should be more circumspect in their statements when speaking as experts in their field. (Kyle Armour’s comment is an extreme example of what happens when academics completely suspend drawing on their expertise.) This is particularly true when there are strong divergences between the theory and the data. The consequence is plain to see: expert academic opinion tries to bring the real world into line with the theory by authoritative but banal statements about trends.

Kevin Marshall

Failed Arctic Sea Ice predictions illustrates Degenerating Climatology

The Telegraph yesterday carried an interesting article: Experts said Arctic sea ice would melt entirely by September 2016 – they were wrong

Dire predictions that the Arctic would be devoid of sea ice by September this year have proven to be unfounded after latest satellite images showed there is far more now than in 2012.
Scientists such as Prof Peter Wadhams, of Cambridge University, and Prof Wieslaw Maslowski, of the Naval Postgraduate School in Monterey, California, have regularly forecast the loss of ice by 2016, which has been widely reported by the BBC and other media outlets.

In June, Michel at the Trustyetverify blog traced a number of these false predictions. Michel summarized:

(H)e also predicted until now:
• 2008 (falsified)
• 2 years from 2011 → 2013 (falsified)
• 2015 (falsified)
• 2016 (still to come, but will require a steep drop)
• 2017 (still to come)
• 2020 (still to come)
• 10 to 20 years from 2009 → 2029 (still to come)
• 20 to 30 years from 2010 → 2040 (still to come).

The 2016 prediction has now proved false. Paul Homewood has also been looking at Professor Wadhams’ failed prophecies in a series of posts.

The Telegraph goes on to quote from three more moderate sources. One of them is:-

Andrew Shepherd, professor of earth observation at University College London, said there was now “overwhelming consensus” that the Arctic would be free of ice in the next few decades, but warned earlier predictions were based on poor extrapolation.
“A decade or so ago, climate models often failed to reproduce the decline in Arctic sea ice extent revealed by satellite observations,” he said.
“One upshot of this was that outlier predictions based on extrapolation alone were able to receive wide publicity.
“But climate models have improved considerably since then and they now do a much better job of simulating historical events.
This means we have greater confidence in their predictive skill, and the overwhelming consensus within the scientific community is that the Arctic Ocean will be effectively free of sea ice in a couple of decades should the present rate of decline continue.

(emphasis mine)

Professor Shepherd is saying that the shorter-term (from a few months to a few years) highly dire predictions have turned out to be false, but that improved modelling techniques enable much sounder predictions over 25-50 years. That would require development in two dimensions – scale and time. Detecting a small human-caused change over decades requires far greater skill than spotting a dramatic shift, as it must be differentiated from natural year-by-year variations. Yet it would appear that at the end of the last century there was a natural upturn in temperatures following an unusually cold period from the 1950s to the 1970s, as documented by HH Lamb – a cold period that had resulted in an extension of the sea ice. The detection problem is even worse if a natural reduction in sea ice has worked concurrently with the human influence. However, instead of offering us demonstrated increased technical competency in modelling (as opposed to merely more elaborate models), Professor Shepherd offers us a consensus of belief that the more moderate predictions are reliable.
This is a clear example of the degenerating climatology that I outlined last year. In particular, I proposed that rather than progressive climate science – increasing scientific evidence and more rigorous procedures leading to tighter hypotheses about clear catastrophic anthropogenic global warming – we have degenerating climatology, with ever weaker and vaguer evidence for some global warming.

If Professor Wadhams had consistently predicted the loss of summer sea ice for a set time period, and that prediction had been borne out, it would have been strong confirmation of a potentially catastrophic problem, and climatology would have scored a major success. Even if, instead of ice-free summers by now, there had been evidence of a clear acceleration in the decline of sea ice extent, it could have been viewed as some progression. But instead we are asked to accept a consensus of belief that will only be confirmed or refuted decades ahead. The interpretation of success or failure will then, no doubt, be given by the same consensus that was responsible for the vague predictions in the first place.

Kevin Marshall

Guardian Images of Global Warming Part 2 – A Starved Dead Polar Bear

In Part 2 of my look at Ashley Cooper’s photographs of global warming, published in The Guardian on June 3rd, I concentrate on a single image of a dead, emaciated polar bear.
The caption reads

A male polar bear that starved to death as a consequence of climate change. Polar bears need sea ice to hunt their main prey, seals. Western fjords of Svalbard which normally freeze in winter, remained ice free all season during the winter of 2012/13, one of the worst on record for sea ice around the island archipelago. This bear headed hundreds of miles north, looking for suitable sea ice to hunt on before it finally collapsed and died.

The US National Snow and Ice Data Center (NSIDC) has monthly maps of sea ice extent. The Western Fjords were indeed ice free during the winter of 2012/13, even in March 2013 when the sea ice reaches its maximum. In March 2012 the Western Fjords were also ice free, along with most of the north coast. The maps are also available for March of 2011, 2010, 2009 and 2008; of these, the earliest year seems to have the minimum extent. Screenshots of Svalbard are shown below.

As the sea ice extent has been diminishing for years, has this impacted the polar bear population? It appears not. A survey published late last year showed that polar bear numbers had increased by 42% between 2004 and 2015 for Svalbard and the neighbouring archipelagos of Franz Josef Land and Novaya Zemlya.

Even more relevantly, studies have shown that the biggest threat to polar bears is not low sea ice levels but unusually thick spring sea ice. This affects the seal population, the polar bears’ main food source, at the time of year when the bears are rebuilding fat after the long winter.
Even if diminishing sea ice is a major cause of some starvation, it may have been a greater cause in the past. There is no satellite data prior to the late 1970s, when sea ice levels started diminishing, so the best proxies are average temperatures. Last year I looked at the two major temperature data sets for Svalbard, both located on the west coast where the dead polar bear was found. It would appear that there was a more dramatic rise in temperatures in Svalbard in the period 1910-1925 than in the period since the late 1970s. But in the earlier warming period polar bear numbers were likely decreasing, and they continued to decrease into the later cooling period; the recovery in numbers corresponds to the recent warming period. These changes have nothing to do with average temperatures or sea ice levels. It is because until recent decades polar bears were hunted, a practice that has now largely stopped.

The starvation of this pictured polar bear may have a more mundane cause. Polar bears are at the top of the food chain, relying on killing fast-moving seals for food. As a polar bear gets older it slows down, due to arthritis and muscles not working as well. As speed and agility are key factors in catching food, along with a bit of luck, starvation might be the most common cause of death in polar bears.

Kevin Marshall

Beliefs and Uncertainty: A Bayesian Primer

Ron Clutz’s introduction, based on a Scientific American article by John Horgan on January 4, 2016, starts to grapple with the issues involved.

The take home quote from Horgan is on the subject of false positives.

Here is my more general statement of that principle: The plausibility of your belief depends on the degree to which your belief–and only your belief–explains the evidence for it. The more alternative explanations there are for the evidence, the less plausible your belief is. That, to me, is the essence of Bayes’ theorem.

“Alternative explanations” can encompass many things. Your evidence might be erroneous, skewed by a malfunctioning instrument, faulty analysis, confirmation bias, even fraud. Your evidence might be sound but explicable by many beliefs, or hypotheses, other than yours.

In other words, there’s nothing magical about Bayes’ theorem. It boils down to the truism that your belief is only as valid as its evidence. If you have good evidence, Bayes’ theorem can yield good results. If your evidence is flimsy, Bayes’ theorem won’t be of much use. Garbage in, garbage out.
With respect to the question of whether global warming is human caused, there is basically a combination of three elements – (i) human causes, (ii) natural causes and (iii) random chaotic variation. There may be a number of sub-elements and an infinite number of combinations, including some elements counteracting others, such as El Niño events counteracting underlying warming. Evaluation of new evidence takes place within a community of climatologists with strong shared beliefs that at least 100% of recent warming is due to human GHG emissions. It is that same community who also decide the measurement techniques for assessing the temperature data, the relevant time frames, and the categorization of the new data. With complex decisions, the only clear decision criterion is conformity to the existing consensus conclusions. As a result, the original Bayesian estimates become virtually impervious to new perspectives or evidence that contradicts them.
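
Horgan’s plausibility principle can be made concrete with a toy calculation. The numbers below are entirely hypothetical; the point is only that admitting more alternative explanations for the same evidence drives the posterior for any one belief down.

```python
def posterior(prior, likelihood, alternatives):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E), where P(E) sums over
    the hypothesis and every alternative explanation (likelihood, prior)."""
    evidence = likelihood * prior + sum(l * p for l, p in alternatives)
    return likelihood * prior / evidence

# One alternative explanation for the same evidence...
one_alt = posterior(0.5, 0.9, [(0.2, 0.5)])
# ...versus three alternatives that each explain it moderately well.
three_alts = posterior(0.25, 0.9, [(0.5, 0.25)] * 3)
print(round(one_alt, 3), round(three_alts, 3))  # 0.818 0.375
```

The same arithmetic cuts both ways: if a community never admits alternative explanations into the denominator, its posterior never moves, which is the imperviousness described above.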

Science Matters

Those who follow discussions regarding Global Warming and Climate Change have heard from time to time about the Bayes Theorem. And Bayes is quite topical in many aspects of modern society:

Bayesian statistics “are rippling through everything from physics to cancer research, ecology to psychology,” The New York Times reports. Physicists have proposed Bayesian interpretations of quantum mechanics and Bayesian defenses of string and multiverse theories. Philosophers assert that science as a whole can be viewed as a Bayesian process, and that Bayes can distinguish science from pseudoscience more precisely than falsification, the method popularized by Karl Popper.

Named after its inventor, the 18th-century Presbyterian minister Thomas Bayes, Bayes’ theorem is a method for calculating the validity of beliefs (hypotheses, claims, propositions) based on the best available evidence (observations, data, information). Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.   (A fuller and…

View original post 1,082 more words

James Ross Island warming of past 100 years not unusual

At Wattsupwiththat there is a post by Sebastian Lüning, The Medieval Warm Period in Antarctica: How two one-data-point studies missed the target.

Lüning has the following quote and graphic from Mulvaney et al. 2012, published in Nature.

But the late Bob Carter frequently went on about the recent warming being nothing unusual. Using mainstream thinking, would you trust a single climate denialist against proper climate scientists?

There is a simple test. Will similar lines fit the data of the last two thousand years? It took me a few minutes to produce the following.

Bob Carter is right, and nine leading experts, plus their peer reviewers, are wrong. The temperature reconstruction shows at least five times in the last 2000 years when there were similar or greater jumps in average temperature, and about seven temperature peaks similar to the most recent.

It is yet another example of why one should look at the basic data rather than the statements of the experts. It is akin to a court preferring actual evidence to hearsay.

Kevin Marshall

Insight into the mindset of FoE activists

Bishop Hill comments about how

the Charities Commissioners have taken a dim view of an FoE leaflet that claimed that silica – that’s sand to you or me – used in fracking fluid was a known carcinogen.

Up pops an FoE activist making all sorts of comments, including attacking the host’s book, The Hockey Stick Illusion. Below is my comment.

Phil Clarke’s comments on the host’s book are an insight into the mindset of green activists.
He says (Jan 30, 2016 at 9:58 AM)

So you’ve read HSI, then?
I have a reading backlog of far more worthwhile volumes, fiction and non-fiction. Does anybody dispute a single point in Tamino’s adept demolition?


Where did I slag off HSI? I simply trust Tamino; the point about innuendo certainly rings true, based on other writings.
So no, I won’t be shelling out for a copy of a hatchet job on a quarter-century old study. But I did read this, in detail

Tamino’s article was twice responded to by Steve McIntyre. The first response looks at the use of non-standard statistical methods; the second, Re-post of “Tamino and the Magic Flute”, simply repeats a post of two years before, as Tamino had ignored the previous rebuttals. A simple illustration is the Gaspé series that Tamino defends. He misses out many issues with this key element in the reconstruction, including that a later sample from the area failed to show a hockey stick.
So Phil Clarke has attacked a book that he has not read, based on a biased review by an author aligned with his own prejudices. He ignores the counter-arguments, just as the review’s author does. It says a lot about the rubbish Cuadrilla are up against.

Kevin Marshall

John Cook undermining democracy through misinformation

It seems that John Cook was posting comments in 2011 under the pseudonym Lubos Motl. The year before, physicist and blogger Luboš Motl had posted a rebuttal of Cook’s then 104 Global Warming & Climate Change Myths. When someone counters your beliefs point for point, most people would naturally feel some anger. But taking the online identity of Motl is potentially more than identity theft: it can be viewed as an attempt to damage the reputation of someone you oppose.

However, there is a wider issue here. In 2011 John Cook co-authored, with Stephan Lewandowsky, The Debunking Handbook, which is still featured prominently online. This short tract starts with the following paragraphs:-

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

A common misconception about myths is the notion that removing its influence is as simple as packing more information into people’s heads. This approach assumes that public misperceptions are due to a lack of knowledge and that the solution is more information – in science communication, it’s known as the “information deficit model”. But that model is wrong: people don’t process information as simply as a hard drive downloading data.

If Cook was indeed using the pseudonym Lubos Motl then he was knowingly putting misinformation into the public arena in a malicious form. If he misrepresented Motl’s beliefs, then the public may not know whom to trust. Targeted against one effective critic, it could trash their reputation. At a wider scale it could allow morally and scientifically inferior views to gain prominence over superior viewpoints. If the alarmist beliefs were superior, it would not be necessary to misrepresent alternative opinions: open debate would soon reveal which side had the better views. And in debating and disputing, all sides would sharpen their arguments. What would quickly disappear is the reliance on opinion surveys and the rewriting of dictionaries. Instead, proper academics would be distinguishing quality, relevant evidence from dogmatic statements based on junk sociology and psychology. They would start defining the boundaries of expertise between the basic physics, computer modelling, results analysis, public policy-making, policy implementation, economics, ethics and the philosophy of science. They may then start to draw on the understanding that has been achieved in these subject areas.

Kevin Marshall

ATTP on Lomborg’s Australian Funding

Blogger …and then there’s physics (ATTP) joins in the hullabaloo about Bjorn Lomborg’s Consensus Centre getting A$4m of funding to set up a branch at the University of Western Australia. He says

However, ignoring that Lomborg appears to have a rather tenuous grasp on the basics of climate science, my main issue with what he says is its simplicity. Take all the problems in the world, determine some kind of priority ordering, and then start at the top and work your way down – climate change, obviously, being well down the list. It’s as if Lomborg doesn’t realise that the world is a complex place and that many of the problems we face are related. We can’t necessarily solve something if we don’t also try to address many of the other issues at the same time. It’s this kind of simplistic linear thinking – and that some seem to take it seriously – that irritates me most.

The comment about climatology is just a lead-in. ATTP is expressing a normative view about the interrelationship of problems, along with beliefs about the solution. What he rejects as simplistic is the method of identifying the interrelated issues separately, understanding the relative size of the problems along with the effectiveness and availability of possible solutions, and then prioritizing them.

This errant notion is exacerbated when ATTP implies that Lomborg personally has received the funding. Lomborg heads up the Copenhagen Consensus Centre, and it is the Centre that has received the funding to set up a branch in Australia. This description is from their website:

We work with some of the world’s top economists (including 7 Nobel Laureates) to research and publish the smartest solutions to global challenges. Through social, economic and environmental benefit-cost research, we show policymakers and philanthropists how to do the most good for each dollar spent.

It is about bringing together some of the best minds available to understand the problems of the world, and then to persuade those who are able to do something about the issues. It is not Lomborg’s personal views that are represented here, but people with different views and from different specialisms coming together to argue and debate. Anyone who has properly studied economics will soon learn that there is a whole range of different views, many of them plausible. Some glimpse that economic systems are highly interrelated in ways that cannot be remotely specified, leading to the conclusion that any attempt to create a computer model of an economic system will be a highly distorted simplification. At a more basic level they will have learnt that in the real world there are 200 separate countries, all with different priorities. In many there is a whole range of voiced opinions about what the priorities should be at national, regional and local levels. To address all these interrelated issues together would require the modeller to be omniscient and omnipresent. To actually enact the modeller’s preferred policies over seven billion people would require a level of omnipotence that Stalin could only dream of.

This lack of understanding of economics and policy making is symptomatic of those who believe in climate science. They fail to realize that models are only an attempted abstraction of the real world. Academic economists have long recognized the abstract nature of the subject, along with the presence of strong beliefs about it. As a result, in the last century many drew upon the rapidly developing philosophy of science to distinguish whether theories were imparting knowledge about the world or merely confirming beliefs. The most influential by some distance was Milton Friedman. In his seminal essay The Methodology of Positive Economics he suggested the way round this problem was to develop bold yet simple predictions from the theory that, despite seeming unlikely, nevertheless come true. I would suggest that you do not need to be too dogmatic in the application: the bold predictions do not need to be right 100% of the time, but an entire research programme should establish a good track record over a sustained period. In climatology the bold predictions, which would show a large and increasing problem, have been almost uniformly wrong. For instance:-

  • The rate of melting of the polar ice caps has not accelerated.
  • The rate of sea level rise has not accelerated in the era of satellite measurements.
  • Arctic sea ice did not disappear in the summer of 2013.
  • Hurricanes did not get worse following Katrina. Instead there followed the quietest period on record.
  • Snow has not become a thing of the past in England, nor in Germany.

Other examples have been compiled by Pierre Gosselin at Notrickszone, as part of his list of climate scandals.

Maybe it is different in climatology. The standard response is that the reliability of the models rests on the strength of the consensus supporting them. This view is not proclaimed by ATTP. Instead, from his name it would appear he believes reliability can be obtained from the basic physics. I have not done any physics since high school and have forgotten most of what I learnt, so in discerning what is reality in that area I have to rely on the opinions of physicists themselves. One of the greatest physicists since Einstein was Richard Feynman. He said, fifty years ago, in a lecture on the scientific method:

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Climate models, like economic models, will always be vague. This is not due to being poorly expressed (though they often are) but due to the nature of the subject. Short of rejecting climate models as utter nonsense, I would suggest the major way of evaluating whether they say something distinctive about the real world is their predictive ability. But a consequence of theories always being vague, in both economics and climate, is that you will not be able to use the models as forecasting tools. As Freeman Dyson (who narrowly missed sharing a Nobel Prize with Feynman) recently said of climate models:-

These climate models are excellent tools for understanding climate, but that they are very bad tools for predicting climate. The reason is simple – that they are models which have very few of the factors that may be important, so you can vary one thing at a time ……. to see what happens – particularly carbon dioxide. But there are a whole lot of things that they leave out. ….. The real world is far more complicated than the models.

This implies that when ATTP criticizes somebody else’s work with a simple model, or a third person’s work, he is likely criticizing them for looking at a highly complex issue in another way. Whether his way is better, worse or just different we have no way of knowing. All we can infer from his total rejection of the ideas of experts in a field of which he lacks even a basic understanding is that he has no basis for knowing either.

To be fair, I have not looked at the earlier part of ATTP’s article. For instance he says:-

If you want to read a defense of Lomborg, you could read Roger Pielke Jr’s. Roger’s article makes the perfectly reasonable suggestion that we shouldn’t demonise academics, but fails to acknowledge that Lomborg is not an academic by any standard definition…….

The place to look for a “standard definition” of a word is a dictionary. The noun definitions are


8. a student or teacher at a college or university.

9. a person who is academic in background, attitudes, methods, etc.:

He was by temperament an academic, concerned with books and the arts.

10. (initial capital letter) a person who supports or advocates the Platonic school of philosophy.

This is Bjorn Lomborg’s biography from the Copenhagen Consensus website:-

Dr. Bjorn Lomborg is Director of the Copenhagen Consensus Center and Adjunct Professor at University of Western Australia and Visiting Professor at Copenhagen Business School. He researches the smartest ways to help the world, for which he was named one of TIME magazine’s 100 most influential people in the world. His numerous books include The Skeptical Environmentalist, Cool It, How to Spend $75 Billion to Make the World a Better Place and The Nobel Laureates’ Guide to the Smartest Targets for the World 2016-2030.

Lomborg meets both definitions 8 and 9, which seem pretty standard. As with John Cook and William Connolley defining the word “sceptic”, it would appear that ATTP rejects the authority of those who write the dictionaries. Or, more accurately, he does not even bother to look. As with his rejection of the authority of those who understand economics, it suggests ATTP uses his own dogmatic beliefs as the standard by which to evaluate others.

Kevin Marshall