Met Office Extreme Wet Winter Projections

I saw an article in the Telegraph

Met Office warns Britain is heading for ‘unprecedented’ winter rainfall, with records broken by up to 30pc 

Britain is heading for “unprecedented” winter rainfall after the Met Office’s new super computer predicted records will be broken by up to 30 per cent.

Widespread flooding has hit the UK in the past few years leading meteorologists to search for new ways to “quantify the risk of extreme rainfall within the current climate”.

In other words, the Telegraph is reporting that the Met Office is projecting that if the current record is, say, 100mm, new records of 130mm could be set.

The BBC is reporting something slightly different.

High risk of ‘unprecedented’ winter downpours – Met Office

There is an increased risk of “unprecedented” winter downpours such as those that caused extensive flooding in 2014, the UK Met Office says.

Their study suggests there’s now a one in three chance of monthly rainfall records being broken in England and Wales in winter.

The estimate reflects natural variability plus changes in the UK climate as a result of global warming.

The BBC has a nice graphic of the most extreme winter month for rainfall in recent years.

The BBC goes on to say

Their analysis also showed a high risk of record-breaking rainfall in England and Wales in the coming decade.

“We found many unprecedented events in the model data and this comes out as a 7% risk of a monthly record extreme in a given winter in the next few years, that’s just over Southeast England,” Dr Vikki Thompson, the study’s lead author told BBC News.

“Looking at all the regions of England and Wales we found a 34% chance of an extreme event happening in at least one of those regions each year.”

Not only is there a greater risk, but the researchers were also able to estimate that these events could break existing records by up to 30%.

“That is an enormous number, to have a monthly value that’s 30% larger, it’s a bit like what we had in 2014, and as much again,” said Prof Adam Scaife from the Met Office.

The "30% larger" figure is an outlier.

But over what period is the record?

The Met Office website has an extended version of what the BBC reports, but strangely no figures. There is a short video by Dr Vikki Thompson to explain.

She does say that only recent data is used, but gives no definition of what constitutes recent. A clue lies not in the text, but in an explanatory graphic.

It is from 35 years of winters, which ties in with the BBC's graphic starting in 1981. There are nine regions in England and Wales by the Met Office definition; the tenth political region of London is included in the South East. The modelling could use different regions. As Ben Pile and Paul Homewood pointed out in the comments to the Cliscep article, elsewhere the Met Office splits England and Wales into six regions. What is amazing is that the Met Office article does not clarify the number of regions, still less show the current records in the thirty-five years of data. There is therefore no possibility of ever verifying the models.

Put this into context. Northern Ireland and Scotland are excluded, which seems a bit arbitrary. If rainfall were random, then the chance of this coming winter setting a new record in a region is nearly 3%. Across the nine regions, if the rainfall data were independent between regions (which they are not), that chance rises to roughly a quarter. The Met Office's 34% is higher, but consider the many alternative ways for climate patterns to become more extreme and variable. After all, with global warming the climate could be thrown into chaos, so more extreme weather should be emerging as a foretaste of much worse to come. Given the many different aspects of weather, there could be hundreds of possible ways the climate could get worse. With rainfall, it could be wetter or drier, in either summer or winter. That is four combinations, of which the Met Office chose just one. Or the record could be over any 1, 2, 3… or 12 month period. Then again, climate change could mean more frequent and violent storms, such as that of 1987. Or it could mean more heatwaves. Statistically, heatwave records could be defined in a number of different ways, such as, say, 5 consecutive days in a month where the peak daily temperature is more than 5C above the long-term monthly average peak temperature.
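To make the back-of-envelope arithmetic above reproducible, here is a minimal sketch in Python. The assumptions are this post's, not the Met Office's: a 35-winter record per region, nine regions (inferred from the BBC graphic), every year equally likely to be the wettest, and independence between regions (already flagged above as unrealistic). Note that roughly 26% is the expected number of regional records, while the chance of at least one record is a little lower, around 23%.

    # Record probabilities under a purely random (exchangeable) climate.
    # Assumptions from the post: 35 winters of data per region, nine regions,
    # independence between regions (acknowledged above to be unrealistic).
    record_years = 35
    regions = 9

    # If each past winter is equally likely to have been the wettest, the chance
    # that the coming winter beats them all is roughly 1 in 35.
    p_one_region = 1 / record_years                     # ~2.9%

    # Expected number of regional records in the coming winter.
    expected_records = regions * p_one_region           # ~0.26

    # Chance of at least one regional record, treating regions as independent.
    p_any_region = 1 - (1 - p_one_region) ** regions    # ~23%

    print(f"Single-region record chance:   {p_one_region:.1%}")
    print(f"Expected regional records:     {expected_records:.2f}")
    print(f"At least one of {regions} regions:    {p_any_region:.1%}")

Either way, the purely random benchmark is in the low-to-mid twenties of percent, so the 34% figure is higher, but not dramatically so.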
So why choose rainfall in winter? Maybe it is because in recent years there have been a number of unusually wet winters. It looks like the Met Office, for all the power of their mighty computers, have fallen for a common fallacy.

 

Texas sharpshooter fallacy is an informal fallacy which is committed when differences in data are ignored, but similarities are stressed. From this reasoning, a false conclusion is inferred. This fallacy is the philosophical/rhetorical application of the multiple comparisons problem (in statistics) and apophenia (in cognitive psychology). It is related to the clustering illusion, which refers to the tendency in human cognition to interpret patterns where none actually exist.
The name comes from a joke about a Texan who fires some gunshots at the side of a barn, then paints a target centered on the tightest cluster of hits and claims to be a sharpshooter.

A run of extremely wet winters might be due to random clustering, or it could reflect genuine patterns of natural variation, or it could be a sign of human-caused climate change. An indication of random clustering would be to look at many of the other aspects of weather, to see if there is a recent trend of emerging climate chaos. Living in Britain, I suspect that the recent wet weather is just drawing the target around the tightest cluster. Even then, high winter rainfall in Britain is usually accompanied by slightly milder temperatures than average; extreme winter cold usually occurs on cloud-free days. So, if winter rainfall is genuinely getting worse, it seems that the whole global warming thing for Britain is predicted to become a bit of a damp squib.

Kevin Marshall

 

How strong is the Consensus Evidence for human-caused global warming?

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Richard Feynman – 1964 Lecture on the Scientific Method

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

The Debunking Handbook 2011 – John Cook and Stephan Lewandowsky

My previous post looked at the attacks on David Rose for daring to suggest that the rapid fall in global land temperatures as the El Nino event ended was strong evidence that the record highs in global temperatures were not due to human greenhouse gas emissions. The technique used by his critics was to look at long-term linear trends. The main problems with this argument were
(a) according to AGW theory warming rates from CO2 alone should be accelerating and at a higher rate than the estimated linear warming rates from HADCRUT4.
(b) HADCRUT4 shows warming stopped from 2002 to 2014, yet in theory the warming from CO2 should have accelerated.

Now there are at least two ways to view my arguments. The first is to look at Feynman's approach. The climatologists and associated academics attacking journalist David Rose chose to do so from the perspective of a very blurred specification of AGW theory. That is: human emissions will cause greenhouse gas levels to rise, which will cause global average temperatures to rise. Global average temperatures clearly have risen in all long-term (>40 year) data sets, so the theory is confirmed. On a rising trend, with large variations due to natural variability, any new records will be primarily "human-caused". But making the theory and data slightly less vague reveals the opposite conclusion. Around the turn of the century the annual percentage increase in CO2 levels went from about 0.4% to 0.5% a year (figure 1), which should have led to an acceleration in the rate of warming. In reality, warming stalled.

The reaction was to come up with a load of ad hoc excuses. The Hockey Schtick blog had reached 66 separate excuses for the "pause" by November 2014, ranging from the peer-reviewed to a comment in the UK Parliament. This could be because the climate is highly complex, with many variables; the presence of each contributing factor can only be guessed at, let alone its magnitude and its interrelationships with all the other factors. So how do you tell which statements are valid information and which are misinformation? I agree with Cook and Lewandowsky that misinformation is pernicious, and difficult to get rid of once it becomes entrenched. So how does one distinguish between the good information and the bad, misleading or even pernicious?

The Lewandowsky / Cook answer is to follow the consensus of opinion. But what is the consensus of opinion? In climate science, one variation is to follow a small subset of academics in the area who answer in the affirmative to two questions:

1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?

2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

The problem is that the first question is just reading a graph, and the second is a belief statement with no precision. Anthropogenic global warming has been a hot topic for over 25 years now, yet these two very vague, empirically-based questions, forming the foundations of the subject, should by now be capable of more precise formulation. On the second, that would mean having pretty clear and unambiguous estimates of the percentage of warming, so far, that is human-caused. On that, the consensus of leading experts is unable to say whether it is 50% or 200% of the warming so far. (There are meant to be time lags and factors like aerosols that might suppress the warming.) This from the 2013 UNIPCC AR5 WG1 SPM section D3:-

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

The IPCC, encapsulating the state-of-the-art knowledge, cannot provide firm evidence in the form of a percentage, or even a fairly broad range, even with over 60 years of data to work on. It is even worse than it appears. The "extremely likely" phrase is a Bayesian probability statement. Ron Clutz's simple definition from earlier this year was:-

Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.
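In symbols (my notation, not the IPCC's), Clutz's description is just Bayes' theorem, with S a hypothesis about climate sensitivity and E the accumulating temperature evidence:

\[ P(S \mid E) = \frac{P(E \mid S)\,P(S)}{P(E)} \]

The posterior from one assessment should, in principle, become the prior for the next, so repeated cycles of genuine updating against new evidence should narrow the credible range.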

For the IPCC to claim, at the fifth attempt, that their statement is extremely likely, they should be able to show some sort of progress in updating their beliefs in the light of new evidence. That would mean narrowing the estimate of the magnitude of the impact of a doubling of CO2 on global average temperatures. As Clive Best documented in a cliscep comment in October, the IPCC reports from 1990 to 2013 failed to change the estimated range of 1.5°C to 4.5°C. Looking up Climate Sensitivity in Wikipedia gives the origin of the range estimate.

A committee on anthropogenic global warming convened in 1979 by the National Academy of Sciences and chaired by Jule Charney estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available; one, due to Syukuro Manabe, exhibited a climate sensitivity of 2 °C, the other, due to James E. Hansen, exhibited a climate sensitivity of 4 °C. “According to Manabe, Charney chose 0.5 °C as a not-unreasonable margin of error, subtracted it from Manabe’s number, and added it to Hansen’s. Thus was born the 1.5 °C-to-4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since…

It is revealing that the quote is under the subheading Consensus Estimates. The climate community has collectively failed to update the original beliefs, which were based on a very rough estimate. The emphasis on referring to consensus beliefs about the world, rather than looking outward for evidence in the real world, is, I would suggest, the primary reason for this failure. Yet such community-based beliefs completely undermine the integrity of the Bayesian estimates, making their use in statements about climate clear misinformation in Cook and Lewandowsky's use of the term.

What is more, those in the climate community who look primarily to these consensus beliefs rather than to the data of the real world will endeavour to dismiss the evidence, make up ad hoc excuses, or smear those who try to disagree. A caricature of these perspectives with respect to global average temperature anomalies is available in the form of a flickering widget at John Cook's skepticalscience website. This purports to show the difference between "realist" consensus and "contrarian" non-consensus views. Figure 2 is a screenshot of the consensus view, interpreting warming as a linear trend. Figure 3 is a screenshot of the non-consensus or contrarian view, which is supposed to interpret warming as a series of short, disconnected periods of no warming, where each period just happens to be at a higher level than the previous one. There are a number of things that this indicates.

(a) The "realist" view is of a linear trend throughout any data series. Yet the period from around 1940 to 1975 shows no warming, or slight cooling, depending on the data set. Therefore any linear trend line starting earlier than 1970-1975 and ending in 2015 will show a lower rate of warming the longer the period. That much would be consistent with the rate of increase in CO2 rising over time, as shown in figure 1. But shorten the period, again ending in 2015, and once the period becomes less than 30 years the warming trend also decreases. This contradicts the theory, unless ad hoc excuses are used, as shown in my previous post using the HADCRUT4 data set.

(b) Those who agree with the consensus are called "Realists", despite looking inwards towards common beliefs. Those who disagree are labelled "Contrarians". That label is not inaccurate where there is a dogmatic consensus, but it is utterly false to lump together all those who disagree as holding the same views, especially when no examples are provided of anyone who actually holds the caricatured views.

(c) The linear trend appears a more plausible fit than the series of "contrarian" lines. By implication, those who disagree with the consensus are portrayed as having a distinctly more blinkered and distorted perspective than those who follow it. Yet even using the GISTEMP data set (which gives the greatest support to the consensus views) there is a clear break in the linear trend. The less partisan HADCRUT4 data shows an even greater break.

Those who spot the obvious – that around the turn of the century warming stopped or slowed down, when in theory it should have accelerated – are given a clear choice. They can conform to the scientific consensus, denying the discrepancy between theory and data. Or they can act as scientists, denying the false and empirically empty scientific consensus, receiving the full weight of all the false and career-damaging opprobrium that accompanies it.

Figure 2: Skeptical Science widget – the "realist" (consensus) view.

Figure 3: Skeptical Science widget – the "contrarian" (non-consensus) view.

Kevin Marshall

 

Climate Experts Attacking a Journalist by Misinformation on Global Warming

Summary

Journalist David Rose was attacked for pointing out in a Daily Mail article that the strong El Nino event, which resulted in record temperatures, was reversing rapidly. He claimed the record highs may not be down to human emissions. The Climate Feedback attack article claimed that the El Nino event did not affect the long-term human-caused trend. My analysis shows

  • CO2 levels have been rising at increasing rates since 1950.
  • In theory this should translate into warming at increasing rates – that is, a non-linear warming rate.
  • HADCRUT4 temperature data shows warming stopped in 2002, only resuming with the El Nino event in 2015 and 2016.
  • At the central climate sensitivity estimate – that a doubling of CO2 leads to 3C of global warming – HADCRUT4 was already falling short of the theoretical warming in 2000. This is before including the impact of other greenhouse gases.
  • Putting linear trend lines through the last 35 to 65 years of data shows very little impact from El Nino, but has a very large visual impact on the divergence between theoretical human-caused warming and the temperature data: it reduces the apparent divergence between theory and data, but does not eliminate it.

Claiming that the large El Nino does not affect long-term linear trends is correct. But a linear trend describes warming neither in theory nor in the leading temperature data set. To say, as experts in their field, that the long-term warming trend is even principally human-caused requires a lot of circumspection. This is lacking in the attack article.

 

Introduction

Journalist David Rose recently wrote a couple of articles in the Daily Mail on the plummeting global average temperatures.
The first on 26th November was under the headline

Stunning new data indicates El Nino drove record highs in global temperatures suggesting rise may not be down to man-made emissions

With the summary

• Global average temperatures over land have plummeted by more than 1C
• Comes amid mounting evidence run of record temperatures about to end
• The fall, revealed by Nasa satellites, has been caused by the end of El Nino

Rose's second article used the Met Office's HADCRUT4 data set, whereas the first used satellite data. Rose was a little more circumspect when he said:

El Nino is not caused by greenhouse gases and has nothing to do with climate change. It is true that the massive 2015-16 El Nino – probably the strongest ever seen – took place against a steady warming trend, most of which scientists believe has been caused by human emissions.

But when El Nino was triggering new records earlier this year, some downplayed its effects. For example, the Met Office said it contributed ‘only a few hundredths of a degree’ to the record heat. The size of the current fall suggests that this minimised its impact.

There was a massive reaction to the first article, as discussed by Jaime Jessop at Cliscep. She particularly noted that earlier in the year there had been articles on the dramatically higher temperature record of 2015, such as a Guardian article in January. There was also a follow-up video conversation between David Rose and Dr David Whitehouse of the GWPF commenting on the reactions. One key feature of the reactions was the claim that the contribution of the El Nino effect to the global warming trend was just a few hundredths of a degree. I find the Climate Feedback article particularly interesting, as it emphasizes trend over short-run blips. Some examples:

Zeke Hausfather, Research Scientist, Berkeley Earth:
In reality, 2014, 2015, and 2016 have been the three warmest years on record not just because of a large El Niño, but primarily because of a long-term warming trend driven by human emissions of greenhouse gases.

….
Kyle Armour, Assistant Professor, University of Washington:
It is well known that global temperature falls after receiving a temporary boost from El Niño. The author cherry-picks the slight cooling at the end of the current El Niño to suggest that the long-term global warming trend has ended. It has not.

…..
KEY TAKE-AWAYS
1.Recent record global surface temperatures are primarily the result of the long-term, human-caused warming trend. A smaller boost from El Niño conditions has helped set new records in 2015 and 2016.

…….

2. The article makes its case by relying only on cherry-picked data from specific datasets on short periods.

To understand what was said, I will try to take the broader perspective. That is, to see whether the evidence points conclusively to a single long-term warming trend that is primarily human-caused. This will point to the real reason (or reasons) for downplaying the impact of an extreme El Nino event on record global average temperatures. There are a number of steps in this process.

Firstly, to look at the data on rising CO2 levels. Secondly, to relate that to the predicted global average temperature rise, and hence to expected warming trends. Thirdly, to compare those trends to the actual trends in the HADCRUT4 estimates, taking note of the consequences of including other greenhouse gases. Fourthly, to put the calculated trends in the context of the statements made above.

 

1. The recent data of rising CO2 levels
CO2 accounts for a significant majority of the alleged warming from increases in greenhouse gas levels. Since 1958 (when accurate measurements started to be taken at Mauna Loa) CO2 levels have risen significantly. Whilst I could produce a simple graph of either the CO2 level rising from 316 ppm to 401 ppm in 2015, or the year-on-year increases in CO2 rising from 0.8 ppm in the 1960s to over 2 ppm in the last few years, Figure 1 is more illustrative.

CO2 is not just rising, but the rate of rise has been increasing as well, from 0.25% a year in the 1960s to over 0.50% a year in the current century.
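A minimal check of those growth rates, using a handful of approximate Mauna Loa annual means (the values below are rounded and illustrative; substitute the full annual series for anything more than a sanity check):

    # Average annual percentage growth in CO2 between two years.
    co2_ppm = {1959: 316.0, 1969: 324.6, 1999: 368.4, 2015: 400.8}  # approx. annual means

    def pct_growth_per_year(c_start, c_end, years):
        """Compound annual growth rate, as a percentage."""
        return ((c_end / c_start) ** (1 / years) - 1) * 100

    print(f"1959-1969: {pct_growth_per_year(co2_ppm[1959], co2_ppm[1969], 10):.2f}% per year")  # ~0.27%
    print(f"1999-2015: {pct_growth_per_year(co2_ppm[1999], co2_ppm[2015], 16):.2f}% per year")  # ~0.53%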

 

2. Rising CO2 should be causing accelerating temperature rises

The impact of CO2 on temperatures is not linear, but is believed to approximate to a fixed temperature rise for each doubling of CO2 levels. That means if CO2 levels were rising arithmetically, the impact on the rate of warming would fall over time. If CO2 levels were rising by the same percentage amount year-on-year, then the consequential rate of warming would be constant over time. But figure 1 shows that the percentage rise in CO2 has increased over the last two-thirds of a century. The best way to evaluate the combination of CO2 increasing at an accelerating rate and a diminishing impact of each unit rise on warming is to crunch some numbers. The central estimate used by the IPCC is that a doubling of CO2 levels will result in an eventual rise of 3C in global average temperatures. Dana1981 at Skepticalscience used a formula that produces a rise of 2.967C for any doubling. After adjusting the formula and plugging in the Mauna Loa annual average CO2 levels, the values produce Figure 2.
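A one-line way of seeing the point about percentage growth (my notation; S is the sensitivity per doubling of CO2 and C(t) the concentration):

\[ \Delta T(t) = \frac{S}{\ln 2}\,\ln\frac{C(t)}{C_0} \quad\Longrightarrow\quad \frac{d(\Delta T)}{dt} = \frac{S}{\ln 2}\cdot\frac{1}{C}\frac{dC}{dt} \]

The expected warming rate is proportional to the percentage growth rate of CO2, so constant percentage growth gives a constant warming rate, and the accelerating percentage growth shown in figure 1 implies an accelerating expected warming rate.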

In computing the data I estimated the level of CO2 in 1949 (based roughly on CO2 estimates from Law Dome ice core data) and then assumed a linear increase through the 1950s. Another assumption was that the full impact of the CO2 rise on temperatures would take place in the year following that rise.
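As an illustration of that calculation, here is a minimal sketch. The coefficient is chosen so that a doubling gives 2.967C, matching the figure quoted for the Skeptical Science formula; that exact coefficient, the ~310 ppm 1949 baseline and the CO2 values are my assumptions for illustration, not the precise inputs behind Figure 2.

    import math

    SENSITIVITY = 0.8 * 5.35          # C per unit of ln(C/C0); gives 2.967C per doubling

    def co2_only_warming(c_ppm, c0_ppm=310.0):
        """Warming (C) attributed to CO2 relative to a rough 1949 baseline of c0_ppm."""
        return SENSITIVITY * math.log(c_ppm / c0_ppm)

    # With a one-year lag, warming attributed to year t uses the CO2 level of year t-1.
    co2_ppm = {2014: 398.6, 2015: 400.8}              # approximate Mauna Loa annual means
    print(f"CO2-only warming to 2015: {co2_only_warming(co2_ppm[2014]):.2f} C")
    print(f"Additional warming attributed to 2016: {co2_only_warming(co2_ppm[2015]) - co2_only_warming(co2_ppm[2014]):.3f} C")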

The annual CO2-induced temperature change is highly variable, corresponding to the fluctuations in the annual CO2 rise. The 11-year average – placed at the end of the series to give an indication of the lagged impact that CO2 is supposed to have on temperatures – shows the acceleration in the expected rate of CO2-induced warming resulting from the acceleration in the rate of increase in CO2 levels. Most critically, there is some acceleration in warming around the turn of the century.

I have also included the impact of a linear trend in CO2 (simply dividing the total CO2 increase in the period by the number of years), along with a steady increase of 0.396% a year, which produces a constant rate of temperature rise.

Figure 3 puts the calculations into the context of the current issue.

This gives the expected linear temperature trends from various start dates up until 2014 and 2016, assuming a one-year lag in the impact of changes in CO2 levels on temperatures. These are the same sort of linear trends that the climate experts used in criticizing David Rose. Adding two more years of warming produces very little difference – about 0.054C of temperature rise, and an increase in trend of less than 0.01C per decade. More importantly, the rate of temperature rise from CO2 alone should be accelerating.

 

3. HADCRUT4 warming

How does one compare this to the actual temperature data? A major issue is that there is a very indeterminate lag between the rise in CO2 levels and the rise in average temperature. Another issue is that CO2 is not the only greenhouse gas. The more minor greenhouse gases may have shown different patterns of increase over the last few decades. They may change the trends of the resultant warming, but their impact should be additional to the warming caused by CO2. That is, in the long term, CO2 warming should account for less than the total observed.
There is no need to do actual calculations of trends from the surface temperature data. The Skeptical Science website has a trend calculator, where one can just plug in the values. Figure 4 shows an example of the graph, which shows that the dataset currently ends in an El Nino peak.
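For readers who want to sanity-check a single figure without the online calculator, the underlying calculation is just an ordinary least-squares slope through the temperature anomalies, expressed per decade. A minimal sketch with made-up anomaly values (substitute the real HADCRUT4 series; the published tables use monthly data, so this will not reproduce them exactly):

    import numpy as np

    def trend_per_decade(years, anomalies):
        """Ordinary least-squares trend of annual anomalies, in C per decade."""
        slope_per_year = np.polyfit(years, anomalies, 1)[0]
        return slope_per_year * 10

    years = np.arange(2002, 2015)                       # 2002..2014 inclusive
    anomalies = np.array([0.50, 0.51, 0.45, 0.54, 0.51, 0.49, 0.40,
                          0.51, 0.56, 0.42, 0.47, 0.50, 0.58])  # illustrative values only
    print(f"2002-2014 trend: {trend_per_decade(years, anomalies):+.3f} C/decade")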

The trend results for HADCRUT4 are shown in Figure 5 for periods up to 2014 and 2016 and compared to the CO2 induced warming.

There are a number of things to observe from the trend data.

The most visible difference between the two tables is that the first has a pause in global warming after 2002, whilst the second has a warming trend. This is attributable to the impact of El Nino. These experts are also right in that it makes very little difference to the long-term trend. If the long term is over 40 years, then it is like adding 0.04C per century to that long-term trend.

But there is far more within the tables than these observations. Concentrate first on the three "Trend in °C/decade" columns. The first is the CO2 warming impact from figure 3. For a given end year, the shorter the period, the higher the warming trend. Next to this are the Skeptical Science trends for the HADCRUT4 data set. Start Year 1960 has a higher trend than Start Year 1950, and Start Year 1970 has a higher trend than Start Year 1960. But then each later Start Year has a lower trend than the previous Start Years. There is one exception: the period 2010 to 2016 has a much higher trend than any other period – a consequence of the extreme El Nino event. Excluding this, there are now over three decades where the actual warming trend has been diverging from the theory.

The third of the "Trend in °C/decade" columns is simply the difference between the HADCRUT4 temperature trend and the expected trend from rising CO2 levels. If a doubling of CO2 levels did produce around 3C of warming, and other greenhouse gases were also contributing to warming, then one would expect CO2 alone to eventually explain less than the observed warming. That is, the variance would be positive. But CO2 levels accelerated whilst actual warming stalled, increasing the negative variance.

 

4. Putting the claims into context

Compare David Rose

Stunning new data indicates El Nino drove record highs in global temperatures suggesting rise may not be down to man-made emissions

With Climate Feedback KEY TAKE-AWAY

1.Recent record global surface temperatures are primarily the result of the long-term, human-caused warming trend. A smaller boost from El Niño conditions has helped set new records in 2015 and 2016.

The HADCRUT4 temperature data shows that there had been no warming for over a decade, following a warming trend. This directly contradicts the theory, which would predict that CO2-based warming would be at a higher rate than previously. Given that the record temperatures following this hiatus came as part of a naturally-occurring El Nino event, it is fair to say that record highs in global temperatures ….. may not be down to man-made emissions.

The so-called long-term warming trend encompasses both the late twentieth century warming and the twenty-first century hiatus. As the latter flatly contradicts the theory, it is incorrect to describe the long-term warming trend as "human-caused". There needs to be a more circumspect description, such as: the vast majority of academics working in climate-related areas believe that the long-term (last 50+ years) warming is mostly "human-caused". This would be in line with the first bullet point from the UNIPCC AR5 WG1 SPM section D3:-

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

When the IPCC's summary opinion and the actual data are taken into account, Zeke Hausfather's comment that the records "are primarily because of a long-term warming trend driven by human emissions of greenhouse gases" is dogmatic.

Now consider what David Rose said in the second article

El Nino is not caused by greenhouse gases and has nothing to do with climate change. It is true that the massive 2015-16 El Nino – probably the strongest ever seen – took place against a steady warming trend, most of which scientists believe has been caused by human emissions.

Compare this to Kyle Armour’s statement about the first article.

It is well known that global temperature falls after receiving a temporary boost from El Niño. The author cherry-picks the slight cooling at the end of the current El Niño to suggest that the long-term global warming trend has ended. It has not.

This time Rose seems to have responded to the pressure by stating that there is a long-term warming trend, despite the data clearly showing that this is untrue, except in the vaguest sense. The data does not show a single warming trend. Going back to the Skeptical Science trends, we can break down the data from 1950 into four periods.

1950-1976 -0.014 ±0.072 °C/decade (2σ)

1976-2002 0.180 ±0.068 °C/decade (2σ)

2002-2014 -0.014 ±0.166 °C/decade (2σ)

2014-2016 1.889 ±1.882 °C/decade (2σ)

There was warming for about a quarter of a century, sandwiched between two periods of no warming. At the end is an uptick. Only very loosely can anyone speak of a long-term warming trend in the data. But basic theory hypothesises a continuous, non-linear warming trend. Journalists can be excused for failing to make the distinctions. As non-experts they will reference opinion that appears sensibly expressed, especially when the alleged experts in the field are united in using such language. But those in academia, who should have a demonstrable understanding of theory and data, should be more circumspect in their statements when speaking as experts in their field. (Kyle Armour's comment is an extreme example of what happens when academics completely suspend drawing on their expertise.) This is particularly true when there are strong divergences between the theory and the data. The consequence is plain to see: expert academic opinion tries to bring the real world into line with the theory by authoritative but banal statements about trends.

Kevin Marshall

Failed Arctic Sea Ice predictions illustrate Degenerating Climatology

The Telegraph yesterday carried an interesting article: Experts said Arctic sea ice would melt entirely by September 2016 – they were wrong

Dire predictions that the Arctic would be devoid of sea ice by September this year have proven to be unfounded after latest satellite images showed there is far more now than in 2012.
Scientists such as Prof Peter Wadhams, of Cambridge University, and Prof Wieslaw Maslowski, of the Naval Postgraduate School in Monterey, California, have regularly forecast the loss of ice by 2016, which has been widely reported by the BBC and other media outlets.

In June, Michel at Trustyetverify blog traced a number of these false predictions. Michel summarized

(H)e also predicted until now:
• 2008 (falsified)
• 2 years from 2011 → 2013 (falsified)
• 2015 (falsified)
• 2016 (still to come, but will require a steep drop)
• 2017 (still to come)
• 2020 (still to come)
• 10 to 20 years from 2009 → 2029 (still to come)
• 20 to 30 years from 2010 → 2040 (still to come).

The 2016 prediction is now false. Paul Homewood has been looking at Professor Wadhams' failed prophecies in a series of posts as well.

The Telegraph goes on to quote from three more moderate sources. One of them is:-

Andrew Shepherd, professor of earth observation at University College London, said there was now “overwhelming consensus” that the Arctic would be free of ice in the next few decades, but warned earlier predictions were based on poor extrapolation.
“A decade or so ago, climate models often failed to reproduce the decline in Arctic sea ice extent revealed by satellite observations,” he said.
“One upshot of this was that outlier predictions based on extrapolation alone were able to receive wide publicity.
“But climate models have improved considerably since then and they now do a much better job of simulating historical events.
This means we have greater confidence in their predictive skill, and the overwhelming consensus within the scientific community is that the Arctic Ocean will be effectively free of sea ice in a couple of decades should the present rate of decline continue.

(emphasis mine)

Professor Shepherd is saying that the shorter-term (from a few months to a few years) highly dire predictions have turned out to be false, but that improved techniques in modelling enable much sounder predictions over 25-50 years to be made. That would require a development in two dimensions – scale and time. Detecting a small human-caused change over decades needs far greater skill in differentiating it from natural variations than does spotting a dramatic year-on-year shift. Yet it would appear that at the end of the last century there was a natural upturn, following an unusually cold period from the 1950s to the 1970s, as documented by HH Lamb; that cold period had resulted in an extension of the sea ice. The problem of detecting the human influence is even worse if a natural reduction in sea ice has worked concurrently with that human influence. However, instead of offering us demonstrated increased technical competency in modelling (as opposed to more elaborate models), Professor Shepherd offers us a consensus of belief that the more moderate predictions are reliable.
This is a clear example of the degenerating climatology that I outlined last year. In particular, I proposed that rather than progressive climate science – increasing scientific evidence and more rigorous procedures supporting ever tighter hypotheses about clear catastrophic anthropogenic global warming – we have degenerating climatology, with ever weaker and vaguer evidence for some global warming.

If Professor Wadhams had consistently predicted the loss of summer sea ice by a set date, and that prediction had been borne out, it would have been strong confirmation of a potentially catastrophic problem. Climatology would have scored a major success. Even if, instead of ice-free summers by now, there had been evidence of a clear acceleration in the decline in sea ice extent, it could have been viewed as some progression. But instead we are asked to accept a consensus of belief that will only be confirmed or refuted decades ahead. The interpretation of success or failure will then, no doubt, be given over to the same consensus that was responsible for the vague predictions in the first place.

Kevin Marshall

William Connolley is on the side of anti-science, not the late Bob Carter

In the past week there have been a number of tributes to Professor Bob Carter, retired Professor of Geology and leading climate sceptic. These include tributes from Jo Nova, James Delingpole, Steve McIntyre, Ian Plimer at the GWPF, Joe Bast of The Heartland Institute and E. Calvin Beisner of Cornwall Alliance. In complete contrast, William Connolley posted this comment in a post titled Science advances one funeral at a time:

Actually A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it, but I’m allowed to paraphrase in titles. And anyway he said it in German, naturally. Today brings us news of another such advancement in science, with the reported death of Robert Carter.

Below is a comment I posted at Climate Scepticism

I believe Max Planck did have a point. In science, people tenaciously hold onto ideas even if they have been falsified by the evidence or (as more often happens) supplanted by better ideas. Where the existing ideas form an institutionalized consensus, discrimination occurs against those whose hypotheses could undermine that consensus. It can be that a new research paradigm only gains prominence when the numbers in the old paradigm dwindle. As a result, the advance of new knowledge and understanding is held back.

To combat this innate conservatism in ideas I propose four remedies.

First is to promote methods of evaluating competing theories that are independent of consensus or opinion. In pure science that means conducting experiments that would falsify a hypothesis. For complex subjects, where experiment is not possible and data is incomplete and of poor quality – like the AGW hypothesis or economic theories – comparative analysis needs to be applied based upon independent standards.

Second is to recognize institutional bias by promoting pluralism and innovation.

Third is to encourage better definition of concepts and more rigorous standards of data within the existing research paradigm, to push the boundaries.

Fourth is to train people to separate scientific endeavours from belief systems, whether religious, political or ethical.

The problem for William Connolley is that all his efforts within climatology – such as editing Wikipedia to reflect his narrow views, or helping set up Real Climate to save the Mannian Hockey Stick from exposure of its many flaws – are directed at enforcing the existing paradigm and blocking any challenges. He is part of the problem that Planck was talking about.

As an example of the narrow and dogmatic views that Connolley supports, here is the late Bob Carter on his major point about how beliefs in unprecedented human-caused warming are undermined by the long-term temperature proxies from ice core data. The video quality is poor, probably due to the lack of the professional funding that Connolley and his fellow-travellers fought so hard to deny.

Kevin Marshall

Shotton Open Cast Coal Mine Protest as an example of Environmental Totalitarianism

Yesterday, in the Greens and the Fascists, Bishop Hill commented on Jonah Goldberg's book Liberal Fascism. In summing up, BH stated:-

Goldberg is keen to point out that the liberal and progressive left of today do not share the violent tendencies of their fascist forebears: theirs is a gentler totalitarianism (again in the original sense of the word). The same case can be made for the greens. At least for now; it is hard to avoid observing that their rhetoric is becoming steadily more violent and the calls for unmistakably fascist policy measures are ever more common.

The link is to an article in the Ecologist (reprinted from the Open Democracy blog) – "Coal protesters must be Matt Ridley's guilty conscience"

The coal profits that fill Matt Ridley’s bank account come wet with the blood of those killed and displaced by the climate disaster his mines contribute to, writes T. If hgis consicence is no longer functioning, then others must step into that role to confront him with the evil that he is doing. (Spelling as in the original)

The protest consisted of blocking the road to Shotton open cast coal mine for eight hours. The reasoning was:

This was an effective piece of direct action against a mine that is a major contributor to climate disaster, and a powerful statement against the climate-denying Times columnist, Viscount Matt Ridley, that owns the site. In his honour, we carried out the action as ‘Matt Ridley’s Conscience’.

The mine produces about one million tonnes of coal a year out of 8,000 million tonnes globally. The blockade may have reduced annual output by 0.3%, and this will be made up from the mine, or from other sources. Coal is not the only source of greenhouse gas emissions, so the mine's coal results in less than 0.004% of global greenhouse gas emissions. Further, the alleged impact of GHG emissions on the climate is cumulative. The recoverable coal at Shotton is estimated at 6 million tonnes, or 0.0007% of the estimated global reserves of 861 billion tonnes (page 5). These global reserves could increase as new deposits are found, as has happened in the recent past for coal, gas and oil. So, far from being "a major contributor to climate disaster", Shotton Open Cast Coal Mine is a drop in the ocean.
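A rough reconstruction of the arithmetic behind those percentages; the figures are the ones quoted above, while the ~30% share of global greenhouse gas emissions attributed to coal is my assumption, used only to show how a number of the order of 0.004% arises:

    # Scale of Shotton against global coal output, GHG emissions and coal reserves.
    shotton_output_mt  = 1.0          # million tonnes of coal per year
    global_coal_mt     = 8_000.0      # million tonnes of coal per year worldwide
    shotton_reserve_mt = 6.0          # million tonnes recoverable at Shotton
    global_reserve_mt  = 861_000.0    # million tonnes of global coal reserves

    coal_share_of_ghg = 0.30          # assumed fraction of global GHG emissions from coal

    share_of_coal     = shotton_output_mt / global_coal_mt          # 0.0125%
    share_of_ghg      = share_of_coal * coal_share_of_ghg           # ~0.004%
    share_of_reserves = shotton_reserve_mt / global_reserve_mt      # ~0.0007%

    print(f"Share of global coal output:   {share_of_coal:.4%}")
    print(f"Implied share of global GHG:   {share_of_ghg:.4%}")
    print(f"Share of global coal reserves: {share_of_reserves:.4%}")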

But is there a climate disaster of which Matt Ridley is in denial? Anonymous author and convicted criminal T does not offer any evidence of current climate disasters. He is not talking about modelled projections, but currently available evidence. So where are all the dead bodies, or the displaced persons? Where are the increased deaths through drought-caused famines? Where are the increased deaths from malaria or other diseases from warmer and worsening conditions? Where is the evidence of increased deaths from extreme weather, such as hurricanes? Where are the refugees from drought-stricken areas, or from low-lying areas now submerged beneath the waves?

The inability to evaluate the evidence is shown by the comment.

Ridley was ( … again) offered a platform on BBC Radio 4 just a week before our hearing, despite his views being roundly debunked by climate scientists.

The link leads to a script of the Radio 4 interview with annotated comments. I am not sure that all the collective brains do debunk (that is, expose the falseness or hollowness of an idea or belief) Matt Ridley's comments. Mostly it is based on nit-picking or on pointing out contradictions with their own views and values. Among the 75 comments there are two extreme examples I would like to highlight.

First is that Matt Ridley mentioned the Hockey Stick graphs and the work of Steve McIntyre in exposing the underlying poor data. The lack of a medieval warm period would provide circumstantial (or indirect) evidence that the warming of the last 200 years is unprecedented. Gavin Schmidt responded with comments (5) and (6), shown below.

Schmidt is fully aware that Steve McIntyre also examined the Wahl and Amman paper and thoroughly discredited it. In 2008 Andrew Montford wrote a long paper on the shenanigans that went into the publication of the paper, and on its lack of statistical significance. Following on from this, Montford wrote The Hockey Stick Illusion in 2010, which was reviewed by Tamino of RealClimate. Steve McIntyre was able to refute the core arguments in Tamino's polemic by reposting Tamino and the Magic Flute, which was written in 2008 and covered all the substantial arguments that Tamino made. Montford's book further shows a number of instances where peer review in academic climatology journals is not a quality control mechanism, but more a device for discriminating between those who support the current research paradigm and those who would undermine that consensus.

Comment 6 concludes

The best updates since then – which include both methodology improvements and expanded data sources – do not show anything dramatically different to the basic picture shown in MBH.

The link is to Chapter 5 of the IPCC AR5 WG1 assessment report. The paleoclimate discussion is a small subsection, a distinct reversal from the prominent place given to the original hockey stick in the third assessment report of 2001. I would contend the picture is dramatically different. Compare the original hockey stick of the past 1,000 years with Figure 5.7 on page 409 of AR5 WG1 Chapter 5.

In 2001, the MBH reconstruction was clear. From 1900 to 2000 average temperatures in the Northern Hemisphere had risen by over 1C, far more than the change in any other century. But in at least two of the reconstructions – Ma08eivl and Lj10cps – there have been similarly sized fluctuations in other periods. The evidence now seems to back up Matt Ridley's position of some human influence on temperatures, but does not support the contention of unprecedented temperature change. Gavin Schmidt's opinions are not those of an expert witness, but of a blinkered activist.

Schmidt’s comments on hockey stick graphs are nothing compared to comment 35

The Carbon Brief (not the climate scientists) rejects evidence that contradicts its views based on nothing more than ideological prejudice. A search for Indur Goklany will find his own website, where he has copies of his papers. Under the "Climate Change" tab is not only the 2009 paper, but a 2011 update – Wealth and Safety: The Amazing Decline in Deaths from Extreme Weather in an Era of Global Warming, 1900–2010. Of interest are a table and a figure.

Table 2 is a reproduction of World Health Organisation data from 2002. It clearly shows that global warming is well down the list of causes of deaths. Goklany states in the article why even these figures are based on dubious assumptions. Anonymous T falsely believes that global warming is currently a major cause of death.

Figure 6 for the period 1990-2010 shows

  • the Global Death and Death Rates per million Due to Extreme Weather Events
  • CO2 Emissions
  • Global average GDP Per Capita

Figure 6 provides strong empirical evidence that increasing CO2 emissions (about 70-80% of total GHG emissions) have not caused increased deaths. The emissions are a consequence of increasing GDP per capita, which, as Goklany argues, has resulted in fewer deaths from extreme weather. More importantly, increasing GDP has resulted in increased life expectancy and in reductions in malnutrition and in deaths that can be averted by access to rudimentary health care. Anonymous T would not know this even if he had read all the comments, yet it completely undermines the beliefs that caused him to single out Matt Ridley.

The worst part of Anonymous T’s article

Anonymous T concludes the article as follows (Bold mine)

The legal process efficiently served its function of bureaucratising our struggle, making us attempt to justify our actions in terms of the state’s narrow, violent logic. The ethics of our action are so clear, and declaring myself guilty felt like folding to that.

We found ourselves depressed and demoralised, swamped in legal paperwork. Pleading guilty frees us from the stress of a court case, allowing us to focus on more effective arenas of struggle.

I faced this case from a position of relative privilege – with the sort of appearance, education and lawyers that the courts favour. Even then I found it crushing. Today my thoughts are with those who experience the racism, classism and ableism of the state and its laws in a way that I did not.

That reflection makes me even more convinced of the rightness of our actions. Climate violence strikes along imperialist lines, with those least responsible, those already most disadvantaged by colonial capitalism, feeling the worst impacts.

Those are the people that lead our struggle, but are often also the most vulnerable to repression in the struggle. When fighting alongside those who find themselves at many more intersections of the law’s oppression than I do, I have a responsibility to volunteer first when we need to face up to the police and the state.

Faced with structural injustice and laws that defend it, Matt Ridley’s Conscience had no choice but to disobey. Matt Ridley has no conscience and neither does the state nor its system of laws. Join in. Be the Conscience you want to see in the world.

The writer rejects the rule of law, and is determined to carry out further acts of defiance against it. He intends to commit more acts of violence, with "climate" as a cover for revolutionary Marxism. Further, the writer is trying to incite others to follow his lead. He claims to know Matt Ridley's Conscience better than Ridley himself, but in the next sentence claims that "Matt Ridley has no conscience". Further, this statement would seem to contradict the justification for the criminal acts allegedly made at Bedlington Magistrates Court on December 16th
that the protesters were frustrated by the lack of UK Government action to combat climate change.

It is not clear who the author of this article is, but he/she is one of the following:-

Roger Geffen, 49, of Southwark Bridge Road, London.

Ellen Gibson, 21, of Elm Grove, London;

Philip MacDonald, 28, of Blackstock Road, Finsbury Park, London;

Beth Louise Parkin, 29, of Dodgson House, Bidborough Street, London;

Pekka Piirainen, 23, of Elm Grove, London;

Thomas Youngman, 22, of Hermitage Road, London.

Laurence Watson, 27, of Blackstock Road, Finsbury Park, London;

Guy Shrubsole, 30, of Bavent Road, London;

Lewis McNeill, 34, of no fixed address.

Kevin Marshall

aTTP falsely attacks Bjorn Lomborg’s “Impact of Current Climate Proposals” Paper

The following is a comment to be posted at Bishop Hill, responding to another attempt by blogger ….andThenThere'sPhysics to undermine the work of Bjorn Lomborg. The previous attempt was discussed here. This comment includes a number of links, as well as a couple of illustrative screen captures at the foot of the post.

aTTP’s comment is

In fact, you should read Joe Romm’s post about this. He’s showing that the INDCs are likely to lead to around 3.5C which I think is relative to something like the 1860-1880 mean. This is very similar to the MIT’s 3.7, and quite a bit lower than the RCP8.5 of around 4.5C. So, yes, we all know that the INDCs are not going to do as much as some might like, but the impact is likely to be a good deal greater than that implied by Lomborg who has essentially assumed that we get to 2030 and then simply give up.

Nov 11, 2015 at 9:31 AM | …and Then There’s Physics

My Comment

aTTP at 9.31 refers to Joe Romm's blog post of Nov 3, "Misleading U.N. Report Confuses Media On Paris Climate Talks". Romm uses Climate Interactive's Climate Scoreboard Tool to show that the INDC submissions (if fully implemented) will result in 3.5°C of warming, as against 4.5°C in the non-policy "No Action" scenario. That implied difference is around six times the maximum impact of 0.17°C claimed in Lomborg's new paper. Who is right? What struck me first was that Romm's first graph, copied straight from Climate Interactive, seems to have a very large estimate for emissions in the "No Action" scenario. Downloading the underlying data, I find the "No Action" global emissions in 2100 are 139.3 GtCO2e, compared with about 110 GtCO2e for the RCP8.5 high-emissions scenario in Figure SPM5(a) of the AR5 Synthesis Report. But it is the breakdown per country or region that matters.

For the USA, without action, emissions are forecast to rise by 40% from 2010 to 2030, in contrast to a rise of just 9% in the period 1990 to 2010. It is likely that emissions will fall without policy and will be no higher in 2100 than in 2010. The "No Action" scenario therefore overestimates emissions by 2-3 GtCO2e in 2030 and by about 7-8 GtCO2e in 2100.

For China the overestimation is even greater. Emissions will peak during the next decade as China fully industrializes, just as emissions peaked in most European countries in the 1970s and 1980s. Climate Interactive assumes that emissions will peak at 43 GtCO2e in 2090, whereas other estimates suggest the emissions peak will be around 16-17 GtCO2e before 2030.

Together, the overestimates in the US and China "No Action" scenarios account for over half of the 55-60 GtCO2e difference in 2100 emissions between the "No Action" and "Current INDC" scenarios, as the rough check below shows. A very old IT term applies here – GIGO. If aTTP had actually checked the underlying assumptions he would realise that Romm's rebuttal of Lomborg based on China's emission assumptions (and repeated on his own blog) is as false as claiming that the availability of free condoms is why population peaks.
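A rough check of that "over half" claim, using only the figures quoted above (mid-points of the quoted ranges; these are my estimates as stated in the comment, not Climate Interactive's published breakdown):

    # How much of the 2100 "No Action" vs "Current INDC" gap comes from the
    # US and China overestimates discussed above (all figures in GtCO2e).
    us_overestimate_2100    = 7.5                     # mid-point of 7-8
    china_peak_assumed      = 43.0                    # Climate Interactive assumption
    china_peak_alternative  = 16.5                    # mid-point of 16-17
    china_overestimate_2100 = china_peak_assumed - china_peak_alternative

    scenario_gap_2100 = 57.5                          # mid-point of 55-60

    combined = us_overestimate_2100 + china_overestimate_2100
    print(f"US + China overestimate: {combined:.1f} GtCO2e")
    print(f"Share of scenario gap:   {combined / scenario_gap_2100:.0%}")   # ~59%, i.e. over half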

Links posted at https://manicbeancounter.com/2015/11/11/attp-falsely-attacks-bjorn-lomborgs-impact-of-current-climate-proposals-paper/

Kevin Marshall

 

Figures referred to (but not referenced) in the comment above

Figure 1: Climate Interactive’s graph, referenced by Joe Romm.


Figure 2: Reproduction of Figure SPM5(a) from Page 9 of the AR5 Synthesis Report.

 

Update – posted the following to ATTP’s blog



 

Degenerating Climatology 1: IPCC Statements on Human Caused Warming

This is the first in an occasional series illustrating the degeneration of climatology away from an empirical science. In my view, for climatology to be progressing it needs to be making ever clearer empirical statements that support the Catastrophic Anthropogenic Global Warming (CAGW) hypothesis, moving away from bland statements that can just as easily support a weaker form of the hypothesis, or random fluctuations. In figure 1 this progression is illustrated by the red arrow, with increasing depth of colour. The example given below is an illustration of the opposite tendency.

Obscuring the slowdown in warming in AR5

Every major temperature data set shows that the warming rate this century has been lower than that towards the end of the twentieth century. This is becoming a severe issue for those who believe that the main driver of warming is increasing atmospheric greenhouse gas levels. It posed a severe problem for the IPCC in trying to find evidence for the theory when it published in late 2013.

In the IPCC Fifth Assessment Report Working Group 1 (The Physical Science Basis) Summary for Policy Makers, the headline summary on the atmosphere is:-

Each of the last three decades has been successively warmer at the Earth’s surface than any preceding decade since 1850. In the Northern Hemisphere, 1983–2012 was likely the warmest 30-year period of the last 1400 years (medium confidence).

There are three parts to this.

  • The last three decades have been successively warmer according to the major surface temperature data sets. The 1980s were warmer than the 1970s; the 1990s warmer than the 1980s; and the 2000s warmer than the 1990s.
  • The 1980s was warmer than any preceding decade from the 1850s.
  • In the collective opinion of the climate experts there is a greater than 66% chance that, in the Northern Hemisphere, 1983–2012 was the warmest 30-year period of the last 1400 years.

What it does not include are the following.

  1. That global average temperature rises have slowed down in the last decade compared with the 1990s. From 2003, in the HADCRUT4 temperature series, warming had stopped.
  2. That global average temperature also rose significantly in the mid-nineteenth and early twentieth centuries.
  3. That global average temperature fell in 4 or 5 of the 13 decades from 1880 to 2010.
  4. That in the last 1400 years there was a warm period about 1000 years ago and a significantly cold period that bottomed out around 1820. That is, a Medieval Warm Period and the Little Ice Age.
  5. That there is strong evidence of a Roman Warm Period about 2000 years ago and a Bronze Age warm period about 3000 years ago.

Points (i) to (iii) can be confirmed from figure 2. Both of the major surface temperature anomaly data sets show warming trends in each of the last three decades, implying successive warming. A similar statement could have been made in 1943 if the data had been available.

In so far as the CAGW hypothesis is broadly defined as a non-trivial human-caused rise in temperatures (the narrower, more precise definition being that the temperature change has catastrophic consequences), no empirical support for it is to be found in the actual temperature records or in the longer reconstructions from proxy data.

The headline statement above is amplified by this statement from the press release of 27/09/2013.

It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century. The evidence for this has grown, thanks to more and better observations, an improved understanding of the climate system response and improved climate models.

This statement excludes other types of temperature change, and says nothing about other causes of the temperature change. The cooling in the 1960s is not covered. The observed temperature change is only the net impact of all influences, known or unknown. Further, the likelihood is based upon expert opinion. If the experts have always given prominence to human influences on warming (as opposed to natural and random influences) then their opinion will be biased. Over time, if this opinion is not objectively adjusted in the light of evidence that does not conform to the theory, the basis of the Bayesian statistics is undermined.

Does the above mean that climatology is degenerating away from a rigorous scientific discipline? So far I have quoted the latest expert statements without comparing them with previous ones. A statement comparable to the human influence statement, from the fourth assessment report WG1 SPM (page 3), is

The understanding of anthropogenic warming and cooling influences on climate has improved since the TAR, leading to very high confidence that the global average net effect of human activities since 1750 has been one of warming, with a radiative forcing of +1.6 [+0.6 to +2.4] W m–2

The differences are

  • The greenhouse gas effect is no longer emphasised. It is now the broader “human influence”.
  • The previous statement was prepared to associate the influence with a much longer period. Probably the collapse of hockey stick studies, with their portrayal of unprecedented warming, has something to do with this.
  • Conversely, the earlier statement was only prepared to say that since 1750 the net effect of human influences has been one of warming. The more recent statement claims that human influence has been the dominant cause of the warming since the mid-20th century.

This leads to my final point indicating the degeneration of climatology away from science. Comparing the WG1 SPMs for TAR, AR4 and AR5, there are shifting statements. In each report the authors have chosen the best statements to fit their case at that point in time. The result is a lack of the continuity that might demonstrate an increasing correspondence between theory and data.

Kevin Marshall