UK Government Committee's 7,000 heat-deaths in the 2050s assumes UK climate policies will be useless

Summary

Last week, on the day forecast to have record temperatures in the UK, the Environmental Audit Committee warned of 7,000 heat-related deaths every year in the UK by the 2050s if the Government did not act quickly. That prediction was based upon Hajat S, et al 2014. Two principal assumptions behind that prognosis did not hold even at the date the paper was submitted. The first is that the trend of increasing summer heatwaves in the data period of 1993 to 2006 would continue; in fact, by 2012 it had ended, and the six following summers were distinctly mild, dull and wet. The second, given that estimates from the extreme 2003 heatwave indicate most of the projected heat deaths would occur in NHS hospitals, is the assumption that health professionals in those hospitals would not only ignore the increasing death toll, but fail to take adaptive measures in response to an observed trend of ever more frequent summer heatwaves. Instead, it would take a central committee to co-ordinate the data gathering and provide the analysis. Without the politicians and bureaucrats producing reports and making recommendations, the world will collapse.
There is a third, implied assumption in the projection. The 7,000 heat-related deaths in the 2050s assumes the complete failure of the Paris Agreement to control greenhouse gas emissions, let alone keep warming within any arbitrary 1.5°C or 2°C limit. That means other countries will have failed to follow Britain's lead in reducing their emissions by 80% by 2050. The implied assumption is that the considerable costs and hardships imposed on the British people by the Climate Change Act 2008 will have been for nothing.

Announcement on the BBC

In the early morning of last Thursday – a day when there were forecasts of possible record temperatures – the BBC published a piece by Roger Harrabin, "Regular heatwaves 'will kill thousands'", which began:

The current heatwave could become the new normal for UK summers by 2040 because of climate change, MPs say.
The Environmental Audit Committee warns of 7,000 heat-related deaths every year in the UK by 2050 if the government doesn’t act quickly. 
Higher temperatures put some people at increased risk of dying from cardiac, kidney and respiratory diseases.
The MPs say ministers must act to protect people – especially with an ageing population in the UK.

I have left the link in. It is not to a Report by the EAC but to a 2014 paper mentioned once in the report. The paper is Hajat S, et al. J Epidemiol Community Health DOI: 10.1136/jech-2013-202449 “Climate change effects on human health: projections of temperature-related mortality for the UK during the 2020s, 2050s and 2080s”.

Hajat et al 2014

Unusually for a scientific paper, Hajat et al 2014 contains very clear highlighted conclusions.

What is already known on this subject

▸ Many countries worldwide experience appreciable burdens of heat-related and cold-related deaths associated with current weather patterns.

▸ Climate change will quite likely alter such risks, but details as to how remain unclear.

What this study adds

▸ Without adaptation, heat-related deaths would be expected to rise by around 257% by the 2050s from a current annual baseline of around 2000 deaths, and cold-related mortality would decline by 2% from a baseline of around 41 000 deaths.

▸ The increase in future temperature-related deaths is partly driven by expected population growth and ageing.

▸ The health protection of the elderly will be vital in determining future temperature-related health burdens.

There are two things of note. First, the current situation is viewed as static. Second, four decades from now, heat-related deaths will dramatically increase without adaptation.
Harrabin's article contains no link to the Environmental Audit Committee's report page, to the full report, to the announcement, or even to the Committee's homepage.

The key graphic in the EAC report relating to heat deaths reproduces figure 3 in the Hajat paper.

The message being put out is that, given certain assumptions, deaths from heatwaves will increase dramatically due to climate change, but cold deaths will only decline very slightly by the 2050s.
The message from the graphs is that, if the central projections hold (note the arrows for error bars), cold deaths in the 2050s will still be more than five times the heat deaths. If the desire is to minimize all temperature-related deaths, then even in the 2050s the greater emphasis still ought to be on cold deaths.
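As a rough check of that ratio, using only the baseline figures and percentage changes quoted above from Hajat et al 2014 (central estimates, no adaptation; the figures are approximate):

```python
# Rough check of the 2050s cold-to-heat death ratio, using the headline
# figures quoted above from Hajat et al 2014 (central estimates, no adaptation).
heat_baseline = 2000    # approximate current annual heat-related deaths
cold_baseline = 41000   # approximate current annual cold-related deaths

heat_2050s = heat_baseline * (1 + 2.57)   # heat deaths rise ~257% by the 2050s
cold_2050s = cold_baseline * (1 - 0.02)   # cold deaths fall ~2% by the 2050s

print(f"Heat deaths, 2050s: {heat_2050s:,.0f}")              # ~7,140
print(f"Cold deaths, 2050s: {cold_2050s:,.0f}")              # ~40,180
print(f"Cold-to-heat ratio: {cold_2050s / heat_2050s:.1f}")  # ~5.6
```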
The companion figure 4 of Hajat et al 2014 should also be viewed.

Figure 4 shows that both heat and cold deaths are almost entirely an issue for the elderly, particularly the 85+ age group.
Hajat et al 2014 looks at regional data for England and Wales. There is something worthy of note in the text to Figure 1(A).

Region-specific and national-level relative risk (95% CI) of mortality due to hot weather. Daily mean temperature 93rd centiles: North East (16.6°C), North West (17.3°C), Yorks & Hum (17.5°C), East Midlands (17.8°C), West Midlands (17.7°C), East England (18.5°C), London (19.6°C), South East (18.3°C), South West (17.6°C), Wales (17.2°C).

The coldest region, the North East, has mean temperatures a full 3°C lower than London, the warmest region. Even with high climate sensitivities, the North East is unlikely to see temperature rises of 3°C in 50 years, making its mean temperature as high as London's today. Similarly, London will not become as hot as Milan. Yet there would be an outcry if London had more than three times the heat deaths of Newcastle, or if Milan had more than three times the heat deaths of London. So how does Hajat et al 2014 reach these extreme conclusions?
There are a number of assumptions made, both explicit and implicit.

Assumption 1 : Population Increase

(T)otal UK population is projected to increase from 60 million in mid-2000s to 89 million by mid-2080s

By the 2050s the projection implies roughly a 30% increase in population. Heat death rates per capita therefore rise by considerably less than the headline 257% figure: something on the order of 150% over five decades.
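A back-of-envelope check of that per-capita figure (my own arithmetic; the exact result depends on which population figure is assumed for the 2050s, so two illustrative cases are shown):

```python
# Implied rise in heat deaths per head of population, given the paper's
# projected 257% rise in total heat deaths. The 2050s population is not
# quoted directly, so two illustrative assumptions are used.
deaths_factor = 1 + 2.57   # total heat deaths multiply by roughly 3.57

assumptions = [
    ("~30% population growth by the 2050s", 1.30),
    ("full mid-2080s growth (89m / 60m)", 89 / 60),
]
for label, pop_factor in assumptions:
    per_capita_rise = deaths_factor / pop_factor - 1
    print(f"{label}: per-capita heat deaths up ~{per_capita_rise:.0%}")
# Roughly +175% on the first assumption and +140% on the second - in either
# case well below the headline 257% rise in total deaths.
```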


Assumption 2 : Lack of improvement in elderly vulnerability
Taking Hajat et al figure 4, the relative proportions of hot and cold deaths between age bands are assumed not to change, as my little table below shows.

The same percentage changes for all three age bands I find surprising. As the population ages, I would expect the 65–74 and 75–84 age bands to become relatively healthier, continuing the trends of the last few decades. That would make them less vulnerable to temperature extremes.

Assumption 3 : Climate Sensitivities

A subset of nine regional climate model variants corresponding to climate sensitivity in the range of 2.6–4.9°C was used.

This compares to the IPCC AR5 WG1 SPM, page 16:

Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence)

The mid-point of 3.75°C, compared with the IPCC's 3°C, does not make much difference over 50 years. The IPCC's RCP8.5 unmitigated emissions growth scenario has 3.7°C (4.5-0.8) of warming from 2010 to 2100. Pro-rata, the higher sensitivities give about 2.5°C of warming by the 2050s, still leaving mean temperatures in the North East just below those of London today.
The IPCC WG1 report was published a few months after the Hajat paper was accepted for publication. However, the ECS range of 1.5−4.5°C was unchanged from the 1979 Charney report, so there should be at least a footnote justifying the higher sensitivity. An alternative to these vague estimates derived from climate models is estimates derived from changes over the historical instrumental data record using energy budget models. The latest – Lewis and Curry 2018 – gives an estimate of 1.5°C. This finding from the latest research would more than halve the warming predicted to the 2050s under the Hajat paper's central ECS estimate.
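A crude sketch of that pro-rata scaling (my own back-of-envelope arithmetic, not output from the paper or any climate model: it simply scales the RCP8.5 century warming by the fraction of time elapsed and by the ratio of each sensitivity to the IPCC's 3°C mid-point):

```python
# Crude pro-rata scaling of RCP8.5 warming to the mid-2050s under different
# equilibrium climate sensitivities (ECS). Back-of-envelope only.
rcp85_warming_2010_2100 = 3.7   # deg C, central estimate (4.5 - 0.8)
fraction_elapsed = 45 / 90      # 2010 to the mid-2050s is roughly half the period
ipcc_midpoint_ecs = 3.0         # deg C

for label, ecs in [("IPCC AR5 mid-point", 3.0),
                   ("Hajat et al mid-point", 3.75),
                   ("Lewis and Curry 2018", 1.5)]:
    warming = rcp85_warming_2010_2100 * fraction_elapsed * (ecs / ipcc_midpoint_ecs)
    print(f"{label} (ECS {ecs} C): ~{warming:.1f} C of warming by the 2050s")
# Roughly 1.9 C, 2.3 C and 0.9 C respectively: the energy-budget estimate gives
# well under half the warming implied by the paper's central sensitivity.
```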

Assumption 4 : Short period of temperature data

The paper examined both regional temperature data and deaths for the period 1993–2006. This 14-year period had significant heatwaves in 1995, 2003 and 2006. Climatically this is a very short period, ending a full six years before the paper was submitted.
From the Met Office Hadley Centre Central England Temperature Data I have produced the following graphic of seasonal data for 1975-2012, with 1993-2006 shaded.

Mean summer temperatures (JJA) in the shaded period were generally warmer than in both the period before and the six years after. Winter (DJF) average temperatures for 2009 to 2011 were the coldest run of three winters in the whole period. Is this significant?
A couple of weeks ago the GWPF drew attention to a 2012 Guardian article, "The shape of British summers to come?"

It’s been a dull, damp few months and some scientists think we need to get used to it. Melting ice in Greenland could be bringing permanent changes to our climate
The news could be disconcerting for fans of the British summer. Because when it comes to global warming, we can forget the jolly predictions of Jeremy Clarkson and his ilk of a Mediterranean climate in which we lounge among the olive groves of Yorkshire sipping a fine Scottish champagne. The truth is likely to be much duller, and much nastier – and we have already had a taste of it. “We will see lots more floods, droughts, such as we’ve had this year in the UK,” says Peter Stott, leader of the climate change monitoring and attribution team at the Met Office. “Climate change is not a nice slow progression where the global climate warms by a few degrees. It means a much greater variability, far more extremes of weather.”

Six years after the end of the data period, but five months before the paper was submitted on 31/01/2013 and nine months before the revised draft was submitted, a completely new projection appeared, saying the opposite of ever more extreme heatwaves: duller, damper summers.
The inclusion of the more recent available temperature data would likely have materially impacted the modelled hot and cold death projections for many decades into the future.

Assumption 5 : Lack of Adaptation
The heat and cold death projections are "without adaptation". This assumption means that over the decades people do not learn from experience, buy air conditioners, drink water or look out for the increasingly vulnerable. People basically ignore the rise in temperatures, so that by the 2050s they treat a heatwave of 35°C exactly as they would treat one of 30°C today. To put this into context, it is worth looking at another paper used in the EAC Report.
Mortality in southern England during the 2003 heat wave by place of death – Kovats et al – Health Statistics Quarterly Spring 2006
The only table is reproduced below.

Over half the total deaths were in General Hospitals. What does this "lack of adaptation" assumption imply about the care given by health professionals to vulnerable people in their care? Surely, seeing rising death tolls, they would take action? Or do they need a political committee in Westminster, looking at data well after the event, to point out what is happening under their very noses? Even when the data has been collated and analysed in such publications as the Government-run Health Statistics Quarterly? The assumption of no adaptation should at least have been set alongside an assumption of "adaptation only after the event and a full report", with new extremes of temperature coming as a complete surprise. However, even that might still be unrealistic considering "cold deaths" are a current problem.

Assumption 6 : Complete failure of Policy
The assumption of high climate sensitivities, resulting in large actual rises in global average temperatures by the 2050s and 2080s, implies another assumption with political implications. The projection of 7,000 heat-related deaths assumes the complete failure of the Paris Agreement to control greenhouse gas emissions, let alone keep warming within any arbitrary 1.5°C or 2°C limit. The Hajat paper may not state this assumption, but by assuming increasing temperatures from rising greenhouse gas levels, it implies that no effective global climate mitigation policies have been implemented. This is a fair assumption. The UNEP Emissions Gap Report 2017 (pdf), published in October last year, is the latest attempt to estimate the scale of the policy issue. The key is the diagram reproduced below.

The aggregate impact of climate mitigation policy proposals (as interpreted by the promoters of such policies) is much closer to the non-policy baseline than to the 1.5°C or 2°C emissions pathways. That means other countries will have failed to follow Britain's lead in reducing their emissions by 80% by 2050. In its headline "Heat-related deaths set to treble by 2050 unless Govt acts", the Environmental Audit Committee are implicitly accepting that the Paris Agreement will be a complete flop, and that the considerable costs and hardships imposed on the British people by the Climate Change Act 2008 will have been for nothing.

Concluding comments

Projections about the consequences of rising temperatures require restrictive assumptions to achieve a result. In academic papers, some of these assumptions are explicitly stated, others are not. The assumptions are required to limit the "what-if" scenarios that are played out. The expected utility of modelled projections depends on whether the restrictive assumptions bear any relation to actual reality and empirically-verified theory. The projection of over 7,000 heat deaths in the 2050s is based upon:

(1) Population growth of 30% by the 2050s

(2) An ageing population not getting healthier at any particular age

(3) Climate sensitivities higher than the consensus, and much higher than the latest data-based research findings

(4) A short period of temperature data with trends not found in the next few years of available data

(5) Complete lack of adaptation over decades – an implied insult to health professionals and carers

(6) Failure of climate mitigation policies to control the growth in temperatures.

Assumptions (2) to (5) are unrealistic, and making any of them more realistic would significantly reduce the projected number of heat deaths in the 2050s. The assumption of a lack of adaptation is an implied insult to the many health professionals who monitor and adapt to changing conditions. Assuming a lack of climate mitigation policies implies that the £319bn Britain is projected to spend on combating climate change between 2014 and 2030 will be a waste of money. Based on available data, this assumption is realistic.

Kevin Marshall

How strong is the Consensus Evidence for human-caused global warming?

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Richard Feynman – 1964 Lecture on the Scientific Method

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

The Debunking Handbook 2011 – John Cook and Stephan Lewandowsky

My previous post looked at the attacks on David Rose for daring to suggest that the rapid fall in global land temperatures following the El Niño event was strong evidence that the record highs in global temperatures were not due to human greenhouse gas emissions. The technique used was to look at long-term linear trends. The main problems with this argument were:
(a) according to AGW theory warming rates from CO2 alone should be accelerating and at a higher rate than the estimated linear warming rates from HADCRUT4.
(b) HADCRUT4 shows warming stopped from 2002 to 2014, yet in theory the warming from CO2 should have accelerated.

Now there are at least two ways to view my arguments. The first is to take Feynman's approach. The climatologists and associated academics attacking journalist David Rose chose to do so from the perspective of a very blurred specification of AGW theory: that human emissions will cause greenhouse gas levels to rise, which will cause global average temperatures to rise. Global average temperatures have clearly risen in all long-term (>40 year) data sets, so the theory is confirmed. On a rising trend, with large variations due to natural variability, any new records will be primarily "human-caused". But making the theory and data slightly less vague reveals an opposite conclusion. Around the turn of the century the annual percentage increase in CO2 levels went from 0.4% to 0.5% a year (figure 1), which should have led to an acceleration in the rate of warming. In reality, warming stalled.
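A brief aside on why the acceleration should follow, using the standard logarithmic forcing approximation for CO2 (my addition; neither the formula nor the symbols below appear in the original post). If the concentration C grows at a fractional rate r per year, then

\[
\Delta F = 5.35 \ln\frac{C}{C_0}
\quad\Rightarrow\quad
\frac{dF}{dt} = 5.35\,\frac{1}{C}\frac{dC}{dt} \approx 5.35\,r ,
\]

so a rise in the growth rate from 0.4% to 0.5% a year implies roughly a 25% faster increase in forcing and, other things being equal, an accelerating rather than a stalling warming trend.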

The reaction was to come up with a load of ad hoc excuses. The Hockey Schtick blog had reached 66 separate excuses for the "pause" by November 2014, ranging from the peer-reviewed literature to a comment in the UK Parliament. This could be because climate is highly complex, with many variables; the contribution of each can only be guessed at, let alone its magnitude and its interrelationships with all the other factors. So how do you tell which statements are valid information and which are misinformation? I agree with Cook and Lewandowsky that misinformation is pernicious, and difficult to get rid of once it becomes entrenched. So how does one distinguish the good information from the bad, misleading or even pernicious?

The Lewandowsky / Cook answer is to follow the consensus of opinion. But what is the consensus of opinion? In climate, one variation is to follow a small subset of academics in the area who answer in the affirmative to the following two questions:

1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?

2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

The problem is that the first question is just reading a graph, and the second is a belief statement with no precision. Anthropogenic global warming has been a hot topic for over 25 years now, yet these two very vague, empirically-based questions, forming the foundations of the subject, ought by now to be capable of more precise formulation. On the second, that would mean having pretty clear and unambiguous estimates of the percentage of the warming, so far, that is human-caused. On that, the consensus of leading experts is unable to say whether it is 50% or 200% of the warming so far. (There are meant to be time lags and factors like aerosols that might suppress the warming.) This is from the 2013 IPCC AR5 WG1 SPM, section D3:

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

The IPCC, encapsulating state-of-the-art knowledge, cannot provide firm evidence in the form of a percentage, or even a fairly broad range, even with over 60 years of data to work on. It is even worse than it appears. The "extremely likely" phrase is a Bayesian probability statement. Ron Clutz's simple definition from earlier this year was:

Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.
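In symbols, that dumbed-down description is just Bayes' theorem (a standard formula, added here for reference rather than taken from Clutz's post):

\[
P(\theta \mid E) = \frac{P(E \mid \theta)\,P(\theta)}{P(E)} ,
\]

where P(θ) is the initial belief (the prior), E is the new evidence, and P(θ | E) is the new and improved belief (the posterior). Applied repeatedly to climate sensitivity, each new tranche of evidence should narrow the credible range for θ.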

For the IPCC to claim, at the fifth attempt, that their statement is extremely likely, they should be able to show some sort of progress in updating their beliefs in the light of new evidence. That would mean narrowing the estimate of the magnitude of impact of a doubling of CO2 on global average temperatures. As Clive Best documented in a cliscep comment in October, the IPCC reports from 1990 to 2013 failed to change the estimated range of 1.5°C to 4.5°C. Looking up Climate Sensitivity in Wikipedia, we get the origin of the range estimate.

A committee on anthropogenic global warming convened in 1979 by the National Academy of Sciences and chaired by Jule Charney estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available; one, due to Syukuro Manabe, exhibited a climate sensitivity of 2 °C, the other, due to James E. Hansen, exhibited a climate sensitivity of 4 °C. “According to Manabe, Charney chose 0.5 °C as a not-unreasonable margin of error, subtracted it from Manabe’s number, and added it to Hansen’s. Thus was born the 1.5 °C-to-4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since…

It is revealing that the quote is under the subheading Consensus Estimates. The climate community have collectively failed to update the original beliefs, which were based on a very rough estimate. The emphasis on referring to consensus beliefs about the world, rather than looking outward for evidence in the real world, is, I would suggest, the primary reason for this failure. Yet such community-based beliefs completely undermine the integrity of the Bayesian estimates, making their use in statements about climate clear misinformation in Cook and Lewandowsky's use of the term. What is more, those in the climate community who look primarily to these consensus beliefs rather than to the data of the real world will endeavour to dismiss the evidence, make up ad hoc excuses, or smear those who try to disagree. A caricature of these perspectives with respect to global average temperature anomalies is available in the form of a flickering widget at John Cook's skepticalscience website. This purports to show the difference between "realist" consensus and "contrarian" non-consensus views. Figure 2 is a screenshot of the consensus view, interpreting warming as a linear trend. Figure 3 is a screenshot of the non-consensus or contrarian view, which is supposed to interpret warming as a series of short, disconnected periods of no warming, with each period just happening to be at a higher level than the previous one. There are a number of things that this indicates.

(a) The "realist" view is of a linear trend throughout any data series. Yet the period from around 1940 to 1975 shows no warming, or slight cooling, depending on the data set. Therefore any linear trend line starting earlier than 1970 to 1975 and ending in 2015 will show a lower rate of warming. This would be consistent with the rate of CO2 increase over time, as shown in figure 1. But shorten the period, again ending in 2015, and once the period becomes less than about 30 years the warming trend also decreases. This contradicts the theory, unless ad hoc excuses are used, as shown in my previous post using the HADCRUT4 data set, and as the sketch after this list illustrates.

(b) Those who agree with the consensus are called "Realists", despite looking inwards towards common beliefs. Those who disagree are labelled "Contrarians". This is not inaccurate when there is a dogmatic consensus. But it is utterly false to lump together all those who disagree as holding the same views, especially when no examples are provided of anyone who actually holds such views.

(c) The linear trend appears a more plausible fit than the series of "contrarian" lines. By implication, those who disagree with the consensus are viewed as having a distinctly more blinkered and distorted perspective than those who follow it. Yet even using the GISTEMP data set (which gives the greatest support to the consensus view), there is a clear break in the linear trend. The less partisan HADCRUT4 data shows an even greater break.
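Picking up point (a): a minimal sketch of how the warming rate depends on the chosen start year. It assumes the user supplies an annual global anomaly series (HADCRUT4 or GISTEMP, not reproduced here); no results are asserted.

```python
import numpy as np

def warming_rate(years, anomalies):
    """Least-squares linear trend, in degrees C per decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return slope * 10.0

def trends_by_start_year(years, anomalies, start_years, end_year=2015):
    """Warming rate to end_year for each chosen start year."""
    years = np.asarray(years)
    anomalies = np.asarray(anomalies)
    rates = {}
    for start in start_years:
        mask = (years >= start) & (years <= end_year)
        rates[start] = warming_rate(years[mask], anomalies[mask])
    return rates

# Example usage, with real data loaded into `years` and `anoms`:
# print(trends_by_start_year(years, anoms, [1940, 1970, 1985, 2000]))
```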

Those who spot the obvious – that around the turn of the century warming stopped or slowed down, when in theory it should have accelerated – are given a clear choice. They can conform to the scientific consensus, denying the discrepancy between theory and data. Or they can act as scientists, denying the false and empirically empty scientific consensus, receiving the full weight of all the false and career-damaging opprobrium that accompanies it.

Figure 2: Skeptical Science widget, "realist" view (fig2-sks-realists)

Figure 3: Skeptical Science widget, "contrarian" view (fig3-sks-contras)

Kevin Marshall


Freeman Dyson on Climate Models

One of the leading physicists on the planet, Freeman Dyson, has given a video interview to the Vancouver Sun. Whilst the paper emphasizes Dyson’s statements about the impact of more CO2 greening the Earth, there is something more fundamental that can be gleaned.

Referring to a friend who constructed the first climate models, Dyson says at about 10.45

These climate models are excellent tools for understanding climate, but that they are very bad tools for predicting climate. The reason is simple – that they are models which have very few of the factors that may be important, so you can vary one thing at a time ……. to see what happens – particularly carbon dioxide. But there are a whole lot of things that they leave out. ….. The real world is far more complicated than the models.

I believe that Climate Science has lost sight of what its climate models actually are: attempts to understand the real world, but not the real world itself. It reminds me of something another physicist spoke about fifty years ago. Richard Feynman, a contemporary whom Dyson got to know well in the late 1940s and early 1950s, said of theories:

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Complex mathematical models suffer from this vagueness in abundance. When I see supporters of the climate consensus arguing that critics of the models are wrong by stating some simple model and using selective data, they are doing what lesser scientists and pseudo-scientists have been doing for decades. How do you confront this problem? Climate is hugely complex, so simple models will always fail on the predictive front. However, unlike Dyson I do not think that all is lost. The climate models have had a very bad track record because climatologists have not been able to relate their models to the real world. There are a number of ways they could do this. A good starting point is to learn from others.

Climatologists could draw upon insights from varied sources. With respect to the complexity of the subject matter, the lack of detailed, accurate data and the problems of prediction, climate science has much in common with economics. There are insights that can be drawn on prediction. One of the first empirical methodologists was the preeminent (or notorious) economist of the late twentieth century, Milton Friedman. Even without his monetarism and free-market economics, he would be known for his 1953 essay "The Methodology of Positive Economics". Whilst not agreeing with the entirety of the views expressed (there is no satisfactory methodology of economics), Friedman does lay emphasis on making simple, precise and bold predictions. It is the exact opposite of the Cook et al. survey, which claims a 97% consensus on climate, implying a massive and strong relationship between greenhouse gases and catastrophic global warming when in fact it relates to circumstantial evidence for a minimal belief in (or assumption of) the most trivial form of human-caused global warming.

In relation to climate science, Friedman would say that it does not matter about consistency with the basic physics, nor how elegantly the physics is stated. You could even believe that the warming comes from the hot air produced by the political classes. What matters is that you make bold predictions based on the models that, despite seeming simple and improbable to the non-expert, nevertheless turn out to be true. However, where bold predictions have been made that appeared improbable (such as worsening hurricanes after Katrina or the effective disappearance of Arctic Sea ice in late 2013), they have turned out to be false.

Climatologists could also draw upon another insight, held by Friedman but first clearly stated by John Neville Keynes (father of John Maynard Keynes): the need to clearly distinguish between the positive (what is) and the normative (what ought to be). But that distinction would alienate the funders and political hangers-on. It would also mean a clear split between the science and the policy.

Hattips to Hilary Ostrov, Bishop Hill, and Watts up with that.


Kevin Marshall

Feynman on Communist Science

I am currently engrossed in GENIUS: Richard Feynman and Modern Physics by James Gleick

In July 1962 Feynman went behind the Iron Curtain to attend a conference on gravitation in Warsaw. He was exasperated at the state of Soviet science. He wrote to his wife Gweneth:-

The “work” is always: (1) completely un-understandable, (2) vague and indefinite, (3) something correct that is obvious and self-evident, worked out by long and difficult analysis, and presented as an important discovery, or (4) a claim based on stupidity of the author that some obvious and correct fact, accepted and checked for years is, in fact, false (these are the worst: no argument will convince the idiot), (5) an attempt to do something, probably impossible, but certainly of no utility, which, it is finally revealed at the end, fails or (6) just plain wrong. There is a great deal of “activity in the field” these days, but this “activity” is mainly in showing that the previous “activity” of somebody else resulted in an error or in nothing useful or in something promising. (Page 353)

The failings of Government-backed science are nothing new.

The Logic and Boundaries of Scepticism

In response to a couple of my recent postings, William Connolley has made what I consider to be some pretty absurd statements. The lack of logic and the imprecision of language he uses have elements in common with the more mainstream believers in “climate science”. I consider some of these below, along with other statements.

__________________________________________________________

Consider the statement

You’ve failed to realise that using the label “skeptic” doesn’t actually make you a skeptic.

Equally, it does not mean that a person is wrong in using the label. Given the word has multiple broad definitions, demonstrating that another person is not genuinely a sceptic1 is extremely difficult.

___________________________________________________________

I am sceptical of the statement "the majority of twentieth century warming was caused by the increase in GHGs". In a respected dictionary I find two definitions that both apply to my scepticism:

  1. a person who questions the validity or authenticity of something purporting to be factual.
  2. a person who maintains a doubting attitude, as toward values, plans, statements, or the character of others.

If someone states a different definition of "sceptic" (which they may have made up), it does not mean I am not sceptical. It just means that they are playing with words.

If someone says the statement is proven by scientific evidence, they are wrong. Scientific statements are never proven; they have merely not yet been falsified by the evidence.

If someone says that there is overwhelming scientific evidence in support of the above statement, then they are wrong. There is insufficient data to demonstrate it beyond reasonable doubt at the moment. There may never be the evidence, as much of the stored heat energy of the climate system is in the oceans. Most of the twentieth century data necessary to establish the hypothesis is missing.

If someone says that I am not being sceptical of the statement, but instead denying the scientific consensus, they would be wrong. Firstly, the IPCC consensus does not exclude the possibility that only a minority of the warming was caused by the increase in greenhouse gases. Check out the 2013 AR5 WG1 SPM to verify this. Secondly, even if the IPCC did make the stronger claim, given that it has in the past made knowledge claims that it no longer holds to (e.g. radiative forcing components and the hockey stick), I am justified in being sceptical of its ability to get it right this time.

Further, if someone says I am not sceptical of the statement due to my denying the scientific consensus, they are using an evaluation criterion that I reject. They are free to believe it is a valid criterion, but I believe it is equivalent to "hearsay" evidence in law.

My rejection of the claim that the statement "twentieth century warming was human caused" is essentially true does not make me unsceptical of the weaker first statement. Nor is it sufficient to claim that I reject the IPCC consensus, as the IPCC makes even weaker statements.

___________________________________________________________

If someone were to demonstrate beyond reasonable doubt that a statement were true, then I would cease being sceptical. Personally I would accept the scientific evidence at a much lower level than that in support of the statements “smoking causes lung cancer”2 or “HIV causes AIDS”3.

___________________________________________________________

On reflection the last statement is not quite correct when applied to climatology. In the past I would have accepted a scientific statement based on expert opinion, or reasonable scientific evidence. But on many levels the climate community have breached the trust which any reasonable member of the public might bestow on an expert. They have failed to draw upon the accumulated wisdom of other areas, such as philosophy of science, decision theory, diplomacy or public choice economics. They have rejected things I value, such as comparing and contrasting different viewpoints, recognizing one’s own bias and listening to others. They have embraced principles I dislike, such as marginalizing opponents, and censoring of opposing opinions. But most of all many will never countenance the possibility of their own fallibility.

___________________________________________________________

If someone does not reject out of hand the statements of Murray Salby (who rejects the notion that the rise in CO2 levels is human-caused), it does not automatically mean they accept wholeheartedly what he says. Neither does posting articles on their blog mean they believe in what Salby says. There are a number of alternative reasons. For instance, they could feel that his sacking was not justified. Or the website owner could be a pluralist in science, who believes that new ideas should not be rejected out of hand. Or they could believe in academic freedom. Or they could be trying to act as a forum for different ideas. Using that or similar arguments instead shows an inability to consider other possibilities, or to countenance that those you oppose may have valid alternative positions. It is a normal human failing to deny the humanity of others, and one that I believe we should strive to counter. Further, those with high intelligence, coupled with dogmatic beliefs, are often those most guilty of casting those with opposing beliefs as being incapable of understanding their viewpoint.

___________________________________________________________

Claims that doctorates in climatology (or related subjects) confer special skills or abilities that non-scientists do not possess are just bluster. The subject has no established track record of understanding the climate system, but has plenty of excuses for failure. It ignores many distinctions learnt in other empirically-based subjects. But most of all, the subject demands belief in a particular scientific hypothesis. Any criticism of that hypothesis, or contradictory evidence, undermines its adherents' core beliefs. Thus being too close to the subject may be a positive disability in engagement. To counter this, more traditional sciences have promoted belief in the scientific method rather than belief in a particular scientific hypothesis. Those areas with strong ideological beliefs, such as economics and politics, have in free societies recognized the values of pluralism.

Kevin Marshall

  1. In Britain, “skeptic” is spelt “sceptic”. So I use that spelling, except when quoting others.
  2. I discussed the evidence from Cancer Research UK here following an article at “The Conversation“.
  3. AVERT, an HIV and AIDS charity based in the UK, gives a long and thorough article on the case for "HIV causes AIDS". In terms of communication of their case, there is a lot that the climate community could learn from it.

IPCC’s 1990 Temperature Projections – David Evans against Mike Buckley

The following comments by Mike Buckley (referenced here) are more revealing about the state of climate science than any errors on Evans's part.


  1. Surface Temperatures v lower tropospheric temperatures.

    As a beancounter (accountant) I like to reconcile figures, that is, to account for the discrepancies. Jo Nova, Anthony Watts and others have found numerous reasons for the discrepancies. The surface temperature records have many "adjustments" that bring reality into line with the models. Whatever excuses you can conjure up, as an accountant I would say that they fail to offer a "true and fair view".

  2. Trend lines should not start at the origin.

    So you disagree with standard forecasting practice? That is, you start with the current position.

  3. Trend lines should be curved.

    Agreed. This is for simplicity. See next point.

  4. Trend lines should be further apart.

    Are you saying that the climate models have a predictive band wider than 0.75°C over 25 years? If they were straight lines, over a century they could not get within 3 degrees. If Dr Evans had not simplified, the range would have been much greater.

There is a way of more precisely comparing the models with the actuals. The critical variable is the CO2 level. Therefore we should re-run the models from 1990 with actual CO2 data. By then explaining the variances, we can achieve a better understanding and adjust the models for the future. But there is plenty of evidence that this needs to be done by people who are independent. It will not happen, as the actual rise in CO2 was similar to the highest projections of the time.
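A minimal sketch of the beancounter-style variance analysis I have in mind, assuming one had the warming projected in 1990, a re-run of the same model with actual CO2 concentrations, and the actual warming (the function and the example numbers below are hypothetical, purely to show the mechanics):

```python
def variance_analysis(projected, rerun_with_actual_co2, actual):
    """Split the total projection error into the part due to the CO2 assumption
    and the residual part attributable to the model itself."""
    return {
        "total variance": actual - projected,
        "CO2 assumption variance": rerun_with_actual_co2 - projected,
        "model variance": actual - rerun_with_actual_co2,
    }

# Illustrative made-up figures (degrees C of warming since 1990):
# projection 0.75, model re-run with actual CO2 0.80, actual 0.45.
print(variance_analysis(0.75, 0.80, 0.45))
```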

The philosopher of science Karl Popper is remembered for the falsification principle. A less stringent criterion is that progressive science confronts the anomalies and gets predictions ever closer to the data. Pseudo-science closes ranks, makes up excuses, and "adjusts" perceptions of reality to fit the theory. Progressive science is highly competitive and open, whilst pseudo-science becomes ever more dogmatic, intolerant and insular.

AGW – The Limits of the Science

Just posted to Wattsupwiththat.

To say that we cannot make any predictions from models is inaccurate. However, a combination of the scarcity and inaccuracy of data and the highly complex nature of climate systems severely limits what can be extrapolated. We are restricted to the most basic of "pattern predictions". With respect to future temperature changes, this is most probably restricted to the range of longer-term (30-plus year) trends. Prof. Bob Carter's analysis is probably as far as we can go on the available data: a uniform, increasing average temperature trend over the last 150 years, with 60-year cycles providing deviations around this trend. This trend is unexceptional when viewed against temperature data from ice cores going back hundreds of thousands of years.
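A minimal sketch of what fitting that pattern would look like: a linear trend plus a 60-year cycle, fitted to an annual anomaly series supplied by the user. This is only the functional form described above, not Bob Carter's actual method.

```python
import numpy as np
from scipy.optimize import curve_fit

def trend_plus_cycle(year, base, rate, amplitude, phase):
    """Linear trend plus a fixed 60-year cycle (time measured from 1900)."""
    t = year - 1900.0
    return base + rate * t + amplitude * np.sin(2 * np.pi * (t - phase) / 60.0)

def fit_pattern(years, anomalies):
    """Fit the trend-plus-cycle pattern; returns (base, rate, amplitude, phase)."""
    initial_guess = [0.0, 0.005, 0.1, 0.0]   # deg C, deg C/yr, deg C, years
    params, _cov = curve_fit(trend_plus_cycle, np.asarray(years, dtype=float),
                             np.asarray(anomalies, dtype=float), p0=initial_guess)
    return params

# Example usage, with a ~150-year annual series in `years` and `anoms`:
# base, rate, amplitude, phase = fit_pattern(years, anoms)
```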

The attempt to cast every unusual weather event in terms of anthropogenic warming, and to select only the data that fits the theories, does not only risk inappropriate policies. It may also lead us to miss the signals of potential trends where the signal is weak, or where detection comes from trends or patterns that do not fit the theory. For example, my house, along with hundreds of others in the area, has been without water for over twelve hours now due to a burst water main caused by the severe cold. A contributing factor to the delay in repair was the lack of resources available. Too much reliance on speculative forecasts of increasingly mild winters, with snow becoming a rare event, has virtually eliminated contingency planning for extreme cold. Yet natural factors (e.g. La Niña, lack of sunspots) would have suggested otherwise.

The AGW science is not only costing us more for fuel. It is also putting us at greater risk of the consequences of extreme weather.

For Robert Carter’s views, see a video at http://video.google.com/videoplay?docid=-1326937617167558947#

Royal Society lacks rigor in 20% cuts hypothesis

The New Scientist reports that the Royal Society believes that "20 per cent cuts to British science means 'game over'". (Hattip BishopHill)

In the article, some of the scientists point to the need for innovation to promote the high-tech industries on which our recovery depends. I quite agree. However, I would profoundly disagree that government-funded research science is the best way to achieve this. Firstly, government-funded research is notoriously bad at producing job-creating outputs. In fact, the public sector tends to specialise in pure research, with only distant business opportunities. Secondly, government-funded research tends to be long-term. Most politicians would agree that we currently need the new jobs in the next few months, not a decade or more down the line.

As an aside, the idea that a 20% cut "would cause irreversible destruction" is a hypothesis that should be expounded in a more rigorous and scientific manner, with empirical evidence to back it up. I believe that it is analogous to the notion of tipping points in climate science, so the Royal Society would do well to exchange notes with the folks at the Climate Research Unit at UEA. In trying to model their separate issues, they will find that the positing of such tipping points relies on disregarding the real-world background "noise". Such "noise" renders the tipping points both unpredictable and highly unlikely.

My counter-argument is that, historically, Britain has been very good at the creative elements of pure science and invention. We are not so good at turning that into the reliable world-beating products that create the jobs. We are the country of Newton, Marconi, Whittle and Turing. We are not the country of Apple, Toyota, Nokia, Siemens or BMW.

Considering Uncertainty in Climate Science

Sir John Beddington provides the introduction to a summary of "The Science of Climate Change" on the UK Business Department website. He states:

            “The fact that uncertainty exists in climate science, as it does in other fields, does not negate the value of the evidence – and it is important to recognise that uncertainty may go in both (or a number of) directions.”

This may be true in a new field, but there is evidence that where the consensus is concerned, when assumptions have to be made, or choices made between different scientific conclusions, there has been a very strong bias towards the more alarmist conclusions. For instance,

  1. The emphasis on positive feedbacks;
  2. The over-statement of climate sensitivities;
  3. The promotion of the hockey stick as secondary verification of recent warming being largely due to anthropogenic factors;
  4. Further, there has been a public relations failure to challenge unsound science, or wild predictions, or false confirmations;
  5. Neither have there been any consensus scientists standing up to emphasise that the model scenarios of future temperature changes are not forecasts.

Recognising uncertainty means that an audit of the total picture is required. Each part of the science needs to be graded according to its certainty. Most certain is that a massive increase in greenhouse gases will, ceteris paribus, cause a rise in temperature. At the other extreme are predictions that within a generation the Arctic Ocean will be ice-free in summer, or the Himalayan glaciers will have vanished, or the Maldives will disappear beneath the waves. The rhetoric needs to be replaced by establishing the case on a scientific basis. It is not sufficient to say that there is uncertainty and move on as if nothing had happened. The presence of uncertainty severely weakens the claim that the science is established and settled. We should now see the consequences for policy.

Hattip to BishopHill

The Division of Labour & Climate Science Part 1

Bishop Hill points to an excellent short video at TED by Matt Ridley, encapsulating the concepts of the division of labour and comparative advantage. One thing that Matt Ridley leaves out is the creative destructiveness of competition, which supplants the existing order. Specialisation leads to new products and processes; by implication, the established processes and products are overturned. (Joseph Schumpeter needs to be added to the list of Adam Smith and David Ricardo.)

It is not just in the sphere of production that these concepts apply. They also apply to empirical science, be it economics, medical research or climatology. With complex data and many facets to the subject, there is scope for a division of labour into:

–         Data collectors,

–         Data analysts & measurers,

–         Statisticians to validate the analysis,

–         Theoreticians to innovate or create new ideas.

–         Mathematicians, to provide tools for analysis.

–         Methodologists, to provide structures of meaning and assess the boundaries of science.

–         This is alongside the general sub-divisions of the subject, which may change over time.

–         Alongside greater specialists there is also scope for generalist assessors who get a total perspective of the corpus of knowledge, weighing up the status of competing ideas.

–         Academic competition (to gain status) leads to improvements, but can also lead to diversity in conclusions. It also tends to blunt the conclusions where data is ambiguous or fuzzy.

This makes things a bit messy. In economics there have ceased to be any dominant schools of thought or policy prescriptions. But in climatology we are lucky to have the IPCC, which divides the world into a small group of generalist experts (who agree their main conclusions among themselves) and the masses, who accept the wisdom handed down. A bit like the guild system that kept England in the Dark Ages.