The Policy Gap in Achieving the Emissions Goals

Millar et al. 2017 has severe problems with its numbers, as my previous post suggested. But there is a more fundamental problem in achieving the emissions goals. It is contained in the introductory paragraphs of an article that lead author Richard Millar posted at Carbon Brief.

The Paris Agreement set a long-term goal of limiting global warming to “well-below” 2C above pre-industrial levels and to pursue efforts to restrict it to 1.5C.

A key question for the upcoming rounds of the international climate negotiations, particularly when countries review their climate commitments next year, is exactly how fast would we have to cut emissions to reach these goals?

In a new paper, published in Nature Geoscience, we provide updated estimates of the remaining “carbon budget” for 1.5C. This is the total amount of CO2 emissions that we can still emit whilst limiting global average warming to 1.5C.

Our estimates suggest that we would have a remaining carbon budget equivalent to around 20 years at current emissions rates for a 2-in-3 chance of restricting end-of-century warming to below 1.5C.

This suggests that we have a little more breathing space than previously thought to achieve the 1.5C limit. However, although 1.5C is not yet a geophysical impossibility, it remains a very difficult policy challenge.

The problem is with the shifting referent of the collective “we”. The third paragraph shows the problem.

In a new paper, published in Nature Geoscience, we provide updated estimates of the remaining “carbon budget” for 1.5C. This is the total amount of CO2 emissions that we can still emit whilst limiting global average warming to 1.5C.

In the first sentence, the collective “we” refers to the ten authors of the paper: Richard J. Millar, Jan S. Fuglestvedt, Pierre Friedlingstein, Joeri Rogelj, Michael J. Grubb, H. Damon Matthews, Ragnhild B. Skeie, Piers M. Forster, David J. Frame & Myles R. Allen. In the second sentence, the collective “we” refers to the approximately 7,500 million people on the planet, who live in about 195 countries. Do the ten authors speak for all the people of Russia, India, Nigeria, Iran, Iraq, China, Taiwan, North and South Korea, the United States and Australia, for instance? What I would suggest is that they are speaking figuratively about what they believe the world ought to be doing.

Yet the political realities are that even though most countries have signed the Paris Agreement, it does not commit them to a particular emissions pathway, nor to eliminate their emissions by a particular date. It only commits them to produce further INDC submissions every five years, along with attending meetings and making the right noises. Their INDC submissions are not scrutinized, still less sent back for “improved ambition” if they are inadequate in contributing to the aggregate global plan.

Looking at the substance of the Paris Agreement, point 17 gives an indication of the policy gap.

17. Notes with concern that the estimated aggregate greenhouse gas emission levels in 2025 and 2030 resulting from the intended nationally determined contributions do not fall within least-cost 2 ˚C scenarios but rather lead to a projected level of 55 gigatonnes in 2030, and also notes that much greater emission reduction efforts will be required than those associated with the intended nationally determined contributions in order to hold the increase in the global average temperature to below 2 ˚C above pre-industrial levels by reducing emissions to 40 gigatonnes or to 1.5 ˚C above pre-industrial levels by reducing to a level to be identified in the special report referred to in paragraph 21 below;
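The arithmetic of point 17 is worth making explicit. A minimal sketch in Python, my own illustration using only the figures quoted above:

```python
# Emissions gap implied by point 17 of the Paris Agreement decision text.
# Both figures (GtCO2e per year in 2030) are quoted in the paragraph above.
indc_2030 = 55.0   # projected 2030 emissions under the INDCs
path_2c = 40.0     # 2030 level consistent with least-cost 2C scenarios

gap = indc_2030 - path_2c
print(f"Policy gap in 2030: {gap:.0f} GtCO2e/yr, "
      f"a cut of {gap / indc_2030:.0%} on the INDC projection")
# -> Policy gap in 2030: 15 GtCO2e/yr, a cut of 27% on the INDC projection
```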

But the actual scale of the gap is best seen from the centerpiece graphic of the UNFCCC Synthesis Report on the aggregate effect of INDCs, prepared in the run-up to COP21 Paris. Note that this website also has all the INDC submissions in three large PDF files.

I have updated the graphic with my estimates of the policy gap implied by the revised Millar et al. 2017 carbon budgets, shown by red arrows.

The extent of the arrows could be debated, but that will not alter the fact that Millar et al. 2017 assume that, by adjusting the figures and doing the thinking for the whole world, the emissions objectives will be achieved. The reality is that very few countries have committed to reducing their emissions by anything like an amount consistent with even a 2°C pathway. Further, that commitment runs only until 2030, not for the 70 years beyond that. There is no legally-binding commitment in the Paris Agreement for a country to reduce emissions to zero sometime before the end of the century. Further, a number of countries (including Nigeria, Togo, Saudi Arabia, Turkmenistan, Iraq and Syria) have not signed the Paris Agreement, and the United States has given notification of coming out of the Agreement. Barring huge amounts of funding or some technological miracle, most developing countries, with a majority of the world population, will go on increasing their emissions for decades. This includes most of the countries that were Non-Annex Developing Countries under the 1992 Rio Declaration. Collectively they accounted for just over 100% of the global GHG emissions growth between 1990 and 2012.

As some of these countries’ INDC submissions clearly state, most will not sacrifice economic growth and the expectations of their peoples because politicized academic activists in completely different cultures say that the world ought to cut emissions. They will attend climate conferences to be seen on a world stage, then sign meaningless agreements afterward that commit them to nothing.

As a consequence, if catastrophic anthropogenic global warming is true (like the fairies at the bottom of the garden) and the climate mitigation targets are achieved, the catastrophic climate change will be only slightly less catastrophic, and the countries with the most extreme climate mitigation will be a good deal poorer. The non-policy countries will be the ones better off. It is the classic free-rider problem, which results in an under-provision of the good or service in question. If AGW turns out somewhat milder, the non-policy countries will be no worse off.

This is what really irritates me. I live in Britain, where the Climate Change Act 2008 has probably the most ludicrous targets in the world. That Act was meant to lead the world on climate change. The then Environment Secretary David Miliband introduced the bill with this message in March 2007.

From the graphic above, COP21 Paris showed that most of the world is not following Britain’s lead. But the “climate scientists” are so stuck in their manipulated models that they forget that their models and the beliefs of their peers are not the realities of the wider world. The political realities mean that reduction of CO2 emissions is net harmful to the people of Britain, both now and for future generations of Britons. The activists are just as wilfully negligent in shutting down any independent review of policy as a pharmaceutical company that pushed one of its products onto consumers without an independent evaluation of both the benefits and the potential side effects.

Kevin Marshall

The Inferior Methods in Supran and Oreskes 2017

In the previous post I looked at one aspect of the article Assessing ExxonMobil’s Climate Change Communications (1977–2014) by Geoffrey Supran and Naomi Oreskes. I concluded that the basis for evaluation of ExxonMobil’s sponsored climate papers – “AGW is real, human-caused, serious, and solvable” – is a mantra held by people who fail to distinguish between empirical and verifiable statements, tautologies, opinions and public policy that requires some fanciful global political implementation. In this post I look at how the application of that mantra in analyzing journal articles can lead to grossly misleading interpretations.

Under Section 2, Method, in Table 2 the authors lay out their evaluation criteria in terms of how the wording supports (endorses) or doubts elements of the mantra. For whether AGW is real and human-caused there are quite complex criteria. But the criteria for whether it is “serious” and “solvable” are much more straightforward, and I have reproduced them below.

The acknowledgment or doubt of “AGW as serious” or “AGW as solvable” is judged solely in relation to the mantra. That is the only criterion used. Supran and Oreskes would claim that this does not matter: what they are looking at is the positions communicated in the papers relative to the positions expressed by ExxonMobil externally. But there are problems with this methodology in terms of the alternative perspectives that are missing.

First, the underlying quality and clarity of the results, and the relevance of each paper, are ignored. What matters to Supran and Oreskes is the language used.

Second, ExxonMobil’s papers are not the only research on whether “AGW is real, human-caused, serious, and solvable”. The authors could also have taken into account the much wider body of papers within the broad areas covered by the mantra.

Third, if the totality of the research – whether ExxonMobil’s or that of climate research as a whole – does not amount to a strong case that anthropogenic global warming is a serious global problem with a workable solution, why should ExxonMobil promote politicized delusions?

Put this into the context of ExxonMobil – one of the world’s most successful businesses over decades – by applying some of the likely questions it would use in assessing a major project or major strategic investment. For instance:

  • How good is the evidence that there is a serious problem on a global scale emerging from human GHG emissions?
  • How strong is the evidence that humans have caused the recent warming?
  • Given many years of research, what is the track record of improving the quality and refinement of the output in the climate area?
  • What quality controls and KPIs are in place to enable both internal and external auditors to validate the work?
  • Where projections are made, what checks on the robustness of those projections have been done?
  • Where economic projections are produced, have they been done by competent mainstream economists, what are the assumptions made, and what sensitivity analyses have been done on those assumptions?
  • Does the project potentially harm investors, employees, customers and other stakeholders in the business? Where are the risk assessments of such potential harms, along with the procedures for the reporting and investigation of non-compliances?
  • Does a proposed project risk contravening laws and internal procedures relating to bribery and corruption?
  • Once a project is started, is it possible to amend that project over time or even abandon it should it fail to deliver? What are the contractual clauses that enable project amendment or abandonment and the potential costs of doing so?

Conclusions and further thoughts

Supran and Oreskes evaluate the ExxonMobil articles for AGW and policy in terms of a belief mantra applied to a small subset of the literature on the subject. Each article is looked at independently of all other articles and indeed of all other available information. Further, any legitimate argument or evidence that undermines the mantra is counted as evidence of doubt. It all throws the onus onto ExxonMobil to disprove the allegations, while never requiring Supran and Oreskes to justify their mantra or to show that their method of analysis is valid.

There are some questions arising from this that I hope to pursue in later posts.

1. Is the method of analysis just a statistical means of exposing ExxonMobil’s supposed hypocrisy, or does it stem from a deeply flawed and ideological way of perceiving the world, one that tries to shut out the wider realities of the real world, basic logic and other competing (and possibly superior) perspectives?

2. Whatever spread of misinformation and general hypocrisy might be shown on the part of ExxonMobil from more objective and professional perspectives, is there not greater misinformation sown by the promoters of the “climate consensus”?

3. Can any part of the mantra “AGW is real, human-caused, serious, and solvable” be shown to be false in the real world, beyond reasonable doubt?

Kevin Marshall

 

Supran and Oreskes on ExxonMobil’s Communication of Climate Change

Over at Cliscep, Geoff Chambers gave a rather bitter review (with foul language) of a new paper, Assessing ExxonMobil’s Climate Change Communications (1977–2014) by Geoffrey Supran and Naomi Oreskes.
One point that I would like to explore is part of a quote Geoff uses:-

The issue at stake is whether the corporation misled consumers, shareholders and/or the general public by making public statements that cast doubt on climate science and its implications, and which were at odds with available scientific information and with what the company knew. We stress that the question is not whether ExxonMobil ‘suppressed climate change research,’ but rather how they communicated about it.

It is the communication of climate science by a very powerful oil company that the paper concentrates upon. The approach reveals a lot about the Climate Change movement as well. In particular, note this statement in the introduction:-

Research has shown that four key points of understanding about AGW—that it is real, human-caused, serious, and solvable—are important predictors of the public’s perceived issue seriousness, affective issue involvement, support for climate policies, and political activism [62–66].

The references are as follows:

[62] Krosnick J A, Holbrook A L, Lowe L and Visser P S 2006 The origins and consequences of democratic citizens’ policy agendas: a study of popular concern about global warming Clim. Change 77 7–43
[63] Ding D, Maibach E W, Zhao X, Roser-Renouf C and Leiserowitz A 2011 Support for climate policy and societal action are linked to perceptions about scientific agreement Nat. Clim. Change 1 462–6
[64] Roser-Renouf C, Maibach E W, Leiserowitz A and Zhao X 2014 The genesis of climate change activism: from key beliefs to political action Clim. Change 125 163–78
[65] Roser-Renouf C, Atkinson L, Maibach E and Leiserowitz A 2016 The consumer as climate activist Int. J. Commun. 10 4759–83
[66] van der Linden S L, Leiserowitz A A, Feinberg G D and Maibach E W 2015 The scientific consensus on climate change as a gateway belief: experimental evidence PLoS One 10 e0118489

For the purposes of Supran and Oreskes’ study, the understanding that people have does not require any substance at all beyond beliefs. Consider, for instance, the Jehovah’s Witnesses developing an “understanding” that Armageddon would occur in 1975. This certainly affected their activities in the lead-up to the momentous history-ending event. Non-believers or members of the Christian Church may have been a little worried, shrugged their shoulders, or thought the whole idea ridiculous. If studies similar to those on climate activism had been conducted on the prophecy of Armageddon 1975, similar results could have been found to those quoted for AGW beliefs in references 62-66. That is, the strength of belief in the cause, whether religious evangelism in the case of Jehovah’s Witnesses or ideological environmentalism in the case of AGW, is a predictor of activism in support of that cause. The studies cannot go further because of an issue with scholarly articles: claims made must be substantiated, something that cannot be done with respect to the prophecies of climate catastrophism, except in a highly nuanced form.

But the statement that AGW is “real, human-caused, serious, and solvable” – repeated five times in the article – indicates something about the activists’ understanding of complex issues.

“AGW is real” is not a proper scientific statement, as it is not quantified. Given that the impacts on surface temperatures can be muffled and delayed nearly indefinitely by natural factors, or swallowed by the oceans, the belief can be independent of any contrary evidence for decades to come.

“AGW is human-caused” is saying “Human-caused global warming is human-caused”. It is a tautology that tells us nothing about the real world.

“AGW is serious” is an opinion. It may be a very widely-held opinion, with many articles written with confirming evidence, and many concerned people attending massive conferences where it is discussed. But without clear evidence of emerging net adverse consequences, the opinion is largely unsubstantiated.

“AGW is solvable” could refer to whether it is theoretically solvable, given the technology and policies being implemented. But the statement also includes whether it is politically solvable – getting actual policies to reduce emissions fully implemented. If the “solution” is the reduction of global emissions to a level commensurate with 2C of warming (hence a partial solution), then COP21 in Paris shows that AGW is a long way from being solvable, with no actual solution in sight. Whereas the 2C limit requires global emissions to be lower in 2030 than in 2015, and falling rapidly, fully implemented policies would still see emissions higher in 2030 than in 2015 and still increasing.

The statement AGW is “real, human-caused, serious, and solvable” is, therefore, nothing more than a mantra held by people who fail to distinguish between empirical and verifiable statements, tautologies, opinions and public policy that requires some fanciful global political implementation. 

Kevin Marshall

Met Office Extreme Wet Winter Projections

I saw an article in the Telegraph

Met Office warns Britain is heading for ‘unprecedented’ winter rainfall, with records broken by up to 30pc 

Britain is heading for “unprecedented” winter rainfall after the Met Office’s new super computer predicted records will be broken by up to 30 per cent.

Widespread flooding has hit the UK in the past few years leading meteorologists to search for new ways to “quantify the risk of extreme rainfall within the current climate”.

In other words, the Telegraph is reporting that the Met Office is projecting that if the current record is, say, 100mm, new records of up to 130mm could be set.

The BBC is reporting something slightly different.

High risk of ‘unprecedented’ winter downpours – Met Office

There is an increased risk of “unprecedented” winter downpours such as those that caused extensive flooding in 2014, the UK Met Office says.

Their study suggests there’s now a one in three chance of monthly rainfall records being broken in England and Wales in winter.

The estimate reflects natural variability plus changes in the UK climate as a result of global warming.

The BBC has a nice graphic, of the most extreme winter month of recent years for rainfall.

The BBC goes on to say

Their analysis also showed a high risk of record-breaking rainfall in England and Wales in the coming decade.

“We found many unprecedented events in the model data and this comes out as a 7% risk of a monthly record extreme in a given winter in the next few years, that’s just over Southeast England,” Dr Vikki Thompson, the study’s lead author told BBC News.

“Looking at all the regions of England and Wales we found a 34% chance of an extreme event happening in at least one of those regions each year.”

Not only is there a greater risk, but the researchers were also able to estimate that these events could break existing records by up to 30%.

“That is an enormous number, to have a monthly value that’s 30% larger, it’s a bit like what we had in 2014, and as much again,” said Prof Adam Scaife from the Met Office.

The “up to 30%” figure is an outlier of the model results, not a central estimate.

But over what period is the record?

The Met Office website has an extended version of what the BBC reports, but strangely no figures. There is a little video by Dr Vikki Thompson to explain.

She does say that only recent data is used, but gives no definition of what constitutes recent. A clue lies not in the text, but in an explanatory graphic.

It is from 35 years of winters, which ties in with the BBC’s graphic starting in 1981. There are nine regions in England and Wales by the Met Office definition. The tenth political region, London, is included in the South East. There could be different regions for the modelling. As Ben Pile and Paul Homewood pointed out in the comments to the Cliscep article, elsewhere the Met Office splits England and Wales into six regions. What is amazing is that the Met Office article does not clarify the number of regions, still less show the current records in the thirty-five years of data. There is therefore no possibility of ever verifying the models.

Put this into context. Northern Ireland and Scotland are excluded, which seems a bit arbitrary. If rainfall were random, then the chance of this coming winter setting a new record in a given region is nearly 3% (1-in-35). For a record in any one of the nine regions, simple multiplication of 9 × 2.9% gives just under 26%; treating the regions as independent (which they are not) gives about 23% (the sketch after this passage shows the arithmetic). Either way, 34% is higher.

But consider the many alternative ways for the climate patterns to become more extreme and variable. After all, with global warming the climate could be thrown into chaos, so more extreme weather should be emerging as a foretaste of much worse to come. Given the many different aspects of weather, there could be hundreds of possible ways climate could get worse. With rainfall, it could be wetter or drier, in either summer or winter. That is four variables, of which the Met Office chose just one. Or the record could be over any 1, 2, 3… or 12-month period. Then again, climate change could mean more frequent and violent storms, such as that of 1987. Or it could mean more heatwaves. Heatwave records could be defined statistically in a number of different ways: say, five consecutive days in a month where the peak daily temperature is more than 5C above the long-term monthly average peak temperature.
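A minimal sketch of that arithmetic, assuming 35 years of records, a naive 1-in-35 chance per region per winter, and (falsely) independence between regions:

```python
# Chance of a new monthly winter rainfall record if rainfall were random.
# Assumes 35 years of data, so each winter has roughly a 1-in-35 chance
# of setting a record in any given region (the naive estimate used above).
p_region = 1 / 35          # ~2.9% for a single region
n_regions = 9              # Met Office regions of England and Wales

# Simple multiplication (an upper bound, ignoring overlap between regions):
upper_bound = n_regions * p_region            # ~25.7%
# Treating the regions as independent (which they are not):
p_any = 1 - (1 - p_region) ** n_regions       # ~23.0%

print(f"Single region: {p_region:.1%}")
print(f"At least one of nine regions (upper bound): {upper_bound:.1%}")
print(f"At least one of nine regions (independence): {p_any:.1%}")
```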
So why choose rainfall in winter? Maybe it is because in recent years there have been a number of unusually wet winters. It looks like the Met Office, for all the power of its mighty computers, has fallen for a common fallacy.

 

Texas sharpshooter fallacy is an informal fallacy which is committed when differences in data are ignored, but similarities are stressed. From this reasoning, a false conclusion is inferred. This fallacy is the philosophical/rhetorical application of the multiple comparisons problem (in statistics) and apophenia (in cognitive psychology). It is related to the clustering illusion, which refers to the tendency in human cognition to interpret patterns where none actually exist.
The name comes from a joke about a Texan who fires some gunshots at the side of a barn, then paints a target centered on the tightest cluster of hits and claims to be a sharpshooter.

A run of extremely wet winters might be due to random clustering, or it could be a genuine pattern arising from natural variation, or it could be a sign of human-caused climate change. An indication of random clustering would be to look at many other aspects of weather, to see if there is a recent trend of emerging climate chaos. Living in Britain, I suspect that the recent wet weather is just drawing the target around the tightest clusters. Even then, high winter rainfall in Britain is usually accompanied by slightly milder temperatures than average, while extreme winter cold usually comes on cloud-free days. So, if winter rainfall is genuinely getting worse, it seems that the whole global warming thing for Britain is predicted to become a bit of a damp squib.

Kevin Marshall

 

Warming Bias in Temperature Data due to Consensus Belief not Conspiracy

In a Cliscep article Science: One Damned Adjustment After Another? Geoff Chambers wrote:-

So is the theory of catastrophic climate change a conspiracy? According to the strict dictionary definition, it is, in that the people concerned clearly conferred together to do something wrong – namely introduce a consistent bias in the scientific research, and then cover it up.

This was in response to the latest David Rose article in the Mail on Sunday, about claims that the infamous Karl et al 2015 paper breached America’s National Oceanic and Atmospheric Administration’s (NOAA) own rules on scientific integrity.

I would counter this claim about conspiracy in respect of temperature records, even on the strict dictionary definition. Still less does it conform to a conspiracy theory in the sense of some group who, with a grasp of what they believe to be the real truth, act together to provide an alternative to that truth, or to divert attention and resources away from that understanding of the truth, like an internet troll. A clue as to why this is the case comes from one of the most notorious Climategate emails: Kevin Trenberth to Michael Mann on Mon, 12 Oct 2009, copied to most of the leading academics in the “team” (including Thomas R. Karl).

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate.

It is the first sentence that was commonly quoted, but it is the last part that is the most relevant for temperature anomalies. There are inevitably a number of homogenisation runs to get a single set of anomalies. For example, the Reykjavik temperature data was (a) adjusted by the Iceland Met Office by standard procedures to allow for known local biases, (b) adjusted for GHCNv2 (the “raw data”), (c) adjusted again in GHCNv3, and (d) homogenized by NASA to be included in Gistemp.

There are steps that I have missed. Certainly Gistemp homogenizes the data quite frequently as new data arrive. As Paul Matthews notes, the adjustments are unstable. Although one data set might on average be pretty much the same as previous ones, quite large anomalies will be thrown up every time the algorithms are re-run on new data. What is more, due to the nature of the computer algorithms, there is no audit trail, so the adjustments are largely unexplainable with reference to the preceding data set, let alone the original thermometer readings. So how does one know whether the adjustments are reasonable or not, except through a belief in how the results ought to look? In the case of climatologists like Kevin Trenberth and Thomas R. Karl, variations that come out warmer than the previous run will be more readily accepted as correct than variations that come out cooler. That is, they will find reasons why a particular temperature data set now shows higher warming than before, but will reject as outliers results that show less warming than before. It is the same when choosing techniques, or adjusting for biases in the data. This is exacerbated when a number of different bodies with similar belief systems try to seek a consensus of results, as Zeke Hausfather alludes to in his article at CarbonBrief. Rather than being verified against the real world, temperature data is made to conform to the opinions of others with similar beliefs about the world.
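To illustrate, though not reproduce, the kind of instability described above, here is a toy sketch of my own: a crude one-breakpoint adjustment that is re-run when two new years of data arrive. The algorithm and the numbers are entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def crude_homogenise(series, break_at):
    """Toy adjustment: shift the pre-break segment so its mean matches the
    post-break mean. Real pairwise algorithms are far more sophisticated,
    but share the key feature that the adjustment applied to the OLDEST
    data depends on ALL the data, including the newest."""
    offset = series[break_at:].mean() - series[:break_at].mean()
    adjusted = series.copy()
    adjusted[:break_at] += offset
    return adjusted

# A flat 60-year station record with noise and an artificial step at year 30.
base = rng.normal(0.0, 0.3, 60)
base[30:] += 0.5

run1 = crude_homogenise(base, 30)

# Two years later, two new values arrive. Re-running the same algorithm
# changes the adjustment applied to every one of the earliest years.
extended = np.append(base, rng.normal(0.5, 0.3, 2))
run2 = crude_homogenise(extended, 30)

print(f"Adjustment to the earliest value, run 1: {run1[0] - base[0]:+.3f}")
print(f"Adjustment to the earliest value, run 2: {run2[0] - base[0]:+.3f}")
# The early data shift again, with no audit trail back to the thermometer.
```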

Kevin Marshall

My Amazon Review of Ladybird Book of Climate Change

The following is my Amazon review of the Ladybird Book of Climate Change.

The format goes back to the Ladybird Books of my childhood, with text on the left and a nice colour picture on the right. Whilst lacking in figures and references it provides an excellent summary of the current case of climate alarmism and the mitigation policies required to “save the world”. As such it is totally lopsided.
For instance, on page 35 is a drawing of three children holding a banner with “1.5 to stay alive”. The central estimate of the climate consensus since the Charney Report of 1979 is that a doubling of CO2 levels will lead to 3C of warming. That means a rise from 280 to 400ppm would give 1.54C of warming. With the impact of the rise in other greenhouse gas levels, 2C of warming should already have happened. Either it is somehow hidden, ready to jump out at us unawares, or the impact of emissions on climate has been exaggerated, so policy is not required.
The other major problem is with policy. The policy proposals are centered around what individuals in the UK can do: recycle more, eat less red meat and turn the heating down. There is no recognition that it is global GHG emissions that cause atmospheric GHG levels to rise. If the theory is correct, constraint of global warming means global emissions reductions, including from the 80%+ of the global population who live in countries exempt from any obligation to constrain emissions. Including all the poorest countries, these exempt countries accounted for all the emissions growth from 1990 to at least 2012.
If people genuinely want to learn about a controversial subject then they need to read different viewpoints. This is as true of climate change as history, economics or philosophy.

Ladybird Book on Climate Change

A couple of weeks ago there was a big splash about the forthcoming Ladybird Book for adults on Climate Change. (Daily Mail, Guardian, Sun, Telegraph etc.) Given that it was inspired by HRH The Prince of Wales, who wrote the foreword, it should sell well. Even better, I have just received a copy, in a format that harks back to the Ladybird Books I grew up with: on each double page, words on the left and a high-quality coloured picture filling the right-hand page. Unlike the previous adult Ladybird series, which was humorous, this is the first in a series that seeks to educate.

The final paragraph of the foreword states:-

I hope this modest attempt to alert a global public to the “wolf at the door” will make some small contribution towards requisite action; action that must be urgently scaled up, and scaled up now.

The question is whether there is enough here to convince the undecided. If this is founded on real science, then there should be a sufficient level of evidence to show:

(a) there is a huge emerging problem with climate;

(b) that the problem is human-caused;

(c) that there is a set of potential steps that can be taken to constrain this problem;

(d) that the cure is not worse than the disease;

(e) that sufficient numbers will take up the policy to meet the targets.

My approach is to look at whether there is sufficient evidence to persuade a jury. Is there evidence that would convict humanity of the collective sin of destroying the planet for future generations? And is there evidence to show that, through humanity collectively working for the common good, catastrophe can be averted and a better future can be bequeathed to those future generations? That presumes there is evidence of sufficient quality that an impartial judge would not throw it out as hearsay.

Evidence for an Emerging Problem with Climate.

Page 8, on melting ice and rising sea levels, starts with the reduced Arctic sea ice. It contains the only quantified estimate of climate change other than the temperature graph on page 6, claiming that at the end of the 2016 melt season sea ice levels were two-thirds of those at the end of the twentieth century.

Any jury would hear that there has only been satellite data of sea ice extent since 1979; that this was the end of a period known as the “sea ice years”; that the maximum winter ice extent in April was likely less in the eighteenth century than today; that ships’ log books suggest that general sea ice extent was roughly the same one hundred and fifty years ago as today; and that in the Antarctic the average sea ice extent increase has largely offset the Arctic decrease.

The rest, about sea levels, correctly states both that they have risen and that the reasons for the rise are a combination of warming seas and melting ice caps. It is also correct that flooding occurs in storm surges. But there is no quantification of the rise in sea levels (about 8-12 inches a century), nor any mention of the lack of evidence for the predicted acceleration.

Page 10, on heatwaves, droughts, floods and storms, states that they can cause disruption, economic damage and loss of life. There are also recent examples, and speculation about future trends. But there is no evidence of emerging trends, particularly of increasing loss of life. That is because the evidence suggests the harms of extreme weather are on the decrease. Indur Goklany has been a rich source of the counter-evidence over many years.

Page 12 begins

Threats to food and water supply, human health and national security, and the risk of humanitarian crises are all potentially increased by climate change.

The rest is just padding out this speculation.

Page 14 is on disappearing wildlife. One quote

The polar bear has come to symbolize the threats posed to wildlife by climate change….

You can probably find many images of starved dead polar bears to back this up. But the truth is that these creatures live by hunting, and as they get older they slow down, so they are no longer fast enough to catch seals, their main food source. Zoologist Susan Crockford has a blog detailing how polar bear numbers have increased in recent years; far from being threatened, the species is thriving.

The climate change problem is mostly human caused

The book details that emissions of greenhouse gases have gone up, and so have the atmospheric levels of greenhouse gases. The only quantification is for CO2, the major greenhouse gas (page 20). There is a simple diagram explaining how CO2 emissions impact atmospheric CO2 levels, before explaining the major sources of the net increase – fossil fuel emissions and the clearing of forests. There is no actual testing of the theory against the data. But page 20 begins

The scientific evidence shows that dominant cause of the rapid warming of the Earth’s climate over the last half century has been the activities of people…

The relevant quote, from UNIPCC AR5 WG1 SPM section D3, says something slightly different.

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

The extremely likely phrase is a Bayesian estimate. It is a belief that should be updated on the best available evidence. A lack of evidence, after much searching, suggests the original guess was wrong; true Bayesians would therefore downgrade their certainties if they cannot refine the estimates over time. But this was written in 2013. Since the Charney Report of 1979 and the previous four IPCC reports of 1990 to 2007, there has been no refinement in the estimate of how much warming will eventually result from a doubling of CO2.

But how does the evidence stack up? On page 6 there is a chart of global surface temperature anomalies. That increase in temperatures can be tested against the doubling effect of CO2. Since around the turn of the century the rate of rise in CO2 emissions and atmospheric CO2 levels has accelerated. But global warming stopped for over a decade until 2014, only to restart due to a natural phenomenon. Comparing the actual data to the theory fails to support the strong belief that GHG emissions are the dominant cause of recent warming.

Policy to contain the problem

Page 34 goes into the benefits of containing warming to 1.5C. Given that the central estimate from the climate community since 1979 has been that a doubling of CO2 will lead to an eventual rise in average temperature of 3C, a rise in CO2 levels from the pre-industrial 280ppm to the 400ppm reached in 2015 would give 1.544C of warming. With other greenhouse gases it should be nearer to 2C of warming. Either it is way too late (and the warming is lurking, like the Loch Ness monster, in the dark and murky depths) or the central estimate is exaggerated. So the picture of three young people holding a banner with “1.5 to stay alive” is either of the doomed, about whom we can do nothing, or of false alarmism.
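The arithmetic behind the 1.544C figure is the standard logarithmic relationship between CO2 concentration and warming. A minimal sketch, assuming the consensus central sensitivity of 3C per doubling and equilibrium warming:

```python
import math

def eventual_warming(c_now, c_pre=280.0, sensitivity=3.0):
    """Equilibrium warming from the standard logarithmic CO2 relationship:
    dT = S * log2(C / C0), with S = 3C per doubling."""
    return sensitivity * math.log2(c_now / c_pre)

print(f"280 -> 400 ppm: {eventual_warming(400):.3f} C")  # ~1.544 C
print(f"280 -> 560 ppm: {eventual_warming(560):.3f} C")  # 3.000 C (a doubling)
```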

Page 36 has a nice graphic adapted from the IPCC Synthesis Report of 2014, showing liquid dripping through an egg-timer. It shows the estimate that 2000 billion tonnes of CO2 have been emitted so far, and that 1000 billion tonnes more can be emitted before the 2C of warming is breached. This was from a presentation to summarize the IPCC AR5 Synthesis Report of 2014, slide 33 of 35.

The problem is that this was the data up to 2011, not five years later to 2016; it was for GHG emissions in billions of tonnes of CO2 equivalents; and the 40 billion tonnes of CO2 emissions should be around 52-55 billion tonnes of CO2e GHG emissions. See for instance the EU Commission’s EDGAR figures, estimating 54GtCO2e in 2012 and 51GtCO2e in 2010 (against the IPCC’s 49GtCO2e). So the revised figure is about 750GtCO2e of emissions before this catastrophic figure is breached. The Ladybird book does not have references, to keep things simple, but it should at least properly reflect the updated numbers. The IPCC stretched the numbers in 2014 in order to keep the show on the road, to such an extent that they fall apart on even a cursory examination. The worst part is at the very top of the egg-timer, coloured scarlet: “Coal, oil and gas reserves that cannot be used”. These are spread across the globe. Most notably, the biggest reserves are in China, the USA, Russia, Canada, Australia, the Middle East and Venezuela, with the rest of the world having a substantial share of the remainder.
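My reading of the arithmetic behind that “about 750GtCO2e” figure, as a rough sketch using the numbers quoted above:

```python
# Rough update of the IPCC egg-timer budget from 2011 to 2016, my own
# reading of the numbers quoted above.
budget_2011 = 1000.0   # GtCO2(e) remaining for 2C, per the 2014 graphic
annual_ghg = 52.0      # GtCO2e/yr of total GHG emissions (EDGAR-style), not 40
years_elapsed = 5      # 2011 to 2016

remaining = budget_2011 - years_elapsed * annual_ghg
print(f"Remaining budget circa 2016: ~{remaining:.0f} GtCO2e")
# -> ~740 GtCO2e, i.e. about 750
```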

The cure is worse than the disease

For the rest of the book, in suggesting European solutions like recycling, eating less red meat, turning the heating down to 17C and more organic farming, the authors write about making very marginal differences to emissions in a few countries with a small minority of global emissions. Most of those reserves will not be left in the ground, no matter how much the first in line to the Throne gets hot under the collar. Global emissions will keep on increasing from the non-policy countries, with over 80% of the global population, two-thirds of global emissions and nearly 100% of the world’s poorest people. Below is a breakdown of those countries.

These countries collectively produced 35000 MtCO2e in 2012, or 35 GtCO2e. That will increase well into the future, short of the invention of a safe nuclear reactor with the size, weight and cost of a washing machine. Now compare this to the global emissions pathways to stop the 1.5C or 2C of warming, prepared by the UNFCCC for the 2015 Paris talks.

 

The combined impact of all the vague policy proposals does not stop global emissions from rising. It is the non-policy developing countries that make the real difference between the policy proposals and the modelled warming pathways. If those countries do not keep using fossil fuels at increasing rates, they deprive billions of people of increasing living standards for themselves and their children. Yet this must happen very quickly for the mythical 2C of warming not to be breached. So in the UK we just keep on telling people not to waste so much food, to buy organic, to ride a bike and to put on a jumper.

There is no strong evidence that would convict humanity of the collective sin of destroying the planet for future generations. Nor is there evidence to show that a better future can be bequeathed to those future generations when the policies would destroy the economic future of the vast majority. The book neatly encapsulates how blinkered the climate alarmists are to both the real-world evidence and the wider moral policy perspectives.

Kevin Marshall

 

Friends of the Earth still perverting the evidence for fracking harms

Yesterday, the Advertising Standards Authority at long last managed to informally resolve the complaints about a misleading leaflet by Friends of the Earth Trust and Friends of the Earth Ltd. The delay is no fault of the ASA. Rather, FoE tried to defend the indefensible, drawing out the process much as they try to draw out planning inquiries. From the Guardian

“After many attempts by Friends of the Earth to delay this decision, the charity’s admission that all of the claims it made, that we complained about, were false should hopefully put a stop to it misleading the UK public on fracking,” said Francis Egan, chief executive of Cuadrilla. …..

According to the BBC

Friends of the Earth (FOE) must not repeat misleading claims it made in an anti-fracking leaflet, the advertising watchdog has said.

The fundraising flyer claimed fracking chemicals could pollute drinking water and cause cancer and implied the process increases rates of asthma.

The charity “agreed not to repeat the claims,” the Advertising Standards Authority (ASA) said.

All pretty clear. Yet as the BBC reports, the eco-worriers are not to be told that they are misleading the public.

Donna Hume, a campaigner for the environmental charity, said it would “continue to campaign against fracking” because it was “inherently risky for the environment”.

……..

Ms Hume said Cuadrilla “started this process to distract from the real issues about fracking” and was trying to “shut down opposition”.

“It hasn’t worked though. What’s happened instead is that the ASA has dropped the case without ruling,” she said.

“We continue to campaign against fracking, alongside local people, because the process of exploring for and extracting shale gas is inherently risky for the environment, this is why fracking is banned or put on hold in so many countries.”

Donna Hume was just acting as a mouthpiece for FoE, who issued a misleading statement about the case. They stated

Last year fracking company Cuadrilla complained to the Advertising Standards Authority about one of our anti fracking leaflets.

But after more than a year, the complaint has been closed without a ruling.

The scientific evidence that fracking can cause harm to people and the environment keeps stacking up. Friends of the Earth is not alone in pointing out the risks of fracking, to the climate, to public health, of water contamination, and to the natural environment.

ASA Chief Executive Guy Parker took the unusual step of setting the record straight.

But amidst the reports, the public comments by the parties involved and the social media chatter, there’s a risk that the facts become obscured.

So let me be clear. We told Friends of the Earth that based on the evidence we’d seen, claims it made in its anti-fracking leaflet or claims with the same meaning cannot be repeated, and asked for an assurance that they wouldn’t be. Friends of the Earth gave us an assurance to that effect. Unless the evidence changes, that means it mustn’t repeat in ads claims about the effects of fracking on the health of local populations, drinking water or property prices.

Friends of the Earth has said we “dropped the case”. That’s not an accurate reflection of what’s happened. We thoroughly investigated the complaints we received and closed the case on receipt of the above assurance. Because of that, we decided against publishing a formal ruling, but plainly that’s not the same thing as “dropping the case”. Crucially, the claims under the microscope mustn’t reappear in ads, unless the evidence changes. Dropped cases don’t have that outcome.

The ASA, which tries to be impartial and objective, had to make this unusual statement to combat FoE’s deliberate misinformation. So what is the scientific evidence that FoE claims? This is from the false statement that the ASA was forced to rebut.

The risks of fracking

In April 2016, a major peer-reviewed study by research institute PSE Healthy Energy was published in academic journal PLOS ONE, which assessed 685 pieces of peer-reviewed scientific literature from around the world over 2009-2015 and found:

  • “84% of public health studies contain findings that indicate public health hazards, elevated risks, or adverse health outcomes”

  • “69% of water quality studies contain findings that indicate potential, positive association, or actual incidence of water contamination”

  • “87% of air quality studies contain findings that indicate elevated air pollutant emissions and/or atmospheric concentrations”

I suggest readers actually read what is said. Hundreds of studies cannot identify, beyond reasonable doubt, that there is a significant risk to human health. If any single study did establish this, it would be world news. It is just hearsay that would be dismissed by a criminal court in the UK. A clue lies in what the PLOS ONE journal does not include in its submission criteria, something that is normal in traditional journals – that submissions should have something novel to say about the subject area. As an online journal it does not have to pay its way by subscriptions; authors usually pay a fee of $1,495 prior to publication.

But this still leaves the biggest piece of misinformation that FoE harps on about, which was not included in the ruling. Below are the BBC’s two pictures of the leaflet.

[Images: the two sides of the FoE leaflet, as shown by the BBC]

It is the issue of climate change that goes unchallenged. Yet it is the most pernicious and misleading claim of the lot. If fracking goes ahead in the UK, it will make not a jot of difference. According to the EU EDGAR data, the UK emitted just 1.1% of global GHG emissions in 2012. That proportion is falling, principally because emissions are rising in other countries, and it will continue to fall as emissions in developing countries rise as those countries develop. That is China, India, the rest of South East Asia and 50+ African nations. These developing countries, which are exempt from any obligation to constrain emissions under the 1992 Rio Declaration, have 80% of the global population and accounted for over 100% of emissions growth between 1990 and 2012. I have summarized the EDGAR data below.

[Chart: summary of the EDGAR GHG emissions data by country grouping]

So who does the FoE speak for when they say “We could trigger catastrophic global temperature increases if we burn shale in addition to other fossil fuels“?

They do not speak for the USA, where shale gas has replaced coal, where total emissions have reduced as a result, and where real pollutants are falling. There the bonus has been hundreds of thousands of extra jobs. They do not speak for China, where half of the global increase (well, 53%) in GHG emissions between 1990 and 2012 occurred. They cannot speak for Britain: if fracking triggers massive falls in energy costs, as in the USA (and geologically the North of England Bowland shale deposits look to be much deeper than the US deposits, so potentially cheaper to extract), then industry could be attracted back to the UK from countries like China with much higher emissions per unit of output.

Even worse, FoE do not speak for the British people. In promoting renewables, they are encouraging higher energy prices, which have led to increasing fuel poverty and increased winter deaths among the elderly. On the other hand, the claims of climate catastrophism from human emissions look to be far-fetched when this century’s rise in global average temperatures has stalled, when according to the theory it should have accelerated.

Kevin Marshall

Update 7th Jan 11am

Ron Clutz has posted a good summary of the initial ruling, along with pointing to a blog run by retired minister Rev. Michael Roberts, who was one of the two private individuals (along with gas exploration company Cuadrilla) who made the complaint to ASA.

The Rev Roberts has a very detailed post on the 4th January giving extensive background history of FoE’s misinformation campaign against shale gas exploration in the Fylde. There is one link I think they should amend. The post finishes with what I believe to be a true statement.

Leaflet omits main reason for opposition is Climate change

https://www.foe.co.uk/page/no-fracking-lancashire

The link is just to a series of posts on fracking in Lancashire. One of them is

Lancashire fracking inquiry: 3 reasons fracking must be stopped

The first reason is climate change. But rather than relate emissions to catastrophic global warming, they point to how allowing the development of fossil fuels sits in relation to the Government commitments made in the Climate Change Act 2008 and the Paris Agreement. FoE presents its unbalanced case in much fuller detail in the mis-named Fracking Facts.

Update 2 7th Jan 2pm

I have rechecked the post Cuadrilla’s leaflet complaint is closed without a ruling, while evidence of fracking risks grows, where Friends of the Earth activist Tony Bosworth makes the grossly misleading statement that the ASA closed the case without a ruling. The claim is still there, with no acknowledgement of the undertaking that FoE made to the ASA. FoE misled the public in order to gain donations, and now tries to hide the information from its supporters by misinformation. Below is a screenshot of the beginning of the article.

How strong is the Consensus Evidence for human-caused global warming?

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Richard Feynman – 1964 Lecture on the Scientific Method

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

The Debunking Handbook 2011 – John Cook and Stephan Lewandowsky

My previous post looked at the attacks on David Rose for daring to suggest that the rapid fall in global land temperatures following the El Nino event was strong evidence that the record highs in global temperatures were not due to human greenhouse gas emissions. The technique used to attack him was to look at long-term linear trends. The main problems with this argument were
(a) according to AGW theory, warming rates from CO2 alone should be accelerating, and at a higher rate than the estimated linear warming rates from HADCRUT4;
(b) HADCRUT4 shows warming stopped from 2002 to 2014, yet in theory the warming from CO2 should have accelerated.

Now there are at least two ways to view my arguments. The first is to look at Feynman’s approach. The climatologists and associated academics attacking journalist David Rose chose to do so from the perspective of a very blurred specification of AGW theory: human emissions will cause greenhouse gas levels to rise, which will cause global average temperatures to rise. Global average temperatures have clearly risen in all long-term (>40 year) data sets, so the theory is confirmed. On a rising trend, with large variations due to natural variability, any new records will be primarily “human-caused”. But making the theory and data slightly less vague reveals the opposite conclusion. Around the turn of the century the annual percentage increase in CO2 emissions went from 0.4% to 0.5% a year (figure 1), which should have led to an acceleration in the rate of warming. In reality, warming stalled.

The reaction was to come up with a load of ad hoc excuses. The Hockey Schtick blog had reached 66 separate excuses for the “pause” by November 2014, ranging from the peer-reviewed to a comment in the UK Parliament. This could be because climate is highly complex, with many variables, where the presence of each contributing factor can only be guessed at, let alone its magnitude and its interrelationships with all the other factors. So how do you tell which statements are valid information and which are misinformation? I agree with Cook and Lewandowsky that misinformation is pernicious, and difficult to get rid of once it becomes entrenched. So how does one distinguish the good information from the bad, misleading or even pernicious?

The Lewandowsky / Cook answer is to follow the consensus of opinion. But what is the consensus of opinion? In climate, one variation is to follow a small subset of academics in the area who answer in the affirmative to

1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?

2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

The problem is that the first question is just reading a graph, and the second is a belief statement with no precision. Anthropogenic global warming has been a hot topic for over 25 years now, yet these two very vague, empirically-based questions form the foundations of the subject. They should be able to be formulated more precisely. The second should be a case of having pretty clear and unambiguous estimates of the percentage of warming, so far, that is human-caused. On that, the consensus of leading experts is unable to say whether it is 50% or 200% of the warming so far. (There are meant to be time lags and factors like aerosols that might suppress the warming.) This is from the 2013 UNIPCC AR5 WG1 SPM section D3:-

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

The IPCC, encapsulating the state-of-the-art knowledge, cannot provide firm evidence in the form of a percentage, or even a fairly broad range, even with over 60 years of data to work on. It is even worse than it appears. The extremely likely phrase is a Bayesian probability statement. Ron Clutz’s simple definition from earlier this year was:-

Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.
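As a toy illustration of that description, entirely my own construction with made-up numbers, consider updating beliefs about climate sensitivity with Bayes’ rule:

```python
# Toy Bayesian update: "initial belief plus new evidence = new and improved
# belief". Three rival hypotheses about climate sensitivity (C per doubling
# of CO2), with made-up priors and made-up likelihoods for a new observation.
priors = {"S = 1.5C": 0.25, "S = 3.0C": 0.50, "S = 4.5C": 0.25}
likelihoods = {"S = 1.5C": 0.60, "S = 3.0C": 0.30, "S = 4.5C": 0.10}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for h, p in posteriors.items():
    print(f"{h}: prior {priors[h]:.2f} -> posterior {p:.2f}")
# A genuine Bayesian's beliefs shift with each round of evidence. A range
# of 1.5C to 4.5C left unchanged from 1979 to 2013 suggests no such
# updating has taken place.
```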

For the IPCC to claim that their statement is extremely likely, at the fifth attempt, they should be able to show some sort of progress in updating their beliefs in response to new evidence. That would mean narrowing the estimate of the magnitude of the impact of a doubling of CO2 on global average temperatures. As Clive Best documented in a cliscep comment in October, the IPCC reports from 1990 to 2013 failed to change the estimated range of 1.5°C to 4.5°C. Looking up Climate Sensitivity in Wikipedia, we get the origin of the range estimate.

A committee on anthropogenic global warming convened in 1979 by the National Academy of Sciences and chaired by Jule Charney estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available; one, due to Syukuro Manabe, exhibited a climate sensitivity of 2 °C, the other, due to James E. Hansen, exhibited a climate sensitivity of 4 °C. “According to Manabe, Charney chose 0.5 °C as a not-unreasonable margin of error, subtracted it from Manabe’s number, and added it to Hansen’s. Thus was born the 1.5 °C-to-4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since…

It is revealing that the quote sits under the subheading Consensus Estimates. The climate community has collectively failed to update the original belief, based on a very rough estimate. The emphasis on referring to consensus beliefs about the world, rather than looking outward for evidence in the real world, is, I would suggest, the primary reason for this failure. Yet such community-based beliefs completely undermine the integrity of the Bayesian estimates, making their use in statements about climate clear misinformation in Cook and Lewandowsky’s use of the term. What is more, those in the climate community who look primarily to these consensus beliefs, rather than to the data of the real world, will endeavour to dismiss the evidence, make up ad hoc excuses, or smear those who try to disagree. A caricature of these perspectives with respect to global average temperature anomalies is available in the form of a flickering widget at John Cook’s skepticalscience website. This purports to show the difference between “realist” consensus and “contrarian” non-consensus views. Figure 2 is a screenshot of the consensus view, interpreting warming as a linear trend. Figure 3 is a screenshot of the non-consensus or contrarian view, which is supposed to interpret warming as a series of short, disconnected periods of no warming, with each period just happening to be at a higher level than the previous one. There are a number of things that this indicates.

(a) The “realist” view is of a linear trend throughout any data series. Yet the period from around 1940 to 1975 has no warming, or slight cooling, depending on the data set. Therefore any linear trend line starting earlier than 1970-1975 and ending in 2015 will show a lower rate of warming. This would be consistent with the rate of CO2 increase rising over time, as shown in figure 1. But shorten the period, again ending in 2015, and once the period becomes less than 30 years the warming trend also decreases. This contradicts the theory, unless ad hoc excuses are used, as shown in my previous post using the HADCRUT4 data set. (The sketch after point (c) below illustrates this dependence on the chosen window.)

(b) Those who agree with the consensus are called “realists”, despite looking inwards towards common beliefs. Those who disagree are labelled “contrarians”. The latter label is not inaccurate where there is a dogmatic consensus. But it is utterly false to lump together all those who disagree as holding the same views, especially when no examples are provided of anyone who actually holds the views depicted.

(c) The linear trend appears a more plausible fit than the series of “contrarian” lines. By implication, those who disagree with the consensus are presented as having a distinctly more blinkered and distorted perspective than those who follow it. Yet even using the GISTEMP data set (which gives the greatest support to the consensus view) there is a clear break in the linear trend. The less partisan HADCRUT4 data shows an even greater break.
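The effect described in point (a) is easy to demonstrate. The Python sketch below fits linear trends to a synthetic anomaly series – flat to 1975, warming to 2002, then a pause – as a stand-in for HADCRUT4; the breakpoints and rates are illustrative only. Trends rise as the start year moves from 1940 towards 1970, then fall away as the period shortens.

```python
import numpy as np

# Synthetic anomaly series: flat 1940-1975, warming at ~0.18 C/decade
# 1975-2002, then a pause to 2015. Illustrative stand-in for real data.
years = np.arange(1940, 2016)
temp = np.where(years < 1975, 0.0,
                np.where(years < 2002,
                         0.018 * (years - 1975),      # warming phase
                         0.018 * (2002 - 1975)))      # post-2002 pause

for start in (1940, 1950, 1960, 1970, 1980, 1990, 2000):
    mask = years >= start
    slope = np.polyfit(years[mask], temp[mask], 1)[0] * 10  # C/decade
    print("%d-2015: %+.3f C/decade" % (start, slope))
```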

Those who spot the obvious – that around the turn of the century warming stopped or slowed down, when in theory it should have accelerated – are given a clear choice. They can conform to the scientific consensus, denying the discrepancy between theory and data. Or they can act as scientists, denying the false and empirically empty consensus, and receive the full weight of the career-damaging opprobrium that accompanies such dissent.

Figure 2: Skeptical Science widget – the “realist” linear-trend view

Figure 3: Skeptical Science widget – the “contrarian” step-change view

Kevin Marshall


Climate Experts Attacking a Journalist by Misinformation on Global Warming

Summary

Journalist David Rose was attacked for pointing out in a Daily Mail article that the strong El Nino event, which resulted in record temperatures, was reversing rapidly. He claimed that the record highs may not be down to human emissions. The Climate Feedback attack article claimed that the El Nino event did not affect the long-term human-caused trend. My analysis shows that:

  • CO2 levels have been rising at increasing rates since 1950.
  • In theory this should translate into warming at increasing rates – that is, a non-linear warming rate.
  • HADCRUT4 temperature data shows that warming stopped in 2002, only resuming with the El Nino event in 2015 and 2016.
  • At the central climate sensitivity estimate – a doubling of CO2 leading to 3°C of global warming – HADCRUT4 was already falling short of theoretical warming in 2000. This is without the impact of other greenhouse gases.
  • Putting linear trend lines through the last 35 to 65 years of data will show very little impact from El Nino, but will have a very large visual impact, reducing the apparent divergence between theoretical human-caused warming and the temperature data without eliminating it.

The claim that the large El Nino does not affect long-term linear trends is correct. But a linear trend describes warming neither in theory nor in the leading temperature data set. To say, as experts in the field, that the long-term warming trend is even principally human-caused needs a lot of circumspection. This is lacking in the attack article.


Introduction

Journalist David Rose recently wrote a couple of articles in the Daily Mail on the plummeting global average temperatures.
The first, on 26th November, was under the headline

Stunning new data indicates El Nino drove record highs in global temperatures suggesting rise may not be down to man-made emissions

With the summary

• Global average temperatures over land have plummeted by more than 1C
• Comes amid mounting evidence run of record temperatures about to end
• The fall, revealed by Nasa satellites, has been caused by the end of El Nino

Rose’s second article used the Met Office’s HADCRUT4 data set, whereas the first used satellite data. Rose was a little more circumspect when he said:

El Nino is not caused by greenhouse gases and has nothing to do with climate change. It is true that the massive 2015-16 El Nino – probably the strongest ever seen – took place against a steady warming trend, most of which scientists believe has been caused by human emissions.

But when El Nino was triggering new records earlier this year, some downplayed its effects. For example, the Met Office said it contributed ‘only a few hundredths of a degree’ to the record heat. The size of the current fall suggests that this minimised its impact.

There was a massive reaction to the first article, as discussed by Jaime Jessop at Cliscep. She particularly noted that earlier in the year there had been articles on the dramatically higher temperature record of 2015, such as a Guardian article in January. There was also a follow-up video conversation between David Rose and Dr David Whitehouse of the GWPF commenting on the reactions. One key feature of the reactions was the claim that the contribution of the El Nino effect to the global warming trend was just a few hundredths of a degree. I find the Climate Feedback article particularly interesting, as it emphasizes trend over short-run blips. Some examples:

Zeke Hausfather, Research Scientist, Berkeley Earth:
In reality, 2014, 2015, and 2016 have been the three warmest years on record not just because of a large El Niño, but primarily because of a long-term warming trend driven by human emissions of greenhouse gases.

….
Kyle Armour, Assistant Professor, University of Washington:
It is well known that global temperature falls after receiving a temporary boost from El Niño. The author cherry-picks the slight cooling at the end of the current El Niño to suggest that the long-term global warming trend has ended. It has not.

…..
KEY TAKE-AWAYS
1.Recent record global surface temperatures are primarily the result of the long-term, human-caused warming trend. A smaller boost from El Niño conditions has helped set new records in 2015 and 2016.

…….

2. The article makes its case by relying only on cherry-picked data from specific datasets on short periods.

To understand what was said, I will try to take the broader perspective. That is, to see whether the evidence points conclusively to a single long-term warming trend that is primarily human-caused. This will point to the real reason (or reasons) for downplaying the impact of an extreme El Nino event on record global average temperatures. There are a number of steps in this process.

Firstly, to look at the data of rising CO2 levels. Secondly, to relate that to the predicted global average temperature rise, and hence to the expected warming trends. Thirdly, to compare those trends with the actual trends in the HADCRUT4 estimates, taking note of the consequences of including other greenhouse gases. Fourthly, to put the calculated trends in the context of the statements made above.


1. The recent data of rising CO2 levels
CO2 accounts for a significant majority of the alleged warming from increases in greenhouse gas levels. Since 1958, when accurate measurements started to be taken at Mauna Loa, CO2 levels have risen significantly. Whilst I could produce a simple graph of either the CO2 level rising from 316 ppm to 401 ppm in 2015, or the year-on-year increases in CO2 rising from around 0.8 ppm in the 1960s to over 2 ppm in the last few years, Figure 1 is more illustrative.

CO2 is not just rising; the rate of rise has been increasing as well, from around 0.25% a year in the 1960s to over 0.50% a year in the current century.
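That acceleration is easy to verify from a handful of approximate annual means. A proper check would load the full NOAA Mauna Loa series; the snapshot values below are rounded figures for illustration only.

```python
# Approximate Mauna Loa annual-mean CO2 (ppm). Rounded snapshots only;
# a real analysis would use the full NOAA annual series.
co2 = {1959: 316.0, 1970: 325.7, 1980: 338.8, 1990: 354.4,
       2000: 369.6, 2010: 389.9, 2015: 400.8}

years = sorted(co2)
for y0, y1 in zip(years, years[1:]):
    # compound annual growth rate and absolute rise between snapshots
    cagr = (co2[y1] / co2[y0]) ** (1.0 / (y1 - y0)) - 1.0
    ppm_per_year = (co2[y1] - co2[y0]) / (y1 - y0)
    print("%d-%d: %.2f ppm/yr, %.2f%%/yr" % (y0, y1, ppm_per_year, 100 * cagr))
```

The output shows both measures roughly doubling: from under 0.9 ppm (about 0.28%) a year in the 1960s to around 2.2 ppm (about 0.55%) a year since 2010.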


2. Rising CO2 should be causing accelerating temperature rises

The impact of CO2 on temperatures is not linear, but is believed to approximate to a fixed temperature rise for each doubling of CO2 levels. That means that if CO2 levels were rising arithmetically, the impact on the rate of warming would fall over time. If CO2 levels were rising by the same percentage amount year-on-year, then the consequent rate of warming would be constant over time. But Figure 1 shows that the percentage rise in CO2 has increased over the last two-thirds of a century. The best way to evaluate the combination of CO2 increasing at an accelerating rate and a diminishing impact of each unit rise on warming is to crunch some numbers. The central estimate used by the IPCC is that a doubling of CO2 levels will result in an eventual rise of 3°C in global average temperatures. Dana1981 at Skepticalscience used a formula that produces a rise of 2.967°C for any doubling. After adjusting the formula, plugging the Mauna Loa annual average CO2 levels into it produces Figure 2.
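For anyone wanting to replicate the number-crunching, the standard logarithmic relationship is ΔT = S × log2(C/C0), with S the sensitivity per doubling. The 2.967°C-per-doubling figure quoted above is, I assume, the product of the simplified forcing expression F = 5.35 ln(C/C0) W/m² and a sensitivity of 0.8°C per W/m², since 0.8 × 5.35 × ln 2 ≈ 2.967. A minimal sketch:

```python
import math

# Equilibrium warming for a rise in CO2, assuming the standard
# logarithmic relationship with a fixed sensitivity per doubling.
def co2_warming(c, c0, per_doubling=3.0):
    """Eventual temperature rise (C) for CO2 going from c0 to c (ppm)."""
    return per_doubling * math.log(c / c0, 2)

print(co2_warming(632.0, 316.0))  # a doubling: 3.0 C by construction
print(co2_warming(401.0, 316.0))  # 1959-2015 rise: about 1.03 C
```

Plugging each year’s Mauna Loa value into such a function, and differencing successive years, gives the expected annual CO2-induced warming.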

In computing the data I estimated the level of CO2 in 1949 (based roughly on CO2 estimates from Law Dome ice core data) and then assumed a linear increase through the 1950s. Another assumption was that the full impact of a CO2 rise on temperatures would take place in the year following that rise.

The annual CO2-induced temperature change is highly variable, corresponding to the fluctuations in the annual CO2 rise. The 11-year average – placed at the end of the series to give an indication of the lagged impact that CO2 is supposed to have on temperatures – shows the acceleration in the expected rate of CO2-induced warming that follows from the acceleration in the rate of increase in CO2 levels. Most critically, there is some acceleration in warming around the turn of the century.

I have also included the impact of a linear trend (derived by simply dividing the total CO2 increase in the period by the number of years), along with a steady increase of 0.396% a year, which produces a constant rate of temperature rise.
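A compressed sketch of these calculations, using a synthetic accelerating CO2 series in place of the actual Mauna Loa values (all numbers illustrative):

```python
import numpy as np

# Synthetic CO2 series whose growth rate rises from ~0.25% to ~0.55% a
# year, as a stand-in for the Mauna Loa record.
years = np.arange(1959, 2016)
growth = np.linspace(0.0025, 0.0055, len(years))
co2 = 316.0 * np.cumprod(1.0 + growth)

def warming(c, c0, s=3.0):
    """Equilibrium warming (C) at c ppm, from a baseline of c0 ppm."""
    return s * np.log2(c / c0)

# Annual CO2-induced change (applied with a one-year lag in the text)
# and its 11-year moving average to smooth the fluctuations.
induced = np.diff(warming(co2, co2[0]))
avg11 = np.convolve(induced, np.ones(11) / 11, mode="valid")

# A steady 0.396% a year rise in CO2 gives a constant warming rate.
steady = 3.0 * np.log2(1.00396)
print("expected C/yr - 1960s: %.4f, 2000s: %.4f, steady rise: %.4f"
      % (induced[:10].mean(), induced[-10:].mean(), steady))
print("11-year average at end of series: %.4f C/yr" % avg11[-1])
```

The pattern, not the precise values, is the point: the expected warming rate roughly doubles over the period, whereas a constant percentage rise in CO2 would hold it flat.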

Figure 3 puts the calculations into the context of the current issue.

This gives the expected linear temperature trends from various start dates up until 2014 and 2016, assuming a one-year lag in the impact of changes in CO2 levels on temperatures. These are the same sort of linear trends that the climate experts used in criticizing David Rose. Extending the period by two more years produces very little difference – about 0.054°C of additional temperature rise, and an increase in trend of less than 0.01°C per decade. More importantly, the rate of temperature rise from CO2 alone should be accelerating.


3. HADCRUT4 warming

How does one compare this to the actual temperature data? A major issue is that there is a very indeterminate lag between the rise in CO2 levels and the rise in average temperature. Another issue is that CO2 is not the only greenhouse gas. The more minor greenhouse gases may have shown different patterns of increase in the last few decades. These would change the trends of the resultant warming, but their impact should be additional to the warming caused by CO2. That is, in the long term, CO2 warming should account for less than the total observed warming.
There is no need to do actual calculations of trends from the surface temperature data. The Skeptical Science website has a trend calculator, where one can just plug in the values. Figure 4 shows an example of the output graph; the data set currently ends in an El Nino peak.
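For anyone wanting to reproduce such trends offline, the core of a trend calculator is an ordinary least squares fit reporting the slope with a 2σ uncertainty. The sketch below omits the autocorrelation correction that the Skeptical Science calculator applies to monthly data, so its uncertainty ranges come out narrower than the calculator’s; the `years`/`anom` names in the commented usage are hypothetical placeholders for a HADCRUT4 extract.

```python
import numpy as np

def trend(x, y):
    """OLS trend of an annual anomaly series: (slope, 2-sigma) in C/decade."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    # standard error of the OLS slope coefficient
    se = np.sqrt(np.sum(residuals ** 2) / (len(x) - 2)
                 / np.sum((x - x.mean()) ** 2))
    return 10 * slope, 10 * 2 * se

# Hypothetical usage against an annual HADCRUT4 extract:
# for start in (1950, 1960, 1970, 1980, 1990, 2000, 2010):
#     sel = years >= start
#     print(start, "%+.3f +/- %.3f C/decade" % trend(years[sel], anom[sel]))
```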

The trend results for HADCRUT4 are shown in Figure 5 for periods up to 2014 and 2016 and compared to the CO2 induced warming.

There are a number of things to observe from the trend data.

The most visible difference between the two tables is that the first has a pause in global warming after 2002, whilst the second has a warming trend. This is attributable to the impact of El Nino. These experts are also right in that it makes very little difference to the long-term trend: if the long term is over 40 years, then it is like adding 0.04°C per century to that long-term trend.

But there is far more within the tables than this observation. Concentrate first on the three “Trend in °C/decade” columns. The first is of the CO2 warming impact from Figure 3. For a given end year, the shorter the period, the higher the warming trend. Next to this are the Skeptical Science trends for the HADCRUT4 data set. Start Year 1960 has a higher trend than Start Year 1950, and Start Year 1970 has a higher trend than Start Year 1960. But each later Start Year then has a lower trend than the previous Start Years. There is one exception: the period 2010 to 2016 has a much higher trend than any other period – a consequence of the extreme El Nino event. Excluding this, there are now over three decades in which the actual warming trend has been diverging from the theory.

The third of the “Trend in °C/decade” columns is simply the difference between the HADCRUT4 temperature trend and the expected trend from rising CO2 levels. If a doubling of CO2 levels did produce around 3°C of warming, and other greenhouse gases were also contributing to warming, then one would expect CO2 to eventually explain less than the observed warming. That is, the variance would be positive. Instead, as CO2 levels accelerated, actual warming stalled, increasing the negative variance.


4. Putting the claims into context

Compare David Rose

Stunning new data indicates El Nino drove record highs in global temperatures suggesting rise may not be down to man-made emissions

With Climate Feedback KEY TAKE-AWAY

1.Recent record global surface temperatures are primarily the result of the long-term, human-caused warming trend. A smaller boost from El Niño conditions has helped set new records in 2015 and 2016.

The HADCRUT4 temperature data shows that there had been no warming for over a decade, following a warming trend. This directly contradicts theory, which would predict that CO2-based warming would be at a higher rate than previously. Given that the record temperatures following this hiatus came as part of a naturally-occurring El Nino event, it is fair to say that record highs in global temperatures ….. may not be down to man-made emissions.

The so-called long-term warming trend encompasses both the late twentieth-century warming and the twenty-first-century hiatus. As the latter flatly contradicts theory, it is incorrect to describe the long-term warming trend as “human-caused”. There needs to be a more circumspect description, such as: the vast majority of academics working in climate-related areas believe that the long-term (last 50+ years) warming is mostly “human-caused”. This would be in line with the first bullet point from the UNIPCC AR5 WG1 SPM section D3:-

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

When the IPCC’s summary opinion and the actual data are taken into account, Zeke Hausfather’s comment that the records “are primarily because of a long-term warming trend driven by human emissions of greenhouse gases” is dogmatic.

Now consider what David Rose said in the second article

El Nino is not caused by greenhouse gases and has nothing to do with climate change. It is true that the massive 2015-16 El Nino – probably the strongest ever seen – took place against a steady warming trend, most of which scientists believe has been caused by human emissions.

Compare this to Kyle Armour’s statement about the first article.

It is well known that global temperature falls after receiving a temporary boost from El Niño. The author cherry-picks the slight cooling at the end of the current El Niño to suggest that the long-term global warming trend has ended. It has not.

This time Rose seems to have responded to the pressure by stating that there is a long-term warming trend, despite the data clearly showing that this is untrue except in the vaguest sense. The data does not show a single warming trend. Going back to the Skeptical Science trends, we can break down the data from 1950 into four periods.

1950-1976 -0.014 ±0.072 °C/decade (2σ)

1976-2002 0.180 ±0.068 °C/decade (2σ)

2002-2014 -0.014 ±0.166 °C/decade (2σ)

2014-2016 1.889 ±1.882 °C/decade (2σ)

There was warming for about a quarter of a century, sandwiched between two periods of no warming, with an uptick at the end. Only very loosely can anyone speak of a long-term warming trend in the data. But basic theory hypothesises a continuous, non-linear warming trend. Journalists can be excused for failing to make these distinctions. As non-experts they will reference opinion that appears sensibly expressed, especially when the alleged experts in the field are united in using such language. But those in academia, who should have a demonstrable understanding of theory and data, should be more circumspect in their statements when speaking as experts in their field. (Kyle Armour’s comment is an extreme example of what happens when academics completely suspend drawing on their expertise.) This is particularly true when there are strong divergences between the theory and the data. The consequence is plain to see: expert academic opinion tries to bring the real world into line with the theory by authoritative but banal statements about trends.

Kevin Marshall