Changing a binary climate argument into understanding the issues

Last month Geoff Chambers posted “Who’s Binary, Us or Them?”. Being at Cliscep, the question was naturally about whether sceptics or alarmists are binary in their thinking. It reminded me of something that went viral on YouTube a few years ago: Greg Craven’s The Most Terrifying Video You’ll Ever See.

To his credit, Greg Craven recognizes both that human-caused climate change could have a trivial impact and that mitigating climate change (taking action) is costly. But for the purposes of his decision grid he side-steps these issues to give binary positions on both. The decision is thus based on the belief that the likely consequences (costs) of catastrophic anthropogenic global warming far exceed the likely consequences (costs) of taking action. A more sophisticated statement of this came from a report commissioned in the UK to justify draconian climate action of the type Greg Craven advocates. Sir Nicholas (now Lord) Stern’s report of 2006 separated the two concepts of warming costs and policy costs when it claimed (in the Executive Summary):

Using the results from formal economic models, the Review estimates that if we don’t act, the overall costs and risks of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever. If a wider range of risks and impacts is taken into account, the estimates of damage could rise to 20% of GDP or more. In contrast, the costs of action – reducing greenhouse gas emissions to avoid the worst impacts of climate change – can be limited to around 1% of global GDP each year.

Craven has merely simplified the issue and made it more starkly binary; Stern presents the same binary choice. It is a choice between taking costly action and suffering the much greater possible consequences. I will look at the policy issue first.

Action on Climate Change

The alleged cause of catastrophic anthropogenic global warming (CAGW) is human greenhouse gas emissions. It is not just some people’s emissions that must be reduced, but the aggregate emissions of all 7.6 billion people on the planet. Action on climate change (i.e. reducing GHG emissions to near zero) must therefore include all of the countries in which those people live. The UNFCCC, in the run-up to COP21 Paris 2015, invited countries to submit Intended Nationally Determined Contributions (INDCs). Most did so before COP21, and as at June 2018, 165 INDCs have been submitted, representing 192 countries and 96.4% of global emissions. The UNFCCC has made them available to read. So will these intentions be sufficient “action” to remove the risk of CAGW? Prior to COP21, the UNFCCC produced a Synthesis Report on the aggregate effect of the INDCs. (The link no longer works, but the main document is here.) It contained a graphic, which I have shown on multiple occasions, of the gap between the policy intentions and the desired policy goals. A more recent graphic comes from the UNEP Emissions Gap Report 2017, published last October and shown below.

Figure 3 : Emissions GAP estimates from the UNEP Emissions GAP Report 2017

In either policy scenario, emissions are likely to be slightly higher in 2030 than now, and increasing, whilst the policy objective is for emissions to be substantially lower than today and decreasing rapidly. Even with the policy proposals fully implemented, global emissions will be at least 25%, and possibly more than 50%, above the desired policy objectives. Thus, even if the proposed policies achieve their objectives, in Greg Craven’s terms we are left with pretty much all the possible risks of CAGW, whilst incurring some costs. But the “we” here is 7.6 billion people in nearly 200 countries, whilst the real costs are being incurred by very few countries. In the United Kingdom, the Climate Change Act 2008 is placing huge costs on the British people, yet future generations of Britons will receive little or no benefit.

Most people in the world live in poorer countries that will do nothing significant to constrain emissions growth if that conflicts with economic growth or other more immediate policy objectives. For some of the most populous developing countries, it is quite clear that achieving the policy objectives will leave emissions considerably higher than today. For instance, China’s main aims of peaking CO2 emissions around 2030 and lowering carbon emissions per unit of GDP by 60-65% of 2005 levels by 2030 could be achieved with emissions in 2030 20-50% higher than in 2017. India has a lesser but similar target of reducing emissions per unit of GDP by 30-35% of 2005 levels by 2030. If its ambitious economic growth targets are achieved, emissions could double in 15 years, and still be increasing past the middle of the century. Emissions in Bangladesh and Pakistan could both more than double by 2030, and continue increasing for decades after.

Within these four countries are over 40% of the global population. Many other countries are also likely to have emissions increasing for decades to come, particularly in Asia and Africa. Yet without them changing course global emissions will not fall.

There is another group of countries that have vested interests in obstructing emission reduction policies: the major suppliers of fossil fuels. In a letter to Nature in 2015, McGlade and Ekins (The geographical distribution of fossil fuels unused when limiting global warming to 2°C) estimate that the proven global reserves of oil, gas and coal would produce about 2900 GtCO2e. They further estimate that the “non-reserve resources” of fossil fuels represent a further 8000 GtCO2e of emissions. They estimated that to constrain warming to 2°C, 75% of proven reserves, and any future proven reserves, would need to be left in the ground. Using figures from the BP Statistical Review of World Energy 2016, I produced a rough split by major country.

Figure 4 : Fossil fuel Reserves by country, expressed in terms of potential CO2 Emissions

Activists point to the reserves in the rich countries having to be left in the ground. But in the USA, Australia, Canada and Germany, production of fossil fuels is not a major part of the economy; ceasing production would be harmful but not devastating. One major comparison is between the USA and Russia. Gas and crude oil production volumes are similar in both countries, but the nominal GDP of the US is more than ten times that of Russia. Crude oil production in each country in 2016 was about 550 million tonnes, or 3900 million barrels. At $70 a barrel that is around $275bn, equivalent to 1.3% of America’s GDP and 16% of Russia’s. In gas, prices vary, being very low in the highly competitive USA and highly variable for Russian supply, with the major supplier Gazprom acting as a discriminating monopolist. But America’s gas revenue is likely to be less than 1% of GDP, and Russia’s equivalent to 10-15%. There is even greater dependency in the countries of the Middle East. In terms of achieving emissions targets, what is being attempted is the elimination of the major source of these countries’ economic prosperity within a generation, with year-on-year contractions in fossil fuel sales volumes.
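
As a rough cross-check on those percentages, here is a minimal sketch in Python. The production and price figures are the ones quoted above; the GDP figures are round-number assumptions of mine, so treat the output as illustrative only.

```python
# Oil revenue as a share of GDP, using the figures quoted above.
barrels = 3_900e6                  # annual crude production, barrels
price = 70.0                       # USD per barrel
revenue = barrels * price          # ~ $273bn

gdp = {"USA": 20_000e9, "Russia": 1_700e9}   # assumed nominal GDP, USD
for country, g in gdp.items():
    print(f"{country}: oil revenue ~ {revenue / g:.1%} of GDP")
# USA: ~1.4% of GDP; Russia: ~16% of GDP -- close to the 1.3% and 16%
# quoted in the text.
```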

I propose that there are two distinct groups of countries that have a lot to lose from a global contraction in GHG emissions to near zero: the developing countries, which would have to forgo long-term economic growth, and the major fossil fuel-dependent countries, which would lose the very foundation of their economic output within a generation. From the evidence of the INDC submissions, there is no realistic possibility of these countries being convinced to embrace major economic self-harm in the time scales required. The emissions targets are not going to be met, and the emissions gap will not be closed to any appreciable degree.

This leaves Greg Craven’s binary decision of taking action or not as irrelevant. As taking action by a single country will not eliminate the risk of CAGW, pursuing aggressive climate mitigation policies will impose net harms wherever they are implemented. Further, it is not the climate activists who are making the decisions, but the policy-makers of the countries themselves. If the activists believe that others should follow another path, it is they who must make the case. To win over the policy-makers they should first have sought to understand the perspectives of those countries, then persuaded them to accept their more enlightened outlook. The INDCs show that the climate activists have failed in this mission. Until they succeed, when activists talk about what “we” are doing to change the climate, or what “we” ought to be doing, they are not speaking for the countries that will make the actual decisions.

But the activists have won over the United Nations and those who work for many Governments, and they dominate academia. For most countries, this puts political leaders in a quandary. To maintain good diplomatic relations with other countries, and to appear as movers on the world stage, they create the appearance, for the outside world, of taking significant action on climate change. On the other hand, they serve their countries by minimizing the real harms that imposing the policies would create. Any “realities” of climate change have become largely irrelevant to climate mitigation policies.

The Risks of Climate Apocalypse

Greg Craven recognized a major issue with his original video: in the shouting match over global warming, who should you believe? In How it all Ends (which was followed up by further videos and a book), Craven believes he has the answer.

Figure 5 : Greg Craven’s “How it all Ends”

It was pointed out that the logic behind the grid is bogus, as Craven himself, in Devil’s-advocate guise, says at 3:50:

Wouldn’t that grid argue for action against any possible threat, no matter how costly the action or how ridiculous the threat? Even giant mutant space hamsters? It is better to go broke building a load of rodent traps than risk the possibility of being hamster chow. So this grid is useless.

His answer is to get a sense of how likely it is that global warming is TRUE or FALSE, given that science is always uncertain and opinions are divided.

The trick is not to look at what individual scientists are saying, but instead to look at what the professional organisations are saying. The more prestigious they are, the more weight you can give their statements, because they have got huge reputations to uphold and they don’t want to say something that later makes them look foolish. 

Craven points to the “two most respected in the world”: the National Academy of Sciences (NAS) and the American Association for the Advancement of Science (AAAS). Back in 2007 they had “both issued big statements calling for action, now, on global warming”. The crucial question for scientists (that is, people with a demonstrable expert understanding of the natural world) is not one of political advocacy, but whether their statements say there is a risk of climate apocalypse. Both bodies still have statements on climate change.

The National Academy of Sciences (NAS) says:

There are well-understood physical mechanisms by which changes in the amounts of greenhouse gases cause climate changes. The US National Academy of Sciences and The Royal Society produced a booklet, Climate Change: Evidence and Causes (download here), intended to be a brief, readable reference document for decision makers, policy makers, educators, and other individuals seeking authoritative information on some of the questions that continue to be asked. The booklet discusses the evidence that the concentrations of greenhouse gases in the atmosphere have increased and are still increasing rapidly, that climate change is occurring, and that most of the recent change is almost certainly due to emissions of greenhouse gases caused by human activities.

Further climate change is inevitable; if emissions of greenhouse gases continue unabated, future changes will substantially exceed those that have occurred so far. There remains a range of estimates of the magnitude and regional expression of future change, but increases in the extremes of climate that can adversely affect natural ecosystems and human activities and infrastructure are expected.

Note that this is in conjunction with the Royal Society, arguably the most prestigious scientific organisation of them all (or at least it once was). What is not said is as important as what is actually said. They are saying that there is an expectation that extremes of climate could get worse. There is nothing that specifically backs up the climate apocalypse; rather, there is a range of possibilities, including changes somewhat trivial on a global scale. The statement endorses a spectrum of possible positions, which undermines the binary TRUE/FALSE basis of the decision-making.

The RS/NAS booklet has no estimates of the scale of the possible climate catastrophe to be avoided. Point 19 is the closest:

Are disaster scenarios about tipping points like ‘turning off the Gulf Stream’ and release of methane from the Arctic a cause for concern?

The summary answer is

Such high-risk changes are considered unlikely in this century, but are by definition hard to predict. Scientists are therefore continuing to study the possibility of such tipping points beyond which we risk large and abrupt changes.

This appears not to support Stern’s contention that unmitigated climate change will cost at least 5% of global GDP by 2100. Another way to put the back-tracking on potential catastrophism in context is to compare with Lenton et al 2008 – Tipping elements in the Earth’s climate system. Below is a map showing the various elements considered.

Figure 6 : Fig 1 of Lenton et al 2008, with explanatory note.

Of the 14 possible tipping elements discussed, only one makes it into the booklet six years later. Surely, if the other 13 were still credible, more would have been included in the booklet, with less space devoted to documenting trivial historical changes.

The American Association for the Advancement of Science (AAAS) has a video:

Figure 7 : AAAS “What We Know – Consensus Sense” video


It starts with the 97% consensus claims. After asking the listener how many scientists they think agree, Marshall Shepherd, Professor of Geography at the University of Georgia, states:

The reality is that 97% of scientists are pretty darn certain that humans are contributing to the climate change that we are seeing right now and we better do something about it to soon.

There are two key papers that claimed a 97% consensus. Doran and Zimmerman 2009 asked two questions:

1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?

2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

The second of these questions was answered in the affirmative by 75 of 77 climate scientists, a sample whittled down from the 3146 responses received. Read the original to find out why it was reduced.

Dave Burton has links to a number of sources on these studies. A relevant quote on Doran and Zimmerman comes from the late Bob Carter:

Both the questions that you report from Doran’s study are (scientifically) meaningless because they ask what people “think”. Science is not about opinion but about factual or experimental testing of hypotheses – in this case the hypothesis that dangerous global warming is caused by human carbon dioxide emissions.

The abstract to Cook et al. 2013 begins

We analyze the evolution of the scientific consensus on anthropogenic global warming (AGW) in the peer-reviewed scientific literature, examining 11 944 climate abstracts from 1991–2011 matching the topics ‘global climate change’ or ‘global warming’. We find that 66.4% of abstracts expressed no position on AGW, 32.6% endorsed AGW, 0.7% rejected AGW and 0.3% were uncertain about the cause of global warming. Among abstracts expressing a position on AGW, 97.1% endorsed the consensus position that humans are causing global warming. 

Expressing a position does not mean a belief; it could be an assumption. And the papers were not necessarily by scientists, but merely by authors of academic papers on the topics ‘global climate change’ or ‘global warming’. Jose Duarte listed some of the papers that were included in the survey, and looked at some that were left out.
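
For what it is worth, the 97.1% figure follows directly from the percentages quoted in the abstract: it is calculated only over the third of abstracts that expressed a position (the rounded percentages give 97.0%; the paper’s 97.1% comes from the underlying counts).

```python
# Deriving the Cook et al. 2013 headline figure from the abstract's
# own percentages (shares of all 11,944 abstracts examined).
endorse, reject, uncertain = 32.6, 0.7, 0.3
expressed = endorse + reject + uncertain   # 33.6% expressed a position
print(f"{endorse / expressed:.1%}")        # ~97.0% of that subset
```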

Neither paper asked a question concerning belief in future climate catastrophism. Shepherd does not make clear the scale of the climate change trends relative to the norm, so the human-caused element could be insignificant. And the 97% consensus does not cover the policy claims.

The booklet is also misleading on the scale of changes. For instance, on sea-level rise it states:

Over the past two decades, sea levels have risen almost twice as fast as the average during the twentieth century.

You will get that result if you compare the tide gauge data with the two decades of satellite data. The question is whether those two sets of data are directly comparable. As individual tide gauges do not tend to show acceleration, and others cannot find statistically significant acceleration, the claim seems not to be supported.

At around 4:15 in the consensus video, AAAS CEO Alan I. Leshner says:

America’s leaders should stop debating the reality of climate change and start deciding the best solutions. Our What we Know report makes clear that climate change threatens us at every level. We can reduce the risk of global warming to protect our people, businesses and communities from harm. At every level from our personal and community health, our economy and our future as a global leader. Understanding and managing climate change risks is an urgent problem.

The statement is about combating the potential risks from CAGW. The global part of global warming is significant for policy. The United States’ share of global emissions is around 13%, and that share has been falling as America’s emissions have fallen whilst global aggregate emissions have risen. The INDC submission for the United States aimed at getting US emissions in 2025 to 26-28% below 2005 levels, with a large part of that reduction already “achieved” when the report was published. The actual policy difference is likely to be less than 1% of global emissions, so any reduction in the risks from climate change seems tenuous. A consensus of the best scientific minds should have been able to work this out for themselves.

The AAAS does not give a collective expert opinion on climate catastrophism. This is shown by its inability to distinguish between banal opinions and empirical evidence for a big problem. This carries over into policy advocacy, where it fails to distinguish between the United States and the world as a whole.

Conclusions

Greg Craven’s decision-making grid is inapplicable to real-world decision-making. The decision whether to take action is not a unitary one, but needs to be taken at country level. Different countries will have different perspectives on the importance of taking action on climate change relative to other issues. In the real world, the proposals for action are available. In aggregate they will not “solve” the potential risk of climate apocalypse. Whatever the actual scale of CAGW, countries who pursue expensive climate mitigation policies are likely to make their own people worse off than if they did nothing at all.

Craven’s grid assumes that the costs of the climate apocalypse are potentially far greater than the costs of action, no matter how huge those costs. He tries to cut through the arguments by getting the opinions of the leading scientific societies. To put it mildly, they do not currently provide strong scientific evidence for a potentially catastrophic problem. The NAS / Royal Society suggest a range of possible climate change outcomes, with only vague evidence for potentially catastrophic scenarios. This does not seem to back the huge potential costs of unmitigated climate change in the Stern Review. The AAAS seems to provide vague, banal opinions in support of political advocacy, rather than the rigorous analysis based on empirical evidence that one would expect from the scientific community.

It would appear that binary thinking on both the “science” and the “policy” leads to a dead end, and to net harmful public policy.

What are the alternatives to binary thinking on climate change?

My purpose in looking at Greg Craven’s decision grid is not to destroy an alternative perspective, but to understand where the flaws lie, for the sake of better alternatives. As a former, slightly manic, beancounter, I would (like the Stern Review and William Nordhaus) look at translating potential CAGW into costs, but then weight them according to a discount rate and the strength of the evidence. In terms of policy, I would similarly compare the likely expected costs of the implemented policies against the expected harms actually avoided. As I have tried to lay out above, the costs of policy, and indeed the potential costs of climate change, are largely subjective. Further, those implementing policies might be boxed in by other priorities and various interest groups jostling for position.
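
As a sketch of what such a weighting might look like, the fragment below compares a probability-weighted, discounted future damage against an up-front policy cost. Every number in it is an invented assumption for illustration; nothing is drawn from the Stern Review or Nordhaus.

```python
# Toy expected-cost comparison: probability-weighted, discounted damage
# versus an up-front policy cost. All inputs are invented assumptions.

def present_value(cost, years_ahead, discount_rate):
    """Discount a future cost back to today's terms."""
    return cost / (1 + discount_rate) ** years_ahead

p_catastrophe = 0.10    # assumed probability the damage materialises
damage_pct_gdp = 20.0   # assumed damage (% of GDP), 80 years out
policy_pct_gdp = 1.0    # assumed policy cost (% of GDP), paid now

expected = p_catastrophe * present_value(damage_pct_gdp, 80, 0.03)
print(f"expected discounted damage: {expected:.2f}% of GDP "
      f"vs policy cost: {policy_pct_gdp:.2f}% of GDP")
# The ranking flips as the discount rate, horizon or probability change,
# which is why these inputs, not the grid, do the real work.
```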

But what of the expert scientists who can see the impending catastrophes to which I am blind, and against which climate mitigation will be useless? Their task is to endeavour to pin down the where, when, type and magnitude of the potential changes to climate. With this information, ordinary people can adjust their plans. The challenge for those who believe there are real problems is to focus on the data from the natural world and move away from the inbuilt biases of the climate community. But the most difficult part is that by such methods they may lose their beliefs, status and friends.

First is to obtain some perspective. In terms of the science, it is worth looking at the broad range of different perspectives on the Philosophy of Science. The Stanford Encyclopedia of Philosophy article on the subject is long, but very up to date. In the conclusions, the references to Paul Hoyningen-Huene’s views on what sets science apart seem to offer a way out of consensus studies.

Second is to develop strategies to move away from partisan positions, using simple principles or contrasts that other fields use. In Fundamentals that Climate Science Ignores I list some of these.

Third, in terms of policy, it is worthwhile having a theoretical framework in which to analyze the problems. After looking at Greg Craven’s videos in 2010, I developed a graphical analysis that will be familiar to people who have studied Marshallian supply and demand curves or Hicksian IS-LM. It is very rough at the edges, but armed with it you will not fall into the trap of thinking, like the AAAS, that US policy will stop US-based climate change.

Fourth is to look from other perspectives. Appreciate that other people might have perspectives that you can learn from. Alternatively, they may have entrenched positions which, although you might disagree with them, you are powerless to overturn. It should then be possible to orientate yourself, whether as an individual or as part of a group, towards aims that are achievable.

Kevin Marshall

Sea Level Rise Acceleration as a sign of Impending Climate Apocalypse

Global warming alarmism first emerged in the late 1980s, three decades ago. Put very simply, the claim is that climate change, resulting from human-caused increases in trace gases, is a BIG potential problem, and that the BIG solution is to reduce global greenhouse gas emissions through co-ordinated global action. The actual evidence shows a curious symmetry. The proponents of alarmism have failed to show that rises in greenhouse gas levels are making a non-trivial difference on a global scale, and the aggregate impact of the policy proposals, if fully implemented, will make only a trivial difference to global emissions pathways. The Adoption of the Paris Agreement communique, paragraph 17, clearly states the failure. My previous post puts forward reasons why the impact of mitigation policies will remain trivial.

In terms of an emerging large problem, the easiest impact of rising average temperatures to visualize, and the most direct, is rising sea levels. Rising temperatures will lead to sea level rise principally through meltwater from the polar ice-caps and thermal expansion of the oceans. Given that sea levels have been rising since the last ice age, if a BIG climate problem is emerging it should be detectable as accelerating sea level rise. If the alarmism is credible, then after 30 years of failure to implement the BIG solution, with unrelenting increases in global emissions and decades of accelerating rises in CO2 levels, there should be a clear response in the form of an acceleration in the rate of sea level rise.

There is a strong debate as to whether sea-level rise is accelerating or not. Dr. Roy Spencer at WUWT makes a case for there being mild acceleration since about 1950. Based on the graph below (from Church and White 2013) he concludes:-

The bottom line is that, even if (1) we assume the Church & White tide gauge data are correct, and (2) 100% of the recent acceleration is due to humans, it leads to only 0.3 inches per decade that is our fault, a total of 2 inches since 1950.

As Judith Curry mentioned in her continuing series of posts on sea level rise, we should heed the words of the famous oceanographer, Carl Wunsch, who said,

“At best, the determination and attribution of global-mean sea-level change lies at the very edge of knowledge and technology. Both systematic and random errors are of concern, the former particularly, because of the changes in technology and sampling methods over the many decades, the latter from the very great spatial and temporal variability. It remains possible that the database is insufficient to compute mean sea-level trends with the accuracy necessary to discuss the impact of global warming, as disappointing as this conclusion may be.”

In metric, the so-called human element of 2 inches since 1950 is 5 centimetres; the total over more than 60 years is less than 15 centimetres. The time available for improving sea defences to cope with this extends way beyond normal human planning horizons. Go to any coastal strip with sea defences, such as the dykes protecting much of the Netherlands, with a tape measure, and imagine raising those defences by 15 centimetres.

However, a far more thorough piece comes from Dave Burton (of Sealevel.info) in three comments. Below is a repost of his comments.

Agreed. On Twitter, or when sloppy and in a hurry, I say “no acceleration.” That’s shorthand for, “There’s been no significant, sustained acceleration in the rate of sea-level rise, over the last nine or more decades, detectable in the measurement data from any of the longest, highest-quality, coastal sea-level records.” Which is right.

That is true at every site with a very long, high-quality measurement record. If you do a quadratic regression over the MSL data, depending on the exact date interval you analyze, you may find either a slight acceleration or deceleration, but unless you choose a starting date prior to the late 1920s, you’ll find no practically-significant difference from perfect linearity. In fact, for the great majority of cases, the acceleration or deceleration doesn’t even manage statistical significance.

What do I mean by “practically-significant,” you might wonder? I mean that, if the acceleration or deceleration continued for a century, it wouldn’t affect sea-level by more than a few inches. That means it’s likely dwarfed by common coastal processes like vertical land motion, sedimentation, and erosion, so it is of no practical significance.

For instance, here’s one of the very best Pacific tide gauges. It is at a nearly ideal location (mid-ocean, which minimizes ENSO effects), on a very tectonically stable island, with very little vertical land motion, and a very trustworthy, 100% continuous, >113-year measurement record (1905/1 through 2018/3):

As you can see, there have been many five-year to ten-year “sloshes-up” and “sloshes-down,” but there’s been no sustained acceleration, and no apparent effect from rising CO2 levels.

The linear trend is +1.482 ±0.212 mm/year (which is perfectly typical).

Quadratic regression calculates an acceleration of -0.00539 ±0.01450 mm/yr².

The minus sign means deceleration, but it is nowhere near statistically significant.

To calculate the effect of a century of sustained acceleration on sea-level, you divide the acceleration by two, and multiply it by the number of years squared, 100² = 10,000. In this case, -0.00539/2 × 10,000 = -27 mm (about one inch).

That illustrates a rule-of-thumb that’s worth memorizing: if you see claimed sea-level acceleration or deceleration numbers on the order of 0.01 mm/yr² or less, you can stop calculating and immediately pronounce it practically insignificant, regardless of whether it is statistically significant.

However, the calculation above actually understates the effect of projecting the quadratic curve out another 100 years, compared to a linear projection, because the starting rate of SLR is wrong. On the quadratic curve, the point of “average” (linear) trend is the midpoint, not the endpoint. So to see the difference at 100 years out, between the linear and quadratic projections, we should calculate from that mid-date, rather than the current date. In this case, that adds 56.6 years, so we should multiply half the acceleration by 156.6² = 24,524.

-0.00539/2 × 24,524 = -66 mm = -2.6 inches (still of no practical significance).

Church & White have been down this “acceleration” road before. Twelve years ago they published the most famous sea-level paper of all, A 20th Century Acceleration in Global Sea-Level Rise, known everywhere as “Church & White (2006).”

It was the first study anywhere which claimed to have detected an acceleration in sea-level rise over the 20th century. Midway through the paper they finally tell us what that 20th century acceleration was:

“For the 20th century alone, the acceleration is smaller at 0.008 ± 0.008 mm/yr² (95%).”

(The paper failed to mention that all of the “20th century acceleration” which their quadratic regression detected had actually occurred prior to the 1930s, but never mind that.)

So, applying the rule-of-thumb above, the first thing you should notice is that 0.008 mm/yr² of acceleration, even if correct, is practically insignificant. It is so tiny that it just plain doesn’t matter.

In 2009 they posted on their web site a new set of averaged sea-level data, from a different set of tide gauges. But they published no paper about it, and I wondered why not. So I duplicated their 2006 paper’s analysis, using their new data, and not only did it, too, show slight deceleration after 1925, all the 20th century acceleration had gone away, too. Even for the full 20th century their data showed a slight (statistically insignificant) deceleration.

My guess is that the reason they wrote no paper about it was that the title would have had to have been something like this:

Church and White (2009), Never mind: no 20th century acceleration in global sea-level rise, after all.
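
Burton’s quadratic-regression test is straightforward to reproduce. Below is a minimal sketch using numpy; the series is synthetic, standing in for a real monthly tide-gauge record such as the Honolulu data he cites, so the exact output is illustrative only.

```python
# Quadratic-regression check for sea-level acceleration, per Burton.
# 'msl' is synthetic stand-in data; substitute real monthly mean-sea-level
# readings (in mm) and their dates from a long tide-gauge record.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(0, 113, 1 / 12)                  # years since start of record
msl = 1.482 * t + rng.normal(0, 25, t.size)    # ~1.482 mm/yr trend + noise

a, b, c = np.polyfit(t, msl, 2)                # fit msl = a*t^2 + b*t + c
accel = 2 * a                                  # acceleration, mm/yr^2

# Burton's rule of thumb: a century of sustained acceleration changes
# sea level by (accel / 2) * 100^2 millimetres.
print(f"acceleration: {accel:+.5f} mm/yr^2")
print(f"100-year effect: {accel / 2 * 100**2:+.0f} mm")
```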

There is no real disagreement between the two accounts. Roy Spencer is saying that if the Church and White paper is correct there is trivial acceleration; Dave Burton is making the more general point that there is no statistically significant acceleration or deceleration in any of the long, high-quality data sets.

At Key West in low-lying Florida, the pattern of near-constant sea level rise over the past century is similar to Honolulu’s. The rate of rise is about 50% greater, at 9 inches per century, and more in line with the long-term global average from tide gauges. Given that Hawaii is a growing volcanic island, this should not come as a surprise.

I chose Key West in Florida because, supposedly by projecting from this real data along with climate models, the Miami-Dade Sea Level Rise Task Force produced the following Unified Sea Level Rise Projection.

The projections of significant acceleration in the rate of sea level rise are at odds with the historical data; any such acceleration should already be discernible, as the projection period includes over two decades of actual data. Further, as the IPCC AR5 RCP8.5 scenario is the projection without climate mitigation policy, the implied assumption of this adaptation report is that climate mitigation policies will be completely useless. As this graphic is central to the report, it appears that the most biased projections are the ones influencing public policy. Basic validation of theory against modelled trends in the peer-reviewed literature (Dr Roy Spencer) or against actual measured data (Dave Burton) appears to be rejected in favour of beliefs in the mainstream climate consensus.

The curious symmetry of climate alarmism, between the evidence for a BIG potential climate problem and the lack of an agreed BIG mitigation policy solution, is evident in the sea level rise projections. Unfortunately, given that policy is based on the ridiculous projections, it is people outside of the consensus who will suffer. Expensive and unnecessary flood defences will be built, and low-lying areas will be blighted by alarmist reports.


Kevin Marshall


Charles Moore nearly gets Climate Change Politics post Paris Agreement

Charles Moore of the Telegraph has long been one of the towering figures of the mainstream media. In Donald Trump has the courage and wit to look at ‘green’ hysteria and say: no deal (see also at GWPF, Notalotofpeopleknowthat and Tallbloke) he understands not only the impact of Trump withdrawing from the climate agreement on future global emissions, but also recognizes that two other major developed countries – Germany and Japan – whilst committed to reducing their emissions and spending heavily on renewables, are also investing heavily in coal. So, without climate policy, the United States is reducing its emissions, whilst Japan and Germany, with climate commitments, are increasing theirs. However, there is one slight inaccuracy in Charles Moore’s account. He states

As for “Paris”, this is failing, chiefly for the reason that poorer countries won’t decarbonise unless richer ones pay them stupendous sums.

It is worse than this. Many of the poorer countries have not said they will decarbonize. Rather, they have said that they will use the money to reduce emissions relative to a business-as-usual scenario.

Take Pakistan’s INDC. They estimated 2015 emissions at 405 MtCO2e, up from 182 in 1994. As a result of ambitious planned economic growth, they forecast a BAU of 1603 MtCO2e in 2030. However, they can reduce that by 20% with about $40 billion in finance. That is, even with the $40bn, average annual emissions growth from 2015-2030 will still be twice that of 1994-2015. Plus, Pakistan would like $7-$14bn pa for adaptation to climate change. The INDC Table 7 summarizes the figures.

Or Bangladesh’s INDC. Estimated BAU increase in emissions from 2011 to 2030 is 264%. They will unconditionally cut this by 5% and conditionally by a further 15%. The BAU is 7.75% annual emissions growth, cut to 7.5% unconditionally and 6% with lots of finance. The INDC Table 7 summarizes the figures.
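
The growth arithmetic behind both INDCs can be checked with a simple compound-growth calculation. A minimal sketch using the Pakistan figures quoted above:

```python
# Annualised emissions growth implied by Pakistan's INDC figures.
def cagr(start, end, years):
    """Compound annual growth rate between two emission levels."""
    return (end / start) ** (1 / years) - 1

# 182 MtCO2e (1994) -> 405 (2015) -> BAU 1603 (2030),
# or 80% of BAU with the conditional 20% reduction.
print(f"1994-2015:           {cagr(182, 405, 21):.1%} per year")
print(f"2015-2030, with cut: {cagr(405, 0.8 * 1603, 15):.1%} per year")
# ~3.9% vs ~8.0% per year: roughly double, as stated above.
```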

I do not blame either country for taking such an approach, nor the many others adopting similar strategies. They are basically saying that they will do nothing that impedes trying to raise living standards through high levels of sustained economic growth. They will play the climate change game, so long as nobody demands that their Governments compromise on serving the best interests of their peoples. If only the Governments of the so-called developed nations would play similar games, rather than imposing useless burdens on the people they are supposed to be serving.

There is another category of countries that will not undertake to reduce their emissions – the OPEC members. Saudi Arabia, Iran, Venezuela, Kuwait, UAE and Qatar have all made submissions. Only Iran gives a figure: it will unilaterally cut emissions by 4% against BAU. With the removal of “unjust sanctions” and some financial assistance and technology transfer, its conditional offer would be much more. But nowhere is the BAU scenario stated in figures. The reason these OPEC countries will not play ball is quite obvious. To achieve the IPCC objective of constraining warming to 2°C would, according to McGlade and Ekins 2015 (The geographical distribution of fossil fuels unused when limiting global warming to 2°C), mean leaving 75% of proven reserves of fossil fuels in the ground, along with all of the unproven reserves. I did an approximate breakdown by major country last year, using the BP Statistical Review of World Energy 2016.

It does not take a genius to work out that meeting the 2°C climate mitigation target would shut down a major part of the economies of fossil-fuel-producing countries within about two decades. No-one has proposed either compensating them or finding alternatives.

But the climate alarmist community are too caught up in their Groupthink to notice the obvious huge harms that implementing global climate mitigation policies would entail.

Kevin Marshall

Did Brexit Influence the General Election 2017 Result?

In the year following the EU Referendum, I wrote a number of posts utilizing Chris Hanretty’s estimates of the vote split by constituency for England and Wales. Hanretty estimates that 421 of the 573 constituencies in England and Wales voted to leave. These estimates were necessary as the vote was counted by different – and mostly larger – areas than the parliamentary constituencies.

Politically, my major conclusion was that it was the Labour Party who could potentially suffer more from Brexit. There are two major reasons for this situation.

First, the Labour constituencies had a far greater spread of views than the Conservative constituencies, both in the divergence between regions and in the disproportionate numbers of constituencies that were either extreme Remain or extreme Leave in the referendum. Figure 1 gives the results for constituencies with Conservative MPs in 2016, and Figure 2 for constituencies with Labour MPs.

Figure 1: Constituencies in England and Wales with Conservative MPs in 2016, by estimated Leave or Remain Band. 

Figure 2: Constituencies in England and Wales with Labour Party MPs in 2016, by estimated Leave or Remain Band. 

In particular, London, where much of the current Labour leadership is based, has views on the EU diametrically opposed to those of the regions where most of the traditional Labour vote resides. Further analysis, from July 2016, is here.

Second is the profile of the Leave supporters. Based on an extensive poll conducted by Lord Ashcroft on EU Referendum day, Leave support was especially strong among those retired on a State Pension, council and housing association tenants, those whose formal education did not progress beyond secondary school, and the C2DEs. That is, groups that traditionally disproportionately vote Labour. Further details, from May 2017, are here.

Yet, the results of the snap General Election in June 2017 suggest that it was the Conservatives that suffered from Brexit. Despite their share of the popular vote increasing by over 5%, to the highest share in 25 years, they had a net loss of 13 seats and lost their majority. Labour increased their share of the vote by 10%, but only had a net gain of 30 seats.

Do positions on Brexit appear to have had an influence? The Conservatives were seeking a stronger mandate for the Brexit negotiations, whilst Labour studiously avoided taking a firm position one way or the other. Chris Hanretty has since revised his estimates, with the number of Leave-majority constituencies in England and Wales reduced from 421 to 401; the general picture is unchanged from the previous analysis. I have taken these revised figures, put them into the eight bands used previously, and compared them to the full election results available from the House of Commons Library.

The main seat results are in Figure 3.

Main points from Figure 3 (for England and Wales) are

  • Conservatives had a net loss of 25 seats, 14 of which likely voted Remain in the EU Referendum and 11 likely voted Leave. Remain seats reduced by 18% and Leave seats by 4%.
  • All 6 gains from Labour were in strongly Leave constituencies. This includes Copeland, which was gained in a by-election in early 2017 and retained in the General Election.
  • Labour had a net gain of 24 seats, 13 of which likely voted Remain in the EU Referendum and 11 likely voted Leave. Remain seats increased by 16% and Leave seats by 7%.

Figure 4 is the average percentage change in the constituency vote from 2015 to 2017 for the Conservative Party.

Main point from Figure 4 for the Conservative Party is

  • The estimated Referendum vote is a strong predictor of change in Conservative Party vote share from 2015 to 2017 General Election.

Figure 5 is the average percentage change in the constituency vote from 2015 to 2017 for the Labour Party.

Main points from Figure 5 for the Labour Party are

  • Overall average constituency vote share increased by 10% on the 2015 General Election.
  • In the 6 seats lost to the Conservatives, Labour’s share of the vote increased.
  • In every area, Labour increased its share of the constituency vote with one exception. In the 6 seats that the Liberal Democrats gained from the Conservatives, the Labour share of the vote was on average unchanged. This suggests some tactical voting.
  • In Conservative “hold” seats Labour’s increase in vote share did not have a “Remain” bias.
  • In Labour “hold” seats Labour’s increase in vote share had a strong “Remain” bias.

In summary, it would appear that the Conservatives, in implementing Brexit, have mostly suffered at the ballot box in Remain areas. Labour, being the Party of Opposition and avoiding a clear position on Brexit, benefited from Remain support without being deserted by the Leave vote. I will leave it for another day – and for others – to draw out further conclusions.

Kevin Marshall

Update 23rd May

Whilst writing the above, I was unaware of a report produced by the political pundit Prof John Curtice last December, Has Brexit Reshaped British Politics?

Key findings

In the 2017 election the Conservatives gained support amongst Leave voters but fell back amongst Remain supporters. Labour, in contrast, advanced more strongly amongst Remain than amongst Leave voters.

That is pretty much my own finding, reached by a different method. The two methods can produce different insights; my approach allows analysis by region.

Does data coverage impact the HADCRUT4 and NASA GISS Temperature Anomalies?

Introduction

This post started with the title “HADCRUT4 and NASA GISS Temperature Anomalies – a Comparison by Latitude”. After deriving a global temperature anomaly from the HADCRUT4 gridded data, I was intending to compare the results with GISS’s anomalies by 8 latitude zones. However, this opened up an intriguing issue: are global temperature anomalies impacted by a relative lack of data in earlier periods? This leads to a further issue of whether infilling of the data can be meaningful, and hence be considered to “improve” the global anomaly calculation.

A Global Temperature Anomaly from HADCRUT4 Gridded Data

In a previous post, I looked at the relative magnitudes of early twentieth century and post-1975 warming episodes. In the Hadley datasets, there is a clear divergence between the land and sea temperature data trends post-1980, a feature that is not present in the early warming episode. This is reproduced below as Figure 1.

Figure 1 : Graph of Hadley Centre 7 year moving average temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)

The question that needs to be answered is whether the anomalous post-1975 warming on the land is due to real divergence, or due to issues in the estimation of global average temperature anomaly.

In another post – The magnitude of Early Twentieth Century Warming relative to Post-1975 Warming – I looked at the NASA Gistemp data, which is usefully broken down into 8 Latitude Zones. A summary graph is shown in Figure 2.

Figure 2 : NASA Gistemp zonal anomalies and the global anomaly

This is more detail than the HADCRUT4 data, which is just presented as three zones: the Tropics, plus the Northern and Southern Hemispheres. However, the Hadley Centre, on their HADCRUT4 Data: download page, have, under HadCRUT4 Gridded data: additional fields, a file HadCRUT.4.6.0.0.median_ascii.zip. This contains monthly anomalies for 5° by 5° grid cells from 1850 to 2017. There are 36 zones of latitude and 72 zones of longitude. Over 2016 months, there are over 5.22 million grid cells, but only 2.51 million (48%) have data. From this data, I have constructed a global temperature anomaly. The major issue in the calculation is that the grid cells are of different areas: a grid cell nearest the equator, at 0° to 5°, has about 23 times the area of a grid cell adjacent to the poles, at 85° to 90°. I used the appropriate weighting for each band of latitude, as sketched below.
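
A minimal sketch of that weighting, using the standard result that a band’s area is proportional to sin(upper edge) − sin(lower edge). The toy array stands in for one month of the gridded file; cells without data are simply excluded from the average.

```python
# Area-weighted global mean from a 36 x 72 (5-degree) anomaly grid.
# 'anomalies' is toy data; in practice it would be one month from the
# HadCRUT.4.6.0.0.median ascii file, with NaN marking empty cells.
import numpy as np

edges = np.deg2rad(np.arange(-90, 91, 5))            # 37 band edges
band_area = np.sin(edges[1:]) - np.sin(edges[:-1])   # 36 relative band areas

anomalies = np.full((36, 72), np.nan)                # latitude x longitude
anomalies[10:26, :] = 0.5                            # toy: data in mid-bands only

weights = np.broadcast_to(band_area[:, None], anomalies.shape)
have_data = ~np.isnan(anomalies)
global_mean = ((anomalies[have_data] * weights[have_data]).sum()
               / weights[have_data].sum())
print(f"global anomaly: {global_mean:+.2f}C")        # +0.50C for the toy data
```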

The question is whether I have calculated a global anomaly similar to the Hadley Centre’s. Figure 3 is a reconciliation between the published global anomaly mean (available from here) and my own.

Figure 3 : Reconciliation between HADCRUT4 published mean and calculated weighted average mean from the Gridded Data

Prior to 1910, my calculations are slightly below the HADCRUT4 published data. The biggest differences are in 1956 and 1915. Overall the differences are insignificant and do not impact the analysis.

I split the HADCRUT4 temperature data into eight zones of latitude on a similar basis to NASA Gistemp. Figure 4 presents the results on the same basis as Figure 2.

Figure 4 : Zonal surface temperature anomalies and the global anomaly calculated using the HADCRUT4 gridded data.

Visually, there are a number of differences between the Gistemp and HADCRUT4-derived zonal trends.

A potential problem with the global average calculation

The major reason for differences between HADCRUT4 & Gistemp is that the latter has infilled estimated data into areas where there is no data. Could this be a problem?

In Figure 5, I have shown the build-up in global coverage, that is, the percentage of 5° by 5° grid cells with an anomaly in the monthly data.

Figure 5 : Change in the percentage coverage of each zone in the HADCRUT4 gridded data.

Figure 5 shows a build-up in data coverage during the late nineteenth and early twentieth centuries. The World Wars (1914-1918 & 1939-1945) had the biggest impact on Southern Hemisphere data collection. This is unsurprising when one considers that they were mostly fought in the Northern Hemisphere, and that the European powers withdrew resources from their far-flung empires to protect the mother countries. The only zones with significantly less than 90% grid coverage in the post-1975 warming period are the Arctic and the region below 45S; that is around 19% of the global area.

Finally, comparing comparable zones in the Northern and Southern Hemispheres, the tropics have similar coverage, whilst for the polar, temperate and mid-latitude areas the Northern Hemisphere has better coverage after 1910.

This variation in coverage can potentially lead to wide discrepancies between any calculated temperature anomaly and a theoretical anomaly based upon data in all the 5° by 5° grid cells. As an extreme example, in my own calculation, if just one of the 72 grid cells in a band of latitude had a figure, then an “average” would have been calculated for that month for a band stretching right around the world and 555km (345 miles) from north to south. In the annual figures by zone, it only requires one of the 72 grid cells, in one of the months, in one of the bands of latitude to have data to calculate an annual anomaly. For the tropics or the polar areas, that is just one in 4320 data points. This issue will impact the early twentieth-century warming episode far more than the post-1975 one. Although I would expect the Hadley Centre to have cleaned up the more egregious examples in their calculation, the lack of data in grid cells could have quite random impacts, biasing the global temperature anomaly trends to an unknown, but significant, extent. How this could play out can be appreciated from an example using NASA GISS Global Maps, and from the small sketch below.
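
To make the mechanism concrete, here is a tiny illustration (toy numbers, not HADCRUT4 data) of how averaging over only the cells that report lets a single cell speak for an entire band:

```python
# One reporting cell out of 72 fixes the "average" for a whole band.
import numpy as np

band = np.full(72, np.nan)   # one 5-degree latitude band, one month
band[0] = 2.3                # a single cell with data

print(np.nanmean(band))      # 2.3 -- taken as the anomaly for the band
```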

NASA GISS Global Maps Temperature Trends Example

NASA GISS Global Maps from GHCN v3 data provide maps of the calculated change in average temperatures. I have run the maps comparing annual data for 1940 with a baseline of 1881-1910, capturing much of the early twentieth-century warming, at both 1200km and 250km smoothing.

Figure 6 : NASA GISS Global anomaly Map and average anomaly by Latitude comparing 1940 with a baseline of 1881 to 1910 and a 1200km smoothing radius

Figure 7 : NASA GISS Global anomaly Map and average anomaly by Latitude comparing 1940 with a baseline of 1881 to 1910 and a 250km smoothing radius. 

With respect to the maps in figures 6 & 7

  • There is no apparent difference in the sea data between the 1200km and 250km smoothing radii, except in the polar regions, where the former has more cover. The differences lie in the land areas.
  • The grey areas with insufficient data all apply to the land or ocean areas in polar regions.
  • Figure 6, with 1200km smoothing, has most of the land infilled, whilst the 250km smoothing shows the lack of data coverage for much of South America, Africa, the Middle East, South-East Asia and Greenland.

Even with these land-based differences in coverage, it is clear from either map that at any latitude there are huge variations in calculated average temperature change. For instance, take 40N. This line of latitude runs north of San Francisco on the US West Coast and clips Philadelphia on the East Coast; on the other side of the Atlantic, Madrid, Ankara and Beijing are at about 40N. There are significant points on the line of latitude with estimated warming greater than 1°C (e.g. California), whilst at the same time in Eastern Europe cooling may have exceeded 1°C in the period. More extreme is 60N (Southern Alaska, Stockholm, St Petersburg), where the difference in temperature change along the line of latitude is over 3°C. This compares to a calculated global rise of 0.40°C.

This lack of data may have contributed (along with a faulty algorithm) to the differences in the zonal mean charts by latitude. The 1200km smoothing radius chart bears little relation to the 250km one. For instance:-

  • 1200km shows 1.5°C warming at 45S, 250km about zero. 45S cuts through the South Island of New Zealand.
  • From the equator to 45N, 1200km shows a rise from 0.5°C to over 2.0°C, whilst 250km shows a drop from less than 0.5°C to near zero, then a rise to 0.2°C. At around 45N lie Ottawa, Maine, Bordeaux, Belgrade, Crimea and the most northern point of Japan.

The differences in the NASA GISS maps, in a period when available data covered only around half of the 2592 5° by 5° grid cells, indicate huge differences in trends between different areas. As a consequence, interpolating warming trends from one area to adjacent areas appears to give quite different results in terms of trends by latitude.

Conclusions and Further Questions

The issue I originally focussed upon was the relative size of the early twentieth-century warming compared to the post-1975 warming. The greater warming in the later period seemed to be due to the greater warming on land, which covers just 30% of the total global area. The sea-surface warming phases appear to be pretty much the same.

The issue I then focussed upon was data coverage. The early twentieth century had much less data coverage than the period after 1975. Further, the Southern Hemisphere had worse data coverage than the Northern Hemisphere, except in the Tropics. This means that in my calculation of a global temperature anomaly from the HADCRUT4 gridded data (which in aggregate was very similar to the published HADCRUT4 anomaly), the average by latitude will not be comparing like with like in the two warming periods. In particular, in the early twentieth century a calculation by latitude will not average right around the globe, but only over a limited selection of bands of longitude. On average this was about half, but there are massive variations. This would be alright if the changes in anomalies at a given latitude were roughly the same over time. But an examination of NASA GISS global maps for a period covering the early twentieth-century warming phase reveals that trends in anomalies at the same latitude are quite different. This implies that there could be large, but unknown, biases in the data.

I do not believe the analysis ends here. There are a number of areas that I (or others) can try to explore.

  1. Does the NASA GISS infilling of the data get us closer to, or further away from, what a global temperature anomaly would look like with full data coverage? My guess, based on the extreme example of the Antarctica trends (discussed here), is that the infilling will move us away from the more perfect trend. The data could show otherwise.
  2. Are the changes in data coverage on land more significant than the global average or less? Looking at CRUTEM4 data could resolve this question.
  3. Would anomalies based upon similar grid coverage after 1900 give different relative trend patterns to the published ones based on dissimilar grid coverage?

Whether I get the time to analyze these is another issue.

Finally, the problem of trends varying considerably and quite randomly across the globe is the same issue that I found with land data homogenisation, discussed here and here. To derive a temperature anomaly for a grid cell, it is necessary to make the data homogeneous. Standard homogenisation techniques assume that the underlying trends in an area are pretty much the same, so that any differences in trend between adjacent temperature stations are the result of data imperfections. I found numerous examples where there were likely real differences in trend between adjacent temperature stations. Homogenisation will therefore eliminate real but local climatic trends. Averaging incomplete global data, where the missing data could contain unknown regional trends, may cause biases at a global scale.

Kevin Marshall


More Coal-Fired Power Stations in Asia

A lovely feature of the GWPF site is its extracts of articles related to all aspects of climate and related energy policies. Yesterday the GWPF extracted from an opinion piece in the Hong Kong-based South China Morning Post A new coal war frontier emerges as China and Japan compete for energy projects in Southeast Asia.
The GWPF’s summary:-

Southeast Asia’s appetite for coal has spurred a new geopolitical rivalry between China and Japan as the two countries race to provide high-efficiency, low-emission technology. More than 1,600 coal plants are scheduled to be built by Chinese corporations in over 62 countries. It will make China the world’s primary provider of high-efficiency, low-emission technology.

A summary point in the article is not entirely accurate. (Italics mine)

Because policymakers still regard coal as more affordable than renewables, Southeast Asia’s industrialisation continues to consume large amounts of it. To lift 630 million people out of poverty, advanced coal technologies are considered vital for the region’s continued development while allowing for a reduction in carbon emissions.

Replacing a less efficient coal-fired power station with one using the latest technology will reduce carbon (i.e. CO2) emissions per unit of electricity produced. In China, the efficiency savings from this replacement process may outstrip the growth in power supply from fossil fuels. But in the rest of Asia, the new coal-fired power stations will mostly be additional capacity in the coming decades, and so will lead to an increase in CO2 emissions. It is this additional capacity that will be primarily responsible for driving the economic growth that will lift the poor out of extreme poverty.

The newer technologies are important for other types of emissions: the particulate emissions that have caused high levels of choking pollution and smog in many cities of China and India. By using the new technologies, other countries can avoid the worst excesses of this pollution whilst still using a cheap fuel available from many different sources of supply. The thrust in China will likely be to replace the high-pollution power stations with new technologies, or to adapt them to reduce emissions and increase efficiency. Politically, it is a different way of raising living standards and quality of life than increasing real disposable income per capita.

Kevin Marshall


HADCRUT4, CRUTEM4 and HADSST3 Compared

In the previous post, I compared early twentieth-century warming with the post-1975 warming in the Berkeley Earth Global temperature anomaly. From a visual inspection of the graphs, I determined that the greater warming in the later period is due to more land-based warming, as the warming in the oceans (70% of the global area) was very much the same. The Berkeley Earth data ends in 2013, so does not include the impact of the strong El Niño event in the last three years.

The Global average temperature series page of the Met Office Hadley Centre Observation Datasets has the average annual temperature anomalies for CRUTEM4 (land-surface air temperature), HADSST3 (sea-surface temperature) and HADCRUT4 (combined). From these datasets, I have derived the graph in Figure 1; a sketch of the derivation follows the figure.

Figure 1 : Graph of Hadley Centre annual temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)
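For those wanting to reproduce the graph, the sketch below shows one way in Python. It assumes the three annual series have been downloaded as local text files in which the first two whitespace-separated columns are the year and the annual anomaly; check the file headers, as both the layout and the file names are assumptions on my part.

```python
# A minimal sketch of reproducing Figure 1, assuming local copies of the
# three Hadley Centre annual series (file names are placeholders).
import pandas as pd
import matplotlib.pyplot as plt

def read_hadley_annual(path: str) -> pd.Series:
    """Read year and annual anomaly from a whitespace-separated text file."""
    df = pd.read_csv(path, sep=r"\s+", header=None, usecols=[0, 1],
                     names=["year", "anomaly"])
    return df.set_index("year")["anomaly"]

series = {
    "CRUTEM4 (land)": read_hadley_annual("crutem4_annual.txt"),
    "HADSST3 (sea)": read_hadley_annual("hadsst3_annual.txt"),
    "HADCRUT4 (combined)": read_hadley_annual("hadcrut4_annual.txt"),
}

fig, ax = plt.subplots()
for label, s in series.items():
    ax.plot(s.index, s.values, label=label)
ax.set_xlabel("Year")
ax.set_ylabel("Temperature anomaly (°C)")
ax.legend()
plt.show()
```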

  Comparing the early twentieth-century warming period with 1975-2010:

  • Land warming is considerably greater in the later period.
  • Combined land and sea warming is slightly more in the later period.
  • Sea surface warming is slightly less in the later period.
  • In the early period, the surface anomalies for land and sea have very similar trends, whilst in the later period, the warming of the land is considerably greater than the sea surface warming.

The impact is more clearly shown with the 7-year centred moving averages in Figure 2 (the calculation is sketched after the figure).

Figure 2 : Graph of Hadley Centre 7 year moving average temperature anomalies for Land (CRUTEM4), Sea (HADSST3) and Combined (HADCRUT4)
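The smoothing itself is a one-liner: pandas’ rolling() with center=True gives the 7-year centred moving averages from the annual series loaded in the previous sketch.

```python
# 7-year centred moving averages of the annual series from the earlier
# sketch; center=True aligns each average with the middle year of its window.
smoothed = {label: s.rolling(window=7, center=True).mean()
            for label, s in series.items()}
```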

This is not just a feature of the HADCRUT dataset. NOAA Global Surface Temperature Anomalies for land, ocean and combined show similar patterns. Figure 3 is on the same basis as Figure 2.

Figure 3 : Graph of NOAA 7 year moving average temperature anomalies for Land, Ocean and Combined.

The major common feature is that the estimated land temperature anomalies have shown a much greater warming trend than the sea surface anomalies since 1980, whereas no such divergence existed in the early twentieth-century warming period. Given that the temperature data sets are far from complete in terms of coverage, and the data is of variable quality, is this divergence a reflection of the true average temperature anomalies, as far more complete and accurate data would show them? There are a number of alternative possibilities that need to be investigated to help determine (using beancounter terminology) whether the estimates are a true and fair reflection of the picture that more perfect data and techniques would provide. My list might be far from exhaustive; a sketch of the basic divergence check follows it.

  1. The sea-surface temperature set understates the post-1975 warming trend due to biases within the data set.
  2. The spatial distribution of data changed considerably over time. For instance, in recent decades more data has become available from the Arctic, a region with the largest temperature increases in both the early twentieth century and post-1975.
  3. Land data homogenization techniques may have suppressed differences in climate trends where data is sparser. Alternatively, if relative differences in climatic trends between nearby locations increase over time, then the further back in time homogenization extends, the more these differences are accentuated and therefore the greater the suppression of genuine climatic differences. These aspects I discussed here and here.
  4. There is deliberate manipulation of the data to exaggerate recent warming. Having looked at numerous examples three years ago, I do not believe this has had any significant impact. However, simply believing something not to be the case, even with examples, does not prove its absence.
  5. Strong beliefs about how the data should look have, over time and through multiple data adjustments, created biases within the land temperature anomalies.
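Before weighing these possibilities, the divergence itself is easy to quantify. The sketch below reuses read_hadley_annual() from the earlier sketch and fits ordinary least-squares trends to the land and sea series over the two warming periods. The period boundaries are those used in these posts; they are a judgment call, not a given.

```python
# A sketch of the basic land-sea divergence check, assuming the helper
# read_hadley_annual() and local files from the earlier sketch.
import numpy as np

def period_trend(series, start, end):
    """OLS trend in °C per decade over the years [start, end]."""
    window = series.loc[start:end].dropna()
    return np.polyfit(window.index.astype(float), window.values, 1)[0] * 10

land = read_hadley_annual("crutem4_annual.txt")
sea = read_hadley_annual("hadsst3_annual.txt")

for start, end in [(1910, 1940), (1975, 2010)]:
    gap = period_trend(land, start, end) - period_trend(sea, start, end)
    print(f"{start}-{end}: land minus sea trend = {gap:+.3f} °C/decade")
```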

What I do believe is that an expert opinion as to whether this divergence between the land and sea surface anomalies is a “true and fair view” of the actual state of affairs can only be reached by a detailed examination of the data. Jumping to conclusions – something evident from many people across the broad spectrum of opinion in the catastrophic anthropogenic global warming debate – will fall short of the most rounded opinion that can be gleaned from the data.

Kevin Marshall


The magnitude of Early Twentieth Century Warming relative to Post-1975 Warming

I was browsing the Berkeley Earth website and came across their estimate of global average temperature change. Reproduced as Figure 1.

Figure 1 – BEST Global Temperature anomaly

What clearly stands out is the 10-year moving average line. It shows the warming of the early twentieth century (the period 1910 to 1940) to be very similar, in both duration and magnitude, to the warming from the mid-1970s to the end of the series. Maybe the later warming period is up to one-tenth of a degree Celsius greater than the earlier one. The period from 1850 to 1910 shows stasis or slight cooling, but with high variability. The period from the 1940s to the 1970s shows stasis or slight cooling, with low variability.

This is largely corroborated by HADCRUT4, or at least the version I downloaded in mid-2014.

Figure 2 – HADCRUT4 Global Temperature anomaly

HADCRUT4 estimates that the later warming period is about three-twentieths (0.15) of a degree Celsius greater than the earlier period, and shows slightly less recent warming than the BEST data.

The reason for the close fit is obvious: 70% of the globe is ocean, and for that portion BEST uses the same HADSST dataset as HADCRUT4. Graphics of HADSST are a little hard to come by, but KevinC at skepticalscience usefully produced a comparison of the latest HADSST3 in 2012 with the previous version.

Figure 3  – HADSST Ocean Temperature anomaly from skepticalscience 

This shows the two periods having pretty much the same magnitudes of warming.

It is the land data where the differences lie. The BEST Global Land temperature trend is reproduced below.

Figure 4 – BEST Global Land Temperature anomaly

For BEST global land temperatures, the recent warming was much greater than the early twentieth-century warming. Combined with the similar overall warming in the two periods, this implies that the sea surface temperatures showed pretty much the same warming in both, consistent with Figure 3. But if greenhouse gases were responsible for a significant part of global warming, then the warming for both land and sea should be greater after the mid-1970s than in the early twentieth century. Whilst there was a rise in GHG levels in the early twentieth century, it was less than in the period from 1945 to 1975, when there was no warming, and much less than in the post-1975 period, when CO2 levels rose massively. There can be alternative explanations for the early twentieth-century warming and for the subsequent thirty-year absence of warming (a period in which the post-WW2 economic boom led to a continual and accelerating rise in CO2 levels), but without such explanations being clear and robust, the attribution of post-1975 warming to rising GHG levels is undermined. It could be just unexplained natural variation.

However, as a preliminary to examining explanations of the warming trends, as a beancounter I believe it is first necessary to examine the robustness of the figures. In looking at temperature data in early 2015, one aspect that I found unsatisfactory in the NASA GISS temperature data was the zonal data. GISS usefully divides the data between 8 bands of latitude, which I have replicated as 7-year centred moving averages in Figure 5 (the area weighting of the bands is sketched after the figure).

Figure 5 – NASA Gistemp zonal anomalies and the global anomaly
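As an aside on how eight zonal anomalies combine into one global figure: on a sphere, the area weight of a latitude band is proportional to the difference of the sines of its bounding latitudes. The short sketch below computes the weights for the GISS bands; it also reproduces the figure of just under 5% for the 90S-64S band mentioned below.

```python
# Area weights for the eight GISS latitude bands. The fraction of the
# sphere's surface between two latitudes is (sin(north) - sin(south)) / 2.
import numpy as np

BANDS = [(-90, -64), (-64, -44), (-44, -24), (-24, 0),
         (0, 24), (24, 44), (44, 64), (64, 90)]

def band_weight(south: float, north: float) -> float:
    """Fraction of the Earth's surface between two latitudes."""
    return (np.sin(np.radians(north)) - np.sin(np.radians(south))) / 2

weights = np.array([band_weight(s, n) for s, n in BANDS])
print(weights.round(4), weights.sum())  # 90S-64S is ~0.0506; weights sum to 1

# For one year, given an array of the eight zonal anomalies:
# global_anomaly = np.dot(weights, zonal_anomalies)
```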

What is significant is that some of the zonal anomalies are far greater in magnitude than the global anomaly.

The most southerly band is for 90S-64S, which is basically Antarctica, an area covering just under 5% of the globe. I found it odd that there should be a temperature anomaly for this region from the 1880s, when there were no weather stations recording on the frozen continent until the mid-1950s. The nearest station is Base Orcadas, located at 60.8S 44.7W, about 350km north of 64S. I found that whilst the Base Orcadas temperature anomaly was extremely similar to the Antarctic zonal anomaly in the period up to 1950, it was quite dissimilar in the period after.

Figure 6. Gistemp 64S-90S annual temperature anomaly compared to Base Orcadas GISS homogenised data.

NASA Gistemp has attempted to infill the missing temperature anomaly data by using the nearest data available. However, in this case Base Orcadas appears to be climatically different from the average anomalies for Antarctica, and from the global average as well. The effect is to cancel out the impact of the massive warming in the Arctic on global average temperatures in the early twentieth century. A false assumption has effectively shrunk the early twentieth-century warming. The shrinkage will be small, but it undermines the claim that NASA GISS is the best estimate of the global temperature anomaly given the limited data available. The before-and-after comparison can be quantified as sketched below.
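A minimal sketch of quantifying the comparison behind Figure 6: correlate the Base Orcadas annual anomaly with the Gistemp 64S-90S zonal anomaly over the two sub-periods. The file names and column layout are hypothetical stand-ins for the downloaded data.

```python
# Correlation between Base Orcadas and the Gistemp 64S-90S zonal anomaly,
# split at 1950. File names and columns are placeholders for the real data.
import pandas as pd

orcadas = pd.read_csv("orcadas_annual.csv", index_col="year")["anomaly"]
zonal = pd.read_csv("gistemp_64S_90S.csv", index_col="year")["anomaly"]

common = orcadas.index.intersection(zonal.index)
early, late = common[common <= 1950], common[common > 1950]

print("Correlation up to 1950:", orcadas.loc[early].corr(zonal.loc[early]))
print("Correlation after 1950:", orcadas.loc[late].corr(zonal.loc[late]))
```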

Rather than saying that the whole exercise of making a valid comparison between the two warming periods since 1900 is useless, I will instead attempt to evaluate how much the lack of data impacts the anomalies. To this end, in a series of posts, I intend to look at the HADCRUT4 anomaly data. This will be a top-down approach, looking at monthly anomalies for 5° by 5° grid cells from 1850 to 2017, available from the Met Office Hadley Centre Observation Datasets. An advantage over previous analyses is the inclusion of anomalies for the 70% of the globe covered by ocean. The focus will be on the relative magnitudes of the early twentieth-century and post-1975 warming periods. At this point, I have no real idea of the conclusions that can be drawn from the analysis; the basic averaging step involved is sketched below.
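As a sketch of that averaging step, assume the gridded anomalies have been read into a (time, latitude, longitude) NumPy array with NaN for missing cells (the Hadley Centre supplies the grid as netCDF, readable with xarray or netCDF4). A cosine-of-latitude weighted mean that skips empty cells then looks like this:

```python
# Cosine-of-latitude weighted global mean of gridded anomalies, skipping
# missing cells. 'anomalies' has shape (time, lat, lon); 'lats' holds the
# 36 grid-cell centre latitudes of a 5-degree grid.
import numpy as np

def global_mean(anomalies: np.ndarray, lats: np.ndarray) -> np.ndarray:
    weights = np.cos(np.radians(lats))[None, :, None]  # cell area weight
    weights = np.broadcast_to(weights, anomalies.shape)
    present = ~np.isnan(anomalies)
    weighted_sum = np.where(present, anomalies * weights, 0.0).sum(axis=(1, 2))
    weight_sum = np.where(present, weights, 0.0).sum(axis=(1, 2))
    return weighted_sum / weight_sum  # NaN for months with no data at all
```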

Kevin Marshall


Ocean Impact on Temperature Data and Temperature Homogenization

Pierre Gosselin’s notrickszone looks at a new paper.

Temperature trends with reduced impact of ocean air temperature – Frank Lansner and Jens Olaf Pepke Pedersen.

The paper’s abstract:

Temperature data 1900–2010 from meteorological stations across the world have been analyzed and it has been found that all land areas generally have two different valid temperature trends. Coastal stations and hill stations facing ocean winds are normally more warm-trended than the valley stations that are sheltered from dominant oceans winds.

Thus, we found that in any area with variation in the topography, we can divide the stations into the more warm trended ocean air-affected stations, and the more cold-trended ocean air-sheltered stations. We find that the distinction between ocean air-affected and ocean air-sheltered stations can be used to identify the influence of the oceans on land surface. We can then use this knowledge as a tool to better study climate variability on the land surface without the moderating effects of the ocean.

We find a lack of warming in the ocean air sheltered temperature data – with less impact of ocean temperature trends – after 1950. The lack of warming in the ocean air sheltered temperature trends after 1950 should be considered when evaluating the climatic effects of changes in the Earth’s atmospheric trace amounts of greenhouse gasses as well as variations in solar conditions.

More generally, the paper’s authors are saying that over fairly short distances temperature stations will show different climatic trends. This has a profound implication for temperature homogenization. From Venema et al 2012.

The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities (Conrad and Pollak, 1950). In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations. 

Lansner and Pedersen are, by implication, demonstrating that the principal assumption on which homogenization is based (that nearby temperature stations are exposed to almost the same climatic signal) is not valid. As a result, data homogenization will not only eliminate biases in the temperature data (such as measurement biases, the impacts of station moves, and the urban heat island effect where it affects a minority of stations) but will also adjust out actual climatic trends. Where the climatic trends are localized and not replicated in surrounding areas, they will be eliminated by homogenization. What I found in early 2015 (following the examples of Paul Homewood, Euan Mearns and others) is that there are examples from all over the world where the data suggests that nearby temperature stations are exposed to different climatic signals. Data homogenization will, therefore, produce quite weird and unstable results. A number of posts were summarized in my post Defining “Temperature Homogenisation”. Paul Matthews at Cliscep corroborated this in his post of February 2017 “Instability of GHCN Adjustment Algorithm”. A minimal sketch of the relative homogenization approach follows.
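To see why, consider a minimal sketch of the relative homogenization idea described in the Venema et al quote: subtract a composite reference (the mean of neighbouring stations) from a candidate series, then scan the difference for the largest step change. A genuine local difference in climatic trend appears as a drift in exactly this difference series, and so risks being “corrected” away like a real inhomogeneity. The data layout here is a hypothetical stand-in.

```python
# Minimal relative-homogenization sketch: candidate minus composite
# reference, then a crude scan for the largest step in the difference.
import pandas as pd

def difference_series(candidate: pd.Series,
                      neighbours: pd.DataFrame) -> pd.Series:
    """Candidate station minus the mean of its neighbouring stations."""
    return candidate - neighbours.mean(axis=1)

def largest_step(diff: pd.Series):
    """Year at which the before/after means of the difference diverge most."""
    best_year, best_gap = None, 0.0
    for year in diff.index[5:-5]:          # avoid tiny end segments
        gap = abs(diff.loc[:year].mean() - diff.loc[year + 1:].mean())
        if gap > best_gap:
            best_year, best_gap = year, gap
    return best_year, best_gap
```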

During my attempts to understand the data, I also found that those who support AGW theory not only do not question their assumptions but also have strong shared beliefs in what the data ought to look like. One of the most significant in this context is a Climategate email sent on Mon, 12 Oct 2009 by Kevin Trenberth to Michael Mann of Hockey Stick fame, and copied to Phil Jones of the Hadley Centre, Thomas Karl of NOAA, Gavin Schmidt of NASA GISS, plus others.

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate. (emphasis mine)

Homogenizing data a number of times, and evaluating the unstable results in the context of strongly-held beliefs, will bring the trends ever more into line with those beliefs. No conspiracy of deliberate data manipulation is required for this pattern of adjustments to emerge. Indeed, a conspiracy – in the sense of a group knowing the truth and deliberately perverting the evidence – does not really apply. Another reason it does not apply lies in the underlying purpose of homogenization, which is to allow a temperature station to be representative of the surrounding area. Without that, it would not be possible to compile an average for the surrounding area, from which the global average is constructed. It is this requirement, in the context of real climatic differences over relatively small areas, that I would suggest leads to the deletion of “erroneous” data and the infilling of estimated data elsewhere.

The gradual bringing of the temperature data sets into line with beliefs is most clearly shown in the NASA GISS temperature data adjustments. Climate4you produces regular updates of the adjustments since May 2008. Below is the March 2018 version.

The reduction of the 1910 to 1940 warming period (which is at odds with theory) and the increase in the post-1975 warming phase (which correlates with the rise in CO2) support my contention about the influence of beliefs.

Kevin Marshall


Climate Alarmist Bob Ward’s poor analysis of Research Data

After Christopher Booker’s excellent new Report for the GWPF “Global Warming: A Case Study In Groupthink” was published on 20th February, Bob Ward (Policy and Communications Director at the Grantham Research Institute on Climate Change and the Environment at the LSE) typed a rebuttal article “Do male climate change ‘sceptics’ have a problem with women?“. Ward commenced the article with a highly misleading statement.

On 20 February, the Global Warming Policy Foundation launched a new pamphlet at the House of Lords, attacking the mainstream media for not giving more coverage to climate change ‘sceptics’.

I will leave it to the reader to judge for themselves how misleading the statement is, by reading the report or alternatively reading his summary at Capx.co.

At Cliscep (reproduced at WUWT), Jaime Jessop has looked into Ward’s distracting claims about GWPF gender bias. This comment by Ward particularly caught my eye.

A tracking survey commissioned by the Department for Business, Energy and Industrial Strategy showed that, in March 2017, 7.6% answered “I don’t think there is such a thing as climate change” or “Climate change is caused entirely caused by natural processes”, when asked for their views. Among men the figure was 8.1%, while for women it was 7.1%.

I looked at the Tracking Survey. It is interesting that the Summary of Key Findings contains no mention of gender bias, nor of beliefs on climate change. It is only in the Wave 21 full dataset spreadsheet that you find the results of question 22.

Q22. Thinking about the causes of climate change, which, if any, of the following best describes your opinion?
[INVERT ORDER OF RESPONSES 1-5]
1. Climate change is entirely caused by natural processes
2. Climate change is mainly caused by natural processes
3. Climate change is partly caused by natural processes and partly caused by human activity
4. Climate change is mainly caused by human activity
5. Climate change is entirely caused by human activity
6. I don’t think there is such a thing as climate change.
7. Don’t know
8. No opinion

Note that the order of responses 1 to 5 is inverted, so the first option presented to the respondent is 5, then 4, 3, 2 and 1. There may, therefore, be an inbuilt bias towards overstating the support for climate change being attributed to human activity. But the data is clearly presented, so a quick pivot table was able to check Ward’s results.

The sample was of 2180 – 1090 females and 1090 males. Adding the responses to “I don’t think there is such a thing as climate change” or “Climate change is entirely caused by natural processes”, I get 7.16% for females – (37+41)/1090 – and 8.17% for males – (46+43)/1090. Clearly, Bob Ward has failed to remember what he was taught in high school about rounding.

Another problem is that this is raw data. The opinion pollsters have taken time and care to adjust for various demographic factors by adding a weighting to each line. On this basis, Ward should have reported 6.7% for females, 7.6% for males and 7.1% overall.

More importantly, if males tend to be more sceptical of climate change than females, one would expect them also to be less alarmist than females. But the data says something different. Of the weighted responses, 12.5% of females and 14.5% of males opted for the most extreme alarmist option, “Climate change is entirely caused by human activity”. At this extreme, men are fractionally more alarmist than women, just as they are fractionally more sceptical. That is, men are slightly more extreme in their opinions on climate change (for or against) than women.

The middle ground is the response to “Climate change is partly caused by natural processes and partly caused by human activity“. The weighted response was 44.5% female and 40.7% male, confirming that men are more extreme in their views than women.

There is a further finding that can be drawn. The projections by the IPCC for future unmitigated global warming assume that all, or the vast majority, of global warming since 1850 is human-caused. At most 41.6% of British women and 43.2% of British men agree with this assumption, which underpins the justification for climate mitigation policies.

Below are my summaries. My results are easily replicated by those with an intermediate level of proficiency in Excel, or in Python as sketched below.
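For anyone preferring Python to Excel, this sketch replicates the weighted calculation. The column names (“gender”, “weight”, “q22”) are hypothetical stand-ins; adjust them to the actual headers in the Wave 21 spreadsheet.

```python
# Weighted share of sceptical responses by gender from the survey data.
# Column names are placeholders for the actual spreadsheet headers.
import pandas as pd

df = pd.read_excel("wave21.xlsx")

sceptic = df["q22"].isin([
    "Climate change is entirely caused by natural processes",
    "I don't think there is such a thing as climate change",
])

shares = (df.assign(sceptic_weight=df["weight"] * sceptic)
            .groupby("gender")
            .apply(lambda g: g["sceptic_weight"].sum() / g["weight"].sum()))
print(shares)   # weighted shares; the unweighted raw counts will differ
```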

Learning Note

The most important lesson for understanding data is to analyse it from different perspectives, against different hypotheses. Bob Ward’s claim of a male gender bias towards climate scepticism in an opinion survey becomes, upon a slightly broader analysis, a finding that British males are slightly more extreme and forthright in their views than British females, whether for or against. This has parallels to my conclusion when looking at the 2013 US study The Role of Conspiracist Ideation and Worldviews in Predicting Rejection of Science – Stephan Lewandowsky, Gilles E. Gignac, Klaus Oberauer. There I found that, rather than conspiracist ideation being “associated with the rejection of all scientific propositions tested” as the paper claimed, the data strongly indicated that people with strong opinions on one subject, whether for or against, tend to have strong opinions on other subjects, whether for or against. As with any bias of perspective (ideological, religious, gender, race, social class, national, football team affiliation etc.), the way to counter bias is to concentrate on the data. Opinion polls are a poor starting point, but at least they may report on perspectives outside of one’s own immediate belief systems.

Kevin Marshall