Beliefs and Uncertainty: A Bayesian Primer

Ron Clutz’s introduction, based on a Scientific American article by John Horgan on January 4, 2016, starts to grapple with the issues involved.

The take-home quote from Horgan is on the subject of false positives.

Here is my more general statement of that principle: The plausibility of your belief depends on the degree to which your belief–and only your belief–explains the evidence for it. The more alternative explanations there are for the evidence, the less plausible your belief is. That, to me, is the essence of Bayes’ theorem.

“Alternative explanations” can encompass many things. Your evidence might be erroneous, skewed by a malfunctioning instrument, faulty analysis, confirmation bias, even fraud. Your evidence might be sound but explicable by many beliefs, or hypotheses, other than yours.

In other words, there’s nothing magical about Bayes’ theorem. It boils down to the truism that your belief is only as valid as its evidence. If you have good evidence, Bayes’ theorem can yield good results. If your evidence is flimsy, Bayes’ theorem won’t be of much use. Garbage in, garbage out.
With respect to the question of whether global warming is human caused, there is basically a combination of three elements – (i) human caused (ii) naturally caused (iii) random chaotic variation. There may be a number of sub-elements and an infinite number of combinations, including some elements counteracting others, such as El Nino events counteracting underlying warming. Evaluation of new evidence takes place in the context of explanations arrived at within a community of climatologists with strong shared beliefs that at least 100% of recent warming is due to human GHG emissions. It is that same community which also decides the measurement techniques for assessing the temperature data, the relevant time frames, and the categorization of the new data. With complex decisions, the only clear decision criterion is conformity to the existing consensus conclusions. As a result, the original Bayesian estimates become virtually impervious to new perspectives or evidence that contradicts them.
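To make the Bayesian point concrete, here is a minimal sketch in Python. Every prior and likelihood value below is invented purely for illustration; the point is simply that the posterior for the “human caused” explanation falls as the alternative explanations account for the same evidence almost as well.

```python
# Minimal Bayes' theorem sketch with three competing explanations for a piece
# of warming evidence. All numbers are illustrative assumptions only.

def posterior(priors, likelihoods):
    """Return normalised posterior probabilities via Bayes' theorem."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

priors = {"human": 1/3, "natural": 1/3, "chaotic": 1/3}  # equal initial belief

# Case 1: the evidence is well explained only by the human-caused hypothesis.
case1 = posterior(priors, {"human": 0.9, "natural": 0.2, "chaotic": 0.1})

# Case 2: the alternatives explain the same evidence almost as well.
case2 = posterior(priors, {"human": 0.9, "natural": 0.8, "chaotic": 0.7})

print(case1)  # human dominates (~0.75)
print(case2)  # human only slightly ahead (~0.38)
```

In the first case the evidence discriminates strongly between the explanations; in the second it barely does, which is exactly the point about alternative explanations in the Horgan quote above.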

Science Matters

Those who follow discussions regarding Global Warming and Climate Change have heard from time to time about Bayes’ theorem. And Bayes is quite topical in many aspects of modern society:

Bayesian statistics “are rippling through everything from physics to cancer research, ecology to psychology,” The New York Times reports. Physicists have proposed Bayesian interpretations of quantum mechanics and Bayesian defenses of string and multiverse theories. Philosophers assert that science as a whole can be viewed as a Bayesian process, and that Bayes can distinguish science from pseudoscience more precisely than falsification, the method popularized by Karl Popper.

Named after its inventor, the 18th-century Presbyterian minister Thomas Bayes, Bayes’ theorem is a method for calculating the validity of beliefs (hypotheses, claims, propositions) based on the best available evidence (observations, data, information). Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.   (A fuller and…


CO2 Emissions from Energy production forecast to be rising beyond 2040 despite COP21 Paris Agreement

Last week the US Energy Information Administration (EIA) published their INTERNATIONAL ENERGY OUTLOOK 2016. The Daily Caller (and the GWPF) highlighted the EIA’s summary of energy production. This shows that, despite the predicted strong growth in nuclear power and implausibly high growth in renewables, usage of fossil fuels is also predicted to rise, as shown in their headline graphic below.

For policy purposes, the important aspect is the translation into CO2 emissions. In the final chapter, Chapter 9: Energy-related CO2 Emissions, figure 9.3 shows the equivalent CO2 emissions in billions of tonnes of CO2. I have reproduced the graphic as a stacked bar chart.

Data reproduced as a stacked bar chart.

In 2010 these CO2 emissions are just under two-thirds of total global greenhouse gas emissions. The question is how this fits into the policy requirements, derived from the IPCC’s Fifth Assessment Report, to avoid 2°C of warming. The International Energy Agency summarized the requirements very succinctly in World Energy Outlook 2015 Special Report, page 18:

The long lifetime of greenhouse gases means that it is the cumulative build-up in the atmosphere that matters most. In its latest report, the Intergovernmental Panel on Climate Change (IPCC) estimated that to preserve a 50% chance of limiting global warming to 2 °C, the world can support a maximum carbon dioxide (CO2) emissions “budget” of 3 000 gigatonnes (Gt) (the mid-point in a range of 2 900 Gt to 3 200 Gt) (IPCC, 2014), of which an estimated 1 970 Gt had already been emitted before 2014. Accounting for CO2 emissions from industrial processes and land use, land-use change and forestry over the rest of the 21st century leaves the energy sector with a carbon budget of 980 Gt (the midpoint in a range of 880 Gt to 1 180 Gt) from the start of 2014 onwards.

From the forecast above, cumulative CO2 emissions from 2014 will reach 980 Gt in 2038. Yet in 2040, there is no sign of peak emissions.
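As a rough check on that date, the sketch below accumulates an assumed, roughly linear emissions path against the 980 Gt budget quoted above. The 36 and 43 GtCO2 annual figures are my own placeholder values, not the EIA data; with these assumptions the budget runs out around 2038.

```python
# Back-of-the-envelope check: in which year does cumulative energy-sector CO2
# from 2014 onwards exhaust a 980 GtCO2 budget? The annual path below is an
# illustrative assumption (a roughly linear rise), not the EIA forecast itself.

START_EMISSIONS = 36.0  # GtCO2 in 2014 (assumed)
END_EMISSIONS = 43.0    # GtCO2 in 2040 (assumed)
BUDGET = 980.0          # GtCO2 energy-sector budget from 2014 (IEA/IPCC figure quoted above)

cumulative = 0.0
for year in range(2014, 2101):
    # Linear interpolation to 2040, held flat thereafter (assumption).
    if year <= 2040:
        annual = START_EMISSIONS + (END_EMISSIONS - START_EMISSIONS) * (year - 2014) / (2040 - 2014)
    else:
        annual = END_EMISSIONS
    cumulative += annual
    if cumulative >= BUDGET:
        print(f"Budget of {BUDGET:.0f} GtCO2 exhausted in {year} "
              f"(cumulative {cumulative:.0f} GtCO2)")
        break
```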

Further corroboration comes from the UNFCCC. In preparation for COP21, from all the country policy proposals they produced a snappily titled Synthesis report on the aggregate effect of intended nationally determined contributions. The UNFCCC have since updated the graphics. Figure 2 of the 27 Apr 2016 version shows total GHG emissions, which in 2010 were about 17 Gt higher than the CO2 emissions from energy.

The graphic clearly shows that the INDCs – many with very vague and non-verifiable targets – will make very little difference to the non-policy emissions path. Yet even this small impact is contingent on those submissions being implemented in full, which is unlikely in many countries. The 2°C target requires global emissions to peak in 2016 and then head downwards. There are no additional policies even being tabled to achieve this, except possibly by some noisy, but inconsequential, activist groups. Returning to the EIA’s report, figure 9.4 splits the CO2 emissions between the OECD and non-OECD countries.

The OECD countries represent nearly all the countries who propose to reduce their CO2 emissions relative to the 1990 baseline, but their emissions are forecast by the EIA still to be 19% higher in 2040. However, that increase is small compared to the non-OECD countries – most of which are either proposing to constrain emissions growth or have no emissions policy proposals at all – whose emissions are forecast to treble in fifty years. As a result the global forecast is for CO2 emissions to double. Even if all the OECD countries completely eliminate CO2 emissions by 2040, global emissions will still be a third higher than in 1990. As the rapid economic growth in the former Third World reduces global income inequalities, it is also reducing the inequalities in fossil fuel consumption for energy production. This will continue beyond 2040, when the OECD, with a sixth of the world population, will still produce a third of global CO2 emissions.
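The arithmetic behind those ratios can be sketched as follows. The 1990 OECD / non-OECD split used here is an assumed round-number illustration rather than the EIA figures, and the trebling is taken to be relative to 1990.

```python
# Illustrative arithmetic behind the OECD / non-OECD claims above.
# The 1990 split is an assumption for the sketch, not the EIA data.

oecd_1990 = 11.5      # GtCO2 from energy (assumed)
non_oecd_1990 = 9.5   # GtCO2 from energy (assumed)
global_1990 = oecd_1990 + non_oecd_1990

oecd_2040 = oecd_1990 * 1.19       # forecast 19% above 1990
non_oecd_2040 = non_oecd_1990 * 3  # forecast to roughly treble over fifty years

global_2040 = oecd_2040 + non_oecd_2040
print(f"Global 2040 / 1990 ratio: {global_2040 / global_1990:.2f}")  # roughly double

# Even with OECD emissions eliminated entirely by 2040:
print(f"Non-OECD alone / global 1990: {non_oecd_2040 / global_1990:.2f}")  # about a third higher
```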

Unless the major emerging economies peak their emissions in the next few years, then reduce the emissions rapidly thereafter, the emissions target allegedly representing 2°C or less of global warming by 2100 will not be met. But for countries like India, Vietnam, Indonesia, Bangladesh, Nigeria, and Ethiopia to do so, with the consequent impact on economic growth, is morally indefensible.

Kevin Marshall

 

Insight into the mindset of FoE activists

Bishop Hill comments about how

the Charities Commissioners have taken a dim view of an FoE leaflet that claimed that silica – that’s sand to you or me – used in fracking fluid was a known carcinogen.

Up pops an FoE activist making all sorts of comments, including attacking the host’s book The Hockey Stick Illusion. Below is my comment.

Phil Clarke’s comments on the host’s book are an insight into the mindset of green activists.
He says Jan 30, 2016 at 9:58 AM

So you’ve read HSI, then?
I have a reading backlog of far more worthwhile volumes, fiction and non-fiction. Does anybody dispute a single point in Tamino’s adept demolition?

and

Where did I slag off HSI? I simply trust Tamino; the point about innuendo certainly rings true, based on other writings.
So no, I won’t be shelling out for a copy of a hatchet job on a quarter-century old study. But I did read this, in detail
http://www.nature.com/ngeo/journal/v6/n5/full/ngeo1797.html

Tamino’s article was responded to twice by Steve McIntyre. The first response looks at the use of non-standard statistical methods; the second, Re-post of “Tamino and the Magic Flute”, simply repeats a post from two years before. Tamino had ignored previous rebuttals. A simple illustration is the Gaspé series that Tamino defends. He misses out many issues with this key element in the reconstruction, including that a later sample from the area failed to show a hockey stick.
So Phil Clarke has attacked a book that he has not read, based on a biased review by an author in line with his own prejudices. He ignores the counter-arguments, just as the biased reviewer does. Says a lot about the rubbish Cuadrilla are up against.

Kevin Marshall

William Connolley is on side of anti-science not the late Bob Carter

In the past week there have been a number of tributes to Professor Bob Carter, retired Professor of Geology and leading climate sceptic. These include Jo Nova, James Delingpole, Steve McIntyre, Ian Plimer at the GWPF, Joe Bast of The Heartland Institute and E. Calvin Beisner of the Cornwall Alliance. In complete contrast, William Connolley posted this comment in a post Science advances one funeral at a time:

Actually A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it, but I’m allowed to paraphrase in titles. And anyway he said it in German, naturally. Today brings us news of another such advancement in science, with the reported death of Robert Carter.

Below is a comment I posted at Climate Scepticism

I believe Max Planck did have a point. In science people tenaciously hold onto ideas even if they have been falsified by the evidence or (as more often happens) supplanted by better ideas. Where the existing ideas form an institutionalized consensus, discrimination has occurred against those whose hypotheses would undermine that consensus. It can be that a new research paradigm can only gain prominence when the numbers in the old paradigm dwindle. As a result the advance of new knowledge and understanding is held back.

To combat this innate conservatism of thought I propose four ideas.

First is to promote methods of evaluating competing theories that are independent of consensus or opinion. In pure science that is by conducting experiments that would falsify a hypothesis. In complex concepts, for which experiment is not possible and data is incomplete and of poor quality, like the AGW hypothesis or economic theories, comparative analysis needs to be applied based upon independent standards.

Second is to recognize institutional bias by promoting pluralism and innovation.

Third is to encourage better definition of concepts and more rigorous standards of data within the existing research paradigm, to push the boundaries.

Fourth is to train people to separate scientific endeavours from belief systems, whether religious, political or ethical.

The problem for William Connolley is that all his efforts within climatology – such as editing Wikipedia to fit his narrow views, or helping set up Real Climate to save the Mannian Hockey Stick from exposure of its many flaws – are about enforcing the existing paradigm and blocking any challenges. He is part of the problem that Planck was talking about.

As an example of the narrow and dogmatic views that Connolley supports, here is the late Bob Carter on his major point about how beliefs in unprecedented human-caused warming are undermined by the long-term temperature proxies from ice core data. The video quality is poor, probably due to a lack of professional funding that Connolley and his fellow-travellers fought so hard to deny.

Kevin Marshall

Shotton Open Cast Coal Mine Protest as an example of Environmental Totalitarianism

Yesterday, in the Greens and the Fascists, Bishop Hill commented on Jonah Goldberg’s book Liberal Fascism. In summing up, BH stated:-

Goldberg is keen to point out that the liberal and progressive left of today do not share the violent tendencies of their fascist forebears: theirs is a gentler totalitarianism (again in the original sense of the word). The same case can be made for the greens. At least for now; it is hard to avoid observing that their rhetoric is becoming steadily more violent and the calls for unmistakably fascist policy measures are ever more common.

The link is to an article in the Ecologist (reprinted from the Open Democracy blog) – “Coal protesters must be Matt Ridley’s guilty consience”.

The coal profits that fill Matt Ridley’s bank account come wet with the blood of those killed and displaced by the climate disaster his mines contribute to, writes T. If hgis consicence is no longer functioning, then others must step into that role to confront him with the evil that he is doing. (Spelling as in the original)

The protest consisted of blocking the road to Shotton open cast coal mine for eight hours. The reasoning was:

This was an effective piece of direct action against a mine that is a major contributor to climate disaster, and a powerful statement against the climate-denying Times columnist, Viscount Matt Ridley, that owns the site. In his honour, we carried out the action as ‘Matt Ridley’s Conscience’.

The mine produces about one million tonnes of coal a year, out of 8,000 million tonnes produced globally. The blockade may have reduced annual output by 0.3%. This will be made up from the mine, or from other sources. Coal is not the only source of greenhouse gas emissions, so the mine’s coal results in less than 0.004% of global greenhouse gas emissions. Further, the alleged impact of GHG emissions on the climate is cumulative. The recoverable coal at Shotton is estimated at 6 million tonnes, or 0.0007% of the estimated global reserves of 861 billion tonnes (Page 5). These global reserves could increase as new deposits are found, as has happened in the recent past for coal, gas and oil. So far from being “a major contributor to climate disaster”, Shotton Open Cast Coal Mine is a drop in the ocean.
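For anyone wanting to check those proportions, here is the arithmetic, using the figures quoted above plus an assumed working year of 250 ten-hour days for the 0.3% blockade estimate.

```python
# The proportions cited above, reproduced as simple arithmetic.

annual_output = 1e6        # tonnes of coal per year at Shotton (from the post)
global_output = 8_000e6    # tonnes of coal per year globally (from the post)
recoverable = 6e6          # tonnes of recoverable coal at Shotton (from the post)
global_reserves = 861e9    # tonnes of estimated global coal reserves (from the post)

print(f"Shotton share of annual global coal output: {annual_output / global_output:.4%}")
print(f"Shotton recoverable coal as share of reserves: {recoverable / global_reserves:.4%}")

# An eight-hour blockade, against an assumed working year of 250 ten-hour days,
# is roughly 0.3% of the mine's annual output.
print(f"Blockade share of the mine's annual output: {8 / (250 * 10):.2%}")
```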

But is there a climate disaster of which Matt Ridley is in denial? Anonymous author and convicted criminal T does not offer any evidence of current climate disasters. The issue is not modelled projections, but currently available evidence. So where are all the dead bodies, or the displaced persons? Where are the increased deaths through drought-caused famines? Where are the increased deaths from malaria or other diseases from warmer and worsening conditions? Where is the evidence of increased deaths from extreme weather, such as hurricanes? Where are the refugees from drought-stricken areas, or from low-lying areas now submerged beneath the waves?

The inability to evaluate the evidence is shown by the following comment.

Ridley was ( … again) offered a platform on BBC Radio 4 just a week before our hearing, despite his views being roundly debunked by climate scientists.

The link leads to a script of the Radio 4 interview with annotated comments. I am not sure that all the collective brains do debunk (that is, expose the falseness or hollowness of an idea or belief) Matt Ridley’s comments. Mostly the annotations are based on nit-picking or on pointing out contradictions with the commenters’ own views and values. Among the 75 comments there are two extreme examples I would like to highlight.

First, Matt Ridley mentioned the Hockey Stick graphs and the work of Steve McIntyre in exposing the underlying poor data. The lack of a medieval warm period would provide circumstantial (or indirect) evidence that the warming of the last 200 years is unprecedented. Gavin Schmidt responded with comments (5) and (6) shown below.

Schmidt is fully aware that Steve McIntyre also examined the Wahl and Ammann paper and thoroughly discredited it. In 2008 Andrew Montford wrote a long paper on the shenanigans that went into the publication of the paper, and on its lack of statistical significance. Following from this, Montford wrote the Hockey Stick Illusion in 2010, which was reviewed by Tamino of RealClimate. Steve McIntyre was able to refute the core arguments in Tamino’s polemic by reposting Tamino and the Magic Flute, which was written in 2008 and covered all the substantial arguments that Tamino made. Montford’s book further shows a number of instances where peer review in academic climatology journals is not a quality control mechanism, but more a device of discrimination between those who support the current research paradigm and those who would undermine that consensus.

Comment 6 concludes

The best updates since then – which include both methodology improvements and expanded data sources – do not show anything dramatically different to the basic picture shown in MBH.

The link is to Chapter 5 of the IPCC AR5 WG1 assessment report. The paleoclimate discussion is a small subsection, a distinct reversal from the prominent place given to the original hockey stick in the third assessment report of 2001. I would contend the picture is dramatically different. Compare the original hockey stick of the past 1,000 years with Figure 5.7 on page 409 of AR5 WG1 Chapter 5.

In 2001, the MBH reconstruction was clear. From 1900 to 2000 average temperatures in the Northern Hemisphere had risen by over 1°C, far more than the change in any other century. But in at least two of the reconstructions – Ma08eivl and Lj10cps – there have been similarly sized fluctuations in other periods. The evidence now seems to back up Matt Ridley’s position of some human influence on temperatures, but does not support the contention of unprecedented temperature change. Gavin Schmidt’s opinions are not those of an expert witness, but of a blinkered activist.

Schmidt’s comments on hockey stick graphs are nothing compared to comment 35

The Carbon Brief (not the climate scientists) rejects evidence that contradicts their views based on nothing more than ideological prejudice. A search for Indur Goklany will find his own website, where he has copies of his papers. Under the “Climate Change” tab is not only the 2009 paper, but a 2011 update – Wealth and Safety: The Amazing Decline in Deaths from Extreme Weather in an Era of Global Warming, 1900–2010. Of interest are two tables.

Table 2 is a reproduction of World Health Organisation data from 2002. It clearly shows that global warming is well down the list of causes of deaths. Goklany states in the article why these figures are based on dubious assumptions. Anonymous T falsely believes that global warming is currently a major cause of deaths.

Figure 6 for the period 1990-2010 shows

  • the Global Death and Death Rates per million Due to Extreme Weather Events
  • CO2 Emissions
  • Global average GDP Per Capita

Figure 6 provides strong empirical evidence that increasing CO2 emissions (about 70-80% of total GHG emissions) have not caused increased deaths. Rather, they are a consequence of increasing GDP per capita, which, as Goklany argues, has resulted in fewer deaths from extreme weather. More importantly, increasing GDP has resulted in increased life expectancy and in reductions in malnutrition and in deaths that can be averted by access to rudimentary health care. Anonymous T would not know this even if he had read all the comments, yet it completely undermines the beliefs that caused him to single out Matt Ridley.

The worst part of Anonymous T’s article

Anonymous T concludes the article as follows (Bold mine)

The legal process efficiently served its function of bureaucratising our struggle, making us attempt to justify our actions in terms of the state’s narrow, violent logic. The ethics of our action are so clear, and declaring myself guilty felt like folding to that.

We found ourselves depressed and demoralised, swamped in legal paperwork. Pleading guilty frees us from the stress of a court case, allowing us to focus on more effective arenas of struggle.

I faced this case from a position of relative privilege – with the sort of appearance, education and lawyers that the courts favour. Even then I found it crushing. Today my thoughts are with those who experience the racism, classism and ableism of the state and its laws in a way that I did not.

That reflection makes me even more convinced of the rightness of our actions. Climate violence strikes along imperialist lines, with those least responsible, those already most disadvantaged by colonial capitalism, feeling the worst impacts.

Those are the people that lead our struggle, but are often also the most vulnerable to repression in the struggle. When fighting alongside those who find themselves at many more intersections of the law’s oppression than I do, I have a responsibility to volunteer first when we need to face up to the police and the state.

Faced with structural injustice and laws that defend it, Matt Ridley’s Conscience had no choice but to disobey. Matt Ridley has no conscience and neither does the state nor its system of laws. Join in. Be the Conscience you want to see in the world.

The writer rejects the rule of law, and is determined to carry out more acts of defiance against it. He intends to commit more acts of violence, with “climate” as a cover for revolutionary Marxism. Further, the writer is trying to incite others to follow his lead. He claims to know Matt Ridley’s Conscience better than Ridley himself, but in the next sentence claims that “Matt Ridley has no conscience“. Further, this statement would seem to contradict a justification for the criminal acts allegedly made in Bedlington Magistrates Court on December 16th
that the protesters were frustrated by the lack of UK Government action to combat climate change.

It is not clear who the author of this article is, but he/she is one of the following:-

Roger Geffen, 49, of Southwark Bridge Road, London;

Ellen Gibson, 21, of Elm Grove, London;

Philip MacDonald, 28, of Blackstock Road, Finsbury Park, London;

Beth Louise Parkin, 29, of Dodgson House, Bidborough Street, London;

Pekka Piirainen, 23, of Elm Grove, London;

Thomas Youngman, 22, of Hermitage Road, London;

Laurence Watson, 27, of Blackstock Road, Finsbury Park, London;

Guy Shrubsole, 30, of Bavent Road, London;

Lewis McNeill, 34, of no fixed address.

Kevin Marshall

aTTP falsely attacks Bjorn Lomborg’s “Impact of Current Climate Proposals” Paper

The following is a comment to be posted at Bishop Hill, responding to another attempt by blogger …and Then There’s Physics to undermine the work of Bjorn Lomborg. The previous attempt was discussed here. This post includes a number of links, as well as a couple of illustrative screen captures at the foot of the post.

aTTP’s comment is

In fact, you should read Joe Romm’s post about this. He’s showing that the INDCs are likely to lead to around 3.5C which I think is relative to something like the 1860-1880 mean. This is very similar to the MIT’s 3.7, and quite a bit lower than the RCP8.5 of around 4.5C. So, yes, we all know that the INDCs are not going to do as much as some might like, but the impact is likely to be a good deal greater than that implied by Lomborg who has essentially assumed that we get to 2030 and then simply give up.

Nov 11, 2015 at 9:31 AM | …and Then There’s Physics

My Comment

aTTP at 9.31 refers to Joe Romm’s blog post of Nov 3, “Misleading U.N. Report Confuses Media On Paris Climate Talks“. Romm uses Climate Interactive’s Climate Scoreboard Tool to show that the INDC submissions (if fully implemented) will result in 3.5°C of warming, as against 4.5°C in the non-policy “No Action” scenario. This is six times the maximum impact of 0.17°C claimed in Lomborg’s new paper. Who is right? What struck me first was that Romm’s first graph, copied straight from Climate Interactive, seems to have a very large estimate for emissions in the “No Action” scenario. Downloading the underlying data, I find the “No Action” global emissions in 2100 are 139.3 GtCO2e, compared with about 110 GtCO2e for the RCP8.5 high emissions scenario in Figure SPM5(a) of the AR5 Synthesis Report. But it is the breakdown per country or region that matters.

For the USA, emissions without action are forecast to rise by 40% from 2010 to 2030, in contrast to a rise of just 9% in the period 1990 to 2010. It is likely that emissions will fall without policy, and will be no higher in 2100 than in 2010. The “No Action” scenario therefore overestimates US emissions by 2-3 GtCO2e in 2030 and by about 7-8 GtCO2e in 2100.

For China the overestimation is even greater. Emissions will peak during the next decade as China fully industrializes, just as emissions peaked in most European countries in the 1970s and 1980s. Climate Interactive assumes that emissions will peak at 43 GtCO2e in 2090, whereas other estimates suggest the peak will be around 16-17 GtCO2e before 2030.

Together, the overestimations in the US and China “No Action” scenarios account for over half of the 55-60 GtCO2e difference in 2100 emissions between the “No Action” and “Current INDC” scenarios. A very old IT term applies here – GIGO. If aTTP had actually checked the underlying assumptions he would realise that Romm’s rebuttal of Lomborg, based on China’s emission assumptions (and repeated on his own blog), is as false as claiming that the availability of free condoms is why population peaks.
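A rough sketch of the “over half” arithmetic is below. The 2100 overestimate for China is approximated by the difference between the assumed emission peaks, which is my own simplification rather than a figure taken from Climate Interactive.

```python
# Rough arithmetic behind the "over half" claim above. Treat the overestimate
# figures as approximations read off the Climate Interactive data.

scenario_gap_2100 = (55, 60)  # GtCO2e gap between "No Action" and "Current INDC" in 2100

us_overestimate_2100 = 7.5            # GtCO2e, midpoint of the 7-8 range above
china_overestimate_2100 = 43 - 16.5   # GtCO2e: CI peak assumption less other estimates

combined = us_overestimate_2100 + china_overestimate_2100
share_low = combined / scenario_gap_2100[1]
share_high = combined / scenario_gap_2100[0]
print(f"US + China overestimate: {combined:.1f} GtCO2e "
      f"({share_low:.0%} to {share_high:.0%} of the scenario gap)")
```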

Links posted at https://manicbeancounter.com/2015/11/11/attp-falsely-attacks-bjorn-lomborgs-impact-of-current-climate-proposals-paper/

Kevin Marshall

 

Figures referred to (but not referenced) in the comment above

Figure 1: Climate Interactive’s graph, referenced by Joe Romm.


Figure 2: Reproduction of Figure SPM5(a) from Page 9 of the AR5 Synthesis Report.

 

Update – posted the following to ATTP’s blog



 

Lomborg and the Grantham Institute on the INDC submissions

Bjorn Lomborg has a new paper published in the Global Policy journal, titled: Impact of Current Climate Proposals. (hattip Bishop Hill and WUWT)

From the Abstract

This article investigates the temperature reduction impact of major climate policy proposals implemented by 2030, using the standard MAGICC climate model. Even optimistically assuming that promised emission cuts are maintained throughout the century, the impacts are generally small. ………… All climate policies by the US, China, the EU and the rest of the world, implemented from the early 2000s to 2030 and sustained through the century will likely reduce global temperature rise about 0.17°C in 2100. These impact estimates are robust to different calibrations of climate sensitivity, carbon cycling and different climate scenarios. Current climate policy promises will do little to stabilize the climate and their impact will be undetectable for many decades.

That is pretty clear. COP21 in Paris is a waste of time.

An alternative estimate is provided in a paper by Boyd, Turner and Ward (BTW) of the LSE Grantham Institute, published at the end of October.

They state

The most optimistic estimate of global emissions in 2030 resulting from the INDCs is about halfway between hypothetical ‘business as usual’ and a pathway that is consistent with the 2°C limit

The MAGICC climate model used by both Lomborg and the IPCC predicts warming of about 4.7°C under BAU, implying a reduction of up to 1.35°C from the INDCs, compared to the 0.17°C maximum calculated by Lomborg – eight times the amount. Lomborg says his estimate is contingent on no carbon leakage (exporting industry from policy to non-policy countries), whilst citing studies showing that leakage could offset 10-40%, or even over 100%, of the emissions reduction. So the difference between sceptic Lomborg and the mighty LSE Grantham Institute is even greater than eight times. Yet Lomborg refers extensively to the August edition of BTW. So why the difference? There is no explicit indication in BTW of how they arrive at their halfway conclusion, nor is there a comparison by Lomborg.
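For clarity, the arithmetic in that comparison is reproduced in this short sketch, using the figures as quoted in the text.

```python
# The arithmetic behind the Lomborg vs Grantham (BTW) comparison above.

bau_warming = 4.7        # °C under business as usual (MAGICC, as cited above)
target = 2.0             # °C limit
halfway_reduction = (bau_warming - target) / 2  # my reading of BTW's "halfway" claim: 1.35 °C
lomborg_estimate = 0.17  # °C, Lomborg's maximum INDC impact

print(f"Implied BTW reduction: {halfway_reduction:.2f} °C")
print(f"Ratio to Lomborg's estimate: {halfway_reduction / lomborg_estimate:.1f}x")  # ~7.9x
```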

Two other estimates are from the UNFCCC, and Climate Action Tracker. Both estimate the INDCs will constrain warming to 2.7°C, or about 2.0°C below the MAGICC BAU scenario. They both make assumptions about massive reductions in emissions post 2030 that are not in the INDCs. But at least the UNFCCC and CAT have graphs that show the projection through to 2100. Not so with BTW.

This is where the eminent brain surgeons and Nobel-Prize winning rocket scientists among the readership will need to concentrate to achieve the penetrating analytical powers of a lesser climate scientist.

From the text of BTW, the hypothetical business as usual (BAU) scenario for 2030 is 68 GtCO2e. The most optimistic scenario for emissions from the INDCs (and pessimistic for economic growth in the emerging economies) is that 2030 emissions will be 52 GtCO2e. The sophisticated climate projection models have whispered in code to the climate scientists that, to be on target for the 2.0°C limit, 2030 emissions should be no more than 36 GtCO2e. The mathematicians will be able to determine that 52 is exactly halfway between 36 and 68.

Now for the really difficult bit. I have just spent the last half hour in the shed manically cranking the handle of my patent beancounter extrapolator machine to get this result. By extrapolating this halfway result for the forecast period 2010-2030 through to 2100 my extrapolator tells me the INDCs are halfway to reaching the 2.0°C maximum warming target.

As Bob Ward will no doubt point out in his forthcoming rebuttal of Bjorn Lomborg’s paper, it is only true climate scientists who can reach such levels of analysis and understanding.

I accept no liability for any injuries caused, whether physical or psychological, by people foolishly trying to replicate this advanced result. Please leave this to the experts.

But there is a serious side to this policy advocacy. The Grantham Institute, along with others, is utterly misrepresenting the effectiveness of policy to virtually every government on the planet. Lomborg shows by rigorous means that policy is ineffective even if loads of ridiculous assumptions are made, whether on climate science forecasting, policy theory, technological solutions, government priorities, or the ability of current governments to make policy commitments for governments decades ahead. My prediction is that the reaction of the Grantham Institute, along with plenty of others, will be a thuggish denunciation of Lomborg. What they will not consider is the rational response to wide differences of interpretation: to compare and contrast the arguments and the assumptions made, both explicit and implicit.

Kevin Marshall

Climatic Temperature Variations

In the previous post I identified that the standard definition of temperature homogenisation assumes that there is little or no variation in climatic trends within the homogenisation area. I also highlighted specific instances where this assumption has failed. However, those examples may be just isolated and extreme instances, or there might be other, offsetting instances, so that the failures cancel each other out without a systematic bias globally. Here I explore why this assumption should not be expected to hold anywhere, and how it may have biased the picture of recent warming. After a couple of proposals to test for this bias, I look at alternative scenarios that could bias the global average temperature anomalies. I concentrate on the land surface temperatures, though my comments may also have application to the sea surface temperature data sets.

 

Comparing Two Recent Warming Phases

An area that I am particularly interested in is the relative size of the early twentieth century warming compared to the more recent warming phase. This relative size, along with the explanations for those warming periods, gives a route into determining how much of the recent warming was human caused. Dana Nuccitelli tried such an explanation at the skepticalscience blog in 2011¹. Figure 1 shows the NASA Gistemp global anomaly in black, along with a split into eight bands of latitude. Of note are the polar extremes, each covering 5% of the surface area. For the Arctic, the trough to peak of 1885-1940 is pretty much the same as the trough to peak from 1965 to present. But in the earlier period it is effectively cancelled out by the cooling in the Antarctic. This cooling, I found, was likely caused by the use of inappropriate proxy data from a single weather station³.

Figure 1. Gistemp global temperature anomalies by band of latitude².

For the current issue, of particular note is the huge variation in trends by latitude from the global average derived from the homogenised land and sea surface data. Delving further, GISS provide some very useful maps of their homogenised and extrapolated data⁴. I compare two identical time lengths – 1944 against a 1906-1940 base period, and 2014 against a 1976-2010 base period. The selection criteria for the maps are in Figure 2.

Figure 2. Selection criteria for the Gistemp maps.

Figure 3. Gistemp map representing the early twentieth surface warming phase for land data only.


Figure 4. Gistemp map representing the recent surface warming phase for land data only.

The later warming phase is almost twice the magnitude of, and has much better coverage than, the earlier warming: 0.43°C against 0.24°C. In both cases the range of warming in the 250km grid cells is between -2°C and +4°C, but the variations are not the same. For instance, the most extreme warming in both periods is at the higher latitudes. But with respect to North America, in the earlier period the most extreme warming is over the Northwest Territories of Canada, whilst in the later period the most extreme warming is over Western Alaska, with the Northwest Territories showing near average warming. In the United States, in the earlier period there is cooling over the Western USA, whilst in the later period there is cooling over much of the Central USA, and strong warming in California. In the USA the coverage of temperature stations is quite good, at least compared with much of the Southern Hemisphere. Euan Mearns has looked at a number of areas in the Southern Hemisphere⁴, which he summarised on the map in Figure 5.

Figure 5. Euan Mearns says of the above: “S Hemisphere map showing the distribution of areas sampled. These have in general been chosen to avoid large centres of human population and prosperity.”

For the current analysis Figure 6 is most relevant.

Figure 6. Euan Mearns says of the above: “The distribution of operational stations from the group of 174 selected stations.”

The temperature data for the earlier period is much sparser than for the later period. Even where data is available in the earlier period, it could be based on a fifth of the number of temperature stations used in the later period. The map may slightly exaggerate the issue, as the coasts of South America and Eastern Australia are avoided.
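The two warming-phase magnitudes quoted above (0.43°C against 0.24°C) come from differencing a target year against a base-period mean in the GISS map tool. The sketch below shows the same comparison in code; the anomaly series is a synthetic stand-in purely so the example runs, not the GISTEMP data.

```python
# Sketch: compare two warming phases by differencing a target year's anomaly
# against a base-period mean, as the GISS map tool does. The `anomalies`
# dictionary below is synthetic, purely to make the example run.

def period_mean(anomalies, start, end):
    """Mean anomaly over the inclusive year range start..end."""
    values = [v for yr, v in anomalies.items() if start <= yr <= end]
    return sum(values) / len(values)

def warming_phase(anomalies, base_start, base_end, target_year):
    """Anomaly of a target year relative to a base-period mean."""
    return anomalies[target_year] - period_mean(anomalies, base_start, base_end)

# Synthetic series: gentle early-century warming, steeper late-century warming.
anomalies = {yr: (0.005 * (yr - 1900) if yr < 1945 else -0.1 + 0.012 * (yr - 1945))
             for yr in range(1900, 2015)}

early = warming_phase(anomalies, 1906, 1940, 1944)
late = warming_phase(anomalies, 1976, 2010, 2014)
print(f"Early phase: {early:.2f} °C, late phase: {late:.2f} °C")
```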

An Hypothesis on the Homogenisation Impact

Now consider again the description of homogenisation in Venema et al 2012⁵, quoted in the previous post.

 

The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities. In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations. (Italics mine)

 

The assumption of the same climate signal over the homogenisation area will not apply where the temperature stations are thin on the ground. The degree to which homogenisation eliminates real-world variations in trend could be, to some extent, inversely related to that density. Given that the density of temperature data points diminishes rapidly in most areas of the world when one goes back in time beyond 1960, homogenisation in the early warming period is far more likely to be between climatically different temperature stations than in the later period. My hypothesis is that homogenisation will reduce the early twentieth century warming phase relative to the recent warming phase, as in the earlier period homogenisation will be over much larger areas, with larger real climate variations within the homogenisation area.
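The hypothesis can be illustrated with a deliberately crude toy model. The sketch below is not any agency’s actual algorithm (real methods detect step changes rather than removing a linear drift), but it shows the direction of the effect: adjusting a genuinely fast-warming station towards a composite reference built from sparse, slow-warming neighbours pulls its real trend towards the regional mean.

```python
# Toy illustration of the hypothesis: homogenising a station against sparse
# neighbours with genuinely different trends flattens the station's real trend.

import random

random.seed(0)
years = list(range(1900, 1961))

def make_series(trend_per_decade, noise=0.05):
    """Annual anomalies (°C) with a linear trend plus a little noise."""
    return [trend_per_decade * (y - years[0]) / 10 + random.gauss(0, noise)
            for y in years]

candidate = make_series(0.30)                       # genuinely fast-warming location
neighbours = [make_series(0.05) for _ in range(3)]  # sparse, slow-warming neighbours

# Composite reference: the mean of the neighbours.
reference = [sum(vals) / len(vals) for vals in zip(*neighbours)]

# Crude "relative homogenisation": remove the candidate's drift against the reference.
diffs = [c - r for c, r in zip(candidate, reference)]
drift_per_year = (diffs[-1] - diffs[0]) / (len(years) - 1)
homogenised = [c - drift_per_year * i for i, c in enumerate(candidate)]

def end_to_end_trend(values):
    """Simple end-minus-start trend estimate in °C per decade."""
    return (values[-1] - values[0]) / (len(values) - 1) * 10

print(f"Candidate trend:   {end_to_end_trend(candidate):.2f} °C/decade")
print(f"Homogenised trend: {end_to_end_trend(homogenised):.2f} °C/decade")
print(f"Reference trend:   {end_to_end_trend(reference):.2f} °C/decade")
```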

Testing the Hypothesis

There are at least two ways that my hypothesis can be evaluated. Direct testing of information deficits is not possible.

First is to conduct temperature homogenisations on similar levels of actual data for the entire twentieth century. If done for a region, the actual data used in the global temperature anomalies should be run for that region as well. This should show whether the recent warming phase is, post homogenisation, also reduced when less data is used.

Second is to examine the relative size of adjustments against the availability of comparative data. This can be done in various ways. For instance, I quite like the examination of the Manaus grid block record that Roger Andrews did in a post The Worst of BEST⁶.

Counter Hypotheses

There are two counter hypotheses on temperature bias. These may undermine my own hypothesis.

First is the urbanisation bias. Euan Mearns, in looking at temperature data for the Southern Hemisphere, tried to avoid centres of population because the data would be biased. It is easy to surmise that the lack of warming Mearns found in central Australia⁷ reflects the absence of the urbanisation bias present in the large coastal cities. However, the GISS maps do not support this. Ronan and Michael Connolly⁸ of Global Warming Solved claim that the urbanisation bias in the global temperature data is roughly equivalent to the entire warming of the recent epoch. I am not sure that the urbanisation bias is so large, but even if it were, it could be complementary to my hypothesis based on trends.

Second is that homogenisation adjustments could be greater the more distant in the past they occur. It has been noted (by Steve Goddard in particular) that each new set of GISS adjustments alters past data. The same data set used to test my hypothesis above could also be used to test this hypothesis, by conducting homogenisation runs on the data to date, then only to 2000, then to 1990, and so on. It could be that the earlier warming trend is somehow suppressed by homogenizing the most recent data first, then working backwards through a number of iterations, each one using the results of the previous pass. Differences in trends that operate over different time periods, but converge over longer periods, could thereby be magnified, so that differences in trend decades in the past appear to the algorithm more anomalous than they actually are.

Kevin Marshall

Notes

  1. Dana Nuccitelli – What caused early 20th Century warming? 24.03.2011
  2. Source http://data.giss.nasa.gov/gistemp/graphs_v3/
  3. See my post Base Orcadas as a Proxy for early Twentieth Century Antarctic Temperature Trends 24.05.2015
  4. Euan Mearns – The Hunt For Global Warming: Southern Hemisphere Summary 14.03.2015. Area studies are referenced on this post.
  5. Venema et al 2012 – Venema, V. K. C., Mestre, O., Aguilar, E., Auer, I., Guijarro, J. A., Domonkos, P., Vertacnik, G., Szentimrey, T., Stepanek, P., Zahradnicek, P., Viarre, J., Müller-Westermeier, G., Lakatos, M., Williams, C. N., Menne, M. J., Lindau, R., Rasol, D., Rustemeier, E., Kolokythas, K., Marinova, T., Andresen, L., Acquaotta, F., Fratianni, S., Cheval, S., Klancar, M., Brunetti, M., Gruber, C., Prohom Duran, M., Likso, T., Esteban, P., and Brandsma, T.: Benchmarking homogenization algorithms for monthly data, Clim. Past, 8, 89-115, doi:10.5194/cp-8-89-2012, 2012.
  6. Roger Andrews – The Worst of BEST 23.03.2015
  7. Euan Mearns – Temperature Adjustments in Australia 22.02.2015
  8. Ronan and Michael Connolly – Summary: “Urbanization bias” – Papers 1-3 05.12.2013


Defining “Temperature Homogenisation”

Summary

The standard definition of temperature homogenisation is of a process that cleanses the temperature data of measurement biases, leaving only variations caused by real climatic or weather variations. This is at odds with the GHCN and GISS adjustments, which delete some data and add in other data as part of the homogenisation process. A more general definition is to make the data more homogeneous, for the purposes of creating regional and global average temperatures. This is only compatible with the standard definition if one assumes that there are no real variations in climatic trends within the homogenisation area. From various studies it is clear that there are cases where this assumption does not hold good. The likely impacts include:-

  • Homogenised data for a particular temperature station will not be the cleansed data for that location. Instead it becomes a grid reference point, encompassing data from the surrounding area.
  • Different densities of temperature data may lead to different degrees to which homogenisation results in smoothing of real climatic fluctuations.

Whether or not this failure of understanding is limited to a number of isolated instances with a near zero impact on global temperature anomalies is an empirical matter that will be the subject of my next post.

 

Introduction

A common feature of many concepts involved with climatology, the associated policies and sociological analyses of non-believers, is a failure to clearly understand the terms used. In the past few months it has become evident to me that this failure of understanding extends to the term temperature homogenisation. In this post I look at the ambiguity of the standard definition against the actual practice of homogenising temperature data.

 

The Ambiguity of the Homogenisation Definition

The World Meteorological Organisation, in its 2004 Guidelines on Climate Metadata and Homogenization¹, wrote this explanation.

Climate data can provide a great deal of information about the atmospheric environment that impacts almost all aspects of human endeavour. For example, these data have been used to determine where to build homes by calculating the return periods of large floods, whether the length of the frost-free growing season in a region is increasing or decreasing, and the potential variability in demand for heating fuels. However, for these and other long-term climate analyses –particularly climate change analyses– to be accurate, the climate data used must be as homogeneous as possible. A homogeneous climate time series is defined as one where variations are caused only by variations in climate.

Unfortunately, most long-term climatological time series have been affected by a number of nonclimatic factors that make these data unrepresentative of the actual climate variation occurring over time. These factors include changes in: instruments, observing practices, station locations, formulae used to calculate means, and station environment. Some changes cause sharp discontinuities while other changes, particularly change in the environment around the station, can cause gradual biases in the data. All of these inhomogeneities can bias a time series and lead to misinterpretations of the studied climate. It is important, therefore, to remove the inhomogeneities or at least determine the possible error they may cause.

 

That is, temperature homogenisation is necessary to isolate and remove what Steven Mosher has termed measurement biases², from the real climate signal. But how does this isolation occur?

Venema et al 2012³ state the issue more succinctly.

 

The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities (Conrad and Pollak, 1950). In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations. (Italics mine)

 

Blogger …and Then There’s Physics (ATTP) partly recognizes that these issues may exist in his stab at explaining temperature homogenisation⁴.

So, it all sounds easy. The problem is, we didn’t do this and – since we don’t have a time machine – we can’t go back and do it again properly. What we have is data from different countries and regions, of different qualities, covering different time periods, and with different amounts of accompanying information. It’s all we have, and we can’t do anything about this. What one has to do is look at the data for each site and see if there’s anything that doesn’t look right. We don’t expect the typical/average temperature at a given location at a given time of day to suddenly change. There’s no climatic reason why this should happen. Therefore, we’d expect the temperature data for a particular site to be continuous. If there is some discontinuity, you need to consider what to do. Ideally you look through the records to see if something happened. Maybe the sensor was moved. Maybe it was changed. Maybe the time of observation changed. If so, you can be confident that this explains the discontinuity, and so you adjust the data to make it continuous.

What if there isn’t a full record, or you can’t find any reason why the data may have been influenced by something non-climatic? Do you just leave it as is? Well, no, that would be silly. We don’t know of any climatic influence that can suddenly cause typical temperatures at a given location to suddenly increase or decrease. It’s much more likely that something non-climatic has influenced the data and, hence, the sensible thing to do is to adjust it to make the data continuous. (Italics mine)

The assumption that nearby temperature stations have the same (or very similar) climatic signal would, if true, mean that homogenisation cleanses the data of the impurities of measurement biases. But only a cursory glance is given to the data. For instance, when Kevin Cowtan gave an explanation of the fall in average temperatures at Puerto Casado, neither he nor anyone else checked whether the explanation stacked up, beyond checking that there had been a documented station move at roughly that time. Yet the station move is at the end of the drop in temperatures, and a few minutes’ checking would have confirmed that other nearby stations exhibit very similar temperature falls⁵. If you have a preconceived view of how the data should be, then a superficial explanation that conforms to that preconception will be sufficient. If you accept the authority of experts over personally checking for yourself, then the claim by experts that there is not a problem is sufficient. Those with no experience of checking the outputs following the processing of complex data will not appreciate the issues involved.
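For readers unfamiliar with what the relative approach involves in practice, here is a minimal sketch of the idea described in the Venema et al quote: build a composite reference from neighbouring stations and scan the candidate-minus-reference difference series for the most likely step change. It is a toy version only; the published algorithms (SNHT, the GHCN pairwise method, and so on) are far more sophisticated.

```python
# Minimal sketch of relative homogeneity testing: compare a candidate series to
# a composite reference of nearby stations and find the single breakpoint that
# best splits the difference series into two levels.

def composite_reference(neighbour_series):
    """Element-wise mean of the neighbouring stations' series."""
    return [sum(vals) / len(vals) for vals in zip(*neighbour_series)]

def find_breakpoint(candidate, reference):
    """Return (index, step size) for the split of the candidate-minus-reference
    difference series that minimises the within-segment sum of squares."""
    diffs = [c - r for c, r in zip(candidate, reference)]

    def sse(segment):
        mean = sum(segment) / len(segment)
        return sum((x - mean) ** 2 for x in segment)

    best_idx, best_cost = None, float("inf")
    for i in range(2, len(diffs) - 2):  # require a few points on each side
        cost = sse(diffs[:i]) + sse(diffs[i:])
        if cost < best_cost:
            best_idx, best_cost = i, cost
    before, after = diffs[:best_idx], diffs[best_idx:]
    step = sum(after) / len(after) - sum(before) / len(before)
    return best_idx, step

# Toy example: a 1 °C drop in the candidate two-thirds of the way through,
# not shared by the (flat) composite reference.
candidate = [0.0] * 20 + [-1.0] * 10
reference = composite_reference([[0.0] * 30, [0.0] * 30, [0.0] * 30])
print(find_breakpoint(candidate, reference))  # -> (20, -1.0)
```

A detected step of this kind would then be removed from the candidate record, which is precisely where the trouble starts if the step reflects a real, localised climatic shift rather than a station move.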

 

However, this definition of homogenisation appears to be different from that used by GHCN and NASA GISS. When Euan Mearns looked at temperature adjustments in the Southern Hemisphere and in the Arctic⁶, he found numerous examples in the GHCN and GISS homogenisations of infilling of some missing data and, to a greater extent, deletion of huge chunks of temperature data. For example, this graphic is Mearns’ spreadsheet of adjustments between GHCNv2 (raw data + adjustments) and GHCNv3 (homogenised data) for 25 stations in Southern South America. The yellow cells are where V2 data exist but V3 data do not; the green cells are where V3 data exist but V2 data do not.

 

 

Definition of temperature homogenisation

A more general definition that encompasses the GHCN / GISS adjustments is of broadly making the data homogeneous. It is not done by simply blending the data together and smoothing it out. Homogenisation also adjusts anomalous data as a result of pairwise comparisons between local temperature stations, or, in the case of extreme differences, the GHCN / GISS process deletes the most anomalous data. This is a much looser and broader process than the homogenisation of milk, or putting some food through a blender.

I cover the definition in more depth in the appendix.

 

 

The Consequences of Making Data Homogeneous

A consequence of cleansing the data in order to make it more homogeneous is a distinction that is missed by many. It arises from making the strong assumption that there are no climatic differences between the temperature stations in the homogenisation area.

Homogenisation is aimed at adjusting for the measurement biases, to give a climatic reading for the location of the temperature station that is a closer approximation to what that reading would be without those biases. With the strong assumption, making the data homogeneous is identical to removing the non-climatic inhomogeneities. Cleansed of these measurement biases, the temperature data then represents both the average temperature readings that would have been generated had the temperature station been free of biases and a location representative of the area. This latter aspect is necessary to build up a global temperature anomaly, which is constructed by dividing the surface into a grid. Homogenisation, in the sense of making the data more homogeneous by blending, is an inappropriate term. All that is happening is adjusting for anomalies within the data through comparisons with local temperature stations (the GHCN / GISS method) or with an expected regional average (the Berkeley Earth method).

 

But if the strong assumption does not hold, homogenisation will adjust away these climatic differences, and will to some extent fail to eliminate the measurement biases. Homogenisation is in fact made more necessary if movements in average temperatures are not the same and the spread of temperature data is spatially uneven. Then homogenisation needs not only to remove the anomalous data, but also to make specific locations more representative of the surrounding area. This enables any imposed grid structure to create an estimated average for that area by averaging the homogenised temperature data sets within the grid area. As a consequence, the homogenised data for a temperature station will cease to be a closer approximation to what the thermometers would have read free of any measurement biases. As homogenisation is calculated by comparisons with temperature stations beyond those immediately adjacent, there will be, to some extent, influences of climatic changes beyond the local temperature stations. The consequences of climatic differences within the homogenisation area include the following.

 

  • The homogenised temperature data for a location could appear largely unrelated to the original data, or to the data adjusted for known biases. This could explain the homogenised Reykjavik temperature record, where Trausti Jonsson of the Icelandic Met Office, who had been working with the data for decades, could not understand the GHCN/GISS adjustments⁷.
  • The greater the density of temperature stations in relation to the climatic variations, the less that climatic variations will impact on the homogenisations, and the greater will be the removal of actual measurement biases. Climate variations are unlikely to be much of an issue with the Western European and United States data. But on the vast majority of the earth’s surface, whether land or sea, coverage is much sparser.
  • If the climatic variation at a location is of a different magnitude to that of other locations in the homogenisation area, but over the same time periods and in the same direction, then the data trends will be largely retained. For instance, in Svalbard the warming trends of the early twentieth century and from the late 1970s were much greater than elsewhere, so were adjusted downwards⁸.
  • If there are differences in the rate of temperature change, or in the time periods of similar changes, then any “anomalous” data due to climatic differences at the location will be eliminated or severely adjusted, on the same basis as “anomalous” data due to measurement biases. For instance, in a large part of Paraguay at the end of the 1960s average temperatures fell by around 1°C. Because this phenomenon did not occur in the surrounding areas, both the GHCN and Berkeley Earth homogenisation processes adjusted out this trend. As a consequence of this adjustment, a mid-twentieth century cooling in the area was effectively adjusted out of the data⁹.
  • If a large proportion of temperature stations in a particular area have consistent measurement biases, then homogenisation will retain those biases, as they will not appear anomalous within the data. For instance, much of the extreme warming post 1950 in South Korea is likely to have been a result of urbanization¹⁰.

 

Other Comments

Homogenisation is just part of the process of adjusting data for the twin purposes of attempting to correct for biases and of building regional and global temperature anomalies. It cannot, for instance, correct for time of observation biases (TOBS); this needs to be done prior to homogenisation. Neither will homogenisation build a global temperature anomaly. Extrapolating from the limited data coverage is a further process, whether for fixed temperature stations on land or the ship measurements used to calculate the ocean surface temperature anomalies. This extrapolation has further difficulties. For instance, in a previous post¹¹ I covered a potential issue with the Gistemp proxy data for Antarctica prior to permanent bases being established on the continent in the 1950s. Making the data homogeneous is but the middle part of a wider process.

Homogenisation is a complex process. The Venema et al 2012³ paper on the benchmarking of homogenisation algorithms demonstrates that different algorithms produce significantly different results. What is clear from the original posts on the subject by Paul Homewood, and the more detailed studies by Euan Mearns and Roger Andrews at Energy Matters, is that the whole process of going from the raw monthly temperature readings to the final global land surface average trends has thrown up some peculiarities. In order to determine whether they are isolated instances that have near zero impact on the overall picture, or point to more systematic biases resulting from the points made above, it is necessary to understand the data available in relation to the overall global picture. That will be the subject of my next post.

 

Kevin Marshall

 

Notes

  1. GUIDELINES ON CLIMATE METADATA AND HOMOGENIZATION by Enric Aguilar, Inge Auer, Manola Brunet, Thomas C. Peterson and Jon Wieringa
  2. Steven Mosher – Guest post : Skeptics demand adjustments 09.02.2015
  3. Venema et al 2012 – Venema, V. K. C., Mestre, O., Aguilar, E., Auer, I., Guijarro, J. A., Domonkos, P., Vertacnik, G., Szentimrey, T., Stepanek, P., Zahradnicek, P., Viarre, J., Müller-Westermeier, G., Lakatos, M., Williams, C. N., Menne, M. J., Lindau, R., Rasol, D., Rustemeier, E., Kolokythas, K., Marinova, T., Andresen, L., Acquaotta, F., Fratianni, S., Cheval, S., Klancar, M., Brunetti, M., Gruber, C., Prohom Duran, M., Likso, T., Esteban, P., and Brandsma, T.: Benchmarking homogenization algorithms for monthly data, Clim. Past, 8, 89-115, doi:10.5194/cp-8-89-2012, 2012.
  4. …and Then There’s Physics – Temperature homogenisation 01.02.2015
  5. See my post Temperature Homogenization at Puerto Casado 03.05.2015
  6. For example

    The Hunt For Global Warming: Southern Hemisphere Summary

    Record Arctic Warmth – in 1937

  7. See my post Reykjavik Temperature Adjustments – a comparison 23.02.2015
  8. See my post RealClimate’s Mis-directions on Arctic Temperatures 03.03.2015
  9. See my post Is there a Homogenisation Bias in Paraguay’s Temperature Data? 02.08.2015
  10. NOT A LOT OF PEOPLE KNOW THAT (Paul Homewood) – UHI In South Korea Ignored By GISS 14.02.2015
  11. See my post Base Orcadas as a Proxy for early Twentieth Century Antarctic Temperature Trends 24.05.2015

 

 

Appendix – Definition of Temperature Homogenisation

When discussing temperature homogenisations, nobody asks what the term actually means. In my house we consume homogenised milk. This is the same as the pasteurized milk I drank as a child, except for one aspect. As a child I used to compete with my siblings to be the first to open a new pint bottle, as it had the cream on top. The milk now does not have this cream, as it is blended in, or homogenized, with the rest of the milk. Temperature homogenizations are different, involving changes to figures, along with (at least with the GHCN/GISS data) filling gaps in some places and removing data in others¹.

But rather than note the differences, it is better to consult an authoritative source. From Dictionary.com, the definitions of homogenize are:-

verb (used with object), homogenized, homogenizing.

  1. to form by blending unlike elements; make homogeneous.
  2. to prepare an emulsion, as by reducing the size of the fat globules in (milk or cream) in order to distribute them equally throughout.
  3. to make uniform or similar, as in composition or function:

    to homogenize school systems.

  4. Metallurgy. to subject (metal) to high temperature to ensure uniform diffusion of components.

Applying the dictionary definitions, homogenization in science is about blending unlike elements together to make them uniform; it is not about additions to or subtractions from a data set, nor about adjusting the data. This is particularly true in chemistry, where homogenizing a sample means physically blending it into a uniform mixture.

For GHCN and NASA GISS, temperature data homogenization involves removing or adjusting elements in the data that are markedly dissimilar from the rest. It can also mean infilling data that was never measured. The verb homogenize does not fit the processes at work here. This has led some, like Paul Homewood, to refer to the process as data tampering or worse. A better idea is to look further at the dictionary.

Again from Dictionary.com, the first two definitions of the adjective homogeneous are:-

  1. composed of parts or elements that are all of the same kind; not heterogeneous:

a homogeneous population.

  2. of the same kind or nature; essentially alike.

I would suggest that temperature homogenization is a loose term for describing the process of making the data more homogeneous, that is, smoothing out the data in some way. A false analogy is when I make a vegetable soup. After cooking I end up with a stock containing lumps of potato, carrot, leeks etc. I put it through the blender to get an even consistency, and I end up with the same weight of soup before and after. A similar process of getting out the same after homogenization as was put in before is clearly not what is happening to temperatures. The aim of making the data homogeneous is both to remove anomalous data and to blend the data together.

 

 

Temperature Homogenization at Puerto Casado

Summary

The temperature homogenizations for the Paraguay data within both the BEST and GHCN/Gistemp surface temperature data sets point to a potential flaw within the temperature homogenization process. It removes real, but localized, temperature variations, creating incorrect temperature trends. In the case of Paraguay from 1955 to 1980, a cooling trend is turned into a warming trend. Whether this biases the overall temperature anomalies, or our understanding of climate variation, remains to be explored.

 

A small place in mid-Paraguay, on the Brazil/Paraguay border, has become the focus of the argument on temperature homogenizations.

For instance here is Dr Kevin Cowtan, of the Department of Chemistry at the University of York, explaining the BEST adjustments at Puerto Casado.

Cowtan explains at 6:40:

In a previous video we looked at a station in Paraguay, Puerto Casado. Here is the Berkeley Earth data for that station. Again the difference between the station record and the regional average shows very clear jumps. In this case there are documented station moves corresponding to the two jumps. There may be another small change here that wasn’t picked up. The picture for this station is actually fairly clear.

The first of these “jumps” was a fall of about 1°C in the late 1960s. Figure 1 expands the relevant section of the Berkeley Earth graph from the video to emphasise this change.

Figure 1 – Berkeley Earth Temperature Anomaly graph for Puerto Casado, with an expanded section showing the fall in temperature set against the estimated mean station bias.

The station move is after the fall in temperature.

Shub Niggareth looked at the metadata on the actual station move, concluding:

IT MOVED BECAUSE THERE IS CHANGE AND THERE IS A CHANGE BECAUSE IT MOVED

That is, the evidence for the station move was vague. The major evidence was the fall in temperatures. Alternative evidence is that a number of other stations in the area exhibited similar patterns.

But maybe there was some unknown measurement bias (to use Steven Mosher’s term) that would make this data stand out from the rest? I have previously looked at eight temperature stations in Paraguay with respect to the NASA Gistemp and GHCN adjustments. The BEST adjustments for those stations, along with another from Paul Homewood’s original post, are summarized in Figure 2 for the late 1960s and early 1970s. All eight have a similar downward adjustment that I estimate at between 0.8 and 1.2°C. The first six have a single adjustment; Asuncion Airport and San Juan Bautista have multiple adjustments in the period. Pedro Juan CA was of very poor data quality due to many gaps (see the GHCNv2 graph of the raw data), hence its exclusion.

GHCN Name           GHCN Location     BEST Ref   Break Type     Break Year
Concepcion          23.4 S,57.3 W     157453     Empirical      1969
Encarcion           27.3 S,55.8 W     157439     Empirical      1968
Mariscal            22.0 S,60.6 W     157456     Empirical      1970
Pilar               26.9 S,58.3 W     157441     Empirical      1967
Puerto Casado       22.3 S,57.9 W     157455     Station Move   1971
San Juan Baut       26.7 S,57.1 W     157442     Empirical      1970
Asuncion Aero       25.3 S,57.6 W     157448     Empirical      1969
                                                 Station Move   1972
                                                 Station Move   1973
San Juan Bautista   25.8 S,56.3 W     157444     Empirical      1965
                                                 Empirical      1967
                                                 Station Move   1971
Pedro Juan CA       22.6 S,55.6 W     19469      Empirical      1968
                                                 Empirical      3 in 1970s

Figure 2 – Temperature stations used in previous post on Paraguayan Temperature Homogenisations

 

Why would both BEST and GHCN remove a consistent pattern covering an area of around 200,000 km²? The first reason, as Roger Andrews has found, is that the temperature fall was confined to Paraguay. The second reason is suggested by the GHCNv2 raw data1 shown in Figure 3.

Figure 3 – GHCNv2 “raw data” mean annual temperature anomalies for eight Paraguayan temperature stations, with the 1970-1979 mean set to zero.
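As a rough indication of how anomalies of this kind can be constructed, the Python sketch below sets each station’s 1970-1979 mean to zero and then averages the eight series, as in Figure 3. The station values here are synthetic placeholders, not the GHCN data, and the column names are made up for the example.

```python
import numpy as np
import pandas as pd

# Illustration only: annual anomalies with a 1970-1979 = 0 baseline,
# derived from annual mean temperatures for eight stations.
years = np.arange(1955, 1991)
rng = np.random.default_rng(0)
stations = pd.DataFrame(
    {f"station_{i}": 24.0 + rng.normal(0, 0.3, len(years)) for i in range(8)},
    index=years,
)

baseline = stations.loc[1970:1979].mean()   # 1970-1979 mean for each station
anomalies = stations - baseline             # subtract so the baseline period averages zero
regional_mean = anomalies.mean(axis=1)      # simple average across the eight stations

print(regional_mean.loc[1970:1979].mean())  # ~0 by construction
```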

There was an average temperature fall across these eight temperature stations of about half a degree from 1967 to 1970, and of over one degree by the mid-1970s. But the fall did not happen at every station at the same time. The consistency is only shown in the periods before and after, when the data sets do not diverge. A homogenisation program would therefore see that, for any given year or month, each data set’s readings were out of line with all the other data sets. Maybe it was simply data noise, or maybe there is some unknown change, but the fall is clearly present in the data. Temperature homogenisation should simply smooth this out. Instead it cools the past. Figure 4 shows the average change resulting from the GHCN and NASA GISS homogenisations.

Figure 4 – GHCNv2 “raw data” and NASA GISS homogenized average temperature anomalies, with the net adjustment.

A cooling trend for the period 1955-1980 has been turned into a warming trend due to a flaw in the homogenization procedures.
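To make that claim measurable, a simple way to express such a trend is an ordinary least-squares slope over 1955-1980, as in the Python sketch below. The trend_per_decade helper and the two anomaly series are placeholders of my own for illustration, not the Paraguayan figures; the sketch only shows how a sign change in the trend would be quantified.

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """Ordinary least-squares trend, expressed in degC per decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return slope * 10.0

years = np.arange(1955, 1981)
raw = np.linspace(0.3, -0.4, len(years))        # placeholder: gentle cooling
adjusted = np.linspace(-0.3, 0.2, len(years))   # placeholder: gentle warming

print(trend_per_decade(years, raw))       # negative slope
print(trend_per_decade(years, adjusted))  # positive slope
```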

The Paraguayan data on its own does not materially affect the global land surface temperature, as it covers a tiny area. Further, it might be an isolated incident, or it might be offset by instances where the warming trend is understated. But what if there are smaller micro-climates that are only picked up by one or two temperature stations? Consider Figure 5, which shows the BEST adjustments for Encarnacion, one of the eight Paraguayan stations.

Figure 5 – BEST adjustment for Encarnacion.

There is the empirical break in 1968 from the table above, but also empirical breaks in 1981 and 1991 that look to be almost its exact opposite. What Berkeley Earth calls the “estimated station mean bias” is the result of actual deviations in the real data. Homogenisation eliminates much of the richness and diversity in the real-world data. The question is whether this happens consistently. First we need to understand the term “temperature homogenization”.

Kevin Marshall

Notes

  1. The GHCNv2 “raw” data is more accurately described as pre-homogenized data, that is, the raw data with some adjustments.