Plans to Increase Global Emissions at COP21 Paris


A necessary, but far from sufficient, condition for cutting global greenhouse gas emissions is that any increases in emissions in some parts of the world are offset by emissions cuts elsewhere. The INDC submissions for COP21 in Paris contain proposed emissions targets for 2010 to 2030 that suggest the opposite will be the case. For every tonne of emissions reductions in 32 leading developed countries there will be at least three tonnes of emissions increases in 7 major developing countries. The net effect of these countries achieving their targets (combined they make up about 60% of global emissions and 60% of global population) will be to make global emissions around 20% higher in 2030 than in 2010. Using UNIPCC AR5 projections, unless there are large and rapid cuts in global greenhouse gas emissions post 2030, any agreement based on those submissions will not save the world from two degrees of dangerous global warming, and will likely not save the world from three degrees of warming. This leads to a policy problem. Emissions reduction policies will only remove a small part of the harms of climate change. So even if the more extreme claims of climate catastrophism are true, it might be more beneficial for a nation to avoid emissions reduction policies.


The following analysis makes these assumptions.

  • UNIPCC estimates of the relationship between global average temperature and atmospheric greenhouse gas levels are accurate.
  • UNIPCC estimates of the relationship between greenhouse gas emissions and atmospheric greenhouse gas levels are accurate.
  • Policy commitments will always turn into concrete policy.
  • Climate change policy priorities will not conflict with other priorities.
  • All policy will be effectively implemented in full, implying the requisite technological and project management capacities are available.

The Context

The world's leaders are meeting in Paris from 30 November to 11 December to thrash out a plan to save the world from a dangerous two degrees of warming. In preparation, 146 countries, representing 87% of global emissions, have submitted plans to the United Nations Framework Convention on Climate Change (UNFCCC). These are available at the UNFCCC submissions website. Nobody appears to have gone through these submissions to evaluate whether they are consistent with that objective. I have chosen a small sample of 7 major developing nations and 32 developed nations (the EU's 28 member states have a single target), which combined represent about 60% of global emissions and 60% of global population.

The level of global emissions control required to constrain global warming is given by the IPCC in the final version of the 2014 AR5 Synthesis Report, page 21, Figure SPM 11(a), reproduced below.

The dark blue band is the maximum emissions pathway to avoid going beyond 2 degrees of warming, with RCP2.6 denoting the central pathway. The dark orange pathway would produce 2.5-3.0 degrees of warming. According to figure SPM 5(a), annual GHG emissions in 2010 were 49 GtCO2e. They are currently increasing by at least 2% a year, which puts 2015 emissions at about 54 GtCO2e. The extrapolated projection for 2030 is 70-75 GtCO2e, roughly following the solid black line of the RCP8.5 business-as-usual (non-policy) scenario. The minimum requirement for policy is that global emissions should be no higher than they were in 2010, and preferably below that level to offset the cumulative overshoot that will occur in the meantime.
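As a quick check on these figures, the short sketch below extrapolates the 2010 total at a constant 2% a year. The 2% growth rate is the simplifying assumption stated above, not an IPCC parameter, so treat the output as a rough consistency check rather than a projection.

```python
# Rough extrapolation of the business-as-usual pathway from the figures above:
# 49 GtCO2e in 2010, growing at roughly 2% a year (a simplifying assumption).

BASE_2010 = 49.0   # GtCO2e, AR5 SPM figure for 2010
GROWTH = 0.02      # assumed annual growth rate

for year in (2015, 2030):
    emissions = BASE_2010 * (1 + GROWTH) ** (year - 2010)
    print(f"{year}: ~{emissions:.0f} GtCO2e")

# Prints roughly 54 GtCO2e for 2015 and 73 GtCO2e for 2030, consistent with
# the ~54 and 70-75 GtCO2e figures quoted in the text.
```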

How does the global policy requirement fit in with the country submissions?

If the IPCC projections are correct, to avoid 2 degrees of warming being exceeded there needs to be a global cap on greenhouse gas emissions of around 50 GtCO2e almost immediately, with that level starting to fall in the early 2020s. Alternatively, if global emissions reach 60 GtCO2e without any prospect of major reductions thereafter, then on the models' projections three degrees of warming is likely to be exceeded. There is a large gap between these two scenarios, but even with submissions from a limited number of the major countries it is possible to state that the lower limit will be exceeded. This can be done by calculating the emissions increases in the major high-growth developing countries and the proposed emissions reductions in the major developed countries. This is not straightforward, as most country submissions give no clear figures, so various assumptions need to be made. For developing countries this is particularly difficult, as the estimated business as usual (BAU) emissions are usually not stated and depend on assumptions about economic growth, though sometimes there are clues within the text. For the developed countries the projections are easier to calculate, as they are relative to a date in the past. There is a further issue of which measure of emissions to use. I have used the UNFCCC's estimates of GHG emissions in its Country Briefs for 1990, 2000, 2005 and 2010.1 Many of the submissions contain both conditional and unconditional estimates of 2030 emissions; for developing countries the lower estimates are dependent on external funding, while for the other countries emissions reductions are expressed as a range. In every case I have used the lower emissions figure.2

For the developing countries, those with major projected emissions increases are as follows.3

Estimated targeted emissions increases from 2010 to 2030 for major developing countries based on INDC Submissions



[Table: for each of the seven developing countries, the estimated emissions change between 2010 and 2030, with supporting figures from the INDC Submission and the UNFCCC Country Brief.]
The targeted total increase in GHG emissions for these seven countries between 2010 and 2030 is estimated to be in excess of 13 Gt.

According to World Bank Data there were 3300 million people in these seven countries in 2013, or 46% of the global population.

For the developed countries, those with the largest quantitative emissions reductions are as follows.4

Estimated targeted emissions change from 2010 to 2030 for major developed countries from INDC Submissions



[Table: for each of the developed countries or country groups in the sample, the estimated emissions change between 2010 and 2030, with supporting figures from the INDC Submissions and the UNFCCC Country Briefs.]
The targeted total decrease in GHG emissions for these thirty-two countries between 2010 and 2030 is estimated to be 4 Gt.

According to World Bank Data there were 900 million people in these thirty-two countries in 2013, or 13% of the global population.

For every tonne of emissions reduction by the developed countries, there will be at least three tonnes of emissions increases elsewhere. Bigger reductions by these developed countries would not close the gap, as their total 2010 emissions were just 12.9 Gt. The sample of developing countries does not include a single African country, nor Pakistan, Iran, Venezuela, or numerous other countries, yet the sample of developed countries includes all the major ones.
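The arithmetic behind the headline claims can be laid out in a few lines. This is a minimal sketch using only the rounded totals quoted above, and it assumes for illustration that emissions in the countries outside the sample stay at their 2010 level.

```python
# Back-of-envelope check using the rounded figures quoted above (GtCO2e).

GLOBAL_2010 = 49.0           # AR5 SPM figure for annual GHG emissions in 2010
DEVELOPING_INCREASE = 13.0   # targeted increase, the 7 major developing countries
DEVELOPED_DECREASE = 4.0     # targeted decrease, the 32 developed countries

ratio = DEVELOPING_INCREASE / DEVELOPED_DECREASE
net_change = DEVELOPING_INCREASE - DEVELOPED_DECREASE
pct_above_2010 = 100 * net_change / GLOBAL_2010  # other countries assumed unchanged

print(f"Increase-to-decrease ratio: {ratio:.2f} to 1")
print(f"Net change from the sampled countries: +{net_change:.0f} GtCO2e")
print(f"Implied 2030 emissions relative to 2010: +{pct_above_2010:.0f}%")

# Roughly 3.25 to 1 and +18%, in line with the "at least three tonnes" and
# "around 20% higher" figures above (the 13 Gt increase is itself a lower bound).
```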

Whilst the developing countries may not achieve this increase in emissions by 2030, collectively they will achieve it shortly after that date. Many of the developed countries may not achieve their emissions reductions due to changing priorities. For instance, the EU's targeted reduction may not be achieved, with Germany abandoning nuclear power in favour of coal and Southern European states reducing renewables subsidies in response to recent economic crises.

The Elephant in the Room

In 2030, even with an agreement based on the INDC submissions signed this December in Paris and then fully implemented without compromise, there is still a problem. If the IPCC models are correct, the only way to stop three degrees of warming being exceeded is through rapid reductions in emissions in those countries where emissions have recently peaked (e.g. South Korea and China), along with steep reductions in countries where emissions are still increasing rapidly (e.g. India and Bangladesh). Unless a technological miracle happens in the next decade this is not going to happen. More likely, global emissions will keep on rising as many slower-growing African and Asian nations post ever larger increases in emissions each year.

The Policy Problem

The justification for mitigation policy is most clearly laid out in the British 2006 Stern Review, Summary of Conclusions, page vi:

Using the results from formal economic models, the Review estimates that if we don’t act, the overall costs and risks of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever. If a wider range of risks and impacts is taken into account, the estimates of damage could rise to 20% of GDP or more.

That is, the unknown and random costs of climate change can be exchanged for the lesser and predictable costs of policy. A necessary, but far from sufficient, condition for this to happen is that policy eradicates all the prospective costs of climate change. It could be that if warming is constrained to less than 2 degrees the costs of climate change would be trivial, so reality could be a close approximation of Stern's viewpoint. But if warming exceeds 3 degrees and the alleged harms are correct, then emissions-reducing policies are likely to lead to net harms for the countries implementing them and a small net benefit for those countries without policy.

Kevin Marshall


  1. The exception is Bangladesh. It is one of the few countries that clearly lays out its 2030 estimates in MtCO2, but its 2010 estimate is about 20% lower than the UNFCCC figure. I have simply used the Bangladeshi figures as submitted.
  2. For instance, the USA's target is to reduce its emissions by 26-28% on the 2005 level; I have used the 28% figure. The United States is about the only country not providing target figures for 2030. It would be imprudent to assume any greater reductions, given that it is not certain even this level will be ratified by Congress.
  3. Not all the countries outside the rich world are targeting emissions increases. Brazil and Argentina are targeting emissions reductions, whilst Thailand and South Korea appear to be targeting maintaining emissions at around 2010 levels.
  4. Not all developed countries have emissions reduction targets.
  5. South Korea, with 1.3% of 2010 global emissions, could be included in the developed countries, but its target is roughly to maintain emissions at 2010 levels. Switzerland, Norway and Singapore are all committed to emissions reductions, but combined they have less than 0.3 Gt of emissions.






A note on Bias in Australian Temperature Homogenisations

Jo Nova has an interesting and detailed guest post by Bob Fernley-Jones on rural sites in Australia that have been heavily homogenised by the Australian BOM.

I did a quick comment that was somewhat lacking in clarity. This post is to clarify my points.

In the post Bob Fernley-Jones stated

The focus of this study has been on rural stations having long records, mainly because the BoM homogenisation process has greatest relevance the older the data is.

Venema et al. 2012 stated (Italics mine)

The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities (Conrad and Pollak, 1950). In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations.

This assumption that nearby temperature stations are exposed to the same climate signal is standard practice. Victor Venema (who has his own blog) is a leading academic expert on temperature homogenisation. However, there are extreme examples where this assumption does not hold. One example is at the end of the 1960s in much of Paraguay, where average temperatures fell by one degree. As this fall was not replicated in the surrounding area, both the GISTEMP and Berkeley Earth homogenisations eliminated the anomaly, despite using very different homogenisation techniques. My analysis is here.
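To make the relative homogenisation approach concrete, here is a minimal sketch with invented numbers. It is not the GHCN, GISTEMP or Berkeley Earth code, just the difference-series idea from the Venema et al. quote: compare a candidate station with a composite of its neighbours and treat any step in the difference series as an inhomogeneity. Note that if the step is a real local climate shift, as in the Paraguay case, the procedure flags and removes it all the same.

```python
import numpy as np

# Hypothetical annual mean temperatures (degC) for a candidate station and
# three neighbours over 20 years. The candidate has a genuine 1.0 degC drop
# after year 10 that the neighbours do not share.
years = np.arange(1955, 1975)
rng = np.random.default_rng(0)
neighbours = 20.0 + rng.normal(0, 0.2, size=(3, years.size))
candidate = 20.0 + rng.normal(0, 0.2, size=years.size)
candidate[10:] -= 1.0  # the "anomalous" local shift

# Relative homogenisation: difference between candidate and composite reference.
reference = neighbours.mean(axis=0)
diff = candidate - reference

# Crude breakpoint detection: find the split that maximises the mean shift in
# the difference series (operational algorithms such as SNHT are more elaborate).
shifts = [abs(diff[:k].mean() - diff[k:].mean()) for k in range(3, years.size - 3)]
breakpoint = int(np.argmax(shifts)) + 3
step = diff[breakpoint:].mean() - diff[:breakpoint].mean()
print(f"Break detected at {years[breakpoint]}, step of {step:+.2f} degC")

# "Homogenisation" then removes the step from the candidate series, whether it
# was a measurement artefact or a real local climate signal.
adjusted = candidate.copy()
adjusted[breakpoint:] -= step
```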

On a wider scale, take a look at the GISTEMP land surface temperature anomaly map for 2014 against the 1976-2010 base period (obtained from here).

Despite being homogenised and smoothed, it is clear that trends differ from place to place. Over much of North America there was cooling, bucking the global trend. What this suggests to me is that the greater the distance between weather stations, the greater the likelihood that the climate signals will be different. Most importantly for temperature anomaly calculations, over the twentieth century the number of weather stations increased dramatically. So homogenisation is more likely to end up smoothing out local and sub-regional variations in temperature trends in the early twentieth century than in the later period. This is testable.

Why should this problem occur with expert scientists? Are they super-beings who know the real temperature data but have manufactured some falsehood? I think it is something much more prosaic. Those who work at the Australian BOM believe that the recent warming is human-caused; in fact they believe that more than 100% of the warming is human-caused. When looking at outlier records, or records that show inconsistencies, there is a very human bias. Each time the data is reprocessed they find new inconsistencies, having previously corrected the data.

Kevin Marshall

Islamophobic and Anti-Semitic Hate Crime in London

The BBC has rightly highlighted the 70.7% rise in Islamophobic crime in London, to 718 instances in the 12 months to July 2015 compared with the previous 12 months. Any such jump in crime rates should be taken seriously and tackled. To be attacked for one's religion, including being punched and having dog faeces smeared on one's head, is repulsive. However, according to the Metropolitan Police crime figures it is still less than 0.1% of the 720,939 crimes reported in total, and a fraction of the number of rapes (5,300) and robberies against the person (20,300).

Raheem Kassam of Breitbart has a point when he states that there has been a 93.4% rise in Anti-Semitic crimes, to 499, in the same period. He then points out that a Jew is several times more likely than a Muslim to be the victim of a religious hate crime in London. However, he fluffs the figures, as he compares London crime figures with the total numbers of adherents of each religion in the whole of the UK. Yet the Greater London Authority Datastore has the population of each borough along with the proportion of each religious group, and the Metropolitan Police crime figures are also broken down by borough. From this I have looked at the ten worst boroughs for Islamophobic and Anti-Semitic hate crime, which I have appended below.
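The calculation itself is just crimes divided by the relevant population. The sketch below uses the London-wide crime counts quoted in this post; the population figures are illustrative round numbers, not the exact GLA Datastore values, so the outputs should only be read as approximations of the rates given in the summary.

```python
# Rate-per-1,000 calculation as used in the summary below. Crime counts are the
# figures quoted in the post; populations are illustrative round numbers.

def rate_per_1000(crimes, population):
    return 1000 * crimes / population

islamophobic = rate_per_1000(crimes=718, population=1_200_000)  # ~0.6 per 1,000
antisemitic = rate_per_1000(crimes=499, population=156_000)     # ~3.2 per 1,000

print(f"Islamophobic: {islamophobic:.1f} per 1,000 Muslims")
print(f"Anti-Semitic: {antisemitic:.1f} per 1,000 Jews")
print(f"Relative rate: {antisemitic / islamophobic:.1f}x")      # roughly five times
```

Per-borough rates are computed the same way, from the borough-level crime counts and the GLA population estimates.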

In Summary

  • The London Borough with the highest number of reported Islamophobic hate crimes was Westminster, with 54 reported in the 12 months ended July 2015; but relative to the number of Muslims living in the borough, Islington had the highest rate, with 3.0 hate crimes per 1,000 Muslims.
  • Overall in London, the 718 reported Islamophobic hate crimes were equivalent to 0.6 per 1,000 Muslims.
  • The London Borough with the highest number of reported Anti-Semitic hate crimes was Hackney, with 122 reported in the 12 months ended July 2015; but relative to the number of Jews living in the borough, Tower Hamlets had the highest rate, with 10.6 hate crimes per 1,000 Jews.
  • Overall in London, the 499 reported Anti-Semitic hate crimes were equivalent to 3.2 per 1,000 Jews.
  • A Jew in London is therefore more than five times more likely to be the victim of a religious hate crime than a Muslim. In the London Borough of Tower Hamlets a Jew is over thirty times more likely to be a victim than a Muslim. Even in Islington, proportionately the worst borough for Muslims, a Jew is still more than twice as likely to be a victim as a Muslim.

As a final note, late yesterday evening there was an extreme Anti-Semitic attack in North Manchester. Four young men were brutally attacked at a Metrolink station. According to The Jewish Chronicle, the youngest was for a period in a coma. I join in the prayers for his speedy and full recovery.

Kevin Marshall

Degenerating Climatology 1: IPCC Statements on Human Caused Warming

This is the first in an occasional series illustrating the degeneration of climatology away from an empirical science. In my view, for climatology to be progressing it needs to be making ever clearer empirical statements that support the Catastrophic Anthropogenic Global Warming (CAGW) hypothesis, and moving away from bland statements that could just as easily support a weaker form of the hypothesis, or be consistent with random fluctuations. In figure 1 this progression is illustrated by the red arrow, with increasing depth of colour. The example given below illustrates the opposite tendency.

Obscuring the slowdown in warming in AR5

Every major temperature data set shows that the warming rate this century has been lower than that towards the end of the twentieth century. This is becoming a severe issue for those who believe that the main driver of warming is increasing atmospheric greenhouse gas levels, and it gave the IPCC a severe problem in trying to find evidence for the theory when it published in late 2013.

In the IPCC Fifth Assessment Report Working Group 1 (The Physical Science Basis) Summary for Policy Makers, the headline summary on the atmosphere is:-

Each of the last three decades has been successively warmer at the Earth’s surface than any preceding decade since 1850. In the Northern Hemisphere, 1983–2012 was likely the warmest 30-year period of the last 1400 years (medium confidence).

There are three parts to this.

  • The last three decades have been successively warmer according to the major surface temperature data sets. The 1980s were warmer than the 1970s; the 1990s warmer than the 1980s; and the 2000s warmer than the 1990s.
  • The 1980s was warmer than any preceding decade from the 1850s.
  • In the collective opinion of the climate experts, there is a greater than 66% chance that 1983–2012 was the warmest 30-year period of the last 1400 years.

What the statement does not include is the following.

  1. That the rise in global average temperatures has slowed in the last decade compared with the 1990s. In the HADCRUT4 temperature series, warming stopped from 2003.
  2. That global average temperatures also rose significantly in the mid-nineteenth and early twentieth centuries.
  3. That global average temperatures fell in 4 or 5 of the 13 decades from 1880 to 2010.
  4. That in the last 1400 years there was a warm period about 1000 years ago and a significantly cold period that bottomed out around 1820 – that is, a Medieval Warm Period and the Little Ice Age.
  5. That there is strong evidence of a Roman Warm Period about 2000 years ago and a Bronze Age warm period about 3000 years ago.

Points (1) to (3) can be confirmed by figure 2. Both of the major surface temperature anomaly data sets show warming trends in each of the last three decades, implying successive warming. A similar statement could have been made in 1943 if the data had been available.

In so far as the CAGW hypothesis is broadly defined as a non-trivial human-caused rise in temperatures (the narrower, more precise definition being that the temperature change has catastrophic consequences), there is no empirical support for it to be found in the actual temperature records or in the longer reconstructions from proxy data.

The statement above is amplified by the following statement from the press release of 27/09/2013.

It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century. The evidence for this has grown, thanks to more and better observations, an improved understanding of the climate system response and improved climate models.

This statement excludes other types of temperature change, let alone other causes of temperature change. The cooling in the 1960s is not covered. The observed temperature change is only the net impact of all influences, known or unknown. Further, the likelihood is based upon expert opinion. If the experts have always given prominence to human influences on warming (as opposed to natural and random influences) then their opinion will be biased. If, over time, that opinion is not objectively adjusted in the light of evidence that does not conform to the theory, the basis of Bayesian statistics is undermined.
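As a schematic illustration of that Bayesian point, with entirely invented numbers: if evidence that sits awkwardly with the hypothesis is never allowed to count against it, the stated confidence can only ratchet upwards, whatever the data show.

```python
# Schematic Bayes update with invented numbers. H = "warming is dominantly
# human-caused"; the "evidence" is a slowdown in the warming rate.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

prior = 0.7  # assumed starting level of expert belief

# Objective updating: if the evidence is judged less likely under H than under
# not-H, the posterior falls.
print(posterior(prior, p_evidence_given_h=0.3, p_evidence_given_not_h=0.6))  # ~0.54

# Biased updating: if the same evidence is always judged more likely under H,
# the posterior rises regardless, and confidence ratchets upwards.
print(posterior(prior, p_evidence_given_h=0.6, p_evidence_given_not_h=0.3))  # ~0.82
```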

Does the above mean that climatology is degenerating away from being a rigorous scientific discipline? So far I have only quoted the latest expert statements, without comparing them with previous ones. A comparable highlighted statement to the human-influence statement, from the Fourth Assessment Report WG1 SPM (page 3), is

The understanding of anthropogenic warming and cooling influences on climate has improved since the TAR, leading to very high confidence that the global average net effect of human activities since 1750 has been one of warming, with a radiative forcing of +1.6 [+0.6 to +2.4] W m–2

The differences are

  • The greenhouse gas effect is no longer emphasised. It is now the broader “human influence”.
  • The previous statement was prepared to associate the influence with a much longer period. Probably the collapse of hockey stick studies, with their portrayal of unprecedented warming, has something to do with this.
  • Conversely, the earlier statement is only prepared to say that since 1750 the net effect of human influences has been one of warming, whereas the more recent statement claims human influence as the dominant cause of the warming since the mid-20th century.

This leads to my final point indicating the degeneration of climatology away from science. When comparing the WG1 SPMs for TAR, AR4 and AR5 there are shifting statements. In each report the authors have chosen the statements that best fit their case at that point in time. The result is a lack of continuity that might otherwise demonstrate an increasing correspondence between theory and data.

Kevin Marshall

Can Climatology Ever Be Considered a Science?

Can climatology ever be considered a science? My favourite Richard Feynman quote.

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

I would maintain that by its nature climatology will always be a vague theory. Climate consists of an infinite number of interrelationships that can only be loosely modelled by empirical generalisations. These can only ever be imperfectly measured, although measurement is improving both in scope and in period of observations. Tweaking the models can always produce a desired outcome. In this sense climatology is never going to be a science in the way that physics and chemistry have become. But this does not mean that climatology cannot become more scientific. A step forward might be to classify empirical statements according to the part of the global warming theory they support, and the empirical content of those statements.

Catastrophic Anthropogenic Global Warming (CAGW) is a subset of AGW. The other elements of AGW are trivial, or positive. I would also include the benign impacts of aerosols in reducing the warming impacts. So AGW’ is not an empty set.

AGW is a subset of GW, where GW is the hypothesis that an increase in greenhouse gas levels will cause temperatures to rise. There could be natural causes of the rise in greenhouse gases as well, so GW’ is not an empty set.

GW is a subset of Climate Change (CC), which covers all causes of changing climate, both known and unknown, including entirely random causes.

In summary:

CAGW ⊂ AGW ⊂ GW ⊂ CC
Or diagrammatically the sets can be represented by a series of concentric rings.

To become more scientific, climatology as an academic discipline should be moving on two complementary fronts. Firstly, through generating clearer empirical confirmations, as against banal statements or conditional forecasts. Secondly, through statements becoming more unambiguously ascribable to the CAGW hypothesis in particular, rather than being just as easily ascribable to vague and causeless climate change in general. These twin aims are shown in the diagram below, where the discipline should be aiming in the direction of the red progressing arrow towards science, rather than the green degenerating arrow.

Nullis in verba, on a recent Bishop Hill discussion forum, rightly points out that the statement

“you acknowledge that scientists predicted warming. And warming is what we observed”

commits the fallacy of "affirming the consequent".

If your definition of climate change is loose enough, the observed rise could be a member of the CC set. But to infer that it is not part of GW' (i.e. that it lies inside the GW set) requires more empirical content. As Nullis has shown in his tightly worded comment, proving this is impossible. But greater empirical content will give more confidence that the scientists did not just strike lucky. Two years ago Roy Spencer attempted just that. Across 73 climate models the prediction was that between 1979 and 2012 average global temperatures would rise by between 0.3 and 1.5°C, with an average estimate of 0.8°C. Most were within the 0.6 to 1.2°C range, so any actual rise in that range – which would be pretty unusual historically – would be a fairly strong confirmation of a significant AGW impact. The actual satellite and weather balloon data showed a rise of about 0.2°C. The scientists got it wrong on the basis of their current models. At a minimum the models are running too hot, and they fail to confirm the CAGW hypothesis.
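Stated as a simple test, the Spencer comparison looks like the sketch below. The figures are those quoted in this paragraph and should be checked against the original analysis before being relied on.

```python
# Model projections versus observation for 1979-2012 warming (degC), using the
# figures quoted above; check them against Spencer's original analysis.

model_full_range = (0.3, 1.5)
model_core_range = (0.6, 1.2)   # where "most" of the 73 models fell
model_mean = 0.8
observed = 0.2                  # satellite / weather balloon estimate

def within(value, bounds):
    low, high = bounds
    return low <= value <= high

print("Observation within full model range:", within(observed, model_full_range))  # False
print("Observation within core model range:", within(observed, model_core_range))  # False
print(f"Quoted model mean is {model_mean / observed:.0f}x the quoted observation")
```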

By more clearly specifying the empirical content of statements, the scope of alternative explanations is narrowed. In this case it also gives an explanation for why someone would retreat to a more banal statement.

I would contend that confirmation of CAGW requires a combination of the warming and the adverse consequences. So even if the hurricanes had got worse after Katrina in 2005, with zero warming that on its own would just be an observation that the climate has changed. But together they would form a more empirically rich story that is explained by CAGW theory. Better still would be a number of catastrophic consequences.

In the next post I shall show some further examples of the discipline moving in the direction of degenerating climatology.

Kevin Marshall

John Cook undermining democracy through misinformation

It seems that John Cook was posting comments in 2011 under the pseudonym Lubos Motl. The year before, physicist and blogger Luboš Motl had posted a rebuttal of Cook's then 104 Global Warming & Climate Change Myths. When someone counters your beliefs point for point, most people would naturally feel some anger. Taking the online identity of Motl is potentially more than identity theft; it can be viewed as an attempt to damage the reputation of someone you oppose.

However, there is a wider issue here. In 2011 John Cook co-authored, with Stephan Lewandowsky, The Debunking Handbook, which is still featured prominently on the Skepticalscience website. This short tract starts with the following paragraphs:-

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

A common misconception about myths is the notion that removing its influence is as simple as packing more information into people’s heads. This approach assumes that public misperceptions are due to a lack of knowledge and that the solution is more information – in science communication, it’s known as the “information deficit model”. But that model is wrong: people don’t process information as simply as a hard drive downloading data.

If Cook was indeed using the pseudonym Lubos Motl then he was knowingly putting misinformation into the public arena in a malicious form. If he misrepresented Motl's beliefs, then the public may not know who to trust. Targeted against one effective critic, it could trash their reputation. At a wider scale it could allow morally and scientifically inferior views to gain prominence over superior viewpoints. If the alarmist beliefs were superior, it would not be necessary to misrepresent alternative opinions. Open debate would soon reveal which side had the better views. But in debating and disputing, all sides would sharpen their arguments. What would quickly disappear is the reliance on opinion surveys and the rewriting of dictionaries. Instead, proper academics would be distinguishing quality, relevant evidence from dogmatic statements based on junk sociology and psychology. They would start defining the boundaries of expertise between the basic physics, computer modelling, results analysis, public policy-making, policy implementation, economics, ethics and the philosophy of science. They might then start to draw on the understanding that has been achieved in these subject areas.

Kevin Marshall

A Great and Humble Man Dies

Sir Nicholas Winton died today at 106 Years Old. A true hero of mine.

Climatic Temperature Variations

In the previous post I identified that the standard definition of temperature homogenisation assumes that there is little or no variation in climatic trends within the homogenisation area. I also highlighted specific instances where this assumption has failed. However, the examples may be just isolated and extreme instances, or there might be other, offsetting instances, so the failures could cancel each other out without a systematic bias globally. Here I explore why this assumption should not be expected to hold anywhere, and how it may have biased the picture of recent warming. After a couple of proposals to test for this bias, I look at alternative scenarios that could bias the global average temperature anomalies. I concentrate on the land surface temperatures, though my comments may also apply to the sea surface temperature data sets.


Comparing Two Recent Warming Phases

An area that I am particularly interested in is the relative size of the early twentieth century warming compared with the more recent warming phase. This relative size, along with the explanations for those warming periods, gives a route into determining how much of the recent warming was human-caused. Dana Nuccitelli attempted such an explanation at the skepticalscience blog in 20111. Figure 1 shows the NASA Gistemp global anomaly in black, along with a split by eight bands of latitude. Of note are the polar extremes, each covering 5% of the surface area. For the Arctic, the trough-to-peak warming of 1885-1940 is pretty much the same as the trough-to-peak warming from 1965 to the present. But in the earlier period it is effectively cancelled out by the cooling in the Antarctic. This cooling, I found, was likely caused by the use of inappropriate proxy data from a single weather station3.

Figure 1. Gistemp global temperature anomalies by band of latitude2.

For the current issue, of particular note is the huge variation in trends by latitude from the global average derived from the homogenised land and sea surface data. Delving further, GISS provide some very useful maps of their homogenised and extrapolated data4. I compare two spans of identical length – 1944 against a 1906-1940 base period, and 2014 against a 1976-2010 base period. The selection criteria for the maps are in figure 2.

Figure 2. Selection criteria for the Gistemp maps.

Figure 3. Gistemp map representing the early twentieth surface warming phase for land data only.

Figure 4. Gistemp map representing the recent surface warming phase for land data only.

The later warming phase is almost twice the magnitude of, and has much better coverage than, the earlier one: 0.43°C against 0.24°C. In both cases the range of warming in the 250km grid cells is between -2°C and +4°C, but the variations are not the same. For instance, the most extreme warming in both periods is at the higher latitudes. But with respect to North America, in the earlier period the most extreme warming is over the Northwest Territories of Canada, whilst in the later period the most extreme warming is over Western Alaska, with the Northwest Territories showing near-average warming. In the United States, in the earlier period there is cooling over the Western USA, whilst in the later period there is cooling over much of the Central USA and strong warming in California. In the USA the coverage of temperature stations is quite good, at least compared with much of the Southern Hemisphere. Euan Mearns has looked at a number of areas in the Southern Hemisphere4, which he summarised on the map in Figure 5.

Figure 5. Euan Mearns says of the above: "S Hemisphere map showing the distribution of areas sampled. These have in general been chosen to avoid large centres of human population and prosperity."

For the current analysis Figure 6 is most relevant.

Figure 6. Euan Mearns says of the above: "The distribution of operational stations from the group of 174 selected stations."

The temperature data for the earlier period is much sparser than for the later period. Even where data is available in the earlier period, it could be based on a fifth of the number of temperature stations used in the later period. The map may exaggerate the issue slightly, as the coasts of South America and Eastern Australia are avoided.

An Hypothesis on the Homogenisation Impact

Now consider again the description of homogenisation in Venema et al 20125, quoted in the previous post.


The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities. In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations. (Italics mine)


The assumption of the same climate signal across the homogenisation area will not apply where temperature stations are thin on the ground. The degree to which homogenisation eliminates real-world variations in trend could be, to some extent, inversely related to station density. Given that the density of temperature data points diminishes rapidly in most areas of the world as one goes back in time beyond 1960, homogenisation in the early warming period is far more likely to be between climatically different temperature stations than in the later period. My hypothesis is that homogenisation will reduce the early twentieth century warming phase relative to the recent warming phase, as in the earlier period homogenisation operates over much larger areas, with larger real climate variations within the homogenisation area.

Testing the Hypothesis

There are at least two ways that my hypothesis can be evaluated. Direct testing of information deficits is not possible.

First is to conduct temperature homogenisations on similarly sparse levels of actual data for the entire twentieth century. If done for a region, the full set of actual data used in the global temperature anomalies should also be run for that region, for comparison. This should show whether the post-homogenisation recent warming phase is reduced when less data is used.

Second is to examine the relative size of the adjustments against the availability of comparative data. This can be done in various ways. For instance, I quite like the examination of the Manaus grid block record that Roger Andrews did in his post The Worst of BEST6.
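One way of operationalising this second test would be a simple correlation between the size of each station's homogenisation adjustment and the number of comparator stations available at the time. The sketch below uses placeholder arrays; the real inputs would come from raw-minus-homogenised series and station counts within the homogenisation radius.

```python
import numpy as np

# Placeholder data: absolute homogenisation adjustment (degC) per station-period
# and the number of comparator stations available for that period.
adjustment_size = np.array([0.9, 0.7, 0.6, 0.4, 0.35, 0.2, 0.15, 0.1])
comparator_count = np.array([2, 3, 4, 6, 8, 12, 18, 25])

r = np.corrcoef(comparator_count, adjustment_size)[0, 1]
print(f"Correlation between comparator density and adjustment size: {r:.2f}")

# A strongly negative correlation would be consistent with the hypothesis that
# sparse networks force homogenisation across genuinely different climates.
```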

Counter Hypotheses

There are two counter hypotheses on temperature bias. These may undermine my own hypothesis.

First is urbanisation bias. Euan Mearns, in looking at temperature data for the Southern Hemisphere, tried to avoid centres of population because of this bias in the data. It is easy to surmise that the lack of warming Mearns found in central Australia7 reflects the absence of the urbanisation bias present in the large coastal cities. However, the GISS maps do not support this. Ronan and Michael Connolly8 of Global Warming Solved claim that the urbanisation bias in the global temperature data is roughly equivalent to the entire warming of the recent epoch. I am not sure that the urbanisation bias is so large, but even if it were, it could be complementary to my trend-based hypothesis.

Second is that homogenisation adjustments could be greater the more distant in the past they occur. It has been noted (by Steve Goddard in particular) that each new set of GISS adjustments alters past data. The same data set used to test my hypothesis above could also be used to test this one, by conducting homogenisation runs on the data to date, then only to 2000, then to 1990, and so on. It could be that the earlier warming trend is somehow suppressed by homogenising the most recent data first and then working backwards through a number of iterations, each one using the results of the previous pass. For trends that differ over shorter periods but converge over longer ones, this could magnify the divergence, and thus cause differences in trends decades in the past to be amplified, as such differences in trend appear to the algorithm to be more anomalous than they actually are.

Kevin Marshall


  1. Dana Nuccitelli – What caused early 20th Century warming? 24.03.2011
  2. Source
  3. See my post Base Orcadas as a Proxy for early Twentieth Century Antarctic Temperature Trends 24.05.2015
  4. Euan Mearns – The Hunt For Global Warming: Southern Hemisphere Summary 14.03.2015. Area studies are referenced on this post.
  5. Venema et al 2012 – Venema, V. K. C., Mestre, O., Aguilar, E., Auer, I., Guijarro, J. A., Domonkos, P., Vertacnik, G., Szentimrey, T., Stepanek, P., Zahradnicek, P., Viarre, J., Müller-Westermeier, G., Lakatos, M., Williams, C. N., Menne, M. J., Lindau, R., Rasol, D., Rustemeier, E., Kolokythas, K., Marinova, T., Andresen, L., Acquaotta, F., Fratianni, S., Cheval, S., Klancar, M., Brunetti, M., Gruber, C., Prohom Duran, M., Likso, T., Esteban, P., and Brandsma, T.: Benchmarking homogenization algorithms for monthly data, Clim. Past, 8, 89-115, doi:10.5194/cp-8-89-2012, 2012.
  6. Roger Andrews – The Worst of BEST 23.03.2015
  7. Euan Mearns – Temperature Adjustments in Australia 22.02.2015
  8. Ronan and Michael Connolly – Summary: “Urbanization bias” – Papers 1-3 05.12.2013

Defining “Temperature Homogenisation”


The standard definition of temperature homogenisation is of a process that cleanses the temperature data of measurement biases, leaving only variations caused by real climatic or weather variations. This is at odds with the GHCN and GISS adjustments, which delete some data and add in other data as part of the homogenisation process. A more general definition is making the data more homogeneous, for the purposes of creating regional and global average temperatures. This is only compatible with the standard definition if one assumes that there are no real variations in trend within the homogenisation area. From various studies it is clear that there are cases where this assumption does not hold. The likely impacts include the following:-

  • Homogenised data for a particular temperature station will not be the cleansed data for that location. Instead it becomes a grid reference point, encompassing data from the surrounding area.
  • Different densities of temperature data may lead to different degrees to which homogenisation results in smoothing of real climatic fluctuations.

Whether or not this failure of understanding is limited to a number of isolated instances with a near zero impact on global temperature anomalies is an empirical matter that will be the subject of my next post.



A common feature of many concepts involved with climatology, the associated policies, and the sociological analyses of non-believers is a failure to clearly understand the terms used. In the past few months it has become evident to me that this failure of understanding extends to the term temperature homogenisation. In this post I look at the ambiguity of the standard definition when set against the actual practice of homogenising temperature data.


The Ambiguity of the Homogenisation Definition

The World Meteorological Organisation, in its 2004 Guidelines on Climate Metadata and Homogenization1, wrote this explanation.

Climate data can provide a great deal of information about the atmospheric environment that impacts almost all aspects of human endeavour. For example, these data have been used to determine where to build homes by calculating the return periods of large floods, whether the length of the frost-free growing season in a region is increasing or decreasing, and the potential variability in demand for heating fuels. However, for these and other long-term climate analyses –particularly climate change analyses– to be accurate, the climate data used must be as homogeneous as possible. A homogeneous climate time series is defined as one where variations are caused only by variations in climate.

Unfortunately, most long-term climatological time series have been affected by a number of nonclimatic factors that make these data unrepresentative of the actual climate variation occurring over time. These factors include changes in: instruments, observing practices, station locations, formulae used to calculate means, and station environment. Some changes cause sharp discontinuities while other changes, particularly change in the environment around the station, can cause gradual biases in the data. All of these inhomogeneities can bias a time series and lead to misinterpretations of the studied climate. It is important, therefore, to remove the inhomogeneities or at least determine the possible error they may cause.


That is, temperature homogenisation is necessary to isolate and remove what Steven Mosher has termed measurement biases2 from the real climate signal. But how does this isolation occur?

Venema et al 20123 states the issue more succinctly.


The most commonly used method to detect and remove the effects of artificial changes is the relative homogenization approach, which assumes that nearby stations are exposed to almost the same climate signal and that thus the differences between nearby stations can be utilized to detect inhomogeneities (Conrad and Pollak, 1950). In relative homogeneity testing, a candidate time series is compared to multiple surrounding stations either in a pairwise fashion or to a single composite reference time series computed for multiple nearby stations. (Italics mine)


Blogger …and Then There's Physics (ATTP) partly recognises that these issues may exist in his stab at explaining temperature homogenisation4.

So, it all sounds easy. The problem is, we didn’t do this and – since we don’t have a time machine – we can’t go back and do it again properly. What we have is data from different countries and regions, of different qualities, covering different time periods, and with different amounts of accompanying information. It’s all we have, and we can’t do anything about this. What one has to do is look at the data for each site and see if there’s anything that doesn’t look right. We don’t expect the typical/average temperature at a given location at a given time of day to suddenly change. There’s no climatic reason why this should happen. Therefore, we’d expect the temperature data for a particular site to be continuous. If there is some discontinuity, you need to consider what to do. Ideally you look through the records to see if something happened. Maybe the sensor was moved. Maybe it was changed. Maybe the time of observation changed. If so, you can be confident that this explains the discontinuity, and so you adjust the data to make it continuous.

What if there isn’t a full record, or you can’t find any reason why the data may have been influenced by something non-climatic? Do you just leave it as is? Well, no, that would be silly. We don’t know of any climatic influence that can suddenly cause typical temperatures at a given location to suddenly increase or decrease. It’s much more likely that something non-climatic has influenced the data and, hence, the sensible thing to do is to adjust it to make the data continuous. (Italics mine)

The assumption that nearby temperature stations share the same (or a very similar) climatic signal, if true, would mean that homogenisation cleanses the data of the impurities of measurement biases. But only a cursory glance is given to the data. For instance, when Kevin Cowtan gave an explanation of the fall in average temperatures at Puerto Casado, neither he, nor anyone else, checked whether the explanation stacked up beyond seeing if there had been a documented station move at roughly that time. Yet the station move is at the end of the drop in temperatures, and a few minutes' checking would have confirmed that other nearby stations exhibit very similar temperature falls5. If you have a preconceived view of how the data should be, then a superficial explanation that conforms to that preconception will be sufficient. If you accept the authority of experts over personally checking for yourself, then the claim by experts that there is not a problem is sufficient. Those with no experience of checking the outputs following processing of complex data will not appreciate the issues involved.


However, this definition of homogenisation appears to be different from that used by GHCN and NASA GISS. When Euan Mearns looked at temperature adjustments in the Southern Hemisphere and in the Arctic6, he found numerous examples in the GHCN and GISS homogenisations of infilling of some missing data and, to a greater extent, deletion of huge chunks of temperature data. For example, the graphic below is Mearns' spreadsheet of adjustments between GHCNv2 (raw data plus adjustments) and GHCNv3 (homogenised data) for 25 stations in Southern South America. The yellow cells are where V2 data exist but V3 data do not; the green cells are where V3 data exist but V2 data do not.



Definition of temperature homogenisation

A more general definition that encompasses the GHCN / GISS adjustments is of broadly making the data homogeneous. It is not done simply by blending the data together and smoothing it out. Homogenisation also adjusts anomalous data through pairwise comparisons between local temperature stations, or, in the case of extreme differences, the GHCN / GISS process deletes the most anomalous data. This is a much looser and broader process than the homogenisation of milk, or putting food through a blender.

I cover the definition in more depth in the appendix.



The Consequences of Making Data Homogeneous

A consequence of cleansing the data in order to make it more homogeneous is a distinction that is missed by many. It arises from the strong assumption that there are no climatic differences between the temperature stations in the homogenisation area.

Homogenisation is aimed at adjusting for measurement biases to give a climatic reading for the location of the temperature station that is a closer approximation to what that reading would have been without those biases. Under the strong assumption, making the data homogeneous is identical to removing the non-climatic inhomogeneities. Cleansed of these measurement biases, the temperature data is then both the average temperature readings that would have been generated if the temperature station had been free of biases and representative of the surrounding area. This latter aspect is necessary for building up a global temperature anomaly, which is constructed by dividing the surface into a grid. Homogenisation, in the sense of making the data more homogeneous by blending, is an inappropriate term. All that is happening is the adjustment of anomalies within the data through comparisons with local temperature stations (the GHCN / GISS method) or with an expected regional average (the Berkeley Earth method).
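For what the gridding step involves, here is a minimal sketch rather than the GISS or Berkeley Earth method: homogenised station anomalies are assigned to grid cells, averaged within each cell, and the cells are combined with weights proportional to their surface area. The stations and cell size are hypothetical.

```python
import math
from collections import defaultdict

# Minimal sketch of building a gridded average from homogenised station
# anomalies (degC). Stations are hypothetical; real products use finer grids,
# distance weighting and interpolation into empty cells.
stations = [
    # (latitude, longitude, anomaly)
    (51.5, -0.1, 0.6), (48.9, 2.3, 0.5), (40.7, -74.0, 0.3), (-33.9, 151.2, 0.4),
]
CELL = 5.0  # grid cell size in degrees

cells = defaultdict(list)
for lat, lon, anomaly in stations:
    key = (math.floor(lat / CELL), math.floor(lon / CELL))
    cells[key].append(anomaly)

weighted_sum = 0.0
weight_total = 0.0
for (lat_idx, _lon_idx), anomalies in cells.items():
    cell_anomaly = sum(anomalies) / len(anomalies)
    cell_centre_lat = (lat_idx + 0.5) * CELL
    weight = math.cos(math.radians(cell_centre_lat))  # cell area shrinks towards the poles
    weighted_sum += weight * cell_anomaly
    weight_total += weight

print(f"Grid-weighted average anomaly: {weighted_sum / weight_total:+.2f} degC")
```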


But if the strong assumption does not hold, homogenisation will adjust away these climate differences, and will to some extent fail to eliminate the measurement biases. Homogenisation is in fact made more necessary if movements in average temperatures are not the same everywhere and the spread of temperature data is spatially uneven. Then homogenisation needs not only to remove the anomalous data, but also to make specific locations more representative of the surrounding area. This enables any imposed grid structure to create an estimated average for each grid area by averaging the homogenised temperature data sets within it. As a consequence, the homogenised data for a temperature station will cease to be a close approximation to what the thermometers would have read free of any measurement biases. As homogenisation is calculated from comparisons with temperature stations beyond those immediately adjacent, there will be, to some extent, influences of climatic changes beyond the local temperature stations. The consequences of climatic differences within the homogenisation area include the following.


  • The homogenised temperature data for a location could appear largely unrelated to the original data or to the data adjusted for known biases. This could explain the homogenised Reykjavik temperature, where Trausti Jonsson of the Icelandic Met Office, who had been working with the data for decades, could not understand the GHCN/GISS adjustments7.
  • The greater the density of temperature stations in relation to the climatic variations, the less that climatic variations will impact on the homogenisations, and the greater will be the removal of actual measurement biases. Climate variations are unlikely to be much of an issue with the Western European and United States data. But on the vast majority of the earth’s surface, whether land or sea, coverage is much sparser.
  • If the climatic variation at a location is of different magnitude to that of other locations in the homogenisation area, but over the same time periods and in the same direction, then the data trends will be largely retained. For instance, in Svalbard the warming trends of the early twentieth century and from the late 1970s were much greater than elsewhere, so were adjusted downwards8.
  • If there are differences in the rate of temperature change, or in the time periods over which similar changes occur, then any "anomalous" data due to climatic differences at the location will be eliminated or severely adjusted, on the same basis as "anomalous" data due to measurement biases. For instance, in a large part of Paraguay at the end of the 1960s average temperatures fell by around 1°C. Because this phenomenon did not occur in the surrounding areas, both the GHCN and Berkeley Earth homogenisation processes adjusted out this trend. As a consequence of this adjustment, a mid-twentieth century cooling in the area was effectively adjusted out of the data9.
  • If a large proportion of temperature stations in a particular area have consistent measurement biases, then homogenisation will retain those biases, as they will not appear anomalous within the data. For instance, much of the extreme warming post 1950 in South Korea is likely to have been a result of urbanization10.


Other Comments

Homogenisation is just one part of the process of adjusting data for the twin purposes of correcting for biases and building regional and global temperature anomalies. It cannot, for instance, correct for time of observation bias (TOBS); this needs to be done prior to homogenisation. Neither will homogenisation build a global temperature anomaly. Extrapolating from the limited data coverage is a further process, whether for fixed temperature stations on land or for the ship measurements used to calculate the ocean surface temperature anomalies. This extrapolation has its own difficulties. For instance, in a previous post11 I covered a potential issue with the Gistemp proxy data for Antarctica prior to permanent bases being established on the continent in the 1950s. Making the data homogeneous is but the middle part of a wider process.

Homogenisation is a complex process. The Venema et al 20123 paper on the benchmarking of homogenisation algorithms demonstrates that different algorithms produce significantly different results. What is clear from the original posts on the subject by Paul Homewood, and the more detailed studies by Euan Mearns and Roger Andrews at Energy Matters, is that the whole process of going from the raw monthly temperature readings to the final global land surface average trends has thrown up some peculiarities. In order to determine whether these are isolated instances with a near zero impact on the overall picture, or point to more systematic biases resulting from the issues raised above, it is necessary to understand the data available in relation to the overall global picture. That will be the subject of my next post.


Kevin Marshall



  1. GUIDELINES ON CLIMATE METADATA AND HOMOGENIZATION by Enric Aguilar, Inge Auer, Manola Brunet, Thomas C. Peterson and Jon Wieringa
  2. Steven Mosher – Guest post : Skeptics demand adjustments 09.02.2015
  3. Venema et al 2012 – Venema, V. K. C., Mestre, O., Aguilar, E., Auer, I., Guijarro, J. A., Domonkos, P., Vertacnik, G., Szentimrey, T., Stepanek, P., Zahradnicek, P., Viarre, J., Müller-Westermeier, G., Lakatos, M., Williams, C. N., Menne, M. J., Lindau, R., Rasol, D., Rustemeier, E., Kolokythas, K., Marinova, T., Andresen, L., Acquaotta, F., Fratianni, S., Cheval, S., Klancar, M., Brunetti, M., Gruber, C., Prohom Duran, M., Likso, T., Esteban, P., and Brandsma, T.: Benchmarking homogenization algorithms for monthly data, Clim. Past, 8, 89-115, doi:10.5194/cp-8-89-2012, 2012.
  4. …and Then There’s Physics – Temperature homogenisation 01.02.2015
  5. See my post Temperature Homogenization at Puerto Casado 03.05.2015
  6. For example

    The Hunt For Global Warming: Southern Hemisphere Summary

    Record Arctic Warmth – in 1937

  7. See my post Reykjavik Temperature Adjustments – a comparison 23.02.2015
  8. See my post RealClimate’s Mis-directions on Arctic Temperatures 03.03.2015
  9. See my post Is there a Homogenisation Bias in Paraguay’s Temperature Data? 02.08.2015
  10. NOT A LOT OF PEOPLE KNOW THAT (Paul Homewood) – UHI In South Korea Ignored By GISS 14.02.2015



Appendix – Definition of Temperature Homogenisation

When discussing temperature homogenisations, nobody asks what the term actually means. In my house we consume homogenised milk. This is the same as the pasteurised milk I drank as a child except for one aspect. As a child I used to compete with my siblings to be the first to open a new pint bottle, as it had the cream on top. The milk now does not have this cream, as it is blended in, or homogenised, with the rest of the milk. Temperature homogenisations are different, involving changes to the figures, along with (at least with the GHCN/GISS data) filling gaps in some places and removing data in others1.

But rather than note the differences, it is better to consult an authoritative source. From the dictionary, the definitions of homogenize are:-

verb (used with object), homogenized, homogenizing.

  1. to form by blending unlike elements; make homogeneous.
  2. to prepare an emulsion, as by reducing the size of the fat globules in (milk or cream) in order to distribute them equally throughout.
  3. to make uniform or similar, as in composition or function:

    to homogenize school systems.

  4. Metallurgy. to subject (metal) to high temperature to ensure uniform diffusion of components.

Applying the dictionary definitions, data homogenization in science is not about blending various elements together, nor about additions or subtractions from the data set, or adjusting the data. This is particularly true in chemistry.

For the GHCN and NASA GISS temperature data, homogenisation involves removing or adjusting elements in the data that are markedly dissimilar from the rest. It can also mean infilling data that was never measured. The verb homogenise does not fit the processes at work here. This has led some, like Paul Homewood, to refer to the process as data tampering, or worse. A better idea is to look further at the dictionary.

Again from the same dictionary, the first two definitions of the adjective homogeneous are:-

  1. composed of parts or elements that are all of the same kind; not heterogeneous:

a homogeneous population.

  2. of the same kind or nature; essentially alike.

I would suggest that temperature homogenisation is a loose term for describing the process of making the data more homogeneous – that is, smoothing out the data in some way. A false analogy is when I make a vegetable soup. After cooking I end up with a stock containing lumps of potato, carrot, leeks and so on. I put it through the blender to get an even consistency, and I end up with the same weight of soup before and after. A similar process of getting out the same as went in is clearly not what is happening to temperatures. The aim of making the data homogeneous is both to remove anomalous data and to blend the data together.



Ivanpah Solar Project Still Failing to Achieve Potential

Paul Homewood yesterday referred to a Marketwatch report titled “High-tech solar projects fail to deliver.” This was reposted at Tallbloke.

Marketwatch looks at the Ivanpah solar project. They comment

The $2.2 billion Ivanpah solar power project in California’s Mojave Desert is supposed to be generating more than a million megawatt-hours of electricity each year. But 15 months after starting up, the plant is producing just 40% of that, according to data from the U.S. Energy Department.

I looked at the Ivanpah solar project last fall, when the investors applied for a $539 million federal grant to help pay off a $1.5 billion federal loan. One of the largest investors was Google, which at the end of 2013 had Cash, Cash Equivalents & Marketable Securities of $58,717 million, $10,000 million more than the year before.

Technologically the Ivanpah plant seems impressive. It is worth taking a look at the website.

That might have been the problem. The original projections were for 1,065,000 MWh annually from a 392 MW nameplate capacity, implying a planned output of 31% of capacity. When I look at the costings on Which? for solar panels on the roof of a house, they assume just under 10% of capacity. Another site, Wind and Sun UK, says:

1 kWp of well sited PV array in the UK will produce 700-800 kWh of electricity per year.

That is around 8-9.5% of capacity. Even allowing for the technological superiority of the project and the climatic differences, three times that figure is a bit steep, although 12.5% (40% of 31%) is very low. From Marketwatch, some of the difference can be explained by:

  • Complex equipment constantly breaking down
  • Optimization of complex new technologies
  • Steam pipes leaking due to vibrations
  • Generating the initial steam takes longer than expected
  • It is cloudier than expected

However, even all of this cannot account for the output being only 40% of that expected. With the strong desert sun I would expect daily output never to exceed 40% of the theoretical maximum, as it is only daylight for 50% of the time, and just after sunrise and before sunset the sun is less strong than at midday. As well as the teething problems with complex technology, it appears that the engineers were over-optimistic. A lack of due diligence in appraising the scheme – a factor common to many large-scale Government-backed initiatives – let the engineers have the finance for a fully scaled-up version of what should have been a small-scale project to prove the technology.
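The capacity-factor arithmetic above can be made explicit. A small sketch using the figures quoted in this post:

```python
# Capacity factor arithmetic using the figures quoted above.

HOURS_PER_YEAR = 8760
nameplate_mw = 392
planned_mwh = 1_065_000          # original annual projection
actual_fraction_of_plan = 0.40   # Marketwatch / DOE figure quoted above

planned_capacity_factor = planned_mwh / (nameplate_mw * HOURS_PER_YEAR)
actual_capacity_factor = planned_capacity_factor * actual_fraction_of_plan

print(f"Planned capacity factor: {planned_capacity_factor:.0%}")   # ~31%
print(f"Actual capacity factor:  {actual_capacity_factor:.0%}")    # ~12%

# For comparison, the UK rooftop-PV figures quoted above imply roughly 8-10%.
```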


