Royal Baby Names to Save the United Kingdom

In a complete break from my normal posts on climate, after discussions with my daughter, I am going to speculate on Royal baby names.

In less than two weeks’ time there will be a General Election. Given that the betting is that a Labour/Scots Nat coalition is likely, there is a strong possibility that the incoming Government could lead to the breakup of the United Kingdom. The Scottish Nationalists have very similar left-of-centre policies to Labour, but their ultimate aim of independence would in turn mean the breakup of the United Kingdom. The Royal Family, particularly The Queen, firmly believes in the United Kingdom, and has long been proud of its Scottish roots (the Queen is half Scots), but at the same time does not directly intercede in politics, except in the most tangential ways. The naming of a Royal Baby who is fourth in line to the throne is one of the few methods open to the Royal Family of sending a political message. The name must not offend the Scots, but at the same time must satisfy the far more numerous English. It must also be seen as reasonably modern, yet in keeping with royal traditions.

If Catherine, Duchess of Cambridge gives birth to a baby boy there will be a number of names that could be chosen. Prince George Alexander Louis set a precedent. George is the name of six British Monarchs, but is identified as very English. It was in the reign of George II (1727-1760), for instance, that the Young Pretender Bonnie Prince Charlie was finally defeated at Culloden in 1746. The second name, whilst being highly international, was also the name of three medieval Scottish Kings, and the Gaelic form is Alistair. Louis is from Prince William’s great uncle Lord Louis Mountbatten, who was very close to the Prince of Wales. There were both traditional English and Scottish elements in the name, without it seeming too old-fashioned. Alexander has already been used, so is counted out. Some names of Scottish Kings cannot be considered. “Kenneth” and “Duncan” are very old-fashioned. Macbeth was trashed as a plausible name by William Shakespeare. Lulach, Amlaíb, Cuilén, Dub and Indulf are too lost in time to inflict on any child, and would need explaining. This leaves James, David and Robert. In Scotland James is currently the fourth most popular boys’ name, behind Jack, Lewis and Riley. At Betfair it is the most popular boys’ name. So this might be a strong contender. However, the Royal Family will want to make an imprint less than two weeks before a General Election that could destroy the United Kingdom that the Queen pledged to defend. James is both Scottish and English. We have the King James Bible of 1611 that helped unite the factions in the Church of England for a while. But King James VI of Scotland (and James I of England) was an anomaly. He was raised a Scottish Presbyterian, and in commissioning this great work sought to bring together both the Puritan and Catholic elements of the Anglican Church. His grandson, James II, almost caused a second Civil War through his Catholic tendencies, resulting in the current inability of the heir to the throne to marry a Roman Catholic. The betting markets, along with my daughter, may favour such a name, but the Queen may advise against.

So what is an appropriate boys’ name for a possible (but unlikely) future monarch, whose only role may be to save the Union by being born?

There are two courses that the Royal Family may take. I believe that they will take the safe course, and call the boy David. It has both strong Scottish roots, and David is the patron saint of Wales. But the option to save the United Kingdom is Robert. On 24th June 1314, Robert the Bruce defeated the English forces of Edward II (son of Edward I, “Hammer of the Scots”) at Bannockburn near Stirling. Two and a half centuries earlier William of Normandy had defeated the Anglo-Saxons (English) at the Battle of Hastings. Although “Edward” was Anglo-Saxon in origin, the “English Kings” still spoke French at Court. Most of those fighting on Edward’s side could understand their Sovereign’s words as little as the Gaelic-speaking Scots could. If Robert is chosen, a second name cannot be Edward. But another ancient Anglo-Saxon royal name, still used today, is Edmund. How better for the Royal Family to remember the subjugated of 800 years ago, uniting the downtrodden of both Scotland and England, reconciling ancient enmities, and honouring the ancient Kings of both countries. A third name could be David, or one that nods to the Irish, such as Kevin.

Girls’ names are more difficult. The most famous Scottish girls’ name is Margaret, which until the 1960s was easily the most popular name. I have an Auntie Margaret, have fond memories of my Great Aunt Margaret, and had (by all accounts) a formidable Great Grandmother Margaret Ross, who died at the age of 93 when I was 3 years old. Many will remember the Queen’s sister, Princess Margaret. But the name is now not in the top 100 of girls’ names in Scotland, and (due to the Royal connection) will not be viewed as particularly Scottish. In left-of-centre Scottish minds, it is also the Christian name of one of the twentieth century’s greatest Prime Ministers.

There are not many Queens of Scotland. The most famous is Mary Queen of Scots, who, being a French-speaking Catholic, was hardly a figurehead for an increasingly Presbyterian Scotland of the time, nor for a British Monarchy that has defended the middle-of-the-road Anglican Communion for well over 300 years. Scottish Queens consort were undistinguished, and with names such as Maud, Joan, Sybilla, Ethelreda and Gruoch are hardly able to capture the imagination of the Scottish public. Margaret is again the most popular name, followed by Elizabeth. Looking at the most popular Scottish girls’ names in 2012, they are Sophie, followed by Emily, Olivia, Ava and Lucy. Hardly royal, and not much different from England. A statement cannot easily be made. The last truly Scottish Royal was Queen Elizabeth the Queen Mother, so Elizabeth might be a family option. The policy might be to play safe, or, in a thorough break with tradition, let the parents decide.

Kevin Marshall

Declaration of Interest

I was born and bred in England, but my Mother is, and three of my grandparents were, Scottish. I named my son Edmund Alexander. The latter name was after a Great Grandfather and an Uncle who was always known as Alistair. I consider myself British, and am proud of both my Derbyshire and North Scottish ancestry.


Labour Manifesto is misleading the British Public

Today Ed Miliband formally launched the Labour Election Manifesto 2015. See the summary at the BBC.

David Cameron has called it a con trick. (Hattip Conservative Home)

This con trick claim can be substantiated by reading the Manifesto. Here are a few snippets.


The Economy

On the Economy, Labour realize they have an uphill struggle. A couple of examples:-

“We will cut the deficit every year with a surplus on the current budget…”

The current budget deficit is the difference between tax revenue and current spending. To get the total deficit you need to add in (what used to be called) capital expenditure.

Remember Gordon Brown’s Golden Rule of only borrowing to invest?

Ed Miliband will return Britain to the days of 2001-2008, when Labour built up a structural deficit of £50-£70bn. It is for this reason that there is still a huge deficit, not the credit crunch. Labour still do not understand that public sector capital investment does not provide financial returns. New roads, schools and hospitals are not constructed to generate revenue, as in a business, but to provide social returns. Properly spent, overall welfare is increased, despite capital spending creating additional financial burdens in terms of staffing and maintenance.

“There is not a single policy in this manifesto that is funded by additional borrowing.”

This is grossly misleading. Labour are committed to at least maintaining current spending levels. When there is a deficit that means new additional borrowing is required, adding to the total debt. What Labour mean is that additional spending will be funded by additional taxes.


Discouraging entrepreneurship, jobs and growth

There is a subsection headed “We will back our entrepreneurs and businesses”.

The measures are tiny. Instead, here is a scattering of policy initiatives that will likely damage British businesses and help undermine economic growth.

  1. “We will reverse the Government’s top-rate tax cut.”

    British Entrepreneurs will be discouraged from investing in Britain. They will go elsewhere.


  2. “We will abolish the non-dom rules…”

    Ed Balls in January said

    “I think if you abolish the whole (non-dom) status then probably it ends up costing Britain money”

    There are a lot of people who rely on the non-doms for jobs. Many non-doms invest money in Britain.


  3. “We will close tax loopholes that cost the public billions of pounds a year…”

    The tax system will become even more complex, especially for small businesses. This could reduce revenues.


  4. “We will end unfair tax breaks used by hedge funds and others”

    A major part of Britain’s exports comes from the financial services sector. Labour’s antipathy to this sector threatens hundreds of thousands of jobs, and may demote the City of London to a second-tier financial centre.


  5. “We will increase the National Minimum Wage

    We will ban exploitative zero-hours contracts

    We will promote the Living Wage”

    The cost of employing people will rise. Businesses that do not toe the official line on the living wage might be unable to sell to the state sector. Fewer start-up businesses will be created and small businesses will not expand, as inflexible employment laws increase the risks of taking people on. The unemployed will become locked out of jobs. Youth unemployment will rise.


  6. “We will freeze gas and electricity prices until 2017”

    Prices have been rising because of the Climate Change Act 2008, which Ed Miliband was responsible for steering through Parliament. There is huge investment needed in new sources of electricity. That ain’t going to happen if profit rates fall. This is a policy to ensure the lights go out in a couple of winters’ time.


  7. “We will introduce a fairer deal for renters”

    This will be at the expense of landlords, many of whom let property as a business.


  8. “We will expand free childcare from 15 to 25 hours per week for parents of three- and four-year-olds, paid for with an increase in the bank levy.”

    See point 4 on the City of London

I am really concerned that a Labour Government will jeopardize the prosperity of this country, and my children’s future. Rather than learning from the past, Labour continue to deceive themselves through spin. Rather than admitting that they got things wrong, Labour blame others.

Kevin Marshall

Freeman Dyson on Climate Models

One of the leading physicists on the planet, Freeman Dyson, has given a video interview to the Vancouver Sun. Whilst the paper emphasizes Dyson’s statements about the impact of more CO2 greening the Earth, there is something more fundamental that can be gleaned.

Referring to a friend who constructed the first climate models, Dyson says at about 10.45

These climate models are excellent tools for understanding climate, but that they are very bad tools for predicting climate. The reason is simple – that they are models which have very few of the factors that may be important, so you can vary one thing at a time ……. to see what happens – particularly carbon dioxide. But there are a whole lot of things that they leave out. ….. The real world is far more complicated than the models.

I believe that Climate Science has lost sight of what climate models actually are: literal attempts to understand the real world, but not the real world itself. It reminds me of something another physicist spoke about fifty years ago. Richard Feynman, a contemporary whom Dyson got to know well in the late 1940s and early 1950s, said of theories:-

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Complex mathematical models suffer from this vagueness in abundance. When I see supporters of the climate models arguing that critics are wrong by citing some simple model and using selective data, they are doing what lesser scientists and pseudo-scientists have been doing for decades.

How do you confront this problem? Climate is hugely complex, so simple models will always fail on the predictive front. However, unlike Dyson I do not think that all is lost. The climate models have had a very bad track record due to climatologists not being able to relate their models to the real world. There are a number of ways they could do this. A good starting point is to learn from others. Climatologists could draw upon insights from varied sources. With respect to the complexity of the subject matter, the lack of detailed, accurate data and the problems of prediction, climate science has much in common with economics. There are insights that can be drawn on prediction. One of the first empirical methodologists was the preeminent (or notorious) economist of the late twentieth century – Milton Friedman. Even without his monetarism and free-market economics, he would be known for his 1953 essay “The Methodology of Positive Economics”. Whilst not agreeing with the entirety of the views expressed (there is no satisfactory methodology of economics), Friedman does lay emphasis on making simple, precise and bold predictions.

It is the exact opposite of the Cook et al. survey, which claims a 97% consensus on climate, implying that it relates to a massive and strong relationship between greenhouse gases and catastrophic global warming, when in fact it relates to circumstantial evidence for a minimal belief in (or assumption of) the most trivial form of human-caused global warming. In relation to climate science, Friedman would say that it does not matter about consistency with the basic physics, nor how elegantly the physics is stated. You could believe that the cause of warming is the hot air produced by the political classes. What matters is that you make bold predictions based on the models that, despite seeming simple and improbable to the non-expert, nevertheless turn out to be true. However, where bold predictions have been made that appear improbable (such as worsening hurricanes after Katrina or the effective disappearance of Arctic sea ice in late 2013), they have turned out to be false.

Climatologists could also draw upon another insight, held by Friedman, but first clearly stated by John Neville Keynes (father of John Maynard Keynes): the need to clearly distinguish between the positive (what is) and the normative (what ought to be). But that distinction would alienate the funders and political hangers-on. It would also mean a clear split between the science and the policy.

Hattips to Hilary Ostrov, Bishop Hill, and Watts up with that.


Kevin Marshall

Massive Exaggeration on Southern Alaskan Glacial ice melt

Paul Homewood has a lovely example of gross exaggeration on climate change. He has found the following quote from an Oregon State University study:

Incessant mountain rain, snow and melting glaciers in a comparatively small region of land that hugs the southern Alaska coast and empties fresh water into the Gulf of Alaska would create the sixth largest coastal river in the world if it emerged as a single stream, a recent study shows.

Since it’s broken into literally thousands of small drainages pouring off mountains that rise quickly from sea level over a short distance, the totality of this runoff has received less attention, scientists say. But research that’s more precise than ever before is making clear the magnitude and importance of the runoff, which can affect everything from marine life to global sea level.

The collective fresh water discharge of this region is more than four times greater than the mighty Yukon River of Alaska and Canada, and half again as much as the Mississippi River, which drains all or part of 31 states and a land mass more than six times as large.

“Freshwater runoff of this magnitude can influence marine biology, near shore oceanographic studies of temperature and salinity, ocean currents, sea level and other issues,” said David Hill, lead author of the research and an associate professor in the College of Engineering at Oregon State University.

“This is an area of considerable interest, with its many retreating glaciers,” Hill added, “and with this data as a baseline we’ll now be able to better monitor how it changes in the future.” (Bold mine)

This implies that melting glaciers are a significant portion of the run-off. I thought I would check this out. From the yukoninfo website I find

The watershed’s total drainage area is 840 000 sq. km (323 800 sq. km in Canada) and it discharges 195 cubic kilometres of water per year.

Four times that discharge implies Gulf of Alaska runoff of about 780 cubic kilometres per year.

From Wikipedia I find that the Mississippi River has an average annual discharge of 16,792 m³/s. This implies the average discharge into the Gulf of Alaska is about 25,000 m³/s. This equates to 90,000,000 m³ per hour, or 2,160,000,000 m³ per day. That is 2.16 cubic kilometres per day, or 788 cubic kilometres per year. If this gross runoff were net, it would account for two-thirds of the 3.2mm per year sea level rise recorded by the satellites. How much of this might be from glacial ice melt? This is quite difficult to estimate. From the UNIPCC AR5 WGI SPM of Sept-13 we have the following statement.

Since the early 1970s, glacier mass loss and ocean thermal expansion from warming together explain about 75% of the observed global mean sea level rise (high confidence). Over the period 1993 to 2010, global mean sea level rise is, with high confidence, consistent with the sum of the observed contributions from ocean thermal expansion due to warming (1.1 [0.8 to 1.4] mm yr⁻¹), from changes in glaciers (0.76 [0.39 to 1.13] mm yr⁻¹), Greenland ice sheet (0.33 [0.25 to 0.41] mm yr⁻¹), Antarctic ice sheet (0.27 [0.16 to 0.38] mm yr⁻¹), and land water storage (0.38 [0.26 to 0.49] mm yr⁻¹). The sum of these contributions is 2.8 [2.3 to 3.4] mm yr⁻¹. {13.3}

How much of this 0.76 mm yr⁻¹ (around 275 cubic kilometres) is accounted for by Southern Alaska?

The author of the Oregon study goes on to say.

This is one of the first studies to accurately document the amount of water being contributed by melting glaciers, which add about 57 cubic kilometers of water a year to the estimated 792 cubic kilometers produced by annual precipitation in this region.

That would make Southern Alaska 20% (range 14-40%) of the global glacial ice melt outside of the Greenland and Antarctic ice sheets. Northern and Central Alaska, along with Northern Canada, are probably far more significant. The Himalayan glaciers are huge, especially compared to the Alps or the Andes, which are also meant to be melting. There might be glaciers in Northern Russia as well. Maybe 1%-10% of the global total comes from Southern Alaska, or 3 to 30 cubic kilometres per annum, not 14-40%. The Oregon article points to two photographs on Flickr (1 & 2), which together suggest less than a single cubic kilometre of loss per year. From Homewood’s descriptions of the area, most of the glacial retreat may have been in the nineteenth and early twentieth centuries.
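As a rough check on the arithmetic, here is a minimal Python sketch. The figures all come from the quotes above; the conversion factor of roughly 362 cubic kilometres of water per millimetre of global sea level is my own assumption of the standard approximation (ocean area of about 3.6×10⁸ km²), not a number from the study.

```python
# Back-of-envelope check on the runoff and glacier-melt figures above.

MISSISSIPPI_M3S = 16_792           # average discharge, m^3/s (Wikipedia)
OCEAN_KM3_PER_MM = 362             # km^3 of water per mm of sea level (assumed)

# "Half again as much as the Mississippi"
gulf_m3s = MISSISSIPPI_M3S * 1.5
gulf_km3_yr = gulf_m3s * 3600 * 24 * 365 / 1e9
print(f"Gulf of Alaska runoff: {gulf_km3_yr:.0f} km^3/yr")       # ~794

# Sea-level equivalent if the gross runoff were net
print(f"Sea-level equivalent: {gulf_km3_yr / OCEAN_KM3_PER_MM:.1f} mm/yr")

# Glacier contribution from AR5: 0.76 [0.39 to 1.13] mm/yr
glacier_km3_yr = 0.76 * OCEAN_KM3_PER_MM                         # ~275
lo, hi = 0.39 * OCEAN_KM3_PER_MM, 1.13 * OCEAN_KM3_PER_MM
print(f"S. Alaska share: {57 / glacier_km3_yr:.0%} "
      f"(range {57 / hi:.0%} to {57 / lo:.0%})")
```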

Maybe someone can provide a reconciliation that will make the figures stack up. Maybe the 57 cubic kilometres is a short-term trend – a single year, even?

Kevin Marshall

Dixon and Jones confirm a result on the Stephan Lewandowsky Surveys

Congratulations to Ruth Dixon and Jonathan Jones on managing to get a commentary on the two Stephan Lewandowsky, Gilles Gignac & Klaus Oberauer surveys published in Psychological Science. Entitled “Conspiracist Ideation as a Predictor of Climate Science Rejection: An Alternative Analysis” it took two years to get published. Ruth Dixon gives a fuller description on her blog, My Garden Pond. It confirms something that I have stated independently, with the use of pivot tables instead of advanced statistical techniques. In April last year I compared the two surveys in a couple of posts – Conspiracist Ideation Falsified? (CIF) & Extreme Socialist-Environmentalist Ideation as Motivation for belief in “Climate Science” (ESEI).

The major conclusion from their analysis of the surveys is:-

All the data really shows is that people who have no opinion about one fairly technical matter (conspiracy theories) also have no opinion about another fairly technical matter (climate change). Complex models mask this obvious (and trivial) finding.

In CIF my summary was

A recent paper, based on an internet survey of American people, claimed that “conspiracist ideation, is associated with the rejection of all scientific propositions tested“. Analysis of the data reveals something quite different. Strong opinions with regard to conspiracy theories, whether for or against, suggest strong support for strongly-supported scientific hypotheses, and strong, but divided, opinions on climate science.

In the concluding comments I said

The results of the internet survey confirm something about people in the United States that I and many others have suspected – they are a substantial minority who love their conspiracy theories. For me, it seemed quite a reasonable hypothesis that these conspiracy lovers should be both suspicious of science and have a propensity to reject climate science. Analysis of the survey results has over-turned those views. Instead I propose something more mundane – that people with strong opinions in one area are very likely to have strong opinions in others. (Italics added)

Dixon and Jones have a far superior means of getting to the results. My method is to input the data into a table, find groupings or classifications, then analyse the results via pivot tables or graphs. This mostly leads up blind alleys, but can develop further ideas. For every graph or table in my posts, there can be a number of others stashed on my hard drive. To call it “trial and error” misses the understanding to be gained from the analysis. Their method (having rejected linear OLS) is loess local regression. They derive the following plot.
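For readers unfamiliar with the technique, a minimal sketch of loess (lowess) smoothing is below. This is not Dixon and Jones’s code; it uses stand-in data purely to illustrate the kind of local regression fit they plot.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Stand-in survey data: one row per respondent, mean scores on the
# conspiracy items (x) and the climate items (y), both on a 1-5 scale.
rng = np.random.default_rng(0)
x = rng.integers(1, 6, size=1000).astype(float)
y = rng.integers(1, 6, size=1000).astype(float)

# loess/lowess fits a weighted local regression around each x value;
# frac is the share of the data used in each local fit.
smoothed = lowess(y, x, frac=0.5)   # returns sorted (x, fitted y) pairs
print(smoothed[:5])
```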

This compares with my pivot table for the same data.

The Grand Total row shows that the strongest climate belief band (5) comprises 12% of the total responses. For the smallest group of beliefs about conspiracy theories, with just 60 of the 5005 responses, 27% had the strongest beliefs about climate. The biggest percentage figure is for the group who averaged a middle “3” score on both climate and conspiracy theories. That is, those with no opinion on either subject.
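The pivot-table approach can be reproduced in a few lines; a sketch with my own hypothetical column names (the bands run 1-5 in both dimensions):

```python
import pandas as pd

# Stand-in for the survey responses; column names are mine.
df = pd.DataFrame({
    "conspiracy_band": [1, 3, 3, 5, 2, 3, 4, 3, 1, 5],
    "climate_band":    [5, 3, 3, 1, 4, 3, 2, 3, 5, 5],
})

# Row-percentage pivot: for each conspiracy band, the share of responses
# in each climate band, with a Grand Total row for the whole sample.
tab = pd.crosstab(df["conspiracy_band"], df["climate_band"],
                  normalize="index", margins=True,
                  margins_name="Grand Total")
print((tab * 100).round(0))
```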

The more fundamental finding, from the blog survey, is the link between strong beliefs in climate science and extreme left-environmentalist political views. It is a separate topic, and its inclusion by Dixon and Jones would have left much less space for the above insight in 1,000 words, and been much more difficult to publish. The survey data is clear.

The blog survey (which was held on strongly alarmist blogs) shows that most of the responses were highly skewed towards anti-free-market views (that is, lower response scores), along with being strongly pro-climate.

The internet survey of the US population allowed 5 responses instead of 4. The fifth was a neutral. This shows a more normal distribution of political beliefs, with over half of the responses in the middle ground.

This shows what many sceptics have long suspected, but I resisted. Belief in “climate science” is driven by leftish world views. Stephan Lewandowsky can only see the link between “climate denial” beliefs and free-market views, because he treats left-environmentalist perspectives and “climate science” as a priori truths: the reality against which everything is to be measured. From this perspective climate science has not failed by being falsified by the evidence, but because scientists have yet to find the evidence; the models need refining; and there is a motivated PR campaign to undermine these efforts.

Kevin Marshall






Understanding GISS Temperature Adjustments

A couple of weeks ago something struck me as odd. Paul Homewood had been going on about all sorts of systematic temperature adjustments, showing clearly that the past has been cooled between the GHCN “raw data” and the GISS Homogenised data used in the data sets. When I looked at eight stations in Paraguay, at Reykjavik and at two stations on Spitzbergen, I was able to corroborate this result. Yet Euan Mearns has looked at groups of stations in central Australia and in Iceland, in both cases finding no warming trend introduced between the raw and adjusted temperature data. I thought that Mearns must be wrong, so when he published on 26 stations in Southern Africa1, I set out to evaluate those results, to find the flaw. I have been unable to fully reconcile the differences, but the notes I have made on the Southern African stations may enable a greater understanding of temperature adjustments. What I do find is that clear trends in the data across a wide area have been largely removed, bringing the data into line with Southern Hemisphere trends. The most important point to remember is that looking at data in different ways can lead to different conclusions.

Net difference and temperature adjustments

I downloaded three lots of data – raw, GHCNv3 and GISS Homogenised (GISS H) – then replicated Mearns’ method of calculating temperature anomalies. Using 5-year moving averages, in Chart 1 I have mapped the trends in the three data sets. A sketch of the calculation follows.
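This is not the exact code; the file and column names, and the base period, are my own illustrative choices.

```python
import pandas as pd

def anomaly_trend(series: pd.Series, base=(1961, 1990)) -> pd.Series:
    """Anomaly vs a base-period mean, smoothed with a 5-year moving average."""
    baseline = series.loc[base[0]:base[1]].mean()
    return (series - baseline).rolling(window=5, center=True).mean()

# Annual means for each download, one column per data set.
annual = pd.read_csv("southern_africa_means.csv", index_col="year")
trends = annual[["raw", "ghcn_v3", "giss_h"]].apply(anomaly_trend)
```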

There is a large divergence prior to 1900, but for the twentieth century the warming trend is not excessively increased. Further, the warming trend from around 1900 is about half of that in the GISTEMP Southern Hemisphere or global anomalies. Looked at in this way, Mearns would appear to have a point. But there has been considerable downward adjustment of the early twentieth century warming, so Homewood’s claim of cooling the past is also substantiated. This might be the more important aspect, as the adjusted data makes the warming since the mid-1970s appear unusual.

Another feature is that the GHCNv3 data is very close to the GISS Homogenised data. So looking at the GISS H data used in the creation of the temperature data sets is very much the same as looking at the GHCNv3 data that forms the source for GISS.

But why not mention the pre-1900 data where the divergence is huge?

The number of stations gives a clue in Chart 2.

It was only in the late 1890s that there were more than five stations of raw data. The first year in which there are more data points left in than removed is 1909 (5 against 4).

Removed data would appear to have a role in the homogenisation process. But is it material? Chart 3 graphs five year moving averages of raw data anomalies, split between the raw data removed and retained in GISS H, along with the average for the 26 stations.

Where there are a large number of data points, it does not materially affect the larger picture, but does remove some of the extreme “anomalies” from the data set. But where there is very little data available the impact is much larger. That is particularly the case prior to 1910. But after 1910, any data deletions pale into insignificance next to the adjustments.

The Adjustments

I plotted the average difference between the Raw Data and the adjustment, along with the max and min values in Chart 4.

The max and min of the net adjustments are consistent with Euan Mearns’ graph “safrica_deltaT” when flipped upside down and made back to front. It shows a difficulty in comparing adjustments where a whole data set has been shifted. For instance, the maximum figures are dominated by Windhoek, which I looked at a couple of weeks ago. Between the raw data and the GISS Homogenised there was a uniform 3.6°C increase. There were a number of other, lesser differences that I have listed in note 3. Chart 5 shows the impact that adjusting for these shifts has on both the range of the adjustments and the pattern of the average adjustments.
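A sketch of what that correction amounts to: removing each station’s uniform shift (Windhoek’s 3.6°C, for example) before averaging, so that only the time-varying part of the adjustments remains. File and column names are illustrative.

```python
import pandas as pd

# Net adjustments (GISS H minus raw), indexed by year, one column per station.
adjustments = pd.read_csv("net_adjustments.csv", index_col="year")

# Subtract each station's median net adjustment to strip out uniform
# shifts; what is left is the part that alters the shape of the record.
centred = adjustments - adjustments.median()

summary = pd.DataFrame({
    "mean": centred.mean(axis=1),
    "max":  centred.max(axis=1),
    "min":  centred.min(axis=1),
})
```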

Comparing this with the average variance between the raw data and the GISS Homogenised data shows the closer fit of the adjustments to the variance. Please note the difference in scale on Chart 6 from the above!

The earlier period has by far the most deletions of data, hence the lack of closeness of fit between the average adjustment and average variance. After 1945, the consistent pattern of the average adjustment being slightly higher than the average variance is probably due more to a light-touch approach to adjustment corrections than to other data deletions. There might be other reasons as well for the lack of fit, such as the impact of different lengths of data sets on the anomaly calculations.

Update 15/03/15

Of note is that the adjustments in the early 1890s and around 1930 are about three times the size of the change in trend. This might be partly due to zero net adjustments in 1903 and partly due to the small downward adjustments post-2000.

The consequences of the adjustments

It should be remembered that GISS use this data to create the GISTEMP surface temperature anomalies. In Chart 7 I have amended Chart 1 to include Southern Hemisphere annual mean data on the same basis as the raw data and GISS H.

It seems fairly clear that the homogenisation process has succeeded in bringing the Southern Africa data sets into line with the wider data sets. Whether the early twentieth century warming and mid-century cooling are outliers that have been correctly cleansed is a subject for further study.

What has struck me in doing this analysis is that looking at individual surface temperature stations becomes nonsensical, as they are grid reference points. Thus comparing the station moves for Reykjavik with the adjustments will not achieve anything. The implications of this insight will have to wait upon another day.

Kevin Marshall


1. 26 Data sets

The temperature stations, with the periods for the raw data, are below. (Station names were lost from most rows of the original table.)

Station          Lat      Lon      Population   Raw data period
                 17.9 S   31.1 E                1897 – 2011
                 28.8 S   24.8 E                1897 – 2011
                 19.4 S   29.8 E                1898 – 1970
                 20.1 S   28.6 E                1897 – 2011
                 19.8 S   34.9 E                1913 – 1991
                 14.4 S   28.5 E                1925 – 2011
                 17.8 S   25.8 E                1918 – 2010
                 15.2 S   23.1 E   < 10,000     1923 – 2010
                 11.8 S   24.4 E   < 10,000     1923 – 1970
                 13.0 S   28.6 E                1923 – 1981
Capetown Safr    33.9 S   18.5 E                1880 – 2011
                 31.5 S   19.8 E   < 10,000     1941 – 2011
East London      33.0 S   27.8 E                1940 – 2011
                 22.6 S   17.1 E                1921 – 1991
                 26.5 S   18.1 E                1931 – 2010
                 29.1 S   26.3 E                1943 – 2011
De Aar           30.6 S   24.0 E                1940 – 2011
                 31.9 S   26.9 E                1940 – 1991
                 26.4 S   29.5 E                1940 – 1991
                 18.8 S   47.5 E                1889 – 2011
                 18.1 S   49.4 E                1951 – 2011
Porto Amelia     13.0 S   40.5 E   < 10,000     1947 – 1991
                 26.7 S   27.1 E                1940 – 1991
                 6.2 S    39.2 E                1880 – 1960
                 5.1 S    32.8 E                1893 – 2011
Dar Es Salaam    6.9 S    39.2 E                1895 – 2011

2. Temperature trends

To calculate the trends I used the OLS method, both from the formula and using the EXCEL “LINEST” function, getting the same answer each time. If you are able please check my calculations. The GISTEMP Southern Hemisphere and global data can be accessed direct from the NASA GISS website. The GISTEMP trends are from the skepticalscience trends tool. My figures are:-
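For anyone wishing to check, a minimal Python equivalent of the LINEST calculation is below, with stand-in numbers; the slope times 100 gives the usual °C-per-century figure.

```python
import numpy as np

# OLS trend: slope = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2),
# which is what Excel's LINEST returns for a single regressor.
years = np.array([1900, 1901, 1902, 1903, 1904], dtype=float)  # stand-in
anoms = np.array([-0.30, -0.25, -0.35, -0.20, -0.15])          # stand-in

slope, intercept = np.polyfit(years, anoms, deg=1)
print(f"Trend: {slope * 100:.2f} °C per century")
```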

3. Adjustments to the Adjustments


[Table: adjustments to the adjustments by station, with columns “Recent adjustment”, “Other adjustment” and “Other Period”. Only fragments survive: the station Dar Es Salaam, and the periods “Mid-70s + inter-war” and “About 1999-2002”; the remaining station names and adjustment values were lost.]

Windhoek Temperature adjustments

At Euan Mearns’ blog I made reference to my findings, posted in full last night, that the Isfjord Radio weather station had adjustments that varied between +4.0°C in 1917 and -1.7°C in the 1950s. I challenged anyone to find bigger adjustments than that. Euan came back with the example of Windhoek in Namibia, claiming 5°C of adjustments between the “raw” and GISS homogenised data.

I cry foul, as the adjustments run throughout the data set.

That is, the whole of the data set has been adjusted up by about 4°C!

However, comparing the “raw” with the GISS homogenised data using 5-year moving averages (alongside the net adjustments), there are some interesting features.

The overall temperatures have been adjusted up by around 4°C, but

  • From the start of the record in 1920 to 1939 the cooling has been retained, if not slightly amplified.
  • The warming from 1938 to 1947 of 1.5°C has been erased by a combination of deleting the 1940 to 1944 data and reducing the 1945-1948 adjustments by 1.4°C.
  • The 1945-1948 adjustments, along with random adjustments and deletion of data, mostly remove the near 1.5°C of cooling from the late 1940s to the mid-1950s and the slight rebound through to the early 1960s.
  • The early 1970s cooling and the warming to the end of the series in the mid-1980s is largely untouched.

The overall adjustments leave a peculiar picture that cannot be explained by a homogenisation algorithm. The cooling in the 1920s offsets the global trend. Deletion of data and the adjustments in the data counter the peak of warming in the early 1940s in the global data. Natural variations in the raw data between the late 1940s and 1970 appear to have been removed, then the slight early 1970s cooling and the subsequent warming in the raw data left alone. However, the raw data shows average temperatures in the 1980s to be around 0.8°C higher than in the early 1920s. The adjustments seem to have removed this.

This removal of the warming trend tends to disprove something else. There appears to be no clever conspiracy, with a secret set of true figures. Rather, there are a lot of people dipping in to adjust already-adjusted data to their view of the world, but nobody really questioning the results. They have totally lost sight of what the real data actually is. Had they compared the final adjusted data with the raw data, they would have realised that the adjustments had managed to eliminate a warming trend of over 1°C per century.

Kevin Marshall

RealClimate’s Mis-directions on Arctic Temperatures


Real Climate attempted to rebut the claims that the GISS temperature data is corrupted with unjustified adjustments by

  • Attacking the commentary of Christopher Booker, not the primary source of the allegations.
  • Referring readers instead to a dogmatic source who claims that only 3 stations are affected, something clearly contradicted by Booker and the primary source.
  • Alleging that the complaints are solely about cooling the past, and using a single counter-example for Svalbard of a GISS adjustment excessively warming the past compared to the author’s own adjustments.
  • However, compared to the raw data, the author’s adjustments, based on local knowledge, were smaller than those of GISS, showing the GISS adjustments to be unjustified. But the adjustments bring the massive warming trend into line with the (still large) Reykjavik trend.
  • Examination of the site reveals that the Stevenson screen at Svalbard airport is right beside the tarmac of the runway, with heat from planes and from snow-clearing likely affecting measurements. With increasing use of the airport over the last twenty years, it is likely the raw data trend should be reduced, with an increasing adjustment trend, not a decreasing one.
  • Further, data from a nearby temperature station at Isfjord Radio reveals that the early twentieth century warming on Spitzbergen may have been more rapid and of greater magnitude. GISS adjustments reduce that trend by up to 4 degrees, compared with just 1.7 degrees for the late twentieth century warming.
  • Questions arise as to how raw data for Isfjord Radio could be available for 22 years before the station was established, and how the weather station managed to keep on recording “raw data” between being destroyed and abandoned in 1941 and being re-opened in 1946.


In climate I am used to mis-direction and spin, but in this post I may have found the largest temperature adjustments to date.

In early February, RealClimate – the blog of the climate science consensus – had an article attacking Christopher Booker in the Telegraph. It had strong similarities to the methods used by the anonymous blogger …andthentheresphysics. In a previous post I provided a diagram to illustrate ATTP’s methods.

One would expect that a blog supported by the core of the climate scientific consensus would provide a superior defence than an anonymous blogger who censors views that challenge his beliefs. However, RealClimate may have dug an even deeper hole. Paul Homewood covered the article on February 12th, but I feel it only scratched the surface. Using the procedures outlined above I note similarities include:-

  • Attacking the secondary commentary, and not mentioning the primary sources.
  • Misleading statements that understate the extent of the problem.
  • Avoiding comparison of the raw and adjusted data.
  • Single counter examples that do not stand up.

Attacking the secondary commentary

Like ATTP, RealClimate attacked the same secondary source – Christopher Booker – but a different article. True academics would have referred to Paul Homewood, the source of the allegations.

Misleading statement about number of weather stations

The article referred to was by Victor Venema of Variable Variability. The revised title is “Climatologists have manipulated data to REDUCE global warming“, but the original title can be found from the link address –

It was published on 10th February and only refers to Christopher Booker’s original article in the Telegraph of 24th January, without mentioning the author or linking to it. After quoting from the article, Venema states:-

Three, I repeat: 3 stations. For comparison, global temperature collections contain thousands of stations. ……

Booker’s follow-up article of 7th February states:-

Following my last article, Homewood checked a swathe of other South American weather stations around the original three. ……

Homewood has now turned his attention to the weather stations across much of the Arctic, between Canada (51 degrees W) and the heart of Siberia (87 degrees E). Again, in nearly every case, the same one-way adjustments have been made, to show warming up to 1 degree C or more higher than was indicated by the data that was actually recorded.

My diagram above was published on the 8th February, and counted 29 stations. Paul Homewood’s original article on the Arctic of 4th February lists 19 adjusted sites. If RealClimate had actually read the cited article, they would have known that the quotation was false in connection with the Arctic. Any undergraduate who made this mistake in an essay would be failed.

Misleading Counter-arguments

Øyvind Nordli – the Real Climate author – provides a counter-example from his own research. He compares his adjustments of the Svalbard data (which he did as part of a temperature reconstruction for Spitzbergen last year) with those of GISS.

Clearly he is right in pointing out that his adjustments created a lower warming trend than those of GISS.

I checked the “raw data” against the “GISS Homogenised” for Svalbard, and compared with the Reykjavik data I looked at last week, as the raw data is not part of Nordli’s comparison. To make them comparable, I created anomalies based on the raw data average of 2000-2009. I have also used a 5-year centred moving average.

The raw data is in dark, the adjusted data in light. For Reykjavik prior to 1970 the peaks in the data have been clearly constrained, making the warming since 1980 appear far more significant. For the much shorter Svalbard data the total adjustments from GHCN and GISS reduce the warming trend by a full 1.7°C, bringing it into line with the largely unadjusted Reykjavik. The GHCN & GISS data seem to have been adjusted to a pre-conceived view of what the data should look like. What Nordli et al. have effectively done is to restore the trend present in the raw data. So Nordli et al., using data on the ground, have effectively reached a similar conclusion to Trausti Jonsson of the Iceland Met Office. The adjustments made thousands of miles away in the United States by homogenisation bots are massive and unjustified. It just so happens that in this case they are in the opposite direction to cooling the past. I find it somewhat odd that Øyvind Nordli, an expert on local conditions, should not challenge these adjustments but choose to give the opposite impression.

What is even worse is that there might be a legitimate reason to adjust the recent warming downwards. In 2010, Anthony Watts looked at the siting of the weather station at Svalbard Airport. Photographs show it to be right beside the runway. With frequent snow, steam de-icers will regularly pass, along with planes with hot exhausts. The case is there for a downward adjustment over the whole of the series, with an increasing trend to reflect the increasing aircraft movements. Tourism quintupled between 1991 and 2008. In addition, the University Centre in Svalbard, founded in 1993, now has 500 students.

Older data for Spitzbergen

Maybe the phenomenal warming in the raw data for Svalbard is unprecedented, despite some doubts about the adjustments. Nordli et al. 2014 is titled “Long-term temperature trends and variability on Spitsbergen: the extended Svalbard Airport temperature series, 1898-2012”. It is a study that gathers together all the available data from Spitzbergen, aiming to create a composite temperature record from fragmentary records from a number of places around the islands. From NASA GISS, I can only find Isfjord Radio for the earlier period. It is about 50km west of Svalbard Airport, so should give a similar shape of temperature anomaly. According to Nordli et al.:

Isfjord Radio. The station was established on 1 September 1934 and situated on Kapp Linné at the mouth of Isfjorden (Fig. 1). It was destroyed by actions of war in September 1941 but re-established at the same place in July 1946. From 30 June 1976 onwards, the station was no longer used for climatological purposes.

But NASA GISS has data from 1912, twenty-two years prior to the station being established, as does Berkeley Earth. I calculated an anomaly relative to Reykjavik based on 1930-1939 averages, and added the Isfjord Radio figures to the graph.

The portion of the raw data for Isfjord Radio that seems to have been recorded before any thermometer was available shows a full 5°C rise in the 5-year moving average temperature. The anomaly for 1917 was -7.8°C, compared with 0.6°C in 1934 and 1.0°C in 1938. For Svalbard Airport the lowest anomalies are -4.5°C in 1976 and -4.7°C in 1988. The peak year is 2.4°C in 2006, followed by 1.5°C in 2007. The total GHCNv3 and GISS adjustments are also of a different order. At the start of the Svalbard series every month was adjusted up by 1.7°C. The Isfjord Radio 1917 data was adjusted up by 4.0°C on average, and 1918 by 3.5°C. February of 1916 & 1918 was adjusted upwards by 5.4°C.

So the Spitzbergen trough-to-peak warming of 1917 to 1934 may have been more rapid and greater in magnitude than the similar warming from 1976 to 2006. But from the adjusted data one gets the opposite conclusion.

Also we find from Nordli et al.:

During the Second World War, and also during five winters in the period 1898–1911, no observations were made in Svalbard, so the only possibility for filling data gaps is by interpolation.

The latest any data recording could have been made was mid-1941, and the island was not reoccupied for peaceful purposes until 1946. The “raw” GHCN data for the war years is actually infill. If it followed the pattern of Reykjavik – likely the nearest recording station – temperatures would have peaked during the Second World War, not fallen.


Real Climate should review their articles better. You cannot rebut an enlarging problem by referring to out-of-date and dogmatic sources. You cannot pretend that unjustified temperature adjustments in one direction are somehow made right by unjustified temperature adjustments in another direction. Spitzbergen is not only cold, it clearly experiences vast and rapid fluctuations in average temperatures. Any trend is tiny compared to these fluctuations.

Reykjavik Temperature Adjustments – a comparison


On 20th February, Paul Homewood made some allegations that the temperature adjustments for Reykjavík were not supported by any known reasons. The analysis was somewhat vague. I have looked into the adjustments by both the GHCN v3 and NASA GISS. The major findings, which support Homewood’s view, are:-

  • The GHCN v3 adjustments appear entirely arbitrary. They do not correspond to the frequent station relocations. Much of the period from 1901-1965 is cooled by a full one degree centigrade.
  • Even more arbitrary were the adjustments for the period 1939-1942. In years where there was no anomalous spike in the data, a large cool period was created.
  • Also, despite there being complete raw data, the GHCN adjusters decided to dismiss the data for 1926 and 1946.
  • The NASA GISS homogenisation adjustments were much smaller in magnitude, and to some extent partly offset the GHCN adjustments. The greatest warming was in the 1929-51 period.

The combined impact of the adjustments is to change the storyline from the data, suppressing the early twentieth century warming and massively reducing the mid-century cooling. As a result an impression is created that the significant warming since the 1980s is unprecedented.


Analysis of the adjustments

There are a number of data sets to consider. There is the raw data, available from 1901 to 2011 at NASA GISS. Nick Stokes has confirmed that this is the same raw data issued by the Iceland Met Office, barring a few roundings. The adjustments made by the Iceland Met Office are unfortunately only available from 1948. Quite separate is the Global Historical Climatology Network dataset (GHCN v3) from the US National Oceanic and Atmospheric Administration (NOAA), which I accessed from NASA GISS, along with GISS’s own homogenised data used to compile the GISTEMP global temperature anomaly.

The impact of the adjustments from the raw data is as follows

The adjustments by the Icelandic Met Office professionals, with a detailed knowledge of the instruments and the local conditions, are quite varied from year to year and appear to impose no trend on the data. The impact of GHCN is to massively cool the data prior to 1965. Most years are adjusted down by about a degree, more than the 0.7°C total twentieth century increase in global average surface temperatures. The pattern has long periods where the adjustments are the same. The major reason could be relocations. Trausti Jonsson, Senior Meteorologist with the Iceland Met Office, has looked at the relocations. He has summarized them in the graphic below, along with gaps in the data.

I have matched these relocations with the adjustments.

The relocation dates appear to have no impact on the adjustments. If relocations do affect the data, the wrong data must have been used.

Maybe the adjustments reflect the methods of calculation? Trausti Jonsson says:-

I would again like to make the point that there are two distinct types of adjustments:

1. An absolutely necessary recalculation of the mean because of changes in the observing hours or new information regarding the diurnal cycle of the temperature. For Reykjavík this mainly applies to the period before 1924.

2. Adjustments for relocations. In this case these are mainly based on comparative measurements made before the last relocation in 1973 and supported by comparisons with stations in the vicinity. Most of these are really cosmetic (only 0.1 or 0.2 deg C). There is a rather large adjustment during the 1931 to 1945 period (- 0.4 deg C, see my blog on the matter – you should read it again: 
I am not very comfortable with this large adjustment – it is supposed to be constant throughout the year, but it should probably be seasonally dependent. The location of the station was very bad (on a balcony/rooftop).

So maybe there can be some adjustment prior to 1924, but nothing major after. There is also nothing in this account, or in the more detailed history, that indicates a reason for the reduction in adjustments in 1917-1925, or the massive increase in negative adjustments in the period 1939-1942.

Further, there is nothing in the local conditions that I can see to justify GISS imposing an artificial early twentieth century warming period. There are two possible non-data reasons. The first is software which homogenizes to the global pattern. The second is human intervention: the adjusters at GISS realised the folks at NOAA had been conspicuously over-zealous in their adjustments, so were trying to restore a bit of credibility to the data story.


The change in the Reykjavík data story

When we compare graphs of raw data to adjusted data, it is difficult to see the impact of adjustments on the trends. The average temperatures vary widely from year to year, masking the underlying patterns. As a rough indication I have therefore taken the average temperature anomaly per decade. The decades are as in common usage, so the 1970s is 1970-1979. The first decade is actually 1901-1909, and for the adjusted data there are some years missing. The decade of 2000-2009 had no adjustments. The average temperature of 5.35°C was set to zero, to become the anomaly.
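A sketch of the decadal calculation, with an illustrative file name (not the actual data file):

```python
import pandas as pd

# Annual mean temperatures indexed by year.
annual = pd.read_csv("reykjavik_annual.csv", index_col="year")["mean_temp"]

decade = (annual.index // 10) * 10        # e.g. 1973 -> 1970
decadal_means = annual.groupby(decade).mean()

# Express each decade relative to the 2000s (5.35°C), which becomes zero.
anomalies = decadal_means - decadal_means.loc[2000]
```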

The warmest decade was the last, 2000-2009. Further, both the raw data (black) and the GISS Homogenised data (orange) show the 1930s to be the second warmest decade. However, whilst the raw data shows the 1930s to be just 0.05°C cooler than the 2000s, GISS estimates it to be 0.75°C cooler. The coolest decades are also different. The raw data shows the 1980s to be the coolest decade, whilst GISS shows the 1900s and the 1910s to be about 0.40°C cooler. The GHCN adjustments (green) virtually eliminate the mid-century cooling.

But adjustments still need to be made. Trausti Jonsson believes that the data prior to 1924 needs to be adjusted downwards to allow for biases in the time of day when readings were taken. This would bring the 1900s and the 1910s more into line with the 1980s, along with lowering the 1920s. The leap in temperatures from the 1910s to the 1930s becomes very similar to that from 1980s to the 2000s, instead of half the magnitude in the GHCNv3 data and two-thirds the magnitude in the GISS Homogenised data.

The raw data tells us there were two similar-sized fluctuations in temperature since 1900 – the 1920s-1940s and the 1980s-2010s. In between there was a period of cooling that almost entirely cancelled out the earlier warming. The massive warming since the 1980s is not exceptional, though there might be some minor human influence if the patterns are replicated elsewhere.

The adjusted data reduces the earlier warming period and the subsequent cooling that bottomed out in the 1980s. Using the GISS Homogenised data we get the impression of unprecedented warming, closely aligned to the rise in greenhouse gas levels. As there is no reason for the adjustments from relocations, or from changes to the method of calculation, the adjustments would appear to have been made to fit reality to the adjusters’ beliefs about the world.

Kevin Marshall


Is there a Homogenisation Bias in Paraguay’s Temperature Data?

Last month Paul Homewood at Notalotofpeopleknowthat looked at the temperature data for Paraguay. His original aim was to explain the GISS claims of 2014 being the hottest year.

One of the regions that has contributed to GISS’ “hottest ever year” is South America, particularly Brazil, Paraguay and the northern part of Argentina. In reality, much of this is fabricated, as they have no stations anywhere near much of this area…

….there does appear to be a warm patch covering Paraguay and its close environs. However, when we look more closely, we find things are not quite as they seem.

In “Massive Tampering With Temperatures In South America“, Homewood looked at the “three genuinely rural stations in Paraguay that are currently operating – Puerto Casado, Mariscal and San Juan.” A few days later in “All Of Paraguay’s Temperature Record Has Been Tampered With“, he looked at remaining six stations.

After identifying that all of the three rural stations currently operational in Paraguay had had huge warming adjustments made to their data since the 1950’s, I tended to assume that they had been homogenised against some of the nearby urban stations. Ones like Asuncion Airport, which shows steady warming since the mid 20thC. When I went back to check the raw data, it turns out all of the urban sites had been tampered with in just the same way as the rural ones.

What Homewood does not do is to check the data behind the graphs, to quantify the extent of the adjustment. This is the aim of the current post.

Warning – This post includes a lot of graphs to explain how I obtained my results.

Homewood uses comparisons of two graphs, which he helpfully provides the links to. The raw GHCN data + USHCN corrections is available here, up until 2011 only. The current after-GISS-homogeneity-adjustment data is available here.

For all nine data sets I downloaded both the raw and homogenised data. By simple subtraction I found the differences. In any one year, they are mostly the same for each month. But for clarity I selected a single month – October – the month of my wife’s birthday.
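The subtraction itself is trivial; a sketch for one station, with my own illustrative file and column names:

```python
import pandas as pd

# Monthly series indexed by (year, month) for one station.
raw   = pd.read_csv("encarnacion_raw.csv",  index_col=["year", "month"])
homog = pd.read_csv("encarnacion_giss.csv", index_col=["year", "month"])

# Net adjustment = homogenised minus raw, month by month.
adjustment = homog["temp"] - raw["temp"]

# Pull out October of each year for the charts.
october = adjustment.xs(10, level="month")
print(october.loc[1967:1968])   # -1.3 in 1967, +0.1 in 1968 (see below)
```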

For the Encarnacion (27.3 S,55.8 W) data sets the adjustments are as follows.

In 1967 the adjustment was -1.3°C, in 1968 +0.1°C. There is cooling of the past.

The average adjustments for all nine data sets is as follows.

This pattern is broadly consistent across all data sets. These are the maximum and minimum adjustments.

However, this issue is clouded by the special adjustments required for the Pedro Juan CA data set. The raw data set has been patched together from four separate files.

Removing it does not affect the average picture.

But it does affect the maximum and minimum adjustments. This shows the consistency in the adjustment pattern.

The data sets are incomplete. Before 1941 there is only one data set – Asuncion Aero. The count of data points for October each year is as follows.

In recent years there are huge gaps in the data, but for the late 1960s when the massive switch in adjustments took place, there are six or seven pairs of raw and adjusted data.

Paul Homewood’s allegation that the past has been cooled is confirmed. However, it does not give a full understanding of the impact on the reported data. To assist, for the full year mean data, I have created temperature anomalies based on the average anomaly in that year.

The raw data shows a significant cooling of up to 1°C in the late 1960s. If anything there has been over-compensation in the adjustments. Since 1970, any warming in the adjusted data has been through further adjustments.

Is this evidence of a conspiracy to “hide a decline” in Paraguayan temperatures? I think not. My alternative hypothesis is that this decline, consistent across a number of thermometers, is unexpected. Anybody looking at just one of these data sets recently would assume that the step change in 40-year-old data from a distant third world country is bound to be incorrect. (Shub has a valid point.) That change goes against the known warming trend for over a century from the global temperature data sets, and the near-stationary temperatures from 1950-1975. More importantly, cooling goes against the “known” major driver of recent temperature change – rises in greenhouse gas levels. Do you trust some likely ropey instrument data, or your accumulated knowledge of the world? The clear answer is that the instruments are wrong. Homogenisation is then not to local instruments in the surrounding areas, but to the established expert wisdom of the world. The consequent adjustment cools past temperatures by one degree. The twentieth century warming is enhanced as a consequence of not believing what the instruments are telling you. The problem is that this step change is replicated over a number of stations. Paul Homewood has shown that it probably extends into Bolivia as well.

But what happens if the converse occurs? What if there is a step rise in some ropey data set from the 1970s and 1980s? This might be large, but not inconsistent with what is known about the world. It is unlikely to be adjusted downwards. So if there have been local or regional step changes in average temperature over time, both up and down, the impact will be to increase the apparent rate of warming if the data analysts believe that the world is warming and human beings are the cause of it.

Further analysis is required to determine the extent of the problem – but not from this unpaid blogger giving up my weekends and evenings.

Kevin Marshall


