Michael Mann’s narrow definition of “Skepticism”

Climate Scientist MM continues his dogged defence of the climate consensus at Thinkprogress.

Consider the following statement

Make no mistake: Skepticism is fundamental to good science. Whenever a conclusion is drawn or a proposition is made, the demand that it stand up to scrutiny is the self-correcting machinery that drives us towards a better understanding of the way the world works. In this sense, every scientist should be a skeptic. Good science responds to good faith challenges, and to contradictory evidence that is presented, and climate-change science should be no different.

The statement is at first beguiling, and its spirit is something that many would agree with, although “good faith challenges” allows for discrimination against people you disagree with. However, it is his meaning of “scepticism” that I want to take issue with here.

Mann’s definition is most clearly expressed by John Cook of “Skeptical Science”, but also supported by (amongst others) Tamino of the “Open Mind” blog. The clearest expression is in the article “Are you a genuine skeptic or a climate denier?”

Genuine skeptics consider all the evidence in their search for the truth. Deniers, on the other hand, refuse to accept any evidence that conflicts with their pre-determined views.

Compare this with a more established source of word definitions – the Oxford English Dictionary. I don’t have the full 20 volume edition, but I think my 1983 book club edition of the Shorter OED will do well enough. There are a number of definitions of “sceptic” on page 1900.

Definition 1 pertains to a school of philosophy after the Greek Pyrrho, which doubts the possibility of knowledge of any kind.

Definition 2 is someone who doubts the validity of knowledge claims in a particular area of inquiry. This includes, but is not confined to, the natural sciences. In the area of climate, an example is the Climate Realists like Tallbloke, who doubt the greenhouse gas theory.

Definition 2.1 “one who maintains a doubting attitude with reference to a particular question or statement“. The OED has this as the popular definition.

Definition 3 is one who doubts the truth of Christianity. An older definition, not applicable here.

Definition 4 is one who is seeking the truth. That is “an inquirer who has not arrived at definite convictions“. This is only occasionally used, at least in the late 20th century.

Cook’s definition is at odds with all the definitions in the dictionary. There is nothing there about how much evidence a genuine skeptic must consider. Indeed, by his own definition Cook is not a skeptic. More seriously, Cook is disagreeing with the experts in their field. According to Cook’s definition, a skeptic is someone who formerly had a doubting attitude as in 2.1, but has now fallen into line. The philosophers are a school of deniers. (Desmogblog will no doubt now unearth evidence that they were in the pay of big olive oil producers.) Someone who still doubts the truth of climate change will simply not have “considered all the evidence” yet. For those who have read the evidence, this category ceases to exist. The doubter of Christianity is irrelevant, whilst the seeker of truth is merely behind the curve.

But then whom do you believe on the definition of “skeptic / sceptic”: a consensus of the world’s leading experts, or a group of dogmatic people using language for partisan purposes?

NB. I use sceptic with a “c” to denote the expert definition, and with a “k” to denote the partisan definition. However, I quite realise the use of “c” was probably a result of King George III trying to separate Britain from the revolting colonies by means of a common language.

With respect to Dr Mann, I may have got him totally wrong. Maybe he does not realise that skepticalscience.com is based on a misrepresentation. If Dr Mann (or a nominated associate) would like to clarify that he follows expert opinion, I will be more than happy to distance him from the polemicists who allegedly support him.

Aerosols – The UNIPCC AR4 adjustment factor

Scientific effort should be dedicated towards resolving the biggest unknowns. After feedbacks, the largest area of uncertainty in forecasting future global warming is the measurement of radiative forcing components.

A quick analysis of the radiative forcing components table in the 2007 AR4 Summary Figure 2.4, page 17 (4.1MB pdf), would suggest that a number of fudge factors have been used to arrive at the results.

I have summarised the table below, less the fancy bars, but with the uncertainty spreads and some check totals.

Radiative Forcing Components
Derived from AR4 (accessed March 2012)
RF Effect (W m-2)

Forcing Component                        Mid-point     Low    High  Spread %
Carbon Dioxide                                1.66    1.49    1.83       20%
Methane                                       0.48    0.43    0.53       21%
Nitrous Oxide                                 0.16    0.14    0.18       25%
Halocarbons                                   0.34    0.31    0.37       18%
Ozone – Stratospheric                        -0.05   -0.15    0.05      400%
Ozone – Tropospheric                          0.35    0.25    0.65      114%
Stratospheric water vapour from CH4           0.07    0.02    0.12      143%
Surface Albedo – Land Use                    -0.20   -0.40    0.00     -200%
Surface Albedo – Black Carbon on Snow         0.10    0.00    0.20      200%
Aerosol – Direct effect                      -0.50   -0.90   -0.10     -160%
Aerosol – Cloud albedo effect                -0.70   -1.80   -0.30     -214%
Linear Contrails                              0.01   0.003    0.03      270%
Net total                                     1.72   -0.61    3.56      242%

Of which:-
Positive Forcings                             3.17    2.64    3.90       40%
Negative Forcings                            -1.45   -3.25   -0.35     -200%
Asymmetric Summing                            1.72    0.65    2.29       96%
Total per the Report                          1.60    0.60    2.40      113%

If these were financial figures, an external auditor might ask the following questions.

  1. Why do the columns not add up? The difference of 0.12 is the same as the figure for solar irradiance. I would guess that the error in the mid-point is due to someone having deducted this figure from the total, erroneously believing that they had previously included it.
  2. Given the breadth of uncertainty, is it more than a coincidence that the negative forcings almost exactly offset all the positive forcings with the exception of CO2? This conveniently reduces the language of the debate from discussing “anthropogenic greenhouse gases” to “rising CO2”.
  3. Given the breadth of uncertainty, is it more than a coincidence that the range of negative forcings is exactly equal to 200% of the sum of the mid-points?
  4. Given the breadth of uncertainty, is it more than a coincidence that the range of positive forcings is almost exactly equal to 40% of the sum of the mid-points? Adjust any of the figures by 0.01, and the result becomes less exact.
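As a cross-check, the table’s totals can be recomputed with a short script. The numbers are the AR4 mid-points, lows and highs from the table above; the sums and spread percentages are my own arithmetic, not figures from the report:

```python
# Mid-point, low and high radiative forcings (W m-2), as tabulated above.
forcings = {
    "CO2":             (1.66, 1.49, 1.83),
    "CH4":             (0.48, 0.43, 0.53),
    "N2O":             (0.16, 0.14, 0.18),
    "Halocarbons":     (0.34, 0.31, 0.37),
    "O3 strat":        (-0.05, -0.15, 0.05),
    "O3 trop":         (0.35, 0.25, 0.65),
    "Strat H2O":       (0.07, 0.02, 0.12),
    "Albedo land use": (-0.20, -0.40, 0.00),
    "Black carbon":    (0.10, 0.00, 0.20),
    "Aerosol direct":  (-0.50, -0.90, -0.10),
    "Aerosol cloud":   (-0.70, -1.80, -0.30),
    "Contrails":       (0.01, 0.003, 0.03),
}

net_mid = sum(m for m, lo, hi in forcings.values())
pos_mid = sum(m for m, lo, hi in forcings.values() if m > 0)
neg_mid = sum(m for m, lo, hi in forcings.values() if m < 0)

# Question 1: the components sum to 1.72, not the reported 1.60,
# a difference of 0.12, the same as the solar irradiance figure.
print(round(net_mid, 2), round(net_mid - 1.60, 2))   # 1.72 0.12

# Question 3: the spread of the negative forcings is exactly 200%
# of the sum of their mid-points.
neg_lo = sum(lo for m, lo, hi in forcings.values() if m < 0)
neg_hi = sum(hi for m, lo, hi in forcings.values() if m < 0)
neg_spread = (neg_hi - neg_lo) / abs(neg_mid)
print(round(neg_spread * 100))                        # 200
```

Running this reproduces the check totals in the summary table, including the 0.12 discrepancy against the reported 1.60.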

This is an important issue, as this situation doubly increases the influence of CO2 on future warming. Firstly, it is the anthropogenic greenhouse gas that is consistently increasing. Others, like methane, have stabilised. Secondly, aerosols are likely to decrease in the future as countries develop and clean air legislation is enacted. Given the huge uncertainties in the other forcings, and the possible fudge factors employed, it is possible that the significance of CO2 has been over-estimated several times over. This is before water vapour feedbacks are considered.

Update June 3rd 2012.

Comparing with a paper published by James Hansen et al. in 2000 gives further circumstantial support to the fudge factors being employed.


John Redwood lights the “Global Warming” fuse again

John Redwood bravely touched on the global warming subject again in “Challenging establishment orthodoxies“.

One of the strange features of global warming theory is the reaction of its leading protagonists. They say it is scientifically derived, but then go on to say the science is proven and established. I thought the essence of scientific method was to reach a hypothesis that seemed to fit the facts, and then to keep trying to improve or destroy it by further testing or experiment. This seems to be a thesis where the aim is always to buttress it rather than test it. For many years scientists thought Newton had said the last word on planetary motion, but the twentieth century did not rest until they had replaced or improved on the Newtonian universe in a dramatic way.”

My comment (following from previous comments made on this blog) was

You will notice that whenever you mention “Global Warming” that you are guaranteed to get a greater number of comments compared to practically any other issue. Further the views are probably more polarized and politicized than any other issue.
However, the way to proceed might not be one of hypothesis testing. The data is complex and most of the science is about future events. Rather, it might be worth using the experience with which you are more familiar.
1. In business, a new investment proposal will not just be assessed on the theoretical profits, but on the capacity to see that proposal through to actual success. The Stern Review allegedly gave the theory, but there was nothing on the public policy issues of controlling policy costs and maximizing policy benefits (CO2 reductions). Whatever the policy, this failure of focus and project management is a sure guarantee of policy disaster.
2. In politics, the greatest threat to extremist and untenable viewpoints has been from the majority, who are able to compare these viewpoints with their other perspectives. That is why authoritarian regimes can only exist in an environment where they silence criticism. There is growing evidence of contrary views being excluded without a fair hearing in our scientific institutions, in research funding, and in the mainstream media.
3. Science at the frontiers is about making bold hypotheses that can be falsified by later testing. Similarly, the police in a crime investigation make conjectures and then gather evidence. Established science (on which policy should be based) is like a successful prosecution in a criminal case. It is about presenting the evidence and undergoing cross-examination by the opponents, in order to convince a randomly-selected group of people. My contention would be that the strongest evidence of catastrophic global warming is the most trivial, whilst the most alarming aspects of climate change are based on weak, circumstantial and hearsay evidence.

Two relevant references
https://manicbeancounter.com/2011/02/13/climate-change-in-perspective-%E2%80%93-part-2-of-4-the-mitigation-curve/
https://manicbeancounter.com/2012/02/20/a-climate-change-global-warming-spectrum/

The Morality of Lying and Exaggerating for Climate Science

In the Guardian today, James Garvey, argues that the actions of Peter Gleick in lying to obtain documents from the Heartland Institute could be justified in the interests of the wider good. He says

The documents, if authentic, show that Heartland takes money – in secret – from people who have something to gain by the idea that climate science is uncertain, and then spread that idea with enthusiasm. Do I actually need to say this in 2012? There is no controversy in the scientific community about Heartland’s target: the fact of warming and the human role played in it.

What Heartland is doing is harmful, because it gets in the way of public consensus and action. Was Gleick right to lie to expose Heartland and maybe stop it from causing further delay to action on climate change?

There are some issues with this statement

  • The most important strategy document is almost certainly fake. Peter Gleick was accused by Steven Mosher of being the source of the leak, because this document was in his distinctive style of writing, including grammatical errors. Gleick denies he wrote the fake document, but now admits to (the lesser crime of) obtaining the other documents by deception.
  • The following statement is ambiguous

    There is no controversy in the scientific community about Heartland’s target: the fact of warming and the human role played in it

    It can mean one of four things. First, that the “scientific community” agrees on what Heartland’s target is (so there must be a straw poll somewhere). Second, that the scientific community believes in anthropogenic global warming. In which case there must be a definition of who is in the “scientific community” and who is out. The “97% of scientists believe” figure came from a small subset of all scientists in the climate field, who were asked two very trivial questions, so the degree of belief is not in the predicted level of catastrophe that would justify drastic action. Third, that the warming itself is a fact. The statement that global average surface temperatures are higher than they were 50, 100, 150, or 400 years ago is incontrovertible (though the actual amount is debatable). Fourth, that the human role played in it is a fact. That is a subject of wide controversy; they are two separate claims, so the human role is just a belief of the 97% of the 1.6% of those who answered two trivial questions (respondents being just over 30% of those who received questionnaires). Whatever the ambiguities in the statement, it does not rest on scientific evidence, as there is plenty of controversy over the anthropogenic contribution due to a lack of incontrovertible scientific evidence.

  • If the scientific consensus was created by a minority and maintained by “outing” any who voiced concerns, with activists seeking to annul their funding, then that “consensus” opinion should be viewed with a little bit of scepticism.
  • The statement “What Heartland is doing is harmful, because it gets in the way of public consensus and action.” is a potential moral minefield. If 90% of the population decide that it is alright to persecute a peaceful minority would that be alright? If 90% of the population strongly believe that potential terrorists should be held without trial and tortured, would that be alright?

But leaving these issues aside, the problem with telling lies, or exaggerating, is when you are found out. Once you have lost people’s trust, it is very hard to regain that trust. Dale Carnegie in “How To Win Friends And Influence People” made this very point. 
However, from a purely utilitarian point of view it might be permissible to mislead a suspected criminal in order to find the evidence, as it is not that person’s trust that you want to maintain. The wider public will generally think well of you if you get a criminal off the streets. But if it is to marginalise your opponents, it will backfire if the wider public then perceives that you cannot be trusted. This is especially true when much of the case for climate change is based on trust in scientists to report accurately on a complex subject.

The reasons that there is growing distrust in the scientific consensus are multiple:

  • Michael Mann’s hockey stick studies were based on cherry-picked data and on weightings biased towards the individual studies that showed hockey sticks over the ones that did not, AND the favoured studies have all been overturned.
  • The UNIPCC 2007 report did not live up to the projected image in a number of areas. The Himalayan Glaciers episode is only the tip of the non-melting iceberg. It is full of partisan analysis and exclusion of contrary science.
  • The Climategate email hack also showed the public image of certainties held by a wide number of scientists is nothing of the sort. The core group are highly partisan, and have taken strenuous efforts to exclude contrary views from the journals.

Finally, please remember that activists have got every major scientific body, including the Royal Society, to make proclamations in favour of Global Warming Alarmism. If public funding of science is seen to go to those who lie and exaggerate, then there will be increased distrust in all areas of science. These activist scientists are risking more than their own reputations.

 

Did Wivenhoe dam operators SEQwater swallow the CAGW hype on Australian Droughts?

The Australian “The Climate Sceptics Blog” takes a look at the Wivenhoe dam’s involvement in the catastrophic Queensland flood. I disagreed with the opinion there, suggesting instead that it might be sufficient to show that the operators SEQwater did not undertake a proper, impartial risk assessment.


The question of having to prove that “AGW is not true” in the Wivenhoe case may be a little extreme.

Rather, they would need to show that the operators had a revised policy that gave due weighting to the Australian Government’s Report. I have only read the results. It says here quite clearly

“Observed trends in exceptionally low rainfall years are highly dependent on the period of analysis due to large variability between decades.”

In other words, the results are not robust. This is not surprising. The report only looked at a period of 40 years, so could say little about the frequency of once-in-a-generation extreme events. It does not say that floods will never occur again, as they have in the area since time immemorial.

If the authorities did not undertake a proper risk assessment of future scenarios, based upon a balance of existing knowledge and the report, then the change of purpose from flood management to reserve storage facility is flawed, unless there is near certainty that a climatic shift has occurred in a definite way. This is because

  1. The Report clearly stated that its results were not robust, AND did not predict that extreme rainfall would never happen again.
  2. There is a further complication that may hold. If there has not been an extreme climatic shift (or only a partial one, or we are in a slow transition from one state to another), then an area with extreme floods in the past will still be likely to have extreme floods in the future.
  3. Further, the lack of extreme floods for an extended period might pose a greater risk of extreme flooding in the immediate future.

This whole thing becomes a complex matter of balance of risks. That is why they should have solicited expert opinion on risk management from different perspectives, and tried to eliminate any corporate or individual biases. Furthermore, a risk management body should have publicly stated this change of use of the Wivenhoe Dam, so that householders could make adjustments to their risk portfolios.



These conclusions are based on analysis of unfolding news reports; the hype that exists for Catastrophic Anthropogenic Global Warming; and my developing analysis of Climate Change (see here, here and here). This comment is not intended as a legal opinion on the case, nor should it be taken as such.

A Climate Change / Global Warming Spectrum

In politics, most people’s views can be placed on a spectrum, but when it comes to climate change / global warming there is no such spectrum. The views are often polarized, particularly by those who believe in a future climate catastrophe. This is an initial attempt at a grid aimed at clarifying the issues. Your constructive advice is sought on how this might be improved.

When there are contentious or politicized issues, a spectrum of opinions emerge where there is free discussion of ideas. This is true in politics and the Christian religion. In both, there is not just a one-dimensional spectrum of ideas, but multi-dimensional perspectives. For instance, in politics it has been argued that the left-right spectrum should be split into economic and moral issues. The United States Libertarian Party has had a simple survey running since 1995. A more comprehensive (but still American-orientated) survey is the Political Spectrum Quiz.

Another idea comes from Greg Craven, who did a series of zany YouTube videos on climate change, such as “The Most Terrifying Video You’ll Ever See” and “How it all ends”. He claimed that for the mass of non-scientists it was best to take a risk-based approach, grading the science on the credibility of those who made the claims. One objection to his analysis was that it was based on polar extremes: either the worst climate catastrophe imaginable, or it is all a hoax. I proposed that there was a spectrum of possible outcomes, with the apocalyptic catastrophe at one extreme and the null outcome at the other. Basically, there is a spectrum of views.

For this spectrum, the possible scenarios are from the null outcome on the left, rising to a huge climate catastrophe on the right.

Craven’s argument was to consider either 0 or 1000, whereas I claimed that the UNIPCC scenarios (representing the “consensus” of climate scientists) allowed for a fair range of outcomes. I have provided a log scale, as this puts clear distance between someone who believes in a low risk of extreme catastrophe and someone who says there is no risk at all. For instance, if someone believes that there is a 1% chance of the worst case, a 9% chance of a loss of 100 and a 90% chance of a loss of 10, then their score would be 0.01*1000 + 0.09*100 + 0.90*10 = 28. In other words, for that person, especially if they are risk averse, there is still a very significant issue that should justify serious consideration of some type of global policy action.
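The arithmetic of that worked example can be sketched as a small function. The probabilities and loss values are the illustrative ones from the paragraph above, not anyone’s actual estimates:

```python
def expected_loss(outcomes):
    """Probability-weighted loss over a spectrum of outcomes.

    `outcomes` is a list of (probability, loss) pairs; the
    probabilities should sum to 1.
    """
    assert abs(sum(p for p, loss in outcomes) - 1.0) < 1e-9
    return sum(p * loss for p, loss in outcomes)

# 1% chance of the worst case (1000), 9% chance of a loss of 100,
# 90% chance of a loss of 10, as in the example above.
print(round(expected_loss([(0.01, 1000), (0.09, 100), (0.90, 10)]), 2))  # 28.0
```

The same function can score any position on the spectrum, which is the point of the grid: different probability assignments over the same outcomes give very different scores.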

But this measure of the prospective level of climate catastrophe needs to be based upon something. That something is scientific evidence, not people’s intuitions or gut feelings. If we imagine that the uncertainties can be measured as risks (as neoclassical economists do), then the worst case scenario can only be attained if there is near-certain, unambiguous scientific evidence in support of that prediction. If the evidence is statistically weak, gives highly variable results depending on methodology or data sets, or is only tangential to the prediction, then a risk weighting lower than 1 will need to be ascribed. For an overall picture, we need to ascribe a weighting to the body of evidence. I propose a traffic light system. In outline, green is for an overwhelming body of evidence, red is for no proper evidence whatsoever, and amber is for some weak evidence. Something along the following lines:-

Basically, an unambiguous case for impending global catastrophe must have a substantial body of strong scientific evidence to substantiate that case, with little or no contrary evidence. I will develop on another day the analogy with evidence presented to a criminal court by the prosecution. However, for the present, an analogy that is relevant is that this conclusion is only reached once the evidence fails to fall over under independent cross-examination.

This gives us a grid with the magnitude of the climate catastrophe on the X axis, and the scientific case on the Y axis. The grid, with my first opinion of where people various groups are placed, is given below. I know it is controversial – the whole aim is to get people to start thinking constructively about their views.

Alarmist Blogs (for instance Skeptical Science and Desmogblog) have an extreme black-and-white world where they are always right, and anyone who disagrees is the polar opposite. “Deniers” is a bogeyman construct of their making.

If one reads the detail of the UNIPCC AR4 report, the “Consensus” of climate scientists allows for some uncertainties, and for scenarios which are not so catastrophic.

The more Sceptical Scientists, such as Richard Lindzen, Roger Pielke Snr and Roy Spencer, view increasing greenhouse gases as a serious issue for study. However, they view the evidence as being both much weaker than the “consensus” and pointing to a much less alarming future.

The most popular Sceptic Blogs, such as Wattsupwiththat, Climate Audit and Bishop Hill I characterise as having a position of “The stronger the evidence, the weaker the relevance“. That is they allow for a considerable spread of views, but neither dismiss rise in CO2 as of no consequence, nor claim that the available evidence is strong.

Finally, there are the Climate Realists, such as Joanne Nova and the British Climate Realists website. They occupy a similar position to the “deniers”, but from a much more substantial position. They can see little or no evidence of catastrophe, but huge amounts of exaggeration dressed up as science.

What are your opinions? What position do you think you lie on the grid? Is there an alternative (and more informative) way of characterizing the different positions?

Heartland Leak – The Implications

The stolen documents from the Heartland Institute have caused a lot of comment on the blogs. There are a number of things that will come out of this.

1. The consensus climate scientists and their cohorts cannot deal with numbers. Just as they have no sense of proportion with financial values (see Jo Nova on this), they likewise have no sense of proportion with sea level rise, temperature rise, or extreme weather events.

2. A better antonym of “sceptical” than “undoubting” or “believer” is “gullible”. It seems DeSmogBlog did not think to check the authenticity of the damning 2012 strategy document, nor do they accept the Heartland rebuttal. It fitted the narrative, so they published within an hour of receiving the mail. Similarly The Guardian posted a number of one-sided reports (here, here, here), as did Roger Black of the BBC, without waiting to verify the facts. The most alarming 2012 strategy document is a fake (Judith Curry has other references).

3. A number of people, like me, will visit Heartland.org for the first time. They will find they have 7 policy areas employing 20 people, of which “Environment & Energy” employs 3. They specialise in providing cogent summaries of these issues to policy-makers. Whatever you think of their political stance, they are hardly the secretive, rabid backwoodsmen right-wingers that the alarmists project.

4. This support for spreading information in a concise, intelligible form also comes out in the sceptic-funding “exposés”. There is one-off support for Anthony Watts, who

proposes to create a new Web site devoted to accessing the new temperature data from NOAA’s web site and converting them into easy-to-understand graphs that can be easily found and understood by weathermen and the general interested public.” 

The alleged biggest recipient by far of monthly funding is Craig D Idso, who founded the co2science.org website. This provides summaries of climate science papers, collating their results to help give an overall picture of such topics as the medieval warm period, ocean acidification and the effect of CO2 on plant growth. For instance, I like this graphic summarising the proxy studies of the MWP, showing that the Mannian Hockey Stick studies need at least to reconcile their claim that average global temperatures are warmer than at any time in the last 1000 years.

5. It illustrates the upside-down nature of climatology, compared with conventional science. Conventional science is based on making bold statements and predictions that are substantiated by the evidence, with very clear and replicable methods. Over time it refines its techniques, strengthens its methods of analysis and sees its predictions confirmed. It does not need to denigrate, or attempt to silence its detractors. Like the historians of the holocaust, conventional science just points to the evidence and enlightens those who seek the truth. The real deniers of truth in history have been those who silence their opponents and fabricate distortions.

Overall, the leak exposes why the little Heartland Institute is so evil and dangerous to many. They threaten the jobs and reputations of tens of thousands of climate scientists, “policy-makers”, regulators, and powerful business interests in the alternatives to reliable energy. On the other hand, they are on the side of those made hungry by fuel crops competing with food, and of future generations globally, who will be worse-off by growth-sapping mitigation policies.

Cold water on sea level rise alarmism

The new article in Nature on “Recent contributions of glaciers and ice caps to sea level rise” (Jacob et al. 2012) is in stark contrast to the previous claims.

The main estimates before Jacob et al. 2012 were:-

  • The Himalayan Glaciers will disappear by 2035 (UNIPCC AR4 2007), later changed to the Himalayan Glaciers may disappear by 2350 (UNIPCC 2010).
  • The GRACE satellite data show that the polar ice caps are not only melting, but that the melt rate is accelerating. Velicogna 2009 claimed an acceleration in Greenland of −30 ± 11 bnt/yr², with the loss rate reaching 286 bnt/yr over 2007 to 2009, and in Antarctica of −26 ± 14 bnt/yr², with the loss rate reaching 246 bnt/yr over 2007 to 2009. Concentrating on the period from 2006 to early 2009 for Antarctica only, Chen et al. 2009 estimated that the continent was losing ice at the rate of 190 ± 77 bnt/yr, two-thirds of which comes from West Antarctica, which covers about a quarter of the total land surface area. By 2010, the loss from both polar caps would, by Velicogna’s estimate, be 600 to 650 bnt/yr.
  • The average of these two articles suggests that by 2010 there would be around 600 bnt/yr of loss.
  • One of the articles’ authors, Prof John Wahr of the University of Colorado, Boulder, had previously stated that the GRACE measurements indicate an accelerating trend in Greenland. The current graph at Wahr’s website for Greenland shows a distinct accelerating trend through to the start of 2010.

    Mass variability summed over the entire Greenland Ice Sheet: monthly Gravity Recovery and Climate Experiment (GRACE) results (black line; the orange line is a smoothed version), April 2002 to December 2009.

    Prof John Wahr’s graph of Greenland Ice Sheet loss, indicating a doubling of the rate of loss over the period to around 150 bnt/yr in 2009.

  • Zwally and Giovinetto 2011, using three separate estimation techniques and including the pre-satellite data from 1992 to 2002, estimated a range of +27 to −40 bnt/yr.
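As a rough cross-check on the “600 to 650 bnt/yr” figure quoted above, one can extrapolate Velicogna’s 2007–09 rates forward using her quoted accelerations. Treating 2010 as 1.5 to 2 years beyond the centre of the 2007–09 period is my own assumption, purely for illustration:

```python
# Velicogna 2009 figures quoted above: loss rate over 2007-09 (bnt/yr)
# and acceleration of the loss rate (bnt/yr^2), for each ice sheet.
greenland = {"rate": 286, "accel": 30}
antarctica = {"rate": 246, "accel": 26}

def rate_in_2010(sheet, years_ahead):
    # Simple linear extrapolation: rate plus acceleration times elapsed years.
    return sheet["rate"] + sheet["accel"] * years_ahead

# Extrapolating 1.5 to 2 years forward brackets the 600-650 bnt/yr figure.
for years in (1.5, 2.0):
    total = rate_in_2010(greenland, years) + rate_in_2010(antarctica, years)
    print(years, total)
```

Under those assumptions the combined loss comes out at roughly 616 to 644 bnt/yr, consistent with the 600 to 650 range in the bullet above.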

The new paper in Nature:-

  • Estimates no net loss from the Himalayas in the period 2003 to 2010. When doubts were raised about the claim that the Himalayas would lose their glaciers by 2035, Rajendra Pachauri, head of the UNIPCC, said the doubts were “voodoo science”. Now even the more moderate claim of melting over hundreds of years looks to be in doubt. Josh has penned a cartoon to illustrate this point.

  • Velicogna 2009 now seems somewhat extreme. The Nature paper would estimate a loss of 50% to 75% of the Velicogna estimate for 2010.
  • Most importantly, there is no mention of acceleration of ice melt from the polar ice caps. This sudden turn-around might be due to a sudden change in the data. The sea level rise appears to have stalled in the last 18-24 months, so the ice melt (which the Nature paper estimates accounts for 40% of the sea level rise) may have stalled as well. (See Appendix 2.) It would be necessary to re-run the Nature paper numbers with 2011 data to confirm if this is the case.

In conclusion, the new Nature paper reaches a more moderate position than previous papers using the GRACE satellite data, as it uses a longer period and subjects the data to a more detailed breakdown. However, in terms of the polar ice melt, it is still more extreme than a paper that uses a longer timeframe and three distinct methods of calculation.

Appendix 1 – Leo Hickman in the Guardian has a breakdown of the figures, which nicely puts the issue in context.

Glaciers
   Region                                       Rate (Gt yr-1)
1  Iceland                                          -11 ± 2
2  Svalbard                                          -3 ± 2
3  Franz Josef Land                                   0 ± 2
4  Novaya Zemlya                                     -4 ± 2
5  Severnaya Zemlya                                  -1 ± 2
6  Siberia and Kamchatka                              2 ± 10
7  Altai                                              3 ± 6
8  High Mountain Asia                                -4 ± 20
8a Tianshan                                          -5 ± 6
8b Pamirs and Kunlun Shan                            -1 ± 5
8c Himalaya and Karakoram                            -5 ± 6
8d Tibet and Qilian Shan                              7 ± 7
9  Caucasus                                           1 ± 3
10 Alps                                              -2 ± 3
11 Scandinavia                                        3 ± 5
12 Alaska                                           -46 ± 7
13 Northwest America excl. Alaska                     5 ± 8
14 Baffin Island                                    -33 ± 5
15 Ellesmere, Axel Heiberg and Devon Islands        -34 ± 6
16 South America excl. Patagonia                     -6 ± 12
17 Patagonia                                        -23 ± 9
18 New Zealand                                        2 ± 3
19 Greenland ice sheet + PGICs                      -222 ± 9
20 Antarctica ice sheet + PGICs                     -165 ± 72
   Total                                            -536 ± 93
   GICs excl. Greenland and Antarctica PGICs        -148 ± 30
   Antarctica + Greenland ice sheet and PGICs       -384 ± 71
   Total contribution to SLR (mm yr-1)             -1.48 ± 0.26
   SLR due to GICs excl. Greenland and Antarctica PGICs    -0.41 ± 0.08
   SLR due to Antarctica + Greenland ice sheet and PGICs   -1.06 ± 0.19

 

Appendix 2 – University of Colorado Sea Level Rise Estimates

A Bet Won on the Warming Standstill

Congratulations to Dr David Whitehouse of the GWPF for winning a bet with Dr James Annan.

The bet, made in 2007, was that the HADCRUT3 temperature record of 1998 would not be beaten by 2011. It was made at the instigation of the BBC Radio 4 programme “More or Less”. Annan then provided data analysis to show why he was odds-on favourite to win the bet here and here. Both RealClimate and Mark Lynas had earlier weighed in with articles giving the mainstream viewpoint. I posted on Dr Annan’s blog the following comment:

The mark of good science is not to predict the obvious, but to predict the unlikely.

Dr Whitehouse has stated that it was going beyond the obvious that enabled him to take on the bet. His full analysis can be found at both the GWPF and wattsupwiththat.

Of course there are those who will point to the biased GISTEMP to show that the warming is continuing. See my analysis here about why that dataset looks to be a little biased. There are of course those who will still maintain the warming is continuing (such as Roger Black of the BBC), but the true measure is predictive ability.


Adelaide – a decline in extreme heatwaves?

Joanne Nova has posted data from Ian Hill on extreme heatwaves in Adelaide, Australia. To quote

It’s another mindless record used to remind the public to “keep the faith” and recite the litany:

“Adelaide had it’s hottest start to the year since 1900 Sky news

Picking three particular days out of 365 and comparing them over a century is about as cherry-pickingly meaningless as it gets. But Ian Hill went back through the records and found not only that there have been 79 heatwaves in Adelaide since 1887, but that 51 of them were hotter than this one.

I have done a bit of number crunching of my own that is quite revealing. Higher temperatures are meant to lead to more extreme heatwaves. Using Ian Hill’s figures, Adelaide provides an exception. Hill’s definition of a heatwave is 3 or more consecutive days where the maximum temperature exceeds 38°C. I have downloaded the figures and categorised them by decade. There are two ways I have analysed this data. First is by the number of heatwaves per decade. Second is by the number of days per decade.
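The decade breakdown can be reproduced with a few lines. The sample events below are invented purely to illustrate the method, not Ian Hill’s actual data; each heatwave is recorded as a start year and a length in days:

```python
from collections import Counter

# Each heatwave as (start_year, length_in_days). Under Hill's definition,
# three or more consecutive days over 38C counts as a heatwave.
# These events are made up to show the method, not real Adelaide data.
heatwaves = [(1896, 5), (1908, 3), (1939, 6), (2008, 15), (2009, 9)]

def decade(year):
    return year // 10 * 10

# First analysis: number of heatwaves per decade.
waves_per_decade = Counter(decade(y) for y, days in heatwaves)

# Second analysis: number of heatwave days per decade.
days_per_decade = Counter()
for y, days in heatwaves:
    days_per_decade[decade(y)] += days

print(dict(waves_per_decade))  # {1890: 1, 1900: 1, 1930: 1, 2000: 2}
print(dict(days_per_decade))   # {1890: 5, 1900: 3, 1930: 6, 2000: 24}
```

Run against Hill’s full list, this gives the two charts described above: heatwave counts and heatwave days by decade.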


There are a number of points.

  1. The temperature data only start in 1887, so the 1880s are probably more significant than the figures suggest.
  2. The current decade might be more significant.
  3. The last decade, beginning 2000, was no more significant than the 1890s, 1900s, or 1930s.
  4. The 1990s were no more significant than the 1910s and 1920s.

Is there, however, a revival of extreme heatwaves in the last twenty years?

Nope. Just a couple of extreme years in 2008 and 2009.