Tamino on Australian Sea-Levels

Tamino attempts a hatchet-job on a peer-reviewed paper on Australian sea levels. Whilst making some valid comments, he gives the misleading impression that he has overturned its main conclusion.

The sceptic blogs (GWPF, Wattsupwiththat, Jo Nova) are highlighting a front-page article in The Australian about a peer-reviewed paper by P.J. Watson on Australian sea-level trends over the past century.

The major conclusion is that:-

“The analysis reveals a consistent trend of weak deceleration at each of these gauge sites throughout Australasia over the period from 1940 to 2000. Short period trends of acceleration in mean sea level after 1990 are evident at each site, although these are not abnormal or higher than other short-term rates measured throughout the historical record.”

The significance is that Watson shows a twentieth-century rise of 17cm +/-5cm in Australia, whilst Government policy is based on a sea level rise of up to 90cm by the end of the century. If there is deceleration from an already low base, then government action is no longer required, potentially saving billions of dollars.

Looking for other viewpoints, I found a pointer from Real Climate to Tamino’s Open Mind blog. Given my last encounter, when he tried to defend the deeply flawed Hockey Stick (see my comments here and here), I was curious to know if this was another misdirection. I was not disappointed. Tamino manages to produce a graph showing the opposite of Watson’s result: rapid acceleration, not gentle deceleration.

How does he end up with this contrary result? In summary, he:

  1. Chooses just one of the four data sets used: the Fremantle data set.
  2. Makes valid, but largely irrelevant, criticisms to undermine the scientific and statistical competency of the author.
  3. Takes time to make the point about treating 20-year moving averages as data for analysis purposes. The problem is that moving averages underweight the data points at the beginning and the end of the record. In particular, any recent acceleration will be understated.
  4. Criticizes the modelling method, with good reason.
  5. Slips in an alternative model that may answer that criticism.
  6. Shows the results of that model’s output.

Tamino’s choice of the Fremantle data set needs justification, especially as Watson makes this comment in his conclusion:

“There is evidence of significant mine subsidence embedded in the historical tide gauge record for Newcastle and a likelihood of inferred subsidence within the later (after the mid 1990s) portion of the Fremantle record. In this respect, it is timely and necessary to augment these relative tide gauge measurements with CGPS to gain accurate data on the vertical movement (if any) at each gauge site to measure eustatic sea level rise. At present only the Auckland gauge is fitted with such precision levelling technology.”

That is, the Fremantle data shows the largest acceleration towards the end, and this extra acceleration might be because land levels are falling, not sea levels rising.

The underweighting of recent data is important, and could be dealt with by looking at shorter-period moving averages and observing the acceleration rates. That is, looking at moving averages for 19, 18, 17 years and so on. If the acceleration rates cross the 20cm-a-century rate as the periods shorten, then this will undermine Watson’s conclusion. Tamino does not do this, despite it being well within his capabilities. Until such an analysis is carried out, the claim in the abstract that “(s)hort period trends of acceleration in mean sea level after 1990 … are not abnormal or higher than other short-term rates measured throughout the historical record” is not undermined.
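
As a sketch of what that check might look like (my own illustration, not Tamino’s or Watson’s code; annual_msl is a hypothetical pandas Series of annual mean sea levels in mm for one of the four gauges):

```python
# Minimal sketch of the shorter-window test described above.
import numpy as np
import pandas as pd

def trend_and_acceleration(annual_msl: pd.Series, window: int):
    """Smooth with a centred moving average, fit y = a + b*t + c*t^2,
    and return (rate b in mm/yr, acceleration 2c in mm/yr^2)."""
    smoothed = annual_msl.rolling(window, center=True).mean().dropna()
    t = smoothed.index.values - smoothed.index.values[0]  # years from start
    c, b, a = np.polyfit(t, smoothed.values, 2)
    return b, 2 * c

# Repeating for windows of 20, 19, 18 ... years shows how much of any
# post-1990 acceleration the 20-year smoothing is hiding:
# for w in range(20, 14, -1):
#     print(w, trend_and_acceleration(annual_msl, w))
```

If the estimated acceleration climbs steadily as the window shortens, Watson’s deceleration conclusion is in trouble; if it stays flat, it stands.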

Instead of pursuing the point, Tamino then goes on to substitute for Watson’s modelling method an arbitrary one plucked from the air, with the comment:

“Finally, we come to the other very big problem with this analysis: the model itself. Watson models his data as a quadratic function of time:

x(t) = α + βt + γt²

He then uses 2γ (the 2nd time derivative of the model) as the estimated acceleration. But this model assumes that the acceleration is constant throughout the observed time span. That’s clearly not so.”

Instead he flippantly inserts a quartic equation, which gives the time-varying acceleration (the second derivative) as a quadratic function of time.

There are problems with a quadratic function as a model against time. Primarily, it has only one turning point. Extend the graph far enough and it reaches infinity. So at some point in the future sea levels will reach the sun, and later the rate of rise will be faster than the speed of light. More seriously, if this quadratic is the closest fit to all the data series, it will either have, or soon will have, overstated the actual acceleration. If used to project 90 years or more ahead, it will provide a grossly exaggerated projection based on known data.
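
A toy example (synthetic data, nothing to do with the actual gauge records) shows how quickly the higher-order fit runs away once you extrapolate:

```python
# Synthetic illustration: quadratic vs quartic fits agree in-sample but
# diverge wildly when projected decades ahead.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
msl = 1.7 * (years - 1900) + rng.normal(0, 10, years.size)  # toy series, mm

quad = np.polynomial.Polynomial.fit(years, msl, 2)
quart = np.polynomial.Polynomial.fit(years, msl, 4)

for y in (2000, 2050, 2100):
    print(y, round(quad(y)), round(quart(y)))
# The higher the polynomial order, the more an out-of-sample projection
# is driven by noise at the ends of the record.
```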

On this basis I have edited Tamino’s graph to give all the inferences that can be drawn from it about rising sea levels in Australasia.

That is, a pure maths exercise in plotting a quadratic equation on a graph, unrelated to any reality.

An alternative to this is to claim simply that there is not sufficient valid data, or that the analysis is too poor, to draw any long-term inferences.

An alternative approach is to relate the sea level rises to the global temperature rises. Try comparing Watson’s graph of the rate of change in sea levels to the two major temperature anomaly series.



First it should be pointed out that Watson uses a twenty-year moving average, so his data should lag the temperature data. The strong warming in the HADCRUT data from the 1920s to the 1940s is replicated in the Fort Denison and Auckland sea level data. The lack of warming in the 1945 to 1975 period is replicated by marked deceleration in all four data sets from 1950 to the 1970s. The warming phase thereafter is similarly replicated in all four data sets. The current static phase, according to the more reliable HADCRUT data, should similarly be marked by a deceleration in sea level rise from an already low level. Further analysis of Watson’s data is needed to confirm this.
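
One rough way to test that suggested lag, sketched under the assumption that hadcrut is an annual temperature-anomaly Series and msl_rate the annual rate of change of a gauge’s smoothed sea level (both hypothetical names):

```python
# Find the lag (in years) at which temperature best matches the
# sea-level rate of change; with 20-year smoothing, a lag of roughly
# ten years would be expected.
import pandas as pd

def best_lag(hadcrut: pd.Series, msl_rate: pd.Series, max_lag: int = 30):
    corrs = {lag: hadcrut.shift(lag).corr(msl_rate)
             for lag in range(max_lag + 1)}
    corrs = {lag: c for lag, c in corrs.items() if pd.notna(c)}
    return max(corrs, key=corrs.get), corrs
```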

There is no reason in the existing data to believe that Watson’s conclusions are invalid. It is necessary to play fast and loose with the data, and get lost in computer-model games, to draw alternative inferences. Yet if a member of the Australian Parliament says legislation to cope with sea level rise should be withdrawn due to a new study, the alarmist consensus (who have just skimmed through Tamino’s debunking) will say that the study has been overturned. As a result, ordinary, coastal-dwelling people in Australia will continue to endure real hardship due to legislation based on alarmist exaggerations. (here & here).

IPCC & Greenpeace

Shub Niggurath’s arguments (hat tip Bishop Hill) against the IPCC’s SRREN growth figures are complex. The Greenpeace model on which they were based basically took a baseline projection and backcast from there. A cursory look at the GDP figures shows that the economic models point to a knife-edge scenario. The economic models indicate that the wrong combination of policies, even if successfully applied, could cause a global depression for nigh-on a generation and lead to 330 million fewer people in 2050 than the do-nothing scenario. But a successful combination of policies will have absolutely no economic impact.

Shub examines this table:-

Table 10.3, page 1187, chapter 10 IPCC SRREN

(Page 32 of 106 in Chapter 10. Download available from here)

I have looked at the GDP per capita and population figures.


To see whether the per capita GDP projections are realistic, I have first estimated the implied annual growth rates. The IEA calculates a baseline of around 2% growth to 2030. The German Aerospace Centre then believes growth rates will fall to 1.7% in the following 20 years. Why, I am not sure, but it certainly gives a lower target to aim at. Projecting the 2030 to 2050 growth rate forward to the end of the century gives a GDP per capita (in 2005 constant values) of $56,000. That is a greater than five-fold increase in 93 years.

On a similar basis there are two scenarios examined for climate change policies. In the Category III+IV case, growth rates drop to 0.5% until 2030, then pick up to 2% per annum. Why a policy that reduces global growth by 75% for 23 years should then cause a rebound is beyond me. However, the impact on living standards is profound: almost 30% lower by 2030 than the baseline. Even if the higher growth is extrapolated to the end of the century, future generations are still 12% worse off than if nothing was done.

But the Category I+II case makes this global policy disaster seem mild by comparison. Here the assumption is that global output per capita will fall year-on-year by 0.5% for nearly a generation. That is falling living standards for 23 years, ending up at little over half what they would have been in the do-nothing scenario. The relative position is little changed in 2050 or 2100. Falling living standards mean lower life expectancy and a reduction in population growth. The model reflects this by projecting that these climate change policies will lead to 330 million fewer people than a do-nothing scenario.
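
A back-of-envelope check on those compound growth figures (the ~$10,500 starting value for 2007 is my assumption to make the arithmetic concrete; the growth rates are those discussed above):

```python
def project(start, segments):
    """Compound 'start' through (years, annual_rate) segments."""
    for years, rate in segments:
        start *= (1 + rate) ** years
    return start

baseline_2100 = project(10500, [(23, 0.02), (70, 0.017)])  # ~5.1-fold rise
print(round(baseline_2100))          # ~54,000 in 2005 dollars

# Positions in 2030 relative to the do-nothing path:
print(1.005 ** 23 / 1.02 ** 23)      # Cat III+IV: ~0.71, almost 30% lower
print(0.995 ** 23 / 1.02 ** 23)      # Cat I+II:  ~0.57, little over half
```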

Let us be clear what this table is saying. If the world gets together and successfully implements a set of policies to contain CO2 levels at 440ppm, the global output per capita in 2050 will be 40% lower. There is a downside risk here as well – that this cost will not contain the growth in CO2, or that the alternative power supplies will mean power outages, or that large-scale, long-term government projects tend to massively overrun on costs and underperform on benefits.

Let us hark back to the Stern Review, published in 2006. From the Summary of Conclusions:

“Using the results from formal economic models, the Review estimates that if we don’t act, the overall costs and risks of climate change will be equivalent to losing at least 5% of global GDP each year, now and forever. If a wider range of risks and impacts is taken into account, the estimates of damage could rise to 20% of GDP or more. In contrast, the costs of action – reducing greenhouse gas emissions to avoid the worst impacts of climate change – can be limited to around 1% of global GDP each year.”

Stern looked at the costs, but not at the impact on economic growth. So even if you accept his alarmist prediction of costs of 5% or more of GDP, would you bequeath that to your great-grandchildren, or a 40% or more reduction in their living standards along with the risk of the policies being ineffective? Add into the mix that the Stern Review took the more alarming estimates, rather than a balanced perspective(1), and the IPCC case for reducing CO2 with more solar panels and wind farms looks highly irresponsible.

From my own perspective, I would not have thought that the impact of climate mitigation policies could be so harmful to economic growth. If the models are correct that the wrong policies are hugely harmful to economic growth, then due diligence should be applied to any policy proposals. If the economic models from the IPCC are too sensitive to minor changes, then we must ask if their climate models suffer from the same failings.

  1. See for instance Tol & Yohe, World Economics, Vol. 7, No. 4, October–December 2006.

Update 27th July.

Have just read through Steve McIntyre’s posting on the report. Unusually for him, he concentrates on the provenance of the report and not on analysing the data.

Outflanking Al Gore & other alarmists

At Wattsupwiththat there is a proposal to build a database by

Find(ing) every false, misleading, scary, idiotic, non-scientific statement they have made in the past twenty years. Create an index by name with pages listing those statement with links to the source. Keep it factual. Let their own words come back to haunt them.

My comment was

A database of all the exaggerations, errors and false prophesies on its own will do no good. No matter how extensive, thorough and rigorous, it will be dismissed as having been compiled by serial deniers funded by big oil. Getting a fair hearing in the MSM will be impossible. In the coming battle the alarmists have decided the field of battle and have impenetrable armour.

To be brief, there need to be two analogies brought to the fore.

First is the legal analogy. If there is a case for CAGW, it must be demonstrated by primary, empirical evidence. That evidence must be tested by opponents. It is not the parts that may be true – like lots more CO2 causing some warming – that need demonstrating, but the full case: that there is sufficient CO2 to cause some warming, which will be magnified by positive feedbacks to cause even greater warming, and that this substantive warming will destabilize the planet’s weather systems in a highly negative way. The counter-argument is two-fold: that many of the dire, immediate forecasts have been highly exaggerated and, more importantly, that the compound uncertainties have been vastly underestimated. That the case is weak is shown by the prominence given to what is hearsay evidence, such as the consensus, or the proclamations of groups of scientists, or the image of the hockey stick. In some cases, it has been tantamount to jury-tampering.

Second is the medical analogy. A medical doctor, in prescribing a painful and potentially harmful course of treatment, should at least have a strong professionally-based expectation that post-treatment the patient will be better off than if nothing was done. The very qualities that make politicians electable – being able to build coalitions by fudging, projecting an image, and undermining opponents by polarizing views – make them patently unfit for driving through and micro-managing effective policy to reduce CO2. They will of necessity overstate the benefits and massively understate the costs, whether financial or in human suffering. They will not admit that the problem is beyond their capabilities, nor that errors have been made. The problem is even worse in powerful dictatorships than in democracies.

I have tried to suggest a method (for those who are familiar with microeconomics) of evaluating the IPCC/Stern case for containing CO2 here.

https://manicbeancounter.wordpress.com/2011/02/11/climate-change-policy-in-perspective-%E2%80%93-part-1-of-4/

Also, why there is no effective, global political solution possible.

https://manicbeancounter.wordpress.com/2011/02/13/climate-change-in-perspective-%E2%80%93-part-2-of-4-the-mitigation-curve/

What is missing is why the costs of global warming have been grossly exaggerated.

Question for Sir John Beddington

According to Bishop Hill, “Sir John Beddington is seeking feedback on the climate impacts report I blogged about yesterday.”

My question is of a technical nature. Given that the Stern Review of 2006 received worldwide acclaim for its novel conclusions, I would have thought Sir John Beddington would have utilised this work. Apart from a footnote or two, the only reference is in a box on page 63.

Dear Sir John,

I am a humble beancounter, who spends his time analysing complex project costs and application forms for capital expenditures. In this vein, on page 63 of your report you claim that the Stern Review had a social discount rate of 1.4%, whilst others conclude that Lord Stern used a discount rate of 0.1%. Have we all misread the report?

Show Warming After it Has Stopped Part 2

Last week I posted on how Myles Allen had pulled off a trick to show warming in the 21st century after the trend had stopped in 1998. According to David Middleton at Watts Up With That, the BBC’s Richard Black is using a similar decadal comparison to show that warming has continued. There are two problems with Richard Black’s claim that the GWPF are cherry-picking the data. First, that an employee of the UK state broadcaster should choose to use a foreign temperature record over the UK one. Second, why the switch to decadal comparisons, when annual figures have long been the norm.

Let me break this down with two graphs. As with the previous posting, I see no scientific reason why the starting point for the earth’s orbit of the sun has to be 1st January. I therefore include all 12-month moving averages. That is Jan-Dec, Feb-Jan, Mar-Feb etc. I have also included three lines in my analysis: first the NASA GISTEMP; second the HADCRUT3; and third the difference between the two.
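
As a sketch of the mechanics (giss and hadcrut3 are hypothetical names, assumed to be monthly anomaly Series with a DatetimeIndex):

```python
# All 12-month moving averages: Jan-Dec, Feb-Jan, Mar-Feb, ...
import pandas as pd

def twelve_month_series(monthly: pd.Series) -> pd.Series:
    return monthly.rolling(12).mean().dropna()

# giss_12 = twelve_month_series(giss)
# had_12  = twelve_month_series(hadcrut3)
# diff_12 = giss_12 - had_12                # the third line on the graphs
# decadal = giss_12 - giss_12.shift(120)    # change on a decade before
```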

The first graph shows the decadal change in the NASA GISS figures that Richard Black is talking about. Sure enough, the only period where the 12-month average temperature anomaly is lower than a decade before is in 2008. Using the HADCRUT3 data reveals a similar pattern, but the negative period is much longer. If the HADCRUT3 decadal change is subtracted from the GISTEMP’s, there is shown to be a greater decadal warming trend in the NASA figures than in the UK ones. This might suggest the reason for Richard Black’s preference for foreign data over that paid for by the UK taxpayer.

The second graph shows the 12-month moving average data – and clearly shows the reasons both for using decadal temperature changes over annual, and foreign data over British. From 1988 to 1997, there was no real warming trend if the Pinatubo cooling to 1995 is removed. However, the NASA anomaly seems to be around twice as volatile as the Hadley. But in 1998 the position reverses. The natural 1998 El Nino effect is twice as large according to the British scientists as it is to Dr Hansen and his team. Post 1998 the story diverges. According to NASA, the warming resumes on an upward trend. According to the Hadley scientists, the 1998 El Nino causes a step change in average temperatures and the warming stops. As a result, the NASA GISS warming trend is mirrored by its divergence from the more established and sober British series.

Oppenheimer – False prophet or multi-layered alarmist?

Haunting the Library has a posting “Flashback 1988: Michael Oppenheimer Warn Seas to Surge 83 Feet Inland by 2020“.

Apart from being a false and alarmist forecast in retrospect, even if the climate models on which it was based in 1988 were correct and unbiased, there would still have been less than a 1 in 1000 chance of this scenario occurring. Here is why.

The relevant quote from the “Hour” newspaper is

“Those actions could force the average temperature up by 2 degrees Fahrenheit in the next three decades….Such a temperature increase, for example, would cause the sea level to rise by 10 inches, bringing sea water an average of 83 feet inland”

There are possibly three, or more, levels of alarmism upon which this conclusion depends:-

  1. The sea level rise was contingent on a 2°F (1.1°C) rise over 32 years, which would have been at the top end of forecasts. Although the centennial rate of increase is around 3.5°C, my understanding of the climate models is that it is not just the global temperatures that are projected to rise, but also the decadal rate of increase in temperatures. This is consistent with the accelerating rate of CO2 increase. Normally the range of projections covers a 95% probability range, so the models would have projected a 2.5% chance of this temperature increase.
  2. The rise in sea levels would lag air temperature rises by a number of years. This is due to the twin primary causes of sea level rise – thermal expansion of the ocean and melting pack ice. Therefore, I would suggest a combination of three reasons for this projection. First, the models’ projection of a 10 inch (25cm) rise was exaggerated, due to faulty modelling. (IPCC AR4 of 2007 estimates a centennial rise of 30cm to 60cm, with accelerating rates of sea-level rise correlating with, but lagging, temperature rises.) Second, it was at the top end of forecast probability ranges, so there was just a 2.5% chance of the sea level rise reaching this level for a 2°F rise. Third, time lags were not fully taken into account.
  3. The mention of the impact on the horizontal average sea water movement of 83 feet (25m) is simply to spread alarmism. For low-lying populated coastal areas, such as Holland, it probably assumes the non-existence (or non-maintenance) of coastal defences. The calculation may also assume land levels do not naturally change. In the case of the heavily populated deltas and the coral islands, this ignores natural processes that have caused land levels to rise with sea levels.

So it could be that, based on the climate models in 1988, there was a 2.5% chance of a 2.5% chance of sea levels rising by 10 inches in 32 years, subject to the models being correct. There are a number of reasons to suspect that the models of climate and sea level rise are extreme. For instance, the levels of temperature rise rely on extreme estimates of the sensitivity of temperature to CO2 and/or the feedback effect of temperature increases on water vapour levels (see Roy Spencer here). Sea level rises were probably overstated, as it was assumed that Antarctic temperatures would rise in parallel with those of the rest of the world. As 70-80% of the global pack ice is located there, the absence of warming on the coldest continent will have a huge impact on future sea level forecasts.
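
Making the compounding explicit (my arithmetic, not Oppenheimer’s):

```python
# Two independent top-of-range outcomes, each at the 2.5% tail:
p = 0.025 * 0.025
print(p)   # 0.000625 -- less than 1 in 1000 (about 1 in 1600)
```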

Although this forecast was made by a climate scientist, it was not couched in the nuanced terms that empirical scientific modelling requires. But it is on such statements that policy is made.

Showing Warming after it has Stopped

Bishop Hill points to an article by Myles Allen that

“examines how predictions he made in 2000 compare to outturn. The match between prediction and outturn is striking…..”

Bishop Hill points out that this uses HADCRUT decadal data. Maybe a quick examination of the figures will reveal something? Using the HADCRUT3 data, here are the data for the last five decades.

This shows that the decadal rate of warming has been rising at a pretty constant rate for the last three decades. So all those sceptics who claim that global warming has stopped must have got it wrong then?

Let us examine the data a bit more closely.

The blue line is the Hadcrut annual anomaly figures from 1965 to 2010. The smoother red line is the 10 year average anomaly, starting with the 1956-1965 average and finishing with the 2001-2010 average. The decadal averages are highlighted by the red triangles.

The blue line would indicate to me that there was a warming trend from 1976 to 1998; since then it has stopped. This is borne out by the 10-year moving average, but (due to the averaging) the plateau arrives five years later. But the story from the decadal figures is different, simply due to timing.

So what scientific basis is there for using the decadal average? Annual data seems reasonable, as it is the time for the earth to make one orbit of the sun. But the calendar is fixed where it is because 1500 years ago Dionysius Exiguus devised a calendar with a mistaken estimate of the birth (or conception) of Jesus Christ as Year 1, and we have number base 10 possibly due to the number of fingers we have. Both are human artefacts. Further, the data is actually held in months, so it is only due to the Christian calendar that we go from January to December. This means that, of the 120 possible periods for decadal averages, Myles Allen’s choice reflects a cultural prejudice; in choosing decadal averages he shows a very human bias, not a real-world selection.
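
A sketch of the 120-window point (monthly is a hypothetical Series of monthly anomalies with a DatetimeIndex):

```python
# Compute the decadal average ending at every possible month, not just
# December, and see how the decade-on-decade change varies.
import pandas as pd

def decadal_averages(monthly: pd.Series) -> pd.Series:
    """120-month trailing means, one for each possible end month."""
    return monthly.rolling(120).mean().dropna()

# spread = decadal_averages(monthly) - decadal_averages(monthly).shift(120)
# The range of 'spread' across end months shows how sensitive the
# decade-on-decade warming figure is to where a decade starts.
```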

How does this affect the analysis of the performance of the models? The global temperature averages showed a sharp uptick in 1998. Therefore, if the models simply predicted a continuation of the trend of the previous twenty years, they would have been quite accurate. The fact was the prediction was higher than the outturn, so the models overestimated. It is only by exploiting the arbitrary construct of decadal data that the difference appears insignificant. Drop to a 5-year moving average and you will get a bigger divergence. Wait a couple of years and you will get a bigger divergence. Use annual figures and you will get a bigger divergence. The result is not robust.

Keynes, Hayek and Global Warming

Jo Nova points to the excellent Keynes versus Hayek rap videos and compares with global warming views. My own observations are more to do with the nature of theory.

To compare Keynes & Hayek, I believe that we need to separate Keynes from the mainstream Keynesians. Keynes saw theory as a means to get the policy he wanted. It was the Keynesians (starting with John Hicks’ IS-LM analysis) that started the modelling approach. Both Keynes and Hayek eschewed the mathematical modelling of modern economics. In this Keynes would be closer to the perspective of GLS Shackle than to the Keynesians.

  1. Keynes saw the economic system as being essentially unstable. There was no tendency for the economic system to tend towards an optimal equilibrium. Rather, it could get stuck for long periods with high unemployment. The parallels in CAGW theory – to this instability and to the Keynesian multiplier – can be seen in the positive feedbacks and tipping points. When Bob Carter says that climate is homeostatic (or Warren Meyer at climate-skeptic uses his ball-in-a-bowl illustration), they criticize the climate models for being Keynesian. I would think that the Carter/Meyer view of climate is similar to that of Hayek on economic phenomena. Climate is essentially chaotic, having only general empirical regularities. However, it has tendencies towards equilibrium. Please note that Hayek occupies a position close to Keynes on this issue. Walrasian General Equilibrium, with perfect knowledge and instantaneous leaps from one equilibrium to another, is an extreme caricature of more mainstream economics. Here Keynes v. Hayek is more apt for the views on climate.
  2. Keynesians view all the essential features of the economic system as being essentially knowable, capable of being reasonably represented in mathematical models. Hayek calls this a “pretence of knowledge” (the title of his Nobel Prize lecture), as although we may know essential features of the system, the relationships are highly complex and changing. The problem is not just lack of measurement; it is having data that is capable of being modelled in order to make manipulation of these variables possible. In economics, the manipulation is control of the macro economy. In climate, it is to control the global average temperature.
  3. Keynesians believe that a few major measures are sufficient to describe an economy. CAGW theorists believe that the global surface temperature and atmospheric CO2 are key measures. Hayek questioned whether such variables were meaningful. CAGW theorists are on much shakier ground than the Keynesians here. Bob Carter points out in his book that the stored heat in the atmosphere is a tiny fraction of that stored in the oceans. When it comes to stored CO2 the problems are even greater.

But when it comes to the rhetoric of global warming, the analogy should not be with Keynes, but with Karl Marx. Climate models give true scientists perfect insight into the real nature of climate. Those who are on the outside are delusional and/or are, either knowingly or subconsciously, acting as lackeys of the oppressive class. In Marx the oppressive class is the bourgeoisie; in climate alarmism it is Big Oil.

Prof Nir Shaviv Presentation

A couple of blogs (Bishop Hill and Jo Nova) direct you to a short 30-minute presentation by Prof Vincent Courtillot. The preceding presentation by Prof. Nir Shaviv on cosmic ray theory, though more technically advanced, is worth a look, especially if you compare the strength of his argument with the IPCC greenhouse theory.

For the non-scientist, the Shaviv thesis of solar changes explaining the 20th century warming episode is better than the IPCC greenhouse theory as it:-

  • Has some corroborating evidence to suggest that cosmic rays are affecting the climate, along with the extent of that effect.
  • Has a simple computer model that explains most of the twentieth century warming. In particular, the two similar periods of warming from 1915 to 1940 and 1975 to 1998, and the pauses, are all modelled quite well. Using Occam’s Razor (the most succinct hypothesis, or that which needs the fewest assumptions), it beats the anthropogenic greenhouse gas theories. Alternatively, it is a better fit to the data, as AGW only fits the later warming; the early 20th century warming can only be explained by predominantly natural factors.
  • Is happy flicking between the decadal time-scale that he is trying to explain, the geological time-scale of hundreds of millions of years, and the influence of solar flares that last a few days. Neither does he have problems with natural variations.

The IPCC greenhouse gas case does have a number of models that concur. But this can be explained by their having similar assumptions behind them. Indeed, given the strong coherence, it is a weakness that they have such a wide variation in their outputs. The IPCC:-

  • Lacks corroborating evidence, particularly of the tropical tropospheric hotspot.
  • Relies on computer models that are highly complex, rely on a two-stage process (see note below), and have many ad hoc adjustments.
  • Yet these computer models do not tie in very well with the data. To explain the lack of warming in the 1945 to 1978 period and post 1998, you have to resort to an ad hoc inclusion of aerosols. The early-twentieth century warming, so similar empirically, has to have a different explanation.
  • Greenhouse gas theory is uncomfortable with looking beyond the twentieth century. It cannot explain the medieval warm period, hence the amount of backing for the infamous hockey stick, which suggests the twentieth century warming was unusual. Neither can it explain the other natural fluctuations in the current inter-glacial.

An opposite view that Shaviv’s work is insignificant can be referenced at Sourcewatch, a highly pro-AGW site. They state

“While he does believe the earth is warming, he contends that the sun’s rays, rather than human produced CO2, are the cause. But a 2009 analysis of data “on the sun’s output in the last 25 years of the 20th century has firmly put the notion to rest. The data shows that even though the sun’s activity has been decreasing since 1985, global temperatures have continued to rise at an accelerating rate.”

The counter to this is that Sourcewatch is speaking about the wrong thing. Shaviv contends it is cosmic rays emanating from elsewhere in the galaxy that affect cloud cover and, by this means, temperature. Solar winds (determined by solar activity) heavily influence the levels of cosmic rays reaching the earth. A much smaller influence is solar variability itself. Shaviv shows the following slide (at 17 mins) to illustrate the difference in the measured magnitudes.

Note on IPCC Climate models

The IPCC climate models do not just rely on greenhouse gases directly impacting on the temperature to generate global climate catastrophe. This was nicely summarized by Prof Richard Lindzen in his Congressional testimony of November 17th 2010. (Full pdf here, Warren Meyer comments here)

  1. A doubling of CO2, by itself, contributes only about 1C to greenhouse warming (a rough check of this figure follows after this list). All models project more warming, because, within models, there are positive feedbacks from water vapour and clouds, and these feedbacks are considered by the IPCC to be uncertain.
  2. If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1C. The higher sensitivity of existing models is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments.
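
As a back-of-envelope check on the “about 1C” figure in point 1 – a sketch using standard textbook values, not anything from Lindzen’s testimony – the no-feedback response to doubled CO2 comes out just above 1C:

```python
# Radiative forcing for doubled CO2 divided by the Planck (no-feedback)
# response; both values are widely used approximations.
import math

forcing_2xco2 = 5.35 * math.log(2)   # ~3.7 W/m^2 (Myhre et al. 1998)
planck_response = 3.2                # W/m^2 per kelvin, no feedbacks
delta_t = forcing_2xco2 / planck_response
print(round(delta_t, 2))             # ~1.16 C before any feedbacks
```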

Biofuels – a policy that is killing the poor

The GWPF reports on a new paper by Indur M. Goklany, Ph.D., which estimates that biofuels policy may be causing 200,000 additional deaths a year. This is compared to the 141,000 deaths (on a like-for-like basis) that the WHO claims may be attributable to climate change.

This paper understates the comparison as the biofuels estimates are many times more robust than the climate change deaths estimates.

The biofuels element is a direct relationship. As real income increases above $1.25 per day, the quantity of food that people can buy increases. From a mostly subsistence existence, people can trade. The variety and calorific value of food increase. Also, constancy of food supply is assured, as a rapidly shrinking portion is reliant on the local harvest. Push up the real cost of food rapidly and this virtuous growth cycle is reversed.

The aspect of Global Warming comes from page 72 of the WHO World Health Report 2002.

“Climate change was estimated to be responsible in 2000 for approximately 2.4% of worldwide diarrhoea, 6% of malaria in some middle income countries and 7% of dengue fever in some industrialized countries. In total, the attributable mortality was 154 000 (0.3%) deaths and the attributable burden was 5.5 million (0.4%) DALYs. About 46% this burden occurred in SEAR-D, 23% in AFR-E and a further 14% in EMR-D.”

The global warming element comes from

  1. Looking at other elements and relating the impacts to temperature and climate volatility empirically.
  2. Accurately measuring the recent temperature record to show increases in temperature. The warming in recent years may have been overstated due to failure to adjust for the urban heat island effect and possible biases in the calculation.
  3. Correctly relating a proportion of this warming to anthropogenic factors. If it is overstated, then so is the justification for policy to mitigate the climatic effects of that warming.
  4. Accurately measuring the impacts of warming on climate factors such as floods, droughts, sea level rise, extreme heat waves etc.

If any of these issues are overstated individually, then they can significantly reduce the relationship. But compounded, they make the global warming deaths insignificantly different from zero. For instance, the relationship between temperature and malaria is highly controversial and has been dismissed. This might be 10% of the deaths. If the recent rise is only 0.3 degrees, rather than 0.4 degrees, then the mortality impact will reduce more than proportionately. If half the temperature rise is due to anthropogenic factors, then it more than halves the impact. Most importantly, there is the influence on climate variability. If extreme weather has not increased due to global warming – for instance, if the hurricane impacts were based on insurance claims rather than increasing frequency and intensity of storms (they may be decreasing) – then some of the factors are decreased. Let us put a minimal figure on each of these effects. Linking each of the elements to climate change could reduce the attribution by 10% to >90% (say 60%). Measurement of actual AGW reduces it by 20% to 60% (say 40%). Weather variability due to AGW is highly suspect, due to the difficulty of separating it from the highly variable natural variability, so this will reduce the attribution by 50% to >100%; take this as an 80% reduction. The compound effect on attributable deaths is 154,000 × (100% − 60%) × (100% − 40%) × (100% − 80%), which equals around 7,400. In other words, it is statistically insignificant.
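
The compounding above as a calculation (the percentage reductions are my illustrative guesses from the paragraph, not WHO figures):

```python
who_attributed = 154_000
reductions = [0.60, 0.40, 0.80]   # attribution, measured AGW, variability

deaths = who_attributed
for r in reductions:
    deaths *= (1 - r)
print(round(deaths))              # ~7,400 -- statistically insignificant
```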

On the other hand there is no mention of the most direct and beneficial impact of increasing greenhouse gases on the health and well-being of the poorest. Higher CO2 levels are directly related to increased plant growth rates and biomass. That means increased agricultural productivity for free.

The later 2003 WHO report “Climate Change and Human Health – Risks and Responses” used this report’s findings, but had plenty of hidden warnings. For instance the final conclusion was

“The increasing trend in natural disasters is partly due to better reporting, partly due to increasing population vulnerability, and may include a contribution from ongoing global climate change.”

Finally, one must consider that if the global warming estimate is accurate, it is not an either/or comparison. Current climate change policies will not achieve a significant reduction in CO2 levels. So the poor will be hit with extra deaths from both sources.