Bjorn Lomborg on Climate Costs in the Australian

Australian Climate Madness blog points to an article, “Wrong way, go back”, in The Australian newspaper by Skeptical Environmentalist Bjorn Lomborg on Australia’s climate policies. This is my comment.

This statement in the article is significant:

When economists estimate the net damage from global warming as a percentage of gross domestic product, they find it will indeed have an overall negative impact in the long run but the impact of moderate warming (1C-2C) will be beneficial. It is only towards the end of the century, when temperatures have risen much more, that global warming will turn negative.

Now consider the Apocalypse Delayed? posting of March 28th. Referring to an Economist article, it says that a number of empirical studies show that climate sensitivity is much lower than the climate models assume. Therefore, moving into the net cost range seems much less likely.
But why are there net costs? Lomborg’s calculations are based on William Nordhaus’s DICE model, which

calculates the total costs (from heat waves, hurricanes, crop failure and so on) as well as the total benefits (from cold waves and CO2 fertilisation).
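
As a rough illustration of how a cost-benefit model of this kind can show net benefits from moderate warming turning into net costs later, consider the minimal sketch below. The coefficients are invented purely for illustration; the actual DICE equations are far richer.

    # Toy damage function: net impact on GDP (%) for a given temperature rise.
    # The coefficients 0.8 and 0.3 are invented for illustration only.
    def net_impact_pct_gdp(t_rise_c):
        benefits = 0.8 * t_rise_c      # e.g. CO2 fertilisation, fewer cold waves
        costs = 0.3 * t_rise_c ** 2    # e.g. heat waves, hurricanes, crop failure
        return benefits - costs       # positive = net benefit to GDP

    for t in [1, 2, 3, 4]:
        print(f"{t}C rise: {net_impact_pct_gdp(t):+.1f}% of GDP")
    # 1C: +0.5%, 2C: +0.4%, 3C: -0.3%, 4C: -1.6%

Because the costs grow with the square of the temperature rise while the benefits grow only in proportion to it, moderate warming shows a net gain and larger warming a net loss – the shape of result Lomborg describes.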

I would claim that there is very little evidence that rapid warming is destabilising the planet’s climate. Claims in AR4 that hurricanes were getting worse; that some African countries would see up to a 50% reduction in crop yields by 2020; that the Himalayan glaciers would largely disappear by 2035; that the Amazon rainforest could catastrophically collapse – all have been overturned.
Thus the policy justification for avoiding climate catastrophe as a result of rising greenhouse gases is a combination of three components. First, a large rise in temperatures. Second, the resulting destabilisation of the climate system having net adverse consequences. Third, the cost of constraining the rise in greenhouse gases being less than the cost of doing nothing.
It is only this third aspect that Bjorn Lomborg deals with. Yet on that aspect alone he shows that the Australian Government is not “saving the planet for future generations” but causing huge net harm. Policy-making should consider all three components.

That is, there are three components to the policy justification for combating “climate change” by constraining the growth in greenhouse gas emissions:

  1. That there will be a significant amount of global warming.
  2. That this is net harmful to the planet and the people on it.
  3. That the net harm of policies is less than the net harm of warming. To use a medical analogy, the pain and risks of the treatment are less than those of the disease.

Lomborg, using the best cost model available, comes up with far lower costs of global warming than, say, the Stern Review of 2006. He also uses actual policy costs to assess the net harm of climate policies. Lomborg does not, however, challenge the amount of warming from a given quantity of CO2 rise, nor the adverse consequences of that warming. The Economist article and editorial of March 30th conversely challenge the quantity of warming arising from a given rise in CO2, but see it as “apocalypse delayed”, not “apocalypse debunked”.

Kevin Marshall

IPCC’s 1990 Temperature Projections – David Evans against Mike Buckley

The following comments by Mike Buckley (referenced here) are more revealing about the state of climate science than any errors on Evans’s part.


  1. Surface Temperatures v lower tropospheric temperatures.

    As a beancounter (accountant) I like to reconcile figures – that is, to account for the discrepancies. Jo Nova, Anthony Watts and others have found numerous reasons for the discrepancies. The surface temperature records have many “adjustments” that bring reality into line with the models. Whatever excuses you can conjure up, as an accountant I would say that they fail to offer a “true and fair view”.

  2. Trend lines should not start at the origin.

    So you disagree with standard forecasting practice? That is, you start from the current position.

  3. Trend lines should be curved.

    Agreed. The straight lines were used for simplicity. See the next point.

  4. Trend lines should be further apart.

    Are you saying that the climate models have a predictive band wider than 0.75°C over 25 years? If the trend lines were straight, over a century that band could not come to within 3 degrees (see the arithmetic below). If Dr Evans had not simplified, the range would have been much greater.
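
The scaling behind that figure is simple arithmetic: if straight trend lines diverge steadily from a common origin, a band 0.75°C wide after 25 years widens in proportion to the time elapsed, so over a century

    0.75\,^{\circ}\mathrm{C} \times \frac{100}{25} \;=\; 3\,^{\circ}\mathrm{C}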

There is a way of comparing the models with the actuals more precisely. The critical variable is CO2 levels. Therefore we should re-run the models from 1990 with actual CO2 data. By then explaining the variances, we can achieve a better understanding and adjust the models for the future. But there is plenty of evidence that this needs to be done by people who are independent. It will not happen, as the actual rise in CO2 was similar to the highest projections of the time.
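
A minimal sketch of the kind of variance analysis I have in mind, in the spirit of an accountant’s reconciliation, is given below. Every number in it is an invented placeholder; the real exercise would re-run the full models with observed CO2 and compare against the published temperature records.

    # Reconcile a hypothetical 1990 model projection against observations,
    # given that actual CO2 tracked the high-end emissions scenario.
    # All figures are invented placeholders, for illustration only.
    years = [1990, 1995, 2000, 2005, 2010]
    projected = [0.00, 0.15, 0.30, 0.45, 0.60]  # model anomaly (C), re-run with actual CO2
    observed = [0.00, 0.10, 0.18, 0.25, 0.33]   # measured anomaly (C)

    for year, proj, obs in zip(years, projected, observed):
        variance = obs - proj  # the discrepancy an auditor would demand be explained
        print(f"{year}: projected {proj:+.2f}C, observed {obs:+.2f}C, variance {variance:+.2f}C")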

The philosopher of science Karl Popper is remembered for the falsification principle. A less stringent criterion is that progressive science confronts the anomalies and gets predictions ever closer to the data. Pseudo-science closes ranks, makes up excuses, and “adjusts” perceptions of reality to fit the theory. Progressive science is highly competitive and open, whilst pseudo-science becomes ever more dogmatic, intolerant and insular.

Climate Change Damage Impacts – A story embellished at every retelling

Willis Eschenbach has a posting on a recent paper on climate change damage impacts. This is my comment, with hyperlinks and tables.

My first reaction was “Oi – they have copied my idea!”

Well the damage function at least!

https://manicbeancounter.wordpress.com/2011/02/11/climate-change-policy-in-perspective-%E2%80%93-part-1-of-4/

Actually, the idea can be found in the claims of the Stern Review or AR4. Try looking at the AR4 table of “Examples of impacts associated with global average temperature change” and you will get the idea.

A simpler, but more visual, perspective is gained from a slide produced for the launch of the Stern Review.

More seriously, Willis, this is worse than you thought. The paper makes the claim that unlikely but high-impact events should be considered. The argument is that the likelihood and impacts of potential catastrophes are both higher than previously thought. The paper then states:

“Various tipping points can be envisaged (Lenton et al., 2008; Kriegler et al., 2009), which would lead to severe sudden damages. Furthermore, the consequent political or community responses could be even more serious.”

Both of these papers are available online at PNAS. The Lenton paper came out of a group of academics specialising in catastrophic tipping points getting together for a retreat in Berlin. They concluded that these tipping points needed to include “political time horizons”, “ethical time horizons”, and cases where a “significant number of people care about the fate of (a) component”. That is, there is a host of non-scientific reasons for exaggerating the extent and the likelihood of potential events.

The Kriegler paper says “We have elicited subjective probability intervals for the occurrence of such major changes under global warming from 43 scientists.” Is anybody willing to assess whether the subjective probability intervals might deviate from objective probability intervals, and in which direction?

So the “Climate Change damage impacts” paper takes two embellished tipping points papers and adds “…the consequent political or community responses could be even more serious.”

There is something else you need to add into the probability equation. The paper assumes the central estimate of temperature rise from a doubling of CO2 levels is 2.8 degrees centigrade. This figure only arises as a result of assumed strong positive feedbacks. Many will have seen the recent discussions at Climateaudit and wattsupwiththat about the Spencer & Braswell, Lindzen & Choi and Dessler papers. Even if Dessler is given the benefit of the doubt on this, the evidence for strong positive feedbacks is very weak indeed.
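
To see how much work the feedbacks are doing, take the standard textbook framing with rounded values – the roughly 1.2°C no-feedback response to a doubling of CO2 is the conventional figure, and the algebra below is my illustration, not taken from the paper:

    \Delta T \;=\; \frac{\Delta T_0}{1 - f}, \qquad 2.8\,^{\circ}\mathrm{C} = \frac{1.2\,^{\circ}\mathrm{C}}{1 - f} \;\Rightarrow\; f \approx 0.57

On those numbers, well over half of the projected warming comes from the assumed feedbacks rather than from the CO2 itself.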

In conclusion, the most charitable view is that this paper takes an exaggerated view (both magnitude and likelihood) of a couple of papers with exaggerated views (both magnitude and likelihood), all subject to the occurrence of a temperature rise for which there is no robust empirical evidence.

Al Gore’s faulty case for CAGW

Wattsupwiththat have an estimate of Al Gore’s Climate Reality online viewing figures. I posted this at the follow-up article:

manicbeancounter says:

September 22, 2011 at 12:17 pm

I was one of those who stayed for over 5 minutes.

A video there justified the case for global warming by re-doing the CO2-in-a-jar experiment – very nicely as well. Only they did not say what the jar concentrations were compared to the atmosphere (probably more than 1,000 times the current 0.04%).

Then, in the space of a sentence, it mentioned feedbacks amplifying the effect.

So of the warming of up to 6 degrees centigrade this century predicted by the most extreme alarmists, Al Gore’s little video had a flawed experiment to justify the insignificant first 20%, and a “trust the computer models” for the alarming bit.
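
That 20% is the same rounded textbook arithmetic as elsewhere on this blog, not Gore’s own figure: a no-feedback response of roughly 1.2°C set against the 6°C extreme prediction gives

    \frac{1.2\,^{\circ}\mathrm{C}}{6\,^{\circ}\mathrm{C}} \;=\; 20\%

with everything beyond that first slice resting on the modelled feedbacks.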

Don’t take my word for it – I am just a (slightly) manic beancounter. Check it out for yourself at http://climaterealityproject.org/video/climate-101/

By the way – don’t worry about the stats – Alexa currently ranks this site at 16,989 and Climate Reality at 64,734.

REPLY: I have a post coming up on this video, which is online here also, without the need to visit Gore’s site: http://vimeo.com/28991442
-Anthony


Feedbacks in Climate Science and Keynesian Economics

Warren Meyer posts on a parallel between Climate Science and Keynesian Economics. I posted about a subject close to his heart, and central to Keynesianism – feedbacks. I have also attempted an update on the current debate on feedbacks.

Warren

There is a parallel between Keynes and the CAGW that is close to your heart – feedbacks. Pure Keynesianism is that an increase in government expenditure at less than full employment would have a positive feedback response. Keynes called the feedback measure the multiplier. (The multiplier is the reciprocal of the proportion of any extra income that is saved rather than spent. So if people spend 80 cents of each extra dollar of income, a $1bn fiscal boost would increase output by $5bn.)
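
In symbols, using the standard textbook formulation (my notation, for illustration):

    k \;=\; \frac{1}{1 - c}, \qquad c = 0.8 \;\Rightarrow\; k = \frac{1}{0.2} = 5

where c is the marginal propensity to consume, so a $1bn boost raises output by $5bn.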

By the 1950s the leading sceptic was Milton Friedman who, in his 1962 book “Capitalism and Freedom”, estimated empirically that the multiplier was about 1 – that is, a fiscal boost produced no amplification at all. Friedman was denounced as a denier and a dinosaur. (At the same time, mainstream economics adopted his verificationist methodology.) Indeed by the end of the 1960s it was generally agreed that the long-term feedback impact of government demand management was negative, as increased government expenditure crowded out the private sector, caused escalating inflation (as economic actors ceased to be fooled by the false signals of increased expenditure), slowed economic growth and generally undermined the very structures of the capitalist system. (See Friedman’s Nobel Prize lecture “Inflation and Unemployment”.)

Keynesian thinking is that the capitalist economic system is inherently unstable. Stability is only achieved through the guiding hand of government. Keynes contrasted this with a caricature of neoclassical economics, with the macroeconomic system would rapidly come back into equilibrium. Similarly, the climate models assumption of chronic instability is contrasted by an extreme caricature of those who disagree with them. That is the “deniers” are saying that the climate is incredibly stable, with human beings having no influence. In both cases the consequence of this caricaturing is to automatically claim any extreme occurrence as vindification of their perspective.