Freeman Dyson on Climate Models

One of the leading physicists on the planet, Freeman Dyson, has given a video interview to the Vancouver Sun. Whilst the paper emphasizes Dyson’s statements about the impact of more CO2 greening the Earth, there is something more fundamental that can be gleaned.

Referring to a friend who constructed the first climate models, Dyson says at about 10:45:

These climate models are excellent tools for understanding climate, but that they are very bad tools for predicting climate. The reason is simple – that they are models which have very few of the factors that may be important, so you can vary one thing at a time … to see what happens – particularly carbon dioxide. But there are a whole lot of things that they leave out. … The real world is far more complicated than the models.

I believe that climate science has lost sight of what these climate models actually are: attempts to understand the real world, not the real world itself. It reminds me of something another physicist said about fifty years ago. Richard Feynman, a contemporary whom Dyson got to know well in the late 1940s and early 1950s, said of theories:

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then … you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Complex mathematical models suffer from this vagueness in abundance. When supporters of the climate consensus argue that the critics of the models are wrong by citing some simple model and selective data, they are doing what lesser scientists and pseudo-scientists have been doing for decades.

How do you confront this problem? Climate is hugely complex, so simple models will always fail on the predictive front. However, unlike Dyson, I do not think that all is lost. The climate models have had a very bad track record because climatologists have not been able to relate their models to the real world. There are a number of ways they could do this. A good starting point is to learn from others and draw upon insights from varied sources. With respect to the complexity of the subject matter, the lack of detailed, accurate data and the problems of prediction, climate science has much in common with economics, and there are insights on prediction that can be drawn from it.

One of the first empirical methodologists was the preeminent (or notorious) economist of the late twentieth century, Milton Friedman. Even without his monetarism and free-market economics, he would be known for his 1953 essay “The Methodology of Positive Economics”. Whilst not agreeing with the entirety of the views expressed (there is no satisfactory methodology of economics), Friedman does lay emphasis on making simple, precise and bold predictions. It is the exact opposite of the Cook et al. survey, which claims a 97% consensus on climate, implying a massive and strong relationship between greenhouse gases and catastrophic global warming, when in fact it relates to circumstantial evidence for a minimal belief in (or assumption of) the most trivial form of human-caused global warming.

In relation to climate science, Friedman would say that it does not matter whether a model is consistent with the basic physics, nor how elegantly the physics is stated. You could even believe that the cause of warming is the hot air produced by the political classes. What matters is that you make bold predictions based on the models that, despite seeming simple and improbable to the non-expert, nevertheless turn out to be true. However, where bold predictions have been made that appeared improbable (such as worsening hurricanes after Katrina, or the effective disappearance of Arctic sea ice in late 2013), they have turned out to be false.

Climatologists could also draw upon another insight, held by Friedman but first clearly stated by John Neville Keynes (father of John Maynard Keynes): the need to clearly distinguish between the positive (what is) and the normative (what ought to be). But that distinction would alienate the funders and political hangers-on. It would also mean a clear split between the science and the policy.

Hat tips to Hilary Ostrov, Bishop Hill, and Watts Up With That.


Kevin Marshall

Economic v Climate Models

Luboš Motl has a polemical look at the supposed refutation of a sceptic’s arguments. This is an extended version of my comment.

Might I offer an alternative view of item 30 – economic v climate models?

Economic models are different from climate models. They try to model empirical generalisations and (with a bit of theory and a lot of opinion) to forecast future trends. They tend to be best over the short term, when things are pretty much the same from one year to the next. The consensus of forecasts is pretty useless at predicting discontinuities in trends, such as the credit crunch. At their best, their forecasts are little better than the dumb forecast that next period will be the same as last period. In general, the accuracy of economic forecasts is inversely proportional to their utility.
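To make that “dumb forecast” benchmark concrete, here is a minimal sketch in Python of how such a comparison is scored. The growth figures are invented for illustration, not real data; only the method, judging a model against naive persistence, is the point.

```python
# Minimal sketch: scoring a model forecast against the "dumb"
# persistence forecast (next period = last period).
# All numbers are illustrative, not real economic data.

def mean_absolute_error(forecasts, actuals):
    """Average absolute gap between forecast and outturn."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical annual GDP growth rates (%), year by year.
actuals = [2.1, 2.3, 2.0, 2.4, -4.0, 1.8]          # includes a "credit crunch" year
model_forecasts = [2.2, 2.2, 2.1, 2.3, 2.5, 2.0]   # the consensus never sees the crash

# Persistence: forecast each year as a repeat of the previous year's outturn.
persistence = [2.0] + actuals[:-1]

print("Model MAE:      ", round(mean_absolute_error(model_forecasts, actuals), 2))
print("Persistence MAE:", round(mean_absolute_error(persistence, actuals), 2))
# In the quiet years the model barely improves on persistence, and both
# errors are dominated by the discontinuity (-4.0) that neither predicted.
```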

Climate models are somewhat different, according to Dr MacCracken.

“In physical systems, we do have a theory—make a change and there will be a response in largely understandable and calculatable ways. Models don’t replace theory; their very structure is based on our theoretical understanding, which is why they are called theoretical models. All that the computers do is to very rapidly make the calculations in accord with their theoretical underpinnings, doing so much, much faster than scientists could with pencil and paper.”

The good doctor omits to mention some other factors. It might be the case that climate scientists have captured all the major components of the climate system (though clouds are a significant problem), but he omits the problems of measurement. The interplay of complex factors can cause unpredictable outcomes depending on timing and extent, as well as on the theory. The climate models, though they have a similarity of theory and extent, come up with widely different forecasts. Even this variation is probably limited by sense-checking the outcomes and making ad hoc adjustments. If the models are basically correct, then major turning points should be capable of being predicted. The post-1998 stasis in temperatures, the post-2003 stability in sea temperatures and the decline in hurricanes after Katrina are all indicators that the models are overly sensitive. The novelties that the models do predict tend not to occur, while the novelties that do occur are not predicted.

If climate models are still boldly proclaiming a divergence from trend whilst economic models are much more modest in their claims, is this not an indicator of the climate models’ superiority? It would be, if one could discount the various economic incentives. Economic models are funded by competing institutions, some private sector and some public sector. For most figures there is forecast verification monthly (e.g. inflation, jobs) or quarterly (growth). If a model were consistently an outlier it would lose standing, as the forecasts are evaluated against each other. If it were more accurate, the press would quote it, which is good name placement for the organisation. For global warming forecasts there is not even an annual verification. The incentive is either to conform, or to provide more extreme (“it is worse than we thought”) prognostications. If the model projections basically said “chill out, it ain’t that bad”, the authors would be ostracized and called deniers. At a minimum the academics would lose status, and ultimately lose out on the bigger research grants.

(A more extreme example is a forecast of a major earthquake. “There will not be one today” is a very accurate prediction. In the case of the Tokyo area over the last 100 years, that forecast would have been wrong only twice, an error rate of less than 1 in 10,000.)
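As a back-of-envelope check of that claim, using only the figures quoted above (a century of daily forecasts, wrong twice):

```python
# Back-of-envelope check of the "no earthquake today" forecast,
# using the figures quoted above (100 years, wrong only twice).
days = 100 * 365.25          # roughly 36,525 daily forecasts
wrong = 2
error_rate = wrong / days
print(f"Error rate: {error_rate:.6f} (about 1 in {days / wrong:,.0f})")
# => roughly 1 in 18,000, i.e. an error rate comfortably below
#    1 in 10,000; yet the forecast is completely useless.
```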

Oppenheimer – False prophet or multi-layered alarmist?

Haunting the Library has a posting “Flashback 1988: Michael Oppenheimer Warns Seas to Surge 83 Feet Inland by 2020“.

Apart from being, in retrospect, a false and alarmist forecast, even if the climate models on which it was based in 1988 were correct and unbiased, the forecast scenario could still have had less than a 1 in 1,000 chance of occurring. Here is why.

The relevant quote from the “Hour” newspaper is

“Those actions could force the average temperature up by 2 degrees Fahrenheit in the next three decades….Such a temperature increase, for example, would cause the sea level to rise by 10 inches, bringing sea water an average of 83 feet inland”

There are possibly three or more levels of alarmism upon which this conclusion depends:

  1. The sea level rise was contingent on a 2°F (1.1°C) temperature rise over 32 years, which would have been at the top end of forecasts. Although this implies a centennial rate of increase of around 3.5°C, my understanding of the climate models is that it is not just global temperatures that are projected to rise, but also the decadal rate of increase in temperatures, consistent with the accelerating rate of CO2 increase; so the early decades should warm more slowly than the centennial average. Normally the range of projections covers a 95% probability band, so the models would have given only a 2.5% chance of a temperature increase this large.
  2. The rise in sea levels would lag air temperature rises by a number of years, due to the twin primary causes of sea level rise: thermal expansion of the ocean and melting pack ice. I would therefore suggest a combination of three reasons for this projection. First, the projected 10-inch (25cm) rise was exaggerated due to faulty modelling (IPCC AR4 of 2007 estimates a centennial rise of 30cm to 60cm, with accelerating rates of sea-level rise correlating with, but lagging, temperature rises). Second, it was at the top end of the forecast probability range, so there was just a 2.5% chance of the sea level rise reaching this level even for a 2°F temperature rise. Third, time lags were not fully taken into account.
  3. The mention of an average horizontal movement of sea water 83 feet (25m) inland is simply to spread alarmism. For low-lying populated coastal areas, such as Holland, it probably assumes the non-existence (or non-maintenance) of coastal defences. The calculation may also assume that land levels do not naturally change. In the case of the heavily populated deltas and the coral islands, this ignores natural processes that have caused land levels to rise with sea levels.

So it could be that, based on the climate models of 1988, there was a 2.5% chance of a 2.5% chance of sea levels rising by 10 inches in 32 years, subject to the models being correct. There are a number of reasons to suspect that the models of climate and sea level rise are extreme. For instance, the levels of temperature rise rely on extreme estimates of the sensitivity of temperature to CO2 and/or of the feedback effect of temperature increases on water vapour levels (see Roy Spencer here). Sea level rises were probably overstated, as it was assumed that Antarctic temperatures would rise in parallel with those of the rest of the world. As 70-80% of the global pack ice is located there, the absence of warming on the coldest continent will have a huge impact on future sea level forecasts.
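For what it is worth, the compound probability claimed above can be checked directly, assuming (as the argument implicitly does) that the two 2.5% tail events are independent:

```python
# Sketch of the compound-probability claim: a 2.5% chance of a 2.5% chance,
# assuming the two tail events are independent.
p_temperature_tail = 0.025   # 2 degF rise at the top of the 95% range
p_sea_level_tail = 0.025     # 10-inch rise at the top of its range

joint = p_temperature_tail * p_sea_level_tail
print(f"Joint probability: {joint:.6f} (about 1 in {1 / joint:,.0f})")
# => 0.000625, roughly 1 in 1,600; consistent with the
#    "less than 1 in 1,000" claim made at the start of this post.
```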

Although this forecast was made by a climate scientist, it was not couched in the nuanced terms that empirical scientific modelling requires. But it is on such statements that policy is made.

Showing Warming after it has Stopped

Bishop Hill points to an article by Myles Allen that

“examines how predictions he made in 2000 compare to outturn. The match between prediction and outturn is striking…..”

Bishop Hill points out that this uses HADCRUT decadal data. Maybe a quick examination of the figures will reveal something? Using the HADCRUT3 data, here are the figures for the last five decades.

This shows that the decadal rate of warming has been rising at a pretty constant rate for the last three decades. So all those sceptics who claim that global warming has stopped must have got it wrong then?

Let us examine the data a bit more closely.

The blue line shows the HADCRUT annual anomaly figures from 1965 to 2010. The smoother red line is the 10-year average anomaly, starting with the 1956-1965 average and finishing with the 2001-2010 average. The decadal averages are highlighted by the red triangles.

The blue line would indicate to me that there was a warming trend from 1976 to 1998, and that since then it has stopped. This is borne out by the 10-year moving average, but (due to the averaging) the plateau arrives five years later. The story from the decadal figures is different, simply due to timing.

So what scientific basis is there for using the decadal average? Annual data seems reasonable, as it is the time it takes the Earth to make one orbit of the Sun. But the calendar is fixed where it is because, 1,500 years ago, Dionysius Exiguus devised a calendar with a mistaken estimate of the birth (or conception) of Jesus Christ as Year 1, and we count in base 10 possibly due to the number of fingers we have. Both are human artefacts. Further, the data is actually held in months, so it is only due to the Christian calendar that decades run from January to December. Of the 120 possible start months for a decadal average, Myles Allen's choice of calendar decades therefore reflects a cultural convention, a very human bias, rather than anything selected from the real world.
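To illustrate the arbitrariness, here is a sketch using a synthetic monthly series (a linear warming trend to 1998 followed by a plateau, standing in for the real HADCRUT3 data), showing how the decadal averages shift with the choice of start month:

```python
# Sketch: how the *start month* of a "decade" changes the decadal averages.
# Synthetic monthly anomalies are used here, not the real HADCRUT3 series:
# a linear warming trend to 1998, then a plateau (the pattern described above).
import numpy as np

months = np.arange(1965, 2011, 1 / 12)                      # monthly time axis
anomaly = np.where(months < 1998, 0.02 * (months - 1978), 0.4)

def decadal_means(series, offset):
    """Average over successive 120-month blocks, starting `offset` months in."""
    s = series[offset:]
    n_decades = len(s) // 120
    return [s[i * 120:(i + 1) * 120].mean() for i in range(n_decades)]

for offset in (0, 30, 60, 90):                              # four of the 120 choices
    means = [round(float(m), 2) for m in decadal_means(anomaly, offset)]
    print(f"offset {offset:>2} months: {means}")
# The decadal means, and hence the apparent decade-on-decade warming rate,
# shift with the arbitrary choice of start month.
```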

How does this affect the analysis of the models’ performance? The global temperature averages showed a sharp uptick in 1998. Therefore, if the models simply predicted a continuation of the trend of the previous twenty years, they would have been quite accurate. In fact the prediction was higher than the outturn, so the models overestimated. It is only by exploiting the arbitrary construct of decadal data that the difference appears insignificant. Drop to a 5-year moving average and you will get a bigger divergence. Wait a couple of years and you will get a bigger divergence. Use annual figures and you will get a bigger divergence. The result is not robust.
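A sketch of that robustness check, again on a synthetic stand-in for the HADCRUT3 series rather than the real data: fit a post-1998 trend at several smoothing windows and watch the divergence from a continued-warming prediction appear as the smoothing is reduced.

```python
# Sketch of the robustness check suggested above: fit a trend to the
# post-1998 tail of a synthetic series at several smoothing windows.
# Synthetic data standing in for HADCRUT3, not the real series.
import numpy as np

years = np.arange(1965, 2011)
annual = np.where(years < 1998, 0.02 * (years - 1978), 0.4)  # warming, then flat

for window in (1, 5, 10):                     # annual, 5-year, 10-year smoothing
    kernel = np.ones(window) / window
    smoothed = np.convolve(annual, kernel, mode="valid")
    tail = smoothed[-13:]                     # final 13 smoothed points (~1998 on)
    slope = np.polyfit(np.arange(len(tail)), tail, 1)[0] * 10
    print(f"{window:>2}-year smoothing: post-1998 trend = {slope:+.3f} per decade")
# Annual data shows the post-1998 plateau plainly (trend ~0); heavier
# smoothing drags earlier warming years into the average and inflates
# the apparent recent trend.
```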