Reply to Hengist McStone’s “Climate Truthers and 9/11 Skeptics”

At his Heretics Corner blog, Hengist McStone has a posting “Why can’t we have climate truthers and 911 skeptics?” My comment, which I am about to submit, is:-

Your statement that

“running through the heart of climate skepticism is the belief that truth about climate science has been suppressed”

is a new one on me. Major climate sceptic blogs (WUWT, Jo Nova, BishopHill) do not see a hiding of the truth, but argue that a lot of spurious claims are based on very little evidence, and that prophecies fail to come true. They also point to other ways of looking at the data. They would agree that the public is being misled, but this is about the quality of the science, and ultimately the very definition of what is called “science”.

There is a huge weight of evidence for 911 being an act of al-Qaeda terrorism, with no assistance from the CIA. Similarly, there is a huge weight of evidence for millions of Jews being killed in the Holocaust, and for the average adult smoking 60 cigarettes a day from age 18 living a much shorter and unhealthier life than the average adult who never inhales a single lungful.

Analogy with these different strongly-supported propositions can be in three areas. The first is based on the number of expert supporters of a proposition. The second is showing that there is similarly very strong evidence. The third is showing that techniques and standards from other areas are utilized.

Use of the first area is attempting to gain credibility by association. The second area would make analogy and name-calling superfluous. The third area is contradicted by claims that only expert climate scientists can divine the real truth.

Perhaps another analogy would help. Suppose that a popular and charismatic celebrity is accused of raping young children. Despite the overwhelming evidence showing that person’s guilt, the accused vehemently denies the charges, and many who idolise that person make all sorts of spurious claims about the evidence and the victims. What would be the best course of action?

  1. Dispense with a trial due to the overwhelming evidence, then deny a voice to those who, not believing that their idol is guilty, question the evidence. Furthermore, mount a propaganda campaign against “evidence deniers” and “supporters of paedophilia”.
  2. Have a fair trial, even funding the defence, so that people can see the evidence being presented and challenged. If the evidence is overwhelming, the idolizers will be silenced.

I would suggest that the first course of action is taken by those whose dogmatic belief in their being right is based upon very little evidence. Widely applied, it would undermine people’s faith in the ability of the court system to achieve justice, thereby undermining the rule of law. Widespread practice would result in highly repressive regimes, often with discrimination against sections of the community, in particular anyone who challenges orthodoxy. The second approach might sometimes result in the guilty being found not guilty on a technicality, or being found guilty of lesser crimes. But pursuit of the highest standards will win over the doubters and gain support for the rule of law. This is the thinking that led to the development of trial by jury in Anglo-Saxon England. If you give people a fair and open trial, then others will trust authority. If you let a ruler or appointed expert divine the truth, then even when they consistently get decisions right there will be distrust. If they are perceived to get things wrong, or the process is hidden from public view, then distrust will grow.

In a similar fashion, supporters of climatology are making a massive public relations blunder. Rather than engaging in open debate and encouraging people to analyse the differing arguments, they make false analogies, misrepresent their opponents, and discourage people from questioning or comparing differing points of view.

Tram to East Didsbury – a net detriment to Society

Today a 2.7 mile tram link was opened. The plush new trams will transport people to the centre of Manchester in 25 minutes. For those who normally take the car, there is a large car park, meaning people do not have to endure up to sixty minutes of heavy traffic congestion and then face exorbitant city-centre parking charges. For people normally taking the bus, there is a similar time saving, and a much more pleasant ride. Further, the route goes via Chorlton, which is otherwise served only by a painfully slow bus at any time of day. People who do not use public transport now will start doing so. People who already use public transport have a much superior option.


So how on earth, with all these positives, can I claim that this fantastic new tram is to the net detriment of society?

Within 400 metres of East Didsbury tram station is East Didsbury railway station. The train to Manchester takes 12 minutes, half the time of the tram. The off-peak train fare is around £2.50, against the £3.80 tram fare I was quoted at 8pm when I took the photograph.

The bridge in the background is the A34 Kingsway. Less than 50 metres away is a stop for the 50 bus into Manchester. 100 metres left of shot is East Didsbury bus station. In between, on Didsbury Road, is a stop for buses from Stockport going through towards Altrincham and Chorlton, and also north through Didsbury, Withington and Rusholme into Manchester. The Withington to Manchester bus corridor is probably the busiest in Europe, mostly due to serving two large university campuses. The tram route is useless to the students.

Then there is the cost. A weekly ticket on the tram to Manchester is £21.00. An annual ticket is £800.00. This compares with £12.50 for a weekly bus ticket. For students, Stagecoach (the dominant bus company) provides an unlimited travel ticket for £60.00 a term, equivalent to £5.00 to £7.50 per week.

But the biggest costs are to society as a whole. On 10th December 2008 there was a referendum in Greater Manchester on a package of measures to improve public transport, paid for by a congestion charge. There was a huge, publicly-funded website called “wevoteyes.co.uk”. Although the website is now suspended, thanks to the Wayback Machine the claims can be charted. Most important amongst the claims was


FACT: There’s no Plan B. If we vote NO in December the money goes back to Government, all £3 billion of it.

I wrote on 09/11/08, a full month before the referendum:-

…..it is a false statement, as the total investment is less than £2.8bn, including £313m for the congestion charge investment. The central government is only providing £1.5bn of this; £1.2bn is to be funded by the congestion charge and £100m is from other sources. The full £2.8bn includes contingencies, so will only be “achieved” if there is an overspend.

Anyway, the full and permanent withdrawal of the £1.5bn funding may not occur. The Prime Minister, in a response to a question tabled by Manchester Withington MP John Leech, said “If Greater Manchester came back with a revised proposition, we would need to assess it on its merits.”

Just six months later the decision was made to go ahead with the scheme. Maybe you can excuse a Government trying to cajole people into deciding in the general interest. But they commissioned a detailed study which relied upon a congestion charge to force large sections of the community from private to public transport. That the people forced to switch would be those currently only just able to afford the luxury of private transport, or that the figures ignored empirical economic evidence that undermined their case, are beside the point. What is important is that the wider economic validity of the case for an expanded Metrolink to East Didsbury relied upon the “stick” of the congestion charge. Without that “stick”, the costs of the Metrolink extensions are significantly greater than the benefits, so society as a whole is worse off than if the money had never been spent.

The Labour Party would have known this if they had read and interpreted the report they commissioned. Yet the spin doctors put Labour Party interest before the interest of the wider society and ploughed on regardless. If a GP had done this in regard to a patient, they would have been struck off. If a business director had put personal interest before the interests of the company, they would have been disqualified, with all costs falling upon them personally. But when a political party tries to hang onto power by favouring voters in areas where they are strong, or marginals where they are a close second, at the expense of the country at large, this is not viewed as a moral issue. I beg to differ. Political decisions have wide implications. Our political masters should seek the net betterment of society as a whole.
In the case of the Metrolink extensions we have a lovely service that will never justify the original outlay of £1.6 billion. The long-term passenger revenues may not even cover the operating costs, whilst custom taken away from the trains will increase rail subsidies and/or reduce train services in the area, and bus services that currently run without subsidy will be diminished.

http://www.manchestereveningnews.co.uk/news/greater-manchester-news/first-passengers-travel-tram-extension-4004861

Suyts on Krugman

Suyts quite rightly criticizes Paul Krugman over the Nobel Laureate’s latest ramblings. However, his analysis misses a couple of issues. This is an extended comment.

You are quite right on two issues here, which I believe have been called the ratchet effect and the debt servicing impact.

The first is that it is easy to increase government expenditure, but much more difficult to scale it back as there are entrenched interests to stop the scale back. It is easy to give people welfare benefits or create jobs. But try to take these away and people will fight like crazy to keep them.


The second on debt servicing you demonstrate very well. As total debt goes up, so does the interest on that debt.

There are other issues that should be taken into consideration on the deficit and debt problem.

The first issue is the size of government. When an economy enters recession, tax receipts fall and expenditure rises. Corporation tax is the first area to go down, followed by income tax as unemployment rises. In expenditure terms, welfare payments will rise, along with (possibly) business bail-outs. With small government, taxing little and spending little, this impact is small. With large government (in Britain rising to nearly 50% of GDP) this effect is large. A 6% decline in GDP perhaps increased the deficit by 6-7% of GDP. Under the Eisenhower administration, a similar decline would have worsened government finances by maybe 2% of GDP. Big government exacerbates the size of cyclical swings.

The second issue is the position at the start of the downturn. In mid-2008 both the USA and Britain had structural deficits: in the USA to finance the wars in Iraq and Afghanistan, in Britain to finance a huge increase in public sector pay and capital spending on schools and hospitals. A structural deficit is the average government deficit over the course of the business cycle. In Britain at the top of the cycle, the actual deficit was around 3% of GDP, with a planned rise to 4%. The structural deficit was probably greater than 4% of GDP in mid-2008 in Britain, and maybe slightly smaller in the USA. Below is my estimate, from April 2010, of the impact of Britain’s structural deficit. I estimated that the structural deficit built up between 2001 and 2008 would in the long term increase National Debt by 40% of GDP. I was overly optimistic in my assessment.


The third issue is with the classical Keynesian Multiplier. Crude textbook Keynesianism of the 1960s for a closed economy stated

E = C+I+G

Or national expenditure is the sum of Consumption, Investment and Government expenditure.

The theoretical impact of increasing government expenditure on total output, when the economy is at less than full employment, is Y/G. If government expenditure is 10% of national income, then increase G by $1 and Y will increase by $10. If government expenditure is 40% of national income, then increase G by $1 and Y will increase by $2.50. However, crudely put, if the government expenditure does not take up slack in the economy (a deficit in aggregate demand), then (in an inflation-free economy) the government expenditure “crowds out” private expenditure. To put it another way: if the economy is not “stuck in a rut”, as Keynes assumed in his “General Theory”, but merely reacting to overinvestment (such as a housing bubble), then increased government expenditure will have no effect on total output, but will “crowd out” other expenditure. It will also add to the nominal national debt without adding to total national income, thereby increasing national debt as a percentage of national income, rather than expanding national income, increasing tax revenues and so closing the deficit.
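The multiplier arithmetic above can be sketched in a few lines. This uses the post's own Y/G framing (not the textbook 1/(1 − MPC) multiplier), and `output_gain` is an illustrative name of my own, not a standard function.

```python
# Sketch of the crude multiplier arithmetic above: under the Y/G framing,
# an extra $1 of government spending G raises total output Y by Y/G when
# the economy is at less than full employment.

def output_gain(g_share_of_income: float, extra_g: float = 1.0) -> float:
    """Rise in national income Y from `extra_g` of extra government
    spending, when G is `g_share_of_income` of Y and the economy has slack."""
    return extra_g / g_share_of_income

print(output_gain(0.10))  # G is 10% of Y: $1 of extra G raises Y by $10
print(output_gain(0.40))  # G is 40% of Y: $1 of extra G raises Y by $2.50
```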

The fourth issue is fiscal tipping points. If the increased government expenditure fails to stimulate the economy, then the result will be a larger structural deficit. If, like some European countries, there is a further contraction, then the deficit will increase. In Greece, Spain, Italy and Portugal, this further downturn has led to increased economic risk, pushing up interest rates. This increases short-term debt costs, further increasing the deficit. The only way to stem total collapse is to massively cut public expenditure and increase taxes, not only to pay the debt-financing costs but to rapidly cut the deficit as well. In climate change there has been much spurious talk about possible tipping points in the remote future if certain things come true. But in the OECD economies, with some already having gone beyond their fiscal tipping points, many (including Krugman) seem oblivious to the possibility. Should we not use a smidgeon of the precautionary principle in economics, proclaiming austerity as an insurance against severe depression?

Kevin Marshall


Help Launch Climate Skeptic Film Project: 50 to 1

This looks like a very interesting project that mirrors my own thoughts. Take the UNIPCC “projections” of a future catastrophe – including all the nonsense about falling crop yields in Africa, collapsing polar ice caps, etc. – and you still have not got anything like the justification for policy.
If you then add the additional costs of ineffectual implementation, poor policy-making and other “policy” that economists would not recommend (e.g. bio-fuels), then the 50 to 1 ratio balloons. The ratio also grows if the “science” turns out to be too extreme. If, for instance, a doubling of CO2 leads to just 1.5 degrees of temperature rise instead of 3 degrees, then the catastrophic consequences will not just halve, but be many times smaller. If the catastrophic consequences of a given amount of warming have been overstated (such as storms becoming not more extreme, but less, as Hansen and Lindzen now agree), then it becomes worse still. If it turns out that a small amount of warming is of net benefit to the planet (as it will be in Northern Europe) and/or that higher CO2 levels are of net benefit to the planet, then the whole exercise turns from incurring costs to prevent future costs into incurring costs to prevent a benefit from happening.

James Hansen favouring Richard Lindzen over IPCC

Much has been made of James Hansen’s recent claim, in a YouTube video, that runaway global warming will make the oceans boil. However, people have not picked up on an earlier point, where the father of global warming alarmism clearly contradicts the consensus.

In the first minute of the clip, Hansen talks about the impact of ice sheets disintegrating in the polar regions. All this extra cold fresh water decreases ocean temperatures. This, in turn, increases the temperature gradient between the poles and the tropics. This, in turn, increases the strength of storms.

If Hansen looks at his own GISTEMP figures for global average temperatures, he will notice that the warming has been higher in the Arctic than in the tropics. According to the UNIPCC in 2007, the fastest warming in this century will be in the Arctic. I propose that cooling of the Arctic Ocean will have two effects. First, it will counterbalance the most extreme warming of the planet, thereby reducing the total temperature rise; it will also counterbalance some of the rise in regional temperatures, so reducing the Greenland ice melt and slowing the reduction in sea ice. Second, it will reduce the impact of extreme storms. If melting ice cools the oceans, it is a negative feedback.



Sources of the boiling oceans comment are:-

WUWT comments 2 and 3 by Eric Worrall

http://carbon-sense.com/ on April 13th 2013

C3 Headlines


China’s Renewable Policy in Context – The Ningxia Example

China has been lauded for an aggressive renewable policy, particularly for wind turbines. When you next hear praise for this policy, consider the example of the Ningxia Hui Autonomous Region in mid-China. Wind turbines are being developed here, but only in the context of massive industrial development. The primary motive for the industrial development in this area is coal. For instance:

Sun Mountain has something China needs very badly to feed the thundering beast of its economy: 14.6 billion tons of coal reserves lying under its rocky, arid desert. There are also 5 billion tons of limestone, nearly 2 billion tons of dolomite, and – a modern touch this – 300 days of wind power per year. But there is no doubt that King Coal, a tyrannical monarch who has devoured land and lives in Ningxia for the past 50 years, rules Sun Mountain. If China is to quench its thirst for electricity and industrial chemicals the old king will be on the throne for many years to come.

The scale of the development is seen from another, 2008, article.

Shenhua Ningxia Coal Industry Company….. has begun construction of a 1000 square kilometer coal-chemical complex in northwest China’s Ningxia province. The 280 billion yuan (40 billion USD) project, located at Ningdong, 42 kilometers southeast of provincial capital, Yinchuan, will include coal production, electricity generation and coal chemicals, including coal to liquid fuel conversion (CTL). (Italics mine)

The coal will be partly used for power generation.

By the time the base is fully operational in 2020 it will have eight power plants with a capacity of 30 million KW.

That is eight power plants in one small region, each bigger than anything in Britain. But why develop coal to liquid fuel conversion?

With China’s crude oil imports rising 12.3 percent to 163.17 million tons in 2007, and the price of oil reaching $140 a barrel in 2008, one of the most keenly watched facilities in the Ningdong base will be its coal to oil conversion plants.

As of 2013, one of these plants is already in operation, and should be producing the equivalent of 70,000 barrels per day (bpd) if the mid-2006 forecasts were correct. The other is being constructed, with a capacity of over 90,000 bpd. Although these two plants will only provide the equivalent of 4% of the 163.17 million tonnes imported in 2007, China has huge reserves of coal. Further, Ningxia is one of just 30 main coal producing areas.
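As a rough sense check on the "4%" figure above, the calculation below converts the two plants' combined output into tonnes. The 7.33 barrels per tonne conversion factor is an assumed average for crude oil (it varies by grade), so the result is only indicative.

```python
# Rough sense check on the share of 2007 crude imports covered by the
# two Ningdong CTL plants. BARRELS_PER_TONNE is an assumed average.

BARRELS_PER_TONNE = 7.33
ctl_bpd = 70_000 + 90_000            # the two plants, barrels per day
ctl_tonnes_per_year = ctl_bpd * 365 / BARRELS_PER_TONNE

imports_2007_tonnes = 163.17e6       # 2007 crude imports from the article
share = ctl_tonnes_per_year / imports_2007_tonnes
print(f"{share:.1%}")                # in the region of the quoted 4%
```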

This 2008 article admits to drawbacks of CTL.

Coal liquefaction projects have many drawbacks from the point of view of the environment and resource conservation. Firstly they consume vast amounts of water, which is a huge concern in China’s dry northwest. Fifty-seven percent of the land area of Ningxia is desert. The Ningdong coal-chemical base will draw 100 million tons of water from the Yellow river every year. Secondly, the process of liquefying coal emits much more carbon dioxide than conventional coal fired power stations. When fully operational, the Ningdong base will discharge 80,000 cubic meters of Carbon Dioxide (CO2) per day …….. Finally, while liquefied coal fuels provide an alternative to crude oil, they are not necessarily an efficient use of coal. It takes four to five tons of coal to produce one ton of oil, so coal to oil projects deplete coal reserves much more rapidly than conventional coal power generation.

Therefore, China’s rush into renewables should be seen as just a small part of the general industrialisation of China, whilst minimising dependence on external energy sources. The eco-image, such as support for Earth Day and kite tournaments, is just to keep the environmentalists from trying to sabotage China’s rush to western levels of prosperity for 1,300 million people.

Velicogna 2009 and Chen et al 2009 on Acceleration in Antarctic Ice Melt

This blog post started out as some musings on the different ways of measuring changes in the mass of Antarctic land ice, as a follow-up to a couple of comments on Jo Nova’s posting “Antarctica gaining Ice Mass — and is not extraordinary compared to 800 years of data.” The problem with those comments is that they look at just part of the total ice mass balance. This led me to the major papers that looked at total mass balance. There are two from 2009, using early data from the GRACE satellite gravity mission: Velicogna and Chen et al. In comparing the various estimates, I discovered three anomalies that should have been detected as part of the peer review process.

Error in Velicogna Summary

The abstract notes

In Greenland, the mass loss increased from 137 Gt/yr in 2002–2003 to 286 Gt/yr in 2007–2009, i.e., an acceleration of −30 ± 11 Gt/yr2 in 2002–2009. In Antarctica the mass loss increased from 104 Gt/yr in 2002–2006 to 246 Gt/yr in 2006–2009, i.e., an acceleration of −26 ± 14 Gt/yr2 in 2002–2009.

When I tried to replicate this for Greenland, the figures worked out. Starting with 122 Gt/yr of ice loss in 2002 and adding 30 Gt/yr to the loss each year gives the “137 Gt/yr in 2002–2003 to 286 Gt/yr in 2007–2009“. But for Antarctica, adding 26 Gt/yr to each year’s loss cannot give “the mass loss increased from 104 Gt/yr in 2002–2006 to 246 Gt/yr in 2006–2009“. However, if the statement is rephrased with the Greenland timescales as “the mass loss increased from 104 Gt/yr in 2002–2003 to 246 Gt/yr in 2007–2009” then the numbers work out.


The spreadsheet is easy to construct. For Velicogna’s Antarctica figures, start with -90 in 2002 and make each subsequent year 26 lower than the preceding one. The average uses the “=AVERAGE()” function in Excel.
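The spreadsheet replication can equally be sketched in a few lines of Python; the -90 Gt/yr start value and 26 Gt/yr per year acceleration are the figures quoted in the post, and the function names are mine.

```python
# Replicating the spreadsheet: for Velicogna's Antarctica figures the
# annual mass change dM starts at -90 Gt/yr in 2002 and each year is
# 26 Gt/yr lower than the last, i.e. dM = -90 - 26*(year - 2002).

def rate(year: int) -> float:
    """Annual mass change (Gt/yr) for a given year."""
    return -90.0 - 26.0 * (year - 2002)

def mean_rate(first: int, last: int) -> float:
    """Average annual mass change over the years first..last inclusive,
    i.e. what Excel's =AVERAGE() returns for that column."""
    years = list(range(first, last + 1))
    return sum(rate(y) for y in years) / len(years)

print(mean_rate(2002, 2006))  # -142.0, the corrected "142 Gt/yr"
print(mean_rate(2006, 2009))  # -233.0, the corrected "233 Gt/yr"
print(mean_rate(2007, 2009))  # -246.0, the quoted "246 Gt/yr in 2007-2009"
```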

So why did this dating error occur? There is no apparent reason in the Velicogna paper to use two different averages over such a short time frame. I might suggest that there is another reason. The two papers were published weeks apart (Velicogna 13th Oct and Chen 22nd Nov) and used the same data for Antarctica over similar periods (Velicogna Apr 02 – Feb 09 and Chen Apr 02 – Jan 09). The impact of both would be enhanced if they had comparative statistics. For instance Zwally & Giovinetto 2011 state

Table 2 includes two GRACE-based mass loss estimates of 104 Gt/year (Velicogna 2009) and 144 Gt/year (Chen et al. 2009) for the period 2002–2006 and two estimates of 246 Gt/year (Velicogna 2009) and of 220 Gt/year (Chen et al. 2009) for the period 2006–2009.

Correcting Velicogna, it becomes

Table 2 includes two GRACE-based mass loss estimates of 142 Gt/year (Velicogna 2009) and 144 Gt/year (Chen et al. 2009) for the period 2002–2006 and two estimates of 233 Gt/year (Velicogna 2009) and of 220 Gt/year (Chen et al. 2009) for the period 2006–2009.

That is, the two papers become far more consistent if the averages are corrected. It would appear that Velicogna changed the dates without doing the maths.

Form of the acceleration

Velicogna states in the abstract

We find that during this time period the mass loss of the ice sheets is not a constant, but accelerating with time, i.e., that the GRACE observations are better represented by a quadratic trend than by a linear one, implying that the ice sheets contribution to sea level becomes larger with time.

This quadratic trend is backed up by graphs on the NASA (Antarctica) and NOAA (Greenland) websites.


For ice melt, Velicogna is stating not only that each year’s loss would be greater than the previous year’s, but that the incremental increase would be greater than the last.

But if ∂M is the annual change in ice mass, the following functions were used in my spreadsheet to replicate both Velicogna’s and Chen’s results.

For Velicogna 2009, Antarctica

∂M = -90 – 26(Year-2002)

For Velicogna 2009, Greenland

∂M = -122 – 30(Year-2002)

For Chen et al. 2009, Antarctica

∂M = -126 – 17(Year-2002)

These are all linear functions. I do not have access to Chen’s paper, but Velicogna’s abstract does not conform to her model.

Discontinuous functions in Chen et al. 2009

The abstract for Chen states

… our data suggest that East Antarctica is losing mass, mostly in coastal regions, at a rate of −57±52 Gt yr−1, apparently caused by increased ice loss since the year 2006.

Chen’s detection of increased ice loss is similar to Velicogna’s. But unlike Velicogna, Chen suggests a discontinuous function. In other words, Chen’s graph would look like this.


Although it is possible to extrapolate from a discontinuous function, it would be highly misleading to do so. It suggests there is no underlying empirical relationship to be observed, in direct contradiction to Velicogna. Further, over a short period it is impossible to say whether this is a shift in the underlying rate of change in Antarctic melt, or whether the new direction will be quickly reversed. Fortunately, the two studies were published over three years ago, so there are alternative studies to compare the projection against. This will be the topic of the next post.

J. L. Chen, C. R. Wilson, D. Blankenship & B. D. Tapley Nature Geoscience 2, 859 – 862 (2009) Published online: 22 November 2009 doi:10.1038/ngeo694

Velicogna, I. (2009), Increasing rates of ice mass loss from the Greenland and Antarctic ice sheets revealed by GRACE, Geophys. Res. Lett., 36, L19503, doi:10.1029/2009GL040222

H. Jay Zwally, Mario B. Giovinetto (2011) Surveys in Geophysics September 2011, Volume 32, Issue 4-5, pp 351-376, Overview and Assessment of Antarctic Ice-Sheet Mass Balance Estimates: 1992–2009 10.1007/s10712-011-9123-5

Two Comments on Antarctic Ice Accumulation

Jo Nova blogs on a study that claims the Antarctic continent is accumulating ice mass at a rapid rate. I have made two comments. One is opposing someone who claims that Antarctica is actually losing ice. The other is that the claimed rate of ice accumulation does not make sense against known data on sea levels.

Manicbeancounter

April 17, 2013 at 6:27 am · Reply

John Brooks says

I’m also interested that the mass of antarctic land ice follows solar irradiance. This makes perfect sense. However I can’t see why the effective of an increase in the greenhouse effect wouldn’t have exactly the same result.

Maybe you should look at the period covered by the graph, John. There is an 800-year correlation of the mass of Antarctic land ice with solar irradiance, with the biggest movements in both prior to 1800. Insofar as the greenhouse effect is significant, it is nearly all after 1945.

And for some reason, I’ve got the idea in my head that antarctic land ice is decreasing.

Sure enough from the Carbon Brief link, this quote

Measurements from the Gravity Recovery and Climate Experiment (GRACE) satellite since 2002 have shown that the mass of the Antarctic ice sheet is decreasing at an average rate of 100 cubic kilometres every year – the size of a small UK city.

(emphasis mine)
The size of a city is usually measured in area, not volume. The ancient City of York, for instance, has an area of 272 square kilometres (105 square miles) and a population of 125,000. Or maybe they mean the volume of the buildings in a city? A famous building in New York is the Empire State Building. Not only is it quite tall, it also has quite a large volume: around 1,040,000 cubic metres, or 0.001 cubic kilometres, in fact. So does the Carbon Brief claim that a small UK city has a volume of buildings equivalent to 100,000 Empire State Buildings? Or that each average person in a small UK city occupies a building volume greater than Buckingham Palace?
Alternatively, does John Brooks quote a source that does not have a clue about basic maths?
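The arithmetic behind the Empire State Building comparison in the comment above is easy to check; the ~1,040,000 m³ building volume is the figure quoted there.

```python
# How many Empire State Buildings (~1,040,000 m^3 each, the volume quoted
# in the comment) fit into 100 km^3 of annual ice loss?

M3_PER_KM3 = 1e9                            # 1 km = 1,000 m, so 1 km^3 = 10^9 m^3
empire_state_km3 = 1_040_000 / M3_PER_KM3   # ~0.00104 km^3 per building

ice_loss_km3 = 100                          # the Carbon Brief figure
buildings = ice_loss_km3 / empire_state_km3
print(round(buildings))                     # close to the 100,000 quoted above
```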

Manicbeancounter

April 17, 2013 at 8:01 am · Reply

I think this paper does not stack up. I worked as a management accountant in industry for 25 years. One thing I learnt early on when estimating or forecasting was to sense-check the estimates. No matter how good your assumptions are, when estimating or extrapolating well beyond the data trend (where there is potential for error), the best check on the data is by reconciling with other data.
From the above

“The SMB of the grounded AIS is approximately 2100 Gt yr−1, with a large interannual variability. Those changes can be as large as 300 Gt yr−1 and represent approximately 6% of the 1989–2009 average (Van den Broeke et al., 2011).”

A gigatonne of ice is equivalent to a cubic kilometre of water. If the land ice volume is increasing, the water must come from somewhere. Nearly all of that water needs to come from the oceans.
Now for some basic maths. A gigatonne is a billion tonnes. As water has a relative density of 1.0, a tonne of water (1,000 litres) is a cubic metre. Therefore a gigatonne of water is a cubic kilometre (1000^3 = 1,000,000,000 = one billion).
A further factor to consider is the area of the oceans. According to my Times Concise Atlas, the total area of the oceans and seas (excluding enclosed waters like the Dead Sea and Lake Baykal) is 325,000,000 km². A cubic kilometre of water added to an enclosed sea of one million square kilometres would raise the sea level by just 1mm (1,000 m per kilometre × 1,000 mm per metre = 1,000,000 mm in a kilometre). So 325 km³ (325 Gt) of new ice accumulation above sea level in Antarctica would reduce sea levels by 1mm, and 2,100 Gt would reduce them by about 6.5mm.
Some of the ice accumulation will be on ice shelves, so the impact of 2,100 Gt of extra ice per annum might be to reduce sea levels by just 5mm per annum. Also, sea levels might be rising by a little less than the 3.2mm a year that official figures claim, but there is no evidence that sea levels are falling. Further, any net ice melt elsewhere (mostly Greenland) is only adding 1mm to sea level rise. So the rest must be mostly due to thermal expansion of the oceans. I think the evidence for the oceans heating is very weak, and the amounts insignificant. Even Kevin Trenberth in his wildest flights of fantasy would not claim the missing heat (from the air surface temperatures) adds more than 1-2mm to sea level rise.
What this study does show is that by honestly looking at data in different ways, it is possible to reach widely different conclusions. It is only by fitting the data to predetermined conclusions (and suppressing anything outside the consensus) that consistency of results can be achieved.
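The sea-level arithmetic in the comment above can be sense-checked in a few lines; the 325 million km² ocean area is the Times Concise Atlas figure quoted there, and 1 Gt of water is taken as 1 km³.

```python
# Sense check on the sea-level arithmetic: 1 Gt of water ~ 1 km^3 of
# water, and a 1 km^3 volume spread over 1e6 km^2 forms a 1 mm layer.

OCEAN_AREA_KM2 = 325e6      # Times Concise Atlas figure from the comment

def sea_level_fall_mm(ice_gain_gt: float) -> float:
    """Fall in sea level (mm) if `ice_gain_gt` Gt of ocean water ends up
    locked in ice above sea level in Antarctica."""
    volume_km3 = ice_gain_gt                   # 1 Gt water ~ 1 km^3
    return volume_km3 / OCEAN_AREA_KM2 * 1e6   # depth of the layer in mm

print(sea_level_fall_mm(325))    # 1.0 mm, as in the comment
print(sea_level_fall_mm(2100))   # about 6.5 mm
```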

My scepticism on global warming stems from a belief that scientific evidence is strengthened by being corroborated from independent sources. Honest and independent data analysis means that wildly different conclusions can be reached. Comparing and contrasting these independent sources leads me to believe that the public face of the global warming climate change consensus massively exaggerates the problem.

Kevin Marshall

Bjorn Lomborg on Climate Costs in the Australian

Australian Climate Madness blog points to an article, “Wrong way, go back“, in the Australian newspaper by Skeptical Environmentalist Bjorn Lomborg on Australia’s climate policies. This is my comment.

This statement in the article is significant

When economists estimate the net damage from global warming as a percentage of gross domestic product, they find it will indeed have an overall negative impact in the long run but the impact of moderate warming (1C-2C) will be beneficial. It is only towards the end of the century, when temperatures have risen much more, that global warming will turn negative.

Now consider the Apocalypse Delayed? posting of March 28th. Referring to an Economist article, it says that a number of empirical studies show that climate sensitivity is much lower than the climate models assume. Therefore, moving into the net cost range seems much less likely.
But why are there net costs? Lomborg's calculations are based on William Nordhaus's DICE model that

calculates the total costs (from heat waves, hurricanes, crop failure and so on) as well as the total benefits (from cold waves and CO2 fertilisation).

I would claim that there is very little evidence for the destabilisation of the planet's climate by rapid warming. Claims in AR4 that hurricanes were getting worse; that some African countries would see up to a 50% reduction in crop yields by 2020; that the Himalayan glaciers would largely disappear by 2035; that the Amazon rainforest could catastrophically collapse – all have been overturned.
Thus the policy justification for avoiding climate catastrophe as a result of rising greenhouse gases is a combination of three components. First, a large rise in temperatures. Second, the resulting destabilisation of the climate system having net adverse consequences. Third, that the cost of constraining the rise in greenhouse gases is less than the cost of doing nothing.
It is only this third aspect that Bjorn Lomborg deals with. Yet even so, he shows that the Australian Government is not "saving the planet for future generations", but causing huge net harm. Policy-making should consider all three components.

That is, there are three components to the policy justification for combatting "climate change" by constraining the growth in greenhouse gas emissions:

  1. That there will be a significant amount of global warming.
  2. That this is net harmful to the planet and the people on it.
  3. That the net harm of policies is less than the net harm of warming. To use a medical analogy, the pain and risks of treatment are less than the disease.

Lomborg, using the best cost model available, comes up with far lower costs of global warming than, say, the Stern Review of 2006. He also uses actual policy costs to assess the net harm of the policies themselves. Lomborg does not, however, challenge the amount of warming from a given quantity of CO2 rise, nor the adverse consequences of that warming. The Economist article and editorial of March 30th conversely challenge the quantity of warming arising from a given rise in CO2, but see it as "apocalypse delayed" and not "apocalypse debunked".

Kevin Marshall

Dehumanizing Climate Sceptics

Steve McIntyre did some research on Dr Paul Bain, the same Dr Bain with whom Jo Nova had a long correspondence a few months ago.

Dehumanizing Language
A few months ago, in an article in Nature Climate Change, Paul Bain, another Australian psychologist, repeatedly used the term “denier” to refer to climate skeptics. Bain defended this usage at Judy Curry’s on the basis that it would “activate the strongest confirming stereotypes” in his target audience:
By using the term “denier” we wanted to start with something that would activate the strongest confirming stereotypes in this audience
Bain's usage was sharply criticized by skeptic blogs (though it was not an issue that I bothered with). Judy Curry made the following interesting suggestion:

Somebody needs to research the sociology and psychology of people that insist that anyone that does not accept AGW as a rationale for massive CO2 mitigation efforts is a “denier.”

Judy's invitation unfortunately was not followed up in the comments. Had this been done, people would have made the surprising discovery that, in his "day job", Bain primarily wrote about the use and function of derogatory epithets (e.g. "cockroach" in the Hutu-Tutsi conflict, and other racially charged terms). Bain observed that a primary function of dehumanizing language is to reinforce the self-esteem of the "in group":
Subtle forms of dehumanization are often explained with reference to …the idea that the in group is attributed “the human essence” more than outgroups, and hence outgroups are implicitly seen as “non-human”. ..

People typically evaluate their in-groups more favorably than out-groups and themselves more favorably than others…

such labeling has the effect of denying full humanness to the out group, reinforcing the self-esteem of the in-group..

The denial of full humanness to others, and the cruelty and suffering that accompany it, is an all-too familiar phenomenon…

Despite Bain’s prolific writing on the use and abuse of dehumanizing epithets, he was oddly oblivious to the function of the term “denier” as a means of dehumanizing IPCC critics.

My interpretation of Bain's scientific research is that likening sceptics to Nazis or pedophiles shows the collective insecurities and feelings of inadequacy of those making the comparisons. Deep down they know that their beliefs are built on sand, and they are desperately finding ways not to acknowledge this. Dehumanizing those who challenge one's beliefs is nothing new. It is an easy position to fall into, and it takes courage to challenge.