Was time running out for tackling CO2 pollution in 1965?

In a recent Amicus Brief it was stated that the Petroleum Industry was told

– CO2 would cause significant warming by the year 2000.
– Time was running out to save the world’s peoples from the catastrophic consequence of pollution.

The Amicus Brief does not mention

– The Presentation covered legislative impacts on the petroleum industry in the coming year, with a recommendation to prioritize according to the “thermometer of legislative activity”.
– The underlying report was on pollution in general.
– The report concluded CO2 emissions were not controllable at local or even the national level.
– The report put off taking action on climate change to around 2000, when it hoped “countervailing changes by deliberately modifying other processes” would be possible.

The Claim

In the previous post I looked at a recent Amicus Brief that is in the public domain

In this post I look at the following statement. 

Then in 1965, API President Frank Ikard delivered a presentation at the organization’s annual meeting. Ikard informed API’s membership that President Johnson’s Science Advisory Committee had predicted that fossil fuels would cause significant global warming by the end of the century. He issued the following warning about the consequences of CO2 pollution to industry leaders:

This report unquestionably will fan emotions, raise fears, and bring demands for action. The substance of the report is that there is still time to save the world’s peoples from the catastrophic consequence of pollution, but time is running out.

The Ikard Presentation

Note 6 contains a link to the presentation 

6. Frank Ikard, Meeting the challenges of 1966, Proceedings of the American Petroleum Institute 12-15 (1965), http://www.climatefiles.com/trade-group/american-petroleuminstitute/1965-api-president-meeting-the-challenges-of-1966/.

The warning should be looked at in the context of the presentation as a whole, which:
– Starts with the massive increase in Bills introduced in the current Congress – more than the previous two Congresses combined.
– Government fact gathering
– Land Law Review
– Oil and Gas Taxation
– Air and Water Conservation, the section in which the statement quoted above was made.
– Conclusion

The thrust of the presentation is how new legislation impacts the industry. I have transcribed a long quotation from the Air and Water Conservation section, where the “time is running out” statement was made.

Air and Water Conservation
The fact that our industry will continue to be confronted with problems of air and water conservation for many years to come is demonstrated by the massive report of the Environment Pollution Panel of the President’s Science Advisory Committee, which was presented to President Johnson over the weekend.
This report unquestionably will fan emotions, raise fears and bring demands for action. The substance of the report is that there is still time to save the world’s peoples from the catastrophic consequence of pollution, but time is running out.
One of the most important predictions of the report is that carbon dioxide is being added to the earth’s atmosphere at such a rate that by the year 2000 the heat balance will be so modified as possibly to cause marked changes in climate beyond local or even national efforts. The report further states, and I quote: “… the pollution from internal combustion engines is so serious, and is growing so fast, that an alternative nonpollution means of powering automobiles, buses and trucks is likely to become a national necessity.”
The report, however, does conclude that urban air pollution, while having some unfavourable effects, has not reached the stage where the damage is as great as that associated with cigarette smoking. Furthermore, it does not find that present levels of pollution in air, water, soils and living organisms are such as to be a demonstrated cause of disease or death in people: but it is fearful of the future. As a safeguard it would attempt to assert the right of man to freedom from pollution and to deny the right of anyone to pollute air, land or water.
There are more than 100 recommendations in this sweeping report, and I commend it to your study. Implementation of even some of them will keep local, state and federal legislative bodies, as well as the petroleum and other industries, at work for generations.
The scope of our involvement is suggested, once again, by the thermometer of legislative activity this past year. On the federal level, hearings and committee meetings relating to air and water conservation were held almost continuously. The results, of course, are the Water Quality Act of 1965 and an important amendment to the Clean Air Act of 1963.

My reading is that Ikard is referring to a large report on pollution as a whole, with more than 100 recommendations, when saying “time is running out”. However, whether the following paragraph on atmospheric CO2 is related to the urgency claim will depend on whether the report treats tackling pollution from atmospheric CO2 with great urgency. Ikard commends the report for study, prioritizing by the “thermometer of legislative activity”.
Further, this Amicus Brief was submitted by a group of academics, namely Dr. Naomi Oreskes, Dr. Geoffrey Supran, Dr. Robert Brulle, Dr. Justin Farrell, Dr. Benjamin Franta and Professor Stephan Lewandowsky. When I was at university, I was taught to read the original sources. In his presentation Frank Ikard also commends listeners to study the original document. Yet the Amicus Brief contains no reference to the original document. Instead, its authors base their opinion on an initial reaction voiced just after publication.

1965 Report of the Environmental Pollution Panel 

Nowadays, internet search engines can deliver now-obscure documents more quickly than a professional researcher could once have located the right catalogue in a major library.
I found two sources.
First, from the same website that had the Ikard presentation – climatefiles.com.
http://www.climatefiles.com/climate-change-evidence/presidents-report-atmospher-carbon-dioxide/
As the filename indicates, it is not a copy of the full report. The contents include a letter from President Johnson; Contents; Acknowledgements; Introduction; and Appendix Y4 – Atmospheric Carbon Dioxide. Interestingly, it does not include “Climatic Effects of Pollution” on page 9.
Fortunately a full copy of the report is available at https://babel.hathitrust.org/cgi/pt?id=uc1.b4315678;view=1up;seq=5

I have screen-printed President Johnson’s letter and an extract of Page 9, with some comments.

President Johnson’s letter refers to air pollution in general, but says nothing about the specific impacts of carbon dioxide on climate. Page 9 is more forthcoming.

CLIMATIC EFFECTS OF POLLUTION

Carbon dioxide is being added to the earth’s atmosphere by the burning of coal, oil and natural gas at the rate of 6 billion tons a year. By the year 2000 there will be about 25% more CO2 in our atmosphere than at present. This will modify the heat balance of the atmosphere to such an extent that marked changes in the climate, not controllable though local or even national efforts, could occur. Possibilities of bringing about countervailing changes by deliberately modifying other processes that affect climate may then be very important.

The page 9 paragraph is very short. It makes the prediction that Ikard referred to in his presentation. By 2000, there could be “marked changes in climate not controllable though local or even national efforts”. I assume that there is a typo here, as “not controllable through local or even national efforts” makes more sense.
I interpret the conclusion, in more modern language, as follows:-
The earth is going to warm significantly due to fossil fuel emissions, which might cause very noticeable changes in the climate by 2000. But the United States, the world’s largest source of those emissions, cannot control those emissions. Around 2000 there might be ways of controlling the climate that will counteract the impact of the higher CO2 levels.
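As a rough sanity check on the prediction quoted above (“about 25% more CO2” by 2000 than in 1965), the following sketch compares it with approximate annual means from the Mauna Loa record. The two concentration values are my assumptions from that published record, not figures taken from the report itself.

```python
# Approximate Mauna Loa annual mean CO2 concentrations (ppm).
# These are rounded values from the published record, not from the 1965 report.
co2_1965 = 320.0
co2_2000 = 369.5

# Percentage rise actually observed between 1965 and 2000
actual_rise_pct = (co2_2000 - co2_1965) / co2_1965 * 100
print(f"Observed rise 1965-2000: {actual_rise_pct:.1f}%")
```

On these figures the observed rise to 2000 was roughly 15%, well short of the report’s “about 25% more CO2 in our atmosphere than at present”.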

Concluding Comments

Based on my reading of API President Frank Ikard’s presentation, he was not warning about the consequences of CO2 emissions when he stated

This report unquestionably will fan emotions, raise fears, and bring demands for action. The substance of the report is that there is still time to save the world’s peoples from the catastrophic consequence of pollution, but time is running out.

This initial interpretation is validated by the lack of urgency the report gives to tackling the possible impacts of rising CO2 levels. Given that Ikard very clearly recommends reading the report, one would have expected a group of scholars, over fifty years later, to follow that lead before formulating an opinion.
The report is not of the opinion that “time is running out” for combating the climatic effects of carbon dioxide. It further pushes taking action to beyond 2000, with action on climate seeming to be of a geo-engineering type, rather than adaptation. Insofar as Ikard may have implied urgency with respect to CO2, the report flatly contradicts this.
The bigger question is why the report chose not to recommend taking urgent action at the time. This might inform why people of the time did not see rising CO2 as something for which they needed to take action. It is the Appendix Y4 (authored by the leading American climatologists at that time) that makes the case for the impact of CO2 and courses of action to tackle those impacts. In another post I aim to look at the report through the lens of those needing to be convinced. 

Kevin Marshall

Climate Alarmism from Edward Teller in 1959

The Daily Caller had an article on 30th January SEVERAL HIGH-PROFILE ENVIROS ARE WORKING TO RESUSCITATE CALIFORNIA’S DYING CLIMATE CRUSADE

What caught my interest was the following comment

Researchers Naomi Oreskes and Geoffrey Supran were among those propping up the litigation, which seeks to hold Chevron responsible for the damage climate change has played on city infrastructure.

The link is to an Amicus Brief submitted by Dr. Naomi Oreskes, Dr. Geoffrey Supran, Dr. Robert Brulle, Dr. Justin Farrell, Dr. Benjamin Franta and Professor Stephan Lewandowsky. I looked at the Supran and Oreskes paper Assessing ExxonMobil’s Climate Change Communications (1977–2014) in a couple of posts back in September 2017. Professor Lewandowsky probably gets more mentions on this blog than anyone else.

The Introduction starts with the following allegation against Chevron

At least fifty years ago, Defendants-Appellants (hereinafter, “Defendants”) had information from their own internal research, as well as from the international scientific community, that the unabated extraction, production, promotion, and sale of their fossil fuel products would result in material dangers to the public. Defendants failed to disclose this information or take steps to protect the public. They also acted affirmatively to conceal their knowledge and discredit climate science, running misleading nationwide marketing campaigns and funding junk science to manufacture uncertainty, in direct contradiction to their own research and the actions they themselves took to protect their assets from climate change impacts such as sea level rise.

These are pretty serious allegations to make against a major corporation, so I have been reading the Amicus Brief with great interest and making notes. As an ardent climate sceptic, I started reading with trepidation. Maybe the real truth of climate denial would be starkly revealed to me. Instead, it has made very entertaining reading. After three thousand words of notes, and having only got up to 1972 in the story, I have decided to break the story up into a few separate posts.

Edward Teller 1959

The Amicus Brief states

In 1959, physicist Edward Teller delivered the first warning of the dangers of global warming to the petroleum industry, at a symposium held at Columbia University to celebrate the 100th anniversary of the industry. Teller described the need to find energy sources other than fossil fuels to mitigate these dangers, stating, “a temperature rise corresponding to a 10 per cent increase in carbon dioxide will be sufficient to melt the icecap and submerge New York. All the coastal cities would be covered, and since a considerable percentage of the human race lives in coastal regions, I think that this chemical contamination is more serious than most people tend to believe.”

Edward Teller was at the height of his fame, being credited with developing the world’s first thermonuclear weapon, and he became known in the United States as “the father of the H-bomb.” At the height of the Cold War it must have been quite a coup to have one of the world’s leading physicists, and a noted anti-communist, give an address. As top executives from all the major oil companies would have been there, I am not sure they would have greeted the claims with rapturous applause. More likely they thought the Professor had caught some new religion. They might afterwards have made some inquiries. Although climatology was in its infancy, the oil majors would have had teams of geologists who could make enquiries. The geologists may have turned up the Revelle and Suess 1957 paper Carbon Dioxide Exchange Between Atmosphere and Ocean and the Question of an Increase of Atmospheric CO2 during the Past Decades, 9 Tellus 18 (1957), which is mentioned in the previous paragraph of the Amicus Brief.

Revelle and Suess state in the Introduction

(A) few percent increase in the CO2 content of the air, even if it has occurred, might not produce an observable increase in average air temperature near the ground in the face of fluctuations due to other causes. So little is known about the thermodynamics of the atmosphere that it is not certain whether or how a change in infrared back radiation from the upper air would affect the temperature near the surface. Calculations by PLASS (1956) indicate that a 10% increase in atmospheric carbon dioxide would increase the average temperature by 0.36°C. But amplifying or feed-back processes may exist such that a slight change in the character of the back radiation might have a more pronounced effect.

So some experts in the field reported that it was uncertain how much warming could occur from a small rise in CO2 levels. The only actual estimate was 0.36°C from a 10% rise. So how could that melt the icecap and flood New York? If this was the first introduction oil executives had to the concept of CO2-induced global warming, might they have become a little on their guard about any future, more moderate, claims?

They would have been right to be uneasy. 1959 was the first full year that CO2 levels were monitored at Mauna Loa, Hawaii. The mean CO2 level for that year was 315.97 ppm. The 10% increase was passed in 1987, and for 2018 the figure was 408.52 ppm, 29.3% higher. The polar icecaps are still in place. From Sea Level Info, tide gauges show sea level rises over the last 60 years of 7.6 inches for Washington DC; 6.9 inches for Philadelphia; and 6.6 inches for The Battery at the tip of Lower Manhattan. These figures assume a linear rise over the period.
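The percentages above can be reproduced directly from the two Mauna Loa annual means already quoted in the text; a quick sketch:

```python
co2_1959 = 315.97   # ppm, Mauna Loa annual mean for 1959 (quoted above)
co2_2018 = 408.52   # ppm, Mauna Loa annual mean for 2018 (quoted above)

rise_pct = (co2_2018 - co2_1959) / co2_1959 * 100
threshold_10pct = co2_1959 * 1.10   # level corresponding to Teller's 10% rise

print(f"Rise 1959-2018: {rise_pct:.1f}%")            # ~29.3%
print(f"10% above 1959: {threshold_10pct:.1f} ppm")  # ~347.6 ppm
```

The 10% threshold of roughly 347.6 ppm is the level crossed in the late 1980s, consistent with the 1987 date given above.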

The chart for The Battery, NY shows no discernible acceleration in the last 60 years, despite the acceleration in the rate of CO2 rise shown in green. It is the same for the other tide gauges.

The big question is this: 60 years later, what were the authors of the Amicus Brief thinking when they quoted such a ridiculous claim?

Kevin Marshall

Thomas Fuller on polar-bear-gate at Cliscep

This is an extended version of a comment made at Thomas Fuller’s cliscep article Okay, just one more post on polar-bear-gate… I promise…

There are three things highlighted in the post and the comments that illustrate the Polar Bear smear paper as being a rich resource towards understanding the worst of climate alarmism.

First is from Alan Kendall @ 28 Dec 17 at 9:35 am

But what Harvey et al. ignores is that Susan Crockford meticulously quotes from the “approved canon of polar bear research” and exhorts her readers to read it (making an offer to provide copies of papers difficult to obtain). She provides an entree into that canon- an entree obviously used by many and probably to the fury of polar bear “experts”.

This is spot on about Susan Crockford and is, in my opinion, what proper academics should be aiming at. To assess an area where widely different perspectives are possible, I was taught that it is necessary to read and evaluate the original documents. Climate alarmists in general, and this paper in particular, evaluate in relation to collective opinion, as opposed to more objective criteria. In the paper, “science” is support for a partly fictional consensus, and “denial” is seeking to undermine that fiction. On polar bears this is clearly stated in relation to the two groups of blogs.

We found a clear separation between the 45 science-based blogs and the 45 science-denier blogs. The two groups took diametrically opposite positions on the “scientific uncertainty” frame—specifically regarding the threats posed by AGW to polar bears and their Arctic-ice habitat. Scientific blogs provided convincing evidence that AGW poses a threat to both, whereas most denier blogs did not.

A key element is to frame statements in terms of polar extremes.

Second is the extremely selective use of data (or selective analysis methods) to enable the desired conclusion to be reached. Thomas Fuller has clearly pointed out in the article, and restated in the comments, the following with respect to WUWT.

Harvey and his 13 co-authors state that WUWT overwhelmingly links to Crockford. I have shown that this is not the case.

Selective use of data (or selective analysis methods) is common in climate alarmism. For instance:

  • The original MBH 98 Hockey-Stick graph used out-of-date temperature series, or tree-ring proxies such as at Gaspe in Canada, that were not replicated by later samples.
  • Other temperature reconstructions. Remember Keith Briffa’s Yamal reconstruction, which relied on one tree for the post-1990 reconstructions? (see here and here)
  • Lewandowsky et al “Moon Hoax” paper. Just 10 out of 1145 survey respondents supported the “NASA faked the Moon Landings” conspiracy theory. Of these just 2 dogmatically rejected “climate”. These two faked/scam/rogue respondents 860 & 889 supported every conspiracy theory, underpinning many of the correlations.
  • Smoothing out the pause in warming in Risbey, Lewandowsky et al 2014 “Well-estimated global surface warming in climate projections selected for ENSO phase”. In The Lewandowsky Smooth, I replicated the key features of the temperature graph in Excel, showing how a decade of no warming in HADCRUT4 was made to appear as hardly a cessation of warming.
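The “Moon Hoax” bullet above turns on a statistical point: a couple of extreme respondents can manufacture a correlation that does not exist in the rest of the sample. A toy illustration with synthetic data (a much smaller sample than the actual 1145-respondent survey, purely to show the mechanism):

```python
import random
random.seed(42)

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# 48 respondents whose two scores are unrelated (synthetic, centred on 3)
xs = [random.gauss(3, 1) for _ in range(48)]
ys = [random.gauss(3, 1) for _ in range(48)]
r_without = pearson(xs, ys)

# Add two rogue respondents scoring at the extreme on both scales
r_with = pearson(xs + [9, 9], ys + [9, 9])

print(round(r_without, 2), round(r_with, 2))
```

Without the two extreme points the correlation hovers near zero; with them included it becomes substantial. This is the same mechanism alleged for respondents 860 and 889 underpinning the paper’s correlations.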

Third is to frame the argument in terms of polar extremes. Richard S J Tol @ 28 Dec 17 at 7:13 am

And somehow the information in those 83 posts was turned into a short sequence of zeros and ones.

Not only are there, on many issues, a vast number of possible intermediate positions (the middle ground); there are other dimensions. One is the strength of evidential support for a particular perspective. There could be little or no persuasive evidence. Another is whether there is support for alternative perspectives. For instance, although sea ice data is lacking for the early twentieth-century warming, average temperature data is available for the Arctic. NASA GISTEMP (despite its clear biases) has estimates for 64N-90N.

The temperature data seems to indicate clearly that not all of the decline in Arctic sea ice from 1979 can be attributed to AGW. From the 1880s to 1940 there was Arctic warming of a similar magnitude to that from 1979 to 2010, with cooling in between. Yet the rate of increase in GHG levels was greater in 1975-2010 than in 1945-1975, which was in turn greater than in the decades before.

Kevin Marshall

 

How strong is the Consensus Evidence for human-caused global warming?

You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.

Richard Feynman – 1964 Lecture on the Scientific Method

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

The Debunking Handbook 2011 – John Cook and Stephan Lewandowsky

My previous post looked at the attacks on David Rose for daring to suggest that the rapid fall in global land temperatures after the El Niño event was strong evidence that the record highs in global temperatures were not due to human greenhouse gas emissions. The technique used against him was to look at long-term linear trends. The main problems with this argument were:
(a) according to AGW theory, warming rates from CO2 alone should be accelerating, and at a higher rate than the estimated linear warming rates from HADCRUT4;
(b) HADCRUT4 shows warming stopped from 2002 to 2014, yet in theory the warming from CO2 should have accelerated.

Now there are at least two ways to view my arguments. The first is to look at Feynman’s approach. The climatologists and associated academics attacking journalist David Rose did so from a very blurred specification of AGW theory: human emissions will cause greenhouse gas levels to rise, which will cause global average temperatures to rise. Global average temperatures have clearly risen in all long-term (>40 year) data sets, so the theory is confirmed. On a rising trend, with large variations due to natural variability, any new records will be primarily “human-caused”. But making the theory and data slightly less vague reveals the opposite conclusion. Around the turn of the century the annual percentage increase in CO2 emissions went from 0.4% to 0.5% a year (figure 1), which should have led to an acceleration in the rate of warming. In reality, warming stalled.
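The “blurred” long-term-trend argument can be made concrete with a small sketch on synthetic data (illustrative numbers of my own, not HADCRUT4 values): a series that warms steadily and then goes flat still shows a healthy positive trend when a single straight line is fitted over the whole period.

```python
import numpy as np

years = np.arange(1970, 2015)
# Synthetic anomalies: 0.018 degC/yr warming up to 2001, flat thereafter
temps = np.where(years <= 2001,
                 (years - 1970) * 0.018,
                 (2001 - 1970) * 0.018)

# Least-squares linear trends (degC per year)
full_trend = np.polyfit(years, temps, 1)[0]
pause_trend = np.polyfit(years[years >= 2002], temps[years >= 2002], 1)[0]

print(f"1970-2014 trend: {full_trend:.4f} degC/yr")
print(f"2002-2014 trend: {pause_trend:.4f} degC/yr")
```

The whole-period fit reports steady warming even though the final thirteen years are, by construction, completely flat, which is exactly how a long-term linear trend can mask a pause.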

The reaction was to come up with a load of ad hoc excuses. The Hockey Schtick blog had reached 66 separate excuses for the “pause” by November 2014, ranging from the peer-reviewed literature to a comment in the UK Parliament. This could be because climate is highly complex, with many variables, where the contribution of each can only be guessed at, let alone its magnitude and its interrelationships with the other factors. So how do you tell which statements are valid information and which are misinformation? I agree with Cook and Lewandowsky that misinformation is pernicious, and difficult to get rid of once it becomes entrenched. So how does one distinguish the good information from the bad, misleading or even pernicious?

The Lewandowsky / Cook answer is to follow the consensus of opinion. But what is the consensus of opinion? In climate one variation is to follow a small subset of academics in the area who answer in the affirmative to

1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?

2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

The problem is that the first question is just reading a graph, and the second is a belief statement with no precision. Anthropogenic global warming has been a hot topic for over 25 years now, yet these two very vague, empirically based questions, forming the foundations of the subject, should be capable of more precise formulation. On the second, one would expect pretty clear and unambiguous estimates of the percentage of warming, so far, that is human-caused. On that, the consensus of leading experts is unable to say whether it is 50% or 200% of the warming so far. (There are meant to be time lags and factors like aerosols that might suppress the warming.) This is from the 2013 UNIPCC AR5 WG1 SPM section D3:-

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

The IPCC, encapsulating state-of-the-art knowledge, cannot provide firm evidence in the form of a percentage, or even a fairly broad range, despite having over 60 years of data to work on. It is even worse than it appears. The “extremely likely” phrase is a Bayesian probability statement. Ron Clutz’s simple definition from earlier this year was:-

Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.

For the IPCC to claim that their statement was extremely likely, at the fifth attempt, they should be able to show some sort of progress in updating their beliefs in the light of new evidence. That would mean narrowing the estimate of the magnitude of the impact of a doubling of CO2 on global average temperatures. As Clive Best documented in a cliscep comment in October, the IPCC reports from 1990 to 2013 failed to change the estimated range of 1.5°C to 4.5°C. Looking up Climate Sensitivity in Wikipedia we get the origin of the range estimate.

A committee on anthropogenic global warming convened in 1979 by the National Academy of Sciences and chaired by Jule Charney estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available; one, due to Syukuro Manabe, exhibited a climate sensitivity of 2 °C, the other, due to James E. Hansen, exhibited a climate sensitivity of 4 °C. “According to Manabe, Charney chose 0.5 °C as a not-unreasonable margin of error, subtracted it from Manabe’s number, and added it to Hansen’s. Thus was born the 1.5 °C-to-4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since…
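The arithmetic behind the canonical range, as described in the quote above, is simply:

```python
manabe = 2.0   # degC per CO2 doubling (Manabe's model, per the quote)
hansen = 4.0   # degC per CO2 doubling (Hansen's model, per the quote)
margin = 0.5   # Charney's "not-unreasonable margin of error"

low, high = manabe - margin, hansen + margin
print(low, high)  # 1.5 4.5
```

Two model runs plus a round-number margin: that is the entire evidential basis of the range that survived unchanged from 1979 to 2013.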

It is revealing that the quote is under the subheading Consensus Estimates. The climate community has collectively failed to update the original beliefs, which were based on a very rough estimate. The emphasis on referring to consensus beliefs about the world, rather than looking outward for evidence in the real world, is, I would suggest, the primary reason for this failure. Yet such community-based beliefs completely undermine the integrity of the Bayesian estimates, making their use in statements about climate clear misinformation in Cook and Lewandowsky’s use of the term. What is more, those in the climate community who look primarily to these consensus beliefs rather than to real-world data will endeavour to dismiss the evidence, make up ad hoc excuses, or smear those who try to disagree. A caricature of these perspectives with respect to global average temperature anomalies is available in the form of a flickering widget at John Cook’s skepticalscience website. This purports to show the difference between “realist” consensus and “contrarian” non-consensus views. Figure 2 is a screenshot of the consensus view, interpreting warming as a linear trend. Figure 3 is a screenshot of the non-consensus or contrarian view, which is supposed to interpret warming as a series of short, disconnected periods of no warming, with each period just happening to be at a higher level than the previous one. There are a number of things that this indicates.

(a) The “realist” view is of a linear trend throughout any data series. Yet the period from around 1940 to 1975 shows no warming, or slight cooling, depending on the data set. Therefore any linear trend line starting earlier than 1970-1975 and ending in 2015 will show a lower rate of warming. This would be consistent with the rate of CO2 increasing over time, as shown in figure 1. But shorten the period, again ending in 2015, and once the period becomes less than 30 years the warming trend will also decrease. This contradicts the theory, unless ad hoc excuses are used, as shown in my previous post using the HADCRUT4 data set.

(b) Those who agree with the consensus are called “Realists”, despite looking inward towards common beliefs. Those who disagree are labelled “Contrarians”. That label is not inaccurate where there is a dogmatic consensus. But it is utterly false to lump together all those who disagree as holding the same views, especially when no examples are provided of anyone who actually holds such views.

(c) The linear trend appears a more plausible fit than the series of “contrarian” lines. By implication, those who disagree with the consensus are presented as having a distinctly more blinkered and distorted perspective than those who follow it. Yet even the GISTEMP data set (which gives the greatest support to the consensus views) shows a clear break in the linear trend. The less partisan HADCRUT4 data shows an even greater break.

Those who spot the obvious – that around the turn of the century warming stopped or slowed down, when in theory it should have accelerated – are given a clear choice. They can conform to the scientific consensus, denying the discrepancy between theory and data. Or they can act as scientists, denying the false and empirically empty scientific consensus, receiving the full weight of all the false and career-damaging opprobrium that accompanies it.

fig2-sks-realists

 

 

fig3-sks-contras

Kevin Marshall

 

John Cook undermining democracy through misinformation

It seems that John Cook was posting comments in 2011 under the pseudonym Lubos Motl. The year before, physicist and blogger Luboš Motl had posted a rebuttal of Cook’s then 104 Global Warming & Climate Change Myths. When someone counters your beliefs point for point, most people would naturally feel some anger. Taking the online identity of Motl is potentially more than identity theft. It can be viewed as an attempt to damage the reputation of someone you oppose.

However, there is a wider issue here. In 2011 John Cook co-authored with Stephan Lewandowsky The Debunking Handbook, which is still featured prominently on skepticalscience.com. This short tract starts with the following paragraphs:-

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

A common misconception about myths is the notion that removing its influence is as simple as packing more information into people’s heads. This approach assumes that public misperceptions are due to a lack of knowledge and that the solution is more information – in science communication, it’s known as the “information deficit model”. But that model is wrong: people don’t process information as simply as a hard drive downloading data.

If Cook was indeed using the pseudonym Lubos Motl then he was knowingly putting misinformation into the public arena in a malicious form. If he misrepresented Motl’s beliefs, then the public may not know whom to trust. Targeted against one effective critic, it could trash that critic’s reputation. At a wider scale it could allow morally and scientifically inferior views to gain prominence over superior viewpoints. If the alarmist beliefs were superior, it would not be necessary to misrepresent alternative opinions. Open debate would soon reveal which side had the better views. And in debating and disputing, all sides would sharpen their arguments. What would quickly disappear is the reliance on opinion surveys and the rewriting of dictionaries. Instead, proper academics would be distinguishing quality, relevant evidence from dogmatic statements based on junk sociology and psychology. They would start defining the boundaries of expertise between the basic physics, computer modelling, results analysis, public policy-making, policy implementation, economics, ethics and the philosophy of science. They might then start to draw on the understanding that has been achieved in those subject areas.

Kevin Marshall

Dixon and Jones confirm a result on the Stephan Lewandowsky Surveys

Congratulations to Ruth Dixon and Jonathan Jones on managing to get a commentary on the two Stephan Lewandowsky, Gilles Gignac & Klaus Oberauer surveys published in Psychological Science. Entitled “Conspiracist Ideation as a Predictor of Climate Science Rejection: An Alternative Analysis” it took two years to get published. Ruth Dixon gives a fuller description on her blog, My Garden Pond. It confirms something that I have stated independently, with the use of pivot tables instead of advanced statistical techniques. In April last year I compared the two surveys in a couple of posts – Conspiracist Ideation Falsified? (CIF) & Extreme Socialist-Environmentalist Ideation as Motivation for belief in “Climate Science” (ESEI).

The major conclusion from their analysis of the survey

All the data really shows is that people who have no opinion about one fairly technical matter (conspiracy theories) also have no opinion about another fairly technical matter (climate change). Complex models mask this obvious (and trivial) finding.

In CIF my summary was

A recent paper, based on an internet survey of American people, claimed that “conspiracist ideation, is associated with the rejection of all scientific propositions tested“. Analysis of the data reveals something quite different. Strong opinions with regard to conspiracy theories, whether for or against, suggest strong support for strongly-supported scientific hypotheses, and strong, but divided, opinions on climate science.

In the concluding comments I said

The results of the internet survey confirm something about people in the United States that I and many others have suspected – they are a substantial minority who love their conspiracy theories. For me, it seemed quite a reasonable hypothesis that these conspiracy lovers should be both suspicious of science and have a propensity to reject climate science. Analysis of the survey results has over-turned those views. Instead I propose something more mundane – that people with strong opinions in one area are very likely to have strong opinions in others. (Italics added)

Dixon and Jones have a far superior means of getting to the results. My method is to input the data into a table, find groupings or classifications, then analyse the results via pivot tables or graphs. This mostly leads up blind alleys, but can develop further ideas. For every graph or table in my posts, there can be a number of others stashed on my hard drive. To call it “trial and error” misses the understanding to be gained from the analysis. Their method (after rejecting linear OLS) is loess local regression. They derive the following plot.

This compares with my pivot table for the same data.

The Grand Total row shows that the strongest climate believers (band 5) comprise 12% of the total responses. For the smallest group of beliefs about conspiracy theories, with just 60 of the 5005 responses, 27% had the strongest beliefs about climate. The biggest percentage figure is for the group who averaged a middle “3” score on both climate and conspiracy theories. That is those with no opinion on either subject.
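For readers unfamiliar with loess, it fits a weighted regression line within a sliding local window of the data. Here is a minimal sketch in Python (illustrative only, using made-up scores on a 1–5 scale rather than the survey file, and my own toy implementation rather than the Dixon and Jones code):

```python
# A minimal local-regression (loess-style) smoother in plain numpy.
# Illustrative sketch only: invented data, not the survey responses.
import numpy as np

def loess_smooth(x, y, frac=0.5):
    """Locally weighted linear regression with tricube weights."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))       # points in each local window
    fitted = np.empty(n)
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]              # the k nearest points
        dmax = max(d[idx].max(), 1e-12)
        w = (1 - (d[idx] / dmax) ** 3) ** 3  # tricube weights
        sw = np.sqrt(w)
        # weighted least-squares line a + b*x through the window
        A = np.column_stack([np.ones(k), x[idx]])
        beta = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)[0]
        fitted[i] = beta[0] + beta[1] * x0
    return fitted

# Example: noisy scores loosely rising with x, as in the survey plots
rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(1, 5, 200))
ys = 3 + 0.4 * (xs - 3) + rng.normal(0, 0.3, 200)
smooth = loess_smooth(xs, ys)
```

The pivot-table alternative is simply to group the same points into bands and take the average in each band; loess does the equivalent job continuously.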

The more fundamental relationship I found in the blog survey is between strong beliefs in climate science and extreme left-environmentalist political views. It is a separate topic, and its inclusion by Dixon and Jones would have left much less space for the above insight in 1,000 words, and been much more difficult to publish. The survey data is clear.

The blog survey (which was held on strongly alarmist blogs) shows that most of the responses were highly skewed to anti-free-market views (that is, a lower response score), along with being strongly pro-climate.

The internet survey of the US population allowed 5 responses instead of 4. The fifth was a neutral. This shows a more normal distribution of political beliefs, with over half of the responses in the middle ground.

This shows what many sceptics have long suspected, but I resisted. Belief in “climate science” is driven by leftish world views. Stephan Lewandowsky can only see the link between “climate denial” beliefs and free-market views, because he regards left-environmentalist perspectives and “climate science” as a priori truths. That is the reality against which everything is to be measured. From this perspective climate science has not failed by being falsified by the evidence, but because scientists have yet to find the evidence; the models need refining; and there is a motivated PR campaign to undermine these efforts.

Kevin Marshall

Prof Lewandowsky – Where is the overwhelming evidence of climate change?

On his blog (funded by the Australian people), Stephan Lewandowsky claims that there is overwhelming evidence of climate change. My question is as follows:

You claim that there is “overwhelming scientific evidence on climate change”. Does this apply to:-

  1. The trivial proposition that there is a greenhouse effect, so a rise in GHG levels will cause some rise in temperature?

    OR

  2. The non-trivial proposition that the unmitigated increase in GHG levels will lead to significant warming with catastrophic consequences?

The trivial proposition is something for a few academics to ponder. It is only when there is reasonable scientific evidence for the non-trivial proposition that a global policy to mitigate could be seriously contemplated.

Having attended John Cook’s lecture at Bristol University a few days ago, I found out that the vast survey of academic papers that found a 97% consensus was about belief in the trivial proposition, and that some of the papers were authored by non-scientists. That is, Cook presented weak, secondary evidence of the trivial proposition.

Cook’s lecture also mentioned the four Hiroshima bombs a second of heat accumulation in the climate system since 1998, the widget for which you have on the left-hand side of this blog. Stated this way, there appears to be a non-trivial amount of warming that anybody can perceive. It is equivalent to the average temperature of the oceans increasing at a rate of less than 0.0018°C per annum. That is weak evidence for the trivial proposition.
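The arithmetic behind that conversion can be checked with rough figures. The bomb yield, ocean mass and specific heat below are textbook approximations of my own choosing, not numbers from the lecture:

```python
# Back-of-envelope check: 4 Hiroshima bombs per second of heat,
# expressed as an annual rise in average ocean temperature. All
# constants are textbook approximations, not figures from the lecture.
BOMB_JOULES = 6.3e13        # approximate Hiroshima bomb yield, J
RATE_WATTS = 4 * BOMB_JOULES
SECONDS_PER_YEAR = 3.156e7
OCEAN_MASS_KG = 1.4e21      # approximate mass of the world's oceans
SPECIFIC_HEAT = 3990        # seawater, J/(kg*K)

delta_t = RATE_WATTS * SECONDS_PER_YEAR / (OCEAN_MASS_KG * SPECIFIC_HEAT)
print(f"{delta_t:.5f} K per year")   # about 0.0014, consistent with the post
```

The headline-grabbing figure and the imperceptible one are the same quantity in different units.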

So where is the overwhelming evidence that can justify policy?


This gives rise to a question that Australian citizens may want to raise with their elected representatives.

Should Australian taxpayers be funding a partisan blog that is strongly critical of mainstream political opinion, whose sole current contributor is a non-Australian working outside of Australia?

Kevin Marshall

Notes on John Cook’s presentation at Bristol University

On Friday 19th September John Cook talked on “Dogma vs. consensus: Letting the evidence speak on climate change” at Bristol University. He was introduced by Stephan Lewandowsky, who is now a professor there. The slides are available at Skepticalscience. Here are some notes on the talk, along with brief comments.

The global warming hypothesis

John Cook started by asking people to see if they could define the greenhouse effect, as a way of detecting whether people know what they are talking about. However, he did not then apply this criterion in evaluating consensus views.

He stated that there is no single cause of global warming (including solar and volcanic contributions), but that there is a major or principal one. From then on Cook proceeded as if there were a single cause. There was no evidence for the relative size of each cause of global warming, nor any consideration of the implications if AGW accounted for half or less of the warming rather than the entirety of it.

He stated that there are multiple lines of evidence for AGW actually operating, with no mention of the quality of the evidence, or of contrary evidence that the pattern of warming does not fit the models.

Cook et al. 97% scientific consensus paper

Cook then went on to talk about his 97% consensus paper, and showed the Barack Obama tweet.

In the Q&A John Cook admitted to two things. First, the paper only dealt with declared belief in the broadest, most banal, form of the global warming hypothesis: that greenhouse gas levels are increasing and there is some warming as a consequence. Second, the included papers ranged outside the realm of climate science1, and quite possibly some were written by people without a science degree. The Barack Obama tweet account seems to have got the wrong impression.

This should be seen in the light of a comment about why consensus is important.

Communicating consensus isn’t about proving climate change. It addresses a public misconception about expert opinion.

John Cook has spectacularly failed on his own terms.

Fake Experts

Cook pointed to a petition of a few years ago signed by over 31,000 American scientists, opposing the Kyoto Treaty on the basis that it would harm the environment, hinder the advance of science and technology and damage the health and welfare of mankind. They also signed to say that there was no convincing evidence of catastrophic global warming.

He calls these people “fake experts” because they are “scientists”, but not “climate scientists”. But as we have seen, neither were all the authors of his climate consensus paper.

If scientists from other areas are “fake experts” on climate science, then this equally applies to those making statements in support of the “climate consensus”. That means all the statements by various scientific bodies outside of the field of “climate” are equally worthless. Even more worthless are proclamations by political activists and politicians.

But most of all neither is John Cook a “climate expert”, as his degree is in physics.

Four Hiroshima Bombs and a Zillion Kitten Sneezes

As an encore, Cook had a short presentation on global warming. There were no hockey sticks showing the last thousand years of warming, or even a NASA Gistemp global surface temperature anomaly graph for the last century. The new graph is the earth’s cumulative heat energy accumulation since 1970, broken down into components. It was a bit like the UNIPCC’s graph below from AR5 WG1 Chapter 3. However, I do not remember the uncertainty bands being on Cook’s version.

Seeing that, I whispered to my neighbour “Four Hiroshima Bombs”. Lo and behold the next slide mentioned them. Not a great prediction on my part, as skepticalscience.com has a little widget. But an alternative variant of this was a zillion kitten sneezes a second, or some such preposterous figure. The next slide was a cute picture of a kitten. Cook seems to be parodying his work.

The Escalator with cherries on top

The last slide was of Cook’s “Escalator” graph, or at least the “Skeptics” view. The special feature for the evening was a pair of cherries in the top left, to emphasise that “skeptics” cherry-pick the evidence.

It was left flickering away for the last 15 minutes.

 

My thoughts on the presentation

Some of the genuine sceptics who left the room were seething, although they hung around and chatted.

But having reviewed my notes and the slides my view is different. John Cook started the presentation by trying to establish his expert authority on the global warming hypothesis. Then he let slip that he does not believe all global warming is from rising greenhouse gas levels. The centrepiece was the 97.4% scientific consensus paper where he was lead author. But, as Cook himself admitted, the survey looked for support for the most banal form of global warming, and the surveyed papers were not all written by climate scientists. Yet Barack Obama is enacting policy based on the false impression of a scientific consensus of dangerous warming.

Then in dissing an alternative viewpoint from actual scientists, Cook has implicitly undermined years of hard campaigning and entryism by green activists in getting nearly every scientific body in the world to make propaganda statements in support of the catastrophic global warming hypothesis and the necessity of immediate action to save the planet. Cook then parodied his own “four Hiroshima bombs a second” widget, before finishing off with a flickering gross misrepresentation of the sceptics, a number of whom were in the room listening politely.

The final question was from someone who asked why nearly all the questions were coming from sceptics, when the vast majority of the people in the room were in support of the “science”. At the end there was polite applause, and the room quickly emptied. I think the answer to the lack of questions was the embarrassment people felt. If John Cook is now the leading edge of climate alarmism, then the game is up.

Kevin Marshall

Notes

  1. This was in response to a question from blogger Katabasis, who pointed out some papers that were clearly not climate science, I believe using Jose Duarte’s list.

The Lewandowsky Smooth

Summary

The Risbey et al. 2014 paper has already had criticism of its claim that some climate models can still take account of actual temperature trends. However, those criticisms did not take into account the “actual” data used, nor did they account for why Stephan Lewandowsky, a professor of psychology, should be a co-author of a climate science paper. I construct a simple model in Excel of surface temperature trends that accurately replicates the smoothed temperature data in Risbey et al. 2014. Whereas the HADCRUT4 data set shows a cooling trend since 2006, a combination of three elements smooths it away to give the appearance of a minimal downturn in a warming trend. Those elements are the substitution of Cowtan and Way 2013 for HADCRUT4; the use of decadal changes in the data (as opposed to changes from a previous period); and the use of 15-year centred moving averages. As Stephan Lewandowsky was responsible for the “analysis of models and observations”, this piece of gross misinformation must be attributed to him, hence the title.

Introduction

Psychology professor Stephan Lewandowsky has previously claimed that “inexpert mouths” should not be heard. He is first a psychologist-cum-statistician; then a specialist on ethics and peer review; then a publisher on the maths of uncertainty. Now Lewandowsky re-emerges as a climate scientist, in

“Well-estimated global surface warming in climate projections selected for ENSO phase”, James S. Risbey, Stephan Lewandowsky, Clothilde Langlais, Didier P. Monselesan, Terence J. O’Kane & Naomi Oreskes, Nature Climate Change (Risbey et al. 2014)

Why the involvement?

Risbey et al. 2014 was the subject of a long post at WUWT by Bob Tisdale. That long post was concerned with the claim that the projections of some climate models could replicate surface temperature data.

Towards the end Tisdale notes

The only parts of the paper that Stephan Lewandowsky was not involved in were writing it and the analysis of NINO3.4 sea surface temperature data in the models. But, and this is extremely curious, psychology professor Stephan Lewandowsky was solely responsible for the “analysis of models and observations”.

Lewandowsky summarizes his contribution at shapingtomorrowsworld. The following is based on that commentary.

Use of Cowtan and Way 2013

Lewandowsky asks “Has global warming ‘stopped’?” To answer in the negative he uses Cowtan and Way 2013. This was an attempt to correct the coverage biases in the HADCRUT4 data set by infilling, through modelling, where the temperature series lacked data. Real temperature data was principally lacking at the poles and in parts of Africa. However, the authors first removed some of the HADCRUT4 data, stating their reasons for doing so. In total Roman M found it was just 3.34% of the filled-in grid cells, but strongly biased towards the poles. That is exactly where the HADCRUT4 data was lacking. The computer model was not just infilling where data was absent, but replacing sparse data with modelled data.

Steve McIntyre plotted the differences between CW2013 and HADCRUT4.

Stephan Lewandowsky should have acknowledged that, through the greater use of modelling techniques, Cowtan and Way was a more circumstantial estimate of global average surface temperature trends than HADCRUT4. This aspect would be the case even if results were achieved by robust methods.

Modelling the smoothing methods

The Cowtan and Way modelled temperature series was then smoothed to create the following series in red.

The smoothing was achieved by employing two methods. First was to look at decadal changes rather than use temperature anomalies – the difference from a fixed point in time. Second was to use 15 year centred moving averages.

To help understand the impact these methods had on the observational data, I have constructed a simple model of the major HADCRUT4 temperature changes. The skepticalscience.com website very usefully has a temperature trends calculator.

The phases I use in degrees Kelvin per decade are

The Cowtan and Way trend is simply HADCRUT4 with a trend of 0.120 Kelvin per decade for the 2005-2013 period. This converts a cooling trend since 2005 into a warming one, as illustrated below.

The next step is to make the trends into decadal trends, by finding the difference between the current month’s figure and that of 120 months previous. This derives the following for the Cowtan and Way trend data.

Applying decadal trends spreads the impact of changes in trend over ten years following the change. Using HADCRUT4 would mean decadal trends are now zero.

The next step is to apply 15 year centred moving averages.

The centred moving average spreads the impact of a change in trend to before the change occurred. So warming starts in 1967 instead of 1973. This partly offsets the impact of decadal changes, but further smothers any step changes. The two elements also create a nice smoothing of the data. The difference of Cowtan and Way is to future-proof this conclusion.
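The combined effect of the two steps can be demonstrated on a toy series in Python. The trend phases below are invented for illustration; they are not the actual HADCRUT4 figures:

```python
# Toy illustration of the two smoothing steps: decadal changes
# (value minus the value 120 months earlier), then a 15-year
# (180-month) moving average. Trend phases are invented, in K/decade.
import numpy as np

temp = np.zeros(720)                      # 60 years of monthly data
level = 0.0
for start, end, kpd in [(0, 120, -0.05), (120, 600, 0.15), (600, 720, -0.10)]:
    temp[start:end] = level + np.arange(end - start) * (kpd / 120.0)
    level = temp[end - 1]

# Step 1: decadal change
decadal = temp[120:] - temp[:-120]

# Step 2: 180-month moving average (centred alignment would place
# each value at the window midpoint; the values are the same)
smoothed = np.convolve(decadal, np.ones(180) / 180, mode="valid")

# The final cooling phase survives in the decadal series but is
# diluted into an apparent warming in the smoothed one
print(round(decadal[-1], 3), round(smoothed[-1], 3))
```

With these invented phases the last decadal change is negative (cooling), yet the last smoothed value is still positive, which is the smothering effect described above.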

Comparison of modelled trend with the “Lewandowsky Smooth”

Despite dividing over sixty years of data into just 5 periods, I have managed to replicate the essential features of the decadal trend data.

A. Switch from slight cooling to warming trend in late 1960s, some years before the switch occurred.

B. Double peaks in warming trend, the first below 0.2 degrees per decade, the second slightly above.

C. The smoothed graph ending with warming not far off the peak, obliterating the recent cooling in the HADCRUT4 data.

Lewandowsky may not have used the decadal change as the extra smoothing technique, but whichever technique was used achieved very similar results to my simple Excel effort. So the answer to Lewandowsky’s question “Has global warming ‘stopped’?” is “Yes”. Lewandowsky knew this, so he manipulated the data to smooth the problem away. The significance is in a quote from “the DEBUNKING Handbook“.

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved. Reducing the influence of misinformation is a difficult and complex challenge.

Lewandowsky is providing misinformation, and has an expert understanding of its pernicious effects.

Kevin Marshall

Conspiracist Ideation Falsified?

Summary

A recent paper, based on an internet survey of American people, claimed that “conspiracist ideation, is associated with the rejection of all scientific propositions tested“. Analysis of the data reveals something quite different. Strong opinions with regard to conspiracy theories, whether for or against, suggest strong support for strongly-supported scientific hypotheses, and strong, but divided, opinions on climate science.

Preamble

In 2012 I spent a lot of time looking at a paper “Lewandowsky, Oberauer & Gignac – NASA faked the moon landing: Therefore (Climate) Science is a Hoax: An Anatomy of the Motivated Rejection of Science” – hereafter called LOG12. The follow-up in early 2013 was the notorious Recursive Fury paper that has now been withdrawn (here and here). When a new paper came out, by the same authors reaching pretty much the same conclusions, I had lost interest.

However, Barry Woods, a victim of the Recursive Fury paper, suggested in a comment:-

Lewandowsky always claimed that his US study replicated LOG12

Could you try the same pivot table analysis as LOG12?

I had a quick look at the file, tried a few pivot tables, had a short email exchange, and found something interesting.

The 2013 US study is “The Role of Conspiracist Ideation and Worldviews in Predicting Rejection of Science” – Stephan Lewandowsky, Gilles E. Gignac, Klaus Oberauer. PLOS one. Hereafter called LOG13.

The two papers were similar in that

  • The three authors were the same.
  • Many of the questions were the same, or very similar.
  • The conclusions were similar.

The two papers are different in that

  • LOG12 was an internet survey, conducted solely through “pro-science” blogs. LOG13 was another internet survey, but this time of the U.S. population.
  • LOG12 had just 4 response options. Running from 1 to 4 they are strongly/weakly reject and weakly/strongly accept. LOG13 had 5 response options; in the middle there was a neutral/don’t know/no opinion option.

At the “Shaping Tomorrow’s World” blog, Lewandowsky and Oberauer said of the LOG13 paper:-

Conclusions: Free-market worldviews are an important predictor of the rejection of scientific findings that have potential regulatory implications, such as climate science, but not necessarily of other scientific issues. Conspiracist ideation, by contrast, is associated with the rejection of all scientific propositions tested.

It is the last part that I will deal with in this posting. Free market views I may come back to at a later time.

 

Comparison of conspiracist orientations and science denial in LOG12 (pro-science blogs) and LOG13 (Americans)

LOG12 had thirteen questions on conspiracy theories and LOG13 nine. In the latter, three were on science issues and one on a “New World Order”. That left five that are comparable between the papers, but independent of the scientific/political subject matter1.

In LOG12 there were two scientific questions. In short they are “HIV causes AIDS” and “smoking causes lung cancer”. In LOG13, “lead in drinking water harms health” was added.

This can be compared by banding the belief in conspiracy theories by the rounded average response.


The first column in the table is the band, taken by rounding the average response to the nearest whole number for the responses to the 5 conspiracy theories. The second column is the unrounded average response within the band. The third column is the number of responses. The fourth column is the average response to the two science questions. The fifth column is acceptance ratio.
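The construction of such a table can be sketched with pandas. The data below is randomly generated for illustration; it is not the LOG12 survey file, and the column names are my own:

```python
# Sketch of the banding table: band = rounded average of the five
# conspiracy responses; then counts and averages per band. The data
# is randomly generated for illustration (LOG12 used a 1-4 scale).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
conspiracy = rng.integers(1, 5, size=(n, 5))   # 5 theories, scores 1-4
science = rng.integers(1, 5, size=(n, 2))      # 2 science questions

df = pd.DataFrame({
    "consp_avg": conspiracy.mean(axis=1),
    "sci_avg": science.mean(axis=1),
})
df["band"] = df["consp_avg"].round().astype(int)     # column 1: the band

summary = df.groupby("band").agg(
    consp_avg=("consp_avg", "mean"),   # column 2: unrounded average
    count=("consp_avg", "size"),       # column 3: number of responses
    sci_avg=("sci_avg", "mean"),       # column 4: average science response
)
# Column 5: acceptance ratio, mapping the 1-4 scale onto [-1, 1]
summary["accept_ratio"] = (2 * summary["sci_avg"] - 5) / 3
print(summary)
```

Each row of `summary` corresponds to one band row of the tables discussed in the post.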

For the LOG12 survey, conducted via “pro-climate science” blogs, the connection is clear. The belief in the five conspiracy theories is inversely related to belief in two well-accepted scientific hypotheses. However, there is strong acceptance of the two science questions by all but two respondents. The two respondents who were in the highest conspiracy category I referred to as “rogue responses” in my earlier analysis, and which Steve McIntyre called “super-scammers”. Take out the two scam responses and there is a picture of degrees of science acceptance and no science denial.


For LOG13, an internet survey of the American public, there is a somewhat different picture. The belief in three well-accepted scientific hypotheses appears to be related to the strength of opinion on conspiracy theories, independent of the direction of that opinion. The respondents with the least belief in the scientific hypotheses are those who are in the middle on conspiracy theories. That is those who express no opinion, or give similar weight to both sides. Yet they are still, on average, affirming of the scientific consensus. There is no “rejection of the science” at all by any band of belief in conspiracy theories. Further, the greatest “believers in science” are the 12 who have the greatest “conspiracist” ideation. Like the authors, I have no truck with conspiracy theories. But the evidence does not support the statement “conspiracist ideation, … is associated with the rejection of all scientific propositions tested“. Falsely maligning a group of people will only serve to confirm them in their beliefs.

 

Comparison of conspiracist orientations and climate science denial in LOG12 (pro-science blogs) and LOG13 (Americans)

A similar comparison can be made between the beliefs in conspiracy theories and the beliefs in climate science.


In LOG12 there appears to be a relationship. 97% of respondents strongly accept climate science and reject conspiracy theories. The 30 who have a modest acceptance of conspiracy theories are a little more lukewarm on climate science. The really odd results are the two scam responses.


In LOG13 there is a distinct relationship here – the stronger the belief in conspiracy theories, the lower the belief in climate science. But hold on. A score of 3 is neutral, and 5 is total acceptance. The difference is between very lukewarm acceptance and virtually no opinion either way. To claim rejection is misleading. However, the result appears to contradict the previous result for the three scientific hypotheses. To understand this result needs closer examination. There were 5 statements and 1001 respondents, so 5005 responses in total. Counting all the responses gives the following result4


To clarify, the “Grand Total” row shows that there were 366 scores of 1 in the 5 CO2 science statements. Of these 15 were by the 12 respondents who averaged a score of 5 in the conspiracy theories. The proportions I believe can be better seen by the percentage of responses in each row.


So 7% of all the 5005 responses were a score of 1. Of the 60 responses by the strongest believers in conspiracy theories, 25% were score of 1.

We get a similar result for belief in climate science as for belief in the three well-accepted scientific hypotheses. Those with the most extreme opinions on conspiracy theories are those with the most extreme opinions on climate change. But there is a crucial difference, in that opinions on climate change are split between acceptance and rejection. The 12 respondents who were the strongest believers in conspiracy theories also had the highest proportion of 1s and 5s on the climate questions. The second most extreme group was the 215 respondents in the strong rejection band. The highest proportion of 3s, along with the lowest proportions of 1s and 5s, was in the middle band on conspiracy theories. Holding strong opinions on conspiracy theories seems to be a predictor of strong opinions on climate science, but not of whether that is strong belief or strong rejection.

Corroboration of the result

The results of the internet survey confirm something about people in the United States that I and many others have suspected – they are a substantial minority who love their conspiracy theories. For me, it seemed quite a reasonable hypothesis that these conspiracy lovers should be both suspicious of science and have a propensity to reject climate science. Analysis of the survey results has over-turned those views. Instead I propose something more mundane – that people with strong opinions in one area are very likely to have strong opinions in others.

In relation to the United States, there is a paradox if you follow the “conspiracist ideation” argument. Along with being a hotbed of conspiracy theorists, the US is also home to 11 or 15 of the world’s top universities, and is where much of the technological revolution of the past 50 years originated. If science were about conformity and belief in established expert opinion, this could not have happened.

Kevin Marshall

 

Notes

  1. Five non-science conspiracy theories common to LOG12 and LOG13
  • CYMLK The assassination of Martin Luther King Jr. was the result of an organized conspiracy by U.S. government agencies such as the CIA and FBI.
  • CYMoon The Apollo moon landings never happened and were staged in a Hollywood film studio.
  • CYJFK The assassination of John F. Kennedy was not committed by the lone gunman Lee Harvey Oswald but was rather a detailed organized conspiracy to kill the President.
  • CY911 The U.S. government allowed the 9–11 attacks to take place so that it would have an excuse to achieve foreign (e.g., wars in Afghanistan and Iraq) and domestic (e.g., attacks on civil liberties) goals that had been determined prior to the attacks.
  • CYDiana Princess Diana’s death was not an accident but rather an organised assassination by members of the British royal family who disliked her.

     

  2. Acceptance Ratio

    Comparing the average scores across the two surveys can be confusing where there are a different number of options. The acceptance ratio makes average scores comparable regardless of the number of options. Strong acceptance scores 1, strong rejection -1 and the mid-point 0.
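As I read that description, the acceptance ratio is a linear rescaling of the average score. A small sketch of my interpretation (not the authors’ code):

```python
# My interpretation of the acceptance ratio: rescale an average score
# so that strong acceptance -> 1, strong rejection -> -1, midpoint -> 0,
# whether the scale has 4 options (LOG12) or 5 (LOG13).
def acceptance_ratio(avg_score, n_options):
    midpoint = (1 + n_options) / 2        # 2.5 on a 1-4 scale, 3 on 1-5
    half_range = (n_options - 1) / 2      # 1.5 on 1-4, 2 on 1-5
    return (avg_score - midpoint) / half_range

print(acceptance_ratio(4, 4))   # strong acceptance on the 1-4 scale -> 1.0
print(acceptance_ratio(1, 5))   # strong rejection on the 1-5 scale -> -1.0
print(acceptance_ratio(3, 5))   # the 1-5 midpoint -> 0.0
```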

     

  3. Climate Science

LOG 12 had four questions on Climate science

CO2TempUp I believe that burning fossil fuels increases atmospheric temperature to some measurable degree

CO2AtmosUp I believe that the burning of fossil fuels on the scale observed over the last 50 years has increased atmospheric temperature to an appreciable degree

CO2WillNegChange I believe that the burning of fossil fuels on the scale observed over the last 50 years will cause serious negative changes to the planet’s climate, unless there is a substantial switch to non-CO2 emitting energy sources

CO2HasNegChange I believe that the burning of fossil fuels on the scale observed over the last 50 years has caused serious negative changes to the planet’s climate

LOG 13 had five questions on Climate science

CNatFluct I believe that the climate is always changing and what we are currently observing is just natural fluctuation. (R)

CdueGHG I believe that most of the warming over the last 50 years is due to the increase in greenhouse gas concentrations.

CseriousDamage I believe that the burning of fossil fuels over the last 50 years has caused serious damage to the planet’s climate.

CO2causesCC Human CO2 emissions cause climate change.

HumansInsign Humans are too insignificant to have an appreciable impact on global temperature. (R)

  4. Response Count

To replicate my response table, create a pivot table of the count of responses for each of the climate change statements. Make the conspiracy bands the row labels and a climate statement the column label. Add the results together.
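A pandas equivalent of that recipe, on randomly generated stand-in data (the actual LOG13 responses are not reproduced here):

```python
# A pandas version of the pivot-table recipe: one crosstab of
# conspiracy band vs score per climate statement, summed together.
# Data is randomly generated; the real LOG13 file is not included.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 1001                                   # respondents in LOG13
bands = pd.Series(rng.integers(1, 6, size=n), name="band")
climate = rng.integers(1, 6, size=(n, 5))  # 5 statements, scores 1-5

total = sum(
    pd.crosstab(bands, pd.Series(climate[:, j], name="score"))
    for j in range(5)
)
total.loc["Grand Total"] = total.sum()
print(total)   # counts of each score (columns) by conspiracy band (rows)
```

With 1001 respondents and 5 statements, the Grand Total row sums to 5005 responses, matching the counting described in the post.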