Thursday, January 13, 2011
Anthony Watts gives an update on the CLOUD experiment’s progress on his blog Watts Up With That.
From Nature blog: Sunny days for CLOUD experiment
An experiment designed to investigate the link between solar activity and the climate has its first results in the bag. At the American Geophysical Union meeting in San Francisco today, Joachim Curtius presented data from the first runs of the CLOUD (‘cosmics leaving outdoor droplets’) experiment at CERN – the European particle physics lab outside of Geneva.
The experiment has a long and bumpy history. The idea is to test the theory that cosmic rays spur the formation of particles in the air that nucleate clouds, in turn making skies cloudier and the planet cooler. Researchers have noted a dearth of sunspots (which is linked to more cosmic rays) during the ‘little ice age’ of the seventeenth and eighteenth centuries, and a peak in sunspots (linked to a drop in cosmic rays) during the late 1980s, when global cloudiness dropped by about 3% (see Nature‘s feature on the project). No one knows how big this effect might be, and the idea that it might account for a big chunk of the warming over the last century is highly controversial.
CLOUD uses a particle beam from CERN as a stand-in for cosmic rays, and fires them through an ultra-clean steel chamber filled with select atmospheric gases, to see if and how particles that could nucleate clouds are formed. Project head Jasper Kirkby proposed the experiment back in 1998. But it had a hard time getting off the ground – perhaps in part because Kirkby received bad press for emphasizing the importance of cosmic rays to climate change (see this story from the National Post). CLOUD finally got going in 2006, and they started work with the full kit in November 2009 (here’s a CERN video update about that).
The results haven’t yet been published, so Curtius declined to discuss the details. But the important thing is that the project is working – they have seen sulphuric acid and water combine to make particles when blasted by the CERN beam, for example, in a way that matches predictions of the most recent models. The data should help the team to quantify how much of an impact the Sun is having on climate within 2-3 years, Curtius says – though there are a lot more pieces of the puzzle to fill in.
Dr. Roy Spencer has mentioned that it doesn’t take much in the way of cloud cover changes to add up to the “global warming signal” that has been observed. He writes in The Great Global Warming Blunder:
The most obvious way for warming to be caused naturally is for small, natural fluctuations in the circulation patterns of the atmosphere and ocean to result in a 1% or 2% decrease in global cloud cover. Clouds are the Earth’s sunshade, and if cloud cover changes for any reason, you have global warming — or global cooling.
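Spencer’s claim can be checked with back-of-envelope arithmetic. The sketch below assumes a global-mean shortwave (reflective) cloud effect of roughly −50 W/m², a commonly cited ballpark figure that is not from Spencer’s book:

```python
# Back-of-envelope check of Spencer's claim. Assumed (not from the post):
# the global-mean shortwave (reflective) effect of clouds is about -50 W/m^2.
CLOUD_SW_EFFECT = -50.0  # W/m^2, assumed global-mean value

def forcing_from_cloud_change(fractional_decrease):
    """Warming forcing (positive, W/m^2) from a fractional decrease in cloud cover."""
    return -CLOUD_SW_EFFECT * fractional_decrease

for frac in (0.01, 0.02):
    print(f"{frac:.0%} less cloud -> about +{forcing_from_cloud_change(frac):.1f} W/m^2")
```

On these assumed numbers, a 1–2% loss of cloud cover yields +0.5 to +1 W/m² of forcing, the same order of magnitude as the forcing usually attributed to rising CO2.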
This graph certainly lends credence to the theory:
Here’s a longer record of cosmic rays:
See also these WUWT stories: Message in the CLOUD for Warmists: The end is near?
Do solar scientists STILL think that recent warming is too large to explain by solar activity?
Study of the sun-climate link was energized in 1991 by Friis-Christensen and Lassen, who showed a strong correlation between solar-cycle length and global temperature:
This evidence that much of 20th century warming might be explained by solar activity was a thorn in the side of the newly powerful CO2 alarmists, who blamed recent warming on human burning of fossil fuels. That may be why Lassen and Thejll were quick to offer an update as soon as the 1997-98 El Nino made it look as if temperatures were suddenly skyrocketing:
The rapid temperature rise recently seems to call for a quantitative revisit of the solar activity-air temperature association …
We conclude that since around 1990 the type of Solar forcing that is described by the solar cycle length model no longer dominates the long-term variation of the Northern hemisphere land air temperature.
In other words, there was now too much warming to account for by solar cycle length, so some other factor, such as CO2, had to be driving the most recent warming. Of course everyone knew that the 1998 warming had actually been caused by ocean oscillations. Even lay people knew it. (El Nino storm tracks were all the news for six months here in California.)
When Lassen was writing his update in mid-’99, temperatures had already dropped back to 1990 levels. His eight-year update was outdated before it was published. Twelve years later, the 2010 El Nino year shows the same average temperature as the ’98 El Nino year, and if post-El Nino temperatures continue to fall off the way they did in ’99, we’ll be back to 1990 temperatures by mid-2011. Isn’t it about time Friis-Christensen, Lassen and Thejll issued another update? Do they still think there has been too much recent warming to be accounted for by solar activity?
The most important update may be the discovery that, where Lassen and his colleagues found a correlation between the length of a solar-cycle and temperatures over that cycle, others have been finding a much stronger correlation to temperatures over the next cycle (reported at WUWT this summer by David Archibald).
This further correlation has the advantage of allowing us to make projections. As Archibald deciphers Solheim’s Norwegian:
since the period length of the previous cycle (no. 23) is at least 3 years longer than for cycle no. 22, the temperature is expected to decrease by 0.6–1.8 degrees over the following 10–12 years.
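The quoted range can be unpacked: a cycle at least 3 years longer with an expected drop of 0.6–1.8 degrees implies a sensitivity of roughly 0.2–0.6 °C per extra year of cycle length. That coefficient is inferred here for illustration, not stated by Solheim:

```python
# Unpacking the quoted projection. The quoted numbers (at least 3 years
# longer, 0.6-1.8 degrees of cooling) imply a sensitivity of roughly
# 0.2-0.6 deg C per extra year of cycle length; that coefficient range is
# an inference for illustration, not stated in the source.
def projected_cooling(extra_years, sens_low=0.2, sens_high=0.6):
    """Return (low, high) expected cooling in deg C for a longer cycle."""
    return extra_years * sens_low, extra_years * sens_high

low, high = projected_cooling(3.0)
print(f"cycle 3 years longer -> {low:.1f} to {high:.1f} deg C of cooling")
```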
Check out this alarming graphic from Stephen Strum of Frontier Weather Inc:
The snowed in Danes might like to see these projections, before they bet the rest of their climate eggs on a dangerous war against CO2.
From sins of omission to sins of commission
In 1985, the Sun did a U-turn in every respect. It no longer went in the right direction to contribute to global warming. We think it’s almost completely conclusive proof that the Sun does not account for the recent increases in global warming.
Actually, solar cycle 22, which began in 1986, was one of the most intense on record (part of the 20th-century “grand maximum,” the sun’s most active period in the last 11,000 years), and by almost every measure it was more intense than solar cycle 21. It had about the same sunspot numbers as cycle 21 (Hathaway 2006):
Cycle 22 ran more solar flux than cycle 21 (via Nir Shaviv):
Cycle 22 was shorter than cycle 21 (from Joseph D’Aleo):
Perhaps most important is solar activity as measured (inversely) by the cosmic ray flux (which many think is the mechanism by which solar activity drives climate). Here cycle 22 is THE most intense in the 60-year record, stronger even than cycle 19, the sunspot number king. From the Astronomical Society of Australia:
Some “U-turn in every respect.”
If Lockwood and Frohlich simply wanted to argue that the peak of the modern maximum of solar activity was between solar cycles 21 and 22 it would be unobjectionable. What difference does it make exactly when the peak was reached? But this is exactly where their real misdirection comes in. They claim that the peak of solar activity marks the point where any solar-climate effect should move from a warming to a cooling direction. Here is the abstract from their 2007 Royal Society article:
Abstract There is considerable evidence for solar influence on the Earth’s pre-industrial climate and the Sun may well have been a factor in post-industrial climate change in the first half of the last century. Here we show that over the past 20 years, all the trends in the Sun that could have had an influence on the Earth’s climate have been in the opposite direction to that required to explain the observed rise in global mean temperatures.
In order to assert the need for some other explanation for recent warming (CO2), they are claiming that near-peak levels of solar activity cannot have a warming effect once they are past the peak of the trend—that it is not the level of solar activity that causes warming or cooling, but the change in the level—which is absurd.
Ken Gregory has the most precise answer to this foolishness. His “climate smoothing” graphic shows how the temperature of a heat sink actually responds to a fall-off in forcing:
“Note that the temperature continues to rise for several years after the Sun’s forcing starts to decrease.”
Gregory’s numbers here are arbitrary. It could be many years before a fall-off in forcing causes temperatures to start falling. In the case of solar cycle 22 (where, if solar forcing was actually past its peak, it had only fallen off a tiny bit), the only way temperature would not keep rising over the whole solar cycle is if global temperature had already equilibrated to peak solar forcing, which Lockwood and Frohlich make no argument for.
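The lag argument can be made concrete with a minimal one-box energy-balance sketch. All the numbers below are illustrative, like Gregory’s: temperature relaxes toward the equilibrium implied by the current forcing, so it keeps rising for years after the forcing itself has peaked and begun to decline.

```python
# One-box sketch of the lag argument above; all numbers are illustrative.
# Temperature relaxes toward the equilibrium implied by the current forcing
# with time constant tau, so it keeps rising after the forcing peaks.
tau = 10.0   # heat-sink response time, years (assumed)
dt = 0.1     # integration step, years

def forcing(t):
    # Ramps up for 30 years, then declines slowly (a "past the peak" case).
    return min(t, 30.0) - 0.05 * max(t - 30.0, 0.0)

T, t, record = 0.0, 0.0, {}
while t < 60.0:
    T += dt * (forcing(t) - T) / tau   # dT/dt = (F - T) / tau
    t += dt
    record[round(t, 1)] = T

# Temperature is still rising ten years after the forcing peak at t = 30:
print(record[40.0] > record[30.0])
```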
The obvious interpretation of the data is that we never did reach equilibrium temperatures, allowing grand maximum levels of solar activity to continue to warm the planet until the sun suddenly went quiet. Now there’s an update for Lockwood and Frohlich. How about telling the public when solar activity really did do a U-turn (October 2005)?
Usoskin, Benestad, and a host of other solar scientists also mistakenly assume that temperature is driven by trend instead of level
Maybe it is because so much of the evidence for a sun-climate link comes from correlation studies, which look for contemporaneous changes in solar activity and temperature. Surely the scientists who are doing these studies all understand that there is no possible mechanism by which the rate of change in solar activity can itself drive temperature. If temperature changes when solar activity changes, it is because the new LEVEL of solar activity has a warming or cooling effect.
The long term trends in solar data and in northern hemisphere temperatures have a correlation coefficient of about 0.7–0.8 at a 94–98% confidence level. …
… Note that the most recent warming, since around 1975, has not been considered in the above correlations. During these last 30 years the total solar irradiance, solar UV irradiance and cosmic ray flux has not shown any significant secular trend, so that at least this most recent warming episode must have another source.
Set aside the other problems with Usoskin’s study. (The temperature record he compared his solar data to is Michael Mann’s “hockey stick.”) How can he claim overwhelming evidence for a sun-climate link, while simultaneously insisting that steady peak levels of solar activity can’t create warming? If steady peak levels coincide with warming, it supposedly means the sun-climate link is now broken, so warming must be due to some other cause, like CO2.
It is hard to believe that scientists could make such a basic mistake, and Usoskin et al. certainly have a powerful incentive to play dumb: to pretend that their correlation studies are finding physical mechanisms by which it is changes in the level of solar activity, rather than the levels themselves, that drive temperature. Just elide this important little nuance and presto, modern warming gets misattributed to CO2, allowing these researchers to stay on the good side of the CO2 alarmists who control their funding. Still, the old adage is often right: never attribute to bad motives what can just as well be explained by simple error.
And of course there can be both.
RealClimate exchange on trend vs. level confusion
Finally we arrive at the beginning, for me anyway. I first came across trend-level confusion five years ago at RealClimate. Rasmus Benestad was claiming that, because post-1960s levels of Galactic Cosmic Radiation have not been trending downwards, GCR cannot be the cause of post-’60s warming.
But solar activity has been well above historical norms since the 40’s. It doesn’t matter what the trend is. The solar-wind is up. According to the GCR-cloud theory, that blows away the GCR, which blows away the clouds, creating warming. The solar wind doesn’t have to KEEP going up. It is the LEVEL that matters, not the trend. Holy cow. Benestad was looking at the wrong derivative (one instead of zero).
A few months later I took an opportunity to state my rebuttal as politely as possible, which elicited a response from Gavin Schmidt. Here is our 2005 exchange:
Me: Nice post, but the conclusion: “… solar activity has not increased since the 1950s and is therefore unlikely to be able to explain the recent warming,” would seem to be a non-sequitur.
What matters is not the trend in solar activity but the level. It does not have to KEEP going up to be a possible cause of warming. It just has to be high, and it has been since the forties.
Presumably you are looking at the modest drop in temperature in the fifties and sixties as inconsistent with a simple solar warming explanation, but it doesn’t have to be simple. Earth has heat sinks that could lead to measured effects being delayed, and other forcings may also be involved. The best evidence for causality would seem to be the long term correlations between solar activity and temperature change. Despite the differences between the different proxies for solar activity, isn’t the overall picture one of long term correlation to temperature?
[Response: You are correct in that you would expect a lag, however, the response to an increase to a steady level of forcing is a lagged increase in temperature and then a asymptotic relaxation to the eventual equilibrium. This is not what is seen. In fact, the rate of temperature increase is rising, and that is only compatible with a continuing increase in the forcing, i.e. from greenhouse gases. - gavin]
Gavin admits here that it’s the level of solar activity, not the trend in solar activity, that drives temperature. He’s just assuming that grand maximum levels of solar forcing should have brought the planet close to equilibrium temperature before post-’80s warming hit, but that assumption is completely unwarranted. If solar activity is driving climate (the hypothetical that Schmidt is analyzing), we know that it can push temperatures a lot higher than they are today. Surely Gavin knows about the Viking settlement of Greenland.
The rapid warming in the late ’90s could easily have been caused by the monster solar cycle 22, and there is no reason to think that another big cycle wouldn’t have brought more of the same. Two or three more cycle 22s and we might have been hauling out the longships, which would be great. No one has ever suggested that natural warming is anything but benign. Natural cooling bad, natural warming good. But alas, a longer grand maximum was not to be.
Gavin’s admission that it is level not trend that drives temperature change is important because ALL of the alarmist solar scientists are making the trend-level mistake. If they would admit that the correct framework is to look at the level of forcing and the lapse to equilibrium then they would be forced to look at the actual mechanisms of forcing and equilibration, instead of ignoring key forcings on the pretense that steady peak levels of forcing cannot cause warming.
That’s the big update that all of our solar scientists need to make. They need to stop tolerating this crazy charade that allows the CO2 alarmists to ignore the impact of decades of grand maximum solar activity and misattribute the resulting warming to fossil fuel burning. It is a scientific fraud of the most disastrous proportions, giving the eco-lunatics the excuse they need to unplug the modern world.
Warming Trend: PDO And Solar Correlate Better Than CO2
Joe wrote then:
Clearly the US annual temperatures over the last century have correlated far better with cycles in the sun and oceans than carbon dioxide. The correlation with carbon dioxide seems to have vanished or even reversed in the last decade.
There’s a new paper by Paulo Cesar Soares in the International Journal of Geosciences supporting Joe’s idea, and it is full and open access. See link below.
Warming Power of CO2 and H2O: Correlations with Temperature Changes
Author: Paulo Cesar Soares
The dramatic and threatening environmental changes announced for the next decades are the result of models whose main drive factor of climatic changes is the increasing carbon dioxide in the atmosphere. Although taken as a premise, the hypothesis does not have verifiable consistence. The comparison of temperature changes and CO2 changes in the atmosphere is made for a large diversity of conditions, with the same data used to model climate changes. Correlation of historical series of data is the main approach. CO2 changes are closely related to temperature.
Warmer seasons or triennial phases are followed by an atmosphere that is rich in CO2, reflecting the gas solving or exsolving from water, and not photosynthesis activity. Interannual correlations between the variables are good. A weak dominance of temperature changes precedence, relative to CO2 changes, indicate that the main effect is the CO2 increase in the atmosphere due to temperature rising. Decreasing temperature is not followed by CO2 decrease, which indicates a different route for the CO2 capture by the oceans, not by gas re-absorption. Monthly changes have no correspondence as would be expected if the warming was an important absorption-radiation effect of the CO2 increase.
The anthropogenic wasting of fossil fuel CO2 to the atmosphere shows no relation with the temperature changes even in an annual basis. The absence of immediate relation between CO2 and temperature is evidence that rising its mix ratio in the atmosphere will not imply more absorption and time residence of energy over the Earth surface. This is explained because band absorption is nearly all done with historic CO2 values. Unlike CO2, water vapor in the atmosphere is rising in tune with temperature changes, even in a monthly scale. The rising energy absorption of vapor is reducing the outcoming long wave radiation window and amplifying warming regionally and in a different way around the globe.
From the conclusion:
The main conclusion one arrives at the analysis is that CO2 has not a causal relation with global warming and it is not powerful enough to cause the historical changes in temperature that were observed. The main argument is the absence of immediate correlation between CO2 changes preceding temperature either for global or local changes. The greenhouse effect of the CO2 is very small compared to the water vapor because the absorbing effect is already realized with its historical values. So, the reduction of the outcoming long wave radiation window is not a consequence of current enrichment or even of a possible double ratio of CO2. The absence of correlation between temperature changes and the immense and variable volume of CO2 waste by fuel burning is explained by the weak power of additional carbon dioxide in the atmosphere to reduce the outcoming window of long wave radiation. This effect is well performed by atmosphere humidity due to known increase insolation and vapor content in atmosphere.
The role of vapor is reinforced when it is observed that the regions with a great difference between potential and actual specific humidity are the ones with high temperature increase, like continental areas in mid to high latitudes. The main implication is that temperature increase predictions based on CO2 driving models are not reliable.
If the warmer power of solar irradiation is the independent driver for decadal and multidecadal cycles, the expected changes in insolation and no increase in greenhouse power may imply the recurrence of multidecadal cool phase, recalling the years of the third quarter of past century, before a new warming wave. The last decade stable temperature seems to be the turning point.
Full Text (PDF, 1794KB) PP.102-112 DOI: 10.4236/ijg.2010.13014
One of the biggest issues, if not the biggest, in climate science skepticism is the criticism of over-reliance on computer model projections to suggest future outcomes. In this paper, climate models were hindcast-tested against actual surface observations and found to be seriously lacking. Just have a look at Figure 12 (mean temperature vs. models for the USA) from the paper, shown below:
The graph above shows temperature in the blue lines and model runs in other colors. Not only are there no curve-shape matches, but the temperature offsets are significant as well. In the study, they also looked at precipitation, which fared even worse in correlation. The bottom line: if the models do a poor job of hindcasting, why would they do any better at forecasting? This from the conclusion sums it up pretty well:
…we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms.
A comparison of local and aggregated climate model outputs with observed data
Anagnostopoulos, G. G. , Koutsoyiannis, D. , Christofides, A. , Efstratiadis, A. and Mamassis, N. ‘A comparison of local and aggregated climate model outputs with observed data’, Hydrological Sciences Journal, 55:7, 1094 – 1110
We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We also spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections are also poor.
Citation Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094-1110.
According to the Intergovernmental Panel on Climate Change (IPCC), global circulation models (GCM) are able to “reproduce features of the past climates and climate changes” (Randall et al., 2007, p. 601). Here we test whether this is indeed the case. We examine how well several model outputs fit measured temperature and rainfall in many stations around the globe. We also integrate measurements and model outputs over a large part of a continent, the contiguous USA (the USA excluding islands and Alaska), and examine the extent to which models can reproduce the past climate there. We will be referring to this as “comparison at a large scale”.
This paper is a continuation and expansion of Koutsoyiannis et al. (2008). The differences are that (a) Koutsoyiannis et al. (2008) had tested only eight points, whereas here we test 55 points for each variable; (b) we examine more variables in addition to mean temperature and precipitation; and (c) we compare at a large scale in addition to point scale. The comparison methodology is presented in the next section.
While the study of Koutsoyiannis et al. (2008) was not challenged by any formal discussion papers, or any other peer-reviewed papers, criticism appeared in science blogs (e.g. Schmidt, 2008). Similar criticism has been received by two reviewers of the first draft of this paper, hereinafter referred to as critics. In both cases, it was only our methodology that was challenged and not our results. Therefore, after presenting the methodology below, we include a section “Justification of the methodology”, in which we discuss all the critical comments, and explain why we disagree and why we think that our methodology is appropriate. Following that, we present the results and offer some concluding remarks.
Here’s the models they tested:
Comparison at a large scale
We collected long time series of temperature and precipitation for 70 stations in the USA (five were also used in the comparison at the point basis). Again the data were downloaded from the web site of the Royal Netherlands Meteorological Institute (http://climexp.knmi.nl). The stations were selected so that they are geographically distributed throughout the contiguous USA. We selected this region because of the good coverage of data series satisfying the criteria discussed above. The stations selected are shown in Fig. 2 and are listed by Anagnostopoulos (2009, pp. 12-13).
Fig. 2. Stations selected for areal integration and their contribution areas (Thiessen polygons).
In order to produce an areal time series we used the method of Thiessen polygons (also known as Voronoi cells), which assigns weights to each point measurement that are proportional to the area of influence; the weights are the “Thiessen coefficients”. The Thiessen polygons for the selected stations of the USA are shown in Fig. 2.
The annual average temperature of the contiguous USA was initially computed as the weighted average of the mean annual temperature at each station, using the station’s Thiessen coefficient as weight. The weighted average elevation of the stations (computed by multiplying the elevation of each station with the Thiessen coefficient) is Hm = 668.7 m and the average elevation of the contiguous USA (computed as the weighted average of the elevation of each state, using the area of each state as weight) is H = 746.8 m. By plotting the average temperature of each station against elevation and fitting a straight line, we determined a temperature gradient θ = -0.0038°C/m, which implies a correction of the annual average areal temperature θ(H - Hm) = -0.3°C.
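The averaging and correction steps described above are simple to reproduce. In this sketch only the lapse rate (−0.0038 °C/m) and the two mean elevations (668.7 m and 746.8 m) come from the paper; the station values and Thiessen coefficients are made up:

```python
# Reproducing the areal-averaging arithmetic described above. Only the
# lapse rate and mean elevations come from the paper; the three stations
# and their Thiessen coefficients are hypothetical.
theta = -0.0038    # temperature gradient, deg C per metre (from the paper)
H_m = 668.7        # Thiessen-weighted mean station elevation, m
H = 746.8          # area-weighted mean elevation of the contiguous USA, m

# Hypothetical stations: (mean annual temperature in deg C, Thiessen coefficient)
stations = [(12.1, 0.25), (9.4, 0.40), (15.0, 0.35)]

areal_T = sum(T * w for T, w in stations)   # Thiessen-weighted areal mean
correction = theta * (H - H_m)              # elevation correction
print(f"areal mean {areal_T:.2f} C, corrected {areal_T + correction:.2f} C")
```

The elevation term works out to −0.0038 × (746.8 − 668.7) ≈ −0.3 °C, matching the correction quoted in the paper.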
The annual average precipitation of the contiguous USA was calculated simply as the weighted sum of the total annual precipitation at each station, using the station’s Thiessen coefficient as weight, without any other correction, since no significant correlation could be determined between elevation and precipitation for the specific time series examined.
We verified the resulting areal time series using data from other organizations. Two organizations provide areal data for the USA: the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA). Both organizations have modified the original data by making several adjustments and using homogenization methods. The time series of the two organizations have noticeable differences, probably because they used different processing methods. The reason for calculating our own areal time series is that we wanted to avoid any comparisons with modified data. As shown in Fig. 3, the temperature time series we calculated with the method described above are almost identical to the time series of NOAA, whereas in precipitation there is an almost constant difference of 40 mm per year.
Fig. 3. Comparison between areal (over the USA) time series of NOAA (downloaded from http://www.ncdc.noaa.gov/oa/climate/research/cag3/cag3.html) and areal time series derived through the Thiessen method; for (a) mean annual temperature (adjusted for elevation), and (b) annual precipitation.
Determining the areal time series from the climate model outputs is straightforward: we simply computed a weighted average of the time series of the grid points situated within the geographical boundaries of the contiguous USA. The influence area of each grid point is a rectangle whose “vertical” (perpendicular to the equator) side is (ϕ2 - ϕ1)/2 and its “horizontal” side is proportional to cosϕ, where ϕ is the latitude of each grid point, and ϕ2 and ϕ1 are the latitudes of the adjacent “horizontal” grid lines. The weights used were thus cosϕ(ϕ2 - ϕ1); where grid latitudes are evenly spaced, the weights are simply cosϕ.
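For evenly spaced latitude lines, the grid-point weighting described above reduces to a plain cos ϕ weighting, which can be sketched as follows (the grid values are invented for illustration):

```python
import math

# Sketch of the grid-point weighting described above: with evenly spaced
# latitude lines, each point's weight reduces to cos(phi). The grid values
# below are invented for illustration.
def areal_average(points):
    """points: list of (value, latitude_deg) on an evenly spaced grid."""
    weights = [math.cos(math.radians(lat)) for _, lat in points]
    weighted = sum(v * w for (v, _), w in zip(points, weights))
    return weighted / sum(weights)

# A toy column of grid points from 30N to 48N (hypothetical values):
grid = [(10.0, 30.0), (8.0, 36.0), (6.0, 42.0), (4.0, 48.0)]
mean_val = areal_average(grid)
# The result sits above the plain mean of 7.0 because lower latitudes
# carry more area (larger cos phi):
print(f"cos-weighted mean: {mean_val:.3f}")
```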
It is claimed that GCMs provide credible quantitative estimates of future climate change, particularly at continental scales and above. Examining the local performance of the models at 55 points, we found that local projections do not correlate well with observed measurements. Furthermore, we found that the correlation at a large spatial scale, i.e. the contiguous USA, is worse than at the local scale.
However, we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms. Several publications, a typical example being Rial et al. (2004), point out the difficulties that the climate system complexity introduces when we attempt to make predictions. “Complexity” in this context usually refers to the fact that there are many parts comprising the system and many interactions among these parts. This observation is correct, but we take it a step further. We think that it is not merely a matter of high dimensionality, and that it can be misleading to assume that the uncertainty can be reduced if we analyse its “sources” as nonlinearities, feedbacks, thresholds, etc., and attempt to establish causality relationships. Koutsoyiannis (2010) created a toy model with simple, fully-known, deterministic dynamics, and with only two degrees of freedom (i.e. internal state variables or dimensions); but it exhibits extremely uncertain behaviour at all scales, including trends, fluctuations, and other features similar to those displayed by the climate. It does so with a constant external forcing, which means that there is no causality relationship between its state and the forcing. The fact that climate has many orders of magnitude more degrees of freedom certainly perplexes the situation further, but in the end it may be irrelevant; for, in the end, we do not have a predictable system hidden behind many layers of uncertainty which could be removed to some extent, but, rather, we have a system that is uncertain at its heart.
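The point about low-dimensional deterministic uncertainty is easy to illustrate. The sketch below uses the classic Hénon map rather than the actual toy model of Koutsoyiannis (2010): two state variables, fully known dynamics, constant “forcing”, yet trajectories that start one part in a billion apart become macroscopically different within a few dozen steps:

```python
# Not the actual toy model of Koutsoyiannis (2010), but the same point in
# miniature: the Henon map is deterministic, has two state variables and
# constant "forcing", yet two starts differing by one part in a billion
# become macroscopically different within a few dozen steps.
def henon(x, y, a=1.4, b=0.3):
    return 1.0 - a * x * x + y, b * x

xa, ya = 0.1, 0.1
xb, yb = 0.1 + 1e-9, 0.1
sep = 0.0
for step in range(80):
    xa, ya = henon(xa, ya)
    xb, yb = henon(xb, yb)
    if step >= 40:                     # once the tiny difference has grown
        sep = max(sep, abs(xa - xb))
print(f"max separation over steps 40-79: {sep:.3f}")
```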
Do we have something better than GCMs when it comes to establishing policies for the future? Our answer is yes: we have stochastic approaches, and what is needed is a paradigm shift. We need to recognize the fact that the uncertainty is intrinsic, and shift our attention from reducing the uncertainty towards quantifying the uncertainty (see also Koutsoyiannis et al., 2009a). Obviously, in such a paradigm shift, stochastic descriptions of hydroclimatic processes should incorporate what is known about the driving physical mechanisms of the processes. Despite a common misconception of stochastics as black-box approaches whose blind use of data disregard the system dynamics, several celebrated examples, including statistical thermophysics and the modelling of turbulence, emphasize the opposite, i.e. the fact that stochastics is an indispensable, advanced and powerful part of physics. Other simpler examples (e.g. Koutsoyiannis, 2010) indicate how known deterministic dynamics can be fully incorporated in a stochastic framework and reconciled with the unavoidable emergence of uncertainty in predictions.
Via Climate Research News:
Meanwhile, Ross McKitrick writes:
“NEW PAPER ON CONTAMINATED SURFACE TEMPERATURE DATA: In 2007 I published a paper with Pat Michaels showing evidence that CRU global surface temperature data used by the IPCC are likely contaminated due to socioeconomic development and variations in data quality. In 2009 Gavin Schmidt published a paper in the International Journal of Climatology claiming our results, as well as those of de Laat and Maurellis who independently found the same things we did, were spurious. My rebuttal, coauthored with Nicolas Nierenberg, has been accepted at The Journal of Economic and Social Measurement.
McKitrick, Ross R. and Nicolas Nierenberg (2010) Socioeconomic Patterns in Climate Data. Journal of Economic and Social Measurement, forthcoming.
The paper provides a complete and thorough refutation of Schmidt’s critique. Why JESM? First, because it is a journal that focuses on the critical evaluation of policy-relevant databases, and its editors and reviewers have considerable econometric depth, and this paper is fundamentally an application of econometrics to the evaluation of data quality. Second, we submitted the paper to the IJOC in April 2009, on the assumption that, having published Schmidt’s paper, they were interested in the topic. Evidently their interest only extends to analyses that support IPCC views. After 10 months we found out that IJOC was rejecting our paper on the basis of some inane referee reports to which Nico and I were not given a chance to reply. We did anyway, and if anyone thinks the rejection by IJOC amounts to a knock against our paper, please read our response letter for some perspective. Whether or not the IJOC editors read it, they refused to reconsider our paper. Interestingly, we learned from the Climategate release that Schmidt’s paper, which focuses on defending Phil Jones’ CRU data against its various critics, was sent by the IJOC Editors to be reviewed by Phil Jones of the CRU. As you can imagine his review was shallow and uncritical, but evidently impressed the editors of IJOC. They didn’t ask deLaat or me to supply a review, nor did they invite us to contribute a response. Every interaction I have had over the years with the IJOC has left me very unimpressed.”
Summary of McKitrick & Nierenberg (2010):
To generate a climate data set, temperature data collected at the Earth’s surface must be adjusted to remove non-climatic effects such as urbanization and measurement discontinuities. Some studies have shown that the post-1980 spatial pattern of temperature trends over land in prominent climate data sets is strongly correlated with the spatial pattern of socioeconomic development, implying that the adjustments are inadequate, leaving a residual warm bias. This evidence has been disputed on three grounds: spatial autocorrelation of the temperature field undermines significance of test results; counterfactual experiments using model generated data suggest such correlations have an innocuous interpretation; and different satellite covariates yield unstable results. Somewhat surprisingly, these claims have not been put into a coherent framework for the purpose of statistical testing. We combine economic and climatological data sets from various teams with trend estimates from global climate models and we use spatial regressions to test the competing hypotheses. Overall we find that the evidence for contamination of climatic data is robust across numerous data sets, it is not undermined by controlling for spatial autocorrelation, and the patterns are not explained by climate models. Consequently we conclude that important data products used for the analysis of climate change over global land surfaces may be contaminated with socioeconomic patterns related to urbanization and other socioeconomic processes.
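To get a feel for the kind of test the abstract describes, here is a toy sketch in Python with entirely synthetic data (this is not the authors' code, and the covariate and numbers are invented for illustration): regress grid-cell temperature trends on a socioeconomic covariate, then compute Moran's I of the residuals to check whether spatial autocorrelation could be driving the apparent significance, which is the objection the paper addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 10x10 grid of temperature trends (deg C/decade) built from a
# smooth spatial field plus a socioeconomic component (hypothetical data).
n = 10
yy, xx = np.mgrid[0:n, 0:n]
socio = rng.random((n, n))                     # stand-in for e.g. income growth
field = 0.1 + 0.1 * np.sin(xx / 3.0)           # smooth "climate" pattern
trend = field + 0.08 * socio + 0.02 * rng.standard_normal((n, n))

# OLS regression of trends on the socioeconomic covariate.
y = trend.ravel()
X = np.column_stack([np.ones(y.size), socio.ravel()])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Rook-contiguity weights: cells sharing an edge are neighbours.
W = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        k = i * n + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n:
                W[k, a * n + b] = 1.0

# Moran's I of the residuals: strongly positive values warn that naive
# OLS standard errors on the socioeconomic coefficient may be overstated.
z = resid - resid.mean()
I = (z.size / W.sum()) * (z @ W @ z) / (z @ z)
print(f"socioeconomic coefficient: {beta[1]:.3f}")
print(f"Moran's I of residuals:    {I:.3f}")
```

The actual paper uses considerably richer spatial-regression machinery; the point of the sketch is only that the socioeconomic signal and the spatial-autocorrelation diagnostic are estimated together, so one can test whether the former survives the latter.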
Now it seems there is another peer-reviewed study in this same area, and not surprisingly, when the analysis does not rely on a desired outcome to prove a theory, it produced very different results:
Entitled Improved methods for PCA-based reconstructions: case study using the Steig et al. 2009 Antarctic temperature reconstruction, its abstract can be viewed here. The following is some analysis of the new paper by Climate Research News:
Remember the Steig et al 2009 Nature paper? As Steve McIntyre points out at Climate Audit: “Like so many Team efforts, it applied a little-known statistical method, the properties of which were poorly known, to supposedly derive an important empirical result. In the case of Steig et al 2009, the key empirical claim was that strong Antarctic warming was not localized to the Antarctic Peninsula (a prominent antecedent position), but was also very pronounced in West Antarctic.”
Well, there is a new paper in press in the Journal of Climate:
Improved methods for PCA-based reconstructions: case study using the Steig et al. 2009 Antarctic temperature reconstruction by Ryan O’Donnell, Nicholas Lewis, Steve McIntyre, Jeff Condon
The abstract states:
A detailed analysis is presented of a recently published Antarctic temperature reconstruction that combines satellite and ground information using a regularized expectation-maximization algorithm. Though the general reconstruction concept has merit, it is susceptible to spurious results for both temperature trends and patterns. The deficiencies include: (a) improper calibration of satellite data; (b) improper determination of spatial structure during infilling; and (c) suboptimal determination of regularization parameters, particularly with respect to satellite principal component retention. We propose two methods to resolve these issues. One utilizes temporal relationships between the satellite and ground data; the other combines ground data with only the spatial component of the satellite data. Both improved methods yield similar results that disagree with the previous method in several aspects. Rather than finding warming concentrated in West Antarctica, we find warming over the period of 1957–2006 to be concentrated in the Peninsula (≈0.35°C per decade). We also show average trends for the continent, East Antarctica, and West Antarctica that are half or less than that found using the unimproved method. Notably, though we find warming in West Antarctica to be smaller in magnitude, we find that statistically significant warming extends at least as far as Marie Byrd Land. We also find differences in the seasonal patterns of temperature change, with winter and fall showing the largest differences and spring and summer showing negligible differences outside of the Peninsula.

Another analysis was provided by Anthony Watts at Watts Up With That:
In a blow to the Real Climate “hockey team”, one team member’s paper, Steig et al., Nature, Jan 22, 2009 (seen above), has been shown to be lacking. Once appropriate statistical procedures were applied, the real data spoke clearly, and it was done in a peer-reviewed paper by skeptics. Jeff Condon of the Air Vent writes via email that he and co-authors Ryan O’Donnell, Nicholas Lewis, and Steve McIntyre have succeeded in getting a paper accepted into the prestigious Journal of Climate, and asked me to re-post the notice here.
The review process was difficult, with one reviewer submitting comments [and subsequent rebuttal comments from the authors] that grew longer than the submitted paper: 88 pages, ten times the length of the paper itself! I commend them for their patience in wading through such formidable bloviation. Anyone want to bet that reviewer was a “team” member?
As WUWT covered in the past, these authors have demonstrated clearly that the warming is mostly in the Antarctic Peninsula. Steig et al.’s Mannian PCA math methods had smeared that warming over most of the entire continent, creating a false impression.
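The smearing effect described above can be illustrated with a toy example (entirely synthetic data; the geometry and numbers are invented, and a single hand-picked broad spatial mode stands in for an aggressively truncated PC basis): when a field with warming confined to a few cells is forced through one continent-wide spatial pattern, part of the localized trend reappears in cells that have no trend of their own.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 50 years of anomalies at 20 grid cells.  The first 4 cells
# (our stand-in "Peninsula") warm by 0.5 degC over the record; the other
# 16 cells have no trend at all, only noise.
years = np.arange(50)
data = 0.1 * rng.standard_normal((50, 20))
data[:, :4] += 0.5 * (years / 49.0)[:, None]

# Suppose the reconstruction retains a single, near-uniform spatial mode
# (heavily truncated PC bases tend to be broad-scale) and expresses every
# year's field in that basis alone: project, then rebuild.
mode = np.full(20, 1 / np.sqrt(20))            # unit-norm, continent-wide
recon = np.outer(data @ mode, mode)

def decadal_trend(x):
    """Least-squares trend for each cell, converted to degC per decade."""
    return np.polyfit(years, x, 1)[0] * 10

true_remote = decadal_trend(data[:, 4:]).mean()     # ~0 by construction
smeared_remote = decadal_trend(recon[:, 4:]).mean() # picks up leaked trend
print(f"true trend away from the warming region:  {true_remote:+.3f}")
print(f"reconstructed trend in those same cells:  {smeared_remote:+.3f}")
```

In the toy case the Peninsula trend is diluted across all 20 cells, so the untrended cells acquire a small spurious warming while the warming region itself is understated, which is the qualitative pattern the new paper reports correcting.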
WUWT visitors may want to read this primer which explains how this happens. But most importantly, have a look at the side-by-side comparison maps below. Congratulations to Jeff, Ryan, Nick, and Steve! – Anthony
After ten months of reviews and rewrites we have successfully published an improved version of Steig et al. 2009. While we cannot publish the paper here, we can discuss the details. Personally, I’ve never seen so much work put into a single paper as Ryan did, and it’s wonderful to see it come to a successful conclusion. This is the initial post on the subject; more will follow in the coming weeks.
Guest post by lead author Ryan O’Donnell.
Improved methods for PCA-based reconstructions: case study using the Steig et al. (2009) Antarctic temperature reconstruction
(Accepted 11/30/10, Journal of Climate)
Ryan O’Donnell, Nicholas Lewis, Steve McIntyre, Jeff Condon
Copyright © 2010 American Meteorological Association
(early online release to be available on or around Dec. 7th)
[Figure: side-by-side reconstruction comparison maps; colour scale shows temperature trend in °C per decade]
Some of you remember that we intended to submit the analysis of the Steig Antarctic reconstruction for publication. That was quite some time ago . . . and then you heard nothing. We did, indeed, submit a paper to Journal of Climate in February. The review process unfortunately took longer than expected, primarily due to one reviewer in particular. The total number of pages dedicated by that reviewer alone – and our subsequent responses – was 88 single-spaced pages, or more than 10 times the length of the paper. Another contributor to the length of time from submission to acceptance was a hardware upgrade to the AMS servers that went horribly wrong, heaping a load of extra work on the Journal of Climate editorial staff.
With that being said, I am quite satisfied that the review process was fair and equitable, although I do believe excessive deference was paid to this one particular reviewer at the beginning of the process. While the other two reviews were positive (and contained many good suggestions for improvement of the manuscript), the other review was quite negative. As the situation progressed, however, the editor at Journal of Climate – Dr. Anthony Broccoli – added a fourth reviewer to obtain another opinion, which was also positive. My feeling is that Dr. Broccoli did a commendable job of sorting through a series of lengthy reviews and replies in order to ensure that the decision made was the correct one.
The results in the paper are generally similar to the in-process analysis that was posted at CA and here prior to the submission. Overall, we find that the Steig reconstruction overestimated the continental trends and underestimated the Peninsula trend – though our analysis found that the trend in West Antarctica was, indeed, statistically significant. I would hope that our paper is not seen as a repudiation of Steig’s results, but rather as an improvement.
In my opinion, the Steig reconstruction was quite clever, and the general concept was sound. A few of the choices made during implementation were incorrect; a few were suboptimal. Importantly, if those are corrected, some of the results change. Also importantly, some do not. Hopefully some of the cautions outlined in our paper are incorporated into other, future work. Time will tell!
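For readers unfamiliar with the reconstruction concept both papers build on, here is a minimal sketch of the underlying idea (synthetic data throughout, with a plain truncated-SVD infill standing in for the full regularized expectation-maximization machinery, so this is the concept rather than either paper's method): missing "ground" observations are filled in by alternately fitting a low-rank principal-component model to the current guess and replacing only the missing entries with the model's prediction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "station" matrix: 120 months x 8 stations driven by two
# shared modes, so the true data are (nearly) rank 2.
t = np.arange(120)
modes = np.column_stack([np.sin(2 * np.pi * t / 12), t / 120.0])
loadings = rng.standard_normal((2, 8))
data = modes @ loadings + 0.05 * rng.standard_normal((120, 8))

# Knock out 20% of the entries at random to play the role of
# unobserved ground data.
mask = rng.random(data.shape) < 0.2
obs = data.copy()
obs[mask] = np.nan

def pca_infill(x, k=2, iters=50):
    """EM-style infill: start from column means, then repeatedly fit a
    rank-k SVD and overwrite only the missing entries with its prediction."""
    filled = np.where(np.isnan(x), np.nanmean(x, axis=0), x)
    for _ in range(iters):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        approx = (u[:, :k] * s[:k]) @ vt[:k]
        filled = np.where(np.isnan(x), approx, x)
    return filled

recon = pca_infill(obs)
err = np.abs(recon[mask] - data[mask]).mean()
print(f"mean absolute infill error: {err:.3f}")
```

The choices the papers argue over – how many components to retain, how the spatial structure is determined during infilling, and how the regularization is set – all live inside the step this sketch reduces to a bare rank-k SVD, which is why seemingly small implementation decisions can move the reconstructed trends.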