Curry on model tuning, part II
The 20th century aerosol forcing used in most of the AR4 model simulations (Section […]) relies on inverse calculations of aerosol optical properties to match climate model simulations with observations. […] Schwartz (2004) notes that the intermodel spread in modeled temperature trend expressed as a fractional standard deviation is much less than the corresponding spread in either model sensitivity or aerosol forcing, and this comparison does not consider differences in solar and volcanic forcing. This agreement is accomplished through inverse calculations, whereby modeling groups can select the forcing data set and model parameters that produce the best agreement with observations. While some modeling groups may have conducted bona fide forward calculations without any a posteriori selection of forcing data sets and model parameters to fit the 20th century time series of global surface temperature anomalies, documentation of each model’s tuning procedure and rationale for selecting particular forcing data sets is generally not available.
– J.A. Curry and P.J. Webster, “Climate Science and the Uncertainty Monster”
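The trade-off Curry and Webster (via Schwartz) describe can be illustrated with a toy calculation. The energy-balance form, the forcing ramps, and every number below are invented for illustration only; this is not any modeling group's actual procedure. The point it sketches is that an "inverse" fit of sensitivity and aerosol-forcing scale to a single observed temperature record admits a family of compensating parameter pairs that match the record about equally well:

```python
# Toy "inverse calculation": choose a feedback parameter and an aerosol-forcing
# scale to best match an observed warming record. All values are hypothetical.
import numpy as np

years = np.arange(1900, 2001)
ghg = 0.035 * (years - 1900)            # W/m^2, idealized greenhouse ramp
aerosol_base = -0.015 * (years - 1900)  # W/m^2, idealized (negative) aerosol ramp

def simulate(lam, aero_scale, C=8.0):
    """Zero-dimensional energy balance, C dT/dt = F - lam*T, annual steps."""
    T = np.zeros(len(years))
    F = ghg + aero_scale * aerosol_base
    for i in range(1, len(years)):
        T[i] = T[i - 1] + (F[i - 1] - lam * T[i - 1]) / C
    return T

# Synthetic "observations": one parameter choice plus noise.
rng = np.random.default_rng(0)
obs = simulate(lam=1.2, aero_scale=1.0) + rng.normal(0.0, 0.05, len(years))

# Grid search over (feedback, aerosol scale): a low-sensitivity/weak-aerosol
# combination and a high-sensitivity/strong-aerosol combination can both fit.
candidates = [(lam, s, np.mean((simulate(lam, s) - obs) ** 2))
              for lam in np.arange(0.6, 2.01, 0.05)
              for s in np.arange(0.5, 1.51, 0.05)]
best_lam, best_scale, best_mse = min(candidates, key=lambda t: t[2])
print(best_lam, best_scale, best_mse)
```

Ranking the grid by misfit shows many (feedback, aerosol-scale) pairs with nearly identical error, which is the compensating-uncertainty structure Schwartz's comparison points at.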
1) The authors claim that ‘The 20th century aerosol forcing used in most of the AR4 model simulations (Section […]) relies on inverse calculations of optical properties to match climate model simulations with observations’ and thus claim ‘apparent circular reasoning’. This is incorrect. The inverse estimates of aerosol forcing given in Section […] are derived from observationally based analyses of temperature and are compared in Chapter 9 with “forward” estimates calculated directly from understanding of the emissions in order to determine whether the two are consistent. But it is critical to understand that such inverse estimates are an output of attribution analyses, not an input, and thus the claim of ‘circular reasoning’ is wrong. The aerosol forcing used in 20C3M (see http://www-pcmdi.llnl.gov/projects/cmip/ann_20c3m.php) climate model simulations was based on forward calculations using emission data (Boucher and Pham, 2002; references in Randall et al., 2007). Further, detection and attribution methods determine whether model-simulated temporal and spatial patterns of change (referred to as ‘fingerprints’) that are expected in response to changes in external forcing are present in observations. For example, the aerosol fingerprint shows a spatial and temporal pattern of near-surface temperature changes that varies between hemispheres and over time (see Hegerl et al., 2007, Section […]). […] These patterns make the response to solar and aerosol forcing distinguishable (with uncertainties) from that due to greenhouse gas forcing. The amplitude of those fingerprint patterns is estimated from observations. Therefore, attribution of the dominant role of greenhouse gases in the warming of the past half-century is not sensitive to the uncertainties in the magnitude of aerosol forcing, or of other forcings, such as solar forcing.
[…] Thus, Curry and Webster misrepresent the role of forcing magnitude uncertainties in attribution, and do not appreciate the level of rigour with which physically plausible alternative explanations of the recent climate change are explored.
– Gabriele Hegerl, Peter Stott, Susan Solomon and Francis Zwiers, “Comment on Climate Science and the Uncertainty Monster by J.A. Curry and P.J. Webster.”
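Hegerl et al.'s key point, that fingerprint amplitudes are estimated from observations and are therefore an output of the analysis rather than an assumed input, can be sketched as a regression. The fingerprints below are invented one-dimensional time series (real analyses use space-time patterns and more elaborate estimators such as total least squares); the sketch only shows the direction of inference:

```python
# Sketch of the regression idea behind fingerprint attribution: scaling
# factors (amplitudes) for each forced pattern are estimated FROM the
# observations. Patterns and numbers here are hypothetical.
import numpy as np

years = np.arange(1900, 2001)
# Hypothetical model-derived fingerprints with distinct time evolution:
# a steady greenhouse warming ramp, and an aerosol cooling that levels
# off late in the century, so the two patterns are distinguishable.
fp_ghg = 0.008 * (years - 1900)
fp_aer = -0.004 * np.minimum(years - 1900, 80)

# Synthetic "observations": both patterns present plus internal variability.
rng = np.random.default_rng(1)
obs = fp_ghg + fp_aer + rng.normal(0.0, 0.05, len(years))

# Ordinary least squares: regress observations onto the fingerprints.
X = np.column_stack([fp_ghg, fp_aer])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print(beta)  # estimated amplitudes of the GHG and aerosol patterns
```

Scaling factors consistent with 1 indicate the observed amplitude of a pattern matches the model-simulated response; the uncertainty ranges on these factors, not an assumed forcing magnitude, are what carry into the attribution statement.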
Our overall concerns about the IPCC AR4 attribution statement and uncertainty analysis are best illustrated in the context of the recent publication by Gent et al. (2011), showing simulations of the 20th century climate of the NCAR Community Climate System Model Version 4. Figure 1 [the link downloads the file] compares the results of the CCSM3 (used in the AR4) with the CCSM4 simulations (for the AR5). In spite of using a better model and better forcing data for the CCSM4 simulations, the CCSM4 simulations show that after 1970, the simulated surface temperature increases faster than the data, so that by 2005 the model anomaly is 0.4°C larger than the observed anomaly. By contrast, the CCSM3 simulations show very good agreement with the surface temperature data. The critical difference is that the CCSM4 model was tuned for the pre-industrial period and used accepted best estimates of the forcing data, whereas the CCSM3 model was tuned to the 20th century observations and each modeling group was permitted to select their preferred forcing data sets. The contrast between the CCSM3 and CCSM4 simulations illustrates the bootstrapped plausibility of climate model simulations that influenced the AR4 attribution assessment.
– J. A. Curry and P.J. Webster, “Reply to Hegerl et al.’s Comment on “Climate Science and the Uncertainty Monster”” (draft)
Abstract. Hegerl et al.’s comment provides us with a further opportunity to emphasize and clarify our arguments as to why the treatment of uncertainty in the IPCC AR4 assessment regarding attribution is incomplete and arguably misleading.