Well before the advent of General Circulation Models (GCMs), Arrhenius (1896) proposed that greenhouse gases could cause global warming, and he even made a surprisingly modern quantitative prediction. Today, GCMs are so much the dominant tool for investigating the climate that debate centers on the climate sensitivity to a doubling of the CO2 concentration, which (whether “equilibrium” or “transient”) is defined as a purely theoretical quantity accessible only through models. Strictly speaking, short of a controlled multicentennial global-scale experiment, it cannot be empirically measured at all. A consequence is that not enough attention has been paid to directly analyzing our ongoing uncontrolled experiment. For example, when attempts are made to test climate sensitivity predictions against the climate record, the tests still rely on GCM-defined “fingerprints” (e.g. Santer et al. 2013, or the review in section 9.2.2 of the 4th Assessment Report (AR4) of the Intergovernmental Panel on Climate Change, IPCC) or on other comparisons of the record with GCM outputs (e.g. Wigley et al. 1997; Foster and Rahmstorf 2011). This situation can easily lead to the impression that complex GCM codes are indispensable for inferring connections between greenhouse gases and global warming. An unfortunate side effect of this reliance on models is that it allows GCM skeptics to call into question the anthropogenic causation of the warming. If only for these reasons, it is desirable to complement model-based approaches with empirically based methodologies.

But there is yet another reason for seeking non-GCM approaches: the most convincing demonstration of anthropogenic warming has not yet been made, namely the statistical comparison of the observed warming during the industrial epoch against the null hypothesis of natural variability. To be as rigorous as possible, we must demonstrate that the probability that the current warming is no more than a natural fluctuation is so low that the natural variability hypothesis may be rejected with high levels of confidence. Although rejecting natural variability would not “prove” anthropogenic causation, it would certainly enhance its credibility. Until this is done, there will remain some legitimate grounds for doubting the anthropogenic provenance of the warming. Such statistical testing requires knowledge of the probability distributions of natural fluctuations over roughly centennial scales (i.e. the duration of the industrial-epoch CO2 emissions). To achieve this using GCMs, one would need to construct a statistical ensemble of realistic pre-industrial climates at centennial scales. Unfortunately, the GCM variability at these (and longer) scales under natural (especially solar and volcanic) forcings is still the object of active research (e.g. the “Millennium” simulations). At present, the variability at these long time scales appears to be somewhat underestimated (Lovejoy 2013), so that it is premature to use GCMs for this purpose. Indeed, at the moment, the only way of estimating the centennial-scale natural variability is to combine observations (via multicentennial-length multiproxies) with a modest use of scaling ideas.
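
Schematically, with ΔT_obs denoting the observed industrial-epoch warming and ΔT_nat a natural fluctuation over the same time scale (notation introduced here for illustration), the test amounts to rejecting the null hypothesis of purely natural variability whenever Pr(ΔT_nat ≥ ΔT_obs) < p, where p is the chosen significance level (e.g. p = 0.01 for 99% confidence).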

The purpose of this paper is thus to establish an empirically based, GCM-free methodology for quantifying anthropogenic warming. This involves two parts. The first part is to estimate both the total amplitude of the anthropogenic warming and the (empirically accessible) “effective” climate sensitivity. It is perhaps surprising that this is apparently the first time that the latter has been directly and simply estimated from surface temperature data. Two innovations were needed. First, we used a stochastic approach that combines all the (nonlinear) responses to natural forcings as well as the (natural) internal nonlinear variability into a single global stochastic quantity T_nat(t) that thus accounts for all the natural variability. In contrast, the anthropogenic warming T_anth(t) is treated as deterministic. The second innovation is to use the CO2 radiative forcing as a surrogate for all anthropogenic forcings. This includes not only the relatively well understood warming due to the other long-lived greenhouse gases (GHGs) but also the poorly understood cooling due to aerosols. The use of the CO2 forcing as a broad surrogate is justified by the high correlations between the various anthropogenic effects, which arise from their common dependence on global economic activity (see Fig. 2a, b below).
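
As a minimal sketch of this decomposition (the data arrays, the pre-industrial CO2 reference value and all variable names below are illustrative placeholders, not the estimates developed in Sect. 2), the effective sensitivity can be obtained by regressing the global temperature on the number of CO2 doublings, with the residuals defining T_nat(t):

```python
import numpy as np

# Placeholder inputs standing in for real data: annual global-mean temperature
# anomalies (K) and annual-mean CO2 concentrations (ppm) over the same years.
rng = np.random.default_rng(0)
years = np.arange(1880, 2014)
temp_anom = 0.008 * (years - 1880) + rng.normal(0.0, 0.1, years.size)
co2_ppm = 280.0 * np.exp(0.004 * (years - 1880))   # illustrative CO2 growth

CO2_PRE = 277.0  # assumed pre-industrial reference concentration (ppm)

# The CO2 radiative forcing is proportional to log2(rho_CO2 / rho_CO2,pre),
# so the effective sensitivity is the slope of temperature per CO2 doubling.
doublings = np.log2(co2_ppm / CO2_PRE)
lambda_eff, const = np.polyfit(doublings, temp_anom, 1)

T_anth = lambda_eff * doublings + const   # deterministic anthropogenic part
T_nat = temp_anom - T_anth                # residual: all natural variability

print(f"effective sensitivity ~ {lambda_eff:.2f} K per CO2 doubling")
```

The point of the sketch is only the structure of the method: T_anth(t) is a deterministic function of the CO2 forcing surrogate, while T_nat(t) collects everything else as a single stochastic quantity.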

The method employed in the first part (Sect. 2) leads to conclusions not very different from those obtained from GCMs and other model-based approaches. In contrast, the main part of the paper (Sect. 3) outlines the first attempt to statistically test the null hypothesis using the statistics of centennial-scale natural fluctuations estimated from pre-industrial multiproxies. To make the statistical test strong enough, we use scaling ideas to parametrically bound the tails of the extreme fluctuations with extreme (“fat-tailed”, power-law) probability distributions, and we scale up the observed distributions from 64 to 125 years using a scaling assumption. Even in the most unfavourable cases, we may reject the natural variability hypothesis at confidence levels >99%. These conclusions are robust because they take into account two nonclassical statistical features which greatly amplify the probability of extremes: long-range statistical dependencies and fat tails.
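
To make the logic of this test concrete, the following sketch (placeholder fluctuation sample, observed warming value and exponent values only; the actual multiproxy statistics and exponents are estimated in Sect. 3) scales a sample of 64-year fluctuations up to 125 years and uses a power-law tail fit to bound the probability that a natural fluctuation reaches the observed warming:

```python
import numpy as np

# Placeholder sample standing in for pre-industrial 64-year fluctuations (K).
rng = np.random.default_rng(1)
fluct_64yr = rng.normal(0.0, 0.2, 500)

# Scaling assumption dT(dt) ~ dt^H: rescale 64-year fluctuations to 125 years.
H = 0.4                                          # illustrative scaling exponent
fluct_125yr = fluct_64yr * (125.0 / 64.0) ** H

# Fat-tail bound Pr(|dT| > s) ~ C * s^(-qD), fitted on the largest fluctuations
# by matching the empirical exceedance probabilities (rank / sample size).
qD = 5.0                                         # illustrative tail exponent
abs_f = np.sort(np.abs(fluct_125yr))[::-1]       # descending order
k = 25                                           # number of extreme points used
C = np.median((np.arange(1, k + 1) / abs_f.size) * abs_f[:k] ** qD)

dT_obs = 0.9                                     # illustrative observed warming (K)
p_natural = min(1.0, C * dT_obs ** (-qD))        # tail probability under the null
print(f"Pr(natural fluctuation >= {dT_obs} K) ~ {p_natural:.2e}")
```

Both the scaling relation dT(dt) ~ dt^H and the algebraic tail make extreme natural fluctuations more probable than classical Gaussian, short-range-correlated assumptions would allow, so rejecting the null hypothesis even under these unfavourable assumptions is the conservative result stated above.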