Guest essay by Mike Jonas

In this article, I take a look inside the workings of the climate computer models (“the models”) and explain how they are structured and how useful they are for predicting future climate.

It follows on from a previous article (here), which looked at the models from the outside; this one examines their internal workings.

The Models’ Method

The models divide the atmosphere and/or oceans and/or land surface into cells in a three-dimensional grid and assign initial conditions. They then calculate how each cell influences all its neighbours over a very short time. This process is repeated a very large number of times, so that the model ends up predicting the state of the planet over future decades. The IPCC (Intergovernmental Panel on Climate Change) describes it here. The WMO (World Meteorological Organization) describes it here.
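
To make the method concrete, here is a minimal sketch (in Python) of the kind of loop involved. Everything in it (the grid size, the mixing rule, the number of steps) is an invented stand-in for illustration; real models solve equations of fluid flow, radiation and so on in every cell.

```python
import numpy as np

# Toy illustration only: one number per cell on a tiny 3-D grid, stepped forward
# by letting each cell mix with its horizontal neighbours. Real models carry many
# variables per cell and solve physical equations; the numbers here are invented.
nx, ny, nz = 36, 18, 5                    # far coarser than any real grid
state = np.random.rand(nx, ny, nz)        # assumed initial conditions

def step(s, mixing=0.1):
    """One short time step: nudge each cell towards its neighbours' average."""
    neighbours = (np.roll(s, 1, axis=0) + np.roll(s, -1, axis=0) +
                  np.roll(s, 1, axis=1) + np.roll(s, -1, axis=1)) / 4.0
    return s + mixing * (neighbours - s)

for _ in range(100_000):                  # repeated a very large number of times
    state = step(state)
```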

Seriously powerful computers are needed, because even on a relatively coarse grid, the number of calculations required to predict just a few years ahead is mind-bogglingly massive.
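
A rough, purely illustrative calculation shows why. The grid size, time step and operation count per cell below are assumptions, not figures from any particular model:

```python
cells        = 360 * 180 * 30     # ~1-degree grid with 30 vertical levels (assumed)
steps        = 10 * 365 * 48      # 10 years at a 30-minute time step (assumed)
ops_per_cell = 1_000              # arithmetic operations per cell per step (assumed)

print(f"{cells * steps * ops_per_cell:.1e} operations")   # roughly 3e14 for just 10 years
```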

Internal and External

At first glance, the way the models work may appear to have everything covered: after all, every relevant part of the planet is (or can be) covered by the model for the whole of the required period. But there are three major things that cannot be covered by the internal workings of the model:

1 – The initial state.

2 – Features that are too small to be represented by the cells.

3 – Factors that are man-made or external to the planet, or which are large scale and not understood well enough to be generated via the cell-based system.

The workings of the cell-based system will be referred to as the internals of the models, because all of the inter-cell influences within the models are the product of the models’ internal logic. Factors 1 to 3 above will be referred to as externals.

Internals

The internals have to use a massive number of iterations to cover any time period of climate significance. Every one of those iterations introduces a small error into the next iteration. Each subsequent iteration adds in its own error, but is also misdirected by the incoming error. In other words, the errors compound exponentially. Over any meaningful period, those errors become so large that the final result is meaningless.
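
A hedged numerical illustration of the compounding argument: if each iteration amplifies the accumulated error by even a tiny factor (the 0.1% per step used here is purely an assumption), the growth is exponential and the signal is eventually swamped.

```python
error  = 1e-6      # assumed size of the error introduced at the first step
growth = 1.001     # assumed: each iteration amplifies the running error by 0.1%

for _ in range(100_000):
    error *= growth

print(f"{error:.1e}")    # about 3e+37: the accumulated error has swamped everything
```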

NB. This assertion is just basic mathematics, but it is also directly supportable: Weather models operate on much finer data with much more sophisticated and accurate calculations. They are able to do this because unlike the climate models they are only required to predict local conditions a short time ahead, not regional and global conditions over many decades. Yet the weather models are still unable to accurately predict more than a few days ahead. The climate models’ internal calculations are less accurate and therefore exponentially less reliable over all periods. Note that each climate cell is local, so the models build up their global views from local conditions. On the way from ‘local’ to ‘global’, the models pass through ‘regional’, and the models are very poor at predicting regional climate [1].

At this point, it is worth clearing up a common misunderstanding. The idea that errors compound exponentially does not necessarily mean that the climate model will show a climate getting exponentially hotter, or colder, or whatever, and shooting “off the scale”. The model could do that, of course, but equally the model could still produce output that at first glance looks quite reasonable – yet either way the model simply has no relation to reality.

An analogy: a clock that runs at an irregular speed will always show a valid time of day, but even if it is reset to the correct time it very quickly becomes useless.

Initial State

It is clearly impossible for a model’s initial state to be set completely accurately, so this is another source of error. As NASA says: “Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so.” [2]

This NASA quote is about weather, not climate. But because the climate models’ internals are dealing with weather, i.e. local conditions over a short time, they suffer from the same problem. I will return to this idea later.
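
The standard demonstration of this sensitivity is the Lorenz system. It is not a climate model, but it shows exactly the behaviour NASA describes, and also the earlier point about plausible-looking output: both runs below stay within a perfectly reasonable-looking range, yet an imperceptible difference in the initial state soon leaves them with no relation to each other. (The step size, run length and perturbation are chosen only for illustration.)

```python
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One Euler step of the Lorenz-63 system (a toy chaotic system, not a climate model)."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

run_a = (1.0, 1.0, 1.0)
run_b = (1.0, 1.0, 1.0 + 1e-9)   # imperceptibly different initial state

for _ in range(5000):
    run_a = lorenz_step(*run_a)
    run_b = lorenz_step(*run_b)

print(run_a)   # both trajectories remain bounded and "plausible-looking" ...
print(run_b)   # ... but by now they bear no relation to each other
```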

Small Externals

External factor 2 concerns features that are too small to be represented in the models’ cell system. I call these the small externals. There are lots of them, and they include such things as storms, precipitation and clouds, or at least the initiation of them. These factors are dealt with by parameterisation. In other words, the models use special parameters to initiate the onset of rain, etc. On each use of these parameters, the exact situation is by definition not known because the cell involved is too large. The parameterisation therefore necessarily involves guesswork, which itself necessarily increases the amount of error in the model.

For example, suppose that the parameterisations (small externals) indicate the start of some rain in a particular cell at a particular time. The parameterisations and/or internals may then change the rate of rain over hours or days in that cell and/or its neighbours. The initial conditions of the cells were probably not well known, and if the model has progressed more than a few days the modelled conditions in those cells are certainly by then totally inaccurate. The modelled progress of the rain – how strong it gets, how long it lasts, where it goes – is therefore ridiculously unreliable. The entire rain event would be a work of fiction.
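
For readers unfamiliar with the term, a parameterisation is essentially a tunable rule of thumb applied to a whole cell. The sketch below is a deliberately crude, hypothetical example; the threshold and rain rate are invented tuning parameters, not values from any real scheme.

```python
# Hypothetical sub-grid parameterisation: the cell is far too large to "see"
# individual clouds, so rainfall is guessed from one coarse, cell-averaged number.
RAIN_THRESHOLD = 0.8    # assumed tuning parameter: cell-mean relative humidity
RAIN_RATE      = 2.0    # assumed tuning parameter: mm per time step once triggered

def rain_in_cell(cell_mean_humidity: float) -> float:
    """Rainfall for one cell and one time step, from a single tunable rule."""
    if cell_mean_humidity > RAIN_THRESHOLD:
        return RAIN_RATE * (cell_mean_humidity - RAIN_THRESHOLD)
    return 0.0
```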

Large Externals

External factor 3 could include man-made factors such as CO2 emissions, pollution and land-use changes (including urban development), plus natural factors such as the sun, galactic cosmic rays (GCRs), Milankovitch cycles (variations in Earth’s orbit), ocean oscillations, ocean currents, volcanoes and, over extremely long periods, things like continental drift.

I covered some of these in my last article. One crucial problem is that while some of these factors are at least partially understood, none of them are understood well enough to predict their effect on future climate – with just one exception. Changes in solar activity, GCRs, ocean oscillations, ocean currents and volcanoes, for example, cannot be predicted at all accurately, and the effects of solar activity and Milankovitch cycles on climate are not at all well understood. The one exception is carbon dioxide (CO2) itself. It is generally accepted that a doubling of atmospheric CO2 would by itself, over many decades, increase the global temperature by about 1 degree C. It is also generally accepted that future CO2 levels can be predicted reasonably accurately for given levels of human activity. But the effect of CO2 is woefully inadequate to explain past climate change over any time scale, even when enhanced with spurious “feedbacks” (see here, here, here, here).
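
The “about 1 degree C” figure is usually arrived at by a standard no-feedback calculation, sketched below; the two constants are widely used approximations rather than anything taken from this article.

```python
import math

forcing_per_doubling = 5.35 * math.log(2)   # ~3.7 W/m^2 of radiative forcing per CO2 doubling
planck_response      = 3.2                  # ~3.2 W/m^2 of extra outgoing radiation per degree C

print(forcing_per_doubling / planck_response)   # ~1.2 degrees C, before any feedbacks
```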

Another crucial problem is that all the external factors have to be processed through the models’ internal cell-based system in order to be incorporated in the final climate predictions. But each external factor can only have a noticeable climate influence on time-scales that are way beyond the period (a few days at most) for which the models’ internals are capable of retaining any meaningful degree of accuracy. The internal workings of the models therefore add absolutely no value at all to the externals. Even if the externals and their effect on climate were well understood, there would be a serious risk of them being corrupted by the models’ internal workings, thus rendering the models useless for prediction.

Maths trumps science

The harsh reality is that any science is wrong if its mathematics is wrong. The mathematics of the climate models’ internal workings is wrong.

From all of the above, it is clear that no matter how much more knowledge and effort is put into the climate models, and no matter how many more billions of dollars are poured into them, they can never be used for climate prediction while they retain the same basic structure and methodology.

The Solution

It should by now be clear that the models are upside down. The models try to construct climate using a bottom-up calculation starting with weather (local conditions over a short time). This is inevitably a futile exercise, as I have explained. Instead of bottom-up, the models need to be top-down. That is, the models need to work first and directly with climate, and then they might eventually be able to support more detailed calculations ‘down’ towards weather.

So what would an effective climate model look like? Well, for a start, all the current model internals must be put to one side. They are very inaccurate weather calculations that have no place inside a climate model. They could still be useful for exploring specific ideas on a small scale, but they would be a waste of space inside the climate model itself.

A climate model needs to work directly with the drivers of climate such as the large externals above. The work done by Wyatt and Curry [3] could be a good starting point, but there are others. Before such a climate model could be of any real use, however, much more research needs to be done into the various natural climate factors so that they and their effect on climate are understood.

Such a climate model is unlikely to need a supercomputer or massively complex calculations. The most important prerequisite would be research into the possible drivers of climate, to find out how they work, how important they are relative to one another, and how they have influenced climate in the past. Henrik Svensmark’s research into GCRs is an example of the kind of research that is needed. Parts of a climate model may well be developed alongside the research and assist with it, but only when the science is reasonably well understood can the model deliver useful predictions. The research itself may well be very complex, but the model is likely to be relatively straightforward.
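
Purely as a hypothetical sketch of the shape such a model might take (the driver list, the least-squares fit and the function names are my own assumptions, not a method proposed by Wyatt and Curry or anyone else): a handful of candidate drivers, one fitted coefficient each, tested against climate data the model has not seen.

```python
import numpy as np

def fit_top_down(drivers: np.ndarray, observed_temps: np.ndarray) -> np.ndarray:
    """Fit one coefficient per candidate driver by least squares.

    drivers        : shape (years, n_drivers), e.g. CO2 forcing, a solar index,
                     an ocean-oscillation index (all assumed inputs)
    observed_temps : shape (years,), the temperature record to fit against
    """
    coeffs, *_ = np.linalg.lstsq(drivers, observed_temps, rcond=None)
    return coeffs

def predict(drivers: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Temperature reconstruction or prediction from the same few coefficients."""
    return drivers @ coeffs
```

The point of the sketch is only the shape of the calculation: a few parameters, fitted and then verified against out-of-sample climate data, rather than billions of cell-by-cell weather steps.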

The first requirement is for a climate model to be able to reproduce past climate reasonably well over various time scales with an absolutely minimal number of parameters (John von Neumann’s elephant: “with four parameters I can fit an elephant, and with five I can make him wiggle his trunk”). The first real step forward will be when a climate model’s predictions are verified in the real world. Right now, the models are a long way from meeting even the first requirement, and are heading in the wrong direction.

###

Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.

References

[1] A Literature Debate … R. Pielke, October 28, 2011. [Note: this article has been referenced instead of the original Demetris Koutsoyiannis paper because it shows the subsequent criticism and rebuttal. Even the paper’s critic admits that the models “have no predictive skill whatsoever on the chronology of events beyond the annual cycle. A climate projection is thus not a prediction of climate [..]”. A link to the original paper is given.]

[2] The Physics of Climate Modeling, by Gavin A. Schmidt, NASA, January 2007.

[3] M.G. Wyatt and J.A. Curry, “Role for Eurasian Arctic shelf sea ice in a secularly varying hemispheric climate signal during the 20th century,” (Climate Dynamics, 2013). The best place to start is probably The Stadium Wave, by Judith Curry.

Abbreviations

CO2 – Carbon Dioxide

GCR – Galactic Cosmic Ray

IPCC – Intergovernmental Panel on Climate Change

NASA – National Aeronautics and Space Administration

WMO – World Meteorological Organization
