#NotAllModels

The biggest contribution scientists can make to #scicomm related to the newly released IPCC Sixth Assessment report is to stop talking about the multi-model mean.

We’ve discussed the issues in the CMIP6 multi-model ensemble many times over the last couple of years – for instance here and here. There are two slightly contradictory features of this ensemble that feature in the new report. The first is the increase in skill seen in CMIP6 compared to CMIP5 models: biases in the Southern Ocean are smaller, as are biases in sea ice extent and rainfall, and the models as a whole are doing better at representing key modes of variability and their teleconnections. This is good news. The second is that the spread of the models’ climate sensitivity is much wider than in CMIP5 and, specifically, much wider than the new assessed range. This is not so good – at least at first glance.

The climate sensitivity constraints are discussed in the SPM:

A.4.4 The equilibrium climate sensitivity is an important quantity used to estimate how the climate responds to radiative forcing. Based on multiple lines of evidence, the very likely range of equilibrium climate sensitivity is between 2°C (high confidence) and 5°C (medium confidence). The AR6 assessed best estimate is 3°C with a likely range of 2.5°C to 4°C (high confidence), compared to 1.5°C to 4.5°C in AR5, which did not provide a best estimate.

I’ve plotted the CMIP6 climate sensitivities before, but here I have updated it to the latest compilation (from Mark Zelinka) and added the likely and very likely assessed ranges from AR6.

[Figure: CMIP6 equilibrium climate sensitivities (compilation from Mark Zelinka), with the AR6 likely and very likely assessed ranges marked]

For reference, out of 50 models, 40 are within the very likely AR6 range, and 23 within the likely range. Nonetheless, of the 8 models with ECS > 5ºC, most are from very highly respected groups whose models do very well in comparisons against the climatology, including the Hadley Centre (3 models), NCAR (2 models), and DoE and the Canadian Climate Center (1 model each). My personal assessment of the likely range would be more closely based on Sherwood et al. (2020) and so is a little narrower (as discussed here) – but it will need a deeper dive into the main report to see why the ranges differ, and that isn’t the main point here. I would note though that, given the distribution of ECS seen here, it is marginal whether this is even consistent with sampling from a straightforward distribution.
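To see why that consistency is marginal, here is a rough back-of-the-envelope check using only the counts quoted above. If the 50 models were independent draws that sampled the assessed "very likely" (90%) range correctly, we'd expect about 5 of them outside that range; 10 fall outside. The binomial tail probability for that outcome (a simplification I'm introducing for illustration – the models are not independent, which is part of the point of this post):

```python
from math import comb

# Counts from the post: 50 CMIP6 models, 40 inside the AR6
# "very likely" (90%) ECS range, so 10 fall outside it.
n_models = 50
n_outside = 10
p_outside = 0.10  # expected exceedance rate for a 90% range

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Chance of seeing >= 10 of 50 models outside a 90% range,
# if the models were independent samples (they are not).
p_tail = sum(binom_pmf(k, n_models, p_outside)
             for k in range(n_outside, n_models + 1))
print(f"P(>= {n_outside} of {n_models} outside): {p_tail:.3f}")
```

The tail probability comes out at a few percent – low enough to question whether the ensemble is sampling a simple distribution consistent with the assessed range, but not so low as to be conclusive, which matches the "marginal" verdict above.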

What does this mean in practice? It’s well known and accepted that CMIP is an ensemble of opportunity and not a structured exploration of model parameter space, and that the models are not independent of each other (some common assumptions, common modules, and very close variants from some model groups), and so it makes no sense to treat the ensemble as if it were a pdf with nice properties. And yet in CMIP5 (and previously), this was done almost ubiquitously (even by me on the model-data comparison page). Despite the many excellent reasons why ‘model democracy’ shouldn’t be the best thing to do, it often was.
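A toy example makes the non-independence problem concrete. The ECS values and family groupings below are hypothetical, not real CMIP6 models: three close variants from one group drag the one-vote-per-model ("model democracy") mean upward, while giving each model family a single vote largely removes the effect:

```python
from collections import defaultdict

# Hypothetical ECS values (degC); A1-A3 are close variants from one
# group, sharing assumptions, so they are not three independent
# lines of evidence.
ecs = {"A1": 5.3, "A2": 5.2, "A3": 5.1, "B": 3.0, "C": 2.8, "D": 3.2}
family = {"A1": "A", "A2": "A", "A3": "A", "B": "B", "C": "C", "D": "D"}

# "Model democracy": one vote per submitted model.
democracy_mean = sum(ecs.values()) / len(ecs)

# One vote per model family: average within each family first.
by_family = defaultdict(list)
for name, value in ecs.items():
    by_family[family[name]].append(value)
family_mean = sum(sum(v) / len(v) for v in by_family.values()) / len(by_family)

print(f"one vote per model:  {democracy_mean:.2f} degC")  # 4.10
print(f"one vote per family: {family_mean:.2f} degC")     # 3.55
```

Real attempts at model weighting are far more sophisticated than this, but even the crudest grouping shows how the raw ensemble mean depends on how many near-duplicates happen to be submitted.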

With CMIP5, the coincidence of the model range of sensitivity (2.1°C to 4.6°C) with a reasonable assessed range of 2 to 4.5°C meant that, even if it wasn’t strictly kosher, practically it didn’t make much difference. And even in the AR5 report, the only constrained projection (for sea ice area) turned out to be overfitted and did not properly account for internal variability.

But now with CMIP6, the situation is different. There are now large differences between the constrained projections and ‘model democracy’. For the historical period, the differences are there, but the unweighted model mean does ok when compared to the observations:

AR6 Fig SPM1b

However, they are only plotting the 5-95% envelope, and at least one of the high ECS models (NCAR CESM2) used the historical trends as a tuning target. Despite that, it’s clear that the model spread is excessive in the post-1990 period. Were the raw CMIP6 data to be extended into the future – particularly for the higher emissions scenarios – the differences would be even starker. Thus for the temperature projections, the authors (sensibly) effectively screen the models for coherence with observed temperatures (following Tokarska et al. (2020)) and downweight models with ECS values outside the assessed range:

AR6 Figure SPM 8a
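The screening idea can be sketched in a few lines. Everything here is an illustrative placeholder – the trends, the observational estimate, and the simple Gaussian weighting are my own hypothetical stand-ins, not the actual procedure or data of Tokarska et al. (2020). The key ingredient is that future warming correlates with the historical trend across models, so downweighting models that warm too fast historically pulls the constrained projection down:

```python
import numpy as np

# Hypothetical per-model historical warming trends (degC/decade) and a
# hypothetical observed trend with its 1-sigma uncertainty.
model_trends = np.linspace(0.10, 0.34, 20)
obs_trend, obs_sigma = 0.18, 0.03

# Downweight models whose historical warming departs from observations
# (a simple Gaussian weighting; the report's screening is more involved).
weights = np.exp(-0.5 * ((model_trends - obs_trend) / obs_sigma) ** 2)
weights /= weights.sum()

# Illustrative linear relation between historical trend and
# end-of-century warming -- the across-model correlation that makes
# an observational constraint of this kind work at all.
future_warming = 10.0 * model_trends

print(f"unweighted mean:  {future_warming.mean():.2f} degC")
print(f"constrained mean: {np.average(future_warming, weights=weights):.2f} degC")
```

Because the hypothetical observed trend sits below the ensemble-mean trend, the constrained projection comes out cooler than the raw ‘model democracy’ mean, which is the qualitative behavior seen in Figure SPM 8a.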

The high (and low) ECS models do make a contribution in the maps associated with the impacts at specific Global Warming Levels (GWLs) of 1.5ºC, 2ºC and 4ºC (in figure SPM 5), so the useful information they contain is not lost.

How then should we talk about these models? In my opinion, describing the properties of the multi-model mean, or generalizing about the models as a whole, is not sensible. Claims such as those made recently that the CMIP6 ensemble ‘runs hot’ are very easily misconstrued to imply that all CMIP6 models have too-high ECS values (or indeed all models in general), when really it is only a subset. Discussion of the mean CMIP6 sensitivity is, to my mind, pointless, not least because the ‘CMIP6 mean’ is based on a somewhat arbitrary selection of models that takes into account neither model independence nor the fact that CMIP6 itself is a moving target as more models are still being added to the database. And given that all the temperature projections in AR6 are constrained projections, the raw CMIP6 mean and its properties are simply irrelevant for any of the AR6 conclusions.

It is true that *some* models have an ECS beyond what can be reconciled with our understanding of climate change, and in those models the cloud feedback, particularly in the Southern Ocean, is more positive than previously. But it is not the case that all the CMIP6 models ‘run hot’, nor is it true that the model projections in AR6 are affected by these high ECS values. We should therefore avoid giving that impression.

Many people have previously declared that ‘model democracy’ was flawed, but this is the report that has finally buried it.

References


  1. S.C. Sherwood, M.J. Webb, J.D. Annan, K.C. Armour, P.M. Forster, J.C. Hargreaves, G. Hegerl, S.A. Klein, K.D. Marvel, E.J. Rohling, M. Watanabe, T. Andrews, P. Braconnot, C.S. Bretherton, G.L. Foster, Z. Hausfather, A.S. Heydt, R. Knutti, T. Mauritsen, J.R. Norris, C. Proistosescu, M. Rugenstein, G.A. Schmidt, K.B. Tokarska, and M.D. Zelinka, “An Assessment of Earth’s Climate Sensitivity Using Multiple Lines of Evidence”, Reviews of Geophysics, vol. 58, 2020. http://dx.doi.org/10.1029/2019RG000678

  2. K.B. Tokarska, M.B. Stolpe, S. Sippel, E.M. Fischer, C.J. Smith, F. Lehner, and R. Knutti, “Past warming trend constrains future warming in CMIP6 models”, Science Advances, vol. 6, pp. eaaz9549, 2020. http://dx.doi.org/10.1126/sciadv.aaz9549

The post #NotAllModels first appeared on RealClimate.
