The US federal government goes to quite a lot of trouble to (mostly successfully) keep sensitive but unclassified (SBU) information (like personal data) out of the hands of people who would abuse it. But when it comes to the latest climate models, quite a few are SBU as well.
The results from the climate models being run for CMIP6 have been discussed for a few months as the papers describing them have made it into the literature, and the first assessments of the multi-model ensemble have been done. For those of you not familiar with the CMIP process, it is a periodic exercise for any climate modeling group that wants its results compared with other models and with observations in a consistent manner. CMIP6 is the 5th iteration of this exercise (we skipped CMIP4 for reasons that remain a little obscure), which has been going on since the 1990s.
The main focus has been on the climate sensitivity of these models – not necessarily because it's the most important diagnostic, but because it is an easily calculated shorthand that encapsulates the total feedbacks that occur as you increase CO2.
The first public hint of something strange going on was at the Barcelona CMIP6 meeting in March of this year, where a graphic showing the Equilibrium Climate Sensitivity (ECS) of the models was presented:
This showed that quite a few of the models were possibly coming in with sensitivities above 5ºC (grey bars were self-reported; green bars were calculated coherently from the archive). At about the same time, developers at the Hadley Centre and IPSL wrote about their preliminary results. This was news because the previous IPCC report (and most assessments) had found that the likely range of climate sensitivity was roughly 2 to 4.5ºC. For comparison, the range in the CMIP5 models was 2.1 to 4.6ºC.
As more models have been added to the database (all of which is publicly available), more consistent estimates are possible, for instance:
By applying the python scripts by Angie @apuffycloud, and incorporating more models available now, here is a summary of ECS from abrupt-4xCO2 for 20 CMIP6 models up to date (with time-varying feedbacks taken into consideration) pic.twitter.com/XbDcBvW3Gh
— Yue Dong (@YueDong35680721) August 29, 2019
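For readers unfamiliar with how ECS is extracted from the abrupt-4xCO2 experiments, the standard approach is a "Gregory regression": regress the net top-of-atmosphere (TOA) radiative imbalance against the global mean temperature change and find where the fit crosses zero. Here is a minimal sketch with numpy and synthetic data (the variable names and numbers are illustrative, not taken from the scripts mentioned in the tweet, and the simple straight-line fit ignores the time-varying feedbacks that more careful estimates account for):

```python
import numpy as np

def gregory_ecs(delta_t, net_toa):
    """Estimate ECS from annual means of an abrupt-4xCO2 run.

    delta_t : global mean surface temperature anomaly (K) relative
              to the pre-industrial control
    net_toa : global mean net downward TOA flux anomaly (W/m2)

    Fits N = F + lambda * dT by least squares; the x-intercept
    (-F/lambda) is the equilibrium warming for 4xCO2, which is
    halved for ECS (assuming forcing scales with log CO2).
    """
    slope, intercept = np.polyfit(delta_t, net_toa, 1)  # lambda, F
    return -intercept / slope / 2.0

# Illustrative synthetic data (not model output): F ~ 8 W/m2 and
# lambda ~ -1 W/m2/K, so the estimate should come out near 4 K.
rng = np.random.default_rng(0)
t = 8.0 * (1 - np.exp(-np.arange(150) / 30.0))   # warming curve (K)
n = 8.0 - 1.0 * t + rng.normal(0, 0.3, t.size)   # TOA imbalance (W/m2)
print(f"ECS ~ {gregory_ecs(t, n):.1f} K")
```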
So what should people make of this? Here are some options:
- These new higher numbers might be correct. As cloud micro-physical understanding has improved and models better match the real climate, they will converge on a higher ECS.
- These new numbers are not correct. There are, however, many ways in which this might have come about:
  - The high-ECS models have all included something new and wrong.
  - They have all neglected a key process that should have been included with the package they did implement.
  - There has been some overfitting to imperfect observations.
  - The experimental set-up from which the ECS numbers are calculated is flawed.
There are arguments for and against each of these possibilities, and it is premature to decide which of them are relevant. It isn't even clear that there is one answer that will explain all the high values – it might all be a coincidence, a catalogue of unfortunate choices that gives this emergent pattern. We probably won't find out for a while – though many people are now looking at this.
Why might the numbers be correct? All the preliminary analyses I've seen of matches to present-day climatologies and variability indicate that the skill scores of the new models (collectively, not just the high-ECS ones) are improved over the previous versions. This is discussed in Gettelman et al. (2019) (CESM2), Sellar et al. (2019) (UKESM1), etc. Indeed, this is a generic pattern in model development. However, up until now, there has not been any clear relationship between overall skill and climate sensitivity. Whether this will now change is (as yet) unclear.
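For concreteness, a "skill score" here usually means something like a normalized error of a model's climatological fields against an observational product. A generic sketch of one common flavour follows (this illustrates the idea only; it is not the specific metric used in the papers cited above):

```python
import numpy as np

def rmse_skill(model_clim, obs_clim, ref_clim, weights):
    """Skill of a model climatology relative to a reference model.

    model_clim, obs_clim, ref_clim : 2-D fields on a common lat-lon grid
    weights : area weights (e.g. cos(latitude)) with the same shape

    Returns 1 - RMSE(model, obs) / RMSE(reference, obs); positive values
    mean the new model beats the reference (e.g. its previous version).
    """
    def wrmse(a, b):
        return np.sqrt(np.average((a - b) ** 2, weights=weights))
    return 1.0 - wrmse(model_clim, obs_clim) / wrmse(ref_clim, obs_clim)
```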
Why might these numbers be wrong? Well, the independent constraints from the historical changes since the 19th Century, from paleo-climate, and from emergent constraints in the CMIP5 models collectively suggest lower numbers (classically 2 to 4.5ºC), and new assessments of these constraints are likely to confirm that. For all these constraints to be wrong, a lot of things have to fall out just right (forcings at the LGM would have to be wrong by a factor of two, asymmetries between cooling and warming might need to be larger than we think, pattern effects need to be very important, etc.). That seems unlikely.
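To see where the "factor of two" for the LGM comes from, consider the simple linear scaling ECS ≈ F_2x × ΔT / ΔF. With round numbers chosen purely for this back-of-envelope argument (not assessed values):

```python
# Back-of-envelope LGM constraint under linear feedback scaling:
# ECS ~ F_2x * dT / dF. Round illustrative numbers, not assessed values.
F_2X = 4.0      # radiative forcing per CO2 doubling (W/m2)
DT_LGM = -5.0   # LGM global mean cooling vs. pre-industrial (K)
DF_LGM = -8.0   # LGM forcing: ice sheets, CO2, dust, vegetation (W/m2)

ecs = F_2X * DT_LGM / DF_LGM
print(f"Implied ECS ~ {ecs:.1f} K")  # ~2.5 K with these numbers

# For the same cooling to be consistent with ECS ~ 5 K, the LGM forcing
# would have to be about half as large:
df_for_5K = F_2X * DT_LGM / 5.0
print(f"Forcing needed for ECS = 5 K: {df_for_5K:.1f} W/m2")  # ~ -4 W/m2
```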
But if these numbers are wrong, what is the explanation? Discussions with multiple groups indicate that there isn't one new thing that all of these groups have included (and the other groups have not), or vice versa. Nor is there some flawed dataset to which they have all tuned their models. The closest candidates might be the CERES TOA radiation data, or perhaps the CloudSat/CALIPSO data, but there is no indication of any fundamental issues with them.
There is some indication that, for the models with higher ECS, the changes in the abrupt-4xCO2 runs (the runs from which the ECS is calculated) are so large (more than 10ºC of warming) that the models might be exceeding the bounds within which some of their assumptions are valid. What do I mean by this? Take the HadGEM3 model. The Hardiman et al. (2019) paper reports on an artifact in the standard runs related to the rising of the tropopause: the (fixed) prescribed ozone field ends up placing high stratospheric ozone concentrations in the troposphere, causing a spurious warming of the tropopause and a massive change in stratospheric water vapor – leading to a positive (and erroneous) amplification of the warming (by about 0.6ºC). Are there other assumptions in these runs that are no longer valid at 10ºC of warming? Almost certainly. Is that the explanation? Perhaps not – it turns out that most (though not all) of the high-ECS models also have high transient climate responses (TCR), which occur at much smaller global mean changes (< 3ºC).
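For reference, TCR is diagnosed from a different experiment than ECS: the 1%/yr CO2 increase runs, in which CO2 doubles around year 70 (since 1.01^70 ≈ 2). A minimal sketch of the conventional definition (variable names are illustrative):

```python
import numpy as np

def tcr(delta_t_annual):
    """Transient climate response from a 1%/yr CO2 increase run.

    delta_t_annual : annual global mean temperature anomalies (K) relative
                     to the pre-industrial control, indexed from year 1.

    TCR is conventionally the 20-year mean warming centred on year 70,
    i.e. the average over years 61-80, when CO2 reaches doubling.
    """
    return float(np.mean(delta_t_annual[60:80]))  # years 61-80 inclusive
```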
What is clear is that (for the first time) the discord between the GCMs and the external constraints is going to cause a headache for the upcoming IPCC report. The deadline for papers to be submitted for consideration in the second-order draft is December 2019, and while there will be some papers on this topic submitted by then, I am not confident that the basic conundrums will be resolved. Thus the chapter on climate sensitivity is going to contrast strongly with the chapter on model projections. Model democracy (one model, one vote) is obviously a terrible idea, and if adopted in AR6 it will be even more problematic. However, no other scheme has been demonstrated to work better.
Interesting times ahead.
References
A. Gettelman, C. Hannay, J.T. Bacmeister, R.B. Neale, A.G. Pendergrass, G. Danabasoglu, J. Lamarque, J.T. Fasullo, D.A. Bailey, D.M. Lawrence, and M.J. Mills, “High Climate Sensitivity in the Community Earth System Model Version 2 (CESM2)”, Geophysical Research Letters, vol. 46, pp. 8329-8337, 2019.
A.A. Sellar, C.G. Jones, J. Mulcahy, Y. Tang, A. Yool, A. Wiltshire, F.M. O’Connor, M. Stringer, R. Hill, J. Palmieri, S. Woodward, L. Mora, T. Kuhlbrodt, S. Rumbold, D.I. Kelley, R. Ellis, C.E. Johnson, J. Walton, N.L. Abraham, M.B. Andrews, T. Andrews, A.T. Archibald, S. Berthou, E. Burke, E. Blockley, K. Carslaw, M. Dalvi, J. Edwards, G.A. Folberth, N. Gedney, P.T. Griffiths, A.B. Harper, M.A. Hendry, A.J. Hewitt, B. Johnson, A. Jones, C.D. Jones, J. Keeble, S. Liddicoat, O. Morgenstern, R.J. Parker, V. Predoi, E. Robertson, A. Siahaan, R.S. Smith, R. Swaminathan, M.T. Woodhouse, G. Zeng, and M. Zerroukat, “UKESM1: Description and evaluation of the UK Earth System Model”, Journal of Advances in Modeling Earth Systems, 2019.
S.C. Hardiman, M.B. Andrews, T. Andrews, A.C. Bushell, N.J. Dunstone, H. Dyson, G.S. Jones, J.R. Knight, E. Neininger, F.M. O’Connor, J.K. Ridley, M.A. Ringer, A.A. Scaife, C.A. Senior, and R.A. Wood, “The impact of prescribed ozone in climate projections run with HadGEM3-GC3.1”, Journal of Advances in Modeling Earth Systems, 2019.