Climate adaptation should be based on robust regional climate information
Climate adaptation is steaming ahead at an accelerating pace, as can be seen from the Climate Adaptation Summit in January (see previous post), the ECCA 2021 in May/June, and the upcoming COP26. Recent extreme events may spur this development even further (see previous post about attribution of recent heatwaves).
To aid climate adaptation, Europe’s Climate-Adapt programme provides a wealth of resources, such as guidance, case studies and videos. This is a good start, but a clear and transparent account of how to use the actual climate information for adaptation seems to be missing. How can projections of future heatwaves or extreme rainfall help practitioners, and how should this kind of information be interpreted?
The role of climate information
My general impression from the said meetings on climate adaptation and other sources is that regional climate information is assumed to be in place, and that using it is a little like plug-and-play. One example is the ECCA 2021 Climate Adaptation solutions on YouTube provided by ERA4CS.
The use of climate information is discussed in a recent Podcast about “CORDEX-Africa”, where Dr. Chris Lennard from the University of Cape Town takes us through different aspects of climate change adaptation. His South African research group (CSAG) has some valuable lessons to tell.
The use of climate information is not straightforward, however, and a concern is that organisations such as the Copernicus Climate Change Service (C3S), the World Climate Research Programme (WCRP), and the COordinated Regional Downscaling EXperiment (CORDEX) have not had a prominent presence at the said high-level summits, despite the strong reliance on downscaled results in both climate adaptation and climate services.
A vast collection of data
Nevertheless, Climate-Adapt does refer to data and climate indices from Copernicus C3S, which offers an impressive collection of data. The sheer scale of the data gathering and observations was underscored in a recent general assembly with invited prominent speakers (Gavin among them – recordings of the seminar are available through this YouTube playlist).
Still, it’s hard to find a comprehensive guide or a handbook on how to actually use the data and what not to do with it. Computer calculations are not the same as observations, and global climate models are no ‘digital twin’ of the real world – so the question is how to interpret the numbers.
Regional climate information for society
In a nutshell, impact researchers and the adaptation community need to use the best information in the right way. Data and information are not the same thing.
The good news is that there are some initiatives on climate change adaptation that involve climate scientists, such as the Infrastructure and Climate Network (ICNet Global). The European Climate Research Alliance (ECRA) is also relevant, with collaborative programmes on the Arctic, high-impact events, changes to the hydrological cycle, and sea level.
More emphasis on methods and tools than actionable information
There may also be some practices within the climate science community that pose obstacles to climate change adaptation. To outsiders of the climate science community, a recent CORDEX white paper doesn’t appear to address the issues most relevant to climate change adaptation, despite a new emphasis on “Regional Climate Information for Society” (RIfS) and “Lighthouse Activities” (“My Climate Risk”).
Some of the headlines from the CORDEX white paper are ‘Smaller domains with finer resolution’, ‘Increasing complexity’ and ‘Exascale computing’. Such activities may in time enhance our understanding of regional climate risks, but it’s difficult to see how they enhance the capacity of society in terms of climate change adaptation right now. After all, there is some urgency in getting on with climate adaptation.
Finer resolution makes sense if the aim is to improve the representation of small-scale processes in the climate models, such as convection. But from a practitioner’s point of view, obtaining information at fine resolution is fairly trivial – weather statistics don’t change all that much over small distances, and where they do, it’s probably due to systematic geographical effects which can be predicted by statistical means.
Different choices give different answers
For climate adaptation, we want to know what local consequences we can expect from continued global warming. The global climate models (GCMs) are not designed to provide such details, as they typically compute wind, temperature and humidity on scales of about a hundred kilometres.
The GCMs are nevertheless useful for climate adaptation, since the local climate often depends on the ambient large-scale conditions. Local consequences can be estimated through a strategy known as ‘downscaling’.
Downscaling can be defined as the procedure of adding new relevant and reliable information to GCM results, such as how the local response depends on the large-scale conditions that GCMs are able to reproduce, and how local geographical conditions play a role.
There are two main approaches to downscaling, (1) dynamical downscaling, involving regional climate models (RCMs), and (2) empirical-statistical downscaling (ESD). Both are supposed to be included in CORDEX, although CORDEX’s main emphasis seems to be on RCMs.
Sources of additional information
The source of new information in ESD is the historical data and mathematical theory concerning their statistical properties (hence ’empirical-statistical’). For RCMs, new information is introduced to the GCM simulations with finer resolution and more detailed representation of the surface.
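To make the ESD idea concrete, a minimal sketch in Python might look as follows. All numbers here are hypothetical, synthetic stand-ins for illustration only; a real ESD application would use observed station records, carefully chosen large-scale predictors, and thorough model evaluation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins (hypothetical numbers, for illustration only):
# a large-scale seasonal-mean temperature over a GCM grid box (predictor)
# and a co-located station series (predictand) for a historical period.
large_scale = rng.normal(loc=10.0, scale=1.5, size=50)        # 50 seasons
station = 0.8 * large_scale + 2.0 + rng.normal(0.0, 0.5, 50)  # local response

# Calibrate a linear transfer function on the historical overlap.
slope, intercept = np.polyfit(large_scale, station, deg=1)

# Apply it to a (hypothetical) future GCM projection of the same predictor.
gcm_future = rng.normal(loc=12.0, scale=1.5, size=50)         # warmer climate
local_future = slope * gcm_future + intercept

print(f"transfer function: y = {slope:.2f} x + {intercept:.2f}")
print(f"projected local mean: {local_future.mean():.1f} degC")
```

The key point is that the local information comes from the historical station data and the calibrated statistical link to a large-scale quantity that GCMs reproduce well, not from the GCM’s own representation of the local scale.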
There appear to be several different perspectives on the issues concerning downscaling, and it’s likely that we do not always understand each other within this discipline. For sure, there are lots of different opinions and approaches regarding downscaling, which may also give different answers.
Common limitations of RCMs
Results from RCMs frequently differ from the observed climate, and so-called bias-adjustment is often required to correct for systematic biases in RCM simulations of temperature and rainfall.
An important difference between ESD and bias-adjustment is that the latter doesn’t make use of the dependency between the large spatial scales, which are well reproduced by GCMs, and the local details. Bias-adjustment is also a controversial solution.
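One widely used family of bias-adjustment methods is empirical quantile mapping. The sketch below, on purely synthetic data (all numbers hypothetical), illustrates the mechanics – and also why it differs from ESD: it merely maps the RCM’s own distribution onto the observed one, without exploiting any large-scale dependency:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily temperatures: observations and a biased RCM
# simulation for the same historical period (synthetic, for illustration).
obs = rng.normal(loc=8.0, scale=4.0, size=3650)
rcm_hist = rng.normal(loc=10.5, scale=5.0, size=3650)  # too warm, too variable
rcm_scen = rcm_hist + 2.0                              # a simulated future

# Empirical quantile mapping: pass each scenario value through the
# historical RCM quantiles onto the corresponding observed quantiles.
quantiles = np.linspace(0.01, 0.99, 99)
q_rcm = np.quantile(rcm_hist, quantiles)
q_obs = np.quantile(obs, quantiles)
rcm_scen_adj = np.interp(rcm_scen, q_rcm, q_obs)

print(f"raw scenario mean:      {rcm_scen.mean():.1f} degC")
print(f"adjusted scenario mean: {rcm_scen_adj.mean():.1f} degC")
```

Note that values outside the calibration range are simply clamped to the outermost observed quantiles here – one of several practical choices that make bias-adjustment less innocent than it may look.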
The reasons for systematic biases in RCMs are unclear. One may be that RCMs are often physically inconsistent with respect to the driving GCMs, which may involve different outgoing longwave radiation (OLR) at the top of the atmosphere over the same atmospheric volume. This is also what we should expect from different rainfall patterns and cloud climates being simulated by the GCM and RCM, which imply differences in the vertical energy flow.
The atmospheric humidity within the same volume of air may also differ in the RCM and the GCM, and they often use different parameterisation schemes to represent small-scale processes, and usually different accounts of surface processes and aerosols.
Different kinds of information
Another point is that the average rainfall over a grid area of ~10 × 10 km², typically provided by an RCM, is expected to have different statistical properties from rain gauge data collected with cross-sections of the order of centimetres. Hence, the RCM results do not strictly represent the same aspects as those observed by the rain gauges.
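This scale mismatch is easy to demonstrate with synthetic data: averaging many point values over a box inflates the wet-day frequency and deflates the extremes compared with any single gauge. A hypothetical illustration (all parameters invented for the purpose):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily rainfall at 100 'point' locations within one grid
# box (synthetic, for illustration): skewed amounts, many dry days.
n_days, n_points = 3650, 100
wet = rng.random((n_days, n_points)) < 0.3      # ~30% wet days at each point
amounts = rng.gamma(shape=0.7, scale=12.0, size=(n_days, n_points))
point_rain = np.where(wet, amounts, 0.0)

gauge = point_rain[:, 0]             # a single rain gauge
box_mean = point_rain.mean(axis=1)   # what an RCM grid box represents

print(f"wet-day fraction: gauge {np.mean(gauge > 0.1):.2f}, "
      f"grid box {np.mean(box_mean > 0.1):.2f}")
print(f"99th percentile:  gauge {np.quantile(gauge, 0.99):.1f} mm, "
      f"grid box {np.quantile(box_mean, 0.99):.1f} mm")
```

The grid-box series drizzles almost every day and has much weaker extremes than the gauge, even though both describe “rainfall” in the same box – so comparing the two directly is comparing different statistical quantities.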
RCMs nevertheless have great value in the context of experiments and studies of how different phenomena respond to different boundary conditions, such as convection or how heatwaves are exacerbated by low soil moisture. They can also add value when used to address specific research questions or test hypotheses concerning regional climatic aspects.
A comprehensive downscaling approach
One question is whether the caveats with RCMs affect their ability to simulate climate change. It could be that all the members in the traditional Euro-CORDEX RCM ensemble have systematic biases with the same sign. A more comprehensive approach to downscaling can ensure more robust results, which involves combining RCMs and ESD.
RCMs and ESD have different strengths and weaknesses, which is a good reason to bring them together to overcome the said limitations. For instance, both ESD and RCMs assume stationarity – the former through the downscaling dependencies, and the latter through the upscaling of unresolved processes (‘parameterisation schemes’).
Moreover, RCMs and ESD add information on regional scales based on different and independent sources. The probability that both are wrong in the same way is smaller than the probability that either one of them is wrong. Furthermore, they complement and support each other. RCMs offer descriptions that are not available from ESD, such as fluxes and complete spatial coverage.
Combining results from both RCMs and ESD means adding more information to the equation, and hence enhancing our understanding of how robust the results are and what uncertainties are present.
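In practice, the comparison can be as simple as checking whether change estimates from the two independent lines of evidence agree. A toy illustration with made-up numbers (not real projections):

```python
import numpy as np

# Hypothetical local warming estimates from two independent downscaling
# approaches (synthetic numbers, for illustration only).
rcm_changes = np.array([1.8, 2.1, 2.4, 1.9, 2.6])            # RCM ensemble
esd_changes = np.array([2.0, 2.3, 1.7, 2.2, 2.5, 1.9, 2.4])  # ESD multi-model

for name, x in (("RCM", rcm_changes), ("ESD", esd_changes)):
    print(f"{name}: {x.mean():.1f} +/- {x.std(ddof=1):.1f} degC")

# Agreement between the two independent lines of evidence suggests the
# result is robust; a pooled estimate then draws on all the information.
pooled = np.concatenate([rcm_changes, esd_changes])
print(f"pooled: {pooled.mean():.1f} degC (n = {pooled.size})")
```

If the two sets disagreed strongly instead, that disagreement would itself be valuable information about the uncertainty – something a single approach cannot reveal.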
Robust information when downscaling is applied to a large number of GCMs
It is important to ask exactly what information is needed for climate adaptation and exactly how it is used. For instance, relying on the results of a single dynamical downscaling exercise with one RCM driven by one GCM is clearly unwise, because if we chose another GCM/RCM combination to downscale, we would get a different answer.
In fact, involving only a small set of driving GCMs (n < 30) is likely to give misleading results because of the pronounced presence of stochastic fluctuations (“natural variability”) and ‘the law of small numbers’. This is true even if the models themselves were perfect: it’s a bit like drawing conclusions from too small a statistical sample.
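A simple Monte Carlo experiment shows why small ensembles mislead. Suppose (hypothetically) that the forced local change is +2.0 °C but each model run also carries internal variability with a spread of 1.5 °C; the spread of the ensemble-mean estimate then shrinks as the ensemble grows:

```python
import numpy as np

rng = np.random.default_rng(2)

# All numbers hypothetical, for illustration: a 'true' forced change plus
# internal variability superimposed on each individual model run.
true_change, noise_sd, n_trials = 2.0, 1.5, 10000

spreads = {}
for n_models in (5, 30, 100):
    # Each trial estimates the change as the mean of an n-member ensemble.
    noise = rng.standard_normal((n_trials, n_models)).mean(axis=1)
    estimates = true_change + noise_sd * noise
    spreads[n_models] = estimates.std()
    print(f"n = {n_models:3d}: sampling spread "
          f"+/- {spreads[n_models]:.2f} degC")
```

With five members, the estimated change can easily be off by more than half a degree purely by chance, consistent with the familiar 1/√n behaviour of sampling error.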
With ESD, it’s possible to downscale large multi-model ensembles of GCMs because ESD doesn’t require as much computational power as the RCMs. ESD is usually computationally cheap and can often be carried out on a laptop while an RCM often requires high-performance computers (HPC).
Most climate services seem to be limited to one approach
Many national climate services presently provide regional climate information based entirely on the Euro-CORDEX RCM ensemble, which excludes ESD. We can get some idea of the use of local climate projections based solely on RCMs from Climate-Adapt.
Copernicus C3S also presents downscaled results only based on RCMs, which in my opinion may give misleading information because of the reasons stated above.
It is also unfortunate that there are two separate CORDEX white papers, one written on RCMs and another on ESD, because they may reinforce “silo thinking” within the downscaling community. Such a limited representation is like inviting guests for a big dinner and only serving potatoes.
My point is that, in order to get robust and the best possible information about how local climates may change in the future, it’s important to include both ensembles of RCMs and ESD applied to large multi-model ensembles.
Another point is that climate adaptation should not only involve this linear downscaling chain, but also a “bottom-up” approach with sensitivity analysis and stress testing. The latter may not be scientific, but may still provide useful input to adaptation strategies.
Quality and reliability can be enhanced through scientific debate
I think we need more scholarly discussions and more scientific debate about the use of downscaled climate information because we have an increasing responsibility for providing decision-makers with the best guidance on how to use it. Hopefully there will be more debates after the pandemic.
Such professional debates should include climate scientists, the adaptation community, and practitioners. This question is relevant for the said climate change adaptation summits, and for this reason, it is important that scientists from organisations such as Copernicus C3S, NASA, and WCRP/CORDEX are present at such meetings and give keynote presentations about the state of the science on modelling and downscaling. Also, best practices could be summarised in handbooks on how to use downscaled projections.
I also think there needs to be more discussion within the downscaling community, and I recently published a discussion paper in the journal GMD which I hope can stimulate some debate about downscaling and climate adaptation (critical comments are encouraged – the interactive public discussion is open until 03 Sep). Debates about downscaling should be maintained and evolve over time. One topic to discuss further could be protocols for the evaluation of regional climate information.
We should expect to learn continuously as we go along, as models improve over time with developments in computing capacity. New data also keep coming in from observations, along with new cases of extreme events. Hence, climate adaptation should be regarded as a moving target, and the debates should be expected to be ongoing.
The post Climate adaptation should be based on robust regional climate information first appeared on RealClimate.