Different Strokes for Different Folks—Not Any More, Say Scientists at the UK Met Office

Morcrette, C. J., Met Office - UK

Cloud Processes

Cloud Life Cycle

Morcrette CJ, EJ O'Connor, and JC Petch. 2012. "Evaluation of two cloud parametrization schemes using ARM and Cloud-Net observations." Quarterly Journal of the Royal Meteorological Society, 138(665), 10.1002/qj.969.


Errors in a cloud simulation can manifest themselves in different ways.


Both climatologists and meteorologists care about the accuracy of cloud predictions. They are also both interested in quantifying the skill of any predictions of cloud cover. However, the most important aspect of what each of them wants to get right might be different, and the metrics they use when evaluating their simulations will reflect that.

Atmospheric models, whether used for climate or weather, all try to simulate the Earth's atmosphere and the clouds within it. For a meteorologist, the goal is to predict the weather. So in a 72-hour forecast, the goal is for the model to forecast the location and amount of cloud on an hourly basis.

A climatologist, on the other hand, may use an atmospheric model—not necessarily the same ones as the meteorologist, but there are many common features—to predict the average weather over long time periods, such as seasons, and over large geographical areas, such as cloud patterns over the winter in the US or over Europe in the spring.

In this example, both parties care about the clouds and want to assess the skill of the predictions they use. But a certain aspect of the cloud forecast that matters to the meteorologist may not matter so much to the climatologist and vice-versa. Herein lies the genesis of different metrics—each metric measuring one aspect of the cloud prediction.

In their paper in the Quarterly Journal of the Royal Meteorological Society, scientists at the United Kingdom's Meteorological Office argue that any cloud cover prediction can be assessed in terms of three types of error: an error in the frequency of occurrence of clouds in general, an error in the cloud cover when clouds are present, and an error in the timing of the clouds. Looking at a range of metrics, and their associated errors and biases, is an essential first step to meaningful comparisons between different models. Such comparisons, using a common set of metrics, could lead to a much better understanding of the cloud system, one of the largest sources of uncertainty in climate and weather models.
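As a rough illustration of this three-way decomposition (a minimal sketch, not the paper's actual metrics), one could compare matched hourly time series of observed and modelled cloud fraction and separate how often cloud occurs, how much cloud there is when it occurs, and whether it occurs at the right times. The function name, threshold, and timing measure below are assumptions chosen for clarity:

```python
def cloud_error_components(obs, mod, threshold=0.05):
    """Decompose cloud-cover error into three illustrative components.

    obs, mod: matched hourly cloud fractions (0..1).
    Returns (frequency_error, amount_error, timing_score):
      frequency_error - difference in how often cloud occurs at all
      amount_error    - difference in mean cloud fraction when cloud is present
      timing_score    - fraction of hours where model and observations agree
                        on cloud presence/absence (1.0 = perfect timing)
    """
    n = len(obs)
    obs_cloudy = [x > threshold for x in obs]
    mod_cloudy = [x > threshold for x in mod]

    # Error type 1: frequency of occurrence of cloud in general.
    frequency_error = sum(mod_cloudy) / n - sum(obs_cloudy) / n

    # Error type 2: mean cloud cover during cloudy hours only.
    mean_obs = sum(x for x, c in zip(obs, obs_cloudy) if c) / max(sum(obs_cloudy), 1)
    mean_mod = sum(x for x, c in zip(mod, mod_cloudy) if c) / max(sum(mod_cloudy), 1)
    amount_error = mean_mod - mean_obs

    # Error type 3: timing, here measured simply as hour-by-hour agreement
    # on cloud presence (one possible choice among many).
    timing_score = sum(a == b for a, b in zip(obs_cloudy, mod_cloudy)) / n

    return frequency_error, amount_error, timing_score
```

For example, a model that produces the right amount of cloud for the right number of hours, but at entirely the wrong hours, would score zero frequency and amount error yet a poor timing score, which is exactly the kind of distinction a meteorologist cares about more than a climatologist.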

When developing an atmospheric model for use in both climatological and meteorological applications, that is, "a seamless prediction system, this methodology is especially useful as it allows us to look at both weather and climate performance metrics," writes Cyril Morcrette, lead author of the paper.

The method they describe for assessing cloud predictions makes use of "the synergy between radar, lidar, and radiometer [instruments used to measure weather and climate parameters], which has been used to retrieve profiles of liquid and ice water content from cloud-observing sites at various locations around the world," writes Morcrette. These locations include the "ground-based remote sensing sites operated as part of the ARM Climate Research Facility and Cloud-Net, which have collected near-continuous observations of cloud for a number of years."