First SRNWP Workshop on Statistical Adaptation

4-6 December 2000, ZAMG, Vienna

The Workshop took place on the premises of the Central Institute for Meteorology and Geodynamics (ZAMG).

Specialists from 15 European National Weather Services (NWS) attended the Workshop.

I shall try in the following lines to summarize the current state and tendencies of the statistical treatments applied to NWP model results.

The state of this field of activity

Today, almost every NWS in Europe applies some kind of statistical treatment to its deterministic forecasts and/or to its ensemble forecasts. These treatments are, in some NWS, limited to a small, even a very small, number of parameters, while in others they cover all the parameters delivered by the model or by the ensemble.

It can be said that the statistical adaptation of model results is an active field among the European NWS, although it does not have the size, in terms of manpower, of the data analysis or model development fields. Another important point is that this field is not supported and promoted by university research in the way that data analysis and the dynamical and physical aspects of the models are.

The tools

All the classical tools that have been around for, say, the last 25 years are still in use: Model Output Statistics (MOS), the Perfect Prognosis Method (PPM or, more simply, PP) and, at least at one NWS, the method of analogues. Kalman Filtering (KF) has also become a classical and widely used tool.

There has been a considerable development in the choice and number of predictors: in the most comprehensive schemes their number has grown substantially, and sophisticated predictors such as, for example, the Q-vectors can now be found. I have, however, not seen potential vorticity used as a predictor.

What is new, or relatively new, is the combination of these tools: in one NWS, the MOS results are Kalman filtered. This can be very useful when the model has changed and the new MOS statistical relations are not yet ready. In another NWS, the ensemble forecasts are corrected by a combination of PPM and KF.

The use of the neural network technique does not seem to have spread as widely as one might have expected: only two NWS reported results based on this method, although the technique would seem well suited to building classifiers for the ensemble forecasts.

It is astonishing that fuzzy logic was never mentioned as a road to be explored. This may be due to the facts mentioned above: on the one hand, universities are not active (or only marginally active) in statistical adaptation and, on the other, the groups working in this field at the NWS are very small.

The applications

Obviously, one of the main applications of MOS remains the determination of parameters generally not predicted by the numerical models, first among them visibility.

Concerning the treatment of the DMO (Direct Model Output) parameters, mainly MOS and KF are used, primarily for the 2 m temperature and for precipitation. As far as the 2 m temperature is concerned, it is astonishing that ``snow cover'' is never considered as a predictor, although its influence on the 2 m temperature is very large.

General discussion

A general discussion took place at the end of the meeting. The main points discussed were the following:

a). What is best: KF or MOS/PPM?

The majority, according to my impression, thought that this question is not worth asking: both techniques have their advantages and disadvantages, and they should be chosen according to what we want, what we need and what we can do. If we want to forecast visibility, we normally have no choice: we must use MOS. But if we want to correct the temperature bias due to the altitude difference between an observing station and its corresponding grid point, KF, which is much simpler to develop than MOS or PP, will certainly do the job.
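As an illustration (not presented at the Workshop), a KF scheme of the kind mentioned above can be reduced to a scalar Kalman filter that recursively learns the station bias from the daily difference between observation and forecast. The noise variances `q` and `r` below are assumed values chosen for the sketch, not operational settings.

```python
# Minimal scalar Kalman filter estimating the bias of a 2 m temperature
# forecast at one station. The bias is modelled as a slowly varying constant;
# q (process noise) and r (observation noise) are illustrative assumptions.

def kalman_bias_update(bias, p, innovation, q=0.1, r=1.0):
    """One update step; 'innovation' is observed minus forecast temperature (K)."""
    p = p + q                               # predict: uncertainty grows each day
    k = p / (p + r)                         # Kalman gain
    bias = bias + k * (innovation - bias)   # correct the bias estimate
    p = (1.0 - k) * p                       # update the error variance
    return bias, p

# Usage: the model is about 2 K too cold at this station; the filter learns it.
bias, p = 0.0, 1.0
for obs_minus_fcst in [2.1, 1.8, 2.3, 1.9, 2.0]:
    bias, p = kalman_bias_update(bias, p, obs_minus_fcst)

def corrected_forecast(raw):
    """Add the learned bias to the raw DMO value."""
    return raw + bias
```

Because the filter needs only a running estimate and its variance, it adapts within days after a model change, which is precisely the advantage over a MOS regression that must be retrained on a long sample.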

b). Do we need a different approach for models with very high resolutions?

What is certain is that models with very high resolutions will still need statistical adaptation, at least for the convective precipitations. With grid point distances of a few km, the location of a shower at a given grid point rather than at another one in its neighbourhood will be a random process. Random or arbitrary information should not be given to model users. This calls for a statistical treatment in which the occurrence of showers at a given grid point is expressed as a probability. Such an attempt is already under way at one NWS in collaboration with a university.
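One simple way to express such a treatment (my own sketch, not the scheme developed at the NWS mentioned above) is a neighbourhood probability: the fraction of grid points around a location at which the model precipitation exceeds a threshold. All numbers below are illustrative assumptions.

```python
# Neighbourhood probability of a shower: fraction of grid points within
# `radius` of point (i, j) whose precipitation exceeds `threshold` (mm).
# Radius and threshold are illustrative choices, not operational values.

def shower_probability(precip, i, j, radius=2, threshold=1.0):
    """Return the fraction of neighbouring grid points exceeding the threshold."""
    hits = total = 0
    for r in range(max(0, i - radius), min(len(precip), i + radius + 1)):
        for c in range(max(0, j - radius), min(len(precip[0]), j + radius + 1)):
            total += 1
            hits += precip[r][c] > threshold
    return hits / total

# A 5x5 field where the model places one shower cell near the centre:
field = [[0, 0, 0, 0, 0],
         [0, 0, 3, 0, 0],
         [0, 2, 5, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]
p = shower_probability(field, 2, 2)  # 3 of the 25 neighbouring points exceed 1 mm
```

The exact grid point carrying the shower then no longer matters: every point in the neighbourhood receives the same, non-arbitrary probability.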

For the other parameters, it was not thought that models with very high resolutions will need a fundamentally different kind of statistical adaptation.

c). Products to present to the forecasters: the DMO and the statistically adapted DMO or only the statistically adapted DMO?

This point gave rise to a long and very lively discussion (which would have been perhaps better suited for the WGCEF [Working Group on Cooperation between European Forecasters]).

The opinions were clearly divided into two sides.

One side claimed that the forecaster on duty already has too many products and not enough time to consider them all. Thus, only the ``best products'' (that is, the statistically adapted model results) should land on his desk.

The other side argued that if the number of products is severely restricted, the forecaster will not have the opportunity to sharpen his judgement. He will rely entirely on the products presented and will lose his ability to forecast strong or extreme events, because the statistical treatment has a tendency to pull the extremes towards the mean.
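This pull towards the mean can be demonstrated with a few lines of code (my own illustration, not material from the Workshop): a least-squares correction trained on imperfectly correlated forecasts necessarily produces predictions less variable than the observations, so the largest values are damped.

```python
# Demonstration that a linear statistical correction damps extremes.
# Synthetic data: 'truth' is the observed value, 'fcst' a noisy forecast.
# All distributions and sample sizes are illustrative assumptions.

import random
import statistics

random.seed(0)
truth = [random.gauss(0.0, 3.0) for _ in range(10000)]   # observations
fcst = [t + random.gauss(0.0, 2.0) for t in truth]       # noisy forecasts

# Ordinary least-squares regression of truth on the forecast:
mx, my = statistics.mean(fcst), statistics.mean(truth)
sxy = sum((x - mx) * (y - my) for x, y in zip(fcst, truth))
sxx = sum((x - mx) ** 2 for x in fcst)
slope = sxy / sxx
pred = [my + slope * (x - mx) for x in fcst]

# The corrected forecasts are less variable than the observations,
# i.e. the regression pulls the extremes towards the mean:
damping = statistics.stdev(pred) / statistics.stdev(truth)
```

With imperfect correlation the slope is smaller than one, so `damping` stays below one; a perfectly correlated forecast would give a ratio of one and no loss of extremes.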

d). Should we give our forecasts in probabilities?

The majority thought - but it was a meeting of scientists working with statistics! - that we should do it. An interesting statement was the following: warnings - for example gale force wind warnings or very intense precipitation warnings - should never be given in probabilistic terms, as this would weaken the notion of danger among the population. If the probability of a gale force wind is 85%, the danger must be announced as certain, because the 15% probability of non-occurrence would lead too many people to take no safety measures.

e). Collaboration or competition?

A participant from a private weather service provider made a strong plea to organise a competition between the providers - private and public - of weather forecasts. He suggested that this competition could be organised in the framework of EUMETNET.

This request allowed the SRNWP Coordinator to express his views on this issue.

I am not convinced that this would be a good thing to do, because the NWS and the private providers would tune their models to the statistical scores chosen for the competition. The probability is high that the best overall score would be achieved by a model producing very smooth fields, a property that would be detrimental to the forecasting of extreme events. Moreover, the best statistical score does not necessarily mean the best forecasts for the users. For example, if I had to forecast the wind at an airport, I would prefer, for the safety of the passengers and crew members, a model with a rather poor False Alarm Rate score for strong winds: over-warning is far less costly there than missing a dangerous event.

f). Future of this Lead Centre (LC) and next meeting

Everybody agreed that the SRNWP two-year rule for LCs should also be applied to the LC for Statistical Adaptation. Consequently, the next meeting is foreseen for November/December 2002, most probably in Vienna.

The Coordinator proposed to enlarge the scope of the activities covered by this LC to also address the statistical and dynamical follow-up models, such as the 1D models (e.g. for fog) or the models used for road condition forecasts or for the computation of trajectories. As his proposal was accepted by the participants, it will be submitted to the next SRNWP Annual Assembly together with the proposal to change the name of the LC for Statistical Adaptation to LC for Statistical and Physical Adaptation.

In summary, this First SRNWP Workshop on Statistical Adaptation was an interesting and successful meeting. It proved that setting up a Lead Centre for Statistical Adaptation some years ago was a sensible initiative.

Jean Quiby

Coordinator of the SRNWP Network

Last Modified: 11:08am MET, December 22, 2000