Third SRNWP Workshop on Statistical Adaptation

1-2 June 2005, Vienna (Austria)

 

 

Summaries of the Presentations

 

 

 

Session A:   Ensemble Forecasts

 

J. Kilpinen, FMI

Calibration of EPS winds

Experiments with very recent data (October 2004 - March 2005)

Correction of the 10m winds, not only for the deterministic forecast but also for the EPS winds.

Determination of probabilistic forecasts from deterministic forecasts by use of two methods:

- forecast error distribution

- neighbourhood method (the Suzanne Theis method).

No calibration of the EPS in the traditional way, where a large sample of past forecasts and observations is needed. Calibration is made by Kalman filtering: first the ensemble mean is Kalman filtered, and then each member is Kalman filtered using the same coefficients.
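
As a minimal illustration of this two-step procedure, here is a Python sketch in which a scalar Kalman filter tracks the bias of the ensemble mean and the same correction is then applied to every member (all data and names are invented for the example; this is not the FMI code):

import numpy as np

def kalman_bias_filter(forecasts, observations, q=0.01, r=1.0):
    """Track a slowly varying forecast bias with a scalar Kalman filter.

    State: the bias b, modelled as a random walk with process noise q.
    Measurement: forecast - observation, with noise variance r.
    """
    b, p = 0.0, 1.0                     # initial bias estimate and its variance
    biases = []
    for f, o in zip(forecasts, observations):
        p = p + q                       # prediction: bias persists, uncertainty grows
        k = p / (p + r)                 # Kalman gain
        b = b + k * ((f - o) - b)       # update with the newly observed error
        p = (1.0 - k) * p
        biases.append(b)
    return np.array(biases)

# Toy data: a biased 51-member EPS for the 10 m wind.
rng = np.random.default_rng(0)
obs = 5.0 + rng.normal(0.0, 1.0, 200)
members = obs[:, None] + 2.0 + rng.normal(0.0, 1.5, (200, 51))

bias = kalman_bias_filter(members.mean(axis=1), obs)
calibrated = members - bias[:, None]    # same coefficients for every member
print("raw bias:", (members.mean(axis=1) - obs).mean().round(2))
print("calibrated bias (last 50 cases):",
      (calibrated.mean(axis=1) - obs)[-50:].mean().round(2))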

Kalman filtering is able to reduce biases and produce better probability forecasts for most of the 6 stations considered in terms of ROC curves, ROC areas and Brier Skill Scores.

 

A. Cofino et al., University of Cantabria (Spain)

Analysing and downscaling ensemble forecasts with topology-preserving clustering methods

The Institute for Artificial Intelligence and Meteorology of the University of Cantabria (Spain) was, with three high-level presentations, one of the major contributors to the Workshop.

A principal component analysis of ECMWF grid-point values of several parameters at several levels over Iberia is performed in order to reduce the number of predictors from more than 6000 to about 600. Then a clustering based on these principal components (PC) is performed. For each cluster, a mean precipitation amount (mm/24h) has been computed from the observations.

In operational mode, there is a sharp drop in the quality of the precipitation forecasts after day +3 due to the loss of skill of the ECMWF forecasts. To alleviate this problem, the downscaling has been applied to the 51 members of the ECMWF EPS. Each EPS member is assigned to one cluster in PC space. Believing that the error is directly correlated with the spread of the EPS members, the authors want to use the spread as a measure of predictability. In their case, the spread is immediately given by the distribution of the 51 members over the clusters. To measure this dispersion, they use the Eckert-Cattani "Self-Organising Map" algorithm and the "Generative Topographic Map" concept, which is a probabilistic re-formulation of the Self-Organising Map algorithm.
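
A schematic Python version of the assignment-and-dispersion step (with random stand-ins for the trained prototypes and for the PC-projected EPS members; purely illustrative):

import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: 100 cluster prototypes in a 600-dimensional PC space
# (in the talk these come from a SOM/GTM trained on ECMWF fields over Iberia).
n_clusters, n_pcs = 100, 600
prototypes = rng.normal(size=(n_clusters, n_pcs))

# The 51 EPS members projected onto the same PCs (toy data here).
members = rng.normal(size=(51, n_pcs))

# Assign each member to its nearest prototype in PC space.
d2 = ((members[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
assignment = d2.argmin(axis=1)

# Dispersion of the ensemble over the clusters: occupancy entropy
# (one of several possible spread measures).
counts = np.bincount(assignment, minlength=n_clusters)
p = counts[counts > 0] / counts.sum()
entropy = -(p * np.log(p)).sum()
print(f"members spread over {len(p)} clusters, entropy = {entropy:.2f}")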

 

D. Cattani and T. Comment, MeteoSwiss

Short presentation about confidence index

How to inform the public about the uncertainty of a weather forecast? Or: How much trust should the public give to a particular weather forecast?
The solution of MeteoSwiss lies in the definition of a "confidence index".
The confidence index (CI) is based on the dispersion of the ECMWF EPS members in an adaptive table of 144 weather situations. The weather situations are defined by their respective H500hPa and T850hPa patterns over Europe and Eastern Atlantic.
The confidence index is determined with respect to a climatological dispersion. If the EPS dispersion is larger than the climatological one, the ECMWF deterministic forecast is considered totally unreliable and receives the mark zero. A confidence index of 10 corresponds to certainty.
But since this CI relates to a large area of Europe, a low index may still correspond to the same kind of weather over the small Swiss area, or a high index to different types of weather over Switzerland. In order to regionalize the CI, the dispersion of the EPS is considered for 3 specific parameters (temperature, cloudiness, precipitation) at 3 grid points over Switzerland. These dispersions are again compared to climatology. A new CI is then calculated as a weighted sum of the CI over Europe and the CI of each parameter.
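
A toy version of the final weighting step (the marks and weights below are invented; the actual MeteoSwiss weights were not given in the talk):

def confidence_index(ci_europe, ci_params, weights):
    """Regionalized CI as a weighted sum of the European CI and the
    per-parameter CIs over Switzerland (all on a 0-10 scale)."""
    ci = weights["europe"] * ci_europe
    ci += sum(weights[name] * value for name, value in ci_params.items())
    return round(ci)

print(confidence_index(
    8, {"T": 7, "cloudiness": 5, "precipitation": 4},
    {"europe": 0.4, "T": 0.2, "cloudiness": 0.2, "precipitation": 0.2}))
# -> 6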

 

M. Vallée, Environment Canada, RPN, Dorval (Canada)

Verification of weather elements forecast by the Canadian EPS

It has become a nice tradition that scientists of Environment Canada, more precisely from "Recherche en Prévision Numérique" in Dorval, cross the pond to visit the different EUMETNET SRNWP Workshops.

For the first time in the series of our Statistical Adaptation Workshops, RPN was represented. Marcel Vallée reviewed the past and present EPS work of RPN and informed us about their future plans.

He also presented statistical scores for the verification of EPS forecasts which are not used in Europe (at least not by the National Meteorological Services), such as the "Probability Score", the "Reduced Centred Random Variable" and the "Continuous Ranked Probability Score".
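
For reference, the Continuous Ranked Probability Score of a single ensemble forecast can be estimated from its kernel form, CRPS = E|X - y| - 0.5 E|X - X'|, where X and X' are independent draws from the ensemble and y is the observation; a short sketch:

import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of one ensemble forecast against one observation."""
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    term2 = 0.5 * np.abs(members[:, None] - members[None, :]).mean()
    return term1 - term2

print(crps_ensemble([2.0, 3.0, 4.5, 5.0], 3.2))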

 

 

Session B:   Dynamical Adaptation

 

T. Haiden, ZAMG

INCA - High resolution downscaling of NWP-model forecasts

Thomas presented the "Integrated Nowcasting through Comprehensive Analysis" (INCA) system presently in development at the ZAMG.

Some characteristics of this system:

- Effort has been made to derive the best possible precipitation analyses by combination of the radar and rain gauge information

- Precipitation up to +6 hours is forecast by extrapolation (of the precipitation areas of the analyses) based on motion vectors determined by precipitation pattern correlation.

Tests show that this forecast method is better than the Aladin precipitation forecasts up to +4 / +5 hours.
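
A crude sketch of such correlation-based extrapolation, restricted to a single integer motion vector on toy fields (operational trackers work with local vectors and sub-grid shifts):

import numpy as np

def motion_vector(prev, curr, max_shift=5):
    """Integer (dy, dx) shift maximizing the pattern correlation between
    two successive precipitation analyses."""
    best, best_corr = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            corr = np.corrcoef(shifted.ravel(), curr.ravel())[0, 1]
            if corr > best_corr:
                best, best_corr = (dy, dx), corr
    return best

def extrapolate(curr, motion, steps):
    """Advect the current analysis 'steps' intervals along the motion vector."""
    dy, dx = motion
    out = curr
    for _ in range(steps):
        out = np.roll(np.roll(out, dy, axis=0), dx, axis=1)
    return out

# Toy case: a rain area moving one grid point east per analysis interval.
field = np.zeros((40, 40))
field[15:25, 5:12] = 5.0
prev, curr = field, np.roll(field, 1, axis=1)
mv = motion_vector(prev, curr)          # -> (0, 1)
forecast_plus3 = extrapolate(curr, mv, steps=3)
print("estimated motion (dy, dx):", mv)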

The 2-D cloudiness analyses are determined by using the cloud type analyses produced operationally by EUMETSAT (from MSG) and the sunshine intensities measured by the Austrian automatic observing network TAWES.

The 3-D temperature analyses are based on short-range ALADIN forecasts which are modified with respect to the 1-km orography of the INCA grid. The INCA temperature forecasts are clearly better than the Aladin ones up to +7 / +8 hours and remain slightly better up to +12 hours.

The 1-km winds are determined in the following way: after a dynamical adaptation of the 9.6-km winds from the Aladin forecasts to a 2.3-km mesh, the winds of the TAWES observing stations are nudged simultaneously with a downscaling to the 1-km INCA grid.

 

P. Crochet, Icelandic Meteorological Office

Precipitation mapping in Iceland using the linear theory model of orographic precipitation

A comparison of the yearly accumulated precipitation between the observations and ERA-40 shows large differences in Iceland according to the type of station. For stations in open terrain, ERA-40 is very good, but it shows large precipitation overestimates for stations in rain shadow. For stations where precipitation is enhanced by orography, this enhancement is too weak in ERA-40.

It must not be forgotten that a rain gauge network systematically underestimates precipitation due to evaporation, splashing and aerodynamic effects, primarily for snow.

In order to improve the ERA-40 precipitation over Iceland and, consequently, the ECMWF precipitation forecasts, the ERA-40 precipitation has been corrected by use of a linear model of orographic precipitation [Smith and Barstad, 2004]. The first results look very promising.

 

F. Wimmer and T. Haiden, ZAMG

Analysis of residual errors in T2m nowcasting

For the station Vienna in February 2003, persistence temperature forecasts are better than the Aladin DMO up to +4 hours, and the adapted climatology is better up to +10 hours. This shows that there must be room for improvement by use of an adequate method.

After introduction of an advection algorithm, the number of forecasts improved by this algorithm largely exceeds the number of degraded ones for the period Nov. 2004 - March 2005.

 

 

Session C:   Operational Applications

 

S. Roquelaure and T. Bergot, Météo-France

Predictability of fog and low clouds at Paris CDG airport

Use of the 1-D version of the ISBA soil model and of the 1-D COBEL model.

The initialization of the low clouds and fog is made by explicit introduction into the analysis.

Several initial conditions have been tested and the results of the integrations compared:

- with initialization of low clouds and fog

- without initialization of low clouds and fog

- with initialization of low clouds and fog and cloudiness from the Aladin model

- with initialization of low clouds and fog and persistence of the observed cloudiness at initial time.

These tests showed a large spread for the downward IR fluxes. The best set-up has been the last one (persistence of the observed cloudiness) and the worst the one without initialization (its bias was 20 times larger).

It is interesting to note that the weakest forecasts are those for the afternoon (15 UTC): highest false alarm rate and lowest hit rate.

 

M. Rohn et al., DWD

Objective optimisation of local forecast guidances

The DWD presented a very comprehensive review of the adaptation and optimisation of its NWP forecast results.

From their global model GME: 3500 point forecasts worldwide, each with 21 surface weather elements, most of them MOS processed.

They also work on objective optimisation, and one of the methods used is to combine model results by weighted averages. Example for 2 models (global and regional): the weights of each model vary as a function of the forecast range.

They also intend to use a Bayesian Model Averaging scheme in order to use the best model - not known in advance - for any forecast range.
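
A minimal sketch of the weighted-average idea for two models (the hand-over range and weights below are invented for illustration):

import numpy as np

def combine(global_fc, regional_fc, lead_hours):
    """Blend two forecasts with lead-time dependent weights: trust the
    regional model early, the global model late, with a linear hand-over
    between +12 h and +48 h (illustrative choice)."""
    w_regional = np.clip((48.0 - lead_hours) / 36.0, 0.0, 1.0)
    return w_regional * regional_fc + (1.0 - w_regional) * global_fc

for lead in (6, 24, 48, 72):
    print(lead, combine(10.0, 12.0, lead))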

DWD also works intensively in the nowcasting range. Already developed is "SatelliteWeather": analysis with METEOSAT and SYNOP; forecast by extrapolation.

Planned is BlitzMOS whose predictand will be the probability of lightning.

 

H. Hoffmann and V. Renner, DWD

Interpretation of the new DWD high resolution LMK

Present configuration of the LMK: 2.8 km grid resolution, 50 levels, forecast lead time: 18 hours.

High-resolution numerical weather forecasts include noticeable stochastic elements already in the short range. Therefore, direct model output for deterministic forecasts should be transformed in order to suppress essentially unpredictable small-scale structures.

Probability information for the exceedance of given thresholds should be derived by statistical means.

The following methods are planned to reach these aims:

- Derivation of statistical (probabilistic) forecasts from a single (deterministic) model integration by means of the neighbourhood method (NM)

- As there will be 8 LMK integrations per day (every 3 hours), use of time-lagged ensembles.
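
A toy sketch combining the two ideas above - exceedance probabilities estimated from a spatio-temporal neighbourhood pooled over time-lagged runs (grid size, threshold and data are invented):

import numpy as np

def exceedance_probability(fields, threshold, half_width=2):
    """Neighbourhood-method probability of exceeding 'threshold'.

    fields : array (n_runs, ny, nx) of model outputs valid at the same
             time, e.g. successive time-lagged LMK runs.
    The probability at a grid point is the fraction of points in its
    spatio-temporal neighbourhood that exceed the threshold.
    """
    n_runs, ny, nx = fields.shape
    exceed = (fields > threshold).astype(float)
    prob = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - half_width), min(ny, j + half_width + 1)
            i0, i1 = max(0, i - half_width), min(nx, i + half_width + 1)
            prob[j, i] = exceed[:, j0:j1, i0:i1].mean()  # pool runs and space
    return prob

rng = np.random.default_rng(3)
fields = rng.gamma(2.0, 3.0, size=(3, 50, 50))   # three time-lagged runs
p = exceedance_probability(fields, threshold=10.0)
print("max neighbourhood probability:", p.max().round(2))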

First results based on two short periods of time:

= Deterministic precipitation forecasts:

Neighbourhood averaging (space and time) not better than simple spatial averaging over 5x5 or 15x15 grid points. Averaging over 5x5 points better than over 15x15.

For some of the elements (e.g. precipitation, gusts) a re-calibration of the distributions of the smoothed field towards the distribution of the original field will be done.

= Probabilistic precipitation forecasts:

Quality improves by increasing the size of the spatio-temporal neighbourhood. The optimal neighbourhood has not yet been determined.

Increasing the temporal neighbourhood size leads to better results than increasing the spatial neighbourhood (with the same number of points in the spatio-temporal neighbourhood).

 

H. Petithomme, Météo-France

New operational methods and forecast parameters at Météo-France

The author reviewed the operational production at Météo-France.

Postprocessing is made for 2500 sites in France and up to 6000 worldwide. The method used depends on the parameter: linear regression + filtering for T2m and U; linear discrimination for dd/ff, vis and N.

Wind gusts:

Computed for 374 sites over France. 2 thresholds: 28 kt and 43 kt.

Several methods were tested: regression, linear discriminant analysis, pseudo-PPM, direct wind gust computation. In the predictor selection procedure, the surface wind stress came first, which points to turbulence produced by wind shear as the main cause of wind gusts.

We can say that

- the higher threshold (43 kt) is much more difficult to predict

- regression gives poor results

- high HR can be achieved (for example with linear discriminant analysis), but it is always linked with a high FAR.

Low visibilities:

Five different classes are forecast with linear discriminant analysis at 16 locations over France.

 

F. Schubiger, MeteoSwiss

Short-report of some operational adaptations at MeteoSwiss with aLMo

MeteoSwiss presented an overview of its presently operational statistical and dynamical adaptations applied to its high-resolution model aLMo.

Statistical adaptations:

- Kalman filtering: T2m, Td2m

- Verification of the precipitation by averaging over 5 and 13 grid points around the observation point

- Verification of the cloudiness by averaging over all grid points within 30 km of the observation point

Dynamical adaptation:

- Wind gust computation with the Brasseur method

Other products:

For airplane icing warnings: production of

- liquid water content charts at 700 hPa

- temperature charts at 700 hPa.

Super-cooled cloud droplets play a very important role in aviation safety as they are the cause of air-plane icing.

Forecast maps showing the amount of cloud liquid water can be of great importance for aviation. Icing is most dangerous at temperatures of -2/-3 degrees, when glazed, transparent ice forms. At -6/-7 degrees, rimed, opaque ice forms, which is less dangerous as it remains on the edges of the wings and does not spread over the wings as glazed ice does.

 

 

Session D:   Statistical Adaptation  /  Neural Networks

 

A. Manzato

Short term rainfall forecasts from sounding-derived indices using neural networks

In the plain of Friuli-Venezia-Giulia in Northern Italy, there is a radio-sounding (RS) station (Udine-Campoformido) and 15 automatic observing stations.

The RS station makes 4 soundings a day. Each sounding is associated with the maximum of the 15 6-hourly accumulated precipitations measured by the automatic stations (the 6-hourly periods start at sounding launch times). Only precipitation events with at least 5mm in 6h at at least one of the 15 stations are considered.

For each RS of Udine-Campoformido, a very large number of indices (such as CAPE and the K index) and indications (such as the height of the tropopause and the lifting condensation level) are recorded.

These indices will form the input of a neural network whose output will be the maximum of precipitation to be expected for the next 6 hours at at least one of the 15 observing stations.

The set-up and the computation of the neural network are explained in great detail in the presentation.
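
As a rough illustration of such a set-up, here is a toy network trained on synthetic stand-ins for the sounding-derived predictors (not the author's data or architecture):

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic predictors standing in for sounding indices (e.g. low-level
# mean RH, maximum buoyancy, mid-level wind, water vapour flux).
rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 4))
# Synthetic "max 6 h precipitation": nonlinear in the predictors.
y = np.maximum(0.0, 5.0 + 8.0 * X[:, 0] * (X[:, 1] > 0)
               + rng.normal(0.0, 2.0, 2000))

model = make_pipeline(
    StandardScaler(),                      # NN training needs scaled inputs
    MLPRegressor(hidden_layer_sizes=(8,),  # one small hidden layer
                 max_iter=2000, random_state=0),
)
model.fit(X[:1500], y[:1500])
print("test R^2:", round(model.score(X[1500:], y[1500:]), 2))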

An interesting point of the work is to look at the ranking of the indices chosen as input by the forward selection algorithm for all the precipitation cases (>=5mm/6h).

1. mean relative humidity in the lower 500hPa

2. maximum buoyancy

3. mid-level density weighted wind (v-component)

4. mean water vapour flux (v-component) at the previous sounding (6 hours before).

The same work has been done for the forecast of two precipitation classes:

one class is >20 mm / 6h at at least one station; the other class is >40mm / 6h at at least one station.

For the 20 mm class as well as for the 40 mm class, the forward selection algorithm picked the same indices as inputs in both cases, and in the same (decreasing) order:

1. mean water vapour flux (v-component)

2. K-index

3. standard deviation of the radiosonde vertical velocity.

But the forecasts given by the neural network for this last class are very poor. They are better for the 20 mm class and, according to the author, good for RR > 5 mm/6h.

 

C. Sordo and J. Gutiérrez, University of Cantabria (Spain)

A comparison of PCA and CCA predictors for wind speed downscaling using logistic regression and neural networks

PCA = Principal Component Analysis

CCA = Canonical Correlation Analysis

The authors started their presentation by showing, for a simple example (temperature in Santander deduced from the temperature of the nearest ERA grid point), that a linear relationship (regression) is totally inappropriate. Thus nonlinear relationships between predictors and predictands must be used.

The problem is the determination of the probabilities of wind speeds > 50 kt at 11 stations in the North of Spain, using as input T, Z, U, V and H at 27 grid points of ERA-40 for the period January 1977 - August 2002.

The first step is to reduce the amount of data of the predictors.

Two methods are used:

- Principal Component Analysis (PCA): with only 10% of the PCs, the fields can be reconstructed with an RMSE of only 2%.

- Canonical Correlation Analysis (CCA): an output data vector in a lower-dimensional space must be defined which has maximum correlation with the equally reduced input data vector.

The PCA and the CCA data are used as input for two models:

- Logistic Regression

- Neural Network.

The best results, as measured by the Brier Skill Score, are given by the Neural Network using 10 Principal Components as input. The use of a larger number of PCs did not improve the results significantly.
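
A schematic of the PCA + logistic regression branch on toy data (in the talk the inputs are ERA-40 fields and the events are wind speeds > 50 kt at the stations):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data with low-rank structure standing in for the ERA-40 predictor
# fields (135 columns ~ 27 grid points x 5 variables).
rng = np.random.default_rng(5)
latent = rng.normal(size=(3000, 10))            # a few large-scale "modes"
X = latent @ rng.normal(size=(10, 135)) + 0.5 * rng.normal(size=(3000, 135))
event = (latent[:, 0] + 0.5 * rng.normal(size=3000)) > 1.0   # "wind > 50 kt"

model = make_pipeline(
    PCA(n_components=10),                       # keep 10 PCs, as in the talk
    LogisticRegression(max_iter=1000),
)
model.fit(X[:2000], event[:2000])
probs = model.predict_proba(X[2000:])[:, 1]

brier = np.mean((probs - event[2000:]) ** 2)    # Brier score on the toy test set
print("Brier score:", round(brier, 3))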

 

J. Vehovar, Environment Agency of the Republic of Slovenia

Temperature forecasting in terms of quantiles

Instead of having as predictand, for a given range and location, only the value of the parameter of interest (for example temperature), the method yields a distribution of its possible values in terms of quantiles. For temperature, the regressions give the temperature of the different quantiles, for example 5%, 25%, 50% (the median), 75% and 95%. As predictors, DMO (Aladin/SI) and observations have been used.

 

J. Bremnes, met.no

Quantile forecasting in practice using local quantile regression

The aim is to forecast quantiles. For example: for a given probability level (say 40%), forecast the precipitation amount corresponding to that quantile. The method defines a weight for each data point; the historical cases most similar to the current predictor values get the most weight.

In the training phase, the quantile function is evaluated at the predictor values by looking at the corresponding observed precipitation.
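
A much-simplified sketch of the local weighting idea, estimating a conditional quantile with kernel weights (Bremnes's method fits local quantile regressions; the weighted empirical quantile below only illustrates the weighting):

import numpy as np

def local_quantile(x0, X, y, tau, bandwidth=1.0):
    """tau-quantile of the predictand at predictor value x0, weighting
    historical cases by their similarity to x0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((X - x0) / bandwidth) ** 2)   # similarity weights
    order = np.argsort(y)
    cdf = np.cumsum(w[order]) / w.sum()              # weighted CDF over sorted y
    return y[order][np.searchsorted(cdf, tau)]

# Toy training set: precipitation grows, noisily, with a model predictor.
rng = np.random.default_rng(6)
X = rng.uniform(0, 10, 5000)
y = rng.gamma(2.0, 1.0 + 0.5 * X)                    # skewed, predictor-dependent

for tau in (0.25, 0.5, 0.9):
    print(tau, round(local_quantile(7.0, X, y, tau), 1))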

 

H. Seidl, ZAMG

Experiences with PPM-methods applied to forecast local and areal amounts of precipitation in Austria

PPM = Perfect Prognosis Method

Firstly, the observed precipitation (6-hourly and 12-hourly accumulations) is interpolated onto a regular grid with height correction (the increase of precipitation with height is deduced from the seasonal climatology of Austria).

Method for the Areal PPM:

1. Determination of a multiple regression equation for each area.

Predictor values are taken from archived ECMWF analyses for the grid point most representative of the area considered. The corresponding areal precipitation is taken from the observation data base. Then the regression coefficients are computed. This work must be done once for each area.

2. Operational procedure: From the ECMWF forecast fields, the same predictors are extracted and inserted into the multiple regression equation.

In addition to this PPM for areal precipitation forecasts, ZAMG has also developed, in a similar way, a PPM for forecasting local amounts of precipitation, but the method has a tendency to underestimate severe precipitation. This has been alleviated with the "MAXMIN method" which, according to the values of the parameters (predictors), computes the precipitation either in a maximum mode or in a minimum mode. This method deteriorates the classical scores such as MAE or RMSE, but the HR for strong precipitation is increased, concomitantly with an acceptable increase of the FAR.

 

G. Csima, HMS

Using multiple linear regression for post-processing model output data

At the HMS, the 00 and 12 UTC forecasts of the ECMWF and ALADIN models are processed 3-hourly, between +12 and +60 hours for ECMWF and between +3 and +48 hours for ALADIN.

The post-processing is done by multiple linear regression at every SYNOP station. There is a different set of regression equations for each month.

The predictors are: MSLP, T2m, RH2m, U10m, V10m, N; at 925, 850, 700 and 500 hPa: Z, T, U, V, RH. Together 26 predictors.

The selection of the predictors for the linear regression equations has been done by forward selection. The quality of the procedure has been assessed by the ANOVA method (ANalysis Of VAriance).

In simple linear regression mode for T2m, the reduction of the variance of the errors - the differences between the computed temperatures and the corresponding observations - is maximum with 13-14 predictors. Adding supplementary predictors decreases the reduction of variance.
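
The forward selection step itself can be sketched generically as follows (cross-validated explained variance as the stopping criterion; this is not the HMS implementation):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_selection(X, y, max_predictors=26):
    """Greedily add the predictor that most improves cross-validated
    explained variance; stop when no candidate improves the score."""
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining and len(selected) < max_predictors:
        scores = {j: cross_val_score(LinearRegression(),
                                     X[:, selected + [j]], y, cv=5).mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:
            break
        best = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 26))           # 26 candidate predictors, as above
y = 2 * X[:, 0] - X[:, 3] + 0.5 * X[:, 10] + rng.normal(0, 1, 500)
print(forward_selection(X, y))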

For the full test period (Jan, Feb, Mar 2005), with T2m as predictand, the most significant model corrections took place in mountainous and hilly terrain.

 

P. Crochet, Icelandic Meteorological Office

Prediction of T2m and 10m wind-speed in Iceland using a Kalman filter

A precise mathematical description of the Kalman filter technique was presented by the author, together with results obtained by quantifying the noise statistics (in many National Meteorological Services, the Kalman filter is used with the observation noise set equal to zero).

 

A. Persson, SMHI

From 2-D to 3D Kalman filtering of NWP output

The author claims that Kalman filtering of a two-dimensional expression (2D-KF) not only corrects the bias but also the variance of the forecasts.

This can be understood if we consider the correction C = A + Bx, where x is the forecast, and A (the "bias") and B (the "slope", and therefore the variance) are recursively estimated by the Kalman filter. The improvement in variance means that the underforecasting of extreme events, such as temperatures < -20°, was almost avoided.

This has been very well shown by the HIRLAM 2m temperature and 10m wind speed corrections for different observing stations in Sweden. The author also compared the application of a 2D- versus a 3D-KF to the temperatures of a very cold observing station. The 3D-KF additionally takes the forecast relative to the last available observation into account. This type of 3D-KF decreased the RMSE, but had - compared to the 2D-KF - a dampening influence, with fewer forecasts of temperatures < -20°.

If anomalies are used instead of full values, a large change in B will not necessarily imply a large change in A as well, and the Kalman filter achieves a quicker adaptation.
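
A generic sketch of such a 2D-KF, recursively estimating A and B from the stream of forecast/observation pairs (toy data; not the SMHI implementation):

import numpy as np

def kf_2d(forecasts, observations, q=1e-4, r=1.0):
    """Estimate A and B in the correction C = A + B*x with a Kalman
    filter whose state is s = [A, B], modelled as a random walk."""
    s = np.zeros(2)                      # [A, B]
    P = np.eye(2)                        # state covariance
    Q = q * np.eye(2)                    # process noise
    corrected = []
    for x, o in zip(forecasts, observations):
        h = np.array([1.0, x])           # measurement operator: A + B*x
        corrected.append(x + h @ s)      # real-time corrected forecast
        P = P + Q
        k = P @ h / (h @ P @ h + r)      # Kalman gain
        s = s + k * ((o - x) - h @ s)    # innovation: realized error minus predicted correction
        P = (np.eye(2) - np.outer(k, h)) @ P
    return np.array(corrected), s

rng = np.random.default_rng(8)
truth = rng.normal(0, 5, 300)
fcst = 0.8 * truth - 1.5 + rng.normal(0, 1, 300)   # biased and under-dispersive
corr, (A, B) = kf_2d(fcst, truth)
print("estimated A, B:", A.round(2), B.round(2))
print("RMSE raw -> corrected:",
      np.sqrt(((fcst - truth) ** 2).mean()).round(2),
      np.sqrt(((corr[50:] - truth[50:]) ** 2).mean()).round(2))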

 

J.M. Gutierrez et al., University of Cantabria (Spain)

Multisite Spatial Analog Downscaling Methods with Bayesian Networks

The main point of this presentation was to show the use of the Bayesian Networks (BN) as a non-linear statistical downscaling method linking the ERA-40 grid point values over Iberia with the local observations.

BN are very popular in several fields (such as biology and medicine), but are just starting to be used in meteorology.

BN have the advantage of being global, in the sense that joint probabilities encompassing all the variables (large-scale and small-scale) are computed. The graphs defining the dependence structure do not separate, in the downscaling process, the large scale (ERA-40 grid-point values) from the local observations, as other downscaling methods - for example multiple regression - do.

The authors have not restricted themselves to the case of singly connected networks over Spain (i.e. networks with only one path between any two nodes), although this limitation would have made the search for a solution for the joint probabilities much easier.

This presentation can be considered a breakthrough in the field of meteorological statistical downscaling.

 

 

 

The end