Third SRNWP Workshop on Short-range EPS
10-11 December 2007, Rome (Italy)
-------------------------------------------------------
Report of the final discussion
Tuesday, 15h45-17h00
Chair: Peter Houtekamer
Peter Houtekamer, Massimo Bonavita and Jean Quiby prepared the following questions for the workshop final discussion.
I. The different strategies for the initialization of the LAMs: downscaling from global, singular vectors, breeding, EnKF data assimilation
Downscaling from global
Although this approach has the advantage of a better representation of the soil variables, the majority of the participants agreed that not much value is added with respect to the global ensemble that is used to drive the LAM ensemble. This could be different for mountainous areas, where the higher horizontal resolution of the LAMs permits a steeper model orography, which would in particular improve the prediction of strong advective precipitation. Otherwise, no supplementary dynamical (e.g. perturbations) or meteorological (e.g. observations) information is given to the LAMs.
Breeding
As already assessed at the Second Workshop, it was repeated that breeding is a method of limited sophistication, because the perturbations cannot be made orthogonal. A great advantage of the breeding technique is its simplicity compared with the singular vector technique.
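That simplicity can be seen in a minimal sketch of one breeding cycle (illustrative only, not discussed at the workshop; a toy one-line map stands in for a full NWP integration): run a control and a perturbed forecast, take their difference, and rescale it to a fixed amplitude before the next cycle. No orthogonalization is performed, which is exactly the limitation noted above.

```python
import numpy as np

def model_step(x):
    # Toy nonlinear "forecast model": a chaotic logistic-type map applied
    # componentwise (stands in for a full NWP model integration).
    return 3.7 * x * (1.0 - x)

def breed_cycle(x_control, perturbation, n_steps, target_norm):
    """One breeding cycle: integrate control and perturbed states,
    then rescale their difference back to a fixed amplitude."""
    x_pert = x_control + perturbation
    for _ in range(n_steps):
        x_control = model_step(x_control)
        x_pert = model_step(x_pert)
    bred = x_pert - x_control
    bred *= target_norm / np.linalg.norm(bred)  # rescale; no orthogonalization
    return x_control, bred

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=8)       # toy 8-variable state
pert = 1e-3 * rng.standard_normal(8)    # arbitrary seed perturbation
for _ in range(5):                      # repeated cycles let the fastest-
    x, pert = breed_cycle(x, pert, 10, 1e-3)  # growing directions dominate
print(np.linalg.norm(pert))             # stays at the target amplitude
```

Repeating the cycle filters the perturbation toward the fastest-growing directions of the flow, which is the whole content of the method.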
Singular vectors
The big advantage of this method over the breeding method is that it defines orthogonal perturbations. It is generally accepted today that the SV method is an appropriate method for global models. In fact, after two days of integration, the perturbations appear to project onto fairly similar error structures for the breeding, singular vector and EnKF-based ensemble initialization methods.
The precise way of initializing an ensemble is likely more important for high-resolution limited-area applications. High-resolution LAMs are used for short-range, even very short-range forecasting. For these short ranges, one would like the perturbations to be fully developed from the beginning of the integration. But when an optimization time of only a few hours is used, the eigenvalue spectrum of the SV perturbations is very flat.
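For a linear propagator, the singular vectors are obtained from a singular value decomposition, which makes their mutual orthogonality explicit. A minimal sketch with a random matrix standing in for the tangent-linear propagator (illustrative only; operational SV computations use iterative Lanczos methods on the full tangent-linear and adjoint models):

```python
import numpy as np

# Toy tangent-linear propagator M over the optimization time: the leading
# right singular vectors of M are the fastest-growing initial perturbations
# (in the L2 norm) and are mutually orthogonal by construction.
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))

U, s, Vt = np.linalg.svd(M)      # rows of Vt: initial-time singular vectors
v1, v2 = Vt[0], Vt[1]

# Orthogonality of the perturbations (the advantage over breeding):
print(np.dot(v1, v2))            # ~0

# Growth factor of v_i over the interval is the singular value s_i:
print(np.linalg.norm(M @ v1), s[0])
```

A flat singular value spectrum, as reported above for short optimization times, corresponds to the values in `s` being nearly equal, so no direction grows much faster than another.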
Another weakness of the SV method for LAMs is that the highly non-linear diabatic processes are not (yet?) considered in the computation of the perturbations, although diabatic processes can be locally very important.
It was strongly advocated by one participant that we have to stick to the SV method because its rapidly growing dynamical perturbations capture some of the error growth that is actually due to model error. At this time, a more appropriate simulation of model errors remains to be developed. (Whether we will one day be able to simulate model errors is not certain.)
EnKF data assimilation
This method was considered important. It is a pity that it has not received in Europe the attention it deserves. However, the situation is now changing: ECMWF is considering it very seriously as a possible alternative, and the NWP consortium COSMO (whose members include Germany and Italy) has decided to start its development. For an EnKF operating on a limited domain, it is not clear how to deal with boundary perturbations. No experiments have yet been performed with coupled global and regional EnKF systems.
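For reference, the analysis step of a stochastic EnKF can be sketched in a few lines (a toy illustration, not any operational configuration): each member is updated with a perturbed observation, using a Kalman gain built from the covariance estimated from the ensemble itself.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_var, rng):
    """Stochastic EnKF analysis step: update each member with a perturbed
    observation, using the ensemble-estimated forecast covariance.
    ensemble: (n_members, n_state); H: (n_obs, n_state) observation operator."""
    n_members = ensemble.shape[0]
    Pf = np.cov(ensemble.T)                          # forecast covariance
    R = obs_var * np.eye(H.shape[0])                 # observation error cov.
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    y_pert = y_obs + np.sqrt(obs_var) * rng.standard_normal(
        (n_members, H.shape[0]))                     # perturbed observations
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(2)
ens = rng.standard_normal((40, 3)) + 5.0   # prior ensemble, mean near 5
H = np.array([[1.0, 0.0, 0.0]])            # observe the first state variable
ens_a = enkf_update(ens, np.array([4.0]), H, 0.1, rng)
print(ens_a[:, 0].mean())   # analysis mean pulled toward the observation
```

In a limited-area setting the open question noted above remains: this update says nothing about how the lateral boundary perturbations should be generated.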
II. How many members do we need for LAM EPS?
Olivier Talagrand claimed that probabilistic scores saturate in the range of 20-50 members and that it is consequently difficult to justify having more than 50 members. This statement was not accepted unanimously, because it was felt that low-probability warnings against extreme events are important for users with a low cost-loss ratio. Such users might act on a warning by 10 out of 500 members but not on a warning by just 1 out of 50 members.
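The arithmetic behind this objection can be illustrated (assuming, hypothetically, a user with cost-loss ratio C/L = 0.005): the smallest nonzero probability a 50-member ensemble can report is 1/50 = 2%, and even a true 2%-probability event is often missed entirely by 50 members, while 500 members flag it almost surely.

```python
def prob_at_least_one(p_true, n_members):
    """Chance that at least one of n_members independent members simulates
    an event whose true probability is p_true (binomial sampling)."""
    return 1.0 - (1.0 - p_true) ** n_members

# A rare event with a true probability of 2%:
print(prob_at_least_one(0.02, 50))   # ~0.64: the warning is often absent
print(prob_at_least_one(0.02, 500))  # ~1.0: the warning is nearly certain
```

This is why a 10-out-of-500 warning is more trustworthy than a 1-out-of-50 warning even though both correspond to the same nominal probability of 2%.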
Note that the European project GLAMEPS foresees the use of a very large ensemble, as will probably also be the case with TIGGE-LAM.
It was also said that we should consider the practical work needed for the development of an EPS. An EPS with, say, 200 members would be practically impossible to validate. Alternatively, one could use the available computer resources to improve the realism of individual members by, for instance, increasing their horizontal or vertical resolution.
III. How to best account for model errors?
Should we add random errors, use different parametrizations, implement stochastic physics, or use stochastic backscatter of kinetic energy?
Concerning the high-resolution LAMs, one of the strongest statements made in the discussion of this topic was the following: whatever you do inside the integration domain will influence the solution less than a modification of the boundary conditions.
The pragmatic multi-model approach comprehensively samples a wide variety of model errors. However, it is hard to maintain an ensemble of significantly different models of equal quality. It might be preferable to have some sort of super-parameterization of model error. It is not clear, however, how to proceed with this. The main reason is probably that, to date, there have not been enough studies and we have not accumulated enough experience to assess the respective merits and weaknesses of these different techniques.
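As one concrete example of the techniques listed in the question, stochastically perturbed physics tendencies (in the spirit of ECMWF's SPPT scheme, here heavily simplified with a toy tendency and no spatial correlation) multiply the parametrized tendency by a random factor, so that noise in the physics alone generates ensemble spread:

```python
import numpy as np

def perturbed_physics_step(x, dt, rng, sigma=0.3):
    """Stochastically perturbed tendencies, heavily simplified: multiply
    the toy 'physics' tendency by (1 + r) with zero-mean noise r before
    adding it to the state."""
    tendency = -0.1 * x + np.sin(x)            # toy "physics" tendency
    r = sigma * rng.standard_normal(x.shape)   # zero-mean random pattern
    return x + dt * (1.0 + r) * tendency

rng = np.random.default_rng(3)
members = [np.full(4, 1.0) for _ in range(10)]   # identical initial states
for _ in range(100):
    members = [perturbed_physics_step(m, 0.1, rng) for m in members]
spread = np.std([m.mean() for m in members])
print(spread)   # > 0: the perturbed physics alone generates spread
```

Because the noise multiplies the tendency, the perturbation is large only where the physics is active, which is one of the design arguments for this family of schemes.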
A major difficulty in answering the above question is that the cause of model errors is in many cases not clearly identifiable, for example when several causes mix, when two parametrizations are incompatible, or when the coupling between dynamics and physics is inconsistent. Once a source of errors has been identified, it will often not be simulated but instead be reduced by subsequent research and development.
IV. Postprocessing of the EPS forecasts:
a) Is there a recommended way of doing it?
b) Can the calibration of a single-model EPS be a substitute for a calibrated multi-model EPS?
For this topic, the discussion was brief. For question a), nobody could make a recommendation. Concerning question b), the lack of statements from the participants seemed to be due to the contradictory results presented during the workshop. Nevertheless, it emerged from the discussion that multi-model ensembles are no guarantee of good spread, and that reliable single-model ensembles are not necessarily less skilful than multi-model ensembles. Two very important statements!
For the minutes:
Jean Quiby

With thanks to Peter Houtekamer for reviewing and enhancing this report.