TRANSCRIPT
[email protected] EMS 2013 (Reading UK)
Verification techniques for high resolution NWP precipitation forecasts
Emiel van der Plas ([email protected]), Kees Kok, Maurice Schmeits
Introduction
NWP has come a long way…
It was: … Then it became Hirlam: … Now it is Harmonie. It should be GALES (or so).
It looks better…
But how is it better? Does it perform better?
That remains to be seen…
Representation: “double penalty”
Forecasting localised phenomena: false alarm + miss = double penalty
[Figures: station (gauge) data; forecast vs radar data]
When we take point-by-point errors (ME/RMSE): …
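To make the double penalty concrete, here is a toy R sketch (the 1-D fields and values are invented for illustration): a forecast that puts the right amount of rain one grid point off scores a worse RMSE than forecasting no rain at all.

```r
# Toy double-penalty illustration (invented numbers): a sharp forecast
# displaced by one grid point is hit twice (miss + false alarm) in RMSE.
obs   <- c(0, 0, 10, 0, 0)
sharp <- c(0, 10, 0, 0, 0)   # right amount, wrong place
flat  <- c(0, 0, 0, 0, 0)    # forecasts nothing at all
rmse  <- function(f, o) sqrt(mean((f - o)^2))
rmse(sharp, obs)  # ~6.3: double penalty
rmse(flat,  obs)  # ~4.5: 'better' score despite being useless
```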
This talk
HARP: Hirlam Aladin R-based verification Packages
• Tools for spatial, ensemble verification, based on R: FSS, SAL, …
• Relies on e.g. the SpatialVx package (NCAR)
• Generalized MOS approach
Comparison high vs low resolution:
• Hirlam (11 km, hydrostatic)
• Harmonie (2.5 km, non-hydrostatic, w/ & w/o Mode-S)
• ECMWF (T1279, deterministic)
Lead times: +003, +006, +009, +012
Accumulated precipitation vs (Dutch) radar, synop
Neo-classical: neighborhood methods, FSS
• Options: FSS, ISS, SAL, …
Fraction Skill Score (fuzzy verification)
(Roberts & Lean, 2008)
• Straightforward interpretation
• ‘Resolves’ the double penalty
But it ‘smoothes’ away resolution that may contain information! (V_storm · t)
== upscaling
[Figure: observation and forecast fields; base rate and FSS versus neighborhood scale]
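Since the talk leans on the FSS throughout, a minimal R sketch of the score as defined by Roberts & Lean (2008) may help; the naive moving-window loop and the NA border handling are illustrative choices, not HARP's implementation:

```r
# Fraction Skill Score (Roberts & Lean, 2008), minimal sketch.
# fcst, obs: numeric matrices on the same grid; q: threshold; n: odd window.
fss <- function(fcst, obs, q, n) {
  frac <- function(b, n) {               # fraction of exceedances per window
    k   <- (n - 1) / 2
    out <- matrix(NA_real_, nrow(b), ncol(b))
    for (i in (k + 1):(nrow(b) - k))
      for (j in (k + 1):(ncol(b) - k))
        out[i, j] <- mean(b[(i - k):(i + k), (j - k):(j + k)])
    out
  }
  pf <- frac(fcst >= q, n)
  po <- frac(obs  >= q, n)
  ok <- !is.na(pf) & !is.na(po)
  mse    <- mean((pf[ok] - po[ok])^2)        # MSE of the fraction fields
  mseref <- mean(pf[ok]^2) + mean(po[ok]^2)  # 'no-skill' reference MSE
  1 - mse / mseref                           # 1 = perfect match of fractions
}
```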
FSS: more results
Higher resolutions: higher thresholds?
DMO!
How would a trained meteorologist look at direct model output?
Model Output Statistics
Learn for each model, location, … separately!
Model Output Statistics
• Construct a set of predictors (per model, station, starting and lead time):
  • For now: use precipitation only
  • Use various ‘areas of influence’: 25, 50, 75, 100 km
  • DMO, coverage, max(DMO) within area, distance to forecasted precipitation, …
• Apply logistic regression: forward stepwise selection, backward deletion
Probability of threshold exceedance!
• Verify probabilities based on DMO, coefficients of selected predictors (a minimal fitting sketch follows below)
• Training data: days 1–20; ‘independent’ data: days 21–28/31
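A minimal R sketch of this regression step, assuming a hypothetical training frame `train` whose columns (dmo, cov_50, max_100) stand in for the predictor set above; the names and the particular formula are illustrative, not the authors' exact configuration:

```r
# Hedged sketch: logistic regression for P(precip > q) from NWP predictors.
# 'train' is assumed to hold, per station / start / lead time:
#   y       - observed exceedance (0/1)
#   dmo     - direct model output precipitation at the station
#   cov_50  - fraction of wet grid points within 50 km
#   max_100 - maximum DMO within 100 km
fit <- glm(y ~ sqrt(dmo) + cov_50 + sqrt(max_100),
           family = binomial, data = train)
# probability of threshold exceedance on the held-out days
p <- predict(fit, newdata = indep, type = "response")
```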
Model (predictor) selection
Based on AIC (Akaike Information Criterion)
• Take the predictor that gives the best (lowest) AIC in the training set (days 1–20)
• Test on the independent set (days 21–28/31)
[Figure: skill of selected predictors, e.g. sqrt(tot_100), sqrt(max_100), distext_100, exp2int_100]
More predictors != more skill
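A sketch of AIC-driven selection with base R's step(), under the same hypothetical `train`/`indep` frames as above; this mirrors the forward-selection / backward-deletion idea, not the authors' exact procedure:

```r
# AIC-based stepwise predictor selection, minimal sketch.
null <- glm(y ~ 1, family = binomial, data = train)   # intercept only
full <- glm(y ~ ., family = binomial, data = train)   # all candidates
sel  <- step(null, scope = formula(full), direction = "both", trace = 0)
# verify on the independent days (21 - 28/31)
p_ind <- predict(sel, newdata = indep, type = "response")
```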
Model comparison (April – October 2012)
[Figure: scores for Hirlam, Harmonie (based on Hirlam) and ECMWF, runs 12UTC +003, +006, +009]
Discussion, to do
MOS method:
• Stratification per station, season, …
• More data necessary, reforecasting under way
• Representation error: take (small) radar area
• Use ELR, conditional probabilities for higher thresholds
• Extend to wind, fog/visibility, MSG/cloud products, etc.
FSS:
• Use OPERA data
Conclusion/Discussion
Comparison between NWP models of different resolution is, well, fuzzy
Realism != score: the Fraction Skill Score yields numbers, but it is sometimes hard to draw conclusions from them
MOS method:
• Resolution/model independent
• Takes into account what we know
• Doubles (potentially) as predictive guide
Thank you for your attention!
Extended Logistic Regression (ELR)
Binary predictand $y_i$ (here: precip > q)
Probability (logistic): $p_i = P(y_i = 1) = \left[1 + \exp\!\left(-\beta_0 - \boldsymbol{\beta}^{T}\mathbf{x}_i\right)\right]^{-1}$
Joint likelihood: $L(\boldsymbol{\beta}) = \prod_i p_i^{\,y_i} (1 - p_i)^{\,1 - y_i}$
L2 penalisation (using R: stepPlr by Mee Young Park and Trevor Hastie, 2008): minimise $-\log L(\boldsymbol{\beta}) + \tfrac{\lambda}{2}\,\lVert\boldsymbol{\beta}\rVert_2^2$
Use the threshold ($\sqrt{q}$) as a predictor: gives the complete distribution function (Wilks, 2009)
Few cases, many potential predictors: pool stations, max 5 terms
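To make the ELR construction concrete, a minimal R sketch following Wilks (2009); the thresholds, the `cases` data frame and its predictor column `x` are invented for illustration, and this is not the authors' code:

```r
# Extended logistic regression sketch: pool exceedance cases over several
# thresholds and include sqrt(q) as a predictor, so one fit yields a
# complete, non-crossing distribution function.
qs <- c(0.3, 1, 3, 10)                         # illustrative thresholds (mm)
pooled <- do.call(rbind, lapply(qs, function(q)
  data.frame(y   = as.integer(cases$obs > q),  # exceedance indicator
             sqx = sqrt(cases$x),              # sqrt of a precip predictor
             sqq = sqrt(q))))                  # threshold enters as predictor
elr <- glm(y ~ sqx + sqq, family = binomial, data = pooled)
# P(obs > q) for one forecast value (x = 5 mm) at every threshold:
predict(elr, newdata = data.frame(sqx = sqrt(5), sqq = sqrt(qs)),
        type = "response")
```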