
On the Challenges of Identifying the “Best” Ensemble Member in Operational Forecasting

David Bright, NOAA/Storm Prediction Center
Paul Nutter, CIMMS/Univ. of Oklahoma

January 14, 2004

Where America's Climate and Weather Services Begin

2003 SPC/NSSL Spring Program

• Objectives:
– Advance the science of weather forecasting and the prediction of severe convective weather
– Facilitate discussion and excite collaboration between researchers and forecasters through real-time forecasting and evaluation
– Bring in subject matter experts for assistance
– Efficient testing and delivery of results to SPC operations

• Emphasis:
– Model-predicted convective initiation (< 15 hrs) [40%]
– Explore SREF systems' ability to aid severe convective weather forecasting via the Day 2 Probability Outlook [60%]

Model (or Best Member) of the Day

• Can a “best member” be chosen from the ensemble?
• Can some members be eliminated from further consideration once they have deviated too far from reality?
• Is a “return to skill” possible for eliminated members?
• Do the early “best” members continue to verify as best during the remainder of the period?

“Return to Skill” in Lorenz '63 Model

• Trajectories return to nearly the same point, but have taken different paths through phase space (The forecast is “right” for the wrong reason)

• Difference varies by time and variable (and space as seen later)
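The “return to skill” behavior can be sketched numerically. Below is a minimal Lorenz '63 integration with the standard parameters (σ = 10, ρ = 28, β = 8/3); the step size and perturbation size are illustrative choices, not the configuration used in the talk. The distance between a perturbed member and a reference trajectory grows, saturates, and intermittently shrinks again, even though the two trajectories take different paths through phase space.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, nsteps = 0.01, 2000
truth = np.array([1.0, 1.0, 1.0])
member = truth + 1e-3            # small initial perturbation (illustrative)
dist = []
for _ in range(nsteps):
    truth = rk4_step(truth, dt)
    member = rk4_step(member, dt)
    dist.append(np.linalg.norm(member - truth))

# The error is not monotonic: after diverging, the perturbed trajectory
# can swing back near the reference ("return to skill").
```

Plotting `dist` against time makes the non-monotonic error growth obvious: exponential divergence at first, then large swings around the attractor scale.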

Unique Best Members in Lorenz '63 Model

• In perfect model, nearly every ensemble member has been considered “best” by the time ensemble skill saturates relative to climatology.

• In a biased model, ensemble skill saturates more quickly, but the growth of unique best members is a bit slower.

(Figure: average scores for 1,000 60-member ensembles)
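The growth of unique best members can be illustrated with a deliberately simple toy: independent scalar random walks stand in for ensemble trajectories (an assumption for illustration, not the perfect-model Lorenz experiment above). Tracking which member is closest to “truth” at each step shows the count of distinct “best” members climbing over time.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_steps = 15, 200

# Toy scalar "forecasts": truth and members follow independent random
# walks, standing in for diverging trajectories in a chaotic model.
truth = np.cumsum(rng.normal(size=n_steps))
members = np.cumsum(rng.normal(size=(n_members, n_steps)), axis=1)

# Index of the member closest to truth at each step.
best = np.abs(members - truth).argmin(axis=0)

# Cumulative count of distinct members that have been "best" so far.
unique_so_far = [len(set(best[: t + 1])) for t in range(n_steps)]
```

The cumulative count is non-decreasing by construction, and with many members it keeps growing: no single member stays closest to truth for long.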

NCEP SREF Used in Spring Program

• 15 members: 5 Eta, 5 EtaKF, 5 RSM
• Per model: 1 control; 2 pairs of +/− bred initial perturbations
• 63-hour forecasts starting at 09 UTC and 21 UTC
• 48-km grid spacing

Spatial Variability of “Best” Members

After ranking ensemble members at each gridpoint, maps of the median, maximum, and minimum ranks also show highly mixed contributions from different members
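The gridpoint-level mixing can be sketched as follows; the synthetic fields and independent-error model here are assumptions for illustration only. Identifying the closest member independently at each gridpoint yields a “best member” map in which many different members win somewhere, rather than one member dominating the domain.

```python
import numpy as np

rng = np.random.default_rng(3)
n_members, ny, nx = 15, 20, 30

# Synthetic "analysis" field and ensemble with independent member errors.
truth = rng.normal(size=(ny, nx))
ens = truth[None] + rng.normal(scale=0.5, size=(n_members, ny, nx))

# Which member is closest to the analysis at each gridpoint?
err = np.abs(ens - truth[None])
best_map = err.argmin(axis=0)            # (ny, nx) map of member indices

# With independent errors, many distinct members are "best" somewhere,
# so the best-member map is highly mixed.
n_distinct = len(np.unique(best_map))
```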

Best Member Statistics, or Loss of Member Skill

• Following the best-member ideas of Roulston and Smith (2003), attempted to:
– find a “true best ensemble” member at all forecast hours, and
– correlate the F015 ensemble ranking to the F039 ensemble ranking
• Normalized RMSE based on 22 variables:
– PMSL, PWTR, CAPE
– 2 meter: T, Td
– 10 meter: U, V
– 700, 500, 300 hPa: T, r, U, V, Z
• RUC analyses served as “truth” at 0000 and 1200 UTC
• 24 days of August 2003
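A hedged sketch of the kind of multi-variable ranking described above: the talk does not specify its normalization, so this sketch normalizes each variable's RMSE by the analysis standard deviation (one common choice, assumed here) before summing across variables, so that fields with very different units (e.g., PMSL vs. CAPE) contribute comparably.

```python
import numpy as np

def normalized_rmse_rank(members, analysis):
    """Rank members by RMSE summed over variables, each variable's
    error normalized by that variable's spread in the analysis.

    members:  (n_members, n_vars, n_points) forecast fields
    analysis: (n_vars, n_points) verifying analysis ("truth")
    Returns member indices ordered best-first.
    """
    err = members - analysis[None, :, :]
    rmse = np.sqrt((err ** 2).mean(axis=2))       # (n_members, n_vars)
    scale = analysis.std(axis=1) + 1e-12          # per-variable normalization
    total = (rmse / scale[None, :]).sum(axis=1)   # combined score per member
    return np.argsort(total)                      # best member first

# Synthetic stand-in for 22 variables on a toy grid (illustration only).
rng = np.random.default_rng(1)
truth = rng.normal(size=(22, 100))
ens = truth[None] + rng.normal(scale=0.5, size=(15, 22, 100))
order = normalized_rmse_rank(ens, truth)
```

`order[0]` is the “true best” member for this verification time; repeating the ranking at each forecast hour gives the sequences compared in the talk.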

• The ensemble mean is nearly always closest to the analyses

• Without the ensemble mean, ~3 members are considered best among the 6 that could have been identified during the forecast

(Figure annotation: r = .28)
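Why the ensemble mean is so often “best” can be seen in a toy calculation. When member errors are independent and unbiased (a Gaussian error model assumed here for illustration), averaging cancels them, so the mean's RMSE falls well below any single member's, roughly by a factor of 1/√n.

```python
import numpy as np

rng = np.random.default_rng(4)
n_members, n_points = 15, 1000

# Unbiased members: truth plus independent unit-variance errors.
truth = rng.normal(size=n_points)
ens = truth[None] + rng.normal(scale=1.0, size=(n_members, n_points))

rmse_members = np.sqrt(((ens - truth) ** 2).mean(axis=1))      # per member
rmse_mean = np.sqrt(((ens.mean(axis=0) - truth) ** 2).mean())  # of the mean

# Averaging cancels independent errors: the ensemble-mean RMSE is about
# 1/sqrt(n_members) of a typical member's RMSE, beating every member.
```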

Loss of Skill: Can 15-hr Verification Help Predict 39-hr Results?

• 12-hr rank correlation gradually increases with lead time
• Inclusion of the ensemble mean always improves the result

(Figure panels: Excludes Mean vs. Includes Mean)

• Rank correlation decreases with increasing lead time
• A particular member should not be isolated as a preferred deterministic forecast
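The rank-correlation diagnostic itself is straightforward to sketch. The toy scores below are synthetic stand-ins for F015 and F039 member scores (the 0.3 coupling is an arbitrary illustrative choice), and Spearman correlation is computed as the Pearson correlation of the ranks, which is adequate here because the continuous scores have no ties.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of ranks
    (valid when there are no ties, as with continuous scores)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Toy member scores at two lead times: the later score is only weakly
# coupled to the earlier one, so the rankings are weakly related.
rng = np.random.default_rng(2)
f015 = rng.normal(size=15)                  # early-lead member scores
f039 = 0.3 * f015 + rng.normal(size=15)     # later-lead member scores
r = spearman(f015, f039)
```

A low `r` here mirrors the talk's finding: a member's early ranking carries little information about its ranking later in the forecast.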

Summary

• Skill is not monotonic throughout the forecast.

• Performance measures vary widely by parameter and through space and time.

• The ensemble mean is usually the “best member”.

• Attempts to isolate a single best ensemble member will not yield the best forecast over time.

• Eliminating poorly-performing ensemble members early in the forecast degrades its collective future value.
