Exploring the Use of Object-Oriented Verification at the Hydrometeorological Prediction Center
Faye E. Barthold1,2, Keith F. Brill1, and David R. Novak1
1NOAA/NWS/Hydrometeorological Prediction Center; 2I.M. Systems Group, Inc.
What is Object-Oriented Verification?
Considers the relationship between individual precipitation areas instead of performance over an entire forecast grid
Methods
– Neighborhood
– Scale separation
– Features based
– Field deformation
Why use Object-Oriented Verification?
Avoids the "double penalty" problem
– Traditional verification penalizes a forecast both for missing the observed precipitation and for giving a false alarm
Provides additional information about why a forecast was correct or incorrect
– Spatial displacement, axis angle difference, etc.
Goal is to evaluate forecast quality in a manner similar to a forecaster completing a subjective forecast evaluation
Davis et al. (2006)
Method for Object-Based Diagnostic Evaluation (MODE)
Part of the Model Evaluation Tools (MET) verification package from the Developmental Testbed Center (DTC)
Defines "objects" in the forecast and observed fields based on user-defined precipitation thresholds
Tries to match each forecast object with an observed object based on the similarity of a variety of object characteristics
– Matching determined by user-defined weights placed on a number of parameters
– Interest value: objects are matched when their interest value is ≥ 0.70
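To make the matching rule concrete, here is a minimal sketch of a fuzzy-logic total interest computation in the spirit of MODE. The attribute names, interest values, and weights below are illustrative assumptions, not MET's actual code or defaults.

# Illustrative sketch of MODE-style fuzzy-logic matching (not MET's code).
# Each attribute comparison is first mapped to an interest in [0, 1]; the
# total interest is their weighted average, and a forecast/observed pair
# is matched when the total interest is >= 0.70.

def total_interest(attribute_interests, weights):
    """Weighted average of per-attribute interest values in [0, 1]."""
    num = sum(weights[k] * attribute_interests[k] for k in weights)
    den = sum(weights.values())
    return num / den

# Hypothetical attribute interests for one forecast/observed object pair.
pair = {"centroid_dist": 0.9, "boundary_dist": 0.85,
        "area_ratio": 0.4, "angle_diff": 0.8}
weights = {"centroid_dist": 2.0, "boundary_dist": 4.0,
           "area_ratio": 1.0, "angle_diff": 1.0}

interest = total_interest(pair, weights)  # 0.80 for these numbers
matched = interest >= 0.70                # matching criterion from the slide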
Configuration Parameters
Convolution radius
Merging threshold
Interest threshold
Centroid distance
Convex hull distance
Area ratio
Complexity ratio
Intensity ratio
Area threshold
Maximum centroid distance
Boundary distance
Angle difference
Intersection area ratio
Intensity percentile
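As a hedged illustration of how these knobs might fit together, here is one assumed grouping expressed as a Python dictionary. This is not MET's actual configuration syntax, and the values are placeholders rather than defaults.

# Assumed grouping of MODE-style configuration parameters (illustrative
# names and values only; consult the MET User's Guide for real syntax).
mode_config = {
    # Object definition: smooth with a circular filter, then threshold.
    "convolution_radius": 5,       # grid squares (assumed)
    "precip_threshold": 6.35,      # mm (0.25 in), a threshold from the slides
    "area_threshold": 10,          # grid squares; smaller objects dropped
    # Merging and matching controls.
    "merging_threshold": 1.25,     # fraction of precip_threshold (assumed)
    "interest_threshold": 0.70,    # match if total interest >= 0.70
    "max_centroid_distance": 800,  # grid squares (assumed)
    # Fuzzy-logic weights on the pairwise attributes listed above.
    "weights": {
        "centroid_distance": 2.0,
        "boundary_distance": 4.0,
        "convex_hull_distance": 0.0,
        "angle_difference": 1.0,
        "area_ratio": 1.0,
        "intersection_area_ratio": 2.0,
        "complexity_ratio": 0.0,
        "intensity_ratio": 0.0,
    },
}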
MODE Output
[Figure: example MODE output showing forecast objects and observed objects. Matched forecast/observed pairs are shown together; unmatched forecast objects are false alarms, and unmatched observed objects are misses.]
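A small sketch of how the categories in the figure (matched pairs, false alarms, misses) could be derived from pairwise interest values. The object IDs and data layout are assumptions for illustration, not MODE's actual output format.

# Sketch: derive matches, false alarms, and misses from pairwise interest
# values (assumed IDs and layout, not MET's actual output).
INTEREST_THRESHOLD = 0.70

# (forecast_object_id, observed_object_id) -> total interest
pair_interest = {("F1", "O1"): 0.91, ("F2", "O1"): 0.42, ("F3", "O2"): 0.65}
forecast_ids = {"F1", "F2", "F3"}
observed_ids = {"O1", "O2"}

matches = {pair for pair, i in pair_interest.items() if i >= INTEREST_THRESHOLD}
matched_fcst = {f for f, _ in matches}
matched_obs = {o for _, o in matches}

false_alarms = forecast_ids - matched_fcst  # unmatched forecast objects
misses = observed_ids - matched_obs         # unmatched observed objects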
MODE at HPC
Running daily at HPC since April 2010
– 24hr QPF
– 6hr QPF (September 2010)
Supplements traditional verification methods
Training opportunities
– Provide spatial information about forecast errors
– Quantify model biases
– COMET COOP project with Texas A&M
Forecaster Feedback
Too much smoothing of the forecast and observed fields, particularly at 32 km (see the convolution sketch after the figure below)
– Sizeable areas of precipitation not identified as objects
– Trouble capturing elongated precip areas
[Figure: HPC forecast (left) and Stage IV analysis (right) at the 1 in (25.4 mm) threshold. Large forecast and observed areas exceed 1 in, but only small objects are identified.]
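A minimal sketch of the convolve-then-threshold object definition used by MODE-style methods (Davis et al. 2006), showing how a large convolution radius can smooth a narrow, elongated precipitation band below the threshold before objects are ever defined. The toy field and radii are assumptions, not the deck's actual cases.

import numpy as np
from scipy import ndimage

# Toy precip field: a narrow band of 10 mm rain, 3 grid squares wide.
field = np.zeros((50, 50))
field[23:26, 5:45] = 10.0

def count_objects(field, radius, thresh):
    """Convolve with a circular-mean filter, threshold, count objects."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x**2 + y**2 <= radius**2).astype(float)
    smoothed = ndimage.convolve(field, disk / disk.sum(), mode="constant")
    _, n_objects = ndimage.label(smoothed >= thresh)
    return n_objects

print(count_objects(field, radius=2, thresh=6.35))  # 1: the band survives
print(count_objects(field, radius=8, thresh=6.35))  # 0: band smoothed away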
Forecaster Feedback (continued)
Interest value difficult to interpret
– Seems to be higher for high resolution models than for operational models
[Figure: EAST_ARW forecast (left) and Stage IV analysis (right) at the 0.25 in (6.35 mm) threshold; the matched objects have an interest value of 1.000.]
Forecaster Feedback (continued)
Matches between small and large objects have unexpectedly high interest values
[Figure: HPC forecast (left) and Stage IV analysis (right) at the 0.25 in (6.35 mm) threshold. Why are these objects matched? (Interest value: 0.7958)]
Forecaster Feedback (continued)
What is the line around some groups of objects?
[Figure: EAST_NMM forecast (left) and Stage IV analysis (right) at the 0.25 in (6.35 mm) threshold. What does the line around objects mean?]
Configuration Changes
Eliminate area threshold requirement*
– GOAL: prevent small objects (<10 grid squares) from being automatically removed from the analysis
Increase weighting on boundary distance parameter
– GOAL: give more credit to objects that are in close proximity to one another
Increase weighting on area ratio parameter (see the worked sketch below)
– GOAL: prevent very large objects from being matched with very small objects
Hazardous Weather Testbed configuration**
Iowa State configuration
* operational only
** high resolution only
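The following worked sketch shows why increasing the area ratio weight can unmatch a pair whose areas differ greatly: the low area-ratio interest claims a larger share of the weighted average. The numbers are illustrative assumptions, not the values behind the 0.7671 and 0.6295 results on the slides that follow.

# Worked sketch: raising the area-ratio weight pulls down the total
# interest for a small/large pair (illustrative numbers only).

def total_interest(interests, weights):
    return (sum(weights[k] * interests[k] for k in weights)
            / sum(weights.values()))

# A pair that is close together but very different in size.
pair = {"centroid_dist": 0.95, "boundary_dist": 0.90,
        "area_ratio": 0.10, "angle_diff": 0.85}

original = {"centroid_dist": 2.0, "boundary_dist": 4.0,
            "area_ratio": 1.0, "angle_diff": 1.0}
reweighted = dict(original, area_ratio=4.0)

print(total_interest(pair, original))    # ~0.81: matched (>= 0.70)
print(total_interest(pair, reweighted))  # ~0.61: unmatched (< 0.70)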
Original Configuration (0.25 inch threshold)
[Figure: EAST_NMM 6hr precip forecast valid 12Z 9 June 2010 (forecast objects) and 6hr accumulated precip ending 12Z 9 June 2010 (observed objects). Why are these objects matched? (Interest value: 0.7671)]
Configuration Change: Increase Boundary Distance Parameter Weight (0.25 inch threshold)
[Figure: forecast and observed objects. Objects are still matched (interest value: 0.8109).]
Configuration Change: Increase Area Ratio Parameter Weight (0.25 inch threshold)
[Figure: forecast and observed objects. Objects are now unmatched (interest value: 0.6295).]
Configuration Change: Increase Both Boundary Distance and Area Ratio Parameter Weights (0.25 inch threshold)
[Figure: forecast and observed objects. Objects remain unmatched (interest value: 0.6882).]
Hazardous Weather Testbed Configuration (0.25 inch threshold)
[Figure: forecast and observed objects.]
Iowa State Configuration (0.25 inch threshold)
[Figure: forecast and observed objects. Objects are unmatched (interest value: N/A).]
Challenges
MODE is highly configurable
– Difficult to determine which parameters to change to get the desired results
Interest values difficult to understand
– Seem to be resolution-dependent
– No point of reference for the difference between an interest value of 0.95 and 0.9
– Does an interest value of 1.0 indicate a perfect forecast?
MODE generates large amounts of data
Future Work
Determine the ideal configuration to use with 6hr verification
– Examine multiple cases across all seasons
Make graphical output available online to allow for easier forecaster access
Make 24hr verification available in real time for HPC/CPC daily map discussion
Investigate MODE performance in cool season events
Make better use of text output
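As one possible starting point for the text-output item, a hedged sketch of summarizing pairwise interest values with pandas. The file name and the INTEREST column name are assumptions, not MODE's confirmed output schema; check the MET documentation for the actual format.

# Sketch: summarize pairwise interest values from MODE's text output
# (assumed file and column names, whitespace-delimited table).
import pandas as pd

df = pd.read_csv("mode_obj_pairs.txt", sep=r"\s+")

pairs = df.dropna(subset=["INTEREST"]).copy()
pairs["MATCHED"] = pairs["INTEREST"] >= 0.70  # threshold from the slides
print(pairs.groupby("MATCHED")["INTEREST"].describe())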
References
Davis, C., B. Brown, and R. Bullock, 2006: Object-based verification of precipitation forecasts. Part I: Methodology and application to mesoscale rain areas. Mon. Wea. Rev., 134, 1772-1784.
Gallus, W. A., 2010: Application of object-based verification techniques to ensemble precipitation forecasts. Wea. Forecasting, 25, 144-158.
Gilleland, E., D. Ahijevych, B. G. Brown, B. Casati, and E. E. Ebert, 2009: Intercomparison of spatial forecast verification methods. Wea. Forecasting, 24, 1416-1430.
Model Evaluation Tools (MET) was developed at the National Center for Atmospheric Research (NCAR) through grants from the United States Air Force Weather Agency (AFWA) and the National Oceanic and Atmospheric Administration (NOAA). NCAR is sponsored by the United States National Science Foundation.