Edward Tollerud (NOAA/ESRL), Tara Jensen (NCAR/RAL), Tressa Fowler (NCAR/RAL), John Halley Gotway (NCAR/RAL), Seth Gutman (NOAA/ESRL), Kirk Holub (NOAA/ESRL), Paul Oldenburg (NCAR/RAL), and Barb Brown (NCAR/RAL)
Selection of
observations for verification of numerical forecasts presents several important
issues relating to verification uncertainty. Representativeness remains the
most significant, particularly when point values are compared against datasets
representing analyzed areal estimates. Rain gauge measurements and radar-derived
quantitative precipitation estimates (QPE) are good examples of this source of error.
Significant differences can result from the nature of the analysis method.
Understanding the impact of the analysis scheme on the magnitude of
verification score differences is a critical step toward developing
verification comparisons that are fair and useful.
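The point-versus-areal representativeness issue can be illustrated with a toy calculation; the field below is synthetic and the gauge location is hypothetical, so this is only a sketch of the concept, not the verification procedure used in the exercises.

```python
import numpy as np

# Synthetic precipitation sub-field (mm) within one analysis grid cell,
# drawn from a gamma distribution to mimic spatial variability of rain.
rng = np.random.default_rng(42)
subgrid = rng.gamma(shape=2.0, scale=5.0, size=(10, 10))

# A single gauge samples one point; an analyzed estimate represents the
# areal mean of the same cell. The two "truths" generally differ.
point_value = subgrid[3, 7]   # what a gauge at this location would report
areal_mean = subgrid.mean()   # what an analyzed areal estimate represents
print(point_value, areal_mean)
```

A forecast verified against the point value and the same forecast verified against the areal mean can therefore receive different scores even when the forecast itself is unchanged.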
During recent Hydrometeorology
Testbed winter exercises, a set of WRF-based regional ensembles focused on heavy
precipitation forecasts for the American River basin. Since improvement in quantitative precipitation forecasts (QPF)
was the principal objective of the exercises, QPE from the
Stage IV product were compared with individual gauge observations. Different accumulation
periods (24 h and 6 h) were also compared. In both cases, gauge-based
verification scores were superior to those from the analyses. This past season,
evaluation was extended to integrated water vapor (IWV) forecasts verified with
point GPS measurements and a LAPS analysis of IWV observations; these results
showed the opposite tendency, with better scores for the analyses. Reasons for
these two disparate results will be suggested.
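The gauge-versus-analysis comparison described above amounts to computing the same score against two different references: forecast values matched to point observations, and the forecast grid matched to an analysis grid. A minimal sketch of that matched-pair computation follows; the station coordinates, precipitation values, and helper names are hypothetical, and the real evaluations used operational tools rather than this toy code.

```python
import numpy as np

def rmse(forecast, reference):
    """Root-mean-square error over matched forecast/reference pairs."""
    forecast = np.asarray(forecast, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((forecast - reference) ** 2)))

def nearest_grid_values(grid, grid_lats, grid_lons, pt_lats, pt_lons):
    """Sample a 2-D lat/lon grid at the nearest grid point to each station."""
    values = []
    for lat, lon in zip(pt_lats, pt_lons):
        i = int(np.argmin(np.abs(grid_lats - lat)))
        j = int(np.argmin(np.abs(grid_lons - lon)))
        values.append(grid[i, j])
    return np.array(values)

# Toy 24-h QPF grid (mm) on a small lat/lon mesh (values are illustrative).
grid_lats = np.linspace(38.0, 39.0, 5)
grid_lons = np.linspace(-121.5, -120.5, 5)
qpf = np.full((5, 5), 20.0)

# Hypothetical gauge reports and a smoother analyzed areal estimate
# (standing in for a Stage IV-like product) on the same grid.
gauge_lats = [38.2, 38.6, 38.9]
gauge_lons = [-121.2, -120.9, -120.6]
gauge_obs = np.array([18.0, 25.0, 22.0])
analysis = np.full((5, 5), 23.0)

# Score the same forecast two ways: against point gauges, and against
# the gridded analysis. The two references yield different scores.
fcst_at_gauges = nearest_grid_values(qpf, grid_lats, grid_lons,
                                     gauge_lats, gauge_lons)
rmse_gauge = rmse(fcst_at_gauges, gauge_obs)
rmse_analysis = rmse(qpf, analysis)
print(rmse_gauge, rmse_analysis)
```

The point of the sketch is only that the choice of reference, not the forecast, can drive the difference in scores; which reference appears "superior" depends on how the analysis smooths the observed field.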