Fowler, Tressa L., John Halley Gotway, Randy Bullock,
Paul Oldenburg, and Tara Jensen, National Center for Atmospheric Research
Model Evaluation Tools
(MET) is a freely available software package for forecast verification. It is
distributed through the Developmental Testbed Center (DTC) for testing and
evaluation of the Weather Research and Forecasting (WRF) model. Development has
been led by the community, including WRF users, the DTC, and verification
experts, through workshops and user meetings. MET allows users to verify forecasts via traditional,
neighborhood, and object-based methods. To account for the uncertainty
associated with these measures, methods for estimating confidence intervals for
the verification statistics are an integral part of MET.
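As an illustration of this idea, a percentile-bootstrap confidence interval for a verification statistic such as RMSE can be sketched as follows. This is a generic sketch of the technique, not MET's implementation; the function names and defaults are assumptions for illustration.

```python
import math
import random

def rmse(fcst, obs):
    """Root-mean-square error of matched forecast/observation pairs."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(fcst, obs)) / len(fcst))

def bootstrap_ci(fcst, obs, stat=rmse, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a verification statistic.

    Matched pairs are resampled with replacement, the statistic is recomputed
    on each resample, and the alpha/2 and 1 - alpha/2 percentiles of the
    resampled values bound the interval.
    """
    rng = random.Random(seed)
    n = len(fcst)
    samples = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        samples.append(stat([fcst[i] for i in idx], [obs[i] for i in idx]))
    samples.sort()
    lo = samples[int((alpha / 2) * n_boot)]
    hi = samples[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Resampling matched pairs (rather than forecasts and observations separately) preserves the pairing on which the statistic depends.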
Many features of the
software have been presented at past workshops; the latest release adds several
new capabilities. The new Ensemble-Stat
tool preprocesses sets of forecasts into ensemble forecasts, including mean,
spread, and probability fields. When observations are included, it also derives
ensemble statistics such as the rank histogram and the continuous ranked
probability score (CRPS). When accumulating statistics over time, users can now adjust the
confidence intervals to account for serial correlation. To assist WRF ARW
users, MET can now read the netCDF output from the pinterp postprocessor.
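A minimal sketch of the ensemble statistics mentioned above may be helpful. This is for illustration only; MET's Ensemble-Stat tool computes these internally, and the function names here are assumptions. The CRPS uses the empirical formula CRPS = E|X - y| - 0.5 E|X - X'|, where X and X' are independent draws from the ensemble and y is the observation.

```python
def crps_ensemble(members, obs):
    """Empirical CRPS for one ensemble forecast and one observation."""
    m = len(members)
    # Mean absolute difference between members and the observation
    term1 = sum(abs(x - obs) for x in members) / m
    # Half the mean absolute difference between all member pairs
    term2 = sum(abs(x - y) for x in members for y in members) / (2 * m * m)
    return term1 - term2

def rank_of_obs(members, obs):
    """Rank of the observation within the ensemble (1 .. m + 1).

    Accumulating these ranks over many cases yields the rank histogram;
    a flat histogram suggests the ensemble spread is well calibrated.
    """
    return 1 + sum(1 for x in members if x < obs)
```

For a one-member ensemble the pairwise term vanishes and the CRPS reduces to the absolute error, a useful sanity check.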
Multi-category (e.g., 3x3) contingency tables are supported, along with
appropriate skill scores. Many new preprocessing tools assist users with
formatting and examining their observational data for use with MET. The tools
reformat MADIS (Meteorological Assimilation Data Ingest System), TRMM (Tropical
Rainfall Measuring Mission), and WWMCA (World Wide Merged Cloud Analysis)
products and produce plots of many types of observational data sets. These
enhancements will be of particular interest for users performing cloud forecast
verification. Examples of the existing and new verification capabilities will be
shown.
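As one example of a skill score appropriate for the multi-category contingency tables mentioned above, the Heidke skill score compares the proportion correct to that expected by chance from the table's marginals. The sketch below is a generic statement of the formula, not MET's code.

```python
def heidke_skill_score(table):
    """Heidke skill score for a K x K contingency table of counts.

    table[i][j] = number of cases forecast in category i and observed
    in category j. HSS = (PC - E) / (1 - E), where PC is the proportion
    correct and E is the proportion correct expected by chance.
    """
    total = sum(sum(row) for row in table)
    k = len(table)
    pc = sum(table[i][i] for i in range(k)) / total
    fcst_marg = [sum(table[i]) / total for i in range(k)]
    obs_marg = [sum(table[i][j] for i in range(k)) / total for j in range(k)]
    e = sum(fcst_marg[i] * obs_marg[i] for i in range(k))
    return (pc - e) / (1 - e)
```

A perfect forecast gives a score of 1, while a forecast no better than chance scores 0.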