P39    Recent enhancements to the Model Evaluation Tools (MET) software
John Halley Gotway, Tara L. Jensen, R. Bullock, Tressa Fowler,
Tatiana Burek, Julie Prestopnik,
Minna Win, George McCabe, and Paul Prestopnik,
National Center for Atmospheric Research/Research
Applications Laboratory and the Developmental Testbed
Center, Boulder, Colorado, USA
Robust testing and evaluation of research innovations is a
critical component of the Research-to-Operations (R2O) process and is
performed for the U.S. National Oceanic and Atmospheric Administration (NOAA)
National Centers for Environmental Prediction (NCEP) by the Developmental
Testbed Center (DTC). At the foundation of the DTC testing and evaluation
(T&E) system is the Model Evaluation Tools (MET), a state-of-the-science
verification package supported for the community through the DTC. The DTC
verification team has been working closely with other DTC teams, as well as
staff at NCEP, to enhance MET so that it better supports both internal T&E
activities and external testing within the community. This presentation will demonstrate
several advancements made available in the current release. These
include the transition to reading and writing NetCDF4 files; the ability to
run multiple convolution radii and thresholds with one call to the Method for
Object-based Diagnostic Evaluation (MODE); the enhancement of MODE to follow
objects through time (MODE-TD); enhancements to facilitate storm- or
feature-centric evaluations; the inclusion of cosine-latitude and grid-box-area
weighting for larger domains; support for several cloud analysis fields
and satellite-based cloud lidar fields; options for
summarizing multiple point observations at the same station within a time
window; the addition of the High Resolution Assessment (HiRA)
methodology for point observations; and Python scripting to facilitate
running MET systematically.
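
To make the multi-radius and multi-threshold MODE capability concrete, the
excerpt below sketches how array-valued convolution settings might look in a
MODE configuration file. This is a minimal illustration rather than a released
default: the APCP/A03 field choice and the specific radius and threshold
values are assumptions.

   // Illustrative MODE configuration excerpt. A single MODE run can
   // sweep every combination of the listed radii and thresholds.
   fcst = {
      field = {
         name  = "APCP";             // 3-hour accumulated precipitation (assumed)
         level = "A03";
      };
      conv_radius = [ 5, 10, 15 ];    // convolution radii in grid squares
      conv_thresh = [ >=2.5, >=5.0 ]; // convolution thresholds (assumed units: mm)
   };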
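The Python scripting noted above can be as simple as a thin driver that loops
over matched forecast and observation files and invokes a MET tool with its
documented command line (for MODE: mode fcst_file obs_file config_file
[-outdir path]). The sketch below is a hypothetical example, not part of the
MET distribution; all file and directory names are placeholders.

   #!/usr/bin/env python
   """Hypothetical driver that runs MODE over a list of cases."""
   import subprocess
   from pathlib import Path

   MODE_EXE = "mode"                 # assumes the MET 'mode' binary is on PATH
   CONFIG   = "MODEConfig_APCP_03h"  # hypothetical MODE configuration file
   OUT_DIR  = Path("mode_output")

   CASES = [
       # (forecast file, matching observation file): placeholder names
       ("fcst_2016061512_F024.grb2", "obs_2016061612.grb2"),
       ("fcst_2016061612_F024.grb2", "obs_2016061712.grb2"),
   ]

   OUT_DIR.mkdir(exist_ok=True)
   for fcst, obs in CASES:
       # MODE usage: mode fcst_file obs_file config_file [-outdir path]
       cmd = [MODE_EXE, fcst, obs, CONFIG, "-outdir", str(OUT_DIR)]
       subprocess.run(cmd, check=True)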