2.6 Tracking WRF performance: how do the three most recent versions compare?

Jamie K. Wolff and Michelle Harrold, National Center for Atmospheric Research

As improvements and additions are made to the WRF code, questions continually arise in the user community regarding the improvement and/or degradation of specific WRF configurations with each subsequent release. Given the numerous options available in WRF, the answer may differ for each user and be laced with caveats. Prior to a release, the WRF code is run through a large number of regression tests to ensure that it executes successfully with a wide variety of options; however, extensive testing of forecast skill is not typically performed. To provide useful information on how WRF performance evolves over time, the Developmental Testbed Center (DTC) tested one particular configuration of the Advanced Research WRF (ARW) dynamic core with the three most recent releases of WRF (v3.4, v3.4.1, and v3.5). The testing spanned a warm season and a cold season to capture model performance across a variety of weather regimes. The model was run over a 15-km CONUS domain, with forecasts initialized every 36 hours and run out to 48 hours. For this presentation, objective model verification statistics will be presented to highlight differences in forecast performance across releases for surface and upper-air temperature, dew point temperature, and wind.
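As a minimal illustration of the kind of objective verification statistics referred to above, the sketch below computes bias and RMSE from matched forecast-observation pairs. The array names and values are hypothetical and are not drawn from the DTC study; the DTC's verification is performed with dedicated tooling rather than ad hoc scripts.

```python
# Minimal sketch: bias and RMSE for matched forecast-observation pairs.
# Values are hypothetical; a real evaluation would draw matched pairs
# from gridded forecasts and quality-controlled observations.
import numpy as np

forecast = np.array([271.2, 280.5, 268.9, 275.3])  # K, hypothetical 2-m temperatures
observed = np.array([270.8, 281.1, 268.2, 276.0])  # K, matched observations

error = forecast - observed
bias = error.mean()                  # mean error: systematic over-/under-forecast
rmse = np.sqrt((error ** 2).mean())  # root-mean-square error: total error magnitude

print(f"bias = {bias:+.2f} K, RMSE = {rmse:.2f} K")
```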