Testing for WRF Version 3.5.1
Typical for our annual bug-fix release in late summer (the "third digit" release), three types of testing are conducted. The first is the weekly testing that accompanies proposed updates to the repository; these tests largely ensure that the basic functioning of the model works in parallel (with more than a single processor). The second validates that optional model features work, exercising most of the model's infrastructure. The third compares forecast results with analyses.
Weekly Testing
On the NCAR Linux machine (yellowstone), very short tests are conducted with two versions each of three compilers: Intel (12.1.5, 13.1.2), PGI (12.5, 13.3), and GNU (4.7.2, 4.8.1). The tests cover idealized and real-data cases, both the ARW and NMM cores, and some simple WRF Chemistry options.
Baroclinic Wave: 10 tests (each serial, OpenMP, MPI)
Super Cell: 16 tests (each serial, OpenMP, MPI)
ARW Real-Data: 30 tests (each serial, MPI), 22 tests (each serial, OpenMP, MPI)
NMM Real-Data: 9 tests (each serial, MPI)
Chemistry: 6 tests (each serial, MPI)
This totals over 200 forecasts per compiler, for approximately 1200 forecasts overall and approximately 800 bit-for-bit comparisons of parallel versus serial results.
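As an illustration of the parallel-versus-serial check, the sketch below compares two wrfout files field by field and reports any variable that is not identical. This is a minimal sketch rather than the regression suite's actual script: the file names are illustrative, the Python netCDF4 module is an assumed tool, and a field-wise comparison is used (instead of a raw byte compare) because file metadata such as creation timestamps can legitimately differ between otherwise identical runs.

    # Minimal sketch: field-wise bit-for-bit comparison of a serial and a
    # parallel wrfout file. File names are illustrative.
    from netCDF4 import Dataset
    import numpy as np

    def bit_for_bit(serial_path, parallel_path):
        """Return the names of variables that differ between the two files."""
        diffs = []
        with Dataset(serial_path) as a, Dataset(parallel_path) as b:
            for name, var in a.variables.items():
                if name not in b.variables:
                    diffs.append(name)          # variable missing entirely
                    continue
                # np.array_equal requires exact equality: any difference in
                # any grid cell of any field fails the test.
                if not np.array_equal(var[:], b.variables[name][:]):
                    diffs.append(name)
        return diffs

    failures = bit_for_bit("wrfout_serial_d01", "wrfout_mpi_d01")
    print("PASS" if not failures else "FAIL: %s" % failures)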
Optional Features
The feature tests exercise capabilities of the model that tend to be important to most users. This testing handles the infrastructure related to the available options, and uses longer forecast periods (such as 24 h). The domain size and resolution are not critical, as the validation is subjective, carried out through visual inspection of graphics and detailed review of diagnostic printout. These tests start from a basic single-domain forecast and then proceed to a nest. The nesting tests vary along three axes: stationary vs. moving nests, concurrent nesting vs. ndown, and multiple input domains vs. model-generated static fields. Timing tests for a single nest (3:1, 4:1, and 5:1 ratios) and for 4-domain runs indicate that the WRF model spends approximately 6-7% of its time on nesting overhead. The single-domain and some of the nested cases are then used to test additional capabilities: Analysis Nudging, Digital Filtering, Observation Nudging, and SST Update. All of the listed capabilities, both with and without nests, are then tested for bit-for-bit correctness across a restart; a sketch of this check follows.
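The restart check can be expressed with the same comparison: a continuous forecast and a stopped-and-restarted forecast must produce identical output at a common valid time. The sketch below reuses the hypothetical bit_for_bit() helper from the earlier sketch; the forecast lengths and file names are again illustrative.

    # Minimal sketch of the restart bit-for-bit test: a continuous 24-h run
    # must match, exactly, a 12-h run that is restarted and carried to 24 h.
    # Both files are assumed valid at the same (24-h) forecast time.
    failures = bit_for_bit("wrfout_d01_continuous", "wrfout_d01_restarted")
    if failures:
        raise SystemExit("restart test FAILED for fields: %s" % failures)
    print("restart test passed: continuous and restarted runs are identical")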
Model Verification
The objective verification of the WRF model covers a winter month (January 2013) and a summer month (June 2010). The model set-up is a 20-km CONUS domain, 290x190 horizontal grid cells, with 40 vertical levels extending up to 30 hPa. The WRF initial and lateral boundary conditions are generated with the WPS package from the NCEP FNL data (1-degree, isobaric, global tropospheric analyses). For both periods, a 0000 UTC initialization and 48-h forecast are conducted: 28 forecasts for summer (June 1 through June 28 initializations) and 28 for winter (January 1 through January 28 initializations). This is 56 forecasts per physics option, which, with 14 physics suites, provides a total of almost 800 48-h forecasts. Each WRF simulation is compared with the FNL surface and upper-air analyses, and domain-wide bias and RMSE are computed.
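The bias and RMSE computation itself is straightforward; below is a minimal sketch for a single 2-D field, assuming the FNL analysis has already been interpolated to the 20-km WRF grid. The file and variable names are illustrative.

    # Minimal sketch: domain-mean bias and RMSE of one forecast field
    # against the (grid-interpolated) FNL analysis. Names are illustrative.
    from netCDF4 import Dataset
    import numpy as np

    def bias_rmse(forecast, analysis):
        """Domain-mean bias and root-mean-square error of two 2-D fields."""
        diff = forecast - analysis
        return diff.mean(), np.sqrt((diff ** 2).mean())

    with Dataset("wrfout_d01_forecast.nc") as fc, Dataset("fnl_on_wrf_grid.nc") as an:
        bias, rmse = bias_rmse(fc.variables["T2"][0], an.variables["T2"][0])
        print("T2  bias: %6.2f K   RMSE: %6.2f K" % (bias, rmse))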
The 14 physics suites, identified by test name and the associated namelist option values (NA = option not used):

Test       | cu_physics | ra_lw | ra_sw | bl_pbl | LSM | mp_physics | shcu
-----------|------------|-------|-------|--------|-----|------------|-----
STD        | 1          | 1     | 2     | 1      | 2   | 4          | NA
CUGF       | 3          | 1     | 2     | 1      | 2   | 4          | NA
CAMMP      | 1          | 1     | 2     | 1      | 2   | 11         | NA
CLM        | 1          | 1     | 2     | 1      | 5   | 4          | NA
GBM        | 1          | 1     | 2     | 12     | 2   | 4          | NA
NOAHMP     | 1          | 1     | 2     | 1      | 4   | 4          | NA
NSSL1MOM_1 | 1          | 1     | 2     | 1      | 2   | 17         | NA
NSSL1MOM_2 | 1          | 1     | 2     | 1      | 2   | 18         | NA
NSSL2MOM_1 | 1          | 1     | 2     | 1      | 2   | 19         | NA
NSSL2MOM_2 | 1          | 1     | 2     | 1      | 2   | 21         | NA
RRTMG      | 1          | 4     | 4     | 1      | 2   | 4          | NA
SHCU       | 1          | 1     | 2     | 1      | 2   | 4          | 3
Thompson   | 1          | 1     | 2     | 1      | 2   | 8          | NA
WSM6       | 1          | 1     | 2     | 1      | 2   | 6          | NA
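For reference, the column shorthands above correspond to WRF &physics namelist variables: ra_lw and ra_sw are ra_lw_physics and ra_sw_physics, bl_pbl is bl_pbl_physics, LSM is sf_surface_physics, and shcu is shcu_physics. The sketch below expands one suite row into namelist lines; the shorthand-to-variable mapping is an interpretation of the table, and the helper is illustrative rather than part of the test harness.

    # Sketch: expand one row of the suite table into &physics namelist lines.
    # The shorthand-to-variable mapping is an interpretation of the table.
    NAMELIST_VARS = {
        "cu_physics": "cu_physics",
        "ra_lw":      "ra_lw_physics",
        "ra_sw":      "ra_sw_physics",
        "bl_pbl":     "bl_pbl_physics",
        "LSM":        "sf_surface_physics",
        "mp_physics": "mp_physics",
        "shcu":       "shcu_physics",
    }

    def physics_namelist(row):
        """Render a suite row (shorthand -> value) as &physics namelist text."""
        lines = ["&physics"]
        for short, var in NAMELIST_VARS.items():
            if row[short] != "NA":               # NA: option not exercised
                lines.append(" %-20s = %s," % (var, row[short]))
        lines.append("/")
        return "\n".join(lines)

    # The STD suite from the first data row of the table:
    std = {"cu_physics": 1, "ra_lw": 1, "ra_sw": 2, "bl_pbl": 1,
           "LSM": 2, "mp_physics": 4, "shcu": "NA"}
    print(physics_namelist(std))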