========================================
  W R F   B E N C H M A R K   D A T A S E T
  Single Domain, Large Size
========================================
  November, 2005, John Michalakes
========================================

Contents:
---------

This benchmark distribution contains the following files (sizes in bytes):

      Filename                        Description
   ----------------------------------------------------------
       3598  README.BENCHMARK         this file
  3,244,032  wrf_large_benchmark.tar  basic files

The wrf_large_benchmark.tar file contains:

      30200  ./ETAMPNEW_DATA          phys init
        245  ./GENPARM.TBL            phys init
       8761  ./LANDUSE.TBL            phys init
     749248  ./RRTM_DATA              phys init
       2207  ./SOILPARM.TBL           phys init
       3124  ./VEGPARM.TBL            phys init
      54446  ./gribmap.txt            grib init
     748503  ./tr49t67                phys init
     748503  ./tr49t85                phys init
     748503  ./tr67t85                phys init
       3571  ./namelist.input.0h      namelist (1)
       3570  ./namelist.input.3h      namelist (2)
       4479  ./diffwrf_large.txt      sample stats
      63142  ./sample_rsl_out         sample stdout
      63142  ./sample_rsl_err         sample stderr

In addition, you need the code, lateral boundary conditions, a restart
data set, and a file of reference output. These are available at:

   http://www.mmm.ucar.edu/wrf/WG2/bench/conus2.5km_2005

Description:
------------

Latter 3 hours of a 6-hour, 2.5 km resolution case covering the
Continental U.S. (CONUS) domain for June 4, 2005, using the Eulerian
Mass (EM) dynamics with a 15-second time step. The benchmark period is
hours 3-6 (3 hours), starting from a restart file written at the end of
the initial 3-hour period. As an alternative, the model may be run the
full 6 hours from a cold start. The advantage of restarting at 3 hours
is a shorter run time (3 hours instead of 6), at the cost of a larger
(2x) input file; a cold-start run takes longer, but the initial
condition file is smaller. Note that a cold-start run from 0 h will
generate a 3 h restart file, which can be used for subsequent benchmark
runs conducted as restarts, eliminating the need to download the large
restart file and reducing run times to 3 hours for most of the runs.
Instructions:
-------------

The files in this directory and at the URL above should be placed in
the directory in which you will run the benchmarks. This is typically
test/em_real in the WRF distribution, or another directory containing
the files found in test/em_real in the WRF distribution. The
wrf_large_benchmark.tar file in this distribution also contains those
files.

Make sure you use the namelist.input file from *this* distribution, not
the default one that comes with the WRF code. To run from a restart
file, copy namelist.input.3h to namelist.input. To run from a cold
start, copy namelist.input.0h to namelist.input. Be sure you have the
correct input file (restart or initial conditions) in the run directory
for the run you are doing.

You should edit the namelist.input file only to change the
decomposition (number of processors in X and Y), the tiling (for
shared- or hybrid shared/distributed-memory runs), or the number of
I/O processes (not typically needed for this case, which does not
measure I/O performance).

Run the model on a series of different numbers of processes, and save
and submit the following files from each run:

   - namelist.input file(s)
   - configure.wrf
   - either:
       o rsl.error.0000 and rsl.out.0000 (distributed-memory parallel)
       o terminal output redirected from wrf.exe (non-distributed-memory)
   - diffout_tag (from wrfout_d01_2005-06-04_06:00:00 ; see below)

Also submit a tar file of the WRFV2 source directory with any source
code modifications you may have made. Only one such file is needed
unless you used different versions of the code for different runs.
Please run clean -a in the WRFV2 directory and delete any wrfinput,
wrfbdy, wrfout, and any other extraneous large files before archiving.
Gzip the tar file. It will not be larger than 10 MB if you have cleaned
and removed the data files.

Do not submit the wrfout_d01_* files that are generated by the runs;
these are too large.
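The rsl.out.0000 file from each distributed-memory run records the cost
of every model time step in "Timing for main" lines. One quick way to
summarize those timings is sketched below; the two sample lines are
synthetic (made-up numbers in WRF's timing-line format), so on a real
run you would point the awk command at your rsl.out.0000 instead:

```shell
# Summarize per-timestep cost from an rsl.out-style file.
# The sample lines below are synthetic, for illustration only.
cat > rsl.out.sample <<'EOF'
Timing for main: time 2005-06-04_03:00:15 on domain 1: 12.50000 elapsed seconds
Timing for main: time 2005-06-04_03:00:30 on domain 1: 11.50000 elapsed seconds
EOF

# $(NF-2) is the elapsed-seconds value, two fields before "seconds".
awk '/Timing for main/ { sum += $(NF-2); n++ }
     END { printf "steps=%d  mean=%.2f s/step\n", n, sum/n }' rsl.out.sample
```

For the sample above this prints "steps=2  mean=12.00 s/step"; the mean
seconds-per-step figure is a convenient number to compare across
process counts.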
Instead, please submit the output from diffwrf for the wrfout_d01 files
generated by each of your runs. The reference output file, named
wrfout_reference, is available at the URL above. Diffwrf will compare
your output with the reference output file and generate difference
statistics for each field that is not bit-for-bit identical to the
reference output. Run diffwrf as follows:

   diffwrf your_output wrfout_reference > diffout_tag

and return the captured output. The diffwrf program is distributed and
compiled automatically with WRF. The executable file is
external/io_int/diffwrf; this is the version of diffwrf that reads
native binary output files, which may be problematic on certain
systems. If you encounter a problem, please contact michalak@ucar.edu .

Additional information:
-----------------------

http://www.mmm.ucar.edu/wrf/WG2/bench
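If you have completed several runs, the diffwrf step can be scripted.
A minimal sketch follows, assuming (this layout is not part of the
distribution) that each run's output was saved under a per-run
directory such as run_np64/, and that you run from the top of the
WRFV2 tree where external/io_int/diffwrf was built:

```shell
# Produce one diffout_<tag> file per completed run.
# The run_np*/ directory naming is an assumption; adapt it to however
# you organized your runs.
DIFFWRF=external/io_int/diffwrf
for out in run_np*/wrfout_d01_2005-06-04_06:00:00; do
    [ -f "$out" ] || continue                 # skip runs with no output yet
    tag=$(basename "$(dirname "$out")")       # e.g. run_np64
    "$DIFFWRF" "$out" wrfout_reference > "diffout_${tag}"
done
```

Each resulting diffout_<tag> file is then submitted alongside the
namelist and rsl files for that run.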