.. role:: underline
   :class: underline

Running WRF
===========

|

By default, the WRF model is a fully compressible, nonhydrostatic model with a hybrid vertical hydrostatic pressure coordinate (HVC) and Arakawa C-grid staggering. The model uses Runge-Kutta 2nd- and 3rd-order time integration schemes, and 2nd- to 6th-order advection schemes in both the horizontal and vertical. It uses a time-split small step for acoustic and gravity-wave modes. The dynamics conserves scalar variables.

The WRF model code contains an initialization program for either real data (real.exe) or idealized data (ideal.exe), a numerical integration program (wrf.exe), a program allowing one-way nesting for domains run separately (ndown.exe), and a program for tropical storm bogussing (tc.exe). Version 4 of the WRF model supports a variety of capabilities, including

* Real-data and idealized simulations
* Various lateral boundary condition options
* Full physics options, with various filters
* Positive-definite advection scheme
* Hydrostatic runtime option
* Terrain-following vertical coordinate option
* One-way, two-way, and moving nest options
* Three-dimensional analysis nudging
* Observation nudging
* Regional and global applications
* Digital filter initialization
* Vertical refinement for a nested domain

|

Running Idealized Cases
-----------------------

To run an idealized simulation, the model must have been compiled for the idealized test case of choice, with either a serial compiling option (mandatory for the 1-D and 2-D test cases) or a parallel computing option (e.g., dmpar; allowed for the 3-D test cases). See the following instructions for either a 2-D idealized case or a 3-D idealized case.

**3-D Baroclinic Wave Case**

#. Move to the case running directory.

   .. code-block::

      > cd WRF/test/em_b_wave

#. Edit the namelist.input file to set the integration length, output frequency, domain size, time step, physics options, and other parameters (see 'README.namelist' in the WRF/run directory, or `namelist options`_), and then save the file. A sketch of the most relevant settings appears after these steps.

#. Run the ideal initialization program.

   * For a serial build:

     .. code-block::

        > ./ideal.exe >& ideal.log

   * For a parallel build:

     .. code-block::

        > mpirun -np 1 ./ideal.exe

   |br|

   .. note::
      ideal.exe must be run with only a single processor (denoted by "-np 1"), even if the code is built for parallel computing.

   This program typically reads an input sounding file provided in the case directory and generates the initial condition file "wrfinput_d01." Idealized cases do not require a lateral boundary file because boundary conditions are handled in the code via namelist options. If the job is successful, the bottom of the "ideal.log" file (or the rsl.out.0000 file for parallel execution) should read **SUCCESS COMPLETE IDEAL INIT**.

#. Run the WRF model.

   * For a serial build:

     .. code-block::

        > ./wrf.exe >& wrf.log

     |br|

   * For a parallel build (*here requesting 8 processors*):

     .. code-block::

        > mpirun -np 8 ./wrf.exe

     |br|

   Pairs of *rsl.out.\** and *rsl.error.\** files will appear with MPI runs. These are standard out and error files; there will be one pair for each processor used.

   If the simulation is successful, wrf output is written to a file named "wrfout_d01_0001-01-01_00:00:00," and the end of the "wrf.log" file (or rsl.out.0000) should read **wrf: SUCCESS COMPLETE WRF**. Output files *wrfout_d01_0001-01-01\** and restart files *wrfrst\** should be present in the run directory, depending on how namelist variables are specified for output.
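As a point of reference for step 2 above, the following is a minimal sketch of the *&time_control* and *&domains* entries such a run might use. The values shown are illustrative assumptions (a 24-hour run with hourly output), not necessarily the defaults shipped with the em_b_wave case, and the "!" annotations are explanatory only (remove them if your compiler's namelist reader does not accept inline comments); 'README.namelist' remains the authoritative reference.

.. code-block::

   &time_control
    run_days            = 1,        ! total integration length
    run_hours           = 0,
    start_year          = 0001,     ! idealized runs use a nominal date,
    start_month         = 01,       ! which becomes the wrfout time stamp
    start_day           = 01,
    start_hour          = 00,
    end_year            = 0001,
    end_month           = 01,
    end_day             = 02,
    end_hour            = 00,
    history_interval    = 60,       ! output frequency, in minutes
    frames_per_outfile  = 1000,     ! time levels written per wrfout file
    restart             = .false.,
   /

   &domains
    time_step           = 600,      ! model time step, in seconds
    max_dom             = 1,
    e_we                = 41,       ! west-east grid dimension
    e_sn                = 81,       ! south-north grid dimension
    e_vert              = 64,       ! number of vertical levels
    dx                  = 100000,   ! grid spacing, in meters
    dy                  = 100000,
   /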
The time stamp on the output and restart files originates from the start times in the namelist file.

|
|

Running Real-data Cases
-----------------------

#. To run the model for a real-data case, move to the working directory by issuing the command

   .. code-block::

      > cd WRF/test/em_real

   or

   .. code-block::

      > cd WRF/run

#. Prior to running a real-data case, the WRF Preprocessing System (WPS) must have been run successfully, producing "met_em.*" files. Link the met_em* files to the WRF running directory.

   .. code-block::

      > ln -sf ../../../WPS/met_em* .

#. Start with the default *namelist.input* file in the directory and edit it for your case (see the sketch at the end of this section).

   * Make sure the parameters in the *&time_control* and *&domains* sections are set specifically for your case.
   * Make sure the dates and dimensions of the domain match those set in WPS. If only one domain is used, only entries in the first column are read and the other columns are ignored; there is no need to remove the additional columns.

#. Run the real-data initialization program.

   * For WRF built for serial computing, or with OpenMP (smpar):

     .. code-block::

        > ./real.exe >& real.log

   * For WRF built for parallel computing (dmpar); an example requesting four processors:

     .. code-block::

        > mpiexec -np 4 ./real.exe

   |br|

   The real.exe program uses the 2-D output (met_em* files) created by the WPS program to perform vertical interpolation of 3-D meteorological fields and sub-surface soil data, and creates the boundary and initial condition files that feed into the wrf.exe program. If successful, the end of the real.log file (or the rsl.out.0000 file) should read "**real_em: SUCCESS EM_REAL INIT**." There should also now be "wrfinput_d0*" files (one file per domain) and a "wrfbdy_d01" file, which are used as input to the wrf.exe program.

#. Run the WRF model.

   * For WRF built for serial computing, or with OpenMP (smpar):

     .. code-block::

        > ./wrf.exe >& wrf.log

   * For WRF built for parallel computing (dmpar); an example requesting four processors:

     .. code-block::

        > mpiexec -np 4 ./wrf.exe

   |br|

   Pairs of rsl.out.* and rsl.error.* files will appear with MPI runs (one pair for each processor). These are standard out and error files. If the simulation is successful, wrf output is written to files named *wrfout_d<domain>_<date>* (one per domain), where the date in the file name corresponds to the start time set in the namelist, and the end of the log file (or rsl.out.0000) should read **wrf: SUCCESS COMPLETE WRF**.
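For step 3 above, here is a corresponding minimal sketch of a single-domain real-data *namelist.input*. Every value shown is an illustrative assumption (a hypothetical 24-hour run on a 15-km grid); the dates must match your met_em* files, and the grid dimensions and spacing must match the WPS namelist. As before, the "!" annotations are explanatory only and should be stripped if your compiler rejects comments in namelist input.

.. code-block::

   &time_control
    run_hours           = 24,
    start_year          = 2019,     ! must match the first met_em* file
    start_month         = 09,
    start_day           = 04,
    start_hour          = 12,
    end_year            = 2019,     ! must match the last met_em* file
    end_month           = 09,
    end_day             = 05,
    end_hour            = 12,
    interval_seconds    = 21600,    ! time between met_em* files (6 h here)
    input_from_file     = .true.,
    history_interval    = 180,      ! output frequency, in minutes
    frames_per_outfile  = 1,        ! one time level per wrfout file
   /

   &domains
    time_step           = 90,       ! seconds; a common rule of thumb is 6*dx (dx in km)
    max_dom             = 1,
    e_we                = 150,      ! must match e_we in the WPS namelist
    e_sn                = 130,      ! must match e_sn in the WPS namelist
    e_vert              = 45,
    num_metgrid_levels  = 34,       ! number of levels in the met_em* files
    dx                  = 15000,    ! must match WPS grid spacing, in meters
    dy                  = 15000,
   /

Note that if the dates or domain dimensions are changed after real.exe has already been run, real.exe should be rerun before wrf.exe so that the wrfinput_d0* and wrfbdy_d01 files are regenerated to match the new settings.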