The WRF model is a fully compressible, nonhydrostatic model (with a runtime hydrostatic option). Its vertical coordinate is a terrain-following hydrostatic pressure coordinate. The grid staggering is the Arakawa C-grid. The model uses Runge-Kutta 2nd- and 3rd-order time integration schemes, and 2nd- to 6th-order advection schemes in both the horizontal and vertical. It uses a time-split small step for acoustic and gravity-wave modes. The dynamics conserves scalar variables.
The WRF model code contains several initialization programs (ideal.exe and real.exe; see Chapter 4), a numerical integration program (wrf.exe), and a program to do one-way nesting (ndown.exe). The WRF model Version 3 supports a variety of capabilities. These include
Before compiling the WRF code on a computer, check whether the netCDF library is installed. netCDF is one of the supported WRF I/O options; it is the one most commonly used, and it is the format supported by the post-processing programs. If netCDF is installed in a path other than /usr/local/, find the path and use the environment variable NETCDF to define it. To do so, type
setenv NETCDF path-to-netcdf-library
Often the netCDF library and its include/ directory are collocated. If this is not the case, create a directory, link both the netCDF lib/ and include/ directories into it, and use the NETCDF environment variable to point to this directory.
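A minimal sketch of that arrangement, assuming the library and include files live in separate places (the paths below are placeholders; adjust them to your installation):

```shell
# Sketch: gather the netCDF lib/ and include/ directories under one path
# so that NETCDF can point at a single directory. Paths are examples only.
NETCDF_LIB=/usr/local/netcdf/lib
NETCDF_INC=/usr/local/netcdf/include
mkdir -p netcdf_links
ln -sf "$NETCDF_LIB" netcdf_links/lib
ln -sf "$NETCDF_INC" netcdf_links/include
# csh/tcsh: setenv NETCDF `pwd`/netcdf_links
# sh/bash:  export NETCDF=`pwd`/netcdf_links
```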
If the netCDF library is not available on the computer, it must be installed first. NetCDF source code and pre-built binaries may be downloaded, and installation instructions found, on the Unidata Web page at http://www.unidata.ucar.edu/.
If a PGI or Intel compiler is used on a Linux computer, make sure netCDF was built with the same compiler, and use the NETCDF environment variable to point to that PGI- or Intel-compiled netCDF library.
The WRF source code tar file can be downloaded from http://www2.mmm.ucar.edu/wrf/download/get_source.html. Once the tar file is unzipped (gunzip WRFV3.TAR.gz) and untarred (tar -xf WRFV3.TAR), it will create a WRFV3/ directory, which contains:
Makefile             Top-level makefile
README               General information about the WRF/ARW core
README_test_cases    Explanation of the test cases
README.NMM           General information for the WRF/NMM core
README.rsl_output    For NMM
Registry/            Directory for WRF Registry files
arch/                Directory where compile options are gathered
clean                Script to clean created files and executables
compile              Script for compiling the WRF code
configure            Script to create the configure.wrf file for compile
chem/                WRF chemistry, supported by NOAA/GSD
dyn_em/              Directory for ARW dynamics and numerics
dyn_exp/             Directory for a 'toy' dynamic core
dyn_nmm/             Directory for NMM dynamics and numerics, supported by DTC
external/            Directory containing external packages, such as those for I/O, time-keeping and MPI
frame/               Directory containing modules for the WRF framework
inc/                 Directory containing include files
main/                Directory for main routines, such as wrf.F, and all executables after compilation
phys/                Directory for all physics modules
run/                 Directory where one may run WRF
share/               Directory containing mostly modules for the WRF mediation layer and WRF I/O
test/                Directory containing test case directories; may be used to run WRF
tools/               Directory containing tools for developers
The steps to compile and run the model are:
1. configure: generate a configuration file for compilation
2. compile: compile the code
3. run the model
Go to WRFV3 (top) directory and type
./configure
and a list of choices for your computer should appear. These choices range from compiling for a single-processor job (serial), to using OpenMP shared-memory (smpar) or distributed-memory (dmpar) parallelization options for multiple processors, to a combination of the shared-memory and distributed-memory options (dm+sm). When a selection is made, a second choice for compiling nesting will appear. For example, on a Linux computer, the above steps look like:
> setenv NETCDF /usr/local/netcdf-pgi
> ./configure
checking for perl5... no
checking for perl... found /usr/bin/perl (perl)
Will use NETCDF in dir: /usr/local/netcdf-pgi
PHDF5 not set in environment. Will configure WRF for use without.
$JASPERLIB or $JASPERINC not found in environment, configuring to build without
grib2 I/O...
-----------------------------------------------------------------------
Please select from among the following supported platforms.
1. Linux i486 i586 i686, gfortran compiler with gcc (serial)
2. Linux i486 i586 i686, gfortran compiler with gcc (smpar)
3. Linux i486 i586 i686, gfortran compiler with gcc (dmpar)
4. Linux i486 i586 i686, gfortran compiler with gcc (dm+sm)
5. Linux i486 i586 i686, g95 compiler with gcc (serial)
6. Linux i486 i586 i686, g95 compiler with gcc (dmpar)
7. Linux i486 i586 i686, PGI compiler with gcc (serial)
8. Linux i486 i586 i686, PGI compiler with gcc (smpar)
9. Linux i486 i586 i686, PGI compiler with gcc (dmpar)
10. Linux i486 i586 i686, PGI compiler with gcc (dm+sm)
11. Linux x86_64 i486 i586 i686, ifort compiler with icc (non-SGI installations) (serial)
12. Linux x86_64 i486 i586 i686, ifort compiler with icc (non-SGI installations) (smpar)
13. Linux x86_64 i486 i586 i686, ifort compiler with icc (non-SGI installations) (dmpar)
14. Linux x86_64 i486 i586 i686, ifort compiler with icc (non-SGI installations) (dm+sm)
15. Linux i486 i586 i686 x86_64, PathScale compiler with pathcc (serial)
16. Linux i486 i586 i686 x86_64, PathScale compiler with pathcc (dmpar)
Enter selection [1-16] : 9
Compile for nesting? (0=no nesting, 1=basic, 2=preset moves, 3=vortex following) [default 0]: 1
Enter the option that is best for your computer and application. When the return key is hit, a configure.wrf file will be created. Edit compile options and paths in this file, if necessary.
Hint: It is helpful to start with something simple, such as the serial build. If it is successful, move on to build smpar or dmpar code. Remember to type 'clean -a' between each build.
Hint: On some computers (e.g. some Intel machines), it may be necessary to set the following environment variable before one compiles:
setenv WRF_EM_CORE 1
To compile the code, type
./compile
and the following choices will appear:
Usage:
compile wrf compile wrf in run dir (Note, no real.exe, ndown.exe or ideal.exe generated)
or choose a test case (see README_test_cases for details):
compile em_b_wave
compile em_esmf_exp (example only)
compile em_grav2d_x
compile em_heldsuarez
compile em_hill2d_x
compile em_les
compile em_quarter_ss
compile em_real
compile em_seabreeze2d_x
compile em_squall2d_x
compile em_squall2d_y
compile exp_real (example of a toy solver)
compile nmm_real (NMM solver)
compile -h help message
where em stands for the Advanced Research WRF dynamic solver (which currently is the 'Eulerian mass-coordinate' solver). Type one of the above to compile. When you switch from one test case to another, you must type one of the above to recompile. The recompile is necessary to create a new initialization executable (i.e. real.exe, and ideal.exe - there is a different ideal.exe for each of the idealized test cases), while wrf.exe is the same for all test cases.
If you want to remove all object files (except those in the external/ directory) and executables, type 'clean'.
Type 'clean -a' to remove built files in ALL directories, including configure.wrf. This is recommended if you make any mistake during the process, or if you have edited the Registry.EM file.
For any of the 2D test cases (labeled 2d in the case names), the serial or OpenMP (smpar) compile option must be used. For example, to compile and run the two-dimensional squall-line case, type
./compile em_squall2d_x >& compile.log
After a successful compilation, two executables are created in the main/ directory: ideal.exe and wrf.exe. These executables are linked to the corresponding test/case_name and run/ directories; cd to either directory to run the model.
It is good practice to save the entire compile output to a file, as above. If the executables are not created, this output is useful for diagnosing the compiler errors.
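One way to act on the saved log is a small helper along these lines (the helper name and the example paths are ours, not part of WRF):

```shell
# Hypothetical helper: report whether a compile produced executables,
# and if not, scan the saved log for compiler error messages.
check_build() {
    # $1 = directory holding the executables (e.g. main/)
    # $2 = saved compile output (e.g. compile.log)
    if ls "$1"/*.exe > /dev/null 2>&1; then
        echo "executables found:"
        ls "$1"/*.exe
    else
        echo "no executables -- check $2 for compiler errors:"
        grep -i "error" "$2" | head
    fi
}
# usage: check_build main compile.log
```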
For a real-data case, type
./compile em_real >& compile.log &
When the compile is successful, it creates three executables in the main/ directory: ndown.exe, real.exe and wrf.exe.
real.exe: WRF initialization for real-data cases
ndown.exe: one-way nesting
wrf.exe: WRF model integration
As with the idealized cases, these executables are linked to the test/em_real and run/ directories. cd to one of these two directories to run the model.
One may run the model executables in either the run/ directory, or the test/case_name directory. In either case, one should see executables, ideal.exe or real.exe (and ndown.exe), and wrf.exe, linked files (mostly for real-data cases), and one or more namelist.input files in the directory.
Hint: If you would like to run the model executables in a different directory, copy or link the files in test/em_* directory to that directory, and run from there.
Idealized, real data, restart run, two-way nested, and one-way nested runs are explained on the following pages. Read on.
Suppose the test case em_squall2d_x is compiled. To run it, type
cd test/em_squall2d_x
Edit the namelist.input file (see README.namelist in the WRFV3/run/ directory or its Web version) to change the length of integration, frequency of output, size of domain, time step, physics options, and other parameters.
If there is a script in the test case directory called run_me_first.csh, run it first by typing:
./run_me_first.csh
This links some physics data files that might be needed to run the case.
To run the initialization program, type
./ideal.exe
This program will typically read an input sounding file located in that directory, and generate an initial condition file, wrfinput_d01. None of the idealized cases requires a lateral boundary file, because of the boundary condition choices they use, such as the periodic option. If the job runs successfully, the last thing it prints should be: 'wrf: SUCCESS COMPLETE IDEAL INIT'.
To run the model and save the standard output to a file, type
./wrf.exe >& wrf.out &
or, for a 3D test case compiled with the MPI (dmpar) option,
mpirun -np 4 ./wrf.exe
Pairs of rsl.out.* and rsl.error.* files will appear with any MPI runs. These are standard out and error files. Note that the execution command for MPI runs may be different on different machines. Check the user manual.
If the model run is successful, the last thing printed in wrf.out or the rsl.*.0000 file should be: 'wrf: SUCCESS COMPLETE WRF'. Output files wrfout_d01_0001-01-01* and wrfrst* should be present in the run directory, depending on how the namelist variables are specified for output. The time stamp on these files originates from the start times in the namelist file.
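A quick way to apply this check from the shell is a small helper like the following (the helper name is ours; the log file is whatever you captured, e.g. wrf.out or rsl.out.0000):

```shell
# Hypothetical helper: check a WRF log for the normal-completion message.
run_completed() {
    grep -q "SUCCESS COMPLETE WRF" "$1" && echo yes || echo no
}
# usage: run_completed wrf.out
#        run_completed rsl.out.0000
```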
To make a real-data case run, cd to the working directory by typing
cd test/em_real (or cd run)
Start with a namelist.input template file in the directory and edit it to match your case.
Running a real-data case requires first successfully running the WRF Preprocessing System (WPS). Make sure the met_em.* files from WPS are present in the run directory (either link or copy the files):
ls -l ../../WPS/met_em*
ln -s ../../WPS/met_em* .
Make sure you edit the following variables in the namelist.input file:
num_metgrid_levels: number of incoming data levels (can be found by running the ncdump command on a met_em.d01.<date> file)
eta_levels: model eta levels from 1 to 0, if you choose to set them explicitly. If not, real will compute a nice set of eta levels.
Other options that assist the vertical interpolation are:
use_surface: whether to use surface input data
extrap_type: vertical extrapolation of non-temperature fields
t_extrap_type: vertical extrapolation for potential temperature
use_levels_below_ground: use levels below the input surface level
force_sfc_in_vinterp: force the vertical interpolation to use surface data
lowest_lev_from_sfc: place surface data in the lowest model level
p_top_requested: pressure top used in the model; default is 5000 Pa
interp_type: vertical interpolation method: linear in p (default) or log(p)
lagrange_order: vertical interpolation order: linear (default) or quadratic
zap_close_levels: allow surface data to be used if it is close to a constant pressure level
The other minimum set of namelist variables to edit is:
start_*, end_*: start and end times for data processing and model integration
interval_seconds: input data interval for boundary conditions
time_step: model time step; it can be set as large as 6*DX (DX in km)
e_we, e_sn, e_vert: domain dimensions in the west-east, south-north and vertical directions
dx, dy: model grid distances in meters
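As a quick sanity check, the time-step rule of thumb above can be computed in the shell (the 30-km dx is just an example):

```shell
# Rule of thumb from the text: time_step (seconds) up to 6 * dx (in km).
DX=30000                        # example dx in meters
DX_KM=`expr $DX / 1000`         # 30 km
TIME_STEP=`expr 6 \* $DX_KM`    # 180 s
echo "dx = ${DX_KM} km -> time_step <= ${TIME_STEP} s"
```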
To run the real-data initialization program compiled with the serial or OpenMP (smpar) option, type
./real.exe >& real.out
Successful completion of the job should print 'real_em: SUCCESS EM_REAL INIT' at the end of the real.out file, and produce the wrfinput_d01 and wrfbdy_d01 files. In a real-data case, both files are required.
Run the WRF model by typing
./wrf.exe
A successful run should produce one or several output files with names like wrfout_d01_yyyy-mm-dd_hh:mm:ss. For example, if you start the model at 1200 UTC, January 24, 2000, then your first output file should have the name:
wrfout_d01_2000-01-24_12:00:00
The time stamp on the file name is always the first time the output file is written. It is always good to check the times written to the output file by typing:
ncdump -v Times wrfout_d01_2000-01-24_12:00:00
You may have other wrfout files depending on the namelist options (how often you split the output files, and so on, using the namelist option frames_per_outfile). You may also create restart files if the restart frequency (restart_interval in the namelist.input file) is set within your total integration length. The restart files have names like
wrfrst_d01_yyyy-mm-dd_hh:mm:ss
The time stamp on a restart file is the time at which that restart file is valid.
For distributed-memory (DM) parallel systems, some form of the mpirun command is needed to run the executables. For example, on a Linux cluster, the commands to run the MPI code using 4 processors may look like:
mpirun -np 4 ./real.exe
mpirun -np 4 ./wrf.exe
On some IBMs, the commands for a batch job may be:
poe ./real.exe
poe ./wrf.exe
and for an interactive run:
poe ./real.exe -rmpool 1 -procs 4
poe ./wrf.exe -rmpool 1 -procs 4
(An interactive MPI job is not an option on the NCAR IBMs bluevista and blueice.)
A two-way nested run is a run in which multiple domains at different grid resolutions are run simultaneously and communicate with each other: the coarser domain provides boundary values for the nest, and the nest feeds its calculations back to the coarser domain. The model can handle multiple domains at the same nest level (no overlapping nests) and multiple nest levels (telescoping).
When preparing for a nested run, make sure that the code is compiled with basic nest options (option 1).
Most of the options for starting a nested run are handled through the namelist. All variables in the namelist.input file that have multiple columns of entries need to be edited with caution. Do start with a namelist template. The key namelist variables to modify are:
start_*, end_*: start and end simulation times for the nest
input_from_file: whether a nest requires an input file (e.g. wrfinput_d02). This is typically used for a real data case, since the nest input file contains nest topography and land information.
fine_input_stream: which fields from the nest input file are used in nest initialization. The fields to be used are defined in the Registry.EM. Typically they include static fields (such as terrain and landuse) and masked surface fields (such as skin temperature, soil moisture and temperature). This is useful for a nest starting at a later time than the coarse domain.
max_dom: the total number of domains to run. For example, if you want to have one coarse domain and one nest, set this variable to 2.
grid_id: domain identifier used in the wrfout naming convention. The coarsest grid must have a grid_id of 1.
parent_id: used to indicate the parent domain of a nest. The grid_id value of the parent is used.
i_parent_start/j_parent_start: lower-left corner starting indices of the nest domain in its parent domain. These parameters should be the same as in namelist.wps.
parent_grid_ratio: integer parent-to-nest grid size ratio. Typically an odd ratio is used in real-data applications.
parent_time_step_ratio: integer time-step ratio for the nest domain. It may be different from the parent_grid_ratio, though they are typically set the same.
feedback: this is the key setup that defines a two-way nested (or one-way nested) run. When feedback is on, the values of the coarse domain are overwritten by the values of the variables (the average of the cell values for mass points, and the average of the cell-face values for horizontal momentum points) in the nest at the coincident points. For masked fields, only the single point value at the collocated points is fed back. If parent_grid_ratio is even, an arbitrary choice of the southwest corner point value is used for feedback, which is why an odd parent_grid_ratio is preferred with this option. When feedback is off, the run is equivalent to a one-way nested run, since nest results are not reflected in the parent domain.
smooth_option: a smoothing option for the parent domain in the area of the nest, used if feedback is on. Three options are available: 0 = no smoothing; 1 = 1-2-1 smoothing; 2 = smoothing-desmoothing.
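To illustrate, a two-domain &domains record might look like the following. The values are patterned after the released namelist template and are illustrative only, not a recommendation:

```
&domains
 time_step               = 180,
 max_dom                 = 2,
 e_we                    = 74,    112,
 e_sn                    = 61,    97,
 e_vert                  = 28,    28,
 dx                      = 30000, 10000,
 dy                      = 30000, 10000,
 grid_id                 = 1,     2,
 parent_id               = 0,     1,
 i_parent_start          = 1,     31,
 j_parent_start          = 1,     17,
 parent_grid_ratio       = 1,     3,
 parent_time_step_ratio  = 1,     3,
 feedback                = 1,
 smooth_option           = 0,
 /
```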
For 3-D idealized cases, no nest input files are required. The key is the specification of the namelist.input file: the model interpolates all variables required in the nest from the coarse-domain fields. Set
input_from_file = F, F
Real Data Cases
For real-data cases, three input options are supported. The first is similar to running the idealized cases: all fields for the nest are interpolated from the coarse domain (input_from_file = T, F). The disadvantage of this option is obvious: one will not benefit from the higher-resolution static fields (such as terrain, landuse, and so on).
The second option is to set input_from_file = T for each domain, which means the nest will have its own wrfinput file to read in. The limitation of this option is that the nest can only start at the same time as the coarse domain.
The third option is, in addition to setting input_from_file = T for each domain, to also set fine_input_stream = 2 for each domain. Why a value of 2? This is based on the Registry setting, which designates certain fields to be read in from auxiliary input stream number 2. This option allows the nest initialization to use 3-D meteorological fields interpolated from the coarse domain, plus static fields and masked, time-varying surface fields from the nest wrfinput. It hence allows a nest to start at a later time than hour 0. Setting fine_input_stream = 0 is equivalent to the second option.
To run real.exe for a nested run, one must first run WPS to create data for all of the nests. Suppose WPS is run for a two-domain nest case; these files should then be present in the WPS directory:
met_em.d01.2000-01-24_12:00:00
met_em.d01.2000-01-24_18:00:00
met_em.d01.2000-01-25_00:00:00
met_em.d01.2000-01-25_06:00:00
met_em.d01.2000-01-25_12:00:00
met_em.d02.2000-01-24_12:00:00
Typically, only the first time period of the nest input file is needed to create the nest wrfinput file. Link or move all these files to the run directory.
Edit the namelist.input file and set the correct values for all relevant variables, described on the previous pages (in particular, set max_dom = 2, for the total number of domains to run), as well as physics options. Type the following to run:
./real.exe >& real.out
or
mpirun -np 4 ./real.exe
If successful, this will create all input files for coarse as well as nest domains. For a two-domain example, these are
wrfinput_d01
wrfinput_d02
wrfbdy_d01
To run WRF, type
./wrf.exe
or
mpirun -np 4 ./wrf.exe
If successful, the model should create wrfout files for both domain 1 and 2:
wrfout_d01_2000-01-24_12:00:00
wrfout_d02_2000-01-24_12:00:00
WRF supports two separate one-way nesting options. In this section, one-way nesting is defined as a finer-grid-resolution run made as a subsequent run after the coarser-grid-resolution run, where the ndown program is run between the two forecasts. The initial and lateral boundary conditions for the finer-grid run are obtained from the coarse-grid run, together with input from higher-resolution terrestrial fields (e.g. terrain, landuse, etc.) and masked surface fields (such as soil temperature and moisture). The program that performs this task is ndown.exe. Note that the use of this program requires the code to be compiled for nesting.
When one-way nesting is used, the coarse-to-fine grid ratio is only restricted to be an integer. An integer ratio less than or equal to 5 is recommended.
Making a one-way nested run involves these steps:
1) Generate a coarse-grid model output.
2) Make a temporary fine-grid initial condition file, wrfinput_d01 (note that only a single time period is required, valid at the desired start time of the fine-grid domain).
3) Run the program ndown, with the coarse-grid model output and the fine-grid initial condition, to generate the fine-grid initial and boundary conditions (similar to the output from the real.exe program).
4) Run the fine-grid simulation.
To compile, choose an option that supports nesting.
Step 1: Make a coarse grid run
This is no different from any single-domain WRF run, as described above.
Step 2: Make a temporary fine grid initial condition file
The purpose of this step is to ingest higher resolution terrestrial fields and corresponding land-water masked soil fields.
Before doing this step, WPS should be run for one coarse and one nest domain (this helps to line up the nest within the coarse domain), and for the one time period at which the one-way nested run is to start. This generates a WPS output file for the nested domain (domain 2): met_em.d02.<date>.
- Rename met_em.d02.* to met_em.d01.* for the single requested fine-grid start time. Move the original domain 1 WPS output files aside before you do this.
- Edit the namelist.input file for the fine-grid domain (pay attention to column 1 only) and enter the correct start time and grid dimensions.
- Run real.exe for this domain. This will produce a wrfinput_d01 file.
- Rename this wrfinput_d01 file to wrfndi_d02.
Step 3: Make the final fine-grid initial and boundary condition files
- Edit namelist.input again; this time two columns need to be edited: one for the dimensions of the coarse grid, and one for the fine grid. Note that the boundary condition frequency (namelist variable interval_seconds) is the time in seconds between the coarse-grid model output times.
- Run ndown.exe, with input from the coarse-grid wrfout file(s) and the wrfndi_d02 file generated in Step 2 above. This will produce the wrfinput_d02 and wrfbdy_d02 files.
Note that the program ndown may be run serially or with MPI, depending on the selected compile option. The ndown program must be built to support nesting, however. To run the program, type
./ndown.exe
or
mpirun -np 4 ./ndown.exe
Step 4: Make the fine-grid WRF run
- Rename wrfinput_d02 and wrfbdy_d02 to wrfinput_d01 and wrfbdy_d01, respectively.
- Edit namelist.input one more time; it is now for the fine-grid domain only.
- Run WRF for this grid.
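The file renames around ndown (Steps 2 and 4) can be sketched as small shell helpers; the function names are ours, and the real.exe/ndown.exe runs themselves are omitted:

```shell
# End of Step 2: the fine-grid wrfinput_d01 produced by real.exe
# becomes the wrfndi_d02 file that ndown.exe expects.
prepare_ndown_input() {
    mv wrfinput_d01 wrfndi_d02
}

# Start of Step 4: rename ndown's outputs so the fine grid can be
# run as an ordinary single-domain case.
finalize_fine_grid() {
    mv wrfinput_d02 wrfinput_d01
    mv wrfbdy_d02 wrfbdy_d01
}
```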
The figure on the next page summarizes the data flow for a one-way nested run using program ndown.
f. Moving-Nested Run
Two types of moving-nest runs are allowed in WRF. In the first option, a user specifies the nest movement in the namelist. The second option is to move the nest automatically, based on a vortex-following algorithm. This option is designed to follow the movement of a well-defined tropical cyclone.
To make a specified moving-nest run, select the right nesting compile option (option 'preset moves'). To run the model, only the coarse-grid input files are required; the nest initialization is defined from the coarse-grid data, and no nest input is used. In addition to the namelist options applied to a nested run, the following need to be added to the namelist section &domains:
num_moves: the total number of moves one can make in a model run. A move of any domain counts against this total. The maximum is currently set to 50, but it can be changed by changing MAX_MOVES in frame/module_driver_constants.F.
move_id: a list of nest IDs, one per move, indicating which domain moves for a given move.
move_interval: the number of minutes since the beginning of the run that a move is supposed to occur. The nest will move on the next time step after the specified instant of model time has passed.
move_cd_x, move_cd_y: distance in number of grid points and direction of the nest move (positive numbers indicate moving toward the east and north, negative numbers toward the west and south).
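As an illustration, a specified-move setup for a single nest (domain 2) might add entries like these to &domains; the move times and distances below are invented for the example:

```
&domains
 num_moves      = 4,
 move_id        = 2,   2,   2,   2,
 move_interval  = 60,  120, 150, 180,
 move_cd_x      = 1,   1,   0,   1,
 move_cd_y      = 1,   0,   1,   1,
 /
```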
To make an automatic moving-nest run, select the 'vortex-following' option when configuring. (Note that this compile supports only the auto-moving nest, and will not support the specified moving nest at the same time.) Again, no nest input is needed. To use values other than the defaults, add and edit the following namelist variables in the &domains section:
vortex_interval: how often the vortex position is calculated in minutes (default is 15 minutes).
max_vortex_speed: used with vortex_interval to compute the radius of search for the new vortex center position (default is 40 m/sec).
corral_dist: the distance, in number of coarse-grid cells, that the moving nest is allowed to approach the coarse-grid boundary (default is 8).
track_level: the pressure level (in Pa) where the vortex is tracked.
In both types of moving nest runs, the initial location of the nest is specified through i_parent_start and j_parent_start in the namelist.input file.
The automatic moving nest works best for a well-developed vortex.
Timing for main: time 2006-01-21_23:55:00 on domain 2: 4.91110 elapsed seconds.
Timing for main: time 2006-01-21_23:56:00 on domain 2: 4.73350 elapsed seconds.
Timing for main: time 2006-01-21_23:57:00 on domain 2: 4.72360 elapsed seconds.
Timing for main: time 2006-01-21_23:57:00 on domain 1: 19.55880 elapsed seconds.
Timing for Writing wrfout_d02_2006-01-22_00:00:00 for domain 2: 1.17970 elapsed seconds.
Timing for main: time 2006-01-22_00:00:00 on domain 1: 27.66230 elapsed seconds.
Timing for Writing wrfout_d01_2006-01-22_00:00:00 for domain 1: 0.60250 elapsed seconds.
5 points exceeded cfl=2 in domain 1 at time 4.200000
MAX AT i,j,k: 123 48 3 cfl,w,d(eta)= 4.165821
21 points exceeded cfl=2 in domain 1 at time 4.200000
MAX AT i,j,k: 123 49 4 cfl,w,d(eta)= 10.66290
When this happens, reducing the time step often helps.
If the model aborts very quickly, it is likely that either the computer memory is not large enough to run the specified configuration, or the input data have a serious problem. For the memory problem, try typing 'unlimit' to see if more memory can be obtained.
To check whether the input data are the problem, use ncview or another netCDF file browser.
Another frequent error is 'module_configure: initial_config: error reading namelist'. This message indicates errors or typos in the namelist.input file. Edit the namelist.input file with caution; if unsure, always start with an available template. The namelist record in which the read error occurs is given in the V3 error message, and it should help identify the error.
WRF offers multiple physics options that can be combined in any way. The options typically range from simple and efficient to sophisticated and more computationally costly, and from newly developed schemes to well tried schemes such as those in current operational models.
The choices vary with each major WRF release, but here we will outline those available in WRF Version 3.
a. Kessler scheme: A warm-rain (i.e. no ice) scheme used commonly in idealized cloud modeling studies (mp_physics = 1).
b. Lin et al. scheme: A sophisticated scheme that has ice, snow and graupel processes, suitable for real-data high-resolution simulations (2).
c. WRF Single-Moment 3-class scheme: A simple efficient scheme with ice and snow processes suitable for mesoscale grid sizes (3).
d. WRF Single-Moment 5-class scheme: A slightly more sophisticated version of (c) that allows for mixed-phase processes and super-cooled water (4).
e. Eta microphysics: The operational microphysics in NCEP models. A simple efficient scheme with diagnostic mixed-phase processes (5).
f. WRF Single-Moment 6-class scheme: A scheme with ice, snow and graupel processes suitable for high-resolution simulations (6).
g. Goddard microphysics scheme. A scheme with ice, snow and graupel processes suitable for high-resolution simulations (7). New in Version 3.0.
h. Thompson et al. scheme: A new scheme with ice, snow and graupel processes suitable for high-resolution simulations (8; replacing the version in 2.1)
i. Morrison double-moment scheme (10). Double-moment ice, snow, rain and graupel for cloud-resolving simulations. New in Version 3.0.
a. RRTM scheme: Rapid Radiative Transfer Model. An accurate scheme using look-up tables for efficiency. Accounts for multiple bands, trace gases, and microphysics species (ra_lw_physics = 1).
b. GFDL scheme: Eta operational radiation scheme. An older multi-band scheme with carbon dioxide, ozone and microphysics effects (99).
c. CAM scheme: from the CAM 3 climate model used in CCSM. Allows for aerosols and trace gases (3).
a. Dudhia scheme: Simple downward integration allowing efficiently for clouds and clear-sky absorption and scattering. When used in high-resolution simulations, sloping and shadowing effects may be considered (ra_sw_physics = 1).
b. Goddard shortwave: Two-stream multi-band scheme with ozone from climatology and cloud effects (2).
c. GFDL shortwave: Eta operational scheme. Two-stream multi-band scheme with ozone from climatology and cloud effects (99).
d. CAM scheme: from the CAM 3 climate model used in CCSM. Allows for aerosols and trace gases (3).
a. MM5 similarity: Based on Monin-Obukhov with the Carlson-Boland viscous sub-layer and standard similarity functions from look-up tables (sf_sfclay_physics = 1).
b. Eta similarity: Used in the Eta model. Based on Monin-Obukhov with the Zilitinkevich thermal roughness length and standard similarity functions from look-up tables (2).
c. Pleim-Xiu surface layer. (7). New in Version 3.0.
a. 5-layer thermal diffusion: Soil-temperature-only scheme, using five layers (sf_surface_physics = 1).
b. Noah Land Surface Model: Unified NCEP/NCAR/AFWA scheme with soil temperature and moisture in four layers, fractional snow cover and frozen soil physics (2).
- Urban canopy model (ucmcall): 3-category UCM option
c. RUC Land Surface Model: RUC operational scheme with soil temperature and moisture in six layers, multi-layer snow and frozen soil physics (3).
d. Pleim-Xiu Land Surface Model. Two-layer scheme with vegetation and sub-grid tiling (7). New in Version 3.0.
a. Yonsei University scheme: Non-local-K scheme with explicit entrainment layer and parabolic K profile in unstable mixed layer (bl_pbl_physics = 1).
b. Mellor-Yamada-Janjic scheme: Eta operational scheme. One-dimensional prognostic turbulent kinetic energy scheme with local vertical mixing (2).
c. MRF scheme: Older version of (a) with implicit treatment of entrainment layer as part of non-local-K mixed layer (99).
d. ACM PBL. Asymmetric Convective Model with non-local upward mixing and local downward mixing (7). New in Version 3.0.
a. Kain-Fritsch scheme: Deep and shallow convection sub-grid scheme using a mass flux approach with downdrafts and CAPE removal time scale (cu_physics = 1).
b. Betts-Miller-Janjic scheme. Operational Eta scheme. Column moist adjustment scheme relaxing towards a well-mixed profile (2).
c. Grell-Devenyi ensemble scheme: Multi-closure, multi-parameter, ensemble method with typically 144 sub-grid members (3).
d. Grell 3d ensemble cumulus scheme. Scheme for higher resolution domains allowing for subsidence in neighboring columns (5). New in Version 3.0.
e. Old Kain-Fritsch scheme: Deep convection scheme using a mass flux approach with downdrafts and CAPE removal time scale (99).
Diffusion in WRF is categorized under two parameters, the diffusion option and the K option. The diffusion option selects how the derivatives used in diffusion are calculated, and the K option selects how the K coefficients are calculated. Note that when a PBL option is selected, vertical diffusion is done by the PBL scheme, and not by the diffusion scheme.
a. Simple diffusion: Gradients are simply taken along coordinate surfaces (diff_opt = 1).
b. Full diffusion: Gradients use full metric terms to more accurately compute horizontal gradients in sloped coordinates (diff_opt = 2).
Note that when using a PBL scheme, only options (a) and (d) below make sense, because (b) and (c) are designed for 3d diffusion.
a. Constant: K is specified by namelist values for horizontal and vertical diffusion (km_opt = 1).
b. 3d TKE: A prognostic equation for turbulent kinetic energy is used, and K is based on TKE (km_opt = 2).
c. 3d Deformation: K is diagnosed from 3d deformation and stability following a Smagorinsky approach (km_opt = 3).
d. 2d Deformation: K for horizontal diffusion is diagnosed from just horizontal deformation. The vertical diffusion is assumed to be done by the PBL scheme (km_opt = 4).
6th-order horizontal hyperdiffusion (del^6) on all variables to act as a selective short-wave numerical noise filter. Can be used in conjunction with diff_opt.
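As an illustrative sketch (not a mandate), the diffusion choices above map onto the &dynamics record of namelist.input; the values below follow the recommendations for a real-data case:

```
&dynamics
 diff_opt        = 1,
 km_opt          = 4,
 khdif           = 0,
 kvdif           = 0,
 diff_6th_opt    = 2,
 diff_6th_factor = 0.12,
/
```

Here diff_opt = 1 takes gradients along coordinate surfaces, km_opt = 4 diagnoses the horizontal K from 2d deformation while vertical mixing is left to the PBL scheme, and diff_6th_opt = 2 adds the del^6 filter in its monotonic form.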
These are independently activated choices.
a. Upper Damping: Either a layer of increased diffusion (damp_opt =1) or a Rayleigh relaxation layer (2) or an implicit gravity-wave damping layer (3, new in Version 3.0), can be added near the model top to control reflection from the upper boundary.
b. w-Damping: For operational robustness, vertical motion can be damped to prevent the model from becoming unstable with locally large vertical velocities. This only affects strong updraft cores, so it has very little impact on results otherwise.
c. Divergence Damping: Controls horizontally propagating sound waves.
d. External Mode Damping: Controls upper-surface (external) waves.
e. Time Off-centering (epssm): Controls vertically propagating sound waves.
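These damping choices correspond to &dynamics namelist variables described in the table later in this chapter; a sketch for a real-data run using the implicit gravity-wave damping layer (values illustrative, drawn from the typical settings listed below) might be:

```
&dynamics
 damp_opt  = 3,
 zdamp     = 5000.,
 dampcoef  = 0.05,
 w_damping = 1,
 smdiv     = 0.1,
 epssm     = 0.1,
/
```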
Advection Options
a. Horizontal advection orders for momentum (h_mom_adv_order) and scalar (h_sca_adv_order) can be 2nd to 6th, with 5th order being the recommended one.
b. Vertical advection orders for momentum (v_mom_adv_order) and scalar (v_sca_adv_order) can be 2nd to 6th, with 3rd order being the recommended one.
c. Positive-definite advection option can be applied to moisture (pd_moist = .true.), scalar (pd_scalar), chemistry variables (pd_chem) and tke (pd_tke).
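A sketch of the recommended advection settings, as they would appear in the &dynamics record (5th/3rd orders per the recommendation above; the positive-definite switches shown are optional):

```
&dynamics
 h_mom_adv_order = 5,
 v_mom_adv_order = 3,
 h_sca_adv_order = 5,
 v_sca_adv_order = 3,
 pd_moist        = .true.,
 pd_scalar       = .true.,
/
```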
Other Dynamics Options
a. The model can be run hydrostatically by setting the non_hydrostatic switch to .false.
b. The Coriolis term can be applied to the wind perturbation only (pert_coriolis = .true.; idealized only).
c. For diff_opt = 2 only, vertical diffusion may act on full fields, not just on the perturbation from the 1D base profile (mix_full_fields = .true.; idealized only).
Lateral Boundary Condition Options
a. Periodic (periodic_x / periodic_y): for idealized cases.
b. Open (open_xs, open_xe, open_ys, open_ye): for idealized cases.
c. Symmetric (symmetric_xs, symmetric_xe, symmetric_ys, symmetric_ye): for idealized cases.
d. Specified (specified): for real-data cases. The first row and column are specified with external model values (spec_zone = 1, and it should not change). The rows and columns in relax_zone have values blended from external model and WRF. The value of relax_zone may be changed, as long as spec_bdy_width = spec_zone + relax_zone.
spec_exp: exponential multiplier for the relaxation zone ramp, used with the specified boundary condition. 0. = linear ramp (default); 0.33 = ~3*dx exponential decay factor. May be useful for long simulations.
e. Nested (nested): for real and idealized cases.
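For example, a real-data run with one nest would combine the specified and nested conditions in the &bdy_control record (values below are the defaults described above; the second column of each list applies to domain 2):

```
&bdy_control
 spec_bdy_width = 5,
 spec_zone      = 1,
 relax_zone     = 4,
 specified      = .true.,  .false.,
 nested         = .false., .true.,
/
```

Note that spec_bdy_width must equal spec_zone + relax_zone.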
The following is a description of namelist variables. The variables that are a function of nests are indicated by (max_dom) following the variable. Also see README.namelist file in WRFV3/run/ directory.
Variable Names |
Value |
Description |
&time_control |
|
Time control |
run_days |
1 |
run time in days |
run_hours |
0 |
run time in hours |
run_minutes |
0 |
run time in minutes |
run_seconds |
0 |
run time in seconds |
start_year (max_dom) |
2001 |
four digit year of starting time |
start_month (max_dom) |
06 |
two digit month of starting time |
start_day (max_dom) |
11 |
two digit day of starting time |
start_hour (max_dom) |
12 |
two digit hour of starting time |
start_minute (max_dom) |
00 |
two digit minute of starting time |
start_second (max_dom) |
00 |
two digit second of starting time |
end_year (max_dom) |
2001 |
four digit year of ending time |
end_month (max_dom) |
06 |
two digit month of ending time |
end_day (max_dom) |
12 |
two digit day of ending time |
end_hour (max_dom) |
12 |
two digit hour of ending time |
end_minute (max_dom) |
00 |
two digit minute of ending time |
end_second (max_dom) |
00 |
two digit second of ending time |
interval_seconds |
10800 |
time interval between incoming real data, which will be the interval of the lateral boundary condition files (for real only) |
input_from_file (max_dom) |
T (logical) |
whether a nested run will have input files for domains other than 1 |
fine_input_stream (max_dom) |
|
selected fields from nest input |
|
0 |
all fields from nest input are used |
|
2 |
only nest input specified in input stream 2 (defined in the Registry) is used |
history_interval (max_dom) |
60 |
history output file interval in minutes (integer only) |
history_interval_mo (max_dom) |
1 |
history output file interval in months (integer); used as alternative to history_interval |
history_interval_d (max_dom) |
1 |
history output file interval in days (integer); used as alternative to history_interval |
history_interval_h (max_dom) |
1 |
history output file interval in hours (integer); used as alternative to history_interval |
history_interval_m (max_dom) |
1 |
history output file interval in minutes (integer); used as alternative to history_interval and is equivalent to history_interval |
history_interval_s (max_dom) |
1 |
history output file interval in seconds (integer); used as alternative to history_interval |
frames_per_outfile (max_dom) |
1 |
output times per history output file, used to split output files into smaller pieces |
restart |
F (logical) |
whether this run is a restart run |
restart_interval |
1440 |
restart output file interval in minutes |
reset_simulation_start |
F |
whether to overwrite simulation_start_date with forecast start time |
auxinput1_inname |
“met_em.d<domain>.<date>” |
input from WPS (this is the default) |
auxinput4_inname |
“wrflowinp_d<domain>” |
input for lower bdy file, works with sst_update = 1 |
auxinput4_interval |
360 |
file interval in minutes for lower bdy file |
io_form_history |
2 |
2 = netCDF; 102 = split netCDF files one per processor (no supported post-processing software for split files) |
|
1 |
binary format (no supported post-processing software avail) |
|
4 |
PHDF5 format (no supported post-processing software avail) |
|
5 |
GRIB 1 |
|
10 |
GRIB 2 |
io_form_restart |
2 |
2 = netCDF; 102 = split netCDF files one per processor (must restart with the same number of processors) |
io_form_input |
2 |
2 = netCDF |
io_form_boundary |
2 |
netCDF format |
debug_level |
0 |
values of 50, 100, 200, 300 give increasingly detailed prints |
auxhist2_outname |
"rainfall_d<domain>" |
file name for extra output; if not specified, auxhist2_d<domain>_<date> will be used. Also note that writing variables to an output stream other than the history file requires a Registry.EM file change |
auxhist2_interval |
10 |
interval in minutes |
io_form_auxhist2 |
2 |
output in netCDF |
frame_per_auxhist4 (max_dom) |
|
output times per output file |
auxinput11_interval |
|
designated for obs nudging input |
auxinput11_end_h |
|
designated for obs nudging input |
nocolons |
.false. |
replace : with _ in output file names |
write_input |
t |
write input-formatted data as output for 3DVAR application |
inputout_interval |
180 |
interval in minutes when writing input-formatted data |
input_outname |
“wrf_3dvar_input_d<domain>_<date>” |
Output file name from 3DVAR |
inputout_begin_y |
0 |
beginning year to write 3DVAR data |
inputout_begin_mo |
0 |
beginning month to write 3DVAR data |
inputout_begin_d |
0 |
beginning day to write 3DVAR data |
inputout_begin_h |
3 |
beginning hour to write 3DVAR data |
inputout_begin_m |
0 |
beginning minute to write 3DVAR data |
inputout_begin_s |
0 |
beginning second to write 3DVAR data |
inputout_end_y |
0 |
ending year to write 3DVAR data |
inputout_end_mo |
0 |
ending month to write 3DVAR data |
inputout_end_d |
0 |
ending day to write 3DVAR data |
inputout_end_h |
12 |
ending hour to write 3DVAR data |
inputout_end_m |
0 |
ending minute to write 3DVAR data |
inputout_end_s |
0 |
ending second to write 3DVAR data. |
|
|
The above example shows that the input-formatted data are output from hour 3 to hour 12 at a 180-min interval. |
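Collected into one place, the hour-3-to-hour-12 example described in the table corresponds to this &time_control fragment (a sketch using only the variables and values listed above):

```
&time_control
 write_input       = .true.,
 inputout_interval = 180,
 inputout_begin_h  = 3,
 inputout_end_h    = 12,
/
```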
|
|
|
&domains |
|
domain definition: dimensions, nesting parameters |
time_step |
60 |
time step for integration in integer seconds (recommended 6*dx in km for a typical case) |
time_step_fract_num |
0 |
numerator for fractional time step |
time_step_fract_den |
1 |
denominator for fractional time step. Example: to use 60.3 sec as the time step, set time_step = 60, time_step_fract_num = 3, and time_step_fract_den = 10 |
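As a namelist fragment, the 60.3-second example from the table reads:

```
&domains
 time_step           = 60,
 time_step_fract_num = 3,
 time_step_fract_den = 10,
/
```

The effective time step is time_step + time_step_fract_num / time_step_fract_den = 60 + 3/10 = 60.3 s.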
max_dom |
1 |
number of domains - set it to > 1 if it is a nested run |
s_we (max_dom) |
1 |
start index in x (west-east) direction (leave as is) |
e_we (max_dom) |
91 |
end index in x (west-east) direction (staggered dimension) |
s_sn (max_dom) |
1 |
start index in y (south-north) direction (leave as is) |
e_sn (max_dom) |
82 |
end index in y (south-north) direction (staggered dimension) |
s_vert (max_dom) |
1 |
start index in z (vertical) direction (leave as is) |
e_vert (max_dom) |
28 |
end index in z (vertical) direction (staggered dimension - this refers to full levels). Most variables are on unstaggered levels. Vertical dimensions need to be the same for all nests. |
num_metgrid_levels |
40 |
number of vertical levels in WPS output: type ncdump -h to find out |
eta_levels |
1.0, 0.99,…0.0 |
model eta levels from 1 to 0. If not given, real will provide a set of levels |
force_sfc_in_vinterp |
1 |
use surface data as lower boundary when interpolating through this many eta levels |
p_top_requested |
5000 |
p_top to use in the model; must be available in WPS data |
interp_type |
1 |
vertical interpolation; 1: linear in pressure; 2: linear in log(pressure) |
extrap_type |
2 |
vertical extrapolation of non-temperature variables. 1: extrapolate using the two lowest levels; 2: use lowest level as constant below ground |
t_extrap_type |
2 |
vertical extrapolation for potential temperature. 1: isothermal; 2: -6.5 K/km lapse rate for temperature 3: constant theta |
use_levels_below_ground |
.true. |
in vertical interpolation, whether to use levels below the input surface level. true: use input isobaric levels below the input surface level; false: extrapolate when the WRF location is below the input surface level |
use_surface |
.true. |
whether to use input surface level data in vertical interpolation. true: use input surface data; false: do not use input surface data |
lagrange_order |
1 |
vertical interpolation order; 1: linear; 2: quadratic |
lowest_lev_from_sfc |
.false. |
T = use surface values for the lowest eta (u,v,t,q); F = use traditional interpolation |
dx (max_dom) |
10000 |
grid length in x direction, unit in meters |
dy (max_dom) |
10000 |
grid length in y direction, unit in meters |
ztop (max_dom) |
19000. |
height in meters; used to define model top for idealized cases |
grid_id (max_dom) |
1 |
domain identifier |
parent_id (max_dom) |
0 |
id of the parent domain |
i_parent_start (max_dom) |
1 |
starting LLC I-indices from the parent domain |
j_parent_start (max_dom) |
1 |
starting LLC J-indices from the parent domain |
parent_grid_ratio (max_dom) |
1 |
parent-to-nest domain grid size ratio: for real-data cases the ratio has to be odd; for idealized cases, the ratio can be even if feedback is set to 0. |
parent_time_step_ratio (max_dom) |
1 |
parent-to-nest time step ratio; it can be different from the parent_grid_ratio |
feedback |
1 |
feedback from nest to its parent domain; 0 = no feedback |
smooth_option |
0 |
smoothing option for parent domain, used only with feedback option on. 0: no smoothing; 1: 1-2-1 smoothing; 2: smoothing-desmoothing |
(options for preset moving nest) |
||
num_moves |
2, |
total number of moves for all domains |
move_id (max_moves) |
2,2, |
a list of nest domain id's, one per move |
move_interval (max_moves) |
60,120, |
time in minutes since the start of this domain |
move_cd_x (max_moves) |
1,-1, |
the number of parent domain grid cells to move in i direction |
move_cd_y (max_moves) |
-1,1, |
the number of parent domain grid cells to move in j direction (positive in increasing i/j directions, and negative in decreasing i/j directions). Only 1, 0 and -1 are permitted. |
(options for automatic moving nest) |
||
vortex_interval (max_dom) |
15 |
how often the new vortex position is computed |
max_vortex_speed (max_dom) |
40 |
unit in m/sec; used to compute the search radius for the new vortex position |
corral_dist (max_dom) |
8 |
how many coarse grid cells the moving nest is allowed to get near the coarse grid boundary |
(options for adaptive time step) |
||
use_adaptive_time_step |
.false. |
whether to use adaptive time step |
step_to_output_time |
.true. |
whether to modify the time steps so that the exact history time is reached |
target_cfl |
1.2 |
if vertical and horizontal CFL <= this value, then time step is increased |
max_step_increase_pct |
5 |
percentage of previous time step to increase, if the max CFL is <= target_cfl |
starting_time_step |
-1 |
flag -1 implies 6*dx is used to start the model. Any positive integer number specifies the time step the model will start with. Note that when use_adaptive_time_step is true, the value specified for time_step is ignored. |
max_time_step |
-1 |
flag -1 implies the maximum time step is 3*starting_time_step. Any positive integer specifies the maximum time step |
min_time_step |
-1 |
flag -1 implies the minimum time step is 0.5*starting_time_step. Any positive integer specifies the minimum time step |
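A sketch of the adaptive time step options above, gathered into a &domains fragment (these are the default/flag values listed in the table; the -1 flags let the model derive the starting, maximum, and minimum steps from 6*dx):

```
&domains
 use_adaptive_time_step = .true.,
 step_to_output_time    = .true.,
 target_cfl             = 1.2,
 max_step_increase_pct  = 5,
 starting_time_step     = -1,
 max_time_step          = -1,
 min_time_step          = -1,
/
```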
|
|
|
(options to control parallel computing) |
||
tile_sz_x |
0 |
number of points in tile x direction |
tile_sz_y |
0 |
number of points in tile y direction; can be determined automatically |
numtiles |
1 |
number of tiles per patch (alternative to above two items) |
nproc_x |
-1 |
number of processors in x for decomposition |
nproc_y |
-1 |
number of processors in y for decomposition. -1: code will do automatic decomposition; > 1 for both: the specified values will be used for decomposition |
|
|
|
&physics |
|
Physics options |
mp_physics (max_dom) |
|
microphysics option |
|
0 |
no microphysics |
|
1 |
Kessler scheme |
|
2 |
Lin et al. scheme |
|
3 |
WSM 3-class simple ice scheme |
|
4 |
WSM 5-class scheme |
|
5 |
Ferrier (new Eta) microphysics |
|
6 |
WSM 6-class graupel scheme |
|
7 |
Goddard GCE scheme (also use gsfcgce_hail and gsfcgce_2ice) |
|
8 |
Thompson graupel scheme |
|
10 |
Morrison 2-moment scheme |
mp_zero_out |
|
For non-zero mp_physics options, this keeps moisture variables above a threshold value >= 0. |
|
0 |
no action taken, no adjustment to any moisture field |
|
1 |
except for Qv, all other moisture arrays are set to zero if they fall below a critical value |
|
2 |
Qv >= 0 and all other moisture arrays are set to zero if they fall below a critical value |
mp_zero_out_thresh |
1.e-8 |
critical value for moisture variable threshold, below which moisture arrays (except for Qv) are set to zero (unit: kg/kg) |
gsfcgce_hail |
0 |
0: running gsfcgce scheme with graupel 1: running gsfcgce scheme with hail |
gsfcgce_2ice |
0 |
0: running gsfcgce scheme with snow, ice and graupel/hail; 1: running gsfcgce scheme with only ice and snow; 2: running gsfcgce scheme with only ice and graupel (used only in very extreme situations) |
no_mp_heating |
0 |
switch to turn off latent heating from mp 0: normal 1: turn off latent heating from a microphysics scheme |
ra_lw_physics (max_dom) |
|
longwave radiation option |
|
0 |
no longwave radiation |
|
1 |
RRTM scheme |
|
3 |
CAM scheme |
|
99 |
GFDL (Eta) longwave (semi-supported) |
ra_sw_physics (max_dom) |
|
shortwave radiation option |
|
0 |
no shortwave radiation |
|
1 |
Dudhia scheme |
|
2 |
Goddard shortwave |
|
3 |
CAM scheme |
|
99 |
GFDL (Eta) shortwave (semi-supported) |
radt (max_dom) |
30 |
minutes between radiation physics calls. Recommend 1 minute per km of dx (e.g. 10 for 10 km grid); use the same value for all nests |
co2tf |
1 |
CO2 transmission function flag for GFDL radiation only. Set it to 1 for ARW, which allows generation of CO2 function internally |
cam_abs_freq_s |
21600 |
CAM clear sky longwave absorption calculation frequency (recommended minimum value to speed scheme up) |
levsiz |
59 |
for CAM radiation input ozone levels |
paerlev |
29 |
for CAM radiation input aerosol levels |
cam_abs_dim1 |
4 |
for CAM absorption save array |
cam_abs_dim2 |
same as e_vert |
for CAM 2nd absorption save array |
sf_sfclay_physics (max_dom) |
|
surface-layer option |
|
0 |
no surface-layer |
|
1 |
Monin-Obukhov scheme |
|
2 |
Monin-Obukhov (Janjic Eta) scheme |
|
3 |
NCEP GFS scheme (NMM only) |
|
7 |
Pleim-Xiu (ARW only); only tested with the Pleim-Xiu surface and ACM2 PBL schemes |
sf_surface_physics (max_dom) |
|
land-surface option (set before running real; also set correct num_soil_layers) |
|
0 |
no surface temp prediction |
|
1 |
thermal diffusion scheme |
|
2 |
unified Noah land-surface model |
|
3 |
RUC land-surface model |
|
7 |
Pleim-Xiu scheme (ARW only) |
bl_pbl_physics (max_dom) |
|
boundary-layer option |
|
0 |
no boundary-layer |
|
1 |
YSU scheme |
|
2 |
Mellor-Yamada-Janjic (Eta) TKE scheme |
|
3 |
NCEP GFS scheme (NMM only) |
|
7 |
ACM2 (Pleim) scheme |
|
99 |
MRF scheme (to be removed) |
bldt (max_dom) |
0 |
minutes between boundary-layer physics calls. 0 = call every time step |
cu_physics (max_dom) |
|
cumulus option |
|
0 |
no cumulus |
|
1 |
Kain-Fritsch (new Eta) scheme |
|
2 |
Betts-Miller-Janjic scheme |
|
3 |
Grell-Devenyi ensemble scheme |
|
4 |
Simplified Arakawa-Schubert (NMM only) |
|
5 |
New Grell scheme (G3) |
|
99 |
previous Kain-Fritsch scheme |
cudt |
0 |
minutes between cumulus physics calls. 0 = call every time step |
isfflx |
1 |
heat and moisture fluxes from the surface (only works for sf_sfclay_physics = 1) 1 = with fluxes from the surface 0 = no flux from the surface |
ifsnow |
0 |
snow-cover effects (only works for sf_surface_physics = 1) 1 = with snow-cover effect 0 = without snow-cover effect |
icloud |
1 |
cloud effect to the optical depth in radiation (only works for ra_sw_physics = 1 and ra_lw_physics = 1) 1 = with cloud effect 0 = without cloud effect |
swrad_scat |
1. |
scattering tuning parameter (the default value of 1. corresponds to 1.e-5 m2/kg) |
surface_input_source |
1,2 |
where landuse and soil category data come from: 1 = WPS/geogrid; 2 = GRIB data from another model (only if arrays VEGCAT/SOILCAT exist) |
num_soil_layers |
|
number of soil layers in land surface model (set in real) |
|
5 |
thermal diffusion scheme for temp only |
|
4 |
Noah land-surface model |
|
6 |
RUC land-surface model |
|
2 |
Pleim-Xiu land-surface model |
pxlsm_smois_init (max_dom) |
1 |
PX LSM soil moisture initialization option 0: from analysis 1: from LANDUSE.TBL (SLMO) |
ucmcall (max_dom) |
0 |
activate urban canopy model (in Noah LSM only) (0=no, 1=yes) |
maxiens |
1 |
Grell-Devenyi only |
maxens |
3 |
G-D only |
maxens2 |
3 |
G-D only |
maxens3 |
16 |
G-D only |
ensdim |
144 |
G-D only. These are the recommended numbers. If you would like to use any other number, consult the code and know what you are doing. |
seaice_threshold |
271. |
if tsk < seaice_threshold at a water point: with the 5-layer slab scheme, the point is set to a land point with permanent ice; with the Noah scheme, the point is set to a land point with permanent ice, temperatures are set from 3 m to the surface, and smois and sh2o are set |
sst_update |
|
option to use time-varying SST, seaice, vegetation fraction, and albedo during a model simulation (set before running real) |
|
0 |
no SST update |
|
1 |
real.exe will create wrflowinp_d01 file at the same time interval as the available input data. To use it in wrf.exe, add auxinput4_inname = "wrflowinp_d<domain>", auxinput4_interval in namelist section &time_control |
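Putting those pieces together, enabling time-varying SST requires entries in two namelist records, for example (the 360-minute interval here is illustrative; it should match the interval of your input data):

```
&physics
 sst_update = 1,
/
&time_control
 auxinput4_inname   = "wrflowinp_d<domain>",
 auxinput4_interval = 360,
/
```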
usemonalb |
.false. |
whether to use monthly albedo map instead of LANDUSE.TBL values. Recommended for sst_update = 1 |
slope_rad |
0 |
slope effects for ra_sw_physics = 1 (1=on, 0=off) |
topo_shading |
0 |
neighboring-point shadow effects for ra_sw_physics = 1 (1=on, 0=off) |
shadlen |
25000. |
max shadow length in meters for topo_shading = 1 |
omlcall |
0 |
simple ocean mixed layer model (1=on, 0=off) |
oml_hml0 |
50. |
initial ocean mixed layer depth (m), constant everywhere |
oml_gamma |
0.14 |
lapse rate in deep water for oml (K m-1) |
isftcflx |
0 |
alternative Ck, Cd for tropical storm application (1=on, 0=off) |
|
|
|
&fdda |
|
for grid and obs nudging |
(for grid nudging) |
|
|
grid_fdda (max_dom) |
1 |
grid-nudging on (=0 off) for each domain |
gfdda_inname |
“wrffdda_d<domain>” |
Defined name in real |
gfdda_interval (max_dom) |
360 |
Time interval (min) between analysis times |
gfdda_end_h (max_dom) |
6 |
Time (h) to stop nudging after start of forecast |
io_form_gfdda |
2 |
Analysis format (2 = netcdf) |
fgdt (max_dom) |
0 |
Calculation frequency (in minutes) for analysis nudging. 0 = every time step, and this is recommended |
if_no_pbl_nudging_uv (max_dom) |
0 |
1= no nudging of u and v in the pbl; 0= nudging in the pbl |
if_no_pbl_nudging_t (max_dom) |
0 |
1= no nudging of temp in the pbl; 0= nudging in the pbl |
if_no_pbl_nudging_q (max_dom) |
0 |
1= no nudging of qvapor in the pbl; 0= nudging in the pbl |
if_zfac_uv (max_dom) |
0 |
0= nudge u and v all layers, 1= limit nudging to levels above k_zfac_uv |
k_zfac_uv |
10 |
model level below which nudging is switched off for u and v |
if_zfac_t (max_dom) |
0 |
0= nudge temp all layers, 1= limit nudging to levels above k_zfac_t |
k_zfac_t |
10 |
model level below which nudging is switched off for temp |
if_zfac_q (max_dom) |
0 |
0= nudge qvapor all layers, 1= limit nudging to levels above k_zfac_q |
k_zfac_q |
10 |
model level below which nudging is switched off for qvapor |
guv (max_dom) |
0.0003 |
nudging coefficient for u and v (sec-1) |
gt (max_dom) |
0.0003 |
nudging coefficient for temp (sec-1) |
gq (max_dom) |
0.0003 |
nudging coefficient for qvapor (sec-1) |
if_ramping |
0 |
0= nudging ends as a step function, 1= ramping nudging down at end of period |
dtramp_min |
60. |
time (min) for ramping function, 60.0=ramping starts at last analysis time, -60.0=ramping ends at last analysis time |
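A sketch of a grid-nudging setup assembled from the entries above (all values are the example values listed in this table):

```
&fdda
 grid_fdda      = 1,
 gfdda_inname   = "wrffdda_d<domain>",
 gfdda_interval = 360,
 gfdda_end_h    = 6,
 io_form_gfdda  = 2,
 fgdt           = 0,
 guv            = 0.0003,
 gt             = 0.0003,
 gq             = 0.0003,
 if_ramping     = 0,
/
```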
(for obs nudging) |
|
|
obs_nudge_opt (max_dom) |
1 |
obs-nudging fdda on (=0 off) for each domain; also need to set auxinput11_interval and auxinput11_end_h in time_control namelist |
max_obs |
150000 |
max number of observations used on a domain during any given time window |
fdda_start |
0. |
obs nudging start time in minutes |
fdda_end |
180. |
obs nudging end time in minutes |
obs_nudge_wind (max_dom) |
1 |
whether to nudge wind: (=0 off) |
obs_coef_wind (max_dom) |
6.e-4 |
nudging coefficient for wind, unit: s-1 |
obs_nudge_temp (max_dom) |
1 |
whether to nudge temperature: (=0 off) |
obs_coef_temp (max_dom) |
6.e-4 |
nudging coefficient for temp, unit: s-1 |
obs_nudge_mois (max_dom) |
1 |
whether to nudge water vapor mixing ratio: (=0 off) |
obs_coef_mois (max_dom) |
6.e-4 |
nudging coefficient for water vapor mixing ratio, unit: s-1 |
obs_nudge_pstr (max_dom) |
0 |
whether to nudge surface pressure (not used) |
obs_coef_pstr (max_dom) |
0. |
nudging coefficient for surface pressure, unit: s-1 (not used) |
obs_rinxy |
200. |
horizontal radius of influence in km |
obs_rinsig |
0.1 |
vertical radius of influence in eta |
obs_twindo (max_dom) |
0.666667 |
half-period time window over which an observation will be used for nudging; the unit is in hours |
obs_npfi |
10 |
freq in coarse grid timesteps for diag prints |
obs_ionf (max_dom) |
2 |
freq in coarse grid timesteps for obs input and err calc |
obs_idynin |
0 |
for dynamic initialization using a ramp-down function to gradually turn off the FDDA before the pure forecast (=1 on) |
obs_dtramp |
40. |
time period in minutes over which the nudging is ramped down from one to zero. |
obs_nobs_prt (max_dom) |
10 |
number of current obs to print grid coord. info. |
obs_ipf_in4dob |
.true. |
print obs input diagnostics (=.false. off) |
obs_ipf_errob |
.true. |
print obs error diagnostics (=.false. off) |
obs_ipf_nudob |
.true. |
print obs nudge diagnostics (=.false. off) |
obs_ipf_init |
.true. |
enable obs init warning messages |
|
|
|
&dynamics |
|
Diffusion, damping options, advection options |
rk_ord |
|
time-integration scheme option: |
|
2 |
Runge-Kutta 2nd order |
|
3 |
Runge-Kutta 3rd order (recommended) |
diff_opt |
|
turbulence and mixing option: |
|
0 |
no turbulence or explicit spatial numerical filters (km_opt is ignored) |
|
1 |
evaluates 2nd-order diffusion term on coordinate surfaces. Uses kvdif for vertical diffusion unless a PBL option is used. May be used with km_opt = 1 and 4 (recommended for real-data cases) |
|
2 |
evaluates mixing terms in physical space (stress form) (x, y, z). The turbulence parameterization is chosen by specifying km_opt |
km_opt |
|
eddy coefficient option |
|
1 |
constant (use khdif and kvdif) |
|
2 |
1.5 order TKE closure (3D) |
|
3 |
Smagorinsky first order closure (3D) Note: options 2 and 3 are not recommended for dx > 2 km |
|
4 |
horizontal Smagorinsky first order closure (recommended for real-data case) |
diff_6th_opt (max_dom) |
0 |
6th-order numerical diffusion 0 = no 6th-order diffusion (default) 1 = 6th-order numerical diffusion 2 = 6th-order numerical diffusion but prohibit up-gradient diffusion |
diff_6th_factor (max_dom) |
0.12 |
6th-order numerical diffusion non-dimensional rate (max value 1.0 corresponds to complete removal of 2dx wave in one timestep) |
damp_opt |
|
upper level damping flag |
|
0 |
without damping |
|
1 |
with diffusive damping; may be used for real-data cases (dampcoef nondimensional, ~ 0.01 - 0.1) |
|
2 |
with Rayleigh damping (dampcoef inverse time scale [1/s], e.g. 0.003) |
|
3 |
with w-Rayleigh damping (dampcoef inverse time scale [1/s], e.g. 0.05; for real-data cases) |
zdamp (max_dom) |
5000 |
damping depth (m) from model top |
dampcoef (max_dom) |
0. |
damping coefficient (see damp_opt) |
w_damping |
|
vertical velocity damping flag (for operational use) |
|
0 |
without damping |
|
1 |
with damping |
base_pres |
100000. |
Base state surface pressure (Pa), real only. Do not change. |
base_temp |
290. |
Base state sea level temperature (K), real only. |
base_lapse |
50. |
real-data ONLY, lapse rate (K), DO NOT CHANGE. |
khdif (max_dom) |
0 |
horizontal diffusion constant (m^2/s) |
kvdif (max_dom) |
0 |
vertical diffusion constant (m^2/s) |
smdiv (max_dom) |
0.1 |
divergence damping (0.1 is typical) |
emdiv (max_dom) |
0.01 |
external-mode filter coef for mass coordinate model (0.01 is typical for real-data cases) |
epssm (max_dom) |
.1 |
time off-centering for vertical sound waves |
non_hydrostatic (max_dom) |
.true. |
whether to run the model in non-hydrostatic (.true.) or hydrostatic (.false.) mode |
pert_coriolis (max_dom) |
.false. |
Coriolis only acts on wind perturbation (idealized) |
top_lid (max_dom) |
.false. |
zero vertical motion at top of domain |
mix_full_fields |
.false. |
used with diff_opt = 2; value of ".true." is recommended, except for highly idealized numerical tests; damp_opt must not be 1 if ".true." is chosen. .false. means subtract 1-d base-state profile before mixing |
mix_isotropic(max_dom) |
0 |
0=anisotropic vertical/horizontal diffusion coeffs, 1=isotropic |
mix_upper_bound(max_dom) |
0.1 |
non-dimensional upper limit for diffusion coeffs |
h_mom_adv_order (max_dom) |
5 |
horizontal momentum advection order (5=5th, etc.) |
v_mom_adv_order (max_dom) |
3 |
vertical momentum advection order |
h_sca_adv_order (max_dom) |
5 |
horizontal scalar advection order |
v_sca_adv_order (max_dom) |
3 |
vertical scalar advection order |
time_step_sound (max_dom) |
4 |
number of sound steps per time step (if using a time_step much larger than 6*dx in km, increase the number of sound steps). 0 = the value is computed automatically |
pd_moist (max_dom) |
.false. |
positive-definite advection of moisture; set to .true. to turn it on |
pd_scalar (max_dom) |
.false. |
positive-definite advection of scalars |
pd_tke (max_dom) |
.false. |
positive-definite advection of tke |
pd_chem (max_dom) |
.false. |
positive-definite advection of chem vars |
tke_drag_coefficient (max_dom) |
0 |
surface drag coefficient (Cd, dimensionless) for diff_opt=2 only |
tke_heat_flux (max_dom) |
0 |
surface thermal flux (H/(rho*cp), K m/s) for diff_opt = 2 only |
do_coriolis (max_dom) |
.true. |
whether to do Coriolis calculations (idealized) |
do_curvature (max_dom) |
.true. |
whether to do curvature calculations (idealized) |
do_gradp (max_dom) |
.true. |
whether to do horizontal pressure gradient calculations (idealized) |
fft_filter_lat |
45. |
the latitude above which the polar filter is turned on for the global model |
|
|
|
&bdy_control |
|
boundary condition control |
spec_bdy_width |
5 |
total number of rows for specified boundary value nudging |
spec_zone |
1 |
number of points in specified zone (spec b.c. option) |
relax_zone |
4 |
number of points in relaxation zone (spec b.c. option) |
specified (max_dom) |
.false. |
specified boundary conditions (can only be used for domain 1) |
spec_exp |
0. |
exponential multiplier for relaxation zone ramp for specified = .true. (0. = linear ramp, default; 0.33 = ~3*dx exp decay factor) |
|
|
The above 5 namelist variables are used for real-data runs only |
periodic_x (max_dom) |
.false. |
periodic boundary conditions in x direction |
symmetric_xs (max_dom) |
.false. |
symmetric boundary conditions at x start (west) |
symmetric_xe (max_dom) |
.false. |
symmetric boundary conditions at x end (east) |
open_xs (max_dom) |
.false. |
open boundary conditions at x start (west) |
open_xe (max_dom) |
.false. |
open boundary conditions at x end (east) |
periodic_y (max_dom) |
.false. |
periodic boundary conditions in y direction |
symmetric_ys (max_dom) |
.false. |
symmetric boundary conditions at y start (south) |
symmetric_ye (max_dom) |
.false. |
symmetric boundary conditions at y end (north) |
open_ys (max_dom) |
.false. |
open boundary conditions at y start (south) |
open_ye (max_dom) |
.false. |
open boundary conditions at y end (north) |
nested (max_dom) |
.false.,.true.,.true., |
nested boundary conditions (must be set to .true. for nests) |
polar |
.false. |
polar boundary condition (v=0 at poleward-most v-point) for global application |
|
|
|
&namelist_quilt |
|
Options for asynchronous (quilted) I/O for MPI applications |
nio_tasks_per_group |
0 |
default value is 0: no quilting; > 0 quilting I/O |
nio_groups |
1 |
default 1 |
|
|
|
&grib2 |
|
|
background_proc_id |
255 |
Background generating process identifier, typically defined by the originating center to identify the background data that was used in creating the data. This is octet 13 of Section 4 in the grib2 message |
forecast_proc_id |
255 |
Analysis or generating forecast process identifier, typically defined by the originating center to identify the forecast process that was used to generate the data. This is octet 14 of Section 4 in the grib2 message |
production_status |
255 |
Production status of processed data in the grib2 message. See Code Table 1.3 of the grib2 manual. This is octet 20 of Section 1 in the grib2 record |
compression |
40 |
The compression method to encode the output grib2 message. Only 40 for jpeg2000 or 41 for PNG are supported |
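Collected into a namelist record, the default values documented above would appear as:

```
&grib2
 background_proc_id = 255,
 forecast_proc_id   = 255,
 production_status  = 255,
 compression        = 40,
/
```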
|
|
|
&dfi_control |
digital filter initialization (DFI) control (does not yet support nesting) |
|
dfi_opt |
3 |
which DFI option to use: 0 = no digital filter initialization; 1 = digital filter launch (DFL); 2 = diabatic DFI (DDFI); 3 = twice DFI (TDFI; recommended) |
dfi_nfilter |
7 |
digital filter type: 0 = uniform; 1 = Lanczos; 2 = Hamming; 3 = Blackman; 4 = Kaiser; 5 = Potter; 6 = Dolph window; 7 = Dolph (recommended); 8 = recursive high-order |
dfi_write_filtered_input |
.true. |
whether to write a wrfinput file with the filtered model state before beginning the forecast |
dfi_write_dfi_history |
.false. |
whether to write wrfout files during the filtering integration |
dfi_cutoff_seconds |
3600 |
cutoff period, in seconds, for the filter; should not be longer than the filter window |
dfi_time_dim |
1000 |
maximum number of time steps for the filtering period; this value can be larger than necessary |
dfi_bckstop_year |
2001 |
four-digit year of the stop time for backward DFI integration; for a model that starts from 2001061112, this specifies 1 hour of backward integration |
dfi_bckstop_month |
06 |
two-digit month of the stop time for backward DFI integration |
dfi_bckstop_day |
11 |
two-digit day of the stop time for backward DFI integration |
dfi_bckstop_hour |
11 |
two-digit hour of the stop time for backward DFI integration |
dfi_bckstop_minute |
00 |
two-digit minute of the stop time for backward DFI integration |
dfi_bckstop_second |
00 |
two-digit second of the stop time for backward DFI integration |
dfi_fwdstop_year |
2001 |
four-digit year of the stop time for forward DFI integration; for a model that starts at 2001061112, this specifies 30 minutes of forward integration |
dfi_fwdstop_month |
06 |
two-digit month of the stop time for forward DFI integration |
dfi_fwdstop_day |
11 |
two-digit day of the stop time for forward DFI integration |
dfi_fwdstop_hour |
12 |
two-digit hour of the stop time for forward DFI integration |
dfi_fwdstop_minute |
30 |
two-digit minute of the stop time for forward DFI integration |
dfi_fwdstop_second |
00 |
two-digit second of the stop time for forward DFI integration |
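Put together, the TDFI example documented above (model start 2001061112, 1 hour of backward integration, 30 minutes of forward integration) corresponds to the following &dfi_control record:

```
&dfi_control
 dfi_opt                  = 3,
 dfi_nfilter              = 7,
 dfi_write_filtered_input = .true.,
 dfi_write_dfi_history    = .false.,
 dfi_cutoff_seconds       = 3600,
 dfi_time_dim             = 1000,
 dfi_bckstop_year         = 2001,
 dfi_bckstop_month        = 06,
 dfi_bckstop_day          = 11,
 dfi_bckstop_hour         = 11,
 dfi_bckstop_minute       = 00,
 dfi_bckstop_second       = 00,
 dfi_fwdstop_year         = 2001,
 dfi_fwdstop_month        = 06,
 dfi_fwdstop_day          = 11,
 dfi_fwdstop_hour         = 12,
 dfi_fwdstop_minute       = 30,
 dfi_fwdstop_second       = 00,
/
```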
List of Fields
The following is edited output from the netCDF utility ncdump. Note that the fields present in the output depend on the model options used.
ncdump -h wrfout_d01_yyyy_mm_dd-hh:mm:ss
netcdf wrfout_d01_2000-01-24_12:00:00 {
dimensions:
    Time= UNLIMITED ; // (1 currently)
    DateStrLen= 19 ;
    west_east= 73 ;
    south_north= 60 ;
    west_east_stag= 74 ;
    bottom_top= 27 ;
    south_north_stag= 61 ;
    bottom_top_stag= 28 ;
    soil_layers_stag= 5 ;
variables:
    char Times(Time, DateStrLen) ;
    float LU_INDEX(Time, south_north, west_east) ;
        LU_INDEX:description= "LAND USE CATEGORY" ;
        LU_INDEX:units= "" ;
    float U(Time, bottom_top, south_north, west_east_stag) ;
        U:description= "x-wind component" ;
        U:units= "m s-1" ;
    float V(Time, bottom_top, south_north_stag, west_east) ;
        V:description= "y-wind component" ;
        V:units= "m s-1" ;
    float W(Time, bottom_top_stag, south_north, west_east) ;
        W:description= "z-wind component" ;
        W:units= "m s-1" ;
    float PH(Time, bottom_top_stag, south_north, west_east) ;
        PH:description= "perturbation geopotential" ;
        PH:units= "m2 s-2" ;
    float PHB(Time, bottom_top_stag, south_north, west_east) ;
        PHB:description= "base-state geopotential" ;
        PHB:units= "m2 s-2" ;
    float T(Time, bottom_top, south_north, west_east) ;
        T:description= "perturbation potential temperature (theta-t0)" ;
        T:units= "K" ;
    float MU(Time, south_north, west_east) ;
        MU:description= "perturbation dry air mass in column" ;
        MU:units= "Pa" ;
    float MUB(Time, south_north, west_east) ;
        MUB:description= "base state dry air mass in column" ;
        MUB:units= "Pa" ;
    float NEST_POS(Time, south_north, west_east) ;
        NEST_POS:description= "-" ;
        NEST_POS:units= "-" ;
    float P(Time, bottom_top, south_north, west_east) ;
        P:description= "perturbation pressure" ;
        P:units= "Pa" ;
    float PB(Time, bottom_top, south_north, west_east) ;
        PB:description= "BASE STATE PRESSURE" ;
        PB:units= "Pa" ;
    float SR(Time, south_north, west_east) ;
        SR:description= "fraction of frozen precipitation" ;
        SR:units= "-" ;
    float FNM(Time, bottom_top) ;
        FNM:description= "upper weight for vertical stretching" ;
        FNM:units= "" ;
    float FNP(Time, bottom_top) ;
        FNP:description= "lower weight for vertical stretching" ;
        FNP:units= "" ;
    float RDNW(Time, bottom_top) ;
        RDNW:description= "inverse d(eta) values between full (w) levels" ;
        RDNW:units= "" ;
    float RDN(Time, bottom_top) ;
        RDN:description= "inverse d(eta) values between half (mass) levels" ;
        RDN:units= "" ;
    float DNW(Time, bottom_top) ;
        DNW:description= "d(eta) values between full (w) levels" ;
        DNW:units= "" ;
    float DN(Time, bottom_top) ;
        DN:description= "d(eta) values between half (mass) levels" ;
        DN:units= "" ;
    float ZNU(Time, bottom_top) ;
        ZNU:description= "eta values on half (mass) levels" ;
        ZNU:units= "" ;
    float ZNW(Time, bottom_top_stag) ;
        ZNW:description= "eta values on full (w) levels" ;
        ZNW:units= "" ;
    float CFN(Time) ;
        CFN:description= "extrapolation constant" ;
        CFN:units= "" ;
    float CFN1(Time) ;
        CFN1:description= "extrapolation constant" ;
        CFN1:units= "" ;
    float Q2(Time, south_north, west_east) ;
        Q2:description= "QV at 2 M" ;
        Q2:units= "kg kg-1" ;
    float T2(Time, south_north, west_east) ;
        T2:description= "TEMP at 2 M" ;
        T2:units= "K" ;
    float TH2(Time, south_north, west_east) ;
        TH2:description= "POT TEMP at 2 M" ;
        TH2:units= "K" ;
    float PSFC(Time, south_north, west_east) ;
        PSFC:description= "SFC PRESSURE" ;
        PSFC:units= "Pa" ;
    float U10(Time, south_north, west_east) ;
        U10:description= "U at 10 M" ;
        U10:units= "m s-1" ;
    float V10(Time, south_north, west_east) ;
        V10:description= "V at 10 M" ;
        V10:units= "m s-1" ;
    float RDX(Time) ;
        RDX:description= "INVERSE X GRID LENGTH" ;
        RDX:units= "" ;
    float RDY(Time) ;
        RDY:description= "INVERSE Y GRID LENGTH" ;
        RDY:units= "" ;
    float RESM(Time) ;
        RESM:description= "TIME WEIGHT CONSTANT FOR SMALL STEPS" ;
        RESM:units= "" ;
    float ZETATOP(Time) ;
        ZETATOP:description= "ZETA AT MODEL TOP" ;
        ZETATOP:units= "" ;
    float CF1(Time) ;
        CF1:description= "2nd order extrapolation constant" ;
        CF1:units= "" ;
    float CF2(Time) ;
        CF2:description= "2nd order extrapolation constant" ;
        CF2:units= "" ;
    float CF3(Time) ;
        CF3:description= "2nd order extrapolation constant" ;
        CF3:units= "" ;
    int ITIMESTEP(Time) ;
        ITIMESTEP:description= "" ;
        ITIMESTEP:units= "" ;
    float XTIME(Time) ;
        XTIME:description= "minutes since simulation start" ;
        XTIME:units= "" ;
    float QVAPOR(Time, bottom_top, south_north, west_east) ;
        QVAPOR:description= "Water vapor mixing ratio" ;
        QVAPOR:units= "kg kg-1" ;
    float QCLOUD(Time, bottom_top, south_north, west_east) ;
        QCLOUD:description= "Cloud water mixing ratio" ;
        QCLOUD:units= "kg kg-1" ;
    float QRAIN(Time, bottom_top, south_north, west_east) ;
        QRAIN:description= "Rain water mixing ratio" ;
        QRAIN:units= "kg kg-1" ;
    float LANDMASK(Time, south_north, west_east) ;
        LANDMASK:description= "LAND MASK (1 FOR LAND, 0 FOR WATER)" ;
        LANDMASK:units= "" ;
    float TSLB(Time, soil_layers_stag, south_north, west_east) ;
        TSLB:description= "SOIL TEMPERATURE" ;
        TSLB:units= "K" ;
    float ZS(Time, soil_layers_stag) ;
        ZS:description= "DEPTHS OF CENTERS OF SOIL LAYERS" ;
        ZS:units= "m" ;
    float DZS(Time, soil_layers_stag) ;
        DZS:description= "THICKNESSES OF SOIL LAYERS" ;
        DZS:units= "m" ;
    float SMOIS(Time, soil_layers_stag, south_north, west_east) ;
        SMOIS:description= "SOIL MOISTURE" ;
        SMOIS:units= "m3 m-3" ;
    float SH2O(Time, soil_layers_stag, south_north, west_east) ;
        SH2O:description= "SOIL LIQUID WATER" ;
        SH2O:units= "m3 m-3" ;
    float XICE(Time, south_north, west_east) ;
        XICE:description= "SEA ICE FLAG" ;
        XICE:units= "" ;
    float SFROFF(Time, south_north, west_east) ;
        SFROFF:description= "SURFACE RUNOFF" ;
        SFROFF:units= "mm" ;
    float UDROFF(Time, south_north, west_east) ;
        UDROFF:description= "UNDERGROUND RUNOFF" ;
        UDROFF:units= "mm" ;
    int IVGTYP(Time, south_north, west_east) ;
        IVGTYP:description= "DOMINANT VEGETATION CATEGORY" ;
        IVGTYP:units= "" ;
    int ISLTYP(Time, south_north, west_east) ;
        ISLTYP:description= "DOMINANT SOIL CATEGORY" ;
        ISLTYP:units= "" ;
    float VEGFRA(Time, south_north, west_east) ;
        VEGFRA:description= "VEGETATION FRACTION" ;
        VEGFRA:units= "" ;
    float GRDFLX(Time, south_north, west_east) ;
        GRDFLX:description= "GROUND HEAT FLUX" ;
        GRDFLX:units= "W m-2" ;
    float SNOW(Time, south_north, west_east) ;
        SNOW:description= "SNOW WATER EQUIVALENT" ;
        SNOW:units= "kg m-2" ;
    float SNOWH(Time, south_north, west_east) ;
        SNOWH:description= "PHYSICAL SNOW DEPTH" ;
        SNOWH:units= "m" ;
    float RHOSN(Time, south_north, west_east) ;
        RHOSN:description= "SNOW DENSITY" ;
        RHOSN:units= "kg m-3" ;
    float CANWAT(Time, south_north, west_east) ;
        CANWAT:description= "CANOPY WATER" ;
        CANWAT:units= "kg m-2" ;
    float SST(Time, south_north, west_east) ;
        SST:description= "SEA SURFACE TEMPERATURE" ;
        SST:units= "K" ;
    float QNDROPSOURCE(Time, bottom_top, south_north, west_east) ;
        QNDROPSOURCE:description= "Droplet number source" ;
        QNDROPSOURCE:units= "/kg/s" ;
    float MAPFAC_M(Time, south_north, west_east) ;
        MAPFAC_M:description= "Map scale factor on mass grid" ;
        MAPFAC_M:units= "" ;
    float MAPFAC_U(Time, south_north, west_east_stag) ;
        MAPFAC_U:description= "Map scale factor on u-grid" ;
        MAPFAC_U:units= "" ;
    float MAPFAC_V(Time, south_north_stag, west_east) ;
        MAPFAC_V:description= "Map scale factor on v-grid" ;
        MAPFAC_V:units= "" ;
    float F(Time, south_north, west_east) ;
        F:description= "Coriolis sine latitude term" ;
        F:units= "s-1" ;
    float E(Time, south_north, west_east) ;
        E:description= "Coriolis cosine latitude term" ;
        E:units= "s-1" ;
    float SINALPHA(Time, south_north, west_east) ;
        SINALPHA:description= "Local sine of map rotation" ;
        SINALPHA:units= "" ;
    float COSALPHA(Time, south_north, west_east) ;
        COSALPHA:description= "Local cosine of map rotation" ;
        COSALPHA:units= "" ;
    float HGT(Time, south_north, west_east) ;
        HGT:description= "Terrain Height" ;
        HGT:units= "m" ;
    float TSK(Time, south_north, west_east) ;
        TSK:description= "SURFACE SKIN TEMPERATURE" ;
        TSK:units= "K" ;
    float P_TOP(Time) ;
        P_TOP:description= "PRESSURE TOP OF THE MODEL" ;
        P_TOP:units= "Pa" ;
    float RAINC(Time, south_north, west_east) ;
        RAINC:description= "ACCUMULATED TOTAL CUMULUS PRECIPITATION" ;
        RAINC:units= "mm" ;
    float RAINNC(Time, south_north, west_east) ;
        RAINNC:description= "ACCUMULATED TOTAL GRID SCALE PRECIPITATION" ;
        RAINNC:units= "mm" ;
    float SNOWNC(Time, south_north, west_east) ;
        SNOWNC:description= "ACCUMULATED TOTAL GRID SCALE SNOW AND ICE" ;
        SNOWNC:units= "mm" ;
    float GRAUPELNC(Time, south_north, west_east) ;
        GRAUPELNC:description= "ACCUMULATED TOTAL GRID SCALE GRAUPEL" ;
        GRAUPELNC:units= "mm" ;
    float SWDOWN(Time, south_north, west_east) ;
        SWDOWN:description= "DOWNWARD SHORT WAVE FLUX AT GROUND SURFACE" ;
        SWDOWN:units= "W m-2" ;
    float GLW(Time, south_north, west_east) ;
        GLW:description= "DOWNWARD LONG WAVE FLUX AT GROUND SURFACE" ;
        GLW:units= "W m-2" ;
    float OLR(Time, south_north, west_east) ;
        OLR:description= "TOA OUTGOING LONG WAVE" ;
        OLR:units= "W m-2" ;
    float XLAT(Time, south_north, west_east) ;
        XLAT:description= "LATITUDE, SOUTH IS NEGATIVE" ;
        XLAT:units= "degree_north" ;
    float XLONG(Time, south_north, west_east) ;
        XLONG:description= "LONGITUDE, WEST IS NEGATIVE" ;
        XLONG:units= "degree_east" ;
    float XLAT_U(Time, south_north, west_east_stag) ;
        XLAT_U:description= "LATITUDE, SOUTH IS NEGATIVE" ;
        XLAT_U:units= "degree_north" ;
    float XLONG_U(Time, south_north, west_east_stag) ;
        XLONG_U:description= "LONGITUDE, WEST IS NEGATIVE" ;
        XLONG_U:units= "degree_east" ;
    float XLAT_V(Time, south_north_stag, west_east) ;
        XLAT_V:description= "LATITUDE, SOUTH IS NEGATIVE" ;
        XLAT_V:units= "degree_north" ;
    float XLONG_V(Time, south_north_stag, west_east) ;
        XLONG_V:description= "LONGITUDE, WEST IS NEGATIVE" ;
        XLONG_V:units= "degree_east" ;
    float ALBEDO(Time, south_north, west_east) ;
        ALBEDO:description= "ALBEDO" ;
        ALBEDO:units= "-" ;
    float TMN(Time, south_north, west_east) ;
        TMN:description= "SOIL TEMPERATURE AT LOWER BOUNDARY" ;
        TMN:units= "K" ;
    float XLAND(Time, south_north, west_east) ;
        XLAND:description= "LAND MASK (1 FOR LAND, 2 FOR WATER)" ;
        XLAND:units= "" ;
    float UST(Time, south_north, west_east) ;
        UST:description= "U* IN SIMILARITY THEORY" ;
        UST:units= "m s-1" ;
    float PBLH(Time, south_north, west_east) ;
        PBLH:description= "PBL HEIGHT" ;
        PBLH:units= "m" ;
    float HFX(Time, south_north, west_east) ;
        HFX:description= "UPWARD HEAT FLUX AT THE SURFACE" ;
        HFX:units= "W m-2" ;
    float QFX(Time, south_north, west_east) ;
        QFX:description= "UPWARD MOISTURE FLUX AT THE SURFACE" ;
        QFX:units= "kg m-2 s-1" ;
    float LH(Time, south_north, west_east) ;
        LH:description= "LATENT HEAT FLUX AT THE SURFACE" ;
        LH:units= "W m-2" ;
    float SNOWC(Time, south_north, west_east) ;
        SNOWC:description= "FLAG INDICATING SNOW COVERAGE (1 FOR SNOW COVER)" ;
        SNOWC:units= "" ;
}
Special WRF Output Variables
The WRF model outputs the state variables defined in the Registry file; these are the variables used in the model's prognostic equations. Some of them are perturbation fields, so a few definitions are needed to reconstruct standard meteorological variables. In particular:
total geopotential |
PH + PHB |
total geopotential height in m |
( PH + PHB ) / 9.81 |
total potential temperature in K |
T + 300 |
total pressure in mb |
( P + PB ) * 0.01 |
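As a minimal sketch, the reconstruction formulas above can be applied directly once the perturbation and base-state fields have been read from a wrfout file (typically with a netCDF reader such as the netCDF4 package). The sample values below are illustrative stand-ins for data that would normally come from the file:

```python
# Reconstructing standard meteorological variables from WRF state fields,
# following the definitions in the table above. The numbers here are
# illustrative; in practice they are arrays read from a wrfout file.

PH  = 100.0     # perturbation geopotential (m2 s-2)
PHB = 9810.0    # base-state geopotential (m2 s-2)
T   = 1.5       # perturbation potential temperature (K)
P   = -250.0    # perturbation pressure (Pa)
PB  = 85000.0   # base-state pressure (Pa)

geopotential = PH + PHB           # total geopotential (m2 s-2)
height_m     = (PH + PHB) / 9.81  # total geopotential height (m)
theta_K      = T + 300.0          # total potential temperature (K)
pressure_mb  = (P + PB) * 0.01    # total pressure (mb)

print(round(height_m, 1), theta_K, round(pressure_mb, 1))  # 1010.2 301.5 847.5
```

The same expressions apply unchanged to whole 3-D arrays, since netCDF readers return the fields as arrays and the operations are elementwise.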
The definitions of the map projection options:
map_proj = 1: Lambert Conformal
map_proj = 2: Polar Stereographic
map_proj = 3: Mercator
map_proj = 10: latitude and longitude
List of Global Attributes
// global attributes:
:TITLE= " OUTPUT FROM WRF V3.0 MODEL" ;
:START_DATE= "2000-01-24_12:00:00" ;
:SIMULATION_START_DATE= "2000-01-24_12:00:00" ;
:WEST-EAST_GRID_DIMENSION= 74 ;
:SOUTH-NORTH_GRID_DIMENSION= 61 ;
:BOTTOM-TOP_GRID_DIMENSION= 28 ;
:DX= 30000.f ;
:DY= 30000.f ;
:GRIDTYPE= "C" ;
:DYN_OPT= 2 ;
:DIFF_OPT= 1 ;
:KM_OPT= 4 ;
:DAMP_OPT= 0 ;
:KHDIF= 0.f ;
:KVDIF= 0.f ;
:MP_PHYSICS= 3 ;
:RA_LW_PHYSICS= 0 ;
:RA_SW_PHYSICS= 1 ;
:SF_SFCLAY_PHYSICS= 1 ;
:SF_SURFACE_PHYSICS= 1 ;
:BL_PBL_PHYSICS= 1 ;
:CU_PHYSICS= 1 ;
:SURFACE_INPUT_SOURCE= 1 ;
:SST_UPDATE= 0 ;
:GRID_FDDA= 0 ;
:GFDDA_INTERVAL_M= 0 ;
:GFDDA_END_H= 0 ;
:UCMCALL= 0 ;
:FEEDBACK= 1 ;
:SMOOTH_OPTION= 0 ;
:SWRAD_SCAT= 1.f ;
:W_DAMPING= 0 ;
:PD_MOIST= 1 ;
:PD_SCALAR= 0 ;
:PD_TKE= 0 ;
:DIFF_6TH_OPT= 0 ;
:DIFF_6TH_FACTOR= 0.12f ;
:OBS_NUDGE_OPT= 0 ;
:WEST-EAST_PATCH_START_UNSTAG= 1 ;
:WEST-EAST_PATCH_END_UNSTAG= 73 ;
:WEST-EAST_PATCH_START_STAG= 1 ;
:WEST-EAST_PATCH_END_STAG= 74 ;
:SOUTH-NORTH_PATCH_START_UNSTAG= 1 ;
:SOUTH-NORTH_PATCH_END_UNSTAG= 60 ;
:SOUTH-NORTH_PATCH_START_STAG= 1 ;
:SOUTH-NORTH_PATCH_END_STAG= 61 ;
:BOTTOM-TOP_PATCH_START_UNSTAG= 1 ;
:BOTTOM-TOP_PATCH_END_UNSTAG= 27 ;
:BOTTOM-TOP_PATCH_START_STAG= 1 ;
:BOTTOM-TOP_PATCH_END_STAG= 28 ;
:GRID_ID= 1 ;
:PARENT_ID= 0 ;
:I_PARENT_START= 0 ;
:J_PARENT_START= 0 ;
:PARENT_GRID_RATIO= 1 ;
:DT= 180.f ;
:CEN_LAT= 34.83001f ;
:CEN_LON= -81.03f ;
:TRUELAT1= 30.f ;
:TRUELAT2= 60.f ;
:MOAD_CEN_LAT= 34.83001f ;
:STAND_LON= -98.f ;
:GMT= 12.f ;
:JULYR= 2000 ;
:JULDAY= 24 ;
:MAP_PROJ= 1 ;
:MMINLU= "USGS" ;
:ISWATER= 16 ;
:ISICE= 24 ;
:ISURBAN= 1 ;
:ISOILWATER= 14 ;