User’s Guide for Advanced Research WRF (ARW) Modeling System Version 2

 

Chapter 5: WRF Model


Introduction

The WRF model is a fully compressible, nonhydrostatic model (with a hydrostatic option). Its vertical coordinate is a terrain-following hydrostatic pressure coordinate. The grid staggering is the Arakawa C-grid. The model uses the Runge-Kutta 2nd and 3rd order time integration schemes, and 2nd to 6th order advection schemes in both horizontal and vertical directions. It uses a time-split small step for acoustic and gravity-wave modes. The dynamics conserves scalar variables.

The WRF model code contains several initialization programs (ideal.exe and real.exe; see Chapter 4), a numerical integration program (wrf.exe), and a program to do one-way nesting (ndown.exe). The WRF model Version 2.1 supports a variety of capabilities. These include

·               Real-data and idealized simulations

·               Various lateral boundary condition options for both real-data and idealized simulations

·               Full physics options

·               Non-hydrostatic and hydrostatic (runtime option)

·               One-way, two-way nesting and moving nest

·               Applications ranging from meters to thousands of kilometers

Software requirement

·               Fortran 90 or 95 and C compiler

·               perl 5.04 or better

·               If MPI (distributed-memory) or OpenMP (shared-memory) compilation is desired, the corresponding MPI or OpenMP libraries are required

·               The WRF I/O API supports netCDF, PHDF5 and GRIB1 formats; hence one of these libraries needs to be available on the computer where you compile and run WRF

Before you start

Before you compile the WRF code on your computer, check to see whether you have the netCDF library installed, since netCDF is one of the supported WRF I/O formats. If your netCDF is installed in a non-standard location (e.g. not under /usr/local/), then you need to know the paths to the netCDF library and to its include/ directory. You may use the environment variable NETCDF to define the path to the netCDF installation. To do so, type

setenv NETCDF /path-to-netcdf-library

If you don't have netCDF on your computer, you need to install it first. You may download the netCDF source code or a pre-built binary. Installation instructions can be found on the Unidata Web page at http://www.unidata.ucar.edu/.
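
As a quick sanity check (assuming a csh-type shell and a conventional netCDF installation layout), you can verify that the library and the Fortran include file are where WRF expects them:

ls $NETCDF/lib/libnetcdf.a $NETCDF/include/netcdf.inc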

Hint: for Linux users:

If you use a PGI or Intel compiler on a Linux computer, make sure your netCDF is installed using the same compiler. If your path does not point to a PGI/Intel-compiled netCDF, use the NETCDF environment variable, e.g.,

 
setenv NETCDF /usr/local/netcdf-pgi
 

Hint: NCAR IBM users:

On NCAR's IBM computer (bluesky), the netCDF library is installed for both 32-bit and 64-bit memory usage. The default is the 32-bit version. If you would like to use the 64-bit version, set the following environment variable before you start compilation:

setenv OBJECT_MODE 64

This will result in the correct netCDF library and include links being created in the netcdf_links/ directory under WRFV2/.

Hint: for nesting compile:

- On most platforms, this requires RSL/MPI, even if you only have one processor. Check the options carefully and select those which support nesting.

How to compile WRF?

The WRF source code tar file may be downloaded from http://www2.mmm.ucar.edu/wrf/download/get_source.html. Once you obtain the tar file, gunzip and untar it; this will create a WRFV2/ directory, which contains:

 

Makefile

Top-level makefile

README

General information about WRF code

README_test_cases

Explanation of the test cases

Registry/

Directory for WRF Registry file

arch/

Directory where compile options are gathered

clean

script to clean created files, executables

compile

script for compiling WRF code

configure

script to configure the configure.wrf file for compile

dyn_em/

Directory for modules for dynamics in current WRF core (Advanced Research WRF core)

dyn_exp/

Directory for a 'toy' dynamic core

external/

Directory that contains external packages, such as those for IO, time keeping and MPI

frame/

Directory that contains modules for WRF framework

inc/

Directory that contains include files

main/

Directory for main routines, such as wrf.F, and all executables

phys/

Directory for all physics modules

run/

Directory where one may run WRF

share/

Directory that contains mostly modules for WRF mediation layer and WRF I/O

test/

Directory that contains 7 test case directories, may be used to run WRF

tools/

Directory that contains tools

Go to the WRFV2 (top-level) directory.

Type 'configure', and you will be given a list of choices for your computer. These choices range from compiling for a single-processor job, to using OpenMP shared-memory or distributed-memory parallelization options for multiple processors. Some options support nesting, others do not, so select the option carefully. For example, the choices for a Linux computer look like this:

 
 
checking for perl5... no

checking for perl... found /usr/bin/perl (perl)

Will use NETCDF in dir: /usr/local/netcdf-pgi

PHDF5 not set in environment. Will configure WRF for use without.

------------------------------------------------------------------------

Please select from among the following supported platforms.

1. PC Linux i486 i586 i686, PGI compiler (Single-threaded, no nesting)
2. PC Linux i486 i586 i686, PGI compiler (single threaded, allows nesting using RSL without MPI)
3. PC Linux i486 i586 i686, PGI compiler SM-Parallel (OpenMP, no nesting)
4. PC Linux i486 i586 i686, PGI compiler SM-Parallel (OpenMP, allows nesting using RSL without MPI)
5. PC Linux i486 i586 i686, PGI compiler DM-Parallel (RSL, MPICH, Allows nesting)
6. PC Linux i486 i586 i686, PGI compiler DM-Parallel (RSL_LITE, MPICH, Allows nesting)
7. Intel xeon i686 ia32 Xeon Linux, ifort compiler (single-threaded, no nesting)
8. Intel xeon i686 ia32 Xeon Linux, ifort compiler (single threaded, allows nesting using RSL without MPI)
9. Intel xeon i686 ia32 Xeon Linux, ifort compiler (OpenMP)
10. Intel xeon i686 ia32 Xeon Linux, ifort compiler SM-Parallel (OpenMP, allows nesting using RSL without MPI)
11. Intel xeon i686 ia32 Xeon Linux, ifort+icc compiler DM-Parallel (RSL, MPICH, allows nesting)
12. Intel xeon i686 ia32 Xeon Linux, ifort+gcc compiler DM-Parallel (RSL, MPICH, allows nesting)
13. PC Linux i486 i586 i686, PGI compiler, ESMF (Single-threaded, ESMF coupling, no nesting)

Enter selection [1-13] :

Enter a number.

You will see a configure.wrf file. Edit compile options/paths, if necessary.

Hint: if you are interested in making nested runs, make sure you select an option that supports nesting. In general, the options that are used for MPI/RSL/RSL_LITE are also those which support nesting.

Hint: On some computers (e.g. some Intel machines), it is necessary to set the following environment variable before one compiles:

setenv WRF_EM_CORE 1

Type 'compile', and it will show the choices:

 
 
  Usage:
 
    compile wrf           compile wrf in run dir (Note, no real.exe, ndown.exe or ideal.exe generated)
 
    test cases (see README_test_cases for details):
 
     compile em_b_wave
     compile em_grav2d_x
     compile em_hill2d_x
     compile em_quarter_ss
     compile em_real
     compile em_squall2d_x
     compile em_squall2d_y
    compile -h                 help message
 

where em stands for the Advanced Research WRF dynamic solver (which currently is the 'Eulerian mass-coordinate' solver). When you switch from one test case to another, you must type one of the above to recompile. The recompile is necessary for the initialization programs (i.e. real.exe, and ideal.exe - there is a different ideal.exe for each of the idealized cases), while wrf.exe is the same for all test cases.

If you want to clean directories of all object files and executables, type 'clean'.

Type 'clean -a' to remove all built files, including configure.wrf. This is recommended if you make any mistake during the process, or if you have edited the Registry.EM file.

a. Idealized case

Type 'compile case_name' to compile. Suppose you would like to run the 2-dimensional squall case; type

compile em_squall2d_x, or compile em_squall2d_x >& compile.log

After successful compilation, you should have two executables created in the main/ directory: ideal.exe and wrf.exe. These two executables will be linked to the corresponding test/ or run/ directories. cd to one of those directories to run the model.
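
A quick way to confirm that the build succeeded (using the directory layout described above; the exact test directory depends on the case you compiled) is to list the executables and their links, e.g.:

ls -l main/*.exe test/em_squall2d_x/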

b. Real-data case

To compile the WRF model for a real-data case after 'configure', type

compile em_real, or compile em_real >& compile.log

When the compile is successful, it will create three executables in the main/ directory: ndown.exe, real.exe and wrf.exe.

real.exe : for WRF initialization of real data cases
ndown.exe : used for one-way nesting
wrf.exe : WRF model integration

As in the idealized cases, these executables will be linked to the test/em_real and run/ directories. cd to one of these two directories to run the real-data case.

 


How to run WRF?

After successful compilation, it is time to run the model. You can do so by cd'ing to either the run/ directory or the test/case_name directory. In either case, you should see the executables ideal.exe or real.exe and wrf.exe, linked files (mostly for real-data cases), and one or more namelist.input files in the directory.

Idealized, real data, two-way nested, and one-way nested runs are explained on this page. Read on.

a. Idealized case

Say you have chosen to compile the test case em_squall2d_x; now type 'cd test/em_squall2d_x' or 'cd run'.

Edit namelist.input file (see README.namelist in WRFV2/run/ directory or its Web version) to change length of integration, frequency of output, size of domain, timestep, physics options, and other parameters.

If you see a script in the test case directory, called run_me_first.csh, run this one first by typing:

run_me_first.csh

This links some data files that you might need to run the case.

To run the initialization program, type

ideal.exe

This will generate the wrfinput_d01 file in the same directory. Idealized cases do not require a lateral boundary file because of the boundary condition choices they use, such as the periodic boundary condition option.

Note:

- ideal.exe cannot generally be run in parallel. For parallel compiles, run this on a single processor.

- The exception is the quarter_ss case, which can now be run in MPI.

To run the model, type

wrf.exe

or variations such as

wrf.exe >& wrf.out &

Note:

- Two-dimensional idealized cases cannot be run with MPI parallelism. OpenMP is ok.

- The execution command may be different for MPI runs on different machines, e.g. mpirun.

After successful completion, you should see wrfout_d01_0001-01-01* and wrfrst* files, depending on how one specifies the namelist variables for output.

b. Real-data case

Type 'cd test/em_real' or 'cd run', and this is where you are going to run both the WRF initialization program, real.exe, and WRF model, wrf.exe.

Running a real-data case requires successfully running the WRF Standard Initialization program. Make sure wrf_real_input_em.* files from the Standard Initialization are in this directory (you may link the files to this directory).

NOTE: you must use SI version 2.0 or above to prepare input for V2 WRF!

Edit namelist.input for dates, and domain size. Also edit time step, output options, and physics options.

Type 'real.exe' instead of ideal.exe to produce the wrfinput_d01 and wrfbdy_d01 files. In a real-data case, both files are required.

Run WRF model by typing 'wrf.exe'.

A successful run should produce one or several output files named like wrfout_d01_yyyy-mm-dd_hh:mm:ss. For example, if you start the model at 1200 UTC, January 24 2000, then your first output file should have the name:

wrfout_d01_2000-01-24_12:00:00

It is always good to check the times written to the output file by typing:

ncdump -v Times wrfout_d01_2000-01-24_12:00:00

You may have other wrfout files depending on the namelist options (how often you split the output files and so on using namelist option frames_per_outfile). You may also create restart files if you have restart frequency (restart_interval in the namelist.input file) set within your total integration length. The restart file should have names like

wrfrst_d01_yyyy-mm-dd_hh:mm:ss

For DM (distributed memory) parallel systems, some form of the mpirun command will be needed. For example, on a Linux cluster, the command to run MPI code using 4 processors may look like:

mpirun -np 4 real.exe
mpirun -np 4 wrf.exe

On IBM, the command is

poe real.exe
poe wrf.exe

in a batch job, and

poe real.exe -rmpool 1 -procs 4
poe wrf.exe -rmpool 1 -procs 4

for interactive runs.

 

How to Make a Two-way Nested Run?

WRF V2 supports a two-way nest option, in both 3-D idealized cases (quarter_ss and b_wave) and real-data cases. The model can handle multiple domains at the same nest level (no overlapping nests), or multiple nest levels (telescoping). A moving nest option is also available since V2.0.3.1.

Most of the options to start a nest run are handled through the namelist. All variables in the namelist.input file that have multiple columns of entries need to be edited with caution. The following are the key namelist variables to modify (an example namelist fragment follows this list):

start_ and end_year/month/day/minute/second: these control the nest start and end times

input_from_file: whether a nest requires an input file (e.g. wrfinput_d02). This is typically an option for real-data cases.

fine_input_stream: which fields from the nest input file are used in nest initialization. The fields to be used are defined in the Registry.EM. Typically they include static fields (such as terrain, landuse), and masked surface fields (such as skin temp, soil moisture and temperature).

max_dom: setting this to a number > 1 will invoke nesting. For example, if you want to have one coarse domain and one nest, set this variable to 2.

grid_id: domain identifier that will be used in the wrfout naming convention.

parent_id: use the grid_id to define the parent_id number for a nest.

i_parent_start/j_parent_start: lower-left corner starting indices of the nest domain in its parent domain. You should find these numbers in your SI's $MOAD_DATAROOT/static/wrfsi.nl namelist file, and look for values in the second (and third, and so on) column of DOMAIN_ORIGIN_LLI and DOMAIN_ORIGIN_LLJ.

parent_grid_ratio: integer parent-to-nest domain grid size ratio. If feedback is off, then this ratio can be even or odd. If feedback is on, then this ratio has to be odd.

parent_time_step_ratio: time ratio for the coarse and nest domains may be different from the parent_grid_ratio. For example, you may run a coarse domain at 30 km, and a nest at 10 km, the parent_grid_ratio in this case is 3. But you do not have to use 180 sec for the coarse domain and 60 for the nest domain. You may use, for example, 45 sec or 90 sec for the nest domain by setting this variable to 4 or 2.

feedback: this option takes the values of prognostic variables in the nest and overwrites the values in the coarse domain at the coincident points. This is the reason an odd parent_grid_ratio is currently required when this option is on.

smooth_option: this is a smoothing option for the parent domain, used if feedback is on. Three options are available: 0 - no smoothing; 1 - 1-2-1 smoothing; 2 - smoothing-desmoothing. (There was a bug in this option in pre-V2.1 code, and it has been fixed.)
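
For example, the &domains portion of namelist.input for a single 3:1 nest might look like the following sketch. The domain sizes, grid lengths and nest position are illustrative values only; see the namelist description later in this chapter for the full list of variables and their defaults.

&domains
 max_dom                 = 2,
 e_we                    = 74,    112,
 e_sn                    = 61,    97,
 e_vert                  = 28,    28,
 dx                      = 30000, 10000,
 dy                      = 30000, 10000,
 grid_id                 = 1,     2,
 parent_id               = 0,     1,
 i_parent_start          = 0,     31,
 j_parent_start          = 0,     17,
 parent_grid_ratio       = 1,     3,
 parent_time_step_ratio  = 1,     3,
 feedback                = 1,
 smooth_option           = 0,
/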

3-D Idealized Cases

For 3-D idealized cases, no additional input files are required. The key here is the specification of the namelist.input file. What the model does is to interpolate all variables required in the nest from the coarse domain fields. Set

input_from_file = F, F

Real Data Cases

For real-data cases, three input options are supported. The first one is similar to running the idealized cases: all fields for the nest are interpolated from the coarse domain (namelist variable input_from_file set to F for each domain). The disadvantage of this option is obvious: one will not benefit from the higher-resolution static fields (such as terrain, landuse, and so on).

The second option is to set input_from_file = T for each domain, which means that the nest will have a nested wrfinput file to read in (similar to MM5 nest option IOVERW = 1). The limitation of this option is that this only allows the nest to start at hour 0 of the coarse domain run.

The third option is, in addition to setting input_from_file = T for each domain, to also set fine_input_stream = 2 for each domain. Why a value of 2? This is based on the current Registry setting, which designates certain fields to be read in from auxiliary input stream 2. This option allows the nest initialization to use interpolated 3-D meteorological fields, static fields and masked, time-varying surface fields from the nest wrfinput file. It hence allows a nest to start at a later time than hour 0, and to use higher resolution input. Setting fine_input_stream = 0 is equivalent to the second option. This option was introduced in V2.1.
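
For instance, to select this third option for a run with one nest, the relevant entries (an illustrative sketch; these variables belong to the &time_control section described later in this chapter) would be:

input_from_file   = T, T
fine_input_stream = 0, 2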

The way options 2 and 3 work is a bit cumbersome at this time. So please bear with us. It involves

- Running SI requesting one or more nest domains
- Running real.exe multiple times, once for each nest domain to produce wrfinput_d0n files (e.g. wrfinput_d02), and once for domain 1 like one normally does to create wrfbdy_d01 and wrfinput_d01
- Running WRF once

To prepare for the nested run, first follow the instructions for the WRF SI program to create nest domain files. In addition to the files available for domain 1 (wrf_real_input_em.d01.yyyy-mm-dd_hh:mm:ss for all time periods), you should have a file from SI named wrf_real_input_em.d02.yyyy-mm-dd_hh:mm:ss, and this should be for the first time period of your model run. Say you have created SI output for a model run that starts at 1200 UTC Jan 24, 2000, using 6-hourly data; you should then have these files from SI:

wrf_real_input_em.d01.2000-01-24_12:00:00
wrf_real_input_em.d01.2000-01-24_18:00:00
wrf_real_input_em.d01.2000-01-25_00:00:00
wrf_real_input_em.d01.2000-01-25_06:00:00
wrf_real_input_em.d01.2000-01-25_12:00:00

If you use the nested option in SI, you should have one more file:

wrf_real_input_em.d02.2000-01-24_12:00:00

Once you have these files, do the following:

- Find out the dimensions of your nested domain. You can either make a note while you are running SI, or type

ncdump -h wrf_real_input_em.d02.2000-01-24_12:00:00

The grid dimensions can be found near the end of the dump: WEST-EAST_GRID_DIMENSION and SOUTH-NORTH_GRID_DIMENSION.

- Use these dimensions to set up the first namelist file. Note that you only need to set the namelist to run the first time period for domain 2. Only the first column matters in this case.

- To run real.exe, you must rename the domain 2 SI output file as if it is for domain 1. In the above case, type

mv wrf_real_input_em.d02.2000-01-24_12:00:00 \
   wrf_real_input_em.d01.2000-01-24_12:00:00

Make sure you don't overwrite the real domain 1 input file for this time period. So do this first for the nest domain. Move or link file(s) for the nest to your run directory first.

After running real.exe, rename wrfinput_d01 to wrfinput_d02

Save the namelist edited for this domain.

- Re-edit the namelist for the coarse domain. Again, only the first column matters.

- Run real.exe for all domain 1 input files from SI, and create wrfinput_d01 and wrfbdy_d01 files.

- Edit the namelist again. This time you must pay attention to all the multiple column entries. Don't forget to set max_dom = the number of your nested domains, including the coarse domain.

- Run wrf.exe as usual with the modified namelist (see the command sketch below)
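
The whole sequence can be summarized by the following command sketch (file names follow the 2000-01-24_12 example above; the namelist edits between commands are implied and must not be skipped):

# nest domain first: pretend the d02 SI file is for d01, then run real.exe
mv wrf_real_input_em.d02.2000-01-24_12:00:00 wrf_real_input_em.d01.2000-01-24_12:00:00
real.exe
mv wrfinput_d01 wrfinput_d02
# restore/link the true domain 1 SI files, re-edit namelist.input for the
# coarse grid, and run real.exe again for all time periods
real.exe
# edit namelist.input with all columns filled in (max_dom = 2), then run WRF
wrf.exe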

The following figure summarizes the data flow in the two-input, two-way nested run.

 


How to Make a One-way Nested Run?

WRF supports a one-way nested option. One-way nesting is defined as a finer-grid-resolution run made as a subsequent run after the coarser-grid-resolution run, driven by coarse grid output as initial and lateral boundary conditions, together with input from higher resolution terrestrial fields (e.g. terrain, landuse, etc.).

When one-way nesting is used, the coarse-to-fine grid ratio is only restricted to an integer. An integer less than 5 is recommended.

Making a one-way nested run involves these steps (a condensed command sketch appears after Step 4 below):

1) Make a coarse grid run
2) Make temporary fine grid initial condition (only a single time period is required)
3) Run program ndown, with coarse grid WRF model output, and fine grid input to generate fine grid initial and boundary conditions
4) Make the fine grid run

To compile, choose an option that supports nesting.

Step 1: Make a coarse grid run

This is no different from a single-domain WRF run as described above.

Step 2: Make a temporary fine grid initial condition file

The purpose of this step is to ingest higher resolution terrestrial fields and corresponding land-water masked soil fields.

Before doing this step, one should already have run SI, requesting one coarse and one nest domain, for the one time period at which one wants to start the one-way nested run. This should generate an SI output file for the nested domain (domain 2) named wrf_real_input_em.d02.*.

- Rename wrf_real_input_em.d02.* to wrf_real_input_em.d01.*. (Move the original domain 1 SI output files to a different place before you do this.)
- Edit the namelist.input file for this domain (pay attention to column 1 only), and enter the correct start time, grid dimensions and physics options.
- Run real.exe for this domain and this will produce a wrfinput_d01 file.
- Rename this wrfinput_d01 file as if it comes from SI again: wrf_real_input_em.d02.*. Make sure the date string is correctly specified in the file name.

 

Step 3: Make the final fine grid initial and boundary condition files

- Edit namelist.input again, and this time one needs to edit two columns: one for the dimensions of the coarse grid, and one for the fine grid. Note that the boundary condition frequency (namelist variable interval_seconds) is the time in seconds between the coarse grid output times.
- Run ndown.exe, with inputs from the coarse grid wrfout files, and wrf_real_input_em.d02.* file generated from Step 2 above. This will produce wrfinput_d02 and wrfbdy_d02 files.

Note that one may run program ndown under MPI, if it is compiled to support it. For example,

mpirun -np 4 ndown.exe

Step 4: Make the fine grid WRF run

- Rename wrfinput_d02 and wrfbdy_d02 to wrfinput_d01 and wrfbdy_d01, respectively.
- Edit namelist.input one more time, and it is now for the fine grid only.
- Run WRF for this grid.
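
A condensed command sketch of Steps 2 to 4 (for a run starting at 2000-01-24_12:00:00; the namelist edits between commands are implied) might look like:

# Step 2: temporary fine grid initial condition
mv wrf_real_input_em.d02.2000-01-24_12:00:00 wrf_real_input_em.d01.2000-01-24_12:00:00
real.exe
mv wrfinput_d01 wrf_real_input_em.d02.2000-01-24_12:00:00
# Step 3: run ndown with the coarse grid wrfout files and the file above
ndown.exe
# Step 4: fine grid run
mv wrfinput_d02 wrfinput_d01
mv wrfbdy_d02 wrfbdy_d01
wrf.exe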

The following figure summarizes the data flow for a one-way nested run.

 


How to Make a Moving-Nest Run?

The moving nest option is supported in the current WRF. Two types of moving nest runs are allowed. In the first option, the user specifies the nest movement in the namelist. The second option allows the nest to move automatically based on a vortex-following algorithm; this option is designed to follow the movement of a tropical cyclone.

To make specified moving nest runs, one first needs to compile the code with -DMOVE_NESTS added to ARCHFLAGS in the configure.wrf file. To run the model, only the coarse grid input files are required. In this option, the nest initialization is defined from the coarse grid data - no nest input is used. In addition to the namelist options applied to a nested run, the following need to be added to the &domains namelist section (an example fragment follows this list):

num_moves: the total number of moves one can make in a model run. A move of any domain counts against this total. The maximum is currently set to 50, but it can be changed by changing MAX_MOVES in frame/module_driver_constants.F.

move_id: a list of nest IDs, one per move, indicating which domain is to move for a given move.

move_interval: the number of minutes since the beginning of the run that a move is supposed to occur. The nest will move on the next time step after the specified instant of model time has passed.

move_cd_x, move_cd_y: distance in number of grid points and direction of the nest move (positive numbers indicate moving toward the east and north, while negative numbers indicate moving toward the west and south).
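
For example, to move domain 2 twice, once 60 minutes and once 120 minutes into the run, the additional &domains entries (values taken from the namelist description later in this chapter; illustrative only) would be:

num_moves     = 2,
move_id       = 2, 2,
move_interval = 60, 120,
move_cd_x     = 1, -1,
move_cd_y     = -1, 1,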

To make automatic moving nest runs, two compiler flags are needed in ARCHFLAGS: -DMOVE_NESTS and -DVORTEX_CENTER. (Note that this compile will only support auto-moving nest runs, and will not support the specified moving nest at the same time. One must recompile with only -DMOVE_NESTS to do specified moving nest runs.) Again, no nest input is needed. If one wants to use values other than the defaults, add and edit the following namelist variables in the &domains section (an example fragment follows this list):

vortex_interval: how often the vortex position is calculated in minutes (default is 15 minutes).

max_vortex_speed: used with vortex_interval to compute the radius of search for the new vortex center position.

corral_dist: the distance in number of coarse grid cells that the moving nest is allowed to come near the coarse grid boundary (default is 8).
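
If values other than the defaults are desired, the corresponding &domains entries (defaults shown; illustrative only) look like:

vortex_interval  = 15,
max_vortex_speed = 40,
corral_dist      = 8,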

In both types of moving nest runs, the initial location of the nest is specified through i_parent_start and j_parent_start in the namelist.input file.

Both moving nest options are considered experimental at this time.

 

Physics and Diffusion Options

Physics Options

WRF offers multiple physics options that can be combined in any way. The options typically range from simple and efficient to sophisticated and more computationally costly, and from newly developed schemes to well tried schemes such as those in current operational models.

The choices vary with each major WRF release, but here we will outline those available in WRF Version 2.0 (an example &physics selection follows the lists below).

1. Microphysics (mp_physics)

a. Kessler scheme: A warm-rain (i.e. no ice) scheme used commonly in idealized cloud modeling studies.

b. Lin et al. scheme: A sophisticated scheme that has ice, snow and graupel processes, suitable for real-data high-resolution simulations.

c. WRF Single-Moment 3-class scheme: A simple efficient scheme with ice and snow processes suitable for mesoscale grid sizes.

d. WRF Single-Moment 5-class scheme: A slightly more sophisticated version of (c) that allows for mixed-phase processes and super-cooled water.

e. Eta microphysics: The operational microphysics in NCEP models. A simple efficient scheme with diagnostic mixed-phase processes.

f. WRF Single-Moment 6-class scheme: A scheme with ice, snow and graupel processes suitable for high-resolution simulations.

g. Thompson et al. scheme: A new scheme related to Reisner2 with ice, snow and graupel processes suitable for high-resolution simulations.

h. NCEP 3-class: An older version of (c)

i. NCEP 5-class: An older version of (d)

 

2.1 Longwave Radiation (ra_lw_physics)

a. RRTM scheme: Rapid Radiative Transfer Model. An accurate scheme using look-up tables for efficiency. Accounts for multiple bands, trace gases, and microphysics species.

b. GFDL scheme: Eta operational radiation scheme. An older multi-band scheme with carbon dioxide, ozone and microphysics effects.

2.2 Shortwave Radiation (ra_sw_physics)

a. Dudhia scheme: Simple downward integration allowing efficiently for clouds and clear-sky absorption and scattering.

b. Goddard shortwave: Two-stream multi-band scheme with ozone from climatology and cloud effects.

c. GFDL shortwave: Eta operational scheme. Two-stream multi-band scheme with ozone from climatology and cloud effects.

3.1 Surface Layer (sf_sfclay_physics)

a. MM5 similarity: Based on Monin-Obukhov with Carslon-Boland viscous sub-layer and standard similarity functions from look-up tables.

b. Eta similarity: Used in Eta model. Based on Monin-Obukhov with Zilitinkevich thermal roughness length and standard similarity functions from look-up tables.

3.2 Land Surface (sf_surface_physics)

a. 5-layer thermal diffusion: Soil temperature only scheme, using five layers.

b. Noah Land Surface Model: Unified NCEP/NCAR/AFWA scheme with soil temperature and moisture in four layers, fractional snow cover and frozen soil physics.

c. RUC Land Surface Model: RUC operational scheme with soil temperature and moisture in six layers, multi-layer snow and frozen soil physics.

4. Planetary Boundary layer (bl_pbl_physics)

a. Yonsei University scheme: Non-local-K scheme with explicit entrainment layer and parabolic K profile in unstable mixed layer.

b. Mellor-Yamada-Janjic scheme: Eta operational scheme. One-dimensional prognostic turbulent kinetic energy scheme with local vertical mixing.

c. MRF scheme: Older version of (a) with implicit treatment of entrainment layer as part of non-local-K mixed layer

5. Cumulus Parameterization (cu_physics)

a. Kain-Fritsch scheme: Deep and shallow sub-grid scheme using a mass flux approach with downdrafts and CAPE removal time scale.

b. Betts-Miller-Janjic scheme. Operational Eta scheme. Column moist adjustment scheme relaxing towards a well-mixed profile.

c. Grell-Devenyi ensemble scheme: Multi-closure, multi-parameter, ensemble method with typically 144 sub-grid members.
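
As an illustration only (the option numbers are defined in the namelist description later in this chapter, and this is just one possible combination), a &physics selection using WSM 3-class microphysics, RRTM longwave, Dudhia shortwave, the MM5 surface layer, the Noah land surface model, the YSU PBL and the Kain-Fritsch cumulus scheme would be:

&physics
 mp_physics         = 3,
 ra_lw_physics      = 1,
 ra_sw_physics      = 1,
 radt               = 30,
 sf_sfclay_physics  = 1,
 sf_surface_physics = 2,
 bl_pbl_physics     = 1,
 bldt               = 0,
 cu_physics         = 1,
 cudt               = 0,
 num_soil_layers    = 4,
/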

Diffusion and Damping Options

Diffusion in WRF is categorized under two parameters: the diffusion option and the K option. The diffusion option selects how the derivatives used in diffusion are calculated, and the K option selects how the K coefficients are calculated. Note that when a PBL option is selected, vertical diffusion is done by the PBL scheme, and not by the diffusion scheme. (An example &dynamics selection appears at the end of this subsection.)

1.1 Diffusion Option (diff_opt)

a. Simple diffusion: Gradients are simply taken along coordinate surfaces.

b. Full diffusion: Gradients use full metric terms to more accurately compute horizontal gradients in sloped coordinates.

 

1.2 K Option (km_opt)

Note that when using a PBL scheme, only options (a) and (d) below make sense, because (b) and (c) are designed for 3d diffusion.

a. Constant: K is specified by namelist values for horizontal and vertical diffusion.

b. 3d TKE: A prognostic equation for turbulent kinetic energy is used, and K is based on TKE.

c. 3d Deformation: K is diagnosed from 3d deformation and stability following a Smagorinsky approach.

d. 2d Deformation: K for horizontal diffusion is diagnosed from just horizontal deformation. The vertical diffusion is assumed to be done by the PBL scheme.

 

2. Damping Options

These are independently activated choices.

a. Upper Damping: Either a layer of increased diffusion or a Rayleigh relaxation layer can be added near the model top to control reflection from the upper boundary.

b. w-Damping: For operational robustness, vertical motion can be damped to prevent the model from becoming unstable with locally large vertical velocities. This only affects strong updraft cores, so has very little impact on results otherwise.

c. Divergence Damping: Controls horizontally propagating sound waves.

d. External Mode Damping: Controls upper-surface (external) waves.

e. Time Off-centering (epssm): Controls vertically propagating sound waves.
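
For reference, a typical real-data selection of these diffusion and damping controls in the &dynamics namelist (values match the defaults and recommendations in the namelist description later in this chapter; illustrative only) is:

&dynamics
 diff_opt  = 1,
 km_opt    = 4,
 damp_opt  = 0,
 zdamp     = 5000.,
 dampcoef  = 0.,
 w_damping = 0,
 khdif     = 0,
 kvdif     = 0,
 smdiv     = 0.1,
 emdiv     = 0.01,
 epssm     = 0.1,
/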

 

Description of Namelist Variables

The following is a description of the namelist variables. Variables that may take one value per domain are indicated by (max_dom) following the variable name.

 

Variable Names

Value

Description

&time_control

 

Time control

run_days

1

run time in days

run_hours

0

run time in hours
Note: if it is more than 1 day, one may use both run_days and run_hours or just run_hours. e.g. if the total run length is 36 hrs, you may set run_days = 1, and run_hours = 12, or run_days = 0, and run_hours 36

run_minutes

0

run time in minutes

run_seconds

0

run time in seconds

start_year (max_dom)

2001

four digit year of starting time

start_month (max_dom)

06

two digit month of starting time

start_day (max_dom)

11

two digit day of starting time

start_hour (max_dom)

12

two digit hour of starting time

start_minute (max_dom)

00

two digit minute of starting time

start_second (max_dom)

00

two digit second of starting time
Note: the start time is used to name the first wrfout file. It also controls the start time for nest domains, and the time to restart

end_year (max_dom)

2001

four digit year of ending time

end_month (max_dom)

06

two digit month of ending time

end_day (max_dom)

12

two digit day of ending time

end_hour (max_dom)

12

two digit hour of ending time

end_minute (max_dom)

00

two digit minute of ending time

end_second (max_dom)

00

two digit second of ending time
Note: all end times also control when the nest domain integrations end. All start and end times are used by real.exe. One may use either run_days/run_hours etc. or end_year/month/day/hour etc. to control the length of model integration, but run_days/run_hours takes precedence over the end times. Program real.exe uses start and end times only.

interval_seconds

10800

time interval between incoming real data, which will be the interval between the lateral boundary condition files (for real-data cases only)

input_from_file (max_dom)

T (logical)

logical; whether nested run will have input files for domains other than 1

fine_input_stream (max_dom)

 

selected fields from nest input

 

0

all fields from nest input are used

 

2

only nest input specified from input stream 2 (defined in the Registry) are used

history_interval (max_dom)

60

history output file interval in minutes (integer only)

history_interval_mo (max_dom)

1

history output file interval in months (integer); used as alternative to history_interval

history_interval_d (max_dom)

1

history output file interval in days (integer); used as alternative to history_interval

history_interval_h (max_dom)

1

history output file interval in hours (integer); used as alternative to history_interval

history_interval_m (max_dom)

1

history output file interval in minutes (integer); used as alternative to history_interval and is equivalent to history_interval

history_interval_s (max_dom)

1

history output file interval in seconds (integer); used as alternative to history_interval

frames_per_outfile (max_dom)

1

output times per history output file, used to split output files into smaller pieces

restart

F (logical)

whether this run is a restart run

restart_interval

1440

restart output file interval in minutes

io_form_history

2

2 = netCDF; 102 = split netCDF files one per processor (no supported post-processing software for split files)

io_form_restart

2

2 = netCDF; 102 = split netCDF files one per processor (must restart with the same number of processors)

io_form_input

2

2 = netCDF

io_form_boundary

2

netCDF format

 

4

PHDF5 format (no supported post-processing software)

 

5

GRIB1 format (no supported post-processing software)

 

1

binary format (no supported post-processing software)

debug_level

0

50,100,200,300 values give increasing prints

auxhist2_outname

"rainfall"

file name for extra output; if not specified, auxhist2_d&lt;domain&gt;_&lt;date&gt; will be used. Also note that writing variables to an output stream other than the history file requires a change to the Registry.EM file

auxhist2_interval

10

interval in minutes

io_form_auxhist2

2

output in netCDF

write_input

t

write input-formatted data as output for 3DVAR application

inputout_interval

180

interval in minutes when writing input-formatted data

input_outname

wrf_3dvar_input

Output file name from 3DVAR

inputout_begin_y

0

beginning year to write 3DVAR data

inputout_begin_mo

0

beginning month to write 3DVAR data

inputout_begin_d

0

beginning day to write 3DVAR data

inputout_begin_h

3

beginning hour to write 3DVAR data

inputout_begin_s

0

beginning second to write 3DVAR data

inputout_end_y

0

ending year to write 3DVAR data

inputout_end_mo

0

ending month to write 3DVAR data

inputout_end_d

0

ending day to write 3DVAR data

inputout_end_h

12

ending hour to write 3DVAR data

inputout_end_s

0

ending second to write 3DVAR data.

 

 

The above example shows that the input-formatted data are output from hour 3 to hour 12 at a 180-minute interval.

 

 

 

&domains

 

domain definition: dimensions, nesting parameters

time_step

60

time step for integration in integer seconds (recommended 6*dx, with dx in km, for a typical real-data case)

time_step_fract_num

0

numerator for fractional time step

time_step_fract_den

1

denominator for fractional time step. Example: if you want to use 60.3 sec as your time step, set time_step = 60, time_step_fract_num = 3, and time_step_fract_den = 10

max_dom

1

number of domains - set it to > 1 if it is a nested run

s_we (max_dom)

1

start index in x (west-east) direction (leave as is)

e_we (max_dom)

91

end index in x (west-east) direction (staggered dimension)

s_sn (max_dom)

1

start index in y (south-north) direction (leave as is)

e_sn (max_dom)

82

end index in y (south-north) direction (staggered dimension)

s_vert (max_dom)

1

start index in z (vertical) direction (leave as is)

e_vert (max_dom)

28

end index in z (vertical) direction (staggered dimension - this refers to full levels). Most variables are on unstaggered levels. Vertical dimensions need to be the same for all nests.

dx (max_dom)

10000

grid length in x direction, unit in meters

dy (max_dom)

10000

grid length in y direction, unit in meters

ztop (max_dom)

19000.

used in mass model for idealized cases

grid_id (max_dom)

1

domain identifier

parent_id (max_dom)

0

id of the parent domain

i_parent_start (max_dom)

0

starting LLC I-indices from the parent domain

j_parent_start (max_dom)

0

starting LLC J-indices from the parent domain

parent_grid_ratio (max_dom)

1

parent-to-nest domain grid size ratio: for real-data cases the ratio has to be odd; for idealized cases, the ratio can be even if feedback is set to 0.

parent_time_step_ratio (max_dom)

1

parent-to-nest time step ratio; it can be different from the parent_grid_ratio

feedback

1

feedback from nest to its parent domain; 0 = no feedback

smooth_option

0

smoothing option for parent domain, used only with feedback option on. 0: no smoothing; 1: 1-2-1 smoothing; 2: smoothing-desmoothing

 

 

Namelist variables for controlling the prototype moving nest:
Note that the moving nest needs to be activated at compile time by adding -DMOVE_NESTS to ARCHFLAGS. The maximum number of moves, max_moves, is set to 50, but can be modified in the source code file frame/module_driver_constants.F

num_moves

2,

total number of moves

move_id

2,2,

a list of nest domain id's, one per move

move_interval

60,120,

time in minutes since the start of this domain

move_cd_x

1,-1,

the number of parent domain grid cells to move in i direction

move_cd_y

-1,1,

the number of parent domain grid cells to move in j direction (positive in increasing i/j directions, and negative in decreasing i/j directions). The limitation now is that a nest may move only 1 grid cell at each move.

vortex_interval

15

how often the new vortex position is computed

max_vortex_speed

40

used to compute the search radius for the new vortex position

corral_dist

8

how close, in coarse grid cells, the moving nest is allowed to get to the coarse grid boundary

 

 

 

&physics

 

Physics options

chem_opt

0

chemistry option - not yet available

mp_physics (max_dom)

 

microphysics option

 

0

no microphysics

 

1

Kessler scheme

 

2

Lin et al. scheme

 

3

WSM 3-class simple ice scheme

 

4

WSM 5-class scheme

 

5

Ferrier (new Eta) microphysics

 

6

WSM 6-class graupel scheme

 

8

Thompson et al. graupel scheme

 

98

NCEP 3-class simple ice scheme (to be removed)

 

99

NCEP 5-class scheme (to be removed)

mp_zero_out

 

For non-zero mp_physics options: keeps Qv >= 0, and sets other moisture fields that fall below a threshold value to zero

 

0

no action taken, no adjustment to any moist field

 

1

except for Qv, all other moist arrays are set to zero if they fall below a critical value

 

2

Qv is >= 0, all other moist arrays are set to zero if they fall below a critical value

mp_zero_out_thresh

1.e-8

critical value for moisture variable threshold, below which moist arrays (except for Qv) are set to zero (unit: kg/kg)

ra_lw_physics (max_dom)

 

longwave radiation option

 

0

no longwave radiation

 

1

rrtm scheme

 

99

GFDL (Eta) longwave (semi-supported)

ra_sw_physics (max_dom)

 

shortwave radiation option

 

0

no shortwave radiation

 

1

Dudhia scheme

 

2

Goddard short wave

 

99

GFDL (Eta) shortwave (semi-supported)

 

 

 

radt (max_dom)

30

minutes between radiation physics calls. Recommend 1 minute per km of dx (e.g. 10 for 10 km grid)

sf_sfclay_physics (max_dom)

 

surface-layer option (old bl_sfclay_physics option)

 

0

no surface-layer

 

1

Monin-Obukhov scheme

 

2

Monin-Obukhov (Janjic Eta) scheme

sf_surface_physics (max_dom)

 

land-surface option (old bl_surface_physics option)

 

0

no surface temp prediction

 

1

thermal diffusion scheme

 

2

Noah land-surface model

 

3

RUC land-surface model

bl_pbl_physics (max_dom)

 

boundary-layer option

 

0

no boundary-layer

 

1

YSU scheme

 

2

Mellor-Yamada-Janjic (Eta) TKE scheme

 

99

MRF scheme (to be removed)

bldt (max_dom)

0

minutes between boundary-layer physics calls

cu_physics (max_dom)

 

cumulus option

 

0

no cumulus

 

1

Kain-Fritsch (new Eta) scheme

 

2

Betts-Miller-Janjic scheme

 

3

Grell-Devenyi ensemble scheme

 

99

previous Kain-Fritsch scheme

cudt

0

minutes between cumulus physics calls

isfflx

1

heat and moisture fluxes from the surface (only works for sf_sfclay_physics = 1); 1 = with fluxes from the surface, 0 = no flux from the surface

ifsnow

0

snow-cover effects (only works for sf_surface_physics = 1); 1 = with snow-cover effect, 0 = without snow-cover effect

icloud

1

cloud effect on the optical depth in radiation (only works for ra_sw_physics = 1 and ra_lw_physics = 1); 1 = with cloud effect, 0 = without cloud effect

surface_input_source

1,2

where landuse and soil category data come from: 1 = SI/gridgen; 2 = GRIB data from another model (only possible when VEGCAT/SOILCAT are in the wrf_real_input_em files from SI)

num_soil_layers

 

number of soil layers in land surface model

 

5

thermal diffusion scheme

 

4

Noah landsurface model

 

6

RUC landsurface model

maxiens

1

Grell-Devenyi only

maxens

3

G-D only

maxens2

3

G-D only

maxens3

16

G-D only

ensdim

144

G-D only. These are the recommended numbers; if you would like to use any other number, consult the code and know what you are doing.

seaice_threshold

271.

if tsk < seaice_threshold at a water point: with the 5-layer slab scheme, the point is set to a land point with permanent ice; with the Noah scheme, the point is set to a land point with permanent ice, temperatures from 3 m to the surface are set, and smois and sh2o are set

sst_update

 

option to use time-varying SST during a model simulation

 

0

no SST update

 

1

real.exe will create a wrflowinp_d01 file at the same time interval as the available input data. To use it in wrf.exe, add auxinput_inname = "wrflowinp_d01", auxinput5_interval, and auxinput_end_h in the &time_control namelist section

 

 

 

&dynamics

 

Diffusion, damping options, advection options

dyn_opt

2

dynamical core option: advanced research WRF core (Eulerian mass)

rk_ord

 

time-integration scheme option:

 

2

Runge-Kutta 2nd order

 

3

Runge-Kutta 3rd order (recommended)

diff_opt

 

turbulence and mixing option:

 

0

no turbulence or explicit spatial numerical filters (km_opt is ignored)

 

1

Evaluates 2nd order diffusion term on coordinate surfaces. Uses kvdif for vertical diffusion unless a PBL option is used. May be used with km_opt = 1 and 4. (diff_opt = 1 is recommended for real-data cases when the grid distance is < 10 km)

 

2

Evaluates mixing terms in physical space (stress form) (x, y, z). The turbulence parameterization is chosen by specifying km_opt.

km_opt

 

eddy coefficient option

 

1

constant (use khdif kvdif)

 

2

1.5 order TKE closure (3D)

 

3

Smagorinsky first order closure (3D). Note: options 2 and 3 are not recommended for DX > 2 km

 

4

horizontal Smagorinsky first order closure (recommended for real-data case when grid distance < 10 km)

damp_opt

 

upper level damping flag (do not use for real-data cases until further notice)

 

0

without damping

 

1

with diffusive damping (dampcoef nondimensional ~ 0.01 - 0.1)

 

2

with Rayleigh damping (dampcoef inverse time scale [1/s], e.g. 0.003)

w_damping

 

vertical velocity damping flag (for operational use)

 

0

without damping

 

1

with damping

zdamp (max_dom)

5000

damping depth (m) from model top

dampcoef (max_dom)

0.

damping coefficient (dampcoef <= 0.2, for 3D cases, set it <=0.1)

base_temp

290.

real-data, em ONLY, base sea-level temp (K)

base_pres

100000.

real-data, em ONLY, base sea-level pressure (Pa), DO NOT CHANGE

base_lapse

50.

real-data, em ONLY, lapse rate (K), DO NOT CHANGE

khdif (max_dom)

0

horizontal diffusion constant (m^2/s)

kvdif (max_dom)

0

vertical diffusion constant (m^2/s)

smdiv (max_dom)

0.1

divergence damping (0.1 is typical)

emdiv (max_dom)

0.01

external-mode filter coef for mass coordinate model (0.01 is typical for real-data cases)

epssm (max_dom)

.1

time off-centering for vertical sound waves

non_hydrostatic (max_dom)

.true.

whether running the model in hydrostatic or non-hydro mode

pert_coriolis (max_dom)

.false.

Coriolis only acts on wind perturbation (idealized)

h_mom_adv_order (max_dom)

5

horizontal momentum advection order (5=5th, etc.)

v_mom_adv_order (max_dom)

3

vertical momentum advection order

h_sca_adv_order (max_dom)

5

horizontal scalar advection order

v_sca_adv_order (max_dom)

3

vertical scalar advection order

time_step_sound (max_dom)

4

number of sound steps per time-step (if using a time_step much larger than 6*dx (in km), increase number of sound steps)

 

 

 

&bc_control

 

boundary condition control

spec_bdy_width

5

total number of rows for specified boundary value nudging

spec_zone

1

number of points in specified zone (spec b.c. option)

relax_zone

4

number of points in relaxation zone (spec b.c. option)

specified (max_dom)

.false.

specified boundary conditions (only applies to domain 1)

 

 

The above 4 namelist variables are used for real-data runs only

periodic_x (max_dom)

.false.

periodic boundary conditions in x direction

symmetric_xs (max_dom)

.false.

symmetric boundary conditions at x start (west)

symmetric_xe (max_dom)

.false.

symmetric boundary conditions at x end (east)

open_xs (max_dom)

.false.

open boundary conditions at x start (west)

open_xe (max_dom)

.false.

open boundary conditions at x end (east)

periodic_y (max_dom)

.false.

periodic boundary conditions in y direction

symmetric_ys (max_dom)

.false.

symmetric boundary conditions at y start (south)

symmetric_ye (max_dom)

.false.

symmetric boundary conditions at y end (north)

open_ys (max_dom)

.false.

open boundary conditions at y start (south)

open_ye (max_dom)

.false.

open boundary conditions at y end (north)

nested (max_dom)

.false.

nested boundary conditions (inactive)

 

 

 

&namelist_quilt

 

Options for asynchronous I/O for MPI applications.

nio_tasks_per_group

0

default value is 0: no quilting; > 0 quilting I/O

nio_groups

1

default 1, don't change

 

 

 

Miscellaneous in &domains:

 

 

tile_sz_x

0

number of points in tile x direction

tile_sz_y

0

number of points in tile y direction; can be determined automatically

numtiles

1

number of tiles per patch (alternative to above two items)

nproc_x

-1

number of processors in x for decomposition

nproc_y

-1

number of processors in y for decomposition. -1: the code will do automatic decomposition; > 1 for both nproc_x and nproc_y: the specified values will be used for the decomposition

 


List of Fields in WRF Output

List of Fields

The following is an edited output from netCDF command 'ncdump':

ncdump -h wrfout_d01_yyyy-mm-dd_hh:mm:ss

 
  char Times(Time, DateStrLen) ;
  float LU_INDEX(Time, south_north, west_east) ;
           LU_INDEX:description = "LAND USE CATEGORY" ;
           LU_INDEX:units = "" ;
  float U(Time, bottom_top, south_north, west_east_stag) ;
           U:description = "x-wind component" ;
           U:units = "m s-1" ;
  float V(Time, bottom_top, south_north_stag, west_east) ;
           V:description = "y-wind component" ;
           V:units = "m s-1" ;
  float W(Time, bottom_top_stag, south_north, west_east) ;
           W:description = "z-wind component" ;
           W:units = "m s-1" ;
  float PH(Time, bottom_top_stag, south_north, west_east) ;
           PH:description = "perturbation geopotential" ;
           PH:units = "m2 s-2" ;
  float PHB(Time, bottom_top_stag, south_north, west_east) ;
           PHB:description = "base-state geopotential" ;
           PHB:units = "m2 s-2" ;
  float T(Time, bottom_top, south_north, west_east) ;
           T:description = "perturbation potential temperature (theta-t0)" ;
           T:units = "K" ;
  float MU(Time, south_north, west_east) ;
           MU:description = "perturbation dry air mass in column" ;
           MU:units = "Pa" ;
  float MUB(Time, south_north, west_east) ;
           MUB:description = "base state dry air mass in column" ;
           MUB:units = "Pa" ;
  float P(Time, bottom_top, south_north, west_east) ;
           P:description = "perturbation pressure" ;
           P:units = "Pa" ;
  float PB(Time, bottom_top, south_north, west_east) ;
           PB:description = "BASE STATE PRESSURE" ;
           PB:units = "Pa" ;
  float FNM(Time, bottom_top) ;
           FNM:description = "upper weight for vertical stretching" ;
           FNM:units = "" ;
  float FNP(Time, bottom_top) ;
           FNP:description = "lower weight for vertical stretching" ;
           FNP:units = "" ;
  float RDNW(Time, bottom_top) ;
           RDNW:description = "inverse dn values on full (w) levels" ;
           RDNW:units = "" ;
  float RDN(Time, bottom_top) ;
           RDN:description = "dn values on half (mass) levels" ;
           RDN:units = "" ;
  float DNW(Time, bottom_top) ;
           DNW:description = "dn values on full (w) levels" ;
           DNW:units = "" ;
  float DN(Time, bottom_top) ;
           DN:description = "dn values on half (mass) levels" ;
           DN:units = "" ;
  float ZNU(Time, bottom_top) ;
           ZNU:description = "eta values on half (mass) levels" ;
           ZNU:units = "" ;
  float ZNW(Time, bottom_top_stag) ;
           ZNW:description = "eta values on full (w) levels" ;
           ZNW:units = "" ;
  float CFN(Time, ext_scalar) ;
           CFN:description = "" ;
           CFN:units = "" ;
  float CFN1(Time, ext_scalar) ;
           CFN1:description = "" ;
           CFN1:units = "" ;
  float EPSTS(Time, ext_scalar) ;
           EPSTS:description = "" ;
           EPSTS:units = "" ;
  float Q2(Time, south_north, west_east) ;
           Q2:description = "QV at 2 M" ;
           Q2:units = "kg kg-1" ;
  float T2(Time, south_north, west_east) ;
           T2:description = "TEMP at 2 M" ;
           T2:units = "K" ;
  float TH2(Time, south_north, west_east) ;
           TH2:description = "POT TEMP at 2 M" ;
           TH2:units = "K" ;
  float PSFC(Time, south_north, west_east) ;
           PSFC:description = "SFC PRESSURE" ;
           PSFC:units = "Pa" ;
  float U10(Time, south_north, west_east) ;
           U10:description = "U at 10 M" ;
           U10:units = "m s-1" ;
  float V10(Time, south_north, west_east) ;
           V10:description = "V at 10 M" ;
           V10:units = "m s-1" ;
  float RDX(Time, ext_scalar) ;
           RDX:description = "INVERSE X GRID LENGTH" ;
           RDX:units = "" ;
  float RDY(Time, ext_scalar) ;
           RDY:description = "INVERSE Y GRID LENGTH" ;
           RDY:units = "" ;
  float RESM(Time, ext_scalar) ;
           RESM:description = "TIME WEIGHT CONSTANT FOR SMALL STEPS" ;
           RESM:units = "" ;
  float ZETATOP(Time, ext_scalar) ;
           ZETATOP:description = "ZETA AT MODEL TOP" ;
           ZETATOP:units = "" ;
  float CF1(Time, ext_scalar) ;
           CF1:description = "2nd order extrapolation constant" ;
           CF1:units = "" ;
  float CF2(Time, ext_scalar) ;
           CF2:description = "2nd order extrapolation constant" ;
           CF2:units = "" ;
  float CF3(Time, ext_scalar) ;
           CF3:description = "2nd order extrapolation constant" ;
           CF3:units = "" ;
  int ITIMESTEP(Time, ext_scalar) ;
           ITIMESTEP:description = "" ;
           ITIMESTEP:units = "" ;
  float QVAPOR(Time, bottom_top, south_north, west_east) ;
           QVAPOR:description = "Water vapor mixing ratio" ;
           QVAPOR:units = "kg kg-1" ;
  float QCLOUD(Time, bottom_top, south_north, west_east) ;
           QCLOUD:description = "Cloud water mixing ratio" ;
           QCLOUD:units = "kg kg-1" ;
  float QRAIN(Time, bottom_top, south_north, west_east) ;
           QRAIN:description = "Rain water mixing ratio" ;
           QRAIN:units = "kg kg-1" ;
  float LANDMASK(Time, south_north, west_east) ;
           LANDMASK:description = "LAND MASK (1 FOR LAND, 0 FOR WATER)" ;
           LANDMASK:units = "" ;
  float TSLB(Time, soil_layers_stag, south_north, west_east) ;
           TSLB:description = "SOIL TEMPERATURE" ;
           TSLB:units = "K" ;
  float ZS(Time, soil_layers_stag) ;
           ZS:description = "DEPTHS OF CENTERS OF SOIL LAYERS" ;
           ZS:units = "m" ;
  float DZS(Time, soil_layers_stag) ;
           DZS:description = "THICKNESSES OF SOIL LAYERS" ;
           DZS:units = "m" ;
  float SMOIS(Time, soil_layers_stag, south_north, west_east) ;
           SMOIS:description = "SOIL MOISTURE" ;
           SMOIS:units = "m3 m-3" ;
  float SH2O(Time, soil_layers_stag, south_north, west_east) ;
           SH2O:description = "SOIL LIQUID WATER" ;
           SH2O:units = "m3 m-3" ;
  float XICE(Time, south_north, west_east) ;
           XICE:description = "SEA ICE FLAG" ;
           XICE:units = "" ;
  float SFROFF(Time, south_north, west_east) ;
           SFROFF:description = "SURFACE RUNOFF" ;
           SFROFF:units = "mm" ;
  float UDROFF(Time, south_north, west_east) ;
           UDROFF:description = "UNDERGROUND RUNOFF" ;
           UDROFF:units = "mm" ;
  int IVGTYP(Time, south_north, west_east) ;
           IVGTYP:description = "DOMINANT VEGETATION CATEGORY" ;
           IVGTYP:units = "" ;
  int ISLTYP(Time, south_north, west_east) ;
           ISLTYP:description = "DOMINANT SOIL CATEGORY" ;
           ISLTYP:units = "" ;
  float VEGFRA(Time, south_north, west_east) ;
           VEGFRA:description = "VEGETATION FRACTION" ;
           VEGFRA:units = "" ;
  float GRDFLX(Time, south_north, west_east) ;
           GRDFLX:description = "GROUND HEAT FLUX" ;
           GRDFLX:units = "W m-2" ;
  float SNOW(Time, south_north, west_east) ;
           SNOW:description = "SNOW WATER EQUIVALENT" ;
           SNOW:units = "kg m-2" ;
  float SNOWH(Time, south_north, west_east) ;
           SNOWH:description = "PHYSICAL SNOW DEPTH" ;
           SNOWH:units = "m" ;
  float CANWAT(Time, south_north, west_east) ;
           CANWAT:description = "CANOPY WATER" ;
           CANWAT:units = "kg m-2" ;
  float SST(Time, south_north, west_east) ;
           SST:description = "SEA SURFACE TEMPERATURE" ;
           SST:units = "K" ;
  float MAPFAC_M(Time, south_north, west_east) ;
           MAPFAC_M:description = "Map scale factor on mass grid" ;
           MAPFAC_M:units = "" ;
  float MAPFAC_U(Time, south_north, west_east_stag) ;
           MAPFAC_U:description = "Map scale factor on u-grid" ;
           MAPFAC_U:units = "" ;
  float MAPFAC_V(Time, south_north_stag, west_east) ;
           MAPFAC_V:description = "Map scale factor on v-grid" ;
           MAPFAC_V:units = "" ;
  float F(Time, south_north, west_east) ;
           F:description = "Coriolis sine latitude term" ;
           F:units = "s-1" ;
  float E(Time, south_north, west_east) ;
           E:description = "Coriolis cosine latitude term" ;
           E:units = "s-1" ;
  float SINALPHA(Time, south_north, west_east) ;
           SINALPHA:description = "Local sine of map rotation" ;
           SINALPHA:units = "" ;
  float COSALPHA(Time, south_north, west_east) ;
           COSALPHA:description = "Local cosine of map rotation" ;
           COSALPHA:units = "" ;
  float HGT(Time, south_north, west_east) ;
           HGT:description = "Terrain Height" ;
           HGT:units = "m" ;
  float TSK(Time, south_north, west_east) ;
           TSK:description = "SURFACE SKIN TEMPERATURE" ;
           TSK:units = "K" ;
  float P_TOP(Time, ext_scalar) ;
           P_TOP:description = "PRESSURE TOP OF THE MODEL" ;
           P_TOP:units = "Pa" ;
  float RAINC(Time, south_north, west_east) ;
           RAINC:description = "ACCUMULATED TOTAL CUMULUS PRECIPITATION" ;
           RAINC:units = "mm" ;
  float RAINNC(Time, south_north, west_east) ;
           RAINNC:description = "Accumulated Total Grid Scale Precipitation" ;
           RAINNC:units = "mm" ;
  float SWDOWN(Time, south_north, west_east) ;
           SWDOWN:description = "Downward Short Wave Flux At Ground Surface" ;
           SWDOWN:units = "W m-2" ;
  float GLW(Time, south_north, west_east) ;
           GLW:description = "DOWNWARD LONG WAVE FLUX AT GROUND SURFACE" ;
           GLW:units = "W m-2" ;
  float XLAT(Time, south_north, west_east) ;
           XLAT:description = "LATITUDE, SOUTH IS NEGATIVE" ;
           XLAT:units = "degree_north" ;
  float XLONG(Time, south_north, west_east) ;
           XLONG:description = "LONGITUDE, WEST IS NEGATIVE" ;
           XLONG:units = "degree_east" ;
  float TMN(Time, south_north, west_east) ;
           TMN:description = "SOIL TEMPERATURE AT LOWER BOUNDARY" ;
           TMN:units = "K" ;
  float XLAND(Time, south_north, west_east) ;
           XLAND:description = "LAND MASK (1 FOR LAND, 2 FOR WATER)" ;
           XLAND:units = "" ;
  float PBLH(Time, south_north, west_east) ;
           PBLH:description = "PBL HEIGHT" ;
           PBLH:units = "m" ;
  float HFX(Time, south_north, west_east) ;
           HFX:description = "UPWARD HEAT FLUX AT THE SURFACE" ;
           HFX:units = "W m-2" ;
  float QFX(Time, south_north, west_east) ;
           QFX:description = "UPWARD MOISTURE FLUX AT THE SURFACE" ;
           QFX:units = "kg m-2 s-1" ;
  float LH(Time, south_north, west_east) ;
           LH:description = "LATENT HEAT FLUX AT THE SURFACE" ;
           LH:units = "W m-2" ;
  float SNOWC(Time, south_north, west_east) ;
           SNOWC:description = "FLAG INDICATING SNOW COVERAGE (1 FOR SNOW COVER)" ;
           SNOWC:units = "" ;
  

Special WRF Output Variables

The WRF model outputs the state variables defined in the Registry file, and these state variables are used in the model's prognostic equations. Some of these variables are perturbation fields; therefore some definitions for reconstructing meteorological variables are necessary. In particular:

total geopotential

   PH + PHB

total geopotential height in m

   ( PH + PHB ) / 9.81

total potential temperature in  K

   T + 300

total pressure in mb

   ( P + PB ) * 0.01

 

 


List of Global Attributes

 
                :TITLE = " OUTPUT FROM WRF V2.0.3.1 MODEL" ;
                :START_DATE = "2000-01-24_12:00:00" ;
                :SIMULATION_START_DATE = "2000-01-24_12:00:00" ;
                :WEST-EAST_GRID_DIMENSION = 74 ;
                :SOUTH-NORTH_GRID_DIMENSION = 61 ;
                :BOTTOM-TOP_GRID_DIMENSION = 28 ;
                :GRIDTYPE = "C" ;
                :DYN_OPT = 2 ;
                :DIFF_OPT = 0 ;
                :KM_OPT = 1 ;
                :DAMP_OPT = 0 ;
                :KHDIF = 0.f ;
                :KVDIF = 0.f ;
                :MP_PHYSICS = 3 ;
                :RA_LW_PHYSICS = 1 ;
                :RA_SW_PHYSICS = 1 ;
                :SF_SFCLAY_PHYSICS = 1 ;
                :SF_SURFACE_PHYSICS = 1 ;
                :BL_PBL_PHYSICS = 1 ;
                :CU_PHYSICS = 1 ;
                :WEST-EAST_PATCH_START_UNSTAG = 1 ;
                :WEST-EAST_PATCH_END_UNSTAG = 73 ;
                :WEST-EAST_PATCH_START_STAG = 1 ;
                :WEST-EAST_PATCH_END_STAG = 74 ;
                :SOUTH-NORTH_PATCH_START_UNSTAG = 1 ;
                :SOUTH-NORTH_PATCH_END_UNSTAG = 60 ;
                :SOUTH-NORTH_PATCH_START_STAG = 1 ;
                :SOUTH-NORTH_PATCH_END_STAG = 61 ;
                :BOTTOM-TOP_PATCH_START_UNSTAG = 1 ;
                :BOTTOM-TOP_PATCH_END_UNSTAG = 27 ;
                :BOTTOM-TOP_PATCH_START_STAG = 1 ;
                :BOTTOM-TOP_PATCH_END_STAG = 28 ;
                :GRID_ID = 1 ;
                :PARENT_ID = 0 ;
                :I_PARENT_START = 0 ;
                :J_PARENT_START = 0 ;
                :PARENT_GRID_RATIO = 1 ;
                :DX = 30000.f ;
                :DY = 30000.f ;
                :DT = 180.f ;
                :CEN_LAT = 34.72602f ;
                :CEN_LON = -81.22598f ;
                :TRUELAT1 = 30.f ;
                :TRUELAT2 = 60.f ;
                :MOAD_CEN_LAT = 34.72602f ;
                :STAND_LON = -98.f ;
                :GMT = 12.f ;
                :JULYR = 2000 ;
                :JULDAY = 24 ;
                :MAP_PROJ = 1 ;
                :MMINLU = "USGS" ;
                :ISWATER = 16 ;
                :ISICE = 24 ;
                :ISURBAN = 1 ;
                :ISOILWATER = 14 ;  
