Chapter 2: Software Installation
Introduction
The WRF modeling system software
installation is fairly straightforward on the ported platforms listed below.
The model-component portion of the package is mostly self-contained. The WRF model does contain the source
code for a Fortran interface to ESMF and the source for FFTPACK. Contained
within the WRF system is the WRF-Var component, which has several external
libraries that the user must install (for various observation types and linear
algebra solvers). Similarly, the WPS
package, separate from the WRF source code, has additional external libraries
that must be built (in support of Grib2 processing). The one external package that all of the systems require is
the netCDF library, which is one of the supported I/O API packages. The netCDF
libraries or source code are available from the Unidata homepage at
http://www.unidata.ucar.edu (select DOWNLOADS, registration required).
There are three tar files for the WRF code. The first is the WRF model (including the real and ideal
pre-processors). The second is the
WRF-Var code. The third tar file is for WRF chemistry. In order to run the WRF chemistry code,
the WRF model and chemistry tar files must be combined.
The WRF model has been successfully ported to a number of Unix-based
machines. We do not have access to all of them and must rely on outside users
and vendors to supply the required configuration information for the compiler
and loader options. Below is a list of the supported combinations of hardware
and software for WRF.
Vendor | Hardware       | OS     | Compiler
Cray   | X1             | UniCOS | vendor
Cray   | AMD            | Linux  | PGI / PathScale
IBM    | Power Series   | AIX    | vendor
SGI    | IA64 / Opteron | Linux  | Intel
COTS*  | IA32           | Linux  | Intel / PGI / gfortran / g95 / PathScale
COTS   | IA64 / Opteron | Linux  | Intel / PGI / gfortran / PathScale
Mac    | Power Series   | Darwin | xlf / g95 / PGI / Intel
Mac    | Intel          | Darwin | g95 / PGI / Intel
* Commercial Off The Shelf systems
The WRF model may be built to run on a single-processor machine, a shared-memory machine (using the OpenMP API), a distributed-memory machine (with the appropriate MPI libraries), or on a distributed cluster (utilizing both OpenMP and MPI). The WRF-Var and WPS packages run on the systems listed above.
Required Compilers and Scripting Languages
The majority of the WRF model, WPS, and WRF-Var codes are written in Fortran (what many refer to as Fortran 90). The software layer, RSL_LITE, which sits between the WRF and WRF-Var codes and the MPI interface, is written in C. WPS makes direct calls to the MPI libraries for distributed-memory message passing. There are also ancillary programs written in C that perform file parsing and file construction, which are required for the default build of the WRF modeling code. Additionally, the WRF build mechanism uses several scripting languages, including perl, C shell, and Bourne shell. The traditional UNIX text/file processing utilities are also used: make, m4, sed, and awk. See Chapter 8: WRF Software (Required Software) for a more detailed listing of the necessary pieces for the WRF build.
Required/Optional Libraries to Download
The only library that is almost always required is the netCDF package from Unidata (login > Downloads > NetCDF). Most of the WRF post-processing packages assume that the data from the WRF model, the WPS package, or the WRF-Var program use the netCDF libraries. You may also need to add /path-to-netcdf/netcdf/bin to your path so that you can execute netCDF utility commands, such as ncdump.
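For example, in C shell syntax (a sketch only, assuming netCDF is installed under /usr/local/netcdf; adjust the path to match your installation):
setenv NETCDF /usr/local/netcdf
set path = ( $NETCDF/bin $path )
which ncdump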
Note 1: If you want to compile WRF system components on a Linux system that has access to multiple compilers, be sure to link the correct external libraries. For example, do not link libraries built with PathScale when compiling the WRF components with gfortran.
Note 2: If netCDF-4 is used, be sure that it is installed without activating the new capabilities (such as parallel I/O based on HDF5). The WRF modeling system currently uses only the classic data model supported in netCDF-4.
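For example, if building netCDF-4 from source with its standard configure script, the new capabilities can be left out with something like the following (a sketch only; the exact options depend on the netCDF version you download):
./configure --prefix=/usr/local/netcdf --disable-netcdf-4
make
make install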
If you are going to be running distributed-memory WRF jobs, you need a version of MPI. You can pick up a version of mpich, but you might want your system group to install the code. A working installation of MPI is required prior to a build of WRF using distributed memory. Either MPI-1 or MPI-2 is acceptable. Do you already have an MPI lying around? Try
which mpif90
which mpicc
which mpirun
If these are all defined executables in your path, you are probably OK. Make sure your paths are set up to point to the MPI lib, include, and bin directories.
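For example, in C shell syntax (a sketch only, assuming mpich is installed under /usr/local/mpich; adjust the path to match your installation):
set path = ( /usr/local/mpich/bin $path )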
Note that to output WRF model data in Grib1
format, Todd Hutchinson (WSI) has provided a complete source library that is
included with the software release.
However, when trying to link the WPS, the WRF model, and the WRF-Var
data streams together, always use the netCDF format.
Post-Processing Utilities
The more widely used (and therefore supported)
WRF post-processing utilities are:
- NCL (homepage and WRF download)
  - NCAR Command Language written by the NCAR Scientific Computing Division
  - NCL scripts written and maintained by WRF support
  - many template scripts are provided that are tailored for specific real-data and ideal-data cases
  - raw WRF output can be input with the NCL scripts
  - interactive or command-file driven
- Vis5D (homepage and WRF download)
  - download Vis5D executable, build format converter
  - programs are available to convert the WRF output into an input format suitable for Vis5D
  - GUI interface, 3D movie loops, transparency
- GrADS (homepage and WRF download)
  - download GrADS executable, build format converter
  - programs are available to convert the WRF output into an input format suitable for GrADS
  - interpolates to regular lat/lon grid
  - simple to generate publication-quality graphics
- RIP (homepage and WRF download)
  - RIP4 written and maintained by Mark Stoelinga, UW
  - interpolation to various surfaces, trajectories, hundreds of diagnostic calculations
  - Fortran source provided
  - based on the NCAR Graphics package
  - pre-processor converts WRF, WPS, and WRF-Var data to RIP input format
  - table driven
UNIX Environment Settings
There are only a few environment settings that are related to the WRF system. Most of these are not required, but when things start acting badly, test some out. In C shell syntax (a Bourne-shell equivalent is sketched after this list):
- setenv WRF_EM_CORE 1
  - explicitly defines which model core to build
- setenv WRF_NMM_CORE 0
  - explicitly defines which model core NOT to build
- setenv WRF_DA_CORE 0
  - explicitly defines no data assimilation
- setenv NETCDF /usr/local/netcdf (or wherever you have it stuck)
  - all of the WRF components want both the lib and the include directories
- setenv OMP_NUM_THREADS n (where n is the number of procs to use)
  - if you have OpenMP on your system, this is how to specify the number of threads
- setenv MP_STACK_SIZE 64000000
  - OpenMP blows through the stack size; set it large
  - however, if the model still crashes, it may be a problem of over-specifying the stack size; set the stack size sufficiently large, but not unlimited
  - on some systems, the equivalent parameter could be KMP_STACKSIZE or OMP_STACKSIZE
- unlimit
  - especially if you are on a small system
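For users of Bourne-type shells (sh, bash, ksh), the same settings might look like the following sketch (values and paths are examples only; ulimit -s unlimited is roughly the stack-size portion of the C shell unlimit command):
export WRF_EM_CORE=1
export WRF_NMM_CORE=0
export WRF_DA_CORE=0
export NETCDF=/usr/local/netcdf
export OMP_NUM_THREADS=4
export MP_STACK_SIZE=64000000
ulimit -s unlimited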
Building the WRF Code
The WRF code has a fairly complicated build mechanism. It tries to
determine the architecture that you are on, and then presents you with options
to allow you to select the preferred build method. For example, if you are on a
Linux machine, it determines whether this is a 32- or 64-bit machine, and then
prompts you for the desired usage of processors (such as serial, shared memory,
or distributed memory). You select
from among the available compiling options in the build mechanism. For example, do not choose a PGI build
if you do not have PGI compilers installed on your system.
- Get the WRF zipped tar file, WRFV3.TAR.gz, from
  - http://www2.mmm.ucar.edu/wrf/users/download/get_source.html
  - always get the latest version if you are not trying to continue a long project
- unzip and untar the file
  - gzip -cd WRFV3.TAR.gz | tar -xf -
- cd WRFV3
- ./configure
  - serial means single processor
  - smpar means Symmetric Multi-Processing/Shared Memory Parallel (OpenMP)
  - dmpar means Distributed Memory Parallel (MPI)
  - dm+sm means Distributed Memory with Shared Memory (for example, MPI across nodes with OpenMP within a node)
  - the second option is for nesting: 0 = no nesting, 1 = standard static nesting, 2 = nesting with a prescribed set of moves, 3 = nesting that allows a domain to follow a vortex (typhoon tracking)
- ./compile em_real (or any of the directory names in the ./WRFV3/test directory)
- ls -ls main/*.exe
  - if you built a real-data case, you should see ndown.exe, real.exe, and wrf.exe
  - if you built an ideal-data case, you should see ideal.exe and wrf.exe
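A condensed sketch of the sequence above, in C shell syntax (the compile target and log file name are examples only; the configure choices depend on your system):
gzip -cd WRFV3.TAR.gz | tar -xf -
cd WRFV3
./configure
./compile em_real >& compile.log
ls -ls main/*.exe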
Users wishing to run the WRF chemistry code must first download the WRF model tar file and untar it. Then the chemistry code is untarred in the WRFV3 directory (this provides the chem directory structure). Once the source code from the two tar files is combined, users may proceed with the WRF chemistry build.
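For example, assuming the chemistry tar file is named WRFV3-Chem.TAR.gz (a hypothetical name; check the download page for the actual file name) and sits alongside the WRFV3 directory:
cd WRFV3
gzip -cd ../WRFV3-Chem.TAR.gz | tar -xf -
ls chem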
Building the WPS Code
Building WPS requires that WRFV3 is already built.
- Get the WPS zipped tar file, WPSV3.TAR.gz, from
  - http://www2.mmm.ucar.edu/wrf/users/download/get_source.html
  - also download the geographical dataset from the same page
- unzip and untar the file
  - gzip -cd WPSV3.TAR.gz | tar -xf -
- cd WPS
- ./configure
  - choose one of the options
  - usually option "1" or option "2" is a serial build, which is best for an initial test
  - WPS requires that you build for the appropriate Grib decoding; select an option suitable for the data you will use with the ungrib program
  - if you select a Grib2 option, you must have those libraries prepared and built in advance
- ./compile
- ls -ls *.exe
  - you should see geogrid.exe, ungrib.exe, and metgrid.exe (if you are missing both geogrid.exe and metgrid.exe, you probably need to fix the path to WRF in the configure.wps file; if you are missing ungrib.exe, try a Grib1-only build to further isolate the problem)
- ls -ls util/*.exe
  - you should see a number of utility executables: avg_tsfc.exe, calc_ecmwf_p.exe, g1print.exe, g2print.exe, mod_levs.exe, plotfmt.exe, plotgrids.exe, and rd_intermediate.exe (plotfmt.exe and plotgrids.exe require NCAR Graphics)
- if the geogrid.exe and metgrid.exe executables are missing, the path to the WRFV3 directory structure (set inside the configure.wps file) is probably incorrect
- if ungrib.exe is missing, the Grib2 libraries are probably not linked or built correctly
- if the plotfmt.exe or plotgrids.exe programs are missing, the NCAR Graphics path is probably set incorrectly
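A condensed sketch of the sequence above, in C shell syntax (the log file name is an example only; a Grib2-capable build requires the Grib2 libraries to be installed first):
gzip -cd WPSV3.TAR.gz | tar -xf -
cd WPS
./configure
./compile >& compile.log
ls -ls *.exe util/*.exe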
Building the WRF-Var Code
WRF-Var
uses the same build mechanism as WRF, and as a consequence, this mechanism must
be instructed to configure and build the code for WRF-Var rather than WRF.
Additionally, the paths to libraries needed by WRF-Var code must be set, as
described in the steps below.
- Get the WRF-Var zipped tar file, WRFDAV3_1_1.TAR.gz, from http://www2.mmm.ucar.edu/wrf/users/download/get_source.html
- Unzip and untar the WRF-Var code
  - gzip -cd WRFDAV3_1_1.TAR.gz | tar -xf -
  - this will create a directory, WRFDA
- cd WRFDA
- In addition to NETCDF, set up environment variables pointing to the additional libraries required by WRF-Var
  - if you intend to use PREPBUFR observation data from NCEP, the environment variable BUFR has to be set with
    setenv BUFR 1
  - if you intend to use satellite radiance data, either CRTM (V1.2) or RTTOV (V8.7) has to be installed; they can be downloaded from ftp://ftp.emc.ncep.noaa.gov/jcsda/CRTM/ and http://www.metoffice.gov.uk/science/creating/working_together/nwpsaf_public.html
  - make certain that all the required libraries are compiled with the same compiler that will be used to build WRF-Var, since libraries produced by one compiler may not be compatible with code compiled with another
  - assuming, for example, that these libraries have been installed in subdirectories of /usr/local, the necessary environment variables might be set with
    setenv CRTM /usr/local/crtm (optional; make sure libcrtm.a is in the $CRTM directory)
    setenv RTTOV /usr/local/rttov87 (optional; make sure librttov.a is in the $RTTOV directory)
- ./configure wrfda
  - serial means single processor
  - smpar means Symmetric Multi-Processing/Shared Memory Parallel (OpenMP)
  - dmpar means Distributed Memory Parallel (MPI)
  - dm+sm means Distributed Memory with Shared Memory (for example, MPI across nodes with OpenMP within a node)
- ./compile all_wrfvar
- ls -ls var/build/*.exe
  - if the compilation was successful, da_wrfvar.exe, da_update_bc.exe, and other executables should be found in the var/build directory, with links in the var/da directory; obsproc.exe should be found in the var/obsproc/src directory
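A condensed sketch of the sequence above, in C shell syntax (paths and the log file name are examples only; the BUFR, CRTM, and RTTOV settings are needed only for the corresponding observation types):
setenv BUFR 1
setenv CRTM /usr/local/crtm
gzip -cd WRFDAV3_1_1.TAR.gz | tar -xf -
cd WRFDA
./configure wrfda
./compile all_wrfvar >& compile.log
ls -ls var/build/*.exe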