The WRF modeling system software installation is fairly straightforward on the ported platforms listed below. The model-component portion of the package is mostly self-contained. The WRF model does contain the source code for a Fortran interface to ESMF and the source for FFTPACK. Contained within the WRF system is the WRFDA component, which has several external libraries that the user must install (for various observation types and linear algebra solvers). Similarly, the WPS package, separate from the WRF source code, has additional external libraries that must be built (in support of Grib2 processing). The one external package that all of the systems require is the netCDF library, which is one of the supported I/O API packages. The netCDF libraries and source code are available from the Unidata homepage at http://www.unidata.ucar.edu (select DOWNLOADS, registration required).
There are three tar files for the WRF code. The first is the WRF model (including the real and ideal pre-processors). The second is the WRFDA code. The third is the WRF chemistry code. To run WRF chemistry, the WRF model tar file and the chemistry tar file must be combined.
The WRF model has been successfully ported to a number of Unix-based machines. We do not have access to all of them and must rely on outside users and vendors to supply the required configuration information for the compiler and loader options. Below is a list of the supported combinations of hardware and software for WRF.
Vendor  | Hardware       | OS     | Compiler
--------|----------------|--------|------------------------------------------
Cray    | XC30 Intel     | Linux  | Intel
Cray    | XE AMD         | Linux  | Intel
IBM     | Power Series   | AIX    | vendor
IBM     | Intel          | Linux  | Intel / PGI / gfortran
SGI     | IA64 / Opteron | Linux  | Intel
COTS*   | IA32           | Linux  | Intel / PGI / gfortran / g95 / PathScale
COTS    | IA64 / Opteron | Linux  | Intel / PGI / gfortran / PathScale
Mac     | Power Series   | Darwin | xlf / g95 / PGI / Intel
Mac     | Intel          | Darwin | gfortran / PGI / Intel
NEC     | NEC            | Linux  | vendor
Fujitsu | FX10 Intel     | Linux  | vendor
* Commercial Off-The-Shelf systems
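To see which of the supported compilers are installed on a given machine, the standard which test works; the names below are the usual executables for the Intel, PGI, and GNU compiler suites:
which ifort
which pgf90
which gfortran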
The WRF model may be built to run on a single-processor machine, a shared-memory machine (that uses the OpenMP API), a distributed memory machine (with the appropriate MPI libraries), or on a distributed cluster (utilizing both OpenMP and MPI). The WRFDA and WPS packages run on the above-listed systems.
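These four build types correspond to the choices presented by WRF's ./configure script, whose menu labels them serial, smpar (OpenMP), dmpar (MPI), and dm+sm (hybrid) in recent releases. A minimal sketch of selecting a distributed-memory build:
./configure          # pick the dmpar entry listed for your compiler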
The majority of the WRF model, WPS, and WRFDA codes are written in Fortran (what many refer to as Fortran 90). The software layer RSL, which sits between the WRF and WRFDA codes and the MPI interface, is written in C. WPS makes direct calls to the MPI libraries for distributed-memory message passing. There are also ancillary programs written in C that perform file parsing and file construction, both of which are required for the default build of the WRF modeling code. Additionally, the WRF build mechanism uses several scripting languages, including Perl, C shell, and Bourne shell. The traditional UNIX text/file-processing utilities are used: make, m4, sed, and awk. See Chapter 8: WRF Software (Required Software) for a more detailed listing of the necessary pieces for the WRF build.
The only library that is always required is the netCDF package from Unidata (login > Downloads > NetCDF). Most of the WRF post-processing packages assume that data from the WRF model, the WPS package, or the WRFDA program are in netCDF format. Users may also need to add '/path-to-netcdf/netcdf/bin' to their path so that they may execute netCDF utility commands, such as ncdump. Use a netCDF version that is 3.6.1 or later. To utilize the compression capabilities, use netCDF 4.0 or later; note that compression requires HDF5.
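For example, in C-shell syntax (the installation path below is only illustrative):
setenv NETCDF /usr/local/netcdf
setenv PATH ${NETCDF}/bin:${PATH}
which ncdump         # should now resolve to ${NETCDF}/bin/ncdump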
Note 1: If one wants to compile WRF system components on a Linux or Darwin system that has access to multiple compilers, link the correct external libraries. For example, do not link libraries built with PathScale when compiling the WRF components with gfortran. Furthermore, the same options used when building the netCDF libraries must be used when building the WRF code (32- vs. 64-bit, assumptions about underscores in symbol names, etc.).
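One way to check for consistency, assuming the netCDF installation provides the nc-config/nf-config helper scripts and a Fortran library named libnetcdff:
nc-config --cc                                    # C compiler used to build netCDF
nf-config --fc                                    # Fortran compiler (where nf-config exists)
nm ${NETCDF}/lib/libnetcdff.a | grep -i nf_open   # trailing underscores reveal the name-mangling convention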
Note 2: If netCDF-4 is used, be sure that it is installed without activating parallel I/O based on HDF5. The WRF modeling system is able to use either the classic data model from netCDF-3 or the compression options supported in netCDF-4.
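A minimal sketch of such a build, assuming the standard autoconf-based HDF5 and netCDF-C distributions (paths and flags are illustrative; consult each package's own documentation):
# HDF5: omitting --enable-parallel produces a serial build
./configure --prefix=/usr/local/hdf5 ; make install
# netCDF, pointed at the serial HDF5
setenv CPPFLAGS -I/usr/local/hdf5/include
setenv LDFLAGS -L/usr/local/hdf5/lib
./configure --prefix=/usr/local/netcdf ; make install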
If you are going to be running distributed-memory WRF jobs, you need a version of MPI. You can pick up a version of MPICH, but you might want your system group to install the code. A working installation of MPI is required prior to a build of WRF using distributed memory. Either MPI-1 or MPI-2 is acceptable. Do you already have an MPI lying around? Try:
which mpif90
which mpicc
which mpirun
If these are all defined executables in your path, you are probably OK. Make sure your paths are set up to point to the MPI lib, include, and bin directories. As with the netCDF libraries, you must build MPI consistently with the WRF source code.
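To confirm that the MPI wrappers invoke the compiler intended for WRF, MPICH-derived wrappers accept a -show option (Open MPI uses -showme):
mpif90 -show         # prints the underlying Fortran compiler and flags
mpicc -show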
Note that to output WRF model data in Grib1 format, Todd Hutchinson (WSI) has provided a complete source library that is included with the software release. However, when trying to link the WPS, the WRF model, and the WRFDA data streams together, always use the netCDF format.
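For reference, the output format of each stream is chosen in the &time_control namelist; a minimal sketch, assuming the usual ARW io_form conventions (2 = netCDF, 5 = Grib1):
&time_control
 io_form_history = 2,     ! 2 = netCDF (recommended); 5 = Grib1
/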
Note 3: The entire step-by-step recipe for building the WRF and WPS packages is available at http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php. This page includes complete turn-key directions, from tests of your machine's utilities all the way up through where to download real-time data.
The more widely used (and therefore supported) WRF post-processing utilities are:
· NCL (homepage and WRF download)
There are only a few environment settings that are related to the WRF system. Most of these are not required, but when things start acting badly, test some out. In C-shell syntax:
· setenv WRF_EM_CORE 1
o explicitly defines which model core to build
· setenv WRF_NMM_CORE 0
· setenv WRF_DA_CORE 0
o explicitly defines no data assimilation
· setenv NETCDF /usr/local/netcdf (or wherever you have it stored)
o all of the WRF components want both the lib and the include directories
· setenv OMP_NUM_THREADS n (where n is the number of procs to use)
· setenv MP_STACK_SIZE 64000000
o OpenMP blows through the stack size, so set it large (see the example after this list)
o However, if the model still crashes, it may be a problem of over-specifying the stack size. Set the stack size sufficiently large, but not unlimited.
o On some systems, the equivalent parameter could be KMP_STACKSIZE or OMP_STACKSIZE
· unlimit
o especially if you are on a small system
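As a quick sketch of inspecting and adjusting the limits from the C shell (the value is only an example, in kbytes):
limit stacksize              # show the current stack limit
limit stacksize 2000000      # raise it
unlimit stacksize            # or lift the stack limit alone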
An instructional web site describes the sequence of steps required to build the WRF and WPS codes (though the instructions are specifically given for tcsh and GNU compilers).
http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php
The WRF code supports a parallel build option, which compiles separate source-code files in the WRF directories at the same time on separate processors (though those processors need to share memory) via a parallel make. The purpose of the parallel build option is to speed up the construction of the executables. In practice, users typically see approximately a 2x speed-up, a limit imposed by the various dependencies in the code due to modules and USE association. To enable the parallel build option, the user sets an environment variable, J. In csh, to utilize two processors, issue the following before the ./compile command:
setenv J "-j 2"
Users may wish to use only a single processor for the build, in which case:
setenv J "-j 1"
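Putting the pieces together, a typical distributed-memory build might look like the following (em_real is the standard real-data case; the log-file name is merely a convention):
setenv J "-j 2"
./configure
./compile em_real >& compile.log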
Users wishing to run the WRF chemistry code must first download the WRF model tar file and untar it. Then the chemistry code is untarred in the WRF directory (this creates the chem directory structure). Once the source code from the two tar files is combined, users may proceed with the WRF chemistry build.
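A minimal sketch of that sequence (the tar-file names are illustrative; substitute the names of the files actually downloaded):
tar -xzf WRFV3.TAR.gz                 # creates the WRF model directory
tar -xzf WRFV3-Chem.TAR.gz -C WRFV3   # adds the chem directory inside it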
Building the WPS Code
Building WPS requires that WRF be already built. If you plan to use Grib2 data, additional libraries for zlib, png, and jasper are required. Please see details in Chapter 3.
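If those Grib2 libraries are installed in non-default locations, the WPS configure script looks for the environment variables JASPERLIB and JASPERINC; in C-shell syntax (paths illustrative):
setenv JASPERLIB /usr/local/jasper/lib
setenv JASPERINC /usr/local/jasper/include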