The WRF modeling system software installation is fairly straightforward on the ported platforms listed below. The model-component portion of the package is mostly self-contained, meaning that the WRF model requires no external libraries (such as for FFTs or various linear algebra solvers). Contained within the WRF system is the WRF-Var component, which has several external libraries that the user must install (for various observation types, FFTs, and linear algebra solvers). Similarly, the WPS package, separate from the WRF source code, has additional external libraries that must be built (in support of Grib2 processing). The one external package that all of the systems require is the netCDF library, which is one of the supported I/O API packages. The netCDF libraries or source code are available from the Unidata homepage at http://www.unidata.ucar.edu (select DOWNLOADS, registration required).
There are three tar files for the WRF code. The first is the WRF model (including the real and ideal pre-processors). The second is the WRF-Var code; this separate tar file must be combined with the WRF code for WRF-Var to work. The third tar file is for WRF chemistry; again, to run the WRF chemistry code, the chemistry tar file must be combined with the WRF model code.
The WRF model has been successfully ported to a number of Unix-based machines. We do not have access to all of them and must rely on outside users and vendors to supply the required configuration information for the compiler and loader options. Below is a list of the supported combinations of hardware and software for WRF.
Vendor | Hardware       | OS     | Compiler
Cray   | X1             | UniCOS | vendor
Cray   | AMD            | Linux  | PGI / PathScale
IBM    | Power Series   | AIX    | vendor
SGI    | IA64 / Opteron | Linux  | Intel
COTS*  | IA32           | Linux  | Intel / PGI / gfortran / g95 / PathScale
COTS   | IA64 / Opteron | Linux  | Intel / PGI / gfortran / PathScale
Mac    | Power Series   | Darwin | xlf / g95 / PGI / Intel
Mac    | Intel          | Darwin | g95 / PGI / Intel

* Commercial Off-The-Shelf systems
The WRF model may be built to run on a single-processor machine, a shared-memory machine (using the OpenMP API), a distributed-memory machine (with the appropriate MPI libraries), or on a distributed cluster (utilizing both OpenMP and MPI). The WRF-Var and WPS packages run on the systems listed above.
The WRF model, WPS, and WRF-Var are written in Fortran (what many refer to as Fortran 90). The software layer, RSL_LITE, which sits between WRF and WRF-Var and the MPI interface, is written in C. WPS makes direct calls to the MPI libraries for distributed-memory message passing. There are also ancillary programs written in C to perform file parsing and file construction, which are required for the default build of the WRF modeling code. Additionally, the WRF build mechanism uses several scripting languages, including perl, C shell, and Bourne shell. The traditional UNIX text/file processing utilities are also used: make, m4, sed, and awk. See Chapter 7: WRF Software (Required Software) for a more detailed listing of the pieces necessary for the WRF build.
The only library that is almost always required is the netCDF package from Unidata (login > Downloads > NetCDF). Most of the WRF post-processing packages assume that the data from the WRF model, the WPS package, or the WRF-Var program were written using the netCDF libraries. You may also need to add /path-to-netcdf/netcdf/bin to your path so that you can execute netCDF commands, such as ncdump.
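In C shell syntax, this might look like the following (the install location is an example; substitute the actual path on your system):
setenv NETCDF /path-to-netcdf/netcdf
set path = ( $NETCDF/bin $path )   # so that ncdump and the other utilities are found
which ncdump                       # verify the command is now on your path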
Hint: If you want to compile WRF system components on a Linux system that has access to multiple compilers, be sure to link the external libraries built with the same compiler. For example, do not link libraries built with PathScale when compiling the WRF components with gfortran.
If you are going to be running distributed-memory WRF jobs, you need a working MPI installation prior to building WRF; either MPI-1 or MPI-2 is acceptable. You can pick up a version of mpich, but you might want your system group to install the code. Do you already have an MPI lying around? Try
which mpif90
which mpicc
which mpirun
If these are all defined executables, you are probably OK. Make sure your paths are set up to point to the MPI lib, include, and bin directories.
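Once a distributed-memory wrf.exe is built, a typical launch looks like the following (the processor count is only an example, and some systems use mpiexec or a batch scheduler rather than mpirun):
mpirun -np 4 ./wrf.exe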
Note that to output WRF model data in Grib1 format, Todd Hutchinson (WSI) has provided a complete source library that is included with the software release. However, when trying to link the WPS, the WRF model, and the WRF-Var data streams together, always use the netCDF format.
The more widely used (and therefore supported) WRF post-processing utilities are NCL, RIP4, ARWpost (a converter to GrADS), WPP, and VAPOR.
There are only a few environment settings that are related to the WRF system. Most of these are not required, but when things start acting badly, test some out. The examples below are representative, in C shell syntax; the paths and counts are placeholders for your system:
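setenv WRF_EM_CORE 1                    # explicitly select the ARW (em) core
setenv NETCDF /path-to-netcdf/netcdf    # where netCDF is installed
setenv OMP_NUM_THREADS 4                # thread count for shared-memory (OpenMP) runs
setenv MP_STACK_SIZE 64000000           # larger OpenMP stack size, needed on some systems
unlimit                                 # lift shell resource limits; a small stack is a common cause of crashes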
Users wishing to run the WRF-Var code or the WRF chemistry code must first download the WRF model tar file and untar it. Then the WRF-Var (or chemistry) code is untarred in the WRFV3 directory (the appropriate directories already exist there). Once the source code from the tar files is combined, users may proceed with the WRF-Var or WRF chemistry build, for example as sketched below.
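A hypothetical sequence for the chemistry case (the tar file names are placeholders for the files you actually download):
gunzip -c WRFV3.TAR.gz | tar -xf -           # unpack the WRF model source; creates the WRFV3 directory
cd WRFV3
gunzip -c ../WRFV3-Chem.TAR.gz | tar -xf -   # unpack the chemistry code into the existing directories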
Building WPS requires that WRFV3 is already built.
WRF-Var uses the same build mechanism as WRF, and as a consequence, this mechanism must be instructed to configure and build the code for WRF-Var rather than WRF. Additionally, the paths to libraries needed by WRF-Var code must be set, as described in the steps below.
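For illustration only, such settings might look like the following in C shell syntax; the variable names and locations here are hypothetical placeholders, and the actual settings are given in the steps below:
setenv NETCDF /path-to-netcdf/netcdf   # required I/O library
setenv LAPACK /path-to-lapack          # hypothetical: a linear algebra solver library
setenv BUFR /path-to-bufr              # hypothetical: an observation (BUFR) library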