The WRF modeling system (https://www2.mmm.ucar.edu/wrf/users/) software installation (https://www2.mmm.ucar.edu/wrf/users/download/get_source.html) is fairly straightforward on the ported platforms listed below. The model-component portion of the package is mostly self-contained. The WRF model contains the source code to a Fortran interface to ESMF and the source to FFTPACK. Contained within the WRF system is the WRFDA component, which has several external libraries that the user must install (for various observation types and linear algebra solvers). Similarly, the WPS package, separate from the WRF source code, has additional external libraries that must be built (in support of Grib2 processing). All of the systems require the netCDF library, which is one of the supported I/O API packages. netCDF libraries and source code are available from the Unidata homepage at http://www.unidata.ucar.edu (select DOWNLOADS, registration required).
The WRF model has been successfully ported to a number of Unix-based machines. We do not have access to all of them and must rely on outside users and vendors to supply the required configuration information for the compiler and loader options. Below is a list of the supported combinations of hardware and software for WRF, WRFDA, and WPS.
Vendor  | Hardware       | OS     | Compiler
Cray    | XC30 Intel     | Linux  | Intel
Cray    | XE AMD         | Linux  | Intel
IBM     | Power Series   | AIX    | vendor
IBM     | Intel          | Linux  | Intel / PGI / gfortran
SGI     | IA64 / Opteron | Linux  | Intel
COTS*   | IA32           | Linux  | Intel / PGI / gfortran / g95 / PathScale
COTS    | IA64 / Opteron | Linux  | Intel / PGI / gfortran / PathScale
Mac     | Power Series   | Darwin | xlf / g95 / PGI / Intel
Mac     | Intel          | Darwin | gfortran / PGI / Intel
NEC     | NEC            | Linux  | vendor
Fujitsu | FX10 Intel     | Linux  | vendor
* Commercial Off-The-Shelf systems
The WRF model may be built to run on a single-processor machine, a shared-memory machine (that uses the OpenMP API), a distributed memory machine (with the appropriate MPI libraries), or on a distributed cluster (utilizing both OpenMP and MPI).
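As a rough sketch of how this choice is made in practice (the directory name and log-file name below are illustrative, and the exact list of options presented depends on the compilers found on your system), the memory model is selected when configuring the build: for each compiler, configure offers serial, smpar (OpenMP), dmpar (MPI), and dm+sm (hybrid) options. In C shell:
cd WRF
./configure
# choose, e.g., a "dmpar" option for a distributed-memory (MPI) build
./compile em_real >& compile.log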
The majority of the WRF model, WPS, and WRFDA codes are written in Fortran 90. The software layer, RSL, which sits between the WRF and WRFDA codes and the MPI interface, is written in C. WPS makes direct calls to the MPI libraries for distributed-memory message passing. There are also ancillary programs, written in C, that perform the file parsing and file construction required for a default build of the WRF modeling code. Additionally, the WRF build mechanism uses several scripting languages, including perl, C shell, and Bourne shell, as well as the traditional UNIX text/file processing utilities: make, m4, sed, and awk. See Chapter 8: WRF Software (Required Software) for a more detailed listing of the necessary pieces for the WRF build.
The only library that is always required is the netCDF package from Unidata (login > Downloads > NetCDF). Most of the WRF post-processing packages assume that the data from the WRF model, the WPS package, or the WRFDA program use the netCDF libraries. You may also need to add ‘/path-to-netcdf/netcdf/bin’ to your path so that you can execute netCDF utility commands, such as ncdump. Use a netCDF version that is 3.6.1 or later. To utilize the compression capabilities, use netCDF 4.0 or later; note that compression requires HDF5.
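For example, in C shell the netCDF environment might be set up as follows (the installation path and the output file name are illustrative assumptions; substitute your own):
setenv NETCDF /usr/local/netcdf
setenv PATH ${NETCDF}/bin:${PATH}
# list the header of a WRF output file to confirm ncdump is on the path
ncdump -h wrfout_d01_2000-01-24_12:00:00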
Note 1: The entire step-by-step recipe for building the WRF and WPS packages is available at: http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php. This page includes complete turn-key directions, from tests of your machine’s utilities all the way through where to download real-time data.
Note 2: If you compile WRF system components on a Linux or Darwin system that has access to multiple compilers, be sure to link the correct external libraries. For example, do not link libraries built with PathScale when compiling the WRF components with gfortran. The same options used to build the netCDF libraries must be used when building the WRF code (32- vs. 64-bit, assumptions about underscores in symbol names, etc.).
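One way to verify consistency (assuming the netCDF installation is recent enough to provide the nc-config utility) is to ask netCDF which compilers and link flags it was built with and compare them against the compiler you intend to select for WRF:
nc-config --fc
nc-config --cc
nc-config --flibs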
Note 3: If netCDF-4 is used, be sure that it is installed without activating parallel I/O based on HDF5. The WRF modeling system is able to use either the classic data model from netCDF-3 or the compression options supported in netCDF-4.
If you are going to run distributed-memory WRF jobs, you will need an MPI library (e.g., mpich [https://www.mpich.org/]). It may be easiest to have a systems administrator at your institution install this library. A working installation of MPI is required prior to a build of WRF using distributed memory. Either MPI-1 or MPI-2 is acceptable. If you already have an MPI library installed, try the following commands:
which mpif90
which mpicc
which mpirun
If these are all defined executables in your path, you are probably OK. Make sure your paths are set up to point to the MPI lib, include, and bin directories. As with the netCDF libraries, you must build MPI consistently with the WRF source code.
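As an additional consistency check (the -show option is the MPICH-style spelling; Open MPI wrappers use --showme instead), you can display which underlying compilers the MPI wrapper scripts invoke and confirm they match the compiler used for netCDF and WRF:
mpif90 -show
mpicc -show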
Note that, to output WRF model data in Grib1 format, a complete source library provided by Todd Hutchinson (https://www.ibm.com/weather) is included with the software release. However, when linking the WPS, WRF model, and WRFDA data streams together, always use the netCDF format.
The more widely used (and therefore supported) WRF post-processing utilities are:
· NCL (http://www.ncl.ucar.edu/)
There are only a few environment settings that are related to the WRF system. Most of these are not required, but when things start acting badly, it is worth testing some of them. In C shell syntax (a consolidated example follows the list):
· setenv WRF_EM_CORE 1
o explicitly defines which model core to build
· setenv WRF_NMM_CORE 0
· setenv WRF_DA_CORE 0
o explicitly defines no data assimilation
· setenv NETCDF /usr/local/netcdf (or wherever you have it stored)
o all of the WRF components want both the lib and the include directories
· setenv OMP_NUM_THREADS n (where n is the number of procs to use)
· setenv MP_STACK_SIZE 64000000
o OpenMP blows through the stack size, set it large
o However, if the model still crashes, it may be a problem of over-specifying the stack size. Set the stack size sufficiently large, but not unlimited.
o On some systems, the equivalent parameter could be KMP_STACKSIZE, or OMP_STACKSIZE
o Note: There are many other issues that can contribute to model failure that may not be due to stack size.
· unlimit
o especially if you are on a small system
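Collected in one place, a minimal C shell setup based on the settings above might look like the following (the netCDF path and the thread count are placeholders; adjust them for your system):
setenv WRF_EM_CORE 1
setenv WRF_NMM_CORE 0
setenv WRF_DA_CORE 0
setenv NETCDF /usr/local/netcdf
setenv OMP_NUM_THREADS 4
setenv MP_STACK_SIZE 64000000
unlimit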
An instructional web site describes the sequence of steps required to build the WRF and WPS codes (though the instructions are specifically given for tcsh and GNU compilers).
http://www2.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php
The WRF code supports a parallel build option, which compiles separate source files in the WRF directories at the same time on separate processors (though those processors need to share memory) via a parallel make. The purpose of this option is to reduce the time required to construct the executables. In practice, users typically see approximately a 2x speed-up, a limit imposed by the various dependencies in the code due to modules and USE association. To enable the parallel build option, set the environment variable J. For example, in csh, to utilize two processors, issue the following before the ./compile command:
setenv J "-j 2"
Users may wish to use only a single processor for the build, in which case:
setenv J "-j 1"
WRF chemistry code is packaged with the WRF code. Inside the WRF directory is a directory named "chem", which contains the chemistry-specific code. To build the model for chemistry purposes, users will need to compile following the guidance found here: https://ruc.noaa.gov/wrf/wrf-chem/.
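As a hedged sketch (see the WRF-Chem link above for the authoritative instructions), the chemistry code is typically enabled by setting an environment variable before configuring and compiling, for example in C shell:
setenv WRF_CHEM 1
./configure
./compile em_real >& compile.log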
Building WPS requires that WRF be already built.
If you plan to use Grib2 data, additional libraries for zlib, png, and jasper are required. Please see details in Chapter 3.
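A minimal sketch of a Grib2-capable WPS build in C shell, assuming the zlib, png, and jasper libraries were installed under an illustrative /usr/local/grib2 prefix (adjust the paths to your installation):
setenv JASPERLIB /usr/local/grib2/lib
setenv JASPERINC /usr/local/grib2/include
cd WPS
./configure
./compile >& compile.log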
(NLCD), and high-resolution urban data sets for select North American cities.
o You should see geogrid.exe, ungrib.exe, and metgrid.exe. If both geogrid.exe and metgrid.exe are missing, you likely need to correct the path to WRF in the configure.wps file (or set the WRF_DIR variable as indicated above). If ungrib.exe is missing, the COMPRESSION_LIB and COMPRESSION_INC variables are likely not set correctly in configure.wps, or the Grib2 libraries are not built or linked correctly.
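A quick check of the build, using the executable names listed above (in a typical WPS build these appear as symbolic links in the top-level WPS directory), is:
ls -ls geogrid.exe ungrib.exe metgrid.exe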