Compiling¶
The WRF modeling system comprises the WRF Preprocessing System (WPS), the WRF model, WRFDA, WRF-Chem, WRF-Hydro, and a handful of utility programs. The WPS source code is separate from the other WRF components and must be compiled for real-data cases. The WRF model contains the source code for a Fortran interface to ESMF and the source for FFTPACK.
The WRF model is ported to a number of Unix-based machines. WRF developers do not have access to every system and, therefore, rely on users and vendors to supply the required configuration information for the compiler and loader options. The table below lists the supported combinations of hardware and software for the WRF modeling system.
| Vendor | Hardware | OS | Compiler |
|---|---|---|---|
| Cray | XC30 Intel | Linux | Intel |
| Cray | XE AMD | Linux | Intel |
| IBM | Power Series | AIX | vendor |
| IBM | Intel | Linux | Intel/PGI/gfortran |
| SGI | IA64/Opteron | Linux | Intel |
| COTS* | IA32 | Linux | Intel/PGI/gfortran/g95/PathScale |
| COTS | IA64/Opteron | Linux | Intel/PGI/gfortran/PathScale |
| Mac | Power Series | Darwin | xlf/g95/PGI/Intel |
| Mac | Intel | Darwin | gfortran/PGI/Intel |
| NEC | NEC | Linux | vendor |
| Fujitsu | FX10 Intel | Linux | vendor |

\* COTS: Commercial Off-The-Shelf systems
WRF may be built to run on one of the following processing types:
a single-processor machine
a shared-memory machine (that uses the OpenMP API)
a distributed memory machine (with the appropriate MPI libraries)
a distributed cluster (utilizing both OpenMP and MPI)
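As a rough illustration of how each build type is typically launched at run time (the MPI launcher may be mpirun, mpiexec, or a batch-system command, depending on the system):
# serial build: run the executable directly
./wrf.exe
# shared-memory (smpar) build: set the OpenMP thread count first (csh/tcsh shown), then run
setenv OMP_NUM_THREADS 4
./wrf.exe
# distributed-memory (dmpar) build: launch through MPI, e.g., with 8 processes
mpirun -np 8 ./wrf.exe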
See WRF Software for detailed information about the software that controls the WRF build mechanism.
Required Compilers & Scripting¶
The WRF modeling system code is primarily written in standard Fortran 90 (with some Fortran 2003 features). The software layer, RSL, which sits between WRF and WRFDA and the MPI interface, is written in C. WPS makes direct calls to the MPI libraries for distributed-memory message passing. Ancillary programs written in C perform file parsing and file construction, which are required for the default build of the WRF modeling code.
Because of this makeup, the following installations are mandatory prior to building the WRF code, even when the code will not be built with a gfortran/GNU option:
a gfortran compiler
gcc
cpp
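To confirm these are installed and to check versions, commands such as the following can be used:
# confirm the compilers and preprocessor are on the PATH and check their versions
which gfortran gcc cpp
gfortran --version
gcc --version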
It is recommended to use a Fortran compiler that supports the Fortran 2003 standard (version 4.6+). The build mechanism uses several scripting languages, including perl, C shell, and Bourne shell. Several traditional UNIX text/file processing utilities are also used, and therefore the following are mandatory:
ar, awk, cat, cd, cp, cut, expr, file, grep, gzip, head, hostname, ln, ls, m4, make, mkdir, mv, nm, printf, rm, sed, sleep, sort, tar, touch, tr, uname, wc, which
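To quickly verify that these utilities are available on the PATH, a short sh/bash loop such as the following can be used:
# report any required utility that is not found on the PATH
for cmd in ar awk cat cd cp cut expr file grep gzip head hostname ln ls m4 make mkdir mv nm printf rm sed sleep sort tar touch tr uname wc which; do
  command -v $cmd > /dev/null || echo "missing: $cmd"
done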
Required & Optional Libraries¶
Note
If any of the following libraries fails to properly build, users must obtain help from either a systems administrator at their institution, or a support team for the specific library. WRF MODEL DEVELOPERS AND SUPPORT TEAM DO NOT HAVE THE RESOURCES TO SUPPORT LIBRARIES ON INDIVIDUAL SYSTEMS!
The following sub-sections describe the required and optional libraries: NetCDF, MPI, GRIB2 libraries, and GRIB1 output.
NetCDF¶
The netCDF package (version 3.6.1+) is the only library that is mandatory for building the WRF modeling system. Access netCDF source code, precompiled binaries, and documentation from Unidata. To utilize compression capabilities, use netCDF 4.0 or later. Note that compression requires the use of HDF5.
See How to Compile WRF for the step-by-step recipe for building the WRF and WPS packages, which includes:
System environment tests
Steps for installing libraries
Library compatibility tests
Steps for building WRF and WPS
Instructions for downloading static geography data (used for the WPS geogrid program)
Instructions for downloading sample real-time data
To compile WRF system components on a Linux or Darwin system that has access to multiple compilers, link the correct external libraries. For example, do not link the libraries built with PathScale when compiling the WRF components with gfortran. The same options used to build the netCDF libraries must be used when building the WRF code (32 vs 64 bit, assumptions about underscores in the symbol names, etc.).
If netCDF-4 is used, be sure it is installed without activating parallel I/O based on HDF5. The WRF modeling system can use either the classic data model from netCDF-3 or the compression options supported in netCDF-4. Beginning with WRFv4.4, the ability to write compressed netCDF-4 files in parallel is available. With this option, performance is slower than with pnetcdf, but can be notably faster than the use of regular netCDF on parallel file systems. Compression produces files significantly smaller than those generated by pnetcdf, so file sizes are expected to differ when compression is used.
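If the nc-config and nf-config utilities from the netCDF installation are on the PATH, they offer a quick way to check how netCDF was built; for example:
# report the netCDF-C version
nc-config --version
# "yes" means netCDF-4/HDF5 support (needed for compression) is compiled in
nc-config --has-nc4
# report the Fortran compiler used to build the netCDF-Fortran library
nf-config --fc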
After installing netCDF, the environment variables PATH and NETCDF should be set so that the model is able to find the necessary library files during the build. For example:
Note
Paths may differ from user to user (if unsure, check with a systems administrator at your institution).
setenv PATH /usr/local/netcdf/bin:$PATH
setenv NETCDF /usr/local/netcdf
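For bash-family shells, the equivalent commands (assuming the same example installation path) are:
# bash/ksh equivalent of the csh/tcsh setenv commands above
export PATH=/usr/local/netcdf/bin:$PATH
export NETCDF=/usr/local/netcdf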
MPI¶
Prior to building or running WRF for distributed-memory jobs, a working installation of an MPI library (e.g., MPICH or OpenMPI; either MPI-1 or MPI-2) is required. Most multi-processor machines come preconfigured with a version of MPI; however, if one is not available, see How to Compile WRF for instructions on installing MPI. To determine whether an MPI library exists, issue the following commands; if paths are returned, the library is already available.
which mpif90
which mpicc
which mpirun
Ensure that paths are set up to point to the MPI lib, include, and bin directories. MPI must be built consistently with the WRF source code.
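To check which underlying compiler an MPI wrapper invokes (so it can be matched to the compiler used for WRF), the wrapper's show option can be used; MPICH-style wrappers accept -show, while Open MPI uses -showme:
# MPICH, MVAPICH2, and Intel MPI wrappers
mpif90 -show
# Open MPI wrappers
mpif90 -showme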
GRIB2 Libraries¶
If planning to run real-data simulations with GRIB Edition 2 input data, the following libraries are required by the WPS ungrib program, and therefore must be installed prior to configuring WPS.
zlib
libpng
jasper
Users may obtain and install these libraries on their systems, or, for WPS v4.4+, internal copies of the libraries can be built when compiling WPS.
Building GRIB2 Libraries Internally¶
In WPS versions 4.4+, the following configuration option installs internal copies of the zlib, libpng, and JasPer libraries:
./configure --build-grib2-libs
These libraries will be installed in the WPS/grib2 directory. When this option is used, the environment variables JASPERLIB and JASPERINC are ignored, and the compiled ungrib and g2print executables will use the internally built GRIB2 libraries. See specific instructions for this option in Configure WPS.
Building GRIB2 Libraries Manually¶
Note
Users are encouraged to engage their system administrators for installation of these packages so that traditional library paths and include paths are maintained.
Paths to user-installed compression libraries are handled in the configure.wps file by the COMPRESSION_LIBS and COMPRESSION_INC variables. To ensure GRIB2 library files are accessible during WPS configuration, it is recommended to install all three in a common directory. For example, if the libraries will be installed in /usr/local, create a directory inside /usr/local called something like grib2. See instructions below.
JasPer (an implementation of the JPEG2000 standard for “lossy” compression)
Download and unpack the JasPer package.
Move into the unpacked JasPer directory, e.g.,
cd jasper-1.900.1
Issue the following to install (Note: this follows the above example, placing all GRIB2 libraries in the grib2 directory; the path may vary depending on the system and user preferences):
./configure --prefix=/usr/local/grib2
make
make install
Note
WPS expects to find include files in “jasper/jasper.h”, so it may be necessary to manually create a jasper subdirectory within the include directory created by the JasPer installation, and then manually link header files there.
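For the example prefix used above (/usr/local/grib2), that manual step might look like the following; the exact header locations depend on where the JasPer installation placed them:
# create the jasper subdirectory WPS expects, then link the installed headers into it
mkdir -p /usr/local/grib2/include/jasper
ln -sf /usr/local/grib2/include/*.h /usr/local/grib2/include/jasper/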
PNG (compression library for “lossless” compression)
Download and unpack the PNG package.
Move into the unpacked directory, e.g.,
cd libpng-1.2.50
Issue the following to install (Note: this follows the above example, placing all GRIB2 libraries in the grib2 directory; the path may vary depending on the system and user preferences):
./configure --prefix=/usr/local/grib2
make
make install
zlib (a compression library used by the PNG library)
Download and unpack the current released zlib package.
Move into the unpacked directory, e.g.,
cd zlib-1.2.7
Issue the following to install (Note: this follows the above example, placing all GRIB2 libraries in the grib2 directory; the path may vary depending on the system and user preferences):
./configure --prefix=/usr/local/grib2
make
make install
Setting UNIX Environment Variables¶
To ensure the WPS Ungrib build can locate the JasPer, PNG, and zlib libraries, some environment variables must be set.
An alternative to manually editing the COMPRESSION_LIBS and COMPRESSION_INC variables in the configure.wps file is to set the environment variables JASPERLIB and JASPERINC to the directories containing the JasPer lib and include files before configuring WPS. For example, if the JasPer libraries were installed in /usr/local/grib2, the following csh/tcsh commands would set these variables:
setenv JASPERLIB /usr/local/grib2/lib
setenv JASPERINC /usr/local/grib2/include
If the zlib and PNG libraries are not in a standard path that the compiler checks automatically, the paths to these libraries can be appended to the JasPer environment variables. For example, if the PNG libraries are installed in /usr/local/libpng-1.2.29 and the zlib libraries in /usr/local/zlib-1.2.3, the following csh/tcsh commands can be used after JASPERLIB and JASPERINC have been set.
setenv JASPERLIB "${JASPERLIB} -L/usr/local/libpng-1.2.29/lib -L/usr/local/zlib-1.2.3/lib"
setenv JASPERINC "${JASPERINC} -I/usr/local/libpng-1.2.29/include -I/usr/local/zlib-1.2.3/include"
It may also be necessary to set the following (e.g., in csh or tcsh):
setenv LDFLAGS -L/usr/local/grib2/lib
setenv CPPFLAGS -I/usr/local/grib2/include
GRIB1 Output¶
To output WRF model data in GRIB1 format, a complete source library is included with the software release (provided by The Weather Company); however, when trying to link the WPS, the WRF model, and the WRFDA data streams together, always use the netCDF format.
Building WRF¶
The WRF code’s build mechanism attempts to determine the computing system’s architecture, and then presents options for the preferred build method. For example, if using a Linux machine, it determines whether the machine is 32 or 64 bit, and then prompts for the desired processor usage (such as serial, shared memory, or distributed memory). From the available compile options presented by the build mechanism, only select an option for a compiler that is installed on the system.
How to Compile WRF provides the steps required to build WRF and WPS. Alternatively, use the following steps to compile WRF.
Obtain the WRF system code (which includes WRFDA, WRF-Chem, and WRF-Hydro).
The latest version is recommended unless you are continuing a long project or duplicating previous work. Note that versions prior to V4.0 are no longer supported.
Move to the WRF directory (note that it may be called something else, e.g., WRFV4.4).
cd WRF
Configure WRF¶
Issue the following in the command line:
./configure
Select the appropriate compiler and processor usage. Only choose an option for a compiler that is installed on the system.
- serial
Computes with a single processor - only useful for small cases with domain size of about 100x100 grid spaces.
- smpar
Symmetric Multi-processing/Shared Memory Parallel (OpenMP) - only recommended for those who are knowledgeable with computation and processing - it works most reliably for IBM machines.
- dmpar
Distributed Memory Parallel (MPI) - this is the recommended option.
- dm+sm
Distributed Memory with Shared Memory (e.g., MPI across nodes with OpenMP within a node). Performance is typically better with the dmpar-only option, and this option is not recommended for those without extensive computation/processing experience.
Select the desired nesting option.
0 = no nesting
1 = basic nesting (standard, this is the most common choice)
2 = nesting with a prescribed set of moves
3 = nesting that allows a domain to follow a vortex, specific to tropical cyclone tracking
Optional configuration options include
./configure -d
For debugging - removes optimization, which is useful when running a debugger (such as gdb or dbx)
./configure -D
For bounds checking and additional exception handling, plus debugging, with optimization removed - only available for PGI, Intel, and gfortran (GNU) compilers.
./configure -r8
For double-precision - only available for PGI, Intel, and gfortran (GNU) compilers.
After configuring, there should be a new file in the top-level WRF directory called “configure.wrf.”
Compile WRF¶
Type the following in the command line to compile (always send the standard error and output to a log file using the “>&” syntax; this is useful if the compile fails).
./compile em_test_case >& compile.log
where em_test_case is the type of case to be built (real-data or specific ideal case). Available options are:
| Case | Description |
|---|---|
| em_real | real-data simulations |
| various em_* cases (e.g., em_les) | 3D idealized cases |
| various em_* cases (e.g., em_squall2d_x) | 2D idealized cases |
| em_scm_xy | 1D idealized case |
See Initialization for Idealized Cases for additional information on idealized cases.
Compiling the code should take anywhere from ~10-60 minutes.
Note
Using multiple processors may speed up the compile. Add “-j N” to the compile command, where N is the number of processors, e.g.,
./compile em_real -j 4 >& compile.log
Testing shows there is not much benefit in using more than about six processors.
By default, the WRF compile uses two processors. However, if compiling errors occur, try compiling with a single processor to more easily identify the root cause of the problem, e.g.,
./compile em_real -j 1 >& compile.log
When the compile is complete, check the end of the compile log to determine whether it was successful. If successful, a full listing of the WRF/main directory should reveal the following executables (file sizes may vary):
ls -ls main/*.exe
For a real-data compile
-rwxr-xr-x 51935024 ndown.exe
-rwxr-xr-x 47030584 real.exe
-rwxr-xr-x 45936248 tc.exe
-rwxr-xr-x 59119872 wrf.exe
For an idealized compile
-rwxr-xr-x 47030584 ideal.exe
-rwxr-xr-x 59119872 wrf.exe
The above executables are linked to the following two directories, and can be run from either location.
WRF/run
WRF/test/em_case (where case is the case chosen in the compile command above)
Building Multiple Test Cases¶
To build two (or more) different test cases (e.g., em_real and em_les), WRF must be built separately for each case to ensure the correct initialization is created. When a case is built, its executables reside in the WRF/main directory and are linked to the test/em_<case> directory. Prior to building a different test case, the executables must be moved to prevent overwriting. See the examples below.
New Case, Same Configuration Options¶
E.g., You have a 3-D ideal test case built (e.g., em_les) using ifort/icc, a distributed memory build (dmpar), and basic nesting. You now wish to build a real-data test case (em_real) using ifort/icc with distributed memory and basic nesting.
There is no need to clean the code or to reconfigure. Simply recompile the new case, after moving the executables for the em_les case.
Move to the top-level WRF directory
cd WRF
Save the current executables to the em_les test case directory.
mv main/*.exe test/em_les
Recompile
./compile em_real >& log.compile
New Case, Different Configuration Options¶
E.g., You have a 3-D ideal test case built (e.g., em_les) using ifort/icc, a distributed memory build (dmpar), and basic nesting. You now wish to build a 2-D ideal test case (e.g., em_squall2d_x).
Important
All 2-D and 1-D ideal cases MUST be compiled with serial computing and a “no nest” option.
- Move to a directory outside of the top-level WRF directory (e.g., the directory that contains your em_les build) and then use one of the following methods:
Either copy the previously-built WRF directory to a new name
cp -r WRF WRF_new
or download/clone/install a clean version of the code, e.g.,
git clone https://github.com/wrf-model/WRF.git WRF_new
- Move into the new directory and build the code
If the code was copied in the previous step, first clean it:
./clean -a
Then configure and compile the new case:
./configure
./compile em_squall2d_x
Make sure to choose a serial option, and then “no nesting” during configuration.
Failed WRF Compile¶
If the compile fails, open the log file (e.g., compile.log) and search for the word “Error” with a capital “E.” Typically the first error listed in the file is the primary issue and subsequent errors result from that initial problem, although this is not always the case if multiple processors were used to compile. If the error is not clear, try recompiling with a single processor to ensure the first error listed is the root cause, e.g.,
./compile em_real -j 1 >& compile.log
Make sure to clean and reconfigure the code before recompiling (see the bullet below about recompiling). Many compiling inquiries are addressed on the WRF & MPAS-A Users’ Forum; if unsure how to address the error, try searching the forum for helpful hints.
To ensure all libraries and compilers are installed correctly, follow the instructions and tests on the How to Compile WRF website before recompiling.
Once the issue is resolved, clean and configure the code again before recompiling.
./clean -a
./configure
WRF Directory Structure¶
The top-level WRF directory consists of the following files and sub-directories.
| File/Directory | Description |
|---|---|
| arch/ | directory containing files specific to configuration |
| chem/ | directory containing files specific to building and running WRF-Chem |
| clean | user-executable script to clean the model code prior to recompiling |
| compile | user-executable script to build the WRF model |
| configure | user-executable script to declare configuration settings prior to compiling |
| doc/ | directory containing informational documents on specific WRF applications |
| dyn_em/ | directory containing files specific to the dynamical core mediation-layer and model-layer subroutines |
| external/ | directory containing files and sub-directories for building additional external libraries needed for WRF |
| frame/ | directory containing files related to WRF software framework-specific modules |
| hydro/ | directory containing files specific to building and running WRF-Hydro |
| inc/ | directory containing various .h files, and include (.inc) files generated by the Registry during the WRF compile |
| LICENSE.txt | text file containing WRF licensing information |
| main/ | directory containing the ‘main’ WRF programs, with symbolic links for executable files in the test/em_* and run directories |
| Makefile | file used as input to the UNIX ‘make’ utility during compiling |
| phys/ | directory containing WRF model-layer routines for physics |
| README | text file containing information about the WRF model version, a public domain notice, and information about releases prior to V4.0 (for which code repository information is not available) |
| README.md | text file for maintaining the code in a git repository system, containing important information for users |
| Registry/ | directory containing files that control many of the compile-time aspects of the WRF code |
| run/ | directory containing symbolic links for compiled executables, along with all tables and text files that may be necessary during run time |
| share/ | directory containing mediation-layer routines, including WRF I/O modules that call the I/O API |
| test/ | directory containing subdirectories for all real and idealized cases; inside each of those directories are the same files and executables that are in the run directory |
| tools/ | directory containing the program that reads the appropriate Registry.X file (e.g., Registry.EM for a basic WRF compile) and auto-generates files in the inc directory |
| var/ | directory containing files and subdirectories specific to building and running WRFDA |
| wrftladj/ | directory containing files specific to building and running WRFPLUS (a program affiliated with WRFDA) |
Building WRFDA, WRF-Chem, & WRF-Hydro¶
Information on required libraries specific to WRFDA, WRF-Chem, and WRF-Hydro, as well as instructions for compiling, can be found at the following links.
WRF Data Assimilation chapter of this Users’ Guide
WRF Chemistry website
WRF-Hydro Modeling System website
Building WPS¶
The WRF Preprocessing System (WPS) uses a build mechanism similar to that used by the WRF model. External libraries for geogrid and metgrid are limited to those required by WRF, since the WPS uses WRF’s implementations of the WRF I/O API; consequently, WRF must be compiled prior to WPS so that the I/O API libraries in the WRF external directory will be available to WPS programs.
The only library required to build WPS is netCDF; however, the ungrib program requires three compression libraries for GRIB Edition 2 support (if support for GRIB2 data is not needed, ungrib can be compiled without these compression libraries). Whereas WRF adds a software layer between the model and the communications package, the WPS programs geogrid and metgrid make MPI calls directly. Most multi-processor machines come preconfigured with a version of MPI, so it is unlikely that users will need to install this package themselves. See Required & Optional Libraries for additional information.
To bypass portability issues, the NCEP GRIB libraries, w3 and g2, are included in the WPS distribution.
The How to Compile WRF website provides the steps required to build WPS (instructions are specific for tcsh and GNU compilers). Alternatively, use the following steps to compile WPS.
Obtain the WPS code
Obtain the latest code version unless continuing a long project or duplicating previous work. Note that versions prior to V4.0 are no longer supported.
Move to the WPS directory (note that it may be called something else, e.g., WPSV4.4).
cd WPS
Configure WPS¶
Set the WRF_DIR environment variable, which is used by the configure script to link to the compiled WRF. The following is a csh example (the path and name of the WRF directory may vary).
setenv WRF_DIR ../WRF
To build internal copies of zlib, libpng, and JasPer libraries (see GRIB2 Libraries for details), issue the following command.
./configure --build-grib2-libs
Otherwise, issue:
./configure
Note
To only compile ungrib.exe for the purpose of running MPAS, use the configure command
./configure --nowrf
A list of supported compilers on the current system architecture should be presented, as well as available processing options for each.
serial : executables run on a single processor; this is the recommended option
serial_NO_GRIB2 : same as above, but without GRIB2 support (i.e., without compression libraries installed)
dmpar : executables run with Distributed Memory Parallel (MPI) processing
dmpar_NO_GRIB2 : same as above, but without GRIB2 support (i.e., without compression libraries installed)
Note
Unless domain sizes will be very large (1000s x 1000s of grid spaces), it is recommended to choose a serial option (even if WRF was compiled with a different option). WPS executables run quickly and parallel computing is not typically necessary. If a dmpar option is chosen, ungrib must still be run with a single processor - ungrib does not support parallel computing.
Choose a configure option. A configure.wps file should be available in the WPS directory when configuration is complete.
Compile WPS¶
Issue the following in the command line (always send the standard error and output to a log file using the “>&” syntax; this is useful if the compile fails).
./compile >& compile.log
WPS should compile relatively quickly compared to WRF. When it is complete, if successful, the following executables should be available in the WPS directory, linked from their corresponding source code directories.
geogrid.exe -> geogrid/src/geogrid.exe
ungrib.exe -> ungrib/src/ungrib.exe
metgrid.exe -> metgrid/src/metgrid.exe
Failed WPS Compile¶
If WPS fails to compile, search the log file (e.g., compile.log) for the word “Error” with a capital “E”. Typically the first error listed in the file is the primary issue and subsequent errors result from the initial problem.
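One quick way to jump to the first such error in the log (the search is case-sensitive):
# print the first few lines of the log containing "Error", with line numbers
grep -n "Error" compile.log | head -5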
Geogrid and Metgrid Fail¶
The WPS geogrid and metgrid executables make use of the external I/O libraries in the WRF/external/ directory. These libraries are built when WRF is compiled, and if WRF was not built properly, geogrid and metgrid are unable to compile.
Make sure WRF compiled successfully.
Check that the same compiler (and version) is being used to build WPS as was used to build WRF.
Check that the same netCDF library (and version) is being used to build WPS as was used to build WRF.
- Is the path for WRF_DIR set properly? Check the path and name of the WRF directory:
echo $WRF_DIR
Ungrib Fail¶
Make sure the jasper, zlib, and libpng libraries are correctly installed (if compiling with GRIB2 support).
Make sure the correct path is being used for the following lines in configure.wps.
COMPRESSION_LIBS = -L/$path-to-ungrib-libraries/lib -ljasper -lpng -lz
COMPRESSION_INC = -I/$path-to-ungrib-libraries/include
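For the example installation prefix used earlier in this chapter (/usr/local/grib2), these lines would read:
COMPRESSION_LIBS = -L/usr/local/grib2/lib -ljasper -lpng -lz
COMPRESSION_INC = -I/usr/local/grib2/include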
The “clean -a” Tool¶
It is often necessary to clean the code before recompiling, but not always.
The code should be cleaned when modifications have been made to the configure.wrf (or configure.wps) file, or when any changes have been made to a WRF/Registry/* file. If so, issue
./clean -a
prior to recompiling.
Modifications to subroutines (.F and .F90 files) require a recompile, but DO NOT require the code to be cleaned or reconfigured before recompiling. Simply recompile, which should be much faster than a clean compile.