User’s Guide for Advanced Research WRF (ARW) Modeling System Version 2

 

Chapter 2: Software Installation

Table of Contents

  • Introduction
  • Required Compilers and Scripting Languages
  • Required/Optional Libraries to Download
  • Post-Processing Utilities
  • UNIX Environment Settings
  • Building the WRF Code
  • Building the WRF-Var Code
  • Building the WPS Code

Introduction

The WRF modeling system software installation is fairly straightforward on the ported platforms. The package is mostly self-contained: WRF requires no external libraries for FFTs or linear algebra solvers. The one external package it does require is the netCDF library, which is one of the supported I/O API packages. The netCDF libraries or source code are available from the Unidata homepage at http://www.unidata.ucar.edu (select DOWNLOADS, registration required).

The WRF model has been successfully ported to a number of Unix-based machines. We do not have access to all of them and must rely on outside users and vendors to supply the required configuration information for the compiler and loader options. Below is a list of the supported combinations of hardware and software for WRF.

Vendor       Hardware         OS       Compiler
---------    --------------   ------   -----------
Cray         X1               UniCOS   vendor
HP/Compaq    alpha            Tru64    vendor
HP/Compaq    IA64 (Intel)     Linux    vendor
HP/Compaq    IA64             HPUX     vendor
IBM          Power Series     AIX      vendor
SGI          IA64             Linux    Intel
SGI          MIPS             IRIX     vendor
Sun          UltraSPARC       SunOS    vendor
COTS*        IA32/AMD 32      Linux    Intel / PGI
COTS         IA64/Opteron     Linux    Intel / PGI
Mac          G5               Darwin   xlf

* Commercial off-the-shelf systems

The WRF code runs on single-processor machines, shared-memory machines (using the OpenMP API), distributed-memory machines (with the appropriate MPI libraries), and distributed clusters (utilizing both OpenMP and MPI). The WRF 3DVAR code also runs on most of the systems listed above. Porting to systems that use the Intel compiler is currently under development. The Mac architecture is supported only as a serial build.

The WRFSI code also runs on most of the systems listed above. The Sun and Intel compilers are not yet supported.

Required Compilers and Scripting Languages

The WRF model (and WRF 3DVAR) is written in Fortran (what many refer to as Fortran 90). The software layer, RSL (and now RSL_LITE), which sits between WRF and the MPI interface, is written in C. There are also ancillary programs written in C that perform file parsing and file construction, both of which are required for a default build of the WRF modeling code. Additionally, the WRF build mechanism uses several scripting languages, including perl (to handle various tasks such as the code browser designed by Brian Fiedler), C shell, and Bourne shell. The traditional UNIX text/file processing utilities are also used: make, M4, sed, and awk. See Chapter 7: WRF Software (Required Software) for a more detailed listing of the pieces necessary for the WRF build.

The WRFSI is mostly written in Fortran 77 and Fortran 90, with a few C routines. Perl scripts are used to run the programs, and Perl/Tk is used for the GUI.

Unix make is used in building all executables.

Required/Optional Libraries to Download

The only library that is almost always required is the netCDF package from Unidata (login > Downloads > NetCDF). Some of the WRF post-processing packages assume that the data from the WRF model was written with the netCDF libraries. You may also need to add /path-to-netcdf/netcdf/bin to your path so that you can execute netCDF commands such as ncdump and ncgen.
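
For example, in c-shell syntax (assuming netCDF is installed under /usr/local/netcdf; substitute the actual location on your system):

        setenv NETCDF /usr/local/netcdf
        setenv PATH ${NETCDF}/bin:${PATH}
        which ncdump
        which ncgen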

 

Hint: If you want to compile the WRF code on a Linux system using the PGI (or Intel) compiler, make sure the netCDF library was also built with the PGI (or Intel) compiler.
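
A rough sketch of such a netCDF build, assuming the netCDF 3.x source distribution and PGI compilers named pgcc and pgf90 (compiler names and the install prefix will vary by system):

        setenv CC pgcc
        setenv FC pgf90
        setenv F90 pgf90
        ./configure --prefix=/usr/local/netcdf
        make
        make install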

Two optional external libraries may be used within the WRF system: ESMF and PHDF. Neither is required for a standard build of the WRF system.

If you are going to be running distributed-memory WRF jobs, you need a version of MPI. You can pick up a version of mpich, but you might want your system group to install the code. A working installation of MPI is required before building WRF for distributed memory. Do you already have an MPI installation lying around? Try:

        which mpif90
        which mpicc
        which mpirun
 

If these are all defined executables, you are probably OK. Make sure your paths are set up to point to the MPI lib, include, and bin directories.
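
For example, if MPICH were installed under /usr/local/mpich (an assumed location), the path could be extended in c-shell syntax with:

        setenv PATH /usr/local/mpich/bin:${PATH}
        which mpif90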

 

Note that for GRIB1 data processing, Todd Hutchinson (WSI) has provided a complete source library that is included with the software release.

Post-Processing Utilities

The more widely used (and therefore supported) WRF post-processing utilities are:

  • NCL (homepage and WRF download)
    • NCAR Command Language written by NCAR Scientific Computing Division
    • NCL scripts written and maintained by WRF support
    • many template scripts are provided that are tailored for specific real-data and ideal-data cases
    • raw WRF output can be input with the NCL scripts
    • interactive or command-file driven
  • Vis5D (homepage and WRF download)
    • download Vis5D executable, build format converter
    • programs are available to convert the WRF output into an input format suitable for Vis5D
    • GUI interface, 3D movie loops, transparency
  • GrADS (homepage and WRF download)
    • download GrADS executable, build format converter
    • programs are available to convert the WRF output into an input format suitable for GrADS
    • interpolates to regular lat/lon grid
    • simple to generate publication-quality figures
  • RIP (homepage and WRF download)
    • RIP4 written and maintained by Mark Stoelinga, UW
    • interpolation to various surfaces, trajectories, hundreds of diagnostic calculations
    • Fortran source provided
    • based on the NCAR Graphics package
    • pre-processor converts WRF data to RIP input format
    • table driven

UNIX Environment Settings

There are only a few environment settings that are related to WRF. Most of these are not required, but when things start acting badly, test some out. In c-shell syntax (a combined sketch of these settings follows the list):

  • setenv WRF_EM_CORE 1
    • explicitly defines which model core to build
  • unlimit
    • especially if you are on a small system
  • setenv MP_STACK_SIZE 64000000
    • OpenMP blows through the stack size, so set it large
  • setenv NETCDF /usr/local/netcdf (or wherever you have it installed)
    • WRF wants both the lib and the include directories
  • setenv MPICH_F90 f90 (or whatever your Fortran compiler may be called)
    • WRF needs the bin, lib, and include directories
  • setenv OMP_NUM_THREADS n (where n is the number of procs to use)
    • if you have OpenMP on your system, this is how to specify the number of threads
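
Taken together, a typical set of these settings in a shell startup file might look like the following sketch (the netCDF path, the pgf90 compiler name, and the thread count of 4 are only examples; adjust them for your system):

        setenv WRF_EM_CORE 1
        setenv NETCDF /usr/local/netcdf
        setenv MPICH_F90 pgf90
        setenv MP_STACK_SIZE 64000000
        setenv OMP_NUM_THREADS 4
        unlimit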

Building the WRF Code

The WRF code has a fairly complicated build mechanism. It tries to determine the architecture that you are on, and then presents you with options to allow you to select the preferred build method. For example, if you are on a Linux machine, it determines whether this is a 32- or 64-bit machine, and then prompts you for the desired usage of processors (such as serial, shared memory, or distributed memory). The basic steps are listed below; a combined example follows the list.

  • Get the WRF zipped tar file
    • WRFV2 from http://www2.mmm.ucar.edu/wrf/users/get_source.html
    • always get the latest version if you are not trying to continue a long project
  • unzip and untar the file
    • gzip -cd WRFV2.2.TAR.gz | tar -xf -
    • again, if there is a later version of the code, grab it; 2.2 is just used as an example
  • cd WRFV2
  • ./configure
    • choose one of the options
    • usually, option "1" is for a serial build, which is the best choice for an initial test
  • ./compile em_real (or any of the directory names in ./WRFV2/test)
  • ls -ls main/*.exe
    • if you built a real-data case, you should see ndown.exe, real.exe, and wrf.exe
    • if you built an ideal-data case, you should see ideal.exe and wrf.exe
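
Putting the steps together, a serial build session might look like the following sketch (the tar file name and the serial configure option are examples; match them to the release you download and the menu ./configure presents):

        gzip -cd WRFV2.2.TAR.gz | tar -xf -
        cd WRFV2
        ./configure
        ./compile em_real >& compile.log
        ls -ls main/*.exe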

Building the WRF-Var Code

See details in Chapter 6.

Building the WPS Code

WPS replaces the old WRFSI code.

Building WPS requires that WRFV2 already be built. The steps mirror the WRF build; a combined example follows the list.

  • Get the WPS zipped tar file
    • WPSV2.2.TAR.gz from http://www2.mmm.ucar.edu/wrf/users/get_source.html
  • unzip and untar the file
    • gzip -cd WPSV2.2.TAR.gz | tar -xf -
  • cd WPS
  • ./configure
    • choose one of the options
    • usually, option "1" is for a serial build, which is the best choice for an initial test
  • ./compile
  • ls -ls *.exe
    • you should see geogrid.exe, ungrib.exe, and metgrid.exe
  • ls -ls util/*.exe
    • you should see a number of utility executables: avg_tsfc.exe, g1print.exe, g2print.exe, mod_levs.exe, plotfmt.exe, plotgrids.exe, and rd_intermediate.exe
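
As with WRF, a serial WPS build session might look like the following sketch (again, the tar file name and the serial configure option are examples):

        gzip -cd WPSV2.2.TAR.gz | tar -xf -
        cd WPS
        ./configure
        ./compile >& compile.log
        ls -ls *.exe util/*.exe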