Data assimilation is the technique by which observations are combined with a NWP product (the first guess or background forecast) and their respective error statistics to provide an improved estimate (the analysis) of the atmospheric (or oceanic, Jovian, whatever) state. Variational (Var) data assimilation achieves this through the iterative minimization of a prescribed cost (or penalty) function. Differences between the analysis and observations/first guess are penalized (damped) according to their perceived error. The difference between three-dimensional (3D-Var) and four-dimensional (4D-Var) data assimilation is the use of a numerical forecast model in the latter.
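For reference, the analysis is the model state that minimizes a cost (penalty) function of the standard variational form sketched below. This is a generic textbook form using the same symbols as the legend further down (xb, yo, B, R); the precise WRFDA formulation, including any additional penalty terms, is given in the Barker et al. (2004) and Huang et al. (2009) papers referenced later in this introduction:

J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\big(\mathbf{y}_o-H(\mathbf{x})\big)^{\mathrm{T}}\mathbf{R}^{-1}\big(\mathbf{y}_o-H(\mathbf{x})\big)

where x_b is the first guess (background), y_o are the observations, H is the observation operator, and B and R are the background and observation/representativeness error covariances, respectively. In 4D-Var, H additionally involves integrating the forecast model across the assimilation window.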
The MMM Division of NCAR supports a unified (global/regional, multi-model, 3/4D-Var) model-space data assimilation system (WRFDA) for use by NCAR staff and collaborators; it is also freely available to the general community, together with further documentation, test results, plans, etc., from the WRFDA web page: http://www2.mmm.ucar.edu/wrf/users/wrfda/Docs/user_guide_V3.2/users_guide_chap6.htm.
Various components of the WRFDA system are shown in blue in the sketch below, together with their relationship with the rest of the WRF system.
xb: first guess, either from a previous WRF forecast or from WPS/REAL output.
xlbc: lateral boundary conditions from WPS/REAL output.
xa: analysis from the WRFDA data assimilation system.
xf: WRF forecast output.
yo: observations processed by OBSPROC. (Note: PREPBUFR input, radar, and radiance data do not go through OBSPROC.)
B0: background error statistics, from generic BE data (CV3) or from gen_be.
R: observational and representativeness error statistics.
In this chapter, you will learn how to run the various components of the WRFDA system. For training purposes, you are supplied with a test case including the following input data: a) an observation file (in the format prior to OBSPROC), b) a WRF NetCDF background file (WPS/REAL output used as the first guess of the analysis), and c) background error statistics (an estimate of the errors in the background file). You can download the test dataset from http://www2.mmm.ucar.edu/wrf/users/wrfda/download/testdata.html. In your own work, you will have to create all these input files yourself. See the section Running Observation Preprocessor for creating your observation files, and the section Running gen_be for generating your background error statistics file if you want to use cv_options=5.
Before using your own data, we suggest that you start by running through the WRFDA-related programs at least once using the supplied test case. This serves two purposes: first, you can learn how to run the programs with data we have tested ourselves, and second, you can test whether your computer is adequate to run the entire modeling system. After you have finished the tutorial, you can try running other, more computationally intensive, case studies and experimenting with some of the many namelist variables.
WARNING: It is impossible to test every code upgrade with every permutation of computer, compiler, number of processors, case, namelist option, etc. The supported namelist options are indicated in WRFDA/var/README.namelist, and these are the default options.
Running with your own domain. Hopefully, our test cases will have prepared you for the variety of ways in which you may wish to run WRFDA. Please inform us about your experiences.
As a professional courtesy, we request that you include the following references in any publication that makes use of any component of the community WRFDA system:
Barker, D. M., W. Huang, Y.-R. Guo, and Q. N. Xiao, 2004: A Three-Dimensional Variational (3DVAR) Data Assimilation System for Use with MM5: Implementation and Initial Results. Mon. Wea. Rev., 132, 897-914.
Huang, X.-Y., Q. Xiao, D. M. Barker, X. Zhang, J. Michalakes, W. Huang, T. Henderson, J. Bray, Y. Chen, Z. Ma, J. Dudhia, Y. Guo, X. Zhang, D.-J. Won, H.-C. Lin, and Y.-H. Kuo, 2009: Four-Dimensional Variational Data Assimilation for WRF: Formulation and Preliminary Results. Mon. Wea. Rev., 137, 299-314.
Running WRFDA requires a Fortran 90 compiler. We have currently tested WRFDA on the following platforms: IBM (XLF), SGI Altix (INTEL), PC/Linux (PGI, INTEL, GFORTRAN), and Apple (G95/PGI). Please let us know if this does not meet your requirements, and we will attempt to add other machines to our list of supported architectures as resources allow. Although we are interested to hear of your experiences modifying compile options, we do not yet recommend making changes to the configure file used to compile WRFDA.
a. Obtaining WRFDA Source Code
Users can download the WRFDA source code from http://www2.mmm.ucar.edu/wrf/users/wrfda/download/get_source.html.
After the tar file is unzipped (gunzip WRFDAV3.2.tar.gz) and untarred (tar -xf WRFDAV3.2.tar), the directory WRFDA should be created; this directory contains the WRFDA source, external libraries, and fixed files. The following is a list of the system components and the content of each directory:
Directory Name | Content
var/da | WRFDA source code
var/run | Fixed input files required by WRFDA, such as background error covariances and radiance-related files (CRTM coefficients, radiance_info, and VARBC.in)
var/external | Libraries needed by WRFDA, including crtm, bufr, lapack, and blas
var/obsproc | OBSPROC source code, namelist, and observation error files
var/gen_be | Source code for generating background error statistics
var/build | Build directory where all the .exe files are created
b. Compile WRFDA and Libraries
Starting with V3.1.1, it is necessary to have the NetCDF library installed in order to compile the WRFDA code. The NetCDF library is the only mandatory library for installing WRFDA if only conventional observational data from LITTLE_R format files are to be used. If you intend to use observational data in PREPBUFR format, an environment variable needs to be set (using the C-shell):
> setenv BUFR 1
In addition to the BUFR library, if you intend to assimilate satellite radiance data with CRTM (V2.0.2), set:
> setenv CRTM 1
The CRTM will be compiled together with WRFDA; since CRTM V2.0.2, you no longer need to install the CRTM separately. However, if you intend to use RTTOV (8.7) to assimilate radiance data, RTTOV still has to be installed separately. RTTOV (8.7) can be downloaded from http://www.metoffice.gov.uk/science/creating/working_together/nwpsaf_public.html. The additional necessary environment variable is set (again using the C-shell) by a command looking something like:
> setenv RTTOV /usr/local/rttov87
(Note: make a link from $RTTOV/librttov.a to $RTTOV/src/librttov8.7.a.)
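Putting the above together, a typical pre-configure environment setup might look something like the following C-shell sketch. The NetCDF and RTTOV paths are purely illustrative and must point to your own installations, and the BUFR/CRTM/RTTOV variables are only needed if you want the corresponding capability:
> setenv NETCDF /usr/local/netcdf
> setenv BUFR 1
> setenv CRTM 1
> setenv RTTOV /usr/local/rttov87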
Note: Make sure the required libraries were all compiled using the same compiler that will be used to build WRFDA, since the libraries produced by one compiler may not be compatible with code compiled with another.
Assuming all required libraries are available and the WRFDA source code is ready, install WRFDA with the following steps.
To configure WRFDA, enter the WRFDA directory and type
> ./configure wrfda
A list of configuration options for your computer should appear. Each option combines a compiler type and a parallelism option; since the configuration script doesn’t check which compilers are actually available, be sure to only select among the options for compilers that are available on your system. The parallelism option allows for a single-processor (serial) compilation, shared-memory parallel (smpar) compilation, distributed-memory parallel (dmpar) compilation and distributed-memory with shared-memory parallel (sm+dm) compilation. For example, on a Macintosh computer, the above steps look like:
> ./configure wrfda
checking for perl5... no
checking for perl... found /usr/bin/perl (perl)
Will use NETCDF in dir: /users/noname/work/external/g95/netcdf-3.6.1
PHDF5 not set in environment. Will configure WRF for use without.
$JASPERLIB or $JASPERINC not found in environment, configuring to build without grib2 I/O...
------------------------------------------------------------------------
Please select from among the following supported platforms.
1. Darwin (MACOS) PGI compiler with pgcc (serial)
2. Darwin (MACOS) PGI compiler with pgcc (smpar)
3. Darwin (MACOS) PGI compiler with pgcc (dmpar)
4. Darwin (MACOS) PGI compiler with pgcc (dm+sm)
5. Darwin (MACOS) intel compiler with icc (serial)
6. Darwin (MACOS) intel compiler with icc (smpar)
7. Darwin (MACOS) intel compiler with icc (dmpar)
8. Darwin (MACOS) intel compiler with icc (dm+sm)
9. Darwin (MACOS) intel compiler with cc (serial)
10. Darwin (MACOS) intel compiler with cc (smpar)
11. Darwin (MACOS) intel compiler with cc (dmpar)
12. Darwin (MACOS) intel compiler with cc (dm+sm)
13. Darwin (MACOS) g95 with gcc (serial)
14. Darwin (MACOS) g95 with gcc (dmpar)
15. Darwin (MACOS) xlf (serial)
16. Darwin (MACOS) xlf (dmpar)
Enter selection [1-10] : 13
------------------------------------------------------------------------
Compile for nesting? (0=no nesting, 1=basic, 2=preset moves, 3=vortex following) [default 0]:
Configuration successful. To build the model type compile .
……
After running the configuration script and choosing a compilation option, a configure.wrf file will be created. Because of the variety of ways that a computer can be configured, if the WRFDA build ultimately fails, there is a chance that minor modifications to the configure.wrf file may be needed.
Note: WRF compiles with the -r4 option while WRFDA compiles with -r8. For this reason, WRF and WRFDA cannot reside and be compiled under the same directory.
Hint: It is helpful to start with something simple, such as the serial build. If it is successful, move on to build the dmpar code. Remember to type 'clean -a' between each build.
To compile the code, type
> ./compile all_wrfvar >&! compile.out
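Following the hint above, a minimal build-cycle sketch (serial first, then dmpar; the option numbers to choose are whatever your ./configure listing shows) might be:
> ./configure wrfda                           # choose a serial option
> ./compile all_wrfvar >&! compile.serial.out
> ./clean -a                                  # clean before switching build types
> ./configure wrfda                           # choose a dmpar option
> ./compile all_wrfvar >&! compile.dmpar.out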
Successful compilation of "all_wrfvar" will produce 32 executables in the var/build directory, which are linked into the var/da directory, as well as obsproc.exe in the var/obsproc/src directory. You can list these executables by issuing the following command (from the WRFDA directory):
> ls -l var/build/*exe var/obsproc/src/obsproc.exe
-rwxr-xr-x 1 noname users   641048 Mar 23 09:28 var/build/da_advance_time.exe
-rwxr-xr-x 1 noname users   954016 Mar 23 09:29 var/build/da_bias_airmass.exe
-rwxr-xr-x 1 noname users   721140 Mar 23 09:29 var/build/da_bias_scan.exe
-rwxr-xr-x 1 noname users   686652 Mar 23 09:29 var/build/da_bias_sele.exe
-rwxr-xr-x 1 noname users   700772 Mar 23 09:29 var/build/da_bias_verif.exe
-rwxr-xr-x 1 noname users   895300 Mar 23 09:29 var/build/da_rad_diags.exe
-rwxr-xr-x 1 noname users   742660 Mar 23 09:29 var/build/da_tune_obs_desroziers.exe
-rwxr-xr-x 1 noname users   942948 Mar 23 09:29 var/build/da_tune_obs_hollingsworth1.exe
-rwxr-xr-x 1 noname users   913904 Mar 23 09:29 var/build/da_tune_obs_hollingsworth2.exe
-rwxr-xr-x 1 noname users   943000 Mar 23 09:28 var/build/da_update_bc.exe
-rwxr-xr-x 1 noname users  1125892 Mar 23 09:29 var/build/da_verif_anal.exe
-rwxr-xr-x 1 noname users   705200 Mar 23 09:29 var/build/da_verif_obs.exe
-rwxr-xr-x 1 noname users 46602708 Mar 23 09:28 var/build/da_wrfvar.exe
-rwxr-xr-x 1 noname users  1938628 Mar 23 09:29 var/build/gen_be_cov2d.exe
-rwxr-xr-x 1 noname users  1938628 Mar 23 09:29 var/build/gen_be_cov3d.exe
-rwxr-xr-x 1 noname users  1930436 Mar 23 09:29 var/build/gen_be_diags.exe
-rwxr-xr-x 1 noname users  1942724 Mar 23 09:29 var/build/gen_be_diags_read.exe
-rwxr-xr-x 1 noname users  1941268 Mar 23 09:29 var/build/gen_be_ensmean.exe
-rwxr-xr-x 1 noname users  1955192 Mar 23 09:29 var/build/gen_be_ensrf.exe
-rwxr-xr-x 1 noname users  1979588 Mar 23 09:28 var/build/gen_be_ep1.exe
-rwxr-xr-x 1 noname users  1961948 Mar 23 09:28 var/build/gen_be_ep2.exe
-rwxr-xr-x 1 noname users  1945360 Mar 23 09:29 var/build/gen_be_etkf.exe
-rwxr-xr-x 1 noname users  1990936 Mar 23 09:28 var/build/gen_be_stage0_wrf.exe
-rwxr-xr-x 1 noname users  1955012 Mar 23 09:28 var/build/gen_be_stage1.exe
-rwxr-xr-x 1 noname users  1967296 Mar 23 09:28 var/build/gen_be_stage1_1dvar.exe
-rwxr-xr-x 1 noname users  1950916 Mar 23 09:28 var/build/gen_be_stage2.exe
-rwxr-xr-x 1 noname users  2160796 Mar 23 09:29 var/build/gen_be_stage2_1dvar.exe
-rwxr-xr-x 1 noname users  1942724 Mar 23 09:29 var/build/gen_be_stage2a.exe
-rwxr-xr-x 1 noname users  1950916 Mar 23 09:29 var/build/gen_be_stage3.exe
-rwxr-xr-x 1 noname users  1938628 Mar 23 09:29 var/build/gen_be_stage4_global.exe
-rwxr-xr-x 1 noname users  1938732 Mar 23 09:29 var/build/gen_be_stage4_regional.exe
-rwxr-xr-x 1 noname users  1094740 Mar 23 09:29 var/build/gen_be_vertloc.exe
-rwxr-xr-x 1 noname users  1752352 Mar 23 09:29 var/obsproc/src/obsproc.exe
da_wrfvar.exe is the main executable for running WRFDA. Make sure it is created after the compilation. Sometimes (unfortunately) it is possible that other utilities get successfully compiled, while the main da_wrfvar.exe fails; please check the compilation log file carefully to figure out the problem.
The basic gen_be utility for regional applications consists of gen_be_stage0_wrf.exe, gen_be_stage1.exe, gen_be_stage2.exe, gen_be_stage2a.exe, gen_be_stage3.exe, gen_be_stage4_regional.exe, and gen_be_diags.exe.
da_update_bc.exe is used for updating the WRF boundary conditions after a new WRFDA analysis is generated.
da_advance_time.exe is a very handy and useful tool for date/time manipulation. Type "da_advance_time.exe" to see its usage instructions.
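As an illustration only (check the printed usage message for the exact syntax and supported options), the basic "date offset" form of the tool might be invoked like this:
> da_advance_time.exe 2008020512 12     # prints the date 12 hours later, i.e. 2008020600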
In addition to the executables for running WRFDA and gen_be, obsproc.exe (the executable for preparing conventional observations for WRFDA) is also built by "./compile all_wrfvar".
If you use the BUFR and CRTM libraries, go to var/external/bufr and var/external/crtm to check whether libbufr.a and libcrtm.a were generated.
c. Clean Compilation
To remove all object files and executables, type:
clean
To remove all build files, including configure.wrfda, type:
clean -a
The clean command is recommended if compilation fails or if the configuration file is changed.
If you intend to run WRF 4D-Var, it is necessary to have installed the WRFNL (WRF nonlinear model) and WRFPLUS (WRF adjoint and tangent linear model). WRFNL is a modified version of WRF V3.2 and can only be used for 4D-Var purposes. WRFPLUS contains the adjoint and tangent linear models based on a simplified WRF model, which only includes some simple physical processes such as vertical diffusion and large-scale condensation.
To install WRFNL:
Download the WRF source code from http://www2.mmm.ucar.edu/wrf/users/download/get_source.html
> gzip -cd WRFV3.TAR.gz | tar -xf - ; mv WRFV3 WRFNL
Download the WRFNL patch from http://www2.mmm.ucar.edu/wrf/users/wrfda/download/wrfnl.html
> cd WRFNL
> gzip -cd WRFNL3.2_PATCH.tar.gz | tar -xf -
> ./configure
serial means single processor
dmpar means Distributed Memory Parallel (MPI)
smpar is not supported for 4D-Var
Please select 0 for the second option for no nesting
> ./compile em_real
> ls -ls main/*.exe
If you built the real-data case, you should see wrf.exe
To install WRFPLUS:
Download the WRFPLUS source code from http://www2.mmm.ucar.edu/wrf/users/wrfda/download/wrfplus.html
> gzip -cd WRFPLUS3.2.tar.gz | tar -xf -
> cd WRFPLUS
> ./configure wrfplus
serial means single processor
dmpar means Distributed Memory Parallel (MPI)
Note: WRFPLUS was tested on the following platforms:
IBM AIX: xlfrte 11.1.0.5
Linux: pgf90 6.2-5 64-bit target on x86-64 Linux (the environment variable PGHPF_ZMEM=yes is needed)
Mac OS (Intel): g95 0.91
> ./compile wrf
> ls -ls main/*.exe
You should see wrfplus.exe
The OBSPROC program reads observations in LITTLE_R format (a legacy ASCII format, in use since the MM5 era). Please refer to the documentation at http://www2.mmm.ucar.edu/mm5/mm5v3/data/how_to_get_rawdata.html for a description of the LITTLE_R format. For your own applications, you will have to prepare your own observation files. Please see http://www2.mmm.ucar.edu/mm5/mm5v3/data/free_data.html for sources of some freely available observations and for the program that converts the observations to LITTLE_R format. Raw observation data files can come in many formats, such as ASCII, BUFR, PREPBUFR, MADIS, HDF, etc., and for each format there may be different versions. To keep the WRFDA system as general as possible, the LITTLE_R ASCII format was adopted as the intermediate observation data format for WRFDA, with some extensions made for WRFDA applications. A more complete description of the LITTLE_R format and of conventional observation data sources for WRFDA can be found on the 2010 Winter Tutorial web page, under "Observation Pre-processing". Converting data from user-specific sources into LITTLE_R format observation files is the user's task.
The purposes of OBSPROC are to:
· Remove observations outside the time range and domain (horizontal and top).
· Re-order and merge duplicate (in time and location) data reports.
· Retrieve pressure or height based on the observed information, using the hydrostatic assumption.
· Check vertical consistency and superadiabatic conditions for multi-level observations.
· Assign observational errors based on a pre-specified error file.
· Write out the observation file to be used by WRFDA in ASCII or BUFR format.
The OBSPROC program, obsproc.exe, should be found in the directory WRFDA/var/obsproc/src if "compile all_wrfvar" completed successfully.
a. Prepare observational data for 3D-Var
To prepare the observation file at the analysis time, for example 0h for 3D-Var, all the observations between ±1h (or ±1.5h) will be processed, as illustrated in the following figure; this means that observations between 23h and 1h are treated as observations at 0h.
Before running obsproc.exe, create the required namelist file namelist.obsproc (see WRFDA/var/obsproc/README.namelist, or the section Description of Namelist Variables for details).
For your reference, an example file named "namelist.obsproc.3dvar.wrfvar-tut" has already been created in the var/obsproc directory. Thus, proceed as follows:
> cp namelist.obsproc.3dvar.wrfvar-tut namelist.obsproc
Next, edit the namelist file namelist.obsproc by changing the following variables to accommodate your experiments.
&record1
 obs_gts_filename = 'obs.2008020512',
&record2
 time_window_min = '2008-02-05_11:00:00',  : The earliest time edge as ccyy-mm-dd_hh:mn:ss
 time_analysis   = '2008-02-05_12:00:00',  : The analysis time as ccyy-mm-dd_hh:mn:ss
 time_window_max = '2008-02-05_13:00:00',  : The latest time edge as ccyy-mm-dd_hh:mn:ss
&record6,7,8
 Edit all the domain settings in accordance with your own experiment. Pay special attention to NESTIX and NESTJX, which are described in the section Description of Namelist Variables (see also the sketch following this namelist excerpt).
&record9
 use_for = '3DVAR',  ; used for 3D-Var, the default
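For the single-domain tutorial case, the domain record highlighted above might look roughly like the sketch below. This is a hypothetical illustration only: check the actual variable names and layout against WRFDA/var/obsproc/README.namelist and the supplied namelist.obsproc.3dvar.wrfvar-tut; the values simply mirror the tutorial domain (60 x 90 grid points, 60 km grid spacing) reported in the observation file header shown later in this section.
&record8
 idd    = 1,
 maxnes = 1,
 nestix = 60,
 nestjx = 90,
 dis    = 60.0,
 numc   = 1,
 nesti  = 1,
 nestj  = 1,
/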
To run OBSPROC, type
> obsproc.exe >&! obsproc.out
Once obsproc.exe has completed successfully, you will see an observation data file, obs_gts_2008-02-05_12:00:00.3DVAR, in the obsproc directory. This is the input observation file to WRFDA.
obs_gts_2008-02-05_12:00:00.3DVAR is an ASCII file that contains a header section (listed below) followed by observations. The meanings and format of observations in the file are described in the last six lines of the header section.
TOTAL =   9066, MISS. =-888888.,
SYNOP =    757, METAR =   2416, SHIP  =    145, BUOY  =    250, BOGUS =      0, TEMP  =     86,
AMDAR =     19, AIREP =    205, TAMDAR=      0, PILOT =     85, SATEM =    106, SATOB =   2556,
GPSPW =    187, GPSZD =      0, GPSRF =      3, GPSEP =      0, SSMT1 =      0, SSMT2 =      0,
TOVS  =      0, QSCAT =   2190, PROFL =     61, AIRSR =      0, OTHER =      0,
PHIC  =  40.00, XLONC = -95.00, TRUE1 =  30.00, TRUE2 =  60.00, XIM11 =   1.00, XJM11 =   1.00,
base_temp= 290.00, base_lapse= 50.00, PTOP = 1000., base_pres=100000., base_tropo_pres= 20000., base_strat_temp= 215.,
IXC   =     60, JXC   =     90, IPROJ =      1, IDD   =      1, MAXNES=      1,
NESTIX=     60, NESTJX=     90, NUMC  =      1, DIS   =  60.00, NESTI =      1, NESTJ =      1,
INFO  = PLATFORM, DATE, NAME, LEVELS, LATITUDE, LONGITUDE, ELEVATION, ID.
SRFC  = SLP, PW (DATA,QC,ERROR).
EACH  = PRES, SPEED, DIR, HEIGHT, TEMP, DEW PT, HUMID (DATA,QC,ERROR)*LEVELS.
INFO_FMT = (A12,1X,A19,1X,A40,1X,I6,3(F12.3,11X),6X,A40)
SRFC_FMT = (F12.3,I4,F7.2,F12.3,I4,F7.3)
EACH_FMT = (3(F12.3,I4,F7.2),11X,3(F12.3,I4,F7.2),11X,3(F12.3,I4,F7.2))
#------------------------------------------------------------------------------#
……
observations ………
Before running WRFDA, you may like to learn more about various types of data that will be passed to WRFDA for this case, for example, their geographical distribution, etc. This file is in ASCII format and so you can easily view it. To have a graphical view about the content of this file, there is a “MAP_plot” utility to look at the data distribution for each type of observations. To use this utility, proceed as follows.
> cd MAP_plot
> make
We have prepared configure.user.ibm/linux/mac/... files for several platforms; when "make" is typed, the Makefile will use one of them to determine the compiler and compiler options. Please modify the Makefile and configure.user.xxx to accommodate the compiler on your platform. Successful compilation will produce Map.exe. Note: successful compilation of Map.exe requires pre-installed NCAR Graphics libraries under $(NCARG_ROOT)/lib.
Modify the script Map.csh to set the time window and full path of input observation file (obs_gts_2008-02-05_12:00:00.3DVAR). You will need to set the following strings in this script as follows:
Map_plot = /users/noname/WRFDA/var/obsproc/MAP_plot
TIME_WINDOW_MIN = '2008020511'
TIME_ANALYSIS   = '2008020512'
TIME_WINDOW_MAX = '2008020513'
OBSDATA = ../obs_gts_2008-02-05_12:00:00.3DVAR
Next, type
> Map.csh
When the job has completed, you will have a gmeta file, gmeta.{analysis_time}, corresponding to analysis_time=2008020512. This contains plots of the data distribution for each type of observation contained in the OBS data file obs_gts_2008-02-05_12:00:00.3DVAR. To view this, type
> idt gmeta.2008020512
It will display (panel by panel) geographical distribution of various types of data. Following is the geographic distribution of “sonde” observations for this case.
There is an alternative way to plot the observations, using the NCL script WRFDA/var/graphics/ncl/plot_ob_ascii_loc.ncl. However, with this method you need to provide the first guess file to the NCL script and have NCL installed on your system.
b. Prepare observational data for 4D-Var
To prepare the observation files at the analysis time, for example 0h for 4D-Var, all observations from 0h to 6h will be processed and grouped into 7 sub-windows from slot1 to slot7, as illustrated in the following figure. NOTE: The "analysis time" in the figure below is not the actual analysis time (0h); it just indicates the time_analysis setting in the namelist file, and is set to three hours later than the actual analysis time. The actual analysis time is still 0h.
An example file named "namelist.obsproc.4dvar.wrfvar-tut" has already been created in the var/obsproc directory. Thus, proceed as follows:
> cp namelist.obsproc.4dvar.wrfvar-tut namelist.obsproc
In the namelist file, you need to change the following variables to accommodate your experiments. In this test case, the actual analysis time is 2008-02-05_12:00:00, but in the namelist, time_analysis should be set 3 hours later. A different value of time_analysis simply changes the number of time slots before and after time_analysis: for example, if you set time_analysis = 2008-02-05_16:00:00 with num_slots_past = 4 and num_slots_ahead = 2, the final result will be the same as before.
&record1
 obs_gts_filename = 'obs.2008020512',
&record2
 time_window_min = '2008-02-05_12:00:00',  : The earliest time edge as ccyy-mm-dd_hh:mn:ss
 time_analysis   = '2008-02-05_15:00:00',  : The analysis time as ccyy-mm-dd_hh:mn:ss
 time_window_max = '2008-02-05_18:00:00',  : The latest time edge as ccyy-mm-dd_hh:mn:ss
&record6,7,8
 Edit all the domain settings in accordance with your own experiment. Pay special attention to NESTIX and NESTJX, which are described in the section Description of Namelist Variables.
&record9
 use_for = '4DVAR',  ; used for 4D-Var
 ; num_slots_past and num_slots_ahead are used ONLY for FGAT and 4DVAR:
 num_slots_past  = 3,  ; the number of time slots before time_analysis
 num_slots_ahead = 3,  ; the number of time slots after time_analysis
To run OBSPROC, type
> obsproc.exe >&! obsproc.out
Once obsproc.exe has completed successfully, you will see 7 observation data files:
obs_gts_2008-02-05_12:00:00.4DVAR
obs_gts_2008-02-05_13:00:00.4DVAR
obs_gts_2008-02-05_14:00:00.4DVAR
obs_gts_2008-02-05_15:00:00.4DVAR
obs_gts_2008-02-05_16:00:00.4DVAR
obs_gts_2008-02-05_17:00:00.4DVAR
obs_gts_2008-02-05_18:00:00.4DVAR
These are the input observation files for WRF 4D-Var. You can also use "MAP_plot" to view the geographic distribution of the different observations in each time slot.
The WRFDA system requires three input files to run:
a) A WRF first guess file and a lateral boundary file, output from either WPS/real (cold start) or a previous WRF forecast (warm start)
b) Observations (in ASCII format or PREPBUFR; BUFR for radiances)
c) A background error statistics file (containing background error covariance)
The following table summarizes the above information:
Input Data | Format | Created By
First Guess | NETCDF | WRF Preprocessing System (WPS) and real.exe, or WRF
Observations | ASCII (PREPBUFR also possible) | Observation Preprocessor (OBSPROC)
Background Error Statistics | Binary | gen_be / default CV3
In the test case, you will store data in a directory defined by the environment variable $DAT_DIR. This directory can be at any location and it should have read access. Type
> setenv DAT_DIR your_choice_of_dat_dir
Here, "your_choice_of_dat_dir" is the directory where the WRFDA input data is stored. Create this directory if it does not exist, and type
> cd $DAT_DIR
Download the test data for a “Tutorial” case valid at 12 UTC 5th February 2008 from http://www2.mmm.ucar.edu/wrf/users/wrfda/download/testdata.html
Once you have downloaded “WRFDAV3.2-testdata.tar.gz” file to $DAT_DIR, extract it by typing
> gunzip WRFDAV3.2-testdata.tar.gz
> tar -xvf WRFDAV3.2-testdata.tar
Now you should find the following three sub-directories/files under “$DAT_DIR”
ob/2008020512/ob.2008020512.gz # Observation data in “little_r” format
rc/2008020512/wrfinput_d01 # First guess file
rc/2008020512/wrfbdy_d01 # lateral boundary file
be/be.dat # Background error file
......
You should first go through the section "Running Observation Preprocessor (OBSPROC)" and have a WRF 3D-Var-ready observation file (obs_gts_2008-02-05_12:00:00.3DVAR) generated in your OBSPROC working directory. You can then copy or move obs_gts_2008-02-05_12:00:00.3DVAR to $DAT_DIR/ob/2008020512/ob.ascii.
If you want to try 4D-Var, please go through the section "Running Observation Preprocessor (OBSPROC)" and have the WRF 4D-Var-ready observation files (obs_gts_2008-02-05_12:00:00.4DVAR, ......) generated. You can then copy or move the observation files to $DAT_DIR/ob using the following commands:
> mv obs_gts_2008-02-05_12:00:00.4DVAR $DAT_DIR/ob/2008020512/ob.ascii+
> mv obs_gts_2008-02-05_13:00:00.4DVAR $DAT_DIR/ob/2008020513/ob.ascii
> mv obs_gts_2008-02-05_14:00:00.4DVAR $DAT_DIR/ob/2008020514/ob.ascii
> mv obs_gts_2008-02-05_15:00:00.4DVAR $DAT_DIR/ob/2008020515/ob.ascii
> mv obs_gts_2008-02-05_16:00:00.4DVAR $DAT_DIR/ob/2008020516/ob.ascii
> mv obs_gts_2008-02-05_17:00:00.4DVAR $DAT_DIR/ob/2008020517/ob.ascii
> mv obs_gts_2008-02-05_18:00:00.4DVAR $DAT_DIR/ob/2008020518/ob.ascii-
At this point you have the three input files (first guess, observation, and background error statistics files in the directory $DAT_DIR) required to run WRFDA, and you have successfully downloaded and compiled the WRFDA code. If this is correct, you are ready to learn how to run WRFDA.
The data for this case is valid at 12 UTC 5th February 2008. The first guess comes from the NCEP FNL (Final) Operational Global Analysis data, passed through the WRF-WPS and real programs.
To run WRF 3D-Var, first create and cd to a working directory, for example WRFDA/var/test/tutorial, and then follow the steps below:
> cd WRFDA/var/test/tutorial
> ln -sf WRFDA/run/LANDUSE.TBL ./LANDUSE.TBL
> ln -sf $DAT_DIR/rc/2008020512/wrfinput_d01 ./fg   (link the first guess file as fg)
> ln -sf WRFDA/var/obsproc/obs_gts_2008-02-05_12:00:00.3DVAR ./ob.ascii   (link the OBSPROC-processed observation file as ob.ascii)
> ln -sf $DAT_DIR/be/be.dat ./be.dat   (link the background error statistics as be.dat)
> ln -sf WRFDA/var/da/da_wrfvar.exe ./da_wrfvar.exe   (link the executable)
We will begin by editing the file namelist.input. A very basic namelist.input for running the tutorial test case is shown below and provided as WRFDA/var/test/tutorial/namelist.input. Only the time and domain settings need to be specified in this case if we are using the default settings provided in WRFDA/Registry/Registry.wrfvar.
&wrfvar1
print_detail_grad=false,
/
&wrfvar2
/
&wrfvar3
/
&wrfvar4
/
&wrfvar5
/
&wrfvar6
/
&wrfvar7
/
&wrfvar8
/
&wrfvar9
/
&wrfvar10
/
&wrfvar11
/
&wrfvar12
/
&wrfvar13
/
&wrfvar14
/
&wrfvar15
/
&wrfvar16
/
&wrfvar17
/
&wrfvar18
analysis_date="2008-02-05_12:00:00.0000",
/
&wrfvar19
/
&wrfvar20
/
&wrfvar21
time_window_min="2008-02-05_11:00:00.0000",
/
&wrfvar22
time_window_max="2008-02-05_13:00:00.0000",
/
&wrfvar23
/
&time_control
start_year=2008,
start_month=02,
start_day=05,
start_hour=12,
end_year=2008,
end_month=02,
end_day=05,
end_hour=12,
/
&dfi_control
/
&domains
e_we=90,
e_sn=60,
e_vert=41,
dx=60000,
dy=60000,
/
&physics
mp_physics=3,
ra_lw_physics=1,
ra_sw_physics=1,
radt=60,
sf_sfclay_physics=1,
sf_surface_physics=1,
bl_pbl_physics=1,
cu_physics=1,
cudt=5,
num_soil_layers=5,   (IMPORTANT: it is essential to make sure this setting is consistent with the number in your first guess file)
mp_zero_out=2,
co2tf=0,
/
&fdda
/
&dynamics
/
&bdy_control
/
&grib2
/
&namelist_quilt
/
To run WRFDA 3D-Var, type
> da_wrfvar.exe >&! wrfda.log
The file wrfda.log (or rsl.out.0000 if run in distributed-memory mode) contains important WRFDA runtime log information. Always check the log after a WRFDA run:
*** VARIATIONAL ANALYSIS ***
DYNAMICS OPTION: Eulerian Mass Coordinate
WRF NUMBER OF TILES = 1
Set up observations (ob)
Using ASCII format observation input
scan obs ascii
end scan obs ascii
Observation summary
ob time 1
sound              85 global,    85 local
synop             531 global,   525 local
pilot              84 global,    84 local
satem              78 global,    78 local
geoamv            736 global,   719 local
polaramv            0 global,     0 local
airep             132 global,   131 local
gpspw             183 global,   183 local
gpsrf               0 global,     0 local
metar            1043 global,  1037 local
ships              86 global,    82 local
ssmi_rv             0 global,     0 local
ssmi_tb             0 global,     0 local
ssmt1               0 global,     0 local
ssmt2               0 global,     0 local
qscat               0 global,     0 local
profiler           61 global,    61 local
buoy              216 global,   216 local
bogus               0 global,     0 local
pseudo              0 global,     0 local
radar               0 global,     0 local
radiance            0 global,     0 local
airs retrieval      0 global,     0 local
sonde_sfc          85 global,    85 local
mtgirs              0 global,     0 local
tamdar              0 global,     0 local
Set up background errors for regional application
WRF-Var dry control variables are: psi, chi_u, t_u and psfc
Humidity control variable is q/qsg
Using the averaged regression coefficients for unbalanced part
Vertical truncation for psi   = 15 ( 99.00%)
Vertical truncation for chi_u = 20 ( 99.00%)
Vertical truncation for t_u   = 29 ( 99.00%)
Vertical truncation for rh    = 22 ( 99.00%)
Calculate innovation vector(iv)
Minimize cost function using CG method
For this run cost function diagnostics will not be written
Starting outer iteration : 1
Starting cost function: 2.28356084D+04, Gradient= 2.23656955D+02
For this outer iteration gradient target is:  2.23656955D+00
----------------------------------------------------------
Iter    Gradient            Step
  1    1.82455068D+02    7.47025772D-02
  2    1.64971618D+02    8.05531077D-02
  3    1.13694365D+02    7.22382618D-02
  4    7.87359568D+01    7.51905761D-02
  5    5.71607218D+01    7.94572516D-02
  6    4.18746777D+01    8.30731280D-02
  7    2.95722963D+01    6.13223951D-02
  8    2.34205172D+01    9.05920463D-02
  9    1.63772518D+01    6.48090044D-02
 10    1.09735524D+01    7.71148550D-02
 11    8.22748934D+00    8.81041046D-02
 12    5.65846963D+00    7.89528133D-02
 13    4.15664769D+00    7.45589721D-02
 14    3.16925808D+00    8.35300020D-02
----------------------------------------------------------
Inner iteration stopped after 15 iterations
Final: 15 iter, J= 1.76436785D+04, g= 2.06098421D+00
----------------------------------------------------------
Diagnostics
Final cost function J    =      17643.68
Total number of obs.     =         26726
Final value of J         =   17643.67853
Final value of Jo        =   15284.64894
Final value of Jb        =    2359.02958
Final value of Jc        =       0.00000
Final value of Je        =       0.00000
Final value of Jp        =       0.00000
Final J / total num_obs  =       0.66017
Jb factor used(1)        =       1.00000
Jb factor used(2)        =       1.00000
Jb factor used(3)        =       1.00000
Jb factor used(4)        =       1.00000
Jb factor used(5)        =       1.00000
Jb factor used           =       1.00000
Je factor used           =       1.00000
VarBC factor used        =       1.00000
*** WRF-Var completed successfully ***
A file called namelist.output (which contains the complete namelist settings) will be generated after a successful da_wrfvar.exe run. The settings appearing in namelist.output, but not specified in your namelist.input, are the default values from WRFDA/Registry/Registry.wrfvar.
After successful completion of the job, wrfvar_output (the WRFDA analysis file, i.e. the new initial condition for WRF) should appear in the working directory, along with a number of diagnostic files. The various text diagnostic output files will be explained in the next section (WRFDA Diagnostics).
In order to understand the role of various important WRFDA options, try re-running WRFDA with different namelist options, such as making the WRFDA convergence criterion more stringent. This is achieved by reducing the value of the convergence criterion "EPS", e.g. to 0.0001, by adding "EPS=0.0001" to the &wrfvar6 record of namelist.input, as sketched below. See the section (WRFDA additional exercises) for more namelist options.
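A minimal sketch of that change (only the added entry is shown; all other &wrfvar6 settings keep their Registry defaults):
&wrfvar6
 eps=0.0001,
/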
To run WRF 4D-Var, first create and cd to a working directory, for example WRFDA/var/test/4dvar. Next, assuming that we are using the C-shell, set the working directories for the three WRF 4D-Var components (WRFDA, WRFNL and WRFPLUS) as follows:
> setenv WRFDA_DIR /ptmp/$user/WRFDA
> setenv WRFNL_DIR /ptmp/$user/WRFNL
> setenv WRFPLUS_DIR /ptmp/$user/WRFPLUS
Assume the analysis date is 2008020512 and the test data directories are:
> setenv DATA_DIR /ptmp/$user/DATA
> ls -lr $DATA_DIR
ob/2008020512
ob/2008020513
ob/2008020514
ob/2008020515
ob/2008020516
ob/2008020517
ob/2008020518
rc/2008020512
be
Note: Currently, WRF 4D-Var can only run with observation data processed by OBSPROC, and cannot work with PREPBUFR-format data. Although WRF 4D-Var is able to ingest satellite radiance BUFR data, this capability is still under testing.
Assume the working directory is:
> setenv WORK_DIR $WRFDA_DIR/var/test/4dvar
Then follow the steps below:
1) Link the executables.
> cd $WORK_DIR
> ln -fs $WRFDA_DIR/var/da/da_wrfvar.exe .
> cd $WORK_DIR/nl
> ln -fs $WRFNL_DIR/main/wrf.exe .
> cd $WORK_DIR/ad
> ln -fs $WRFPLUS_DIR/main/wrfplus.exe .
> cd $WORK_DIR/tl
> ln -fs $WRFPLUS_DIR/main/wrfplus.exe .
2) Link the observational data, first guess and BE. (Currently, only LITTLE_R-formatted observational data is supported in 4D-Var; PREPBUFR observational data is not supported.)
> cd $WORK_DIR
> ln -fs $DATA_DIR/ob/2008020512/ob.ascii+ ob01.ascii
> ln -fs $DATA_DIR/ob/2008020513/ob.ascii ob02.ascii
> ln -fs $DATA_DIR/ob/2008020514/ob.ascii ob03.ascii
> ln -fs $DATA_DIR/ob/2008020515/ob.ascii ob04.ascii
> ln -fs $DATA_DIR/ob/2008020516/ob.ascii ob05.ascii
> ln -fs $DATA_DIR/ob/2008020517/ob.ascii ob06.ascii
> ln -fs $DATA_DIR/ob/2008020518/ob.ascii- ob07.ascii
> ln -fs $DATA_DIR/rc/2008020512/wrfinput_d01 .
> ln -fs $DATA_DIR/rc/2008020512/wrfbdy_d01 .
> ln -fs wrfinput_d01 fg
> ln -fs wrfinput_d01 fg01
> ln -fs $DATA_DIR/be/be.dat .
3) Establish the miscellaneous links.
> cd $WORK_DIR
> ln -fs nl/nl_d01_2008-02-05_13:00:00 fg02
> ln -fs nl/nl_d01_2008-02-05_14:00:00 fg03
> ln -fs nl/nl_d01_2008-02-05_15:00:00 fg04
> ln -fs nl/nl_d01_2008-02-05_16:00:00 fg05
> ln -fs nl/nl_d01_2008-02-05_17:00:00 fg06
> ln -fs nl/nl_d01_2008-02-05_18:00:00 fg07
> ln -fs ad/ad_d01_2008-02-05_12:00:00 gr01
> ln -fs tl/tl_d01_2008-02-05_13:00:00 tl02
> ln -fs tl/tl_d01_2008-02-05_14:00:00 tl03
> ln -fs tl/tl_d01_2008-02-05_15:00:00 tl04
> ln -fs tl/tl_d01_2008-02-05_16:00:00 tl05
> ln -fs tl/tl_d01_2008-02-05_17:00:00 tl06
> ln -fs tl/tl_d01_2008-02-05_18:00:00 tl07
> cd $WORK_DIR/ad
> ln -fs ../af01 auxinput3_d01_2008-02-05_12:00:00
> ln -fs ../af02 auxinput3_d01_2008-02-05_13:00:00
> ln -fs ../af03 auxinput3_d01_2008-02-05_14:00:00
> ln -fs ../af04 auxinput3_d01_2008-02-05_15:00:00
> ln -fs ../af05 auxinput3_d01_2008-02-05_16:00:00
> ln -fs ../af06 auxinput3_d01_2008-02-05_17:00:00
> ln -fs ../af07 auxinput3_d01_2008-02-05_18:00:00
4) Run in single processor mode (serial compilation required for WRFDA, WRFNL and WRFPLUS)
Edit $WORK_DIR/namelist.input to match your experiment settings.
> cp $WORK_DIR/nl/namelist.input.serial $WORK_DIR/nl/namelist.input
Edit $WORK_DIR/nl/namelist.input to match your experiment settings.
> cp $WORK_DIR/ad/namelist.input.serial $WORK_DIR/ad/namelist.input
> cp $WORK_DIR/tl/namelist.input.serial $WORK_DIR/tl/namelist.input
Edit $WORK_DIR/ad/namelist.input and $WORK_DIR/tl/namelist.input to match your experiment settings, but only change following variables:
&time_control
run_hours=06,
start_year=2008,
start_month=02,
start_day=05,
start_hour=12,
end_year=2008,
end_month=02,
end_day=05,
end_hour=18,
......
&domains
time_step=360,
# NOTE: must be the same as in $WORK_DIR/nl/namelist.input
e_we=90,
e_sn=60,
e_vert=41,
dx=60000,
dy=60000,
......
> cd $WORK_DIR
> setenv NUM_PROCS 1
> ./da_wrfvar.exe >&! wrfda.log
5) Run with multiple processors in MPMD mode (dmpar compilation required for WRFDA, WRFNL and WRFPLUS).
Edit $WORK_DIR/namelist.input to match your experiment settings.
> cp $WORK_DIR/nl/namelist.input.parallel $WORK_DIR/nl/namelist.input
Edit $WORK_DIR/nl/namelist.input to match your experiment settings.
> cp $WORK_DIR/ad/namelist.input.parallel $WORK_DIR/ad/namelist.input
> cp $WORK_DIR/tl/namelist.input.parallel $WORK_DIR/tl/namelist.input
Edit $WORK_DIR/ad/namelist.input and $WORK_DIR/tl/namelist.input to match your experiment settings.
Currently, parallel WRF 4D-Var is an MPMD (Multiple Program Multiple Data) application. Because there are so many parallel configurations across platforms, it is difficult to define a generic way to run WRF 4D-Var in parallel. As an example, to launch the three WRF 4D-Var executables as a concurrent parallel job on a 16-processor cluster, use:
> mpirun -np 4 da_wrfvar.exe : -np 8 ad/wrfplus.exe : -np 4 nl/wrf.exe
In the above example, 4 processors are assigned to run WRFDA, 4 processors are assigned to run WRFNL, and 8 processors are assigned to WRFPLUS due to the high computational cost of the adjoint code.
The file wrfda.log (or rsl.out.0000 if running in parallel mode) contains important WRF-4DVar runtime log information. Always check the log after a WRF-4DVar run.
This section gives a brief description of various aspects of radiance assimilation in WRFDA. Each aspect is described mainly from the viewpoint of usage, rather than the more technical and scientific details, which will appear in a separate technical report and scientific paper. Namelist parameters controlling different aspects of radiance assimilation are detailed in the following sections. It should be noted that this section does not cover general aspects of WRFDA assimilation; these can be found in other sections of chapter 6 of this User's Guide or in other WRFDA documentation.
a. Running WRFDA with radiances
In addition to the basic input files (LANDUSE.TBL, fg, ob.ascii, be.dat) mentioned in the "Running WRFDA" section, the following extra files are required for radiances: radiance data in NCEP BUFR format, radiance_info files, VARBC.in, and RTM (CRTM or RTTOV) coefficient files.
Edit namelist.input, paying special attention to &wrfvar4, &wrfvar14, &wrfvar21, and &wrfvar22 for radiance-related options. (A very basic namelist.input for running the radiance test case is provided as WRFDA/var/test/radiance/namelist.input.)
> ln -sf ${DAT_DIR}/gdas1.t00z.1bamua.tm00.bufr_d ./amsua.bufr
> ln -sf ${DAT_DIR}/gdas1.t00z.1bamub.tm00.bufr_d ./amsub.bufr
> ln -sf WRFDA/var/run/radiance_info ./radiance_info   # (radiance_info is a directory)
> ln -sf WRFDA/var/run/VARBC.in ./VARBC.in
(CRTM only)  > ln -sf WRFDA/var/run/crtm_coeffs ./crtm_coeffs   # (crtm_coeffs is a directory)
(RTTOV only) > ln -sf rttov87/rtcoef_rttov7/* .   # (a list of rtcoef* files)
See the following sections for more details on each aspect.
b. Radiance Data Ingest
Currently, the ingest interface for NCEP BUFR radiance data is implemented in WRFDA. The radiance data are available through NCEP's public ftp server (ftp://ftp.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gdas.${yyyymmddhh}) in near real-time (with a 6-hour delay) and can meet the requirements of both research purposes and some real-time applications.
So far, WRFDA can read data from the NOAA ATOVS instruments (HIRS, AMSU-A, AMSU-B and MHS), the EOS Aqua instruments (AIRS, AMSU-A) and DMSP instruments (SSMIS). Note that NCEP radiance BUFR files are separated by instrument name (i.e., one file per instrument type) and each file contains global radiances (generally converted to brightness temperature) within a 6-hour assimilation window from multiple platforms. For running WRFDA, users need to rename the corresponding NCEP BUFR files (Table 1) to hirs3.bufr (HIRS data from NOAA-15/16/17), hirs4.bufr (HIRS data from NOAA-18, METOP-2), amsua.bufr (AMSU-A data from NOAA-15/16/18, METOP-2), amsub.bufr (AMSU-B data from NOAA-15/16/17), mhs.bufr (MHS data from NOAA-18 and METOP-2), airs.bufr (AIRS and AMSU-A data from EOS-AQUA) and ssmis.bufr (SSMIS data from DMSP-16, provided by AFWA), following the WRFDA filename convention. Note that the airs.bufr file contains not only AIRS data but also AMSU-A data collocated with AIRS pixels (1 AMSU-A pixel collocated with 9 AIRS pixels). Users must place these files in the working directory where the WRFDA executable is located. It should also be mentioned that WRFDA reads these BUFR radiance files directly, without any separate pre-processing program; all processing of radiance data, such as quality control, thinning and bias correction, is carried out inside WRFDA. This is different from conventional observation assimilation, which requires a pre-processing package (OBSPROC) to generate WRFDA-readable ASCII files. For reading the radiance BUFR files, WRFDA must be compiled with the NCEP BUFR library (see http://www.nco.ncep.noaa.gov/sib/decoders/BUFRLIB/).
Table 1: NCEP and WRFDA radiance BUFR file naming convention
NCEP BUFR file name | WRFDA naming convention
gdas1.t00z.1bamua.tm00.bufr_d | amsua.bufr
gdas1.t00z.1bamub.tm00.bufr_d | amsub.bufr
gdas1.t00z.1bhrs3.tm00.bufr_d | hirs3.bufr
gdas1.t00z.1bhrs4.tm00.bufr_d | hirs4.bufr
gdas1.t00z.1bmhs.tm00.bufr_d | mhs.bufr
gdas1.t00z.airsev.tm00.bufr_d | airs.bufr
Namelist parameters are used to control the reading of the corresponding BUFR files into WRFDA. For instance, USE_AMSUAOBS, USE_AMSUBOBS, USE_HIRS3OBS, USE_HIRS4OBS, USE_MHSOBS, USE_AIRSOBS, USE_EOS_AMSUAOBS and USE_SSMISOBS control whether or not the respective file is read. These are logical parameters that are set to FALSE by default; they must therefore be set to TRUE to read the respective observation file (see the sketch below). Also note that these parameters only control whether the data are read, not whether the data included in the files are to be assimilated; this is controlled by other namelist parameters explained in the next section.
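For example, a sketch of switching on the reading of the AMSU-A and AMSU-B BUFR files might look like the following, assuming these switches live in the &wrfvar4 record (which the radiance section above lists among the radiance-related records); all other switches keep their FALSE defaults:
&wrfvar4
 use_amsuaobs=true,
 use_amsubobs=true,
/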
NCEP BUFR files downloaded from NCEP's public ftp server (ftp://ftp.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/gdas.${yyyymmddhh}) are Fortran-blocked for big-endian machines and can be used directly on big-endian machines (for example, IBM). For most Linux clusters with Intel platforms, users need to first unblock the BUFR files and then re-block them. The utility for blocking/unblocking is available from http://www.nco.ncep.noaa.gov/sib/decoders/BUFRLIB/toc/cwordsh
c. Radiative Transfer Model
The core component of direct radiance assimilation is the incorporation of a radiative transfer model (RTM, which should be both sufficiently accurate and fast) into the WRFDA system as part of the observation operators. Two RTMs widely used in the NWP community, RTTOV8 (developed by EUMETSAT in Europe) and CRTM (developed by the Joint Center for Satellite Data Assimilation (JCSDA) in the US), are already implemented in the WRFDA system with a flexible and consistent user interface. Selecting which RTM to use is controlled by a simple namelist parameter, RTM_OPTION (1 for RTTOV, the default, and 2 for CRTM). WRFDA is designed to be able to compile with only one of the two RTM libraries, or without any RTM library (for those not interested in radiance assimilation), through the definition of the environment variables "CRTM" and "RTTOV" (see the Installing WRFDA section).
Both RTMs can calculate radiances for almost all available instruments aboard the various satellite platforms in orbit. An important feature of the WRFDA design is that all data structures related to radiance assimilation are dynamically allocated at run time according to a simple namelist setup. The instruments to be assimilated are controlled at run time by four integer namelist parameters: RTMINIT_NSENSOR (the total number of sensors to be assimilated), RTMINIT_PLATFORM (the array of platform IDs to be assimilated, with dimension RTMINIT_NSENSOR; e.g., 1 for NOAA, 9 for EOS, 10 for METOP and 2 for DMSP), RTMINIT_SATID (the array of satellite IDs) and RTMINIT_SENSOR (the array of sensor IDs; e.g., 0 for HIRS, 3 for AMSU-A, 4 for AMSU-B, 15 for MHS, 10 for SSMIS, 11 for AIRS). For instance, the configuration for assimilating 12 sensors from 7 satellites (what WRFDA can currently assimilate) would be:
RTMINIT_NSENSOR  = 12   # 5 AMSUA; 3 AMSUB; 2 MHS; 1 AIRS; 1 SSMIS
RTMINIT_PLATFORM = 1, 1, 1, 9, 10,  1, 1, 1,  1, 10,  9,  2
RTMINIT_SATID    = 15, 16, 18, 2, 2,  15, 16, 17,  18, 2,  2, 16
RTMINIT_SENSOR   = 3, 3, 3, 3, 3,  4, 4, 4,  15, 15,  11, 10
The instrument triplets (platform, satellite and sensor IDs) in the namelist can be ranked in any order. More details on the instrument triplet convention can be found in Tables 2 and 3 of the RTTOV8/9 User's Guide (http://www.metoffice.gov.uk/research/interproj/nwpsaf/rtm/rttov8_ug.pdf or http://www.metoffice.gov.uk/research/interproj/nwpsaf/rtm/rttov9_files/users_guide_91_v1.6.pdf).
CRTM uses a different instrument naming method. A conversion routine inside WRFDA has been created so that CRTM uses the same instrument triplets as RTTOV; the user interface therefore remains the same for RTTOV and CRTM.
When running WRFDA with radiance assimilation switched on (RTTOV or CRTM), a set of RTM coefficient files needs to be loaded. For the RTTOV option, the RTTOV coefficient files are copied or linked directly into the working directory; for the CRTM option, the CRTM coefficient files are copied or linked into a sub-directory "crtm_coeffs" under the working directory. Only the coefficients listed in the namelist are needed. Potentially, WRFDA can assimilate all sensors for which the corresponding coefficient files are provided with RTTOV or CRTM. In addition, the necessary developments on the corresponding data interface, quality control and bias correction are also important for radiance data to be assimilated properly. However, the modular design of the radiance-related routines already makes it much easier to add more instruments to WRFDA.
RTTOV packages are not distributed with WRFDA due to license and support issues. Users are encouraged to contact the corresponding team to obtain the RTM. See the following link for more information: http://www.metoffice.gov.uk/research/interproj/nwpsaf/rtm/index.html
The CRTM package is now distributed with WRFDA, located in WRFDA/var/external/crtm. Users can still find it at the following link: ftp://ftp.emc.ncep.noaa.gov/jcsda/CRTM
d. Channel Selection
Channel selection in WRFDA is controlled by the radiance 'info' files located in the sub-directory 'radiance_info' under the working directory. These files are separated by satellite and sensor, e.g., noaa-15-amsua.info, noaa-16-amsub.info, dmsp-16-ssmis.info, and so on. An example of 5 channels from noaa-15-amsub.info is shown below. The fourth column is used by WRFDA to control whether the corresponding channel is assimilated: a value of "-1" indicates that the channel is not assimilated (channels 1, 2 and 4 in this case), while a value of "1" means it is assimilated (channels 3 and 5). The sixth column is used by WRFDA to set the observation error for each channel. The other columns are not used by WRFDA. It should be mentioned that these error values might not be optimal for your applications; it is the user's responsibility to obtain optimal error statistics for their own applications.
 sensor  channel  IR/MW  use  idum  varch              polarisation(0:vertical;1:horizontal)
   415      1       1    -1    0    0.5500000000E+01   0.0000000000E+00
   415      2       1    -1    0    0.3750000000E+01   0.0000000000E+00
   415      3       1     1    0    0.3500000000E+01   0.0000000000E+00
   415      4       1    -1    0    0.3200000000E+01   0.0000000000E+00
   415      5       1     1    0    0.2500000000E+01   0.0000000000E+00
e. Bias Correction
Satellite radiances are generally considered biased with respect to a reference (e.g., the background or analysis field in NWP assimilation) due to systematic errors in the observations themselves, in the reference field, and in the RTM. Bias correction is a necessary step prior to assimilating radiance data. In WRFDA, there are two ways of performing bias correction. One is based on the Harris and Kelly (2001) method and is carried out using a set of coefficient files pre-calculated with an off-line statistics package applied to a training dataset over a month-long period. The other is Variational Bias Correction (VarBC). Only VarBC is introduced here, and it is recommended for users because of its relative simplicity of use.
f. Variational Bias Correction
Getting started with VarBC
To use VarBC, set the namelist option USE_VARBC to TRUE and have a VARBC.in file in the working directory. VARBC.in is a VarBC setup file in ASCII format. A template is provided with the WRFDA package (WRFDA/var/run/VARBC.in).
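A minimal setup sketch, assuming use_varbc belongs in the radiance record &wrfvar14 (check WRFDA/var/README.namelist for the exact record):
> ln -sf WRFDA/var/run/VARBC.in ./VARBC.in
and, in namelist.input:
&wrfvar14
 use_varbc=true,
/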
Input and Output files
All VarBC input is passed through a single ASCII file called VARBC.in. Once WRFDA has run with the VarBC option switched on, it will produce a VARBC.out file which looks very much like the VARBC.in file you provided. This output file is then used as the input file for the next assimilation cycle.
Coldstart
Coldstarting means starting the VarBC from scratch, i.e. when you do not know the values of the bias parameters. The coldstart is a routine in WRFDA. The bias predictor statistics (mean and standard deviation) are computed automatically and will be used to normalize the bias parameters. All coldstarted bias parameters are set to zero, except the first bias parameter (a simple offset), which is set to the mode (peak) of the distribution of the (uncorrected) innovations for the given channel.
A threshold on the number of observations can be set through the namelist option VARBC_NOBSMIN (default = 10), below which it is considered that there are not enough observations to keep the coldstart values (i.e. bias predictor statistics and bias parameter values) for the next cycle. In this case, the next cycle will perform another coldstart.
Background Constraint for the bias parameters
The background constraint controls the inertia you want to impose on the predictors (i.e. the smoothing in the predictor time series). It corresponds to an extra term in the WRFDA cost function.
It is defined through an integer number in the VARBC.in file. This number is related to a number of observations: the bigger the number, the stronger the inertia constraint. If these numbers are set to zero, the predictors can evolve without any constraint.
Scaling factor
VarBC uses a specific preconditioning, which can be scaled through the namelist option VARBC_FACTOR (default = 1.0).
Offline bias correction
The analysis of the VarBC parameters can be performed "offline", i.e. independently from the main WRFDA analysis. No extra code is needed; just set the following MAX_VERT_VAR* namelist variables to 0, which will disable the standard control variables and keep only the VarBC control variables.
MAX_VERT_VAR1=0.0
MAX_VERT_VAR2=0.0
MAX_VERT_VAR3=0.0
MAX_VERT_VAR4=0.0
MAX_VERT_VAR5=0.0
Freeze VarBC
In certain circumstances, you might want to keep the VarBC bias parameters constant in time ("frozen"). In this case, the bias correction is read and applied to the innovations, but it is not updated during the minimization. This can easily be achieved by setting the namelist options:
USE_VARBC=false
FREEZE_VARBC=true
Passive observations
Some observations are useful for preprocessing (e.g. quality control, cloud detection) but you might not want to assimilate them. If you still need to estimate their bias correction, these observations need to go through the VarBC code in the minimization. For this purpose, VarBC uses a separate threshold on the QC values, called "qc_varbc_bad". This threshold is currently set to the same value as "qc_bad", but can easily be changed to any ad hoc value.
g. Other namelist variables to control radiance assimilation
RAD_MONITORING (30)
Integer array of dimension RTMINIT_NSENSOR, where 0 is assimilating mode and 1 is monitoring mode (only calculate innovations).
THINNING
Logical; TRUE will perform thinning of the radiance data.
THINNING_MESH (30)
Real array of dimension RTMINIT_NSENSOR; the values indicate the thinning mesh (in km) for the different sensors.
QC_RAD
Logical; controls whether quality control is performed. Always set to TRUE.
WRITE_IV_RAD_ASCII
Logical; controls whether to output Observation-minus-Background files, which are in ASCII format and separated by sensors and processors.
WRITE_OA_RAD_ASCII
Logical; controls whether to output Observation-minus-Analysis files (also including O minus B), which are in ASCII format and separated by sensors and processors.
USE_ERROR_FACTOR_RAD
Logical; controls the use of a radiance error tuning factor file, "radiance_error.factor", which can be created with empirical values or generated using the variational tuning method (Desroziers and Ivanov, 2001).
ONLY_SEA_RAD
Logical; controls whether radiances are assimilated only over water.
TIME_WINDOW_MIN
String, e.g., "2007-08-15_03:00:00.0000"; start time of the assimilation time window.
TIME_WINDOW_MAX
String, e.g., "2007-08-15_09:00:00.0000"; end time of the assimilation time window.
CRTM_ATMOSPHERE
Integer; used by CRTM to choose the climatology reference profile used above the model top (up to 0.01 hPa):
0: Invalid (default; the U.S. Standard Atmosphere is used)
1: Tropical
2: Midlatitude summer
3: Midlatitude winter
4: Subarctic summer
5: Subarctic winter
6: U.S. Standard Atmosphere
USE_ANTCORR (30)
Logical array of dimension RTMINIT_NSENSOR; controls whether the antenna correction is performed in CRTM.
AIRS_WARMEST_FOV
Logical; controls whether the observed brightness temperature of AIRS window channel #914 is used as the criterion for GSI thinning.
USE_CRTM_KMATRIX
Logical; controls whether the CRTM K matrix is used rather than calling the CRTM TL and AD routines for the gradient calculation.
h. Diagnostics and Monitoring
(1) Monitoring capability within WRFDA
Run WRFDA with the rad_monitoring namelist parameter in record &wrfvar14 of namelist.input (see the example below):
0 means assimilating mode: innovations (O minus B) are calculated and the data are used in the minimization.
1 means monitoring mode: innovations are calculated for diagnostics and monitoring, but the data are not used in the minimization.
The number of entries in rad_monitoring should correspond to rtminit_nsensor. If rad_monitoring is not set, the default value of 0 will be used for all sensors.
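For example, a hypothetical two-sensor configuration in which NOAA-15 AMSU-A is assimilated while NOAA-16 AMSU-A is only monitored might look like the sketch below (platform/satellite/sensor ID conventions as in the Radiative Transfer Model section above):
&wrfvar14
 rtminit_nsensor  = 2,
 rtminit_platform = 1, 1,
 rtminit_satid    = 15, 16,
 rtminit_sensor   = 3, 3,
 rad_monitoring   = 0, 1,
/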
(2) Outputting radiance diagnostics from WRFDA
Run WRFDA with the following namelist variables in record &wrfvar14 of namelist.input:
write_iv_rad_ascii=.true. writes out (observation minus background) and other diagnostic information in plain-text files with the prefix inv, followed by the instrument name and processor id; for example, 01_inv_noaa-17-amsub.0000 (01 is the outer-loop index, 0000 is the processor index).
write_oa_rad_ascii=.true. writes out (observation minus background), (observation minus analysis) and other diagnostic information in plain-text files with the prefix oma, followed by the instrument name and processor id; for example, 01_oma_noaa-18-mhs.0001.
Each processor writes out the information for one instrument to one file in the WRFDA working directory.
(3) Radiance diagnostics data processing
A Fortran 90 program is used to collect the 01_inv* or 01_oma* files and write them out in netCDF format (one instrument per file, with the prefix diags followed by the instrument name, analysis date, and the suffix .nc) for easier data viewing, handling and plotting with netCDF utilities and NCL scripts.
(4) Radiance diagnostics plotting
NCL scripts (WRFDA/var/graphics/ncl/plot_rad_diags.ncl and WRFDA/var/graphics/ncl/advance_cymdh.ncl) are used for plotting. The NCL script can be run from a shell script, or run stand-alone with the interactive ncl command (you will need to edit the NCL script and set the plot options; the path to advance_cymdh.ncl, a date-advancing script loaded by the main NCL plotting script, may also need to be modified).
Steps (3) and (4) can be done by running a single ksh script (WRFDA/var/../scripts/da_rad_diags.ksh)
with proper settings. In addition to the settings of the directories and which instruments
to plot, there are some useful plotting options, explained below.
export OUT_TYPE=ncgm | ncgm or pdf. pdf will be much slower than ncgm and generate huge output if plots are not split, but pdf has higher resolution than ncgm. |
export PLOT_STATS_ONLY=false | true or false. true: only statistics of OMB/OMA vs channels and OMB/OMA vs dates will be plotted. false: data coverage, scatter plots (before and after bias correction), histograms (before and after bias correction), and statistics will be plotted. |
export PLOT_OPT=sea_only | all, sea_only, land_only |
export PLOT_QCED=false | true or false. true: plot only quality-controlled data. false: plot all data. |
export PLOT_HISTO=false | true or false: switch for histogram plots |
export PLOT_SCATT=true | true or false: switch for scatter plots |
export PLOT_EMISS=false | true or false: switch for emissivity plots |
export PLOT_SPLIT=false | true or false. true: one frame in each file. false: all frames in one file. |
export PLOT_CLOUDY=false | true or false. true: plot cloudy data. The cloudy data to be plotted are defined by the PLOT_CLOUDY_OPT (si or clwp), CLWP_VALUE, and SI_VALUE settings. |
export PLOT_CLOUDY_OPT=si | si or clwp. clwp: cloud liquid water path from the model. si: scatter index from obs, for amsua, amsub and mhs only. |
export CLWP_VALUE=0.2 | only plot points with clwp >= clwp_value (when clwp_value > 0) or clwp > clwp_value (when clwp_value = 0) |
export SI_VALUE=3.0 | |
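As a sketch, once the directory, instrument, and plotting settings above have been edited for your case (inside the script, or exported in your shell before running it), steps (3) and (4) reduce to a single command:
> WRFDA/var/../scripts/da_rad_diags.ksh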
(5) Evolution of VarBC parameters
NCL scripts (WRFDA/var/graphics/ncl/plot_rad_varbc_param.ncl
and WRFDA/var/graphics/ncl/advance_cymdh.ncl)
are used for plotting the evolution of the VarBC parameters.
WRFDA produces a number of diagnostic files that contain useful information on how the data assimilation has performed. This section will introduce you to some of these files, and what to look for.
Having run WRFDA, it is important to check a number of
output files to see if the assimilation appears sensible. The WRFDA tools package,
which includes many useful scripts, may be downloaded from http://www2.mmm.ucar.edu/wrf/users/wrfda/download/tools.html
The contents of some useful diagnostic files are as follows:
cost_fn and grad_fn: These files hold (in ASCII format) the WRFDA cost and gradient function values, respectively, for the first and last iterations. However, if you run with PRINT_DETAIL_GRAD=true, these values will be listed for each iteration; this can be helpful for visualization purposes. The NCL script WRFDA/var/graphics/ncl/plot_cost_grad_fn.ncl may be used to plot the contents of cost_fn and grad_fn, if these files are generated with PRINT_DETAIL_GRAD=true.
Note: Make sure that you remove the first two lines (the header) in cost_fn and grad_fn before you plot. Also, you need to specify the directory name for these two files.
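For example, after a WRFDA run with PRINT_DETAIL_GRAD=true, a plotting session might look like this (a sketch; the directory setting inside the script must point at your cost_fn and grad_fn):
> cd WRFDA/var/graphics/ncl
> ncl plot_cost_grad_fn.ncl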
gts_omb_oma_01: This file contains (in ASCII format) information on all of the observations used by the WRFDA run. Each observation has its observed value, quality flag, observation error, observation minus background (OMB), and observation minus analysis (OMA). This information is very useful for both analysis and forecast verification purposes.
namelist.input: This is the WRFDA input namelist file, which contains all the user-defined non-default options. Any namelist-defined options that do not appear in this file should have their names checked against the values in WRFDA/Registry/Registry.wrfvar.
namelist.output: A consolidated list of all the namelist options used.
rsl*: Files containing standard WRFDA output from the individual processors when multiple processors are used. They contain a host of information on the number of observations, minimization, timings, etc. Additional diagnostics may be printed in these files by including various "print" WRFDA namelist options. To learn more about these additional "print" options, search for the "print_" string in WRFDA/Registry/Registry.wrfvar.
statistics: Text file containing OMB (O minus B) and OMA (O minus A) statistics (minimum, maximum, mean and standard deviation) for each observation type and variable. This information is very useful in diagnosing how WRFDA has used the different components of the observing system. Also contained are the analysis minus background (A-B) statistics, i.e. statistics of the analysis increments for each model variable at each model level. This information is very useful in checking the range of analysis increment values found in the analysis, and where they are located in WRF-model grid space.
The WRFDA analysis file is wrfvar_output. It is in WRF (NetCDF) format. It will become the input file "wrfinput_d01" of any subsequent WRF runs after the lateral boundary and/or lower boundary conditions are updated by another WRFDA utility (see the section "Updating WRF boundary conditions").
An NCL script, WRFDA/var/graphics/ncl/WRF-Var_plot.ncl, is provided for plotting. You need to specify the analysis_file name, its full path, etc. Please see the in-line comments in the script for details.
As an example, if you are aiming to display the U-component of the analysis at level 18, modify the script "WRFDA/var/graphics/ncl/WRF-Var_plot.ncl" and make sure the following pieces of code are uncommented:
var = "U"
fg = first_guess->U
an = analysis->U
plot_data = an
Then execute the following command from WRFDA/var/graphics/ncl:
> ncl WRF-Var_plot.ncl
The plot should look like this:
You may change the variable name, level, etc. in this script to display the
variable of your choice at the desired eta level.
Take time to look through the text output files to ensure you understand how WRFDA works. For example:
How closely has WRFDA fitted individual observation types? Look at the statistics file to compare the O-B and O-A statistics.
How big are the analysis increments? Again, look in the statistics file to see the minimum/maximum values of A-B for each variable at various levels. This will give you a feel for the impact of the observation data assimilated via WRFDA in modifying the input first guess.
How long did WRFDA take to converge? Did it really converge? You will find the answers to these questions in the rsl files, which indicate the number of iterations taken by WRFDA to converge. If this is the same as the maximum number of iterations specified in the namelist (NTMAX) or its default value (=200) set in WRFDA/Registry/Registry.wrfvar, then the analysis solution did not converge. If so, you may want to increase the value of "NTMAX" and rerun your case to ensure that convergence is achieved. On the other hand, a normal WRFDA run should usually converge within 100 iterations. If it still does not converge in 200 iterations, there may be a problem in the observations or the first guess.
A good visual way of seeing the impact of the assimilation of observations is to plot the analysis increments (i.e. the analysis minus first guess difference). There are many graphics packages (e.g. RIP4, NCL, GrADS, etc.) that can do this. The plot of level-18 theta increments below was produced using the NCL script located at WRFDA/var/graphics/ncl/WRF-Var_plot.ncl.
You need to modify this script to set the full paths for the first_guess and analysis files. You may also want to change the display level by setting "kl" and the name of the variable to display by setting "var". Further details are given in the script.
If you are aiming to display the increment of potential temperature at level 18, after modifying WRFDA/var/graphics/ncl/WRF-Var_plot.ncl suitably, make sure the following pieces of code are uncommented:
var = "T"
fg = first_guess->T ;Theta- 300
an = analysis->T ;Theta- 300
plot_data = an - fg
Then execute the following command from "WRFDA/var/graphics/ncl":
> ncl WRF-Var_plot.ncl
The plot created will look as follows:
Note: Larger analysis increments indicate a larger data impact in the
corresponding region of the domain.
Before running an NWP forecast using the
WRF model with a WRFDA analysis, the values and tendencies for each of the
predicted variables for the first time period in the lateral boundary condition
file for domain-1 (wrfbdy_d01) must be updated to be consistent with the new
WRFDA initial condition (analysis). This is absolutely essential. Moreover, in cycling (warm-start) mode, the lower
boundary in the WRFDA analysis
file also needs to be updated based on the information in the wrfinput file
generated by WPS/real.exe at the analysis time. So there are three input files:
the WRFDA analysis, plus the wrfinput and wrfbdy files from WPS/real.exe, and a namelist
file, parame.in, for running
da_update_bc.exe for
domain-1.
For the nested domains (domain-2, domain-3, ...), the lateral boundaries are provided by their parent domains, so no lateral boundary update is needed for these domains. However, the lower boundaries in each of the nested domains' WRFDA analysis files still need to be updated. In these cases, you must set the namelist variable domain_id > 1 (the default is 1 for domain-1), and no wrfbdy_d01 file needs to be provided to the namelist variable wrf_bdy_file (see the example sketch below).
This procedure is performed by the WRFDA utility called da_update_bc.exe.
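For example, a hypothetical parame.in for updating only the lower boundary of domain-2 in a cycling run might look like the following (a sketch only; the d02 file names are placeholders, and the cycling/low_bdy_only settings must match your own configuration):
&control_param
 wrfvar_output_file = './wrfvar_output_d02'
 wrf_input          = './wrfinput_d02'
 domain_id          = 2
 cycling            = .true.
 low_bdy_only       = .true.
/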
Note: Make sure that you have da_update_bc.exe in the WRFDA/var/build directory. This executable should have been created when you compiled the WRFDA code.
To run da_update_bc.exe, follow the steps
below:
> cd WRFDA/var/test/update_bc
> cp -p $DAT_DIR/rc/2008020512/wrfbdy_d01 ./wrfbdy_d01
(IMPORTANT: make a copy of wrfbdy_d01 as the wrf_bdy_file will be overwritten
by da_update_bc.exe)
> vi parame.in
&control_param
 wrfvar_output_file = './wrfvar_output'
 wrf_bdy_file       = './wrfbdy_d01'
 wrf_input          = '$DAT_DIR/rc/2008020512/wrfinput_d01'
 cycling            = .false. (set to .true. if the WRFDA first guess comes from a previous WRF forecast)
 debug              = .true.
 low_bdy_only       = .false.
 update_lsm         = .false.
/
> ln -sf WRFDA/var/build/da_update_bc.exe ./da_update_bc.exe
> ./da_update_bc.exe
At this stage, you should have the files wrfvar_output and wrfbdy_d01 in your WRFDA working directory. They are the WRFDA updated initial condition and boundary condition for any subsequent WRF model runs. To use, just link a copy of wrfvar_output and wrfbdy_d01 to wrfinput_d01 and wrfbdy_d01, respectively, in your WRF working directory.
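For example (a minimal sketch; adjust the paths to your own directories):
> cd WRF/run                                    # or wherever you run wrf.exe
> ln -sf <your_WRFDA_working_dir>/wrfvar_output ./wrfinput_d01
> ln -sf <your_WRFDA_working_dir>/wrfbdy_d01    ./wrfbdy_d01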
Starting with WRFDA version 3.1, users have two choices for defining the background error covariance (BE), which we call CV3 and CV5. Both are applied to the same set of control variables: stream function, unbalanced velocity potential, unbalanced temperature, unbalanced surface pressure, and pseudo relative humidity. With CV3 the control variables are in physical space, while with CV5 they are in eigenvector space, so the major difference between these two kinds of BE is the vertical covariance: CV3 uses a vertical recursive filter to model the vertical covariance, whereas CV5 uses empirical orthogonal functions (EOFs) to represent it. The recursive filters used to model the horizontal covariance also differ between the two BEs. We have not conducted a systematic comparison of the analyses based on these two BEs. However, CV3 (a BE file provided with the WRFDA system) is a global BE that can be used for any regional domain, while CV5 is a domain-dependent BE, which should be generated from forecast data for the same domain. At this time it is hard to say which BE is better; the impact on the analysis may vary from case to case.
CV3 is the NCEP background error covariance. It is estimated in grid space by what has become known as the NMC method (Parrish and Derber 1992). The statistics are estimated from the differences of 24- and 48-hour GFS forecasts at T170 resolution valid at the same time, for 357 cases distributed over a period of one year. Both the amplitudes and the scales of the background error have to be tuned to represent the forecast error in the guess fields. The statistics that project multivariate relations among variables are also derived from the NMC method.
The variance of each variable and the variance of its second derivative are used to estimate its horizontal scales. For example, the horizontal scales of the stream function can be estimated from the variance of the vorticity and stream function.
The vertical scales are estimated with the vertical correlation of each variable. A table is built to cover the range of vertical scales for the variables. The table is then used to find the scales in vertical grid units. The filter profile and the vertical correlation are fitted locally. The scale of the best fit from the table is assigned as the scale of the variable at that vertical level for each latitude. Note that the vertical scales are locally defined so that the negative correlation further away in the vertical direction is not included.
Theoretically, the CV3 BE is a generic background error
statistics file that can be used for any case, and it is quite straightforward to use
CV3 for your own case: just set cv_options=3 in record &wrfvar7 and use the BE file located at WRFDA/var/run/be.dat.cv3.
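For example, a minimal setup for CV3 might look like this (a sketch, assuming be.dat is linked into your WRFDA working directory):
> ln -sf WRFDA/var/run/be.dat.cv3 ./be.dat
&wrfvar7
 cv_options = 3,
/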
To use the CV5 background error covariance, it is necessary to generate your domain-specific background error statistics with the gen_be utility. The background error statistics file supplied with the tutorial test case can NOT be used for applications other than the tutorial case.
The Fortran main programs for gen_be can be found in WRFDA/var/gen_be. The executables of gen_be should be created after you have compiled the WRFDA code (as described earlier). The scripts to run these codes are in WRFDA/var/../scripts/gen_be.
The input data for gen_be are WRF forecasts, which are used
to generate model perturbations, used as a proxy for estimates of forecast
error. For the NMC-method, the model perturbations are differences between
forecasts (e.g. T+24 minus T+12 is typical for regional applications, T+48
minus T+24 for global) valid at the same time. Climatological estimates of
background error may then be obtained by averaging such forecast differences over
a period of time (e.g. one month). Given input from an ensemble prediction system
(EPS), the inputs are the ensemble forecasts, and the model perturbations
created are the transformed ensemble perturbations. The gen_be code has been
designed to work with either forecast differences or ensemble-based
perturbations. The former is illustrated in this tutorial example.
It is important to include forecast differences from at least 00Z and 12Z through the period, to remove the diurnal cycle (i.e. do not run gen_be using just 00Z or 12Z model perturbations alone).
The inputs to gen_be are NetCDF WRF forecast output ("wrfout") files at specified forecast ranges. To avoid unnecessarily large single data files, it is assumed that each forecast range is output to a separate file. For example, if we wish to calculate BE statistics using the NMC method with (T+24)-(T+12) forecast differences (the default for regional applications), then setting the WRF namelist.input options history_interval=720 and frames_per_outfile=1 produces the necessary output datasets. The forecast output files should then be arranged as follows: the directory name is the forecast initial time, and the time information in the file name is the forecast valid time. For example, 2008020512/wrfout_d01_2008-02-06_00:00:00 means a 12-hour forecast valid at 2008020600, initialized at 2008020512.
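For example, the relevant WRF namelist.input entries for 12-hourly history output written to separate files might look like the following (a sketch; other &time_control entries are omitted):
&time_control
 history_interval   = 720,
 frames_per_outfile = 1,
/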
An example dataset for a test case (90 x 60 x 41 gridpoints) can be downloaded from http://www2.mmm.ucar.edu/wrf/users/wrfda/download/testdata.html. After untarring gen_be_forecasts_20080205.tar.gz, you will have:
> ls $FC_DIR
-rw-r--r-- 1 users 11556492 2008020512/wrfout_d01_2008-02-06_00:00:00
-rw-r--r-- 1 users 11556492 2008020512/wrfout_d01_2008-02-06_12:00:00
-rw-r--r-- 1 users 11556492 2008020600/wrfout_d01_2008-02-06_12:00:00
-rw-r--r-- 1 users 11556492 2008020600/wrfout_d01_2008-02-07_00:00:00
-rw-r--r-- 1 users 11556492 2008020612/wrfout_d01_2008-02-07_00:00:00
-rw-r--r-- 1 users 11556492 2008020612/wrfout_d01_2008-02-07_12:00:00
In the above example, only one day (12Z 05 Feb to 12Z 06 Feb 2008) of forecasts, initialized every 12 hours, is supplied to gen_be_wrapper to estimate the forecast error covariance. This is only for demonstration. The minimum number of forecasts required depends on the application, the number of grid points, etc. Month-long (or longer) datasets are typical for the NMC method; generally, at least a one-month dataset should be used.
Under WRFDA/var/../scripts/gen_be, gen_be_wrapper.ksh
is used to generate the BE data. The following variables need to be set to
fit your case:
export WRFVAR_DIR=/users/noname/work/code/trunk/phoenix_g95_opt/WRFDA
export START_DATE=2008020612   # the first perturbation valid date
export END_DATE=2008020700     # the last perturbation valid date
export NUM_LEVELS=40           # e_vert - 1
export BIN_TYPE=5
export FC_DIR=/users/noname/work/exps/friendlies/expt/fc   # where the WRF forecasts are
export RUN_DIR=/users/noname/work/exps/friendlies/gen_be${BIN_TYPE}
Note: START_DATE and END_DATE are perturbation valid dates. As shown in the forecast list above,
when you have 24-hour and 12-hour forecasts initialized at 2008020512 through
2008020612, the first and final forecast-difference valid dates are 2008020612
and 2008020700, respectively.
Note: The forecast dataset should be located in $FC_DIR. Then type:
> gen_be_wrapper.ksh
Once gen_be_wrapper.ksh has completed, the be.dat file can be found under the $RUN_DIR directory.
To get a clear idea of what is included in be.dat, the script gen_be_plot_wrapper.ksh may be used to plot various quantities from be.dat, such as:
(a) Single observation response in WRFDA:
With the single observation test, you can get an idea of how the background and observation
error statistics work in model variable space. The single observation test
is performed in WRFDA by setting num_pseudo=1,
along with other pre-specified values in records &wrfvar15 and &wrfvar19 of namelist.input.
With the settings shown below, WRFDA generates a single observation with a pre-specified innovation (observation minus first guess) value at the desired location, e.g. at grid coordinates (23, 23), level 14, for a "U" observation with an observation error of 1 m/s and an innovation of 1.0 m/s.
&wrfvar15
num_pseudo = 1,
pseudo_x = 23.0,
pseudo_y = 23.0,
pseudo_z = 14.0,
pseudo_err = 1.0,
pseudo_val = 1.0,
/
&wrfvar19
 pseudo_var = "u", (Note: pseudo_var can be u, v, t, p, or q. If pseudo_var is q,
 then reasonable values of pseudo_err and pseudo_val are 0.001.)
/
Note: You may like to repeat this exercise for other observation variables such as
temperature ("t"), pressure ("p"), specific humidity ("q"), etc.
(b) Response of BE length scaling parameter:
Run single observation test with following additional parameters in record &wrfvar7 of
namelist.input
&wrfvar7
len_scaling1 = 0.5, # reduce psi length scale by 50%
len_scaling2 = 0.5, # reduce chi_u length scale by 50%
len_scaling3 = 0.5, # reduce T length scale by 50%
len_scaling4 = 0.5, # reduce q length scale by 50%
len_scaling5 = 0.5, # reduce Ps length scale by 50%
/
Note: You may like to try the response of an individual
variable by setting one parameter at a time, and examine the spread of the analysis
increments.
(c) Response of changing BE variance:
Run single observation test with following additional parameters in record &wrfvar7 of
namelist.input
&wrfvar7
var_scaling1 = 0.25, # reduce psi variance by 75%
var_scaling2 = 0.25, # reduce chi_u variance by 75%
var_scaling3 = 0.25, # reduce T variance by 75%
var_scaling4 = 0.25, # reduce q variance by 75%
var_scaling5 = 0.25, # reduce Ps variance by 75%
/
Note: You may like to try the response of an individual variable by setting one parameter at a time, and examine the
magnitude of the analysis increments.
(d) Response of convergence criteria:
Run the tutorial case with
&wrfvar6
eps = 0.0001,
/
You may like to compare various diagnostics with the earlier run.
(e) Response of the outer loop on minimization:
Run the tutorial case with
&wrfvar6
max_ext_its = 2,
/
With this setting, the "outer loop" of the minimization procedure will be
activated. You may like to compare various diagnostics with the earlier run.
Note: The maximum permissible value for
"MAX_EXT_ITS"
is 10.
(f) Response of suppressing particular types of data in WRFDA:
The types of observations that WRFDA actually uses depend on what is
included in the observation file and on the WRFDA namelist settings. For example,
if you have SYNOP data in the observation file, you can suppress its usage in
WRFDA by setting use_synopobs=false
in record &wrfvar4
of namelist.input. It
is fine if there are no SYNOP data in the observation file and use_synopobs=true.
Turning certain types of observations on and off is widely used for
assessing the impact of observations on data assimilation.
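For example, to withhold SYNOP observations while keeping sounding data active, the relevant &wrfvar4 fragment might look like this (a sketch):
&wrfvar4
 use_synopobs = .false.,
 use_soundobs = .true.,
/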
Note: It is important to go through
the default "use_*" settings in record &wrfvar4 of WRFDA/Registry/Registry.wrfvar to know which
observations are activated by default.
The WRFDA system also includes a hybrid data assimilation
technique, which is based on the existing 3DVAR. The difference between hybrid
and 3DVAR schemes is that 3DVAR relies solely on a static covariance model to
specify the background errors, while the hybrid system uses a combination of
3DVAR static error covariances and ensemble-estimated error covariances to incorporate
a flow-dependent estimate of the background error statistics. Please refer to
Wang et al. (2008a,b) for a detailed description of the methodology used in the
WRF hybrid system. The following section will give a brief introduction of various
aspects of using the hybrid system.
a. Source Code
There are three executables that are used in the hybrid
system. If you have successfully compiled the WRFDA system, you will see the
following:
WRFDA/var/build/gen_be_ensmean.exe
WRFDA/var/build/gen_be_ep2.exe
WRFDA/var/build/da_wrfvar.exe
gen_be_ensmean.exe
is used to calculate the ensemble mean, while gen_be_ep2.exe is used to
calculate the ensemble perturbations. As with 3DVAR/4DVAR, da_wrfvar.exe is the
main WRFDA program. However, in
this case, da_wrfvar.exe will run in the hybrid mode.
b. Running The Hybrid System
The procedure is the same as running 3DVAR/4DVAR with the
exception of some extra input files and namelist settings. The basic input
files for WRFDA are LANDUSE.TBL, ob.ascii or ob.bufr (depending on which
observation format you use), and be.dat (static background errors). Additional
input files required by the hybrid are a single ensemble mean file (used as the
fg for the hybrid application) and a set of ensemble perturbation files (used
to represent flow-dependent background errors).
Before the hybrid application can be started, a set of initial
ensemble members must be prepared. These ensembles can be obtained from other
ensemble model outputs, or you can generate them yourself, for example by adding
random noise to the initial conditions at a previous time and integrating each
member to the desired time. Once you have the initial ensembles, the ensemble
mean and perturbations can be calculated following the steps below.
1) Calculate ensemble mean
Copy or link the ensemble forecasts to your working
directory. In this example, the time is 2006102712.
> ln -sf /wrfhelp/DATA/VAR/Hybrid/fc/2006102712.e0* .
Next, copy the directory that contains two template files
(ensemble mean and variance files) to your working directory. In this case, the
directory name is 2006102712, which contains the template ensemble mean file (wrfout_d01_2006-10-28_00:00:00)
and the template variance file (wrfout_d01_2006-10-28_00:00:00.vari). These
template files will be overwritten by the program that calculates the ensemble
mean and variance as discussed below.
> cp -r /wrfhelp/DATA/VAR/Hybrid/fc/2006102712 .
Edit gen_be_ensmean_nl.nl (or copy it from /wrfhelp/DATA/VAR/Hybrid/gen_be_ensmean_nl.nl).
You will need to set the following information in this namelist file:
> vi gen_be_ensmean_nl.nl
&gen_be_ensmean_nl
directory = './2006102712'
filename = 'wrfout_d01_2006-10-28_00:00:00'
num_members = 10
nv = 7
cv = 'U', 'V', 'W', 'PH', 'T', 'MU', 'QVAPOR'
/
Here,
“directory” is the folder you just copied,
“filename” is the name of the ensemble mean file,
“num_members” is the number of ensemble members you are
using,
“nv” is the number of variables, which must be consistent
with the next “cv” option, and
“cv” is the name of variables used in the hybrid system.
Next, link gen_be_ensmean.exe to your working directory and
run it.
> ln -sf WRFDA/var/build/gen_be_ensmean.exe .
> ./gen_be_ensmean.exe
Check the output files.
2006102712/wrfout_d01_2006-10-28_00:00:00 is the ensemble mean
2006102712/wrfout_d01_2006-10-28_00:00:00.vari is the ensemble
variance
2) Calculate ensemble perturbations
Create another sub-directory in which you will be working to
create ensemble perturbations.
> mkdir -p 2006102800/ep
> cd 2006102800/ep
Next, run gen_be_ep2.exe.
gen_be_ep2.exe requires four
command-line arguments (DATE, NUM_MEMBER, DIRECTORY, FILENAME) as shown below:
> ln -sf WRFDA/var/build/gen_be_ep2.exe .
> ./gen_be_ep2.exe 2006102800 10 ../../2006102712 wrfout_d01_2006-10-28_00:00:00
Check the output files.
A list of binary files will be
created under the 2006102800/ep directory. Among them, tmp.e* are temporary
scratch files that can be removed.
3) Run WRFDA in hybrid mode
In your hybrid working directory, link all the necessary
files and directories as follows:
> ln -fs 2006102800/ep ./ep (ensemble perturbation files should be under the ep subdirectory)
> ln -fs 2006102712/wrfout_d01_2006-10-28_00:00:00 ./fg (the first guess is the ensemble mean)
> ln -fs WRFDA/run/LANDUSE.TBL .
> ln -fs /wrfhelp/DATA/VAR/Hybrid/ob/2006102800/ob.ascii ./ob.ascii (or ob.bufr)
> ln -fs /wrfhelp/DATA/VAR/Hybrid/be/be.dat ./be.dat
> ln -fs WRFDA/var/build/da_wrfvar.exe .
> cp /wrfhelp/DATA/VAR/Hybrid/namelist.input .
Edit namelist.input and pay special attention to the following
hybrid-related settings:
&wrfvar7
je_factor = 2.0
/
&wrfvar16
ensdim_alpha = 10
alphacv_method = 2
alpha_corr_type=3
alpha_corr_scale = 1500.0
alpha_std_dev=1.000
/
Next, run the hybrid in serial mode (recommended for initial testing of the hybrid
system) or in parallel mode:
> ./da_wrfvar.exe >&! wrfda.log
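A typical parallel invocation might look like the following (a sketch; the exact MPI launcher and processor count depend on your system):
> mpirun -np 4 ./da_wrfvar.exe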
Check the output files.
The list of output files is the same as when you run WRFDA in 3D-Var mode.
c. Hybrid namelist options
1) je_factor: ensemble covariance weighting factor. This factor controls the relative
weighting of the ensemble and static covariances. The corresponding
jb_factor = je_factor/(je_factor - 1); for example, je_factor = 2.0 gives jb_factor = 2.0, i.e. equal weighting of the ensemble and static components.
2) ensdim_alpha: the number of ensemble members. Hybrid mode is
activated when ensdim_alpha is larger than zero.
3) alphacv_method: 1 = perturbations in control variable space
("psi", "chi_u", "t_u", "rh", "ps_u"); 2 = perturbations in model space
("u", "v", "t", "q", "ps"). Option 2 is extensively tested and is the recommended choice.
4) alpha_corr_type: correlation function. 1 = Exponential; 2 = SOAR; 3 = Gaussian.
5) alpha_corr_scale: hybrid covariance localization scale, in km. The default
value is 1500.
6) alpha_std_dev: alpha standard deviation. The default value is 1.0.
WRFDA namelist variables.
Variable Names |
Default Value |
Description |
|
&wrfvar1 |
|||
write_increments |
false |
.true.: write out a binary analysis increment file |
|
var4d |
false |
.true.: 4D-Var mode |
|
multi_inc |
0 |
> 0: multi-incremental run |
|
var4d_coupling |
2 |
1: var4d_coupling_disk_linear,
2: var4d_coupling_disk_simul |
|
print_detail_radar |
false |
print_detail_xxx: output extra
(sometimes can be too many) diagnostics for debugging; not recommended to
turn them on for production runs |
|
print_detail_xa |
false |
||
print_detail_xb |
false |
||
print_detail_obs |
false |
||
print_detail_grad |
false |
the purpose of
print_detail_grad is changed as of V3.1. .true.: to print out detailed
gradient of each observation type at each iteration and write out detailed
cost function and gradient into files called cost_fn and grad_fn |
|
check_max_iv_print |
true |
obsolete (only used by Radar) |
|
&wrfvar2 |
|||
analysis_accu |
900 |
seconds, if time difference between namelist setting (analysis_date) and date info read in from first guess is larger than analysis_accu, WRFDA will issue a warning message ("=======> Wrong xb time found???"), but won't abort. |
|
calc_w_increment |
false |
.true.: the increment of
the vertical velocity W will be diagnosed based on the increments of other
fields. If W information from observations is assimilated, such
as radar radial velocity, the W increments are always computed, regardless of whether calc_w_increment is .true. or .false. .false.: the increment of the vertical velocity W is zero if no W information is assimilated. |
|
dt_cloud_model |
false |
Not used |
|
&wrfvar3 |
|||
fg_format |
1 |
1: fg_format_wrf_arw_regional (default) 2: fg_format_wrf_nmm_regional 3: fg_format_wrf_arw_global 4: fg_format_kma_global |
|
ob_format |
2 |
1: ob_format_bufr (NCEP PREPBUFR), read in data from ob.bufr (not fully tested) 2: ob_format_ascii (output from obsproc), read in data from ob.ascii (default) 3: ob_format_madis (not tested) |
|
num_fgat_time |
1 |
1: 3DVar > 1: number of time slots for FGAT and 4DVAR |
|
&wrfvar4 |
|||
thin_conv |
true |
for ob_format=1 (NCEP PREPBUFR) only. Thinning is mandatory for ob_format=1, as time-duplicate data are "thinned" within the thinning routine; however, thin_conv can be set to .false. for debugging purposes. |
|
thin_mesh_conv |
20. (max_instruments) |
for ob_format=1 (NCEP PREPBUFR) only. km, each observation type can set its thinning mesh and the index/order follows the definition in WRFDA/var/da/da_control/da_control.f90 |
|
use_synopobs |
true |
use_xxxobs - .true.: assimilate xxx obs if available .false.: not assimilate xxx obs even available |
|
use_shipsobs |
true |
||
use_metarobs |
true |
||
use_soundobs |
true |
||
use_pilotobs |
true |
||
use_airepobs |
true |
||
use_geoamvobs |
true |
||
use_polaramvobs |
true |
||
use_bogusobs |
true |
||
use_buoyobs |
true |
||
use_profilerobs |
true |
||
use_satemobs |
true |
||
use_gpspwobs |
true |
||
use_gpsrefobs |
true |
||
use_qscatobs |
true |
||
use_radarobs |
false |
||
use_radar_rv |
false |
||
use_radar_rf |
false |
||
use_airsretobs |
true |
||
; use_hirs2obs, use_hirs3obs, use_hirs4obs, use_mhsobs ; use_msuobs, use_amsuaobs, use_amsubobs, use_airsobs, ; use_eos_amsuaobs, use_hsbobs, use_ssmisobs are ; radiance-related variables that only control if reading ; in corresponding BUFR files into WRFDA or not, but ; do not control if assimilate the data or not. ; Some more variables have to be set in &wrfvar14 in order ; to assimilate radiance data. |
|||
use_hirs2obs |
false |
.true.: to read in data from
hirs2.bufr |
|
use_hirs3obs |
false |
.true.: to read in data from
hirs3.bufr |
|
use_hirs4obs |
false |
.true.: to read in data from
hirs4.bufr |
|
use_mhsobs |
false |
.true.: to read in data from
mhs.bufr |
|
use_msuobs |
false |
.true.: to read in data from
msu.bufr |
|
use_amsuaobs |
false |
.true.: to read in data from
amsua.bufr |
|
use_amsubobs |
false |
.true.: to read in data from
amsub.bufr |
|
use_airsobs |
false |
.true.: to read in data from
airs.bufr |
|
use_eos_amsuaobs |
false |
.true.: to read in data from airs.bufr |
|
use_hsbobs |
false |
.true.: to read in data from
hsb.bufr |
|
use_ssmisobs |
false |
.true.: to read in data from
ssmis.bufr |
|
use_obs_errfac |
false |
.true.: apply obs error tuning factors if errfac.dat is available for conventional data only |
|
&wrfvar5 |
|||
check_max_iv |
true |
.true.: reject the observations whose innovations (O-B) are larger than a maximum value defined as a multiple of the observation error for each observation. i.e., inv > (obs_error*factor) --> fails_error_max; the default maximum value is 5 times the observation error ; the factor of 5 can be changed through max_error_* settings. |
|
max_error_t |
5.0 |
maximum check_max_iv error
check factor for t |
|
max_error_uv |
5.0 |
maximum check_max_iv error
check factor for u and v |
|
max_error_pw |
5.0 |
maximum check_max_iv error
check factor for precipitable water |
|
max_error_ref |
5.0 |
maximum check_max_iv error
check factor for gps refractivity |
|
max_error_q |
5.0 |
maximum check_max_iv error
check factor for specific humidity |
|
max_error_p |
5.0 |
maximum check_max_iv error check factor for pressure |
|
max_error_thickness |
|
maximum check_max_iv error
check factor for thickness |
|
max_error_rv |
|
maximum check_max_iv error
check factor for radar radial velocity |
|
max_error_rf |
|
maximum check_max_iv error
check factor for radar reflectivity |
|
&wrfvar6 |
|||
max_ext_its |
1 |
number of outer loops |
|
ntmax |
200 |
maximum number of iterations
in an inner loop |
|
eps |
0.01 (max_ext_its) |
minimization convergence
criterion (used dimension: max_ext_its); minimization stops when the norm of
the cost function gradient is reduced by a factor of eps.
The inner minimization stops either when this criterion is met or when the inner iterations reach ntmax. |
|
&wrfvar7 |
|||
cv_options |
5 |
3: NCEP Background Error model 5: NCAR Background Error model
(default) |
|
as1(3) |
-1.0 |
tuning factors for variance,
horizontal and vertical scales for control variable 1 = stream function. For
cv_options=3 only. The actual default values are 0.25, 1.0, 1.5. |
|
as2(3) |
-1.0 |
tuning factors for variance,
horizontal and vertical scales for control variable 2 - unbalanced potential
velocity. For cv_options=3 only. The actual default values are 0.25, 1.0,
1.5. |
|
as3(3) |
-1.0 |
tuning factors for variance,
horizontal and vertical scales for control variable 3 - unbalanced temperature.
For cv_options=3 only. The actual default values are 0.25, 1.0, 1.5. |
|
as4(3) |
-1.0 |
tuning factors for variance,
horizontal and vertical scales for control variable 4 - pseudo relative humidity.
For cv_options=3 only. The actual default values are 0.25, 1.0, 1.5. |
|
as5(3) |
-1.0 |
tuning factors for variance,
horizontal and vertical scales for control variable 5 - unbalanced surface
pressure. For cv_options=3 only. The actual default values are 0.25, 1.0,
1.5. |
|
rf_passes |
6 |
number of passes of recursive
filter. |
|
var_scaling1 |
1.0 |
tuning factor of background
error covariance for control variable 1 - stream function. For cv_options=5
only. |
|
var_scaling2 |
1.0 |
tuning factor of background
error covariance for control
variable 2 - unbalanced velocity potential. For cv_options=5 only. |
|
var_scaling3 |
1.0 |
tuning factor of background
error covariance for control variable 3 - unbalanced temperature. For
cv_options=5 only. |
|
var_scaling4 |
1.0 |
tuning factor of background
error covariance for control
variable 4 - pseudo relative humidity. For cv_options=5 only. |
|
var_scaling5 |
1.0 |
tuning factor of background
error covariance for control
variable 5 - unbalanced surface pressure. For cv_options=5 only. |
|
len_scaling1 |
1.0 |
tuning factor of scale-length
for stream function. For cv_options=5 only. |
|
len_scaling2 |
1.0 |
tuning factor of scale-length
for unbalanced velocity potential. For cv_options=5 only. |
|
len_scaling3 |
1.0 |
tuning factor of scale-length for unbalanced temperature.
For cv_options=5 only. |
|
len_scaling4 |
1.0 |
tuning factor of scale-length
for pseudo relative humidity. For cv_options=5 only. |
|
len_scaling5 |
1.0 |
tuning factor of scale-length
for unbalanced surface pressure. For cv_options=5 only. |
|
je_factor |
1.0 |
ensemble
covariance weighting factor |
|
&wrfvar8 ;not used |
|||
&wrfvar9 |
|
for program tracing. trace_use=.true. gives additional performance diagnostics (calling tree, local routine timings, overall routine timings, memory usage). It does not change results, but does add runtime overhead. |
|
stdout |
6 |
unit number for standard
output |
|
stderr |
0 |
unit number for error output |
|
trace_unit |
7 |
Unit number for tracing output
note that units 10 and 9 are reserved for reading namelist.input and writing
namelist.output respectively. |
|
trace_pe |
0 |
Currently, statistics are
always calculated for all processors, and output by processor 0. |
|
trace_repeat_head |
10 |
the number of times any trace
statement will produce output for any particular routine. This stops overwhelming
trace output when a routine is called multiple times. Once this limit is
reached a 'going quiet' message is written to the trace file, and no more output
is produced from the routine, though statistics are still gathered. |
|
trace_repeat_body |
10 |
see trace_repeat_head
description |
|
trace_max_depth |
30 |
define the deepest level to
which tracing writes output |
|
trace_use |
true |
.true.: activate tracing |
|
trace_use_frequent |
false |
|
|
trace_use_dull |
false |
|
|
trace_memory |
true |
.true.: calculate allocated
memory using a mallinfo call. On some platforms (Cray and Mac), mallinfo is
not available and no memory monitoring can be done. |
|
trace_all_pes |
false |
.true.: tracing is output for
all pes. As stated in trace_pe, this does not change processor statistics. |
|
trace_csv |
true |
.true.: tracing statistics are
written to a xxxx.csv file in CSV format |
|
use_html |
true |
.true.: tracing and error
reporting routines will include HTML tags. |
|
warnings_are_fatal |
false |
.true.: warning messages that
would normally allow the
program to continue are treated as fatal errors. |
|
&wrfvar10 ; for code
developer |
|||
&wrfvar11 |
|||
cv_options_hum |
1 |
do not change |
|
check_rh |
|
0 --> No supersaturation
check after minimization. 1 --> supersaturation
(rh> 100%) and minimum rh (rh<10%) check, and make the local adjustment
of q. 2 --> supersaturation (rh> 95%) and
minimum rh (rh<11%) check and make the multi-level q adjustment under the
constraint of conserved column integrated water vapor |
|
sfc_assi_options |
1 |
1 --> surface observations will be assimilated based on the lowest model level first guess. Observations are not used when the height difference of the elevation of the observing site and the lowest model level height is larger than
100m. 2 --> surface observations will be assimilated based on surface similarity theory in PBL. Innovations are computed based on 10-m wind, 2-m temperature and 2-m moisture. |
|
calculate_cg_cost_fn |
false |
the purpose of calculate_cg_cost_fn has changed; use print_detail_grad=.true. to dump the cost function and gradient of each iteration to cost_fn and grad_fn. The conjugate gradient algorithm does not require the computation of the cost function at every iteration during minimization. .true.: the cost function is printed out, derived directly from the gradient using the fully linear properties inside the inner loop. .false.: only the initial and final cost functions are computed. |
|
lat_stats_option |
false |
do not change |
|
&wrfvar12 |
|||
balance_type |
1 |
obsolete |
|
&wrfvar13 |
|||
vert_corr |
2 |
do not change |
|
vertical_ip |
0 |
obsolete |
|
vert_evalue |
1 |
do not change |
|
max_vert_var1 |
99.0 |
specify the maximum truncation value (in percentage) to explain the variance of stream function in eigenvector decomposition |
|
max_vert_var2 |
99.0 |
specify the maximum truncation value (in percentage) to explain the variance of unbalanced potential velocity in eigenvector decomposition |
|
max_vert_var3 |
99.0 |
specify the maximum truncation value (in percentage) to explain the variance of the unbalanced temperature in eigenvector decomposition |
|
max_vert_var4 |
99.0 |
specify the maximum truncation value (percentage) to explain the variance of pseudo relative humidity in eigenvector decomposition |
|
max_vert_var5 |
99.0 |
for unbalanced surface pressure; it should be a non-zero positive number. Set max_vert_var5=0.0 only for offline VarBC applications. |
|
&wrfvar14 |
|||
the following 4 variables (rtminit_nsensor, rtminit_platform, rtminit_satid, rtminit_sensor) together control what sensors to be assimilated. |
|||
rtminit_nsensor |
1 |
total number of sensors to be
assimilated |
|
rtminit_platform |
-1 (max_instruments) |
platforms IDs array (used dimension: rtminit_nsensor); e.g., 1 for NOAA, 9 for EOS, 10 for METOP and 2 for DMSP |
|
rtminit_satid |
-1.0 (max_instruments) |
satellite IDs array (used
dimension: rtminit_nsensor) |
|
rtminit_sensor |
-1.0 (max_instruments) |
sensor IDs array (used
dimension: rtminit_nsensor); e.g., 0 for HIRS, 3 for AMSU-A, 4 for
AMSU-B, 15 for MHS, 10 for SSMIS,
11 for AIRS |
|
rad_monitoring |
0 (max_instruments) |
integer array (used dimension:
rtminit_nsensor); 0: assimilating mode; 1: monitoring mode (only calculate innovations) |
|
thinning_mesh |
60.0 (max_instruments) |
real array (used dimension: rtminit_nsensor); specify thinning mesh size (in KM) for different sensors. |
|
thinning |
false |
.true.: perform thinning on
radiance data |
|
qc_rad |
true |
.true.: perform quality
control. always .true. |
|
write_iv_rad_ascii |
false |
.true.: output radiance Observation minus Background files, which are in ASCII format and separated by sensors and processors. |
|
write_oa_rad_ascii |
false |
.true.: output radiance Observation minus Analysis files (Observation minus Background information is also included), which are in ASCII format and separated by sensors and processors. |
|
use_error_factor_rad |
false |
.true.: use a radiance error tuning factor
file "radiance_error.factor", which can be created with empirical
values or generated using variational tuning method (Desroziers and Ivanov,
2001) |
|
use_antcorr |
false (max_instruments) |
.true.: perform Antenna Correction in CRTM |
|
rtm_option |
1 |
which RTM (Radiative Transfer Model) to use. 1:
RTTOV (WRFDA must be compiled with RTTOV) 2: CRTM (WRFDA must be compiled with CRTM) |
|
only_sea_rad |
false |
.true.: assimilate radiance
over water only |
|
use_varbc |
false |
.true.: perform Variational Bias Correction. A parameter file in ASCII format called VARBC.in (a template is provided with the source code tar ball) is required. |
|
freeze_varbc |
false |
.true: together with use_varbc=.false., keep the VarBC bias parameters constant in time. In this case, the bias correction is read and applied to the innovations, but it is not updated during the minimization. |
|
varbc_factor |
1.0 |
for scaling the VarBC
preconditioning |
|
varbc_nobsmin |
10 |
defines the minimum number of
observations required for the computation of the predictor statistics during
the first assimilation cycle. If there are not enough data (according to
"VARBC_NOBSMIN") on the first cycle, the next cycle will perform a
coldstart again. |
|
airs_warmest_fov |
false |
.true.: use the observation brightness temperature for AIRS Window channel #914 as the criterion for GSI thinning (with a higher amplitude than the distance from the observation location to the nearest grid point). |
|
crtm_atmosphere |
0 |
climatology reference profile used above the model top for the CRTM Radiative Transfer Model (up to 0.01 hPa). 0: Invalid (default, use U.S. Standard Atmosphere) 1: Tropical 2: Midlatitude summer 3: Midlatitude winter 4: Subarctic summer 5: Subarctic winter 6: U.S. Standard Atmosphere |
|
use_crtm_kmatrix |
false |
.true. use CRTM K matrix rather than calling CRTM TL and AD routines for gradient calculation, which reduces runtime noticeably. |
|
&wrfvar15 (needs to be
set together with &wrfvar19) |
|||
num_pseudo |
0 |
Set the number of pseudo
observations, either 0 or 1 (single ob) |
|
pseudo_x |
1.0 |
Set the x-position (I) of the OBS in unit of grid-point. |
|
pseudo_y |
1.0 |
Set the y-position (J) of the OBS in unit of grid-point. |
|
pseudo_z |
1.0 |
Set the z-position (K) of OBS with the vertical level index, in bottom-up order. |
|
pseudo_val |
1.0 |
Set the innovation of the ob; wind in m/s, pressure in Pa, temperature in K, specific humidity in kg/kg |
|
pseudo_err |
1.0 |
set the error of the pseudo ob. Unit the same as pseudo_val.; if pseudo_var="q", pseudo_err=0.001 is more reasonable. |
|
&wrfvar16 (for hybrid
WRFDA/ensemble) |
|||
alphacv_method |
2 |
1: ensemble perturbations in control variable space 2: ensemble perturbations in model variable space |
|
ensdim_alpha |
0 |
ensemble size |
|
alpha_corr_type |
3 |
1: alpha_corr_type_exp 2: alpha_corr_type_soar 3: alpha_corr_type_gaussian (default) |
|
alpha_corr_scale |
1500.0 |
km |
|
&wrfvar17 |
|||
analysis_type |
“3D-VAR” |
"3D-VAR": 3D-VAR mode (default); "QC-OBS": 3D-VAR mode plus extra filtered_obs output; "VERIFY": verification mode. WRFDA resets check_max_iv=.false. and ntmax=0; "RANDOMCV": for creating ensemble perturbations |
|
&wrfvar18 (needs to set
&wrfvar21 and &wrfvar22 as well if ob_format=1 and/or radiances are
used) |
|||
analysis_date |
“2002-08-03_00:00:00.0000” |
specify the analysis time. It should be consistent with the first guess time. However, if time difference between analysis_date and date info read in from first guess is larger than analysis_accu, WRFDA will issue a warning message ("=======> Wrong xb time found???"), but won't abort. |
|
&wrfvar19 (needs to be
set together with &wrfvar15) |
|||
pseudo_var |
“t” |
Set the name of the OBS
variable: 'u' = X-direction component of
wind, 'v' = Y-direction component of
wind, 't' = Temperature, 'p' = Pressure, 'q' = Specific humidity, "pw" = total
precipitable water, "ref" = refractivity, "ztd" = zenith total
delay |
|
&wrfvar20 |
|||
documentation_url |
“http://www2.mmm.ucar.edu/people/wrfhelp/wrfvar/code/trunk” |
|
|
&wrfvar21 |
|||
time_window_min |
"2002-08-02_21:00:00.0000" |
start time of the assimilation time window, used for ob_format=1 and radiances to select observations inside the defined time window. Note: Starting from V3.1, this variable is also used for ob_format=2 to double-check that the obs are within the specified time window. |
|
&wrfvar22 |
|||
time_window_max |
"2002-08-03_03:00:00.0000" |
end time of the assimilation time
window, used for ob_format=1 and radiances to select observations inside the
defined time window. Note: Starting from V3.1, this variable is also used for
ob_format=2 to double-check that the obs are within the specified time window. |
|
&wrfvar23 (settings
related to the 4D-Var penalty term option, which controls the high-frequency
gravity waves using a digital filter) |
|||
jcdfi_use |
false |
.true.: include the JcDF term in
the cost function. .false.: ignore the JcDF term in
the cost function. |
|
jcdfi_io |
false |
.true.: read the JcDF output from WRF+, even if jcdfi_use=.false.; used for diagnosis. .false.: ignore the JcDF output from WRF+ |
|
jcdfi_tauc |
10800 |
filter time window, in seconds. |
|
jcdfi_gama |
1.0 |
Scaling number used to tune the weight of JcDF term |
|
jcdfi_error_wind |
3.0 |
m/s, wind error used in JcDF |
|
jcdfi_error_t |
1.0 |
K, temperature error used in
JcDF |
|
jcdfi_error_q |
0.001 |
kg/kg, specific humidity error
used in JcDF |
|
jcdfi_error_mu |
1000. |
Pa, perturbation pressure (mu)
error used in JcDF |
|
OBSPROC namelist variables.
Variable Names |
Description |
&record1 |
|
obs_gts_filename |
name and path of decoded
observation file |
fg_format |
'MM5' for MM5 application, 'WRF' for WRF
application |
obserr.txt |
name and path of observational
error file |
first_guess_file |
name and path of the first
guess file |
&record2 |
|
time_window_min |
The earliest time edge as
ccyy-mm-dd_hh:mn:ss |
time_analysis |
The analysis time as
ccyy-mm-dd_hh:mn:ss |
time_window_max |
The latest time edge as
ccyy-mm-dd_hh:mn:ss ** Note: Only observations between
[time_window_min, time_window_max] will be kept. |
&record3 |
|
max_number_of_obs |
Maximum number of observations
to be loaded (i.e. within the domain and time window); this is independent of the number
of obs actually read. |
fatal_if_exceed_max_obs |
.TRUE.: will stop when more than max_number_of_obs are loaded .FALSE.: will process the
first max_number_of_obs loaded observations. |
&record4 |
|
qc_test_vert_consistency |
.TRUE. will perform a vertical
consistency quality control check on sounding |
qc_test_convective_adj |
.TRUE. will perform a
convective adjustment quality control check on sounding |
qc_test_above_lid |
.TRUE. will flag the
observation above model lid |
remove_above_lid |
.TRUE. will remove the
observation above model lid |
domain_check_h |
.TRUE. will discard the observations
outside the domain |
Thining_SATOB |
.FALSE.: no thinning for SATOB data. .TRUE.: thinning procedure
applied to SATOB data. |
Thining_SSMI |
.FALSE.: no thinning for SSMI data. .TRUE.: thinning procedure
applied to SSMI data. |
Thining_QSCAT |
.FALSE.: no thinning for QSCAT data. .TRUE.: thinning procedure
applied to QSCAT data. |
&record5 |
|
print_gts_read |
.TRUE. will write diagnostics on
the decoded obs reading in file obs_gts_read.diag |
print_gpspw_read |
.TRUE. will write diagnostics
on the gpspw obs reading in file obs_gpspw_read.diag |
print_recoverp |
.TRUE. will write diagnostic
on the obs pressure recovery in file obs_recover_pressure.diag |
print_duplicate_loc |
.TRUE. will write diagnostic on space duplicate
removal in file obs_duplicate_loc.diag |
print_duplicate_time |
.TRUE. will write diagnostic on time duplicate
removal in file obs_duplicate_time.diag |
print_recoverh |
.TRUE. will write diagnostics on
the obs height recovery in file obs_recover_height.diag |
print_qc_vert |
.TRUE. will write diagnostics on
the vertical consistency check in file obs_qc1.diag |
print_qc_conv |
.TRUE. will write diagnostics on
the convective adjustment check in file obs_qc1.diag |
print_qc_lid |
.TRUE. will write diagnostic
on the above model lid height check in file obs_qc2.diag |
print_uncomplete |
.TRUE. will write diagnostic
on the uncompleted obs removal in file obs_uncomplete.diag |
user_defined_area |
.TRUE.: read in the record6:
x_left, x_right, y_top, y_bottom, .FALSE.: not read in the
record6. |
&record6 |
|
x_left |
West border of sub-domain, not
used |
x_right |
East border of sub-domain, not
used |
y_bottom |
South border of sub-domain,
not used |
y_top |
North border of sub-domain,
not used |
ptop |
Reference pressure at model
top |
ps0 |
Reference sea level pressure |
base_pres |
Same as ps0. User must set
either ps0 or base_pres. |
ts0 |
Mean sea level temperature |
base_temp |
Same as ts0. User must set
either ts0 or base_temp. |
tlp |
Temperature lapse rate |
base_lapse |
Same as tlp. User must set
either tlp or base_lapse. |
pis0 |
Tropopause pressure, the
default = 20000.0 Pa |
base_tropo_pres |
Same as pis0. User must set
either pis0 or base_tropo_pres |
tis0 |
Isothermal temperature above
tropopause (K), the default = 215 K. |
base_start_temp |
Same as tis0. User must set
either tis0 or base_start_temp. |
&record7 |
|
IPROJ |
Map projection (0 =
Cylindrical Equidistance, 1 = Lambert Conformal, 2 = Polar stereographic, 3 =
Mercator) |
PHIC |
Central latitude of the domain |
XLONC |
Central longitude of the
domain |
TRUELAT1 |
True latitude 1 |
TRUELAT2 |
True latitude 2 |
MOAD_CEN_LAT |
The central latitude for the
Mother Of All Domains |
STANDARD_LON |
The standard longitude
(Y-direction) of the working domain. |
&record8 |
|
IDD |
Domain ID (1 =< IDD =< MAXNES). Only the observations geographically located in that domain will be
processed. For WRF applications with XLONC /= STANDARD_LON, set IDD=2; otherwise
set IDD=1. |
MAXNES |
Maximum number of domains, as
needed. |
NESTIX |
The I(y)-direction dimension
for each of the domains |
NESTJX |
The J(x)-direction dimension
for each of the domains |
DIS |
The grid size for each of the domains.
For WRF applications, always set NESTIX(1), NESTJX(1), and DIS(1) based on the
information in wrfinput. |
NUMC |
The mother domain ID number
for each of the domains |
NESTI |
The I location in its mother
domain of the nest domain's lower-left corner -- point (1,1) |
NESTJ |
The J location in its mother
domain of the nest domain's lower-left corner -- point (1,1). For WRF
applications, NUMC(1), NESTI(1), and NESTJ(1) are always set to 1. |
&record9 |
|
prepbufr_output_filename |
Name of the PREPBUFR OBS file. |
prepbufr_table_filename |
'prepbufr_table_filename' ;
do not change |
output_ob_format |
output format: 1, PREPBUFR OBS file
only; 2,
ASCII OBS file only; 3,
both PREPBUFR and ASCII OBS files. |
use_for |
'3DVAR': obs file for 3DVAR (same as
before; default). 'FGAT ': obs files for FGAT. '4DVAR': obs files for 4DVAR. |
num_slots_past |
the number of time slots
before time_analysis |
num_slots_ahead |
the number of time slots after
time_analysis |
write_synop |
If keep synop obs in obs_gts
(ASCII) files. |
write_ship |
If keep ship obs in obs_gts
(ASCII) files. |
write_metar |
If keep metar obs in obs_gts
(ASCII) files. |
write_buoy |
If keep buoy obs in obs_gts
(ASCII) files. |
write_pilot |
If keep pilot obs in obs_gts
(ASCII) files. |
write_sound |
If keep sound obs in obs_gts
(ASCII) files. |
write_amdar |
If keep amdar obs in obs_gts
(ASCII) files. |
write_satem |
If keep satem obs in obs_gts
(ASCII) files. |
write_satob |
If keep satob obs in obs_gts
(ASCII) files. |
write_airep |
If keep airep obs in obs_gts
(ASCII) files. |
write_gpspw |
If keep gpspw obs in obs_gts
(ASCII) files. |
write_gpsztd |
If keep gpsztd obs in obs_gts
(ASCII) files. |
write_gpsref |
If keep gpsref obs in obs_gts
(ASCII) files. |
write_gpseph |
If keep gpseph obs in obs_gts
(ASCII) files. |
write_ssmt1 |
If keep ssmt1 obs in obs_gts
(ASCII) files. |
write_ssmt2 |
If keep ssmt2 obs in obs_gts
(ASCII) files. |
write_ssmi |
If keep ssmi obs in obs_gts
(ASCII) files. |
write_tovs |
If keep tovs obs in obs_gts
(ASCII) files. |
write_qscat |
If keep qscat obs in obs_gts
(ASCII) files. |
write_profl |
If keep profile obs in obs_gts
(ASCII) files. |
write_bogus |
If keep bogus obs in obs_gts
(ASCII) files. |
write_airs |
If keep airs obs in obs_gts
(ASCII) files. |