Preparing Meshes

For quasi-uniform meshes, users generally only need to prepare mesh decomposition files - files that describe the decomposition of the SCVT mesh across processors - when running MPAS-A using multiple MPI tasks.


See MPAS Mesh Specification for additional details.



Graph Partitioning with METIS

To run MPAS in parallel, a mesh decomposition file is required. This file describes a partitioning of the mesh into a number of partitions equal to the number of MPI tasks that will be used. A limited number of mesh decomposition files (graph.info.part.*) are provided with each mesh, as is the mesh connectivity file (graph.info). If a pre-computed decomposition file matches the number of MPI tasks to be used to run MPAS, then there is no need to run METIS.

To create new mesh decomposition files for some particular number of MPI tasks, only the graph.info file is required. The supported method for partitioning a graph.info file uses the METIS software. The serial graph partitioning program, METIS (rather than ParMETIS or hMETIS), should be sufficient for quickly partitioning any mesh usable by MPAS.

After installing METIS, a graph.info file can be partitioned into N partitions by running

> gpmetis -minconn -contig -niter=200 graph.info N

where N is the required number of partitions. The resulting file, graph.info.part.N, can then be copied into the MPAS run directory before running the model with N MPI tasks.
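The graph.info file itself is plain text in the METIS graph format: the first line gives the number of graph vertices (mesh cells) and graph edges (adjacent cell pairs), and line i lists the 1-based indices of the cells neighboring cell i. As an illustration only - a toy four-cell ring, not a real SCVT - such a file could be created and partitioned as follows:

```shell
# Toy graph.info in METIS graph format (illustrative only; real MPAS
# meshes ship with a graph.info derived from the mesh connectivity).
# Header: <number of cells> <number of edges>
# Line i: 1-based indices of the cells neighboring cell i
cat > graph.info << 'EOF'
4 4
2 4
1 3
2 4
3 1
EOF

# Partition into 2 parts, producing graph.info.part.2
# (uncomment if METIS is installed):
# gpmetis -minconn -contig -niter=200 graph.info 2
```

Each undirected edge appears in the neighbor lists of both of its cells, so the edge count in the header is half the total number of adjacency entries.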




Online graph partitioning with PT-Scotch

Starting with the v8.4.0 release, MPAS provides an option to enable online graph partitioning using the PT-Scotch library. To use this feature, the PT-Scotch library must first be installed on the system, and MPAS must then be built with PT-Scotch support. Linking with the PT-Scotch graph partitioning library provides more details on building MPAS with PT-Scotch support.

Once MPAS has been built with PT-Scotch support, the model can partition meshes at runtime, using only the connectivity information present in the input file and without the need for any graph.info files. After computing the graph partitioning, MPAS writes the generated partition file to disk for future use. The online partitioning feature is most beneficial when starting from a fresh run directory, when changing the MPI task count frequently, or when running workflows where pre-computing many graph.info.part.* files would be inconvenient.

Using the partial global graph read by the MPAS bootstrapping framework, a PT-Scotch distributed graph is constructed. Internally, PT-Scotch performs the partitioning operation through distributed graph mapping algorithms. Presently, the MPAS interface to PT-Scotch only specifies the number of partitions that the graph should be partitioned into, and assumes that the edge and vertex weights are all uniformly distributed.

The exact behavior of the online graph partitioning can be controlled through a namelist option, as described in Usage and run-time behavior.



Usage and run-time behavior

After building MPAS with PT-Scotch, the user can still choose between using pre-computed graph partition files or the PT-Scotch online graph partitioning by setting appropriate values for the config_block_decomp_file_prefix namelist option. Assuming that mpi_tasks denotes the number of MPI tasks to be used in the model run, MPAS decides whether to use a pre-computed graph partition file or to invoke PT-Scotch online partitioning based on the following logic:

  • If config_block_decomp_file_prefix + mpi_tasks points to a valid graph partition file that already exists in the run directory, then that file is used to proceed with the model run without invoking PT-Scotch.

  • If config_block_decomp_file_prefix is empty, or if config_block_decomp_file_prefix + mpi_tasks does not match any valid graph partition file in the run directory, then PT-Scotch graph partitioning is invoked. During this process, the generated partition is saved as a graph partition file in the run directory so that it may be reused in subsequent runs without needing to invoke PT-Scotch again.
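As an illustration of how the prefix and task count combine, consider the following namelist fragment (the group name and mesh prefix shown here are example values; check the namelist distributed with your mesh):

```fortran
&decomposition
    config_block_decomp_file_prefix = 'x1.40962.graph.info.part.'
/
```

With this setting and a 64-task run, MPAS appends the task count to the prefix and looks for the file x1.40962.graph.info.part.64 in the run directory; if that file is absent and MPAS was built with PT-Scotch support, online partitioning is invoked instead.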

If MPAS has not been built with PT-Scotch support, no PT-Scotch code paths are available, and an incorrect specification of config_block_decomp_file_prefix + mpi_tasks causes the model to halt. The run-time behavior is summarized below:

  • With or without PT-Scotch support: if config_block_decomp_file_prefix + mpi_tasks is set to a valid graph partition file that exists in the run directory, the existing graph partition file is used to proceed with the model run without invoking PT-Scotch.

  • With PT-Scotch support: if the prefix is unset or set to a value that does not match any valid graph partition file in the run directory, PT-Scotch graph partitioning is invoked, and the generated partition is saved to the run directory for reuse in later runs.

  • Without PT-Scotch support: if the prefix is unset or set to a value that does not match any valid graph partition file in the run directory, the MPAS model run halts with an error.




Relocating Refinement Regions on the Sphere

The grid_rotate program is used to rotate an MPAS mesh file, moving a refinement region from one geographic location to another, so that the mesh can be re-used for different applications. This utility saves computational resources, since generating an SCVT - particularly one with a large number of generating points or a high degree of refinement - can take considerable time.


To build the “grid_rotate” program,

  1. Edit the Makefile to set the Fortran compiler to be used.

  2. If the NETCDF environment variable points to a netCDF installation that was built with a separate Fortran interface library, add -lnetcdff just before -lnetcdf in the Makefile.

  3. Run make to create a grid_rotate executable file.


Besides the MPAS grid file to be rotated, grid_rotate requires a namelist file, namelist.input, which specifies the rotation to be applied to the mesh. The namelist.input variables specific to “grid_rotate” are summarized in the table below.


config_original_latitude_degrees - original latitude of any point on the sphere

config_original_longitude_degrees - original longitude of any point on the sphere

config_new_latitude_degrees - latitude to which the original point should be shifted

config_new_longitude_degrees - longitude to which the original point should be shifted

config_birdseye_rotation_counter_clockwise_degrees - rotation about a vector from the sphere center through the original point


Essentially, one chooses any point on the sphere, decides to where that point should be shifted, and specifies any change to the orientation (i.e., rotation) of the mesh about that point.
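For example, an illustrative namelist.input that shifts the point at 0°N, 0°E to 20°N, 105°E (over Southeast Asia) with no additional rotation about that point might look like the following (the namelist group name and the coordinate values here are illustrative; consult the namelist.input distributed with grid_rotate):

```fortran
&input
    config_original_latitude_degrees = 0.0
    config_original_longitude_degrees = 0.0
    config_new_latitude_degrees = 20.0
    config_new_longitude_degrees = 105.0
    config_birdseye_rotation_counter_clockwise_degrees = 0.0
/
```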

After the rotation parameters are set in namelist.input, the grid_rotate program is run with two command-line arguments specifying the name of the original grid file and the name of the rotated grid file to be produced, e.g.,

> grid_rotate grid.nc grid_SE_Asia_refinement.nc

The original grid file is not altered; a new, rotated grid file is created. The NCL script mesh.ncl may be used to plot either the original or the rotated grid file after setting the name of the grid file in the script.

Warning

The grid_rotate program initializes the new, rotated grid file to a copy of the original grid file. If the original grid file has only read permission (i.e., no write permission), then so will the copy, and consequently, the grid_rotate program will fail when attempting to update the fields in the copy.
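One way to avoid this failure (a sketch; grid.nc stands in for whatever the original grid file is named) is to ensure the original file is user-writable before running grid_rotate, so that the copy inherits a writable mode:

```shell
# Ensure a read-only grid file is user-writable so that the copy
# created by grid_rotate is also writable. (grid.nc is a placeholder
# name; the touch and a-w lines below only simulate a read-only file.)
touch grid.nc          # stand-in for an existing grid file
chmod a-w grid.nc      # simulate a read-only original
chmod u+w grid.nc      # restore user write permission before rotating
```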




Creating Limited-area SCVT Meshes

The process of creating a limited-area (regional) mesh for MPAS-Atmosphere involves

  1. selecting any existing mesh - either a quasi-uniform mesh or a variable-resolution mesh that has been rotated as in Relocating Refinement Regions on the Sphere,

  2. describing the geographical region to be extracted from that mesh, and

  3. running the limited-area Python program to extract all cells, edges, and vertices in the designated region.


The result is a new netCDF mesh file that can be used to make a limited-area simulation as described in Regional Simulations.


See MPAS-A Meshes to obtain the limited-area Python program. Although the set of required Python packages may change over time, the program currently requires the numpy and netCDF4 packages, in addition to other standard packages.
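The geographical region to be extracted is described in a small plain-text specification file read by the limited-area program. The exact keywords and units depend on the version of the tool, so the fragment below is an assumption to be checked against the tool's own documentation; a circular region centered at 40°N, 100°W might be described along these lines:

```text
Name: my_region
Type: circle
Point: 40.0, -100.0
radius: 200000
```

Here Name sets the prefix of the output files, Type selects the region shape, Point gives the center as latitude, longitude in degrees, and radius gives the region's radius (the units and even the keyword names should be verified against the limited-area program's README before use).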