Scalable Interfaces for Geometry and Mesh based Applications



The Argonne Training Program on Extreme-Scale Computing (ATPESC 2015) is being offered from August 2 to August 14, 2015. The SIGMA tools are part of the FastMath institute's unstructured mesh technologies, which are covered in the tutorial and hands-on sessions conducted on August 7-8.

The following material gives the details students need to understand the examples used in the hands-on session on the SIGMA tools, and more specifically on MOAB. Due to time constraints, the other tools that are part of SIGMA (CGM, Lasso, MeshKit) will not be covered in this session. For further information, please email Vijay Mahadevan at vijaysm _at_ , write to the MOAB-dev list, or use the contact page.


Capabilities: Geometry and Mesh (data) generation/handling infrastructure with flexible solver interfaces.

  • CGM supports both open-source (OCC) and commercial (ACIS) geometry modeling engines.
  • MOAB provides scalable mesh (data) usage in applications through efficient array-based access; it also supports parallel I/O and visualization.
  • MeshKit provides unified meshing interfaces to advanced algorithms and to external packages (Cubit/Netgen).
  • The PETSc-MOAB interface simplifies efficient discretization and solution of PDEs on unstructured meshes with FEM.

Getting started

We will be using the ALCF Vesta resources to run all the examples. Use the ATPESC2015 allocation.

  1. Set some environment variables for MOAB_DIR and PETSC_DIR (bash)
    • export ATPESC_DIR=/projects/FASTMath/ATPESC-2015
    • export ATPESCINST_DIR=$ATPESC_DIR/install/fm-2015
    • export ATPESCEX_DIR=$ATPESC_DIR/examples/moab
    • export PETSC_DIR=$ATPESCINST_DIR/petsc/3.6.1/powerpc64-bgq-linux-gcc-4.4/with_moab_opt
    • export MOAB_DIR=$ATPESCINST_DIR/moab/4.8.3/powerpc64-bgq-linux-gcc-4.4/optimized
  2. Two options available to run the codes
    • Pre-compiled executables are available on Vesta (copy them locally along with input/ directory)
      $ATPESCEX_DIR (/projects/FASTMath/ATPESC-2015/examples/moab)
    • Get the sources and compile them yourself in your home space on Vesta. Instructions are available in the repository:
      git clone ATPESC2015
    • How do I visualize and examine the results?
      The mesh can be visualized with a MOAB-enabled version of VisIt or ParaView. If your versions of these tools are not MOAB-enabled, use the following command to generate a VTK file that can be used with them.
      $MOAB_DIR/bin/mbconvert FILENAME.h5m FILENAME.vtk
  3. To run the examples, use the qsub job submission system (do not use mpiexec wrappers directly). Example:
    qsub -n 32 -t 10 -q ATPESC2015 -A ATPESC2015 ./example-program --options

    For simplicity, in your bash shell, you can create an alias

    alias qsubmitrun="qsub -q ATPESC2015 -A ATPESC2015 --mode c16"

    This submits the job, and the output of the run will be available in jobid.output. Optionally, you can run interactively by adding the "-I" option to qsub after reserving a node block a priori.

MOAB examples

To utilize the SIGMA tools effectively, we will follow a workflow using several existing examples to eventually solve a 2-D Laplacian equation on a unit square mesh. The workflow involves the six examples below, and multiple usage variations can be used to get the same result with these tools.

  • Example 1: HelloParMOAB introduces basic parallel API in MOAB
  • Example 2: SetsNTags introduces querying the different entity sets and tags
  • Example 3: GenLargeMesh introduces generation of meshes in memory
  • Example 4: MBPart tool introduces usage of different partitioning methods for unstructured meshes
  • Example 5: LloydSmoother provides algorithm to improve quality of elements iteratively
  • Example 6: DMMoab-based Laplacian solvers show assembling and solving discrete PDE operators

Example 1: HelloParMOAB

What does this example implement?

  1. Instantiates MOAB interfaces and introduces the basic API
  2. Introduces some MOAB objects and reads a mesh from a file in parallel
  3. Queries the MOAB database and reports the entities of various dimensions (elements, faces, edges, vertices) and their adjacency information
  4. Shows how to query MOAB for entities both by type and by dimension; querying by dimension is useful for writing code that works regardless of entity type (e.g., triangle vs. quad)
  5. Performs a ghost-layer exchange between processors after loading the file (the number of layers is controllable), with options to resolve shared entities between processors
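The ghost-layer idea in step 5 can be illustrated with a small sketch (a conceptual illustration in Python, not the MOAB API; the function names here are hypothetical): for a 1-D chain of elements divided among ranks, each rank's ghost region is the set of off-rank elements within a given number of layers of its owned block.

```python
def owned_range(rank, nranks, nelems):
    """Contiguous block of element ids owned by this rank."""
    base, rem = divmod(nelems, nranks)
    start = rank * base + min(rank, rem)
    return range(start, start + base + (1 if rank < rem else 0))

def ghost_elements(rank, nranks, nelems, nlayers=1):
    """Off-rank elements within nlayers of the owned block (1-D adjacency)."""
    owned = owned_range(rank, nranks, nelems)
    lo, hi = owned[0], owned[-1]
    ghosts = set(range(max(0, lo - nlayers), lo))          # left neighbors
    ghosts |= set(range(hi + 1, min(nelems, hi + 1 + nlayers)))  # right neighbors
    return sorted(ghosts)

# Rank 1 of 4, owning elements 4..7 out of 16, with 2 ghost layers:
print(ghost_elements(1, 4, 16, nlayers=2))  # [2, 3, 8, 9]
```

In MOAB the same information is computed from the real mesh adjacencies and exchanged via ParallelComm, but the bookkeeping question (which off-processor entities do I need copies of?) is the same.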

Available options:

Usage: HelloParMOAB --help | [options] 
  -h [--help]    : Show full help text
  -f [--file] <arg>   : Name of input file (default=$PWD/input/64bricks_512hex_256part.h5m)
  -g [--nghost] <int> : Number of ghost layers (default=1)

Run commands:

qsubmitrun -t 10 -n 1 --proccount 16 $ATPESCEX_DIR/HelloParMOAB -f $ATPESCEX_DIR/input/64bricks_512hex_256part.h5m
qsubmitrun -t 10 -n 2 --proccount 32 $ATPESCEX_DIR/HelloParMOAB -f $ATPESCEX_DIR/input/64bricks_512hex_256part.h5m -g 1
qsubmitrun -t 10 -n 8 --proccount 128 $ATPESCEX_DIR/HelloParMOAB -f $ATPESCEX_DIR/input/hexagonal_assembly.h5m -g 2

Example 2: SetsNTags

What does this example implement?

  1. Load file and query the entity sets (group of entities) and their contents
  2. Look at the various conventional tag names available in MOAB and list how/where they are defined.
    — These conventional tags show how data often found in computational meshes is typically described, e.g., material types and Dirichlet/Neumann boundary condition groupings.

Available options:

Usage: SetsNTags --help | [options]
  -h [--help]    : Show full help text
  -f [--file] <arg>   : Name of input file (default=$PWD/input/64bricks_512hex_256part.h5m)

Run commands:

qsubmitrun -t 10 -n 2 --proccount 32 $ATPESCEX_DIR/SetsNTags -f $ATPESCEX_DIR/input/64bricks_512hex_256part.h5m
qsubmitrun -t 10 -n 1 --proccount 4 $ATPESCEX_DIR/SetsNTags -f $ATPESCEX_DIR/input/hexagonal_assembly.h5m

Example 3: GenLargeMesh

What does this example implement?

  1. Generates a d-dimensional parallel structured mesh with given partition/element information (linear HEX/TET/QUAD/TRI/EDGE)
  2. Defines double or integer tags on entities (vertices or elements)
  3. Allows the user to control the domain sizes and partition data
  4. Writes to a file in parallel with partition information
  5. Serves as a good example for measuring parallel shared-entity resolution and HDF5 I/O performance
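The kind of in-memory structured mesh GenLargeMesh builds can be sketched in a few lines of Python (a hypothetical stand-in for illustration, not MOAB code): generate vertex coordinates and quad connectivity for an nx-by-ny block of the domain.

```python
def gen_quad_mesh(nx, ny, sx=1.0, sy=1.0):
    """Vertices and quad connectivity for an nx-by-ny structured grid
    covering [0, sx] x [0, sy]."""
    verts = [(sx * i / nx, sy * j / ny)
             for j in range(ny + 1) for i in range(nx + 1)]
    def vid(i, j):                     # vertex id in row-major order
        return j * (nx + 1) + i
    quads = [(vid(i, j), vid(i + 1, j), vid(i + 1, j + 1), vid(i, j + 1))
             for j in range(ny) for i in range(nx)]
    return verts, quads

verts, quads = gen_quad_mesh(4, 4)
print(len(verts), len(quads))  # 25 16
```

GenLargeMesh does this per task (each task owns -A x -B x -C blocks of -b elements per side), then resolves the shared vertices on block interfaces in parallel.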

Available options:

Usage: GenLargeMesh --help | [options] 
  -h [--help]       : Show full help text
  -t [--topology] <int>: Topological dimension of the mesh to be generated (default=3)
  -b [--blockSizeElement] <int>: Block size of mesh (default=4)
  -M [--xproc] <int>: Number of processors in x dir (default=1)
  -N [--yproc] <int>: Number of processors in y dir (default=1)
  -K [--zproc] <int>: Number of processors in z dir (default=1)
  -A [--xblocks] <int>: Number of blocks on a task in x dir (default=2)
  -B [--yblocks] <int>: Number of blocks on a task in y dir (default=2)
  -C [--zblocks] <int>: Number of blocks on a task in z dir (default=2)
  -x [--xsize] <val>: Total size in x direction (default=1.)
  -y [--ysize] <val>: Total size in y direction (default=1.)
  -z [--zsize] <val>: Total size in z direction (default=1.)
  -w [--newMerge]   : Use new merging method
  -s [--simplices]  : Generate simplices
  -f [--faces_edges]: Create all faces and edges
  -i [--int_tag_cells] <arg>: Add integer tag on cells
  -d [--double_tag_verts] <arg>: Add double tag on vertices
  -o [--outFile] <arg>: Specify the output file name string (default GenLargeMesh.h5m)

Run commands to generate 2-D (-t 2) and 3-D (-t 3) meshes in serial and parallel by varying the block size:

qsubmitrun -t 10 -n 1 --proccount 1 $ATPESCEX_DIR/GenLargeMesh -t 2 -b 20
qsubmitrun -t 10 -n 1 --proccount 16 $ATPESCEX_DIR/GenLargeMesh -t 2 -b 50 -M 4 -N 4 -x 10 -y 10
qsubmitrun -t 20 -n 4 --proccount 64 $ATPESCEX_DIR/GenLargeMesh -t 3 -f -b 50 -M 4 -N 4 -K 4

Run commands to generate simplices (-s) for 2-D (triangular) and 3-D (tetrahedral) meshes in serial and parallel by varying the block size:

qsubmitrun -t 10 -n 4 --proccount 64 $ATPESCEX_DIR/GenLargeMesh -t 2 -s -b 50 -M 8 -N 8
qsubmitrun -t 20 -n 4 --proccount 64 $ATPESCEX_DIR/GenLargeMesh -t 3 -s -f -b 50 -M 4 -N 4 -K 4

Run commands to add some double tags on the mesh vertices and integer tags on the mesh elements:

qsubmitrun -t 20 -n 2 --proccount 32 $ATPESCEX_DIR/GenLargeMesh -t 3 -b 50 -M 4 -N 4 -K 2 -x 10 -y 10 -d myvar 
qsubmitrun -t 20 -n 4 --proccount 64 $ATPESCEX_DIR/GenLargeMesh -t 3 -f -b 100 -M 4 -N 4 -K 4 -d dvar1 -d dvar2 -i ivar1

You can also use HelloParMOAB and SetsNTags to look at the entities and tags that get generated using this example.

Example 4: Partition mesh

MOAB provides interfaces to several serial and parallel partitioners through Metis, ParMetis, and Zoltan (including Scotch). These partitioning methods, along with specific options controlling the mesh distribution algorithms, can be invoked through the mbpart tool.
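As a rough illustration of one of these methods, Zoltan's recursive coordinate bisection (RCB) can be sketched in Python (a simplified toy operating on element centroids, not the Zoltan implementation): recursively split the point set at the median along its longer axis until the requested number of parts is reached.

```python
def rcb(points, nparts):
    """Recursive coordinate bisection: split along the longer axis at the
    median. Returns a list of parts; nparts must be a power of 2."""
    if nparts == 1:
        return [points]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    axis = 0 if max(xs) - min(xs) >= max(ys) - min(ys) else 1
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                # median split keeps parts balanced
    return rcb(pts[:mid], nparts // 2) + rcb(pts[mid:], nparts // 2)

# Partition the centroids of a 4x4 grid of cells into 4 balanced parts:
cells = [(i + 0.5, j + 0.5) for i in range(4) for j in range(4)]
parts = rcb(cells, 4)
print([len(p) for p in parts])  # [4, 4, 4, 4]
```

Geometric methods like RCB are cheap and give spatially compact parts; the graph/hypergraph methods (PHG, ParMetis) instead minimize the communication volume implied by the mesh connectivity.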

Usage: mbpart --help | [options]    
  #parts      : Number of parts in partition
  input_file  : Mesh/geometry to partition
  output_file : File to which to write partitioned mesh/geometry
  -h [--help]       : Show full help text
  - --dimension : Specify dimension of entities to partition. Default is largest in file.
  -z [--zoltan] : (Zoltan) Specify Zoltan partition method.  One of RR, RCB, RIB, HFSC, PHG, or Hypergraph (PHG and Hypergraph are synonymous).
  -p [--parmetis] : (Zoltan+PARMetis) Specify PARMetis partition method.
  -o [--octpart] : (Zoltan) Specify OctPart partition method.
  -c [--include_closure]: Include element closure for part sets.
  -i [--imbalance] : Imbalance tolerance (used in Metis/PHG/Hypergraph method)
  -m [--metis] : (Metis) Specify Metis partition method. One of ML_RB or ML_KWAY.
  -x [--taggedsets] : (Metis) Partition tagged sets.
  -y [--taggedents] : (Metis) Partition tagged ents.
  -a [--aggregatingtag] : (Metis) Specify aggregating tag to partition tagged sets or tagged entities.
  -B [--aggregatingBCtag] : (Metis) Specify boundary id tag name used to group cells with same boundary ids.
  -I [--aggregatingBCids] : (Metis) Specify id or ids of boundaries to be aggregated before partitioning (all elements with the same boundary id will be in the same partition). Comma separated, e.g. -I 1,2,5
  -s [--sets]       : Write partition as tagged sets (Default)
  -t [--tags]       : Write partition by tagging entities
  -M [--power] : Generate multiple partitions, in powers of 2, up to 2^(pow)
  -R [--reorder]    : Reorder mesh to group entities by partition
  -v [--vertex_w] : Number of weights associated with a graph vertex.
  -e [--edge_w] : Number of weights associated with an edge.
  -l [--set_l] : Load material set(s) with specified ids (comma separated) for partition
  -T                : Print CPU time for each phase.

You can run the mbpart tool, using the following commands on Vesta.

qsubmitrun -t 20 -n 1 --proccount 1 $MOAB_DIR/bin/mbpart 64 -m ML_RB -R input/quadhole.h5m output/quadhole_64.h5m
qsubmitrun -t 20 -n 1 --proccount 1 $MOAB_DIR/bin/mbpart 512 -z PHG -R input/quadhole.h5m output/quadhole_512.h5m

Example 5: Lloyd smoother for improving mesh quality

The LloydRelaxation.cpp example implements Lloyd relaxation on a surface mesh. In a nutshell, each iteration of Lloyd relaxation sets each face centroid to the average position of the face's vertices, then moves each vertex to the average of the centroids of its connected faces. The overall effect is to smooth a mesh, especially across large transitions in mesh size. More formally, Lloyd relaxation results in the dual of the mesh being a centroidal Voronoi tessellation.

This example also demonstrates another way to perform mesh-based calculations in parallel. The mesh on a given processor is initialized with a layer of ghost faces around the locally-owned faces. For each iteration, face centroids are computed for all faces (including the ghost faces), vertex centroids are computed only for locally-owned vertices, then the centroid tag is communicated to processors sharing those vertices. This approach results in a single round of communication for each iteration, at the extra cost of computing ghost face centroids on every processor sharing the ghost faces (instead of just on the owning processor).
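The two-step update described above can be written compactly in Python (a serial toy version of the algorithm, not the LloydRelaxation.cpp code; boundary vertices are held fixed):

```python
def lloyd_iteration(verts, faces, fixed):
    """One Lloyd pass: face centroids = mean of their vertices, then each
    free vertex moves to the mean of the centroids of faces touching it."""
    cent = [tuple(sum(c) / len(f) for c in zip(*(verts[v] for v in f)))
            for f in faces]
    new = list(verts)
    for v in range(len(verts)):
        if v in fixed:                 # boundary vertices do not move
            continue
        adj = [cent[k] for k, f in enumerate(faces) if v in f]
        new[v] = tuple(sum(c) / len(adj) for c in zip(*adj))
    return new

# Unit square split into 4 triangles around an off-center interior vertex 4;
# with the boundary fixed, Lloyd pulls vertex 4 toward the center.
verts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.8, 0.7)]
faces = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
for _ in range(50):
    verts = lloyd_iteration(verts, faces, fixed={0, 1, 2, 3})
print(tuple(round(c, 3) for c in verts[4]))  # (0.5, 0.5)
```

For this tiny configuration the centroidal fixed point is the square's center, so the interior vertex converges to (0.5, 0.5); on a real surface mesh the same update evens out element shapes and sizes.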

Available options:

  -h [--help]   : Show full help text
  -n [--niter] : Number of Lloyd smoothing iterations (default=10)
  -e [--eps]   : Tolerance for the Lloyd smoothing error (default=1e-5)
  -d [--dim]   : Topological dimension of the mesh (default=2)
  -f [--file]  : Input mesh file to smooth (default=input/surfrandomtris-4part.h5m)
  -r [--nrefine] : Number of uniform refinements to perform and apply smoothing cycles (default=1)
  -p [--ndegree] : Degree of uniform refinement (default=2)
Random triangles with bad quality

Smoothed mesh after 25 Lloyd iterations

1 level of refinement of smoothed mesh with additional Lloyd iterations

In the figures above, the mesh on the left is smoothed with Lloyd relaxation into the mesh on the right.

To run the example, here are some sample commands:

qsubmitrun -t 10 -n 1 --proccount 1 $ATPESCEX_DIR/LloydRelaxation -n 20 -r 0 -f input/surfrandomtris-64part.h5m
qsubmitrun -t 10 -n 1 --proccount 16 $ATPESCEX_DIR/LloydRelaxation -n 20 -r 2 -f input/surfrandomtris-64part.h5m 
qsubmitrun -t 20 -n 2 --proccount 32 $ATPESCEX_DIR/LloydRelaxation -n 50 -r 3 -f input/surfrandomtris-64part.h5m 
qsubmitrun -t 20 -n 4 --proccount 64 $ATPESCEX_DIR/LloydRelaxation -n 500 -e 1e-6 -r 4 -f input/surfrandomtris-64part.h5m
qsubmitrun -t 20 -n 8 --proccount 128 $ATPESCEX_DIR/LloydRelaxation -n 500 -e 1e-6 -r 4 -p 3 -f input/surfrandomtris-64part.h5m

Example 6: DMMoab Diffusion-Reaction FEM Solver

DMMoabLaplacian2D.cxx – Inhomogeneous Laplacian in 2D with FEM

What does this example implement?

  1. Introduction to some DMMoab concepts
  2. Create DMMoab object using file loaded from disk
    • generated with GenLargeMesh example (2-D)
    • coarse mesh after smoothing and refinement
  3. Define the fields to be solved: a scalar diffusion PDE (1 variable)
  4. Setup linear operators and PETSc objects
  5. Compute the linear operators based on a Finite Element Method discretization of the scalar field
  6. Solve the system of equations with PETSc's KSP; use the command line to monitor convergence and change the solver/preconditioner
  7. Output the computed solution as tags defined on the mesh
  8. Visualize the mesh and solution data
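Steps 4-6 (assemble discrete operators, then hand them to a Krylov solver) can be mimicked in a tiny self-contained Python sketch. This uses a matrix-free 5-point Laplacian and a hand-written conjugate-gradient loop rather than MOAB/PETSc, so it only illustrates the structure of the computation, not the DMMoab API:

```python
import math

def laplacian_apply(u, n):
    """Matrix-free 5-point Laplacian (h = 1/(n+1)) on the n*n interior of
    the unit square, zero Dirichlet boundary; u is a flat list of length n*n."""
    h2 = (1.0 / (n + 1)) ** 2
    def at(i, j):
        return u[j * n + i] if 0 <= i < n and 0 <= j < n else 0.0
    return [(4 * at(i, j) - at(i - 1, j) - at(i + 1, j)
             - at(i, j - 1) - at(i, j + 1)) / h2
            for j in range(n) for i in range(n)]

def cg(apply_A, b, tol=1e-10, maxit=500):
    """Unpreconditioned conjugate gradients for a symmetric positive
    definite operator given as a matvec callback."""
    x = [0.0] * len(b)
    r = list(b)                        # residual for x = 0
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rs / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if math.sqrt(rs_new) < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Manufactured solution u = sin(pi x) sin(pi y), so f = 2 pi^2 u:
n = 15
h = 1.0 / (n + 1)
xy = [((i + 1) * h, (j + 1) * h) for j in range(n) for i in range(n)]
f = [2 * math.pi ** 2 * math.sin(math.pi * x) * math.sin(math.pi * y)
     for x, y in xy]
u = cg(lambda v: laplacian_apply(v, n), f)
err = max(abs(ui - math.sin(math.pi * x) * math.sin(math.pi * y))
          for ui, (x, y) in zip(u, xy))
print("max error:", err)  # O(h^2) for this second-order discretization
```

In the real example, the operator comes from an FEM discretization assembled through DMMoab, and KSP supplies far more capable solvers and preconditioners (multigrid, hypre, gamg) selectable from the command line.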

Some available options:

Usage: DMMoabLaplacian2D -help | [options] 
  -p : The type of problem being solved (controls forcing function)
  -n : The elements in each direction
  -l : Number of levels in the multigrid hierarchy
  -rho : The conductivity
  -nu : The width of the Gaussian source (for -p 1)
  -mg:  Use multigrid preconditioner
  -io:  Write out the solution and mesh data
  -tri:  Use triangles to discretize the domain
  -error:  Compute the discrete L_2 and L_inf errors of the solution
  -file <>: The mesh file for the problem

For a list of all available options (including options for PETSc), run "DMMoabLaplacian2D -help". For all runs below, add

-ksp_monitor -pc_type <type> -log_summary

to monitor convergence, experiment with preconditioners, and obtain profiling data, respectively.

Run with different values of rho and nu (problem 1) to control the diffusion coefficient and the spread of the Gaussian source. These runs use the internal mesh generator implemented in DMMoab.

qsubmitrun -t 20 -n 1 --proccount 16 $ATPESCEX_DIR/DMMoabLaplacian2D -n 20 -nu 0.02 -rho 0.01
qsubmitrun -t 20 -n 2 --proccount 32 $ATPESCEX_DIR/DMMoabLaplacian2D -n 40 -nu 0.01 -rho 0.005 -io -ksp_monitor
qsubmitrun -t 20 -n 4 --proccount 64 $ATPESCEX_DIR/DMMoabLaplacian2D -n 80 -nu 0.01 -rho 0.005 -io -ksp_monitor -pc_type hypre
qsubmitrun -t 20 -n 8 --proccount 128 $ATPESCEX_DIR/DMMoabLaplacian2D -n 160 -bc_type neumann -nu 0.005 -rho 0.01 -io
qsubmitrun -t 20 -n 8 --proccount 128 $ATPESCEX_DIR/DMMoabLaplacian2D -n 320 -bc_type neumann -nu 0.001 -rho 1 -io

Measure the convergence rate under uniform refinement with the options "-p 2 -error".

qsubmitrun -t 20 -n 1 --proccount 16 $ATPESCEX_DIR/DMMoabLaplacian2D -p 2 -error -n 16
qsubmitrun -t 20 -n 4 --proccount 32 $ATPESCEX_DIR/DMMoabLaplacian2D -p 2 -error -n 16 -l 1
qsubmitrun -t 20 -n 4 --proccount 64 $ATPESCEX_DIR/DMMoabLaplacian2D -p 2 -error -n 64
qsubmitrun -t 20 -n 4 --proccount 64 $ATPESCEX_DIR/DMMoabLaplacian2D -p 2 -error -n 128
qsubmitrun -t 20 -n 8 --proccount 128 $ATPESCEX_DIR/DMMoabLaplacian2D -p 2 -error -n 256
qsubmitrun -t 20 -n 8 --proccount 128 $ATPESCEX_DIR/DMMoabLaplacian2D -p 2 -error -n 512
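Given the L_2 errors reported by the -error runs above on successively refined grids (h halved each time), the observed convergence order can be computed with a short helper (illustrative, with made-up sample errors; not part of the example code):

```python
import math

def observed_order(errors):
    """Observed convergence rates from errors on successively halved h:
    p = log(e_k / e_{k+1}) / log(2)."""
    return [math.log(e0 / e1) / math.log(2.0)
            for e0, e1 in zip(errors, errors[1:])]

# Errors shrinking by ~4x per refinement indicate second-order accuracy:
print(observed_order([1.6e-2, 4.1e-3, 1.0e-3]))  # rates near 2
```

A linear FEM discretization should show rates approaching 2 in the L_2 norm as the mesh is refined; a stagnating rate usually points to a bug, an under-resolved feature, or solver tolerances that are too loose.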

Now, load the file we generated with the GenLargeMesh example, or use the implicit structured grid generator, to run the verification suite.

qsubmitrun -t 10 -n 1 --proccount 16 $ATPESCEX_DIR/DMMoabLaplacian2D -p 1 -n 16 -error -l


qsubmitrun -t 10 -n 1 --proccount 16 $ATPESCEX_DIR/DMMoabLaplacian2D -p 1 -file output/GenLargeMesh.h5m -error -l

You can also conduct a parametric study with the following commands, using linear simplices for a change.

qsubmitrun -t 20 -n 8 --proccount 128 $ATPESCEX_DIR/DMMoabLaplacian2D -p 2 -tri -n 160 -l 1 -nu 0.05 -rho 0.05 -io -ksp_monitor -pc_type hypre
qsubmitrun -t 20 -n 8 --proccount 128 $ATPESCEX_DIR/DMMoabLaplacian2D -p 2 -tri -n 320 -nu 0.01 -rho 0.005 -io -ksp_monitor -pc_type gamg
qsubmitrun -t 20 -n 8 --proccount 128 $ATPESCEX_DIR/DMMoabLaplacian2D -p 2 -tri -n 320 -nu 0.001 -rho 0.05 -io -ksp_monitor -pc_type hypre

Remember to use the same number or fewer processes for the --proccount argument when loading the mesh. The parallel mesh generated contains only the number of parts you specified, so it cannot be loaded on more processors in the solver. Otherwise, re-partition your mesh using the mbpart tool.

DMMoabLaplacian2D.cxx – Unstructured triangles (Square with a circular hole)

Expected mesh distribution (on 4 procs) and solution profiles:

DMMoabLaplacian2D unstructured triangular mesh

DMMoabLaplacian2D solution profile


qsubmitrun -t 5 -n 1 --proccount 16 $ATPESCEX_DIR/DMMoabLaplacian2D -p 3 -file input/quadhole_64.h5m -tri -io -l 0 -pc_type hypre -ksp_monitor
qsubmitrun -t 10 -n 2 --proccount 16 $ATPESCEX_DIR/DMMoabLaplacian2D -p 3 -file input/quadhole_64.h5m -tri -io -l 1 -mg -ksp_monitor
qsubmitrun -t 10 -n 2 --proccount 32 $ATPESCEX_DIR/DMMoabLaplacian2D -p 3 -file input/quadhole_64.h5m -tri -io -l 2 -mg -ksp_monitor
qsubmitrun -t 20 -n 2 --proccount 32 $ATPESCEX_DIR/DMMoabLaplacian2D -p 3 -file input/quadhole_64.h5m -tri -io -l 3 -mg -ksp_monitor
qsubmitrun -t 20 -n 4 --proccount 64 $ATPESCEX_DIR/DMMoabLaplacian2D -p 3 -file input/quadhole_64.h5m -tri -io -levels 4 -pc_type hypre -ksp_monitor
# Reference
qsubmitrun -t 25 -n 8 --proccount 64 $ATPESCEX_DIR/DMMoabLaplacian2D -p 3 -file input/quadhole_64.h5m -tri -io -l 6 -mg -ksp_monitor

Extension: Utilize the Lloyd smoother on the input/surfrandomtris-64part.h5m mesh, and write your own code or modify the driver to create a new problem.

DMMoabPoisson3D.cxx – FEM Poisson solver

A verifiable 3-D FEM solver for the Poisson equation; use it to measure the order of accuracy.

qsubmitrun -t 10 -n 2 --proccount 32 $ATPESCEX_DIR/DMMoabPoisson3D -error -p 1 -n 32 -l 1 -io -mg -ksp_monitor
qsubmitrun -t 10 -n 4 --proccount 32 $ATPESCEX_DIR/DMMoabPoisson3D -error -p 1 -n 32 -l 2 -io -mg -ksp_monitor
qsubmitrun -t 10 -n 8 --proccount 64 $ATPESCEX_DIR/DMMoabPoisson3D -error -p 1 -n 64 -l 1 -io -mg -ksp_monitor
qsubmitrun -t 10 -n 16 --proccount 128 $ATPESCEX_DIR/DMMoabPoisson3D -error -p 1 -n 128 -l 1 -io -pc_type gamg -ksp_monitor
qsubmitrun -t 10 -n 32 --proccount 256 $ATPESCEX_DIR/DMMoabPoisson3D -error -p 1 -n 256 -l 1 -io -pc_type hypre -ksp_monitor

How do we modify this for a new problem? Ask us questions.



Additional resources:

  • ATPESC examples repository
  • ATPESC-2014 examples repository
  • ATPESC-2014 hands-on documentation
  • ATPESC 2014 Lecture
  • ATPESC 2013 Lecture

Copyright © 2014 SIGMA. All Rights Reserved.