Scalable Interfaces for Geometry and Mesh based Applications



The Argonne Training Program on Extreme-Scale Computing (ATPESC 2014) is being offered from August 3 to August 15, 2014. The SIGMA tools are part of the FastMath institute's unstructured mesh technologies, which are covered in the tutorial and hands-on sessions conducted on August 8-9.
The following material gives the details students need to understand the examples used in the hands-on session on the SIGMA tools, and more specifically on MOAB. Due to time constraints, the other tools that are part of SIGMA (CGM, Lasso, MeshKit) will not be covered in this session. For further information, please email Vijay Mahadevan at vijaysm _at_ or the MOAB-dev list, or use the contact page.


Capabilities: Geometry and Mesh (data) generation/handling infrastructure with flexible solver interfaces.

  • CGM supports both open-source (OCC) and commercial (ACIS) geometry modeling engines.
  • MOAB provides scalable mesh (data) usage in applications through efficient array-based access; it also supports parallel I/O and visualization.
  • MeshKit provides unified meshing interfaces to advanced algorithms and to external packages (Cubit/Netgen).
  • The PETSc-MOAB interface simplifies efficient discretization and solution of PDEs on unstructured meshes with FEM.

Getting started

We will be using the ALCF Vesta resources to run all the examples. Use the ATPESC2014 allocation.

  1. Set some environment variables for MOAB_DIR and PETSC_DIR (bash)
    • export ATPESC_DIR=/projects/FASTMath/ATPESC-2014
    • export ATPESCINST_DIR=$ATPESC_DIR/install
    • export ATPESCEX_DIR=$ATPESC_DIR/examples/moab
    • export PETSC_DIR=$ATPESCINST_DIR/petsc/
    • export MOAB_DIR=$ATPESCINST_DIR/moab/4.7.0RC2/linux-rhel_6-ppc64-gcc-4.4.6
  2. Two options are available to run the codes
    • Pre-compiled executables are available on Vesta (copy them locally along with the input/ directory)
    • Get the sources and compile them yourself in your home space on Vesta. Instructions are available in the repository:
      git clone ATPESC2014
    • How do I visualize and examine the results?
      The meshes can be visualized with a MOAB-enabled version of VisIt or ParaView. If your versions of these tools are not MOAB-enabled, use the following command to generate a VTK file that they can read.

      $MOAB_DIR/bin/mbconvert FILENAME.h5m FILENAME.vtk
  3. To run the examples, use the qsub job submission system (do not use mpiexec wrappers directly). Example:
    qsub -n 32 -t 10 -q Q.ATPESC -A ATPESC2014 ./example-program --options

    For simplicity, in your bash shell, you can create an alias

    alias qsubmitrun="qsub -q Q.ATPESC -A ATPESC2014 --mode c16"

    This should submit the job, and the output of the run will be available in jobid.output. Optionally, you can run interactively by adding the “-I” option to qsub after reserving a node block a priori.

MOAB examples

To utilize the SIGMA tools effectively, we will follow a workflow using several existing examples to eventually solve a 2-D Laplacian equation on a square mesh (unit square). The workflow involves four examples, and multiple usage variations can produce the same result with these tools.

  • Example 1: HelloParMOAB introduces basic API in MOAB
  • Example 2: SetsNTags introduces querying the different entity sets and tags
  • Example 3: GenLargeMesh introduces generation of meshes in memory
  • Example 4: DMMoab-based Laplacian solver shows assembling and solving linear operators

Example 1: HelloParMOAB

What does this example implement?

  1. Instantiates MOAB interfaces and introduces the basic API
  2. Introduces some MOAB objects and reads a mesh from a file in parallel
  3. Queries the MOAB database and reports the entities of various dimensions (elements, faces, edges, vertices) and their adjacency information
  4. Shows how to query MOAB for entities both by type and by dimension; querying by dimension is useful for writing code that works regardless of entity type (e.g., triangle vs. quad)
  5. Performs a ghost-layer exchange between processors after loading the file (the number of layers is controllable), with options to resolve shared entities between processors

Available options:

Usage: HelloParMOAB --help | [options]
  -h [--help]               : Show full help text
  -f [--file] <arg>   : Name of input file (default=$PWD/input/64bricks_512hex_256part.h5m)
  -g [--nghost] <int> : Number of ghost layers (default=1)

Run commands:

qsubmitrun -t 10 -n 1 --proccount 1 $ATPESCEX_DIR/HelloParMOAB -f $ATPESCEX_DIR/input/64bricks_512hex_256part.h5m
qsubmitrun -t 10 -n 1 --proccount 4 $ATPESCEX_DIR/HelloParMOAB -f $ATPESCEX_DIR/input/64bricks_512hex_256part.h5m -g 1
qsubmitrun -t 10 -n 1 --proccount 16 $ATPESCEX_DIR/HelloParMOAB -f $ATPESCEX_DIR/input/hexagonal_assembly.h5m -g 2

Example 2: SetsNTags

What does this example implement?

  1. Loads a file and queries the entity sets (groups of entities) and their contents
  2. Looks at various conventional tag names available in MOAB and lists how/where they are defined.
    — These conventional tags show how data often found in computational meshes is typically described, e.g., material types and Dirichlet/Neumann boundary condition groupings

Available options:

Usage: SetsNTags --help | [options]
  -h [--help]               : Show full help text
  -f [--file] <arg>   : Name of input file (default=$PWD/input/64bricks_512hex_256part.h5m)

Run commands:

qsubmitrun -t 10  -n 1 --proccount 1 $ATPESCEX_DIR/SetsNTags -f $ATPESCEX_DIR/input/64bricks_512hex_256part.h5m
qsubmitrun -t 10  -n 1 --proccount 4 $ATPESCEX_DIR/SetsNTags -f $ATPESCEX_DIR/input/hexagonal_assembly.h5m

Example 3: GenLargeMesh

What does this example implement?

  1. Generates a d-dimensional parallel mesh with given partition/element information (linear HEX/TET/QUAD/TRI/EDGE)
  2. Defines double or integer tags on entities (vertices or elements)
  3. Allows the user to control the domain sizes
  4. Writes to a file in parallel with partition information

Available options:

Usage: GenLargeMesh --help | [options]
  -h [--help]       : Show full help text
  -t [--topology] <int>: Topological dimension of the mesh to be generated (default=3)
  -b [--blockSizeElement] <int>: Block size of mesh (default=4)
  -M [--xproc] <int>: Number of processors in x dir (default=1)
  -N [--yproc] <int>: Number of processors in y dir (default=1)
  -K [--zproc] <int>: Number of processors in z dir (default=1)
  -A [--xblocks] <int>: Number of blocks on a task in x dir (default=2)
  -B [--yblocks] <int>: Number of blocks on a task in y dir (default=2)
  -C [--zblocks] <int>: Number of blocks on a task in z dir (default=2)
  -x [--xsize] <val>: Total size in x direction (default=1.)
  -y [--ysize] <val>: Total size in y direction (default=1.)
  -z [--zsize] <val>: Total size in z direction (default=1.)
  -w [--newMerge]   : Use new merging method
  -k [--keep_skins] : Keep skins with shared entities
  -s [--simplices]  : Generate simplices
  -f [--faces_edges]: Create all faces and edges
  -i [--int_tag_cells] <arg>: Add integer tag on cells
  -d [--double_tag_verts] <arg>: Add double tag on vertices
  -o [--outFile] <arg>: Specify the output file name string (default GenLargeMesh.h5m)

Run commands — generate 2-D (-t 2) and 3-D (-t 3) meshes in serial and parallel by varying blocksize:

qsubmitrun -t 10 -n 1 --proccount 1 $ATPESCEX_DIR/GenLargeMesh -t 2 -b 5
qsubmitrun -t 10 -n 1 --proccount 4 $ATPESCEX_DIR/GenLargeMesh -t 2 -b 50 -M 2 -N 2 -x 10 -y 10
qsubmitrun -t 20 -n 4 --proccount 64 $ATPESCEX_DIR/GenLargeMesh -t 3 -f -b 50 -M 4 -N 4 -K 4

Run commands — generate simplices (-s) in 2-D (triangular) and 3-D (tetrahedral) meshes in serial and parallel by varying blocksize:

qsubmitrun -t 10 -n 1 --proccount 4 $ATPESCEX_DIR/GenLargeMesh -t 2 -s -b 50 -M 2 -N 2
qsubmitrun -t 20 -n 2 --proccount 32 $ATPESCEX_DIR/GenLargeMesh -t 3 -s -f -b 50 -M 4 -N 4 -K 2

Run commands — add some double tags on the mesh vertices and integer tags on mesh elements:

qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/GenLargeMesh -t 3 -b 50 -M 2 -N 2 -x 10 -y 10 -d myvar
qsubmitrun -t 20 -n 4 --proccount 64 $ATPESCEX_DIR/GenLargeMesh -t 3 -f -b 100 -M 4 -N 4 -K 4 -d dvar1 -d dvar2 -i ivar1

You can also use HelloParMOAB and SetsNTags to look at the entities and tags that get generated using this example.

Example 4: DMMoab Inhomogeneous Laplacian Solver

What does this example implement?

  1. Introduces some DMMoab concepts
  2. Creates a DMMoab object using a file loaded from disk, generated with the GenLargeMesh example (2-D)
  3. Defines the fields to be solved: scalar diffusion (1 variable)
  4. Sets up the linear operators and PETSc objects
  5. Computes the linear operators based on a finite element discretization of the scalar field
  6. Solves the system of equations with PETSc’s KSP; use the command line to monitor convergence and change the solver/preconditioner
  7. Outputs the computed solution as tags defined on the mesh
  8. Visualizes the results

Some available options:

Usage: DMMoabLaplacian -help | [options]
  -h [--help]       : Show full help text
  -problem <1>: The type of problem being solved (controls forcing function)
  -bc_type <dirichlet> (choose one of) dirichlet neumann
  -n <2>: The elements in each direction
  -rho <0.1>: The conductivity
  -x <1>: The domain size in x-direction
  -y <1>: The domain size in y-direction
  -xref <0.5>: The x-coordinate of Gaussian center (for -problem 1)
  -yref <0.5>: The y-coordinate of Gaussian center (for -problem 1)
  -nu <0.05>: The width of the Gaussian source (for -problem 1)
  -error: <FALSE> Compute the discrete L_2 and L_inf errors of the solution (for -problem 2)
  -io: <FALSE> Write out the solution and mesh data
  -file <>: The mesh file for the problem

For a list of all available options (including options for PETSc), run “DMMoabLaplacian -help”. For all runs below, use

-ksp_monitor -pc_type <type> -log_summary

to monitor convergence, play with preconditioners, and obtain profiling data, respectively.

Run with different values of rho (-rho) and nu (-nu) for problem 1 to control the diffusion and the spread of the Gaussian source. These runs use the internal mesh generator implemented in DMMoab.

qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/DMMoabLaplacian -n 20 -nu 0.02 -rho 0.01
qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/DMMoabLaplacian -n 40 -nu 0.01 -rho 0.005 -io -ksp_monitor
qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/DMMoabLaplacian -n 80 -nu 0.01 -rho 0.005 -io -ksp_monitor -pc_type hypre
qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/DMMoabLaplacian -n 160 -bc_type neumann -nu 0.005 -rho 0.01 -io
qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/DMMoabLaplacian -n 320 -bc_type neumann -nu 0.001 -rho 1 -io

Measure the convergence rate under uniform refinement with the options “-problem 2 -error”.

qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/DMMoabLaplacian -problem 2 -error -n 16
qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/DMMoabLaplacian -problem 2 -error -n 32
qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/DMMoabLaplacian -problem 2 -error -n 64
qsubmitrun -t 20 -n 1 --proccount 8 $ATPESCEX_DIR/DMMoabLaplacian -problem 2 -error -n 128
qsubmitrun -t 20 -n 1 --proccount 8 $ATPESCEX_DIR/DMMoabLaplacian -problem 2 -error -n 256
qsubmitrun -t 20 -n 1 --proccount 16 $ATPESCEX_DIR/DMMoabLaplacian -problem 2 -error -n 512

Now, load up the file we generated with the GenLargeMesh example and run the solver using it.

qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/DMMoabLaplacian -problem 1 -file ./GenLargeMesh.h5m
qsubmitrun -t 20 -n 1 --proccount 4 $ATPESCEX_DIR/DMMoabLaplacian -problem 2 -file ./GenLargeMesh.h5m -error

Remember to use the same or a smaller number for the --proccount argument when loading the mesh. The parallel mesh contains only the parts you specified when it was generated, and it cannot be loaded back on more processors in the solver.


ATPESC-2014 examples repository
ATPESC-2014 hands-on documentation
ATPESC 2014 Lecture
ATPESC 2013 Lecture

Copyright © 2014--2020 SIGMA. All Rights Reserved.