Frequently Asked Questions (FAQ)
- What’s the largest mesh MOAB can read/write?
We’ve read meshes of 128M tetrahedral elements and 256M hexahedral elements. For 64M tets, MOAB uses about 2.5GB of memory. MOAB also has a structured mesh representation that requires about 57% less memory than the unstructured representation. Memory use tends to vary depending on what you’re doing with the mesh, since MOAB doesn’t store adjacency information (specifically, vertex-to-element adjacencies) until you ask for it. Similarly, MOAB doesn’t construct interior faces and edges of a 3D mesh until you ask for them.
- Do you have any sample meshes that I can use to test MOAB?
- What about geometry models, and relating them to mesh?
We also develop the CGM library, for representing solid models and other types of geometric models. CGM is the library that supports geometry in the CUBIT mesh generation toolkit. In order to avoid dependencies between MOAB and CGM, for those applications wanting to use one without the other, we package mesh-to-geometry relations in a separate library, named Lasso. CGM and Lasso implement the iGeom and iRel ITAPS interfaces, respectively.
- Why does MOAB use ErrorCode and not exception handling?
One thing exception handling gives you that return codes can’t is the ability to clean things up all the way up the call stack, especially for functions that don’t explicitly return values, e.g. constructors and destructors. For fine-grained data objects (mesh entities, tag values), MOAB uses an array-based storage scheme rather than a C++ object-based scheme, so exceptions in constructors aren’t as much of an issue as they might be in other C++ mesh databases. An ErrorCode carries some information, and you can get more by calling Interface::get_last_error.
There is currently an ongoing effort to improve the error-handling framework to provide better (and more useful) error messages during critical operations in parallel mesh handling and manipulation. This will be part of a future release once the functionality has been tested.
- Why are kd-trees so expensive to initialize?
A kd-tree is a type of binary tree in which each cell is subdivided by an axis-aligned plane placed somewhere in that cell, not necessarily at the midpoint of the dimension being split. These trees are used to speed up spatial searches, since they generally provide log(n) search scaling, where n is the number of entities in the tree. Tree construction is relatively fast when vertices are stored in the tree, since testing whether a tree node contains a point is cheap, and each point is contained in exactly one leaf node of the tree. However, non-vertex entities (triangles, hex elements, etc.) are often not wholly contained in a single leaf node of the tree, nor are they always wholly contained in either of two leaf nodes descended from a single parent. Therefore, when testing whether an element, e.g. a triangle, overlaps a given node’s bounding box, it is not enough to simply compare that node’s bounding box with the element’s. Instead, a more complicated test must be done. If the element is convex, the separating-axis test can be used, and this is in fact how the test is implemented in MOAB. This requires far more time than a simple box test.
There are several types of strategies used for splitting entities among descendant nodes of a tree; see AdaptiveKDTree::PLANE_SET option for details.
- How do I build MOAB from scratch?
- How do I see the actual compile/link command being used?
To turn on verbose make output, use ‘make V=1’, or pass --disable-silent-rules to configure to make verbose output the default.
- IBM Blue Gene/P
A few miscellaneous gotchas that might come up:
- Configure chose gfortran as FC but picked up a defunct Portland Group compiler as F77. I had to reconfigure with ‘F77=gfortran’ to get ‘make check’ to work correctly. Without this specification, the check phase errored out when it attempted to build examples/itaps/TagIterateF.F with the old compiler.
- If using MPI, and the C++ MPI compiler resolves to mpiCC (and the C compiler to mpicc), the MOAB build may fail at link time, since filenames on some Mac filesystems are case-insensitive, so mpiCC and mpicc refer to the same file. To fix this, add ‘CXX=mpicxx’ to the configure line just after ‘./configure’.
- Wrong HDF5 shared libraries being used on MCS LAN
- Linking from C or Fortran with pgi compilers
- How are boundary conditions represented in MOAB?
Boundary conditions are often specified in terms of geometric model entities, similar to material types. MOAB uses entity sets to store this information as well. The DIRICHLET_SET and NEUMANN_SET tags are used to mark Dirichlet- and Neumann-type boundary condition sets, respectively; the value of these tags is the id given to the set, e.g. the nodeset or sideset id in ExodusII. By convention, Neumann sets usually contain (indirectly) intermediate-dimension entities like edges in a 2D mesh or faces in a 3D mesh, while Dirichlet sets usually contain vertices. In addition, Neumann sets are represented as sets of faces, rather than as sides of elements. Faces can be ordered “forward” or “reverse” with respect to one of the bounding elements, depending on whether the right-hand normal points into or out of the element. Forward-sense faces are added to the Neumann set directly. Reverse-sense faces are put into a separate set; that set is tagged with the SENSE tag, with value -1; and that reverse set is added to the Neumann set.
Material groupings are indicated using the MATERIAL_SET tag, again with a value corresponding to that of the element block in ExodusII.
Material/Dirichlet/Neumann sets can also have a NAME tag, if the corresponding block/nodeset/sideset was given a name in CUBIT.
- What happened to my ExodusII blocks, nodesets, and sidesets?
When you import a mesh from ExodusII or CUBIT’s .cub file format, metadata like blocks, nodesets, and sidesets is put into entity sets in MOAB. These sets are tagged with specific tags to identify them. The description of boundary condition representation above explains how blocks/nodesets/sidesets are defined in MOAB.
- How do I get access to the geometric topology used to generate my mesh?
In solid modeler-based mesh generation, groups of mesh entities are associated with each geometric model entity. MOAB can restore some information about the geometric model when reading the mesh from CUBIT’s .cub format. MOAB can also read geometric models directly, when built with the CGM option. The geometric model is represented in MOAB using entity sets, marked with the GEOM_DIMENSION tag whose value equals the topological dimension of the model entity. The GLOBAL_ID tag is set to the id of the original model entity. When reading a mesh, these sets are populated with the entities “owned” by the corresponding model entity (this does NOT include the mesh on lower-dimensional entities). When reading geometry directly, the sets for vertices, edges, and faces are populated with the results of calling the faceting engine in CGM, and volumes have no mesh entities. In all cases, sets are given parent/child relations with other geometric model sets. Sense information is also stored in these sets.
For a more complete description of this topic, see the MOAB meta-data document.
- I generate my mesh with Cubit; what’s the best way to import those into MOAB?
If you generate your mesh with CUBIT, the best way to import it into MOAB is through CUBIT’s .cub file format. Use the “save as” command to save to a file with the .cub extension. MOAB’s .cub file reader will read not only the mesh, but also the geometric topology, boundary condition sets, and other metadata information from the file. MOAB also reads the ExodusII format, if the mesh has been exported in that format; in this case, only the mesh and the blocks/nodesets/sidesets will be restored.
For more details on what information from CUBIT is read and how it is represented internally, see the MOAB metadata document.
- Why is MOAB’s .cub reader named Tqdcfr?
MOAB’s .cub file reader is implemented in MOAB’s Tqdcfr class; the name is an acronym for Tim’s Quick and Dirty Cub File Reader. Note that this name describes how the class was first written; it has since been updated many times, in order to maintain compatibility with more recent versions of CUBIT.
- Why doesn’t MOAB have a .cub writer?
The .cub format is proprietary and used only by CUBIT for native save/restore of mesh and geometry. It is unlikely that one would want to take data from MOAB back into CUBIT. Therefore, we have decided that it is not worth the development effort to implement a .cub writer. If a MOAB user would like to develop one, though, we would be happy to add it to the MOAB source code.
- Where can I find out what options are implemented for the various readers and writers in MOAB?
Like the iMesh interface, MOAB’s mesh input and output functions have a const char* options argument; this argument is used to pass a wide range of options to the readers and writers. For example, parallel read/write methods, timestep and variable information can all be specified in the options string. The various options implemented by each reader/writer are described in the MOAB metadata document and the README.IO file in the MOAB source tree.
- Can MOAB read climate data from .nc files?
Yes, MOAB has a .nc reader class that can read climate data; this reader is used to read any files with the .nc extension. Options specifying the timestep, variable, mesh subset, and a few other parameters can be specified in the options string. For a complete list of options, see the MOAB metadata document.
- How do I get access to the CCMIO library, for writing Star-CD and -CCM+ meshes?
Contact Support@…, with a copy to Steve.Feldman@… and evolp@…, requesting libccmio-2.06.020 (if the domain names of the above emails are missing, they are supposed to be us dot cd-adapco dot com).
- How do I build MOAB to run in parallel?
Use the --with-mpi=<mpi_dir> option to the MOAB configure script to build in support for parallel functionality in MOAB. If your MPI installation is in system directory space (e.g. /usr/lib or /usr/local), you need only specify --with-mpi, and the configure script will automatically find the correct location of MPI. If you will be using true parallel read/write functions in MOAB, be sure to point configure at a parallel build of the HDF5 library (using the --with-hdf5 configure option). Note that applications built from a parallel MOAB build can often still be run without the MPI mpirun or mpiexec launchers. For more information on using MOAB in parallel mode, see the MOAB User Guide.
- How do I tell MOAB to read a mesh in parallel?
Make sure MOAB is built in parallel (see “How do I build MOAB to run in parallel?” above for instructions), and that this build includes support for parallel HDF5.
To test parallel reading and writing, you can use the mbconvert tool included in the MOAB build. This tool reads and writes any format enabled in MOAB, and allows the user to pass options that are passed through in the options string to MOAB’s load_mesh function. Options that control parallel read/write behavior include PARALLEL (the basic read/write strategy, e.g. true parallel read vs. read-and-delete of non-owned mesh), PARTITION (which set tag to use for partition information), PARALLEL_RESOLVE_SHARED_ENTS (resolve shared interfaces after the read), and PARALLEL_GHOSTS (exchange a specified number of layers of ghost entities between processors). For more details, see section 5 of the MOAB user guide.
You can use the HDF5 file MeshFiles/unittest/64bricks_512hex.h5m distributed with the MOAB source. The following command line tells MOAB to read that file on 2 processors, resolve shared entities between processors, and exchange 1 layer of ghost cells (in the PARALLEL_GHOSTS value 3.0.1, the fields are the ghost entity dimension, the bridge dimension, and the number of layers), then write the database to the new file dummy.h5m:
mpiexec -np 2 mbconvert -O PARALLEL=READ_PART -O PARTITION=MATERIAL_SET -O PARALLEL_RESOLVE_SHARED_ENTS -O PARALLEL_GHOSTS=3.0.1 -o PARALLEL=WRITE_PART MeshFiles/unittest/64bricks_512hex.h5m dummy.h5m
Parallel reading and writing has been verified on up to 16k processors of IBM BG/P so far, as well as various types of x86-based clusters, using both OpenMPI and MPICH.
- What partitioning tools can be run with MOAB mesh?
MOAB includes a tool that interfaces with the Zoltan load-balancing library, in the tools/mbzoltan subdirectory of the MOAB source. Since Zoltan can also serve as an interface to the ParMETIS, Jostle, and Scotch partitioning libraries, these libraries are supported as well. Compiling and linking MOAB’s partitioning tool (mbpart) requires the --enable-mbzoltan, --with-zoltan=<zoltan_dir>, and --with-parmetis=<metis_dir> configure options (once compiled, run ‘mbpart -h’ for options). I usually use the RIB partitioning method (mbpart -z RIB), since the graph-based option seems to often generate zero-element parts.
Once computed by Zoltan, the partition information can be written as entity sets, as tags on individual elements, or both. The advantage of writing the information to sets is two-fold: first, the storage cost for sets can be lower than for element-based tags, if the element handles have a fair degree of contiguity; and second, the sets provide an efficient mechanism for finding all elements in a given part, with no searching over elements required. The partitioned sets can be visualized in VisIt, using the “Subset” plot with the PARALLEL_PARTITION variable, or the “Subset” dialog in the Controls menu.
- How do I visualize a partitioned mesh?
Parallel mesh partitions can be computed using the mbzoltan tool in MOAB. Currently, viewing partition information requires saving the partition as tags on individual elements, saving the file to .h5m or .vtk, and importing the mesh into a visualization tool. MOAB files can be read directly into ParaView using the vtkMOABReader plugin packaged with MOAB, stored in the tools/vtkMOABReader subdirectory of the MOAB source code. MOAB meshes can also be read through the iMesh data reader in the VisIt tool. Currently, VisIt’s iMesh reader does not have very extensive support for set-based data, though work is underway to improve this aspect of the reader.
- Does MOAB implement the iMeshP interface?
- Parts on a process: iMeshP represents the parallelism in a mesh using the concept of a Part. Mesh sharing, ghosting, and interfaces between parallel pieces of the mesh are handled at the Part level, rather than the process level. In theory, this provides the option of supporting so-called “over-partitioning”, where a mesh is partitioned into more Parts than there are available processes. Two advantages are stated for this approach. First, load balancing can be performed by exchanging whole Parts. Second, Partitions can be computed with a maximum number of Parts and run on smaller numbers of processes, eliminating the need to compute and store multiple Partitions with a mesh. We believe these advantages are relatively small, especially given the cost of implementing them. Load balancing of whole Parts is clearly not enough to satisfy many applications; indeed, iMeshP provides lower-level mesh migration functions for this very reason. Furthermore, we believe the difficulty of supporting multiple Partitions, in the same base representation of the mesh or in multiple copies, is more an artifact of how many libraries store Partitions, namely as one file per Part. When Partitions and Parts are stored as entity sets with the mesh in a single file, multiple Partitions and Parts are simply another set of metadata in the file. At an implementation level, representing shared mesh on the basis of Parts inserts a level of indirection in determining whether the other shared copies are on-processor or off-processor, when that is really the reason shared mesh is relevant to applications. So, it is likely MOAB will continue to support only one Part per process. Applications wanting a finer partition of their data, e.g. to support FETI-type decompositions, can easily implement those capabilities in terms of entity sets.
- Decomposition and Partitions/Parts: Since only Parts and Partitions are used to represent the parallel aspects of a mesh, iMeshP must be used to create Partitions and Parts. Thus, there is no way to create or interact with Partitions and Parts unless the iMeshP interface has been instantiated, which for some implementations may require compilation under MPI. In MOAB, Partitions can be based on any covering of the mesh elements that distributes them among Parts; examples include material groupings, geometric model groupings, or true partition assignments, e.g. from Zoltan. Only after an application has loaded a mesh in parallel does MOAB consider a particular Partition the basis for parallel sharing and communication.
- Knowledge of sharing data: iMeshP requires only that a Part with a copy of an entity know the part ID and entity handle corresponding to the owner of the entity. That means that if the Part wants to communicate with any other Part sharing the entity, it must first communicate with the owning Part. In MOAB, if an entity is shared with another Part/process, the handles for the entity on all sharing Parts/processes are known. This allows direct communication between all Parts/processes sharing an entity. In practice, this results in fewer round-trip messages, decreasing communication costs in parallel applications.
- Ok, I built MOAB in parallel, now how can I get information on a parallel mesh?
- How do I run the portions of MOAB’s test suite that test parallel capabilities?
- How do I get entities on the geometric skin in a parallel mesh?
The ITAPS project developed a common parallel mesh interface named iMeshP. This interface uses an MPI-like approach to parallel mesh handling, with explicit treatment of message passing and distributed memory representation of a mesh. The iMeshP interface defines a model for partitioning the entities of a mesh among distinct processes. It describes the distribution of and relationships among entities on different processes. In the model, a partition is a high-level description of the distribution of mesh entities. A parallel communication abstraction is used to manage communication among entities and processes in a partition. A partition assigns entities to subsets called parts. The partition maps each part to a process such that each process may have zero, one, or many parts.
As of version 4.6, MOAB implements a large portion of the iMeshP interface. MOAB treats all parallel-relevant data in terms of the existing iMesh data model. That is, partitions and parts are implemented using entity sets, with tags used to identify them as partition or part sets. Likewise, parallel aspects of the mesh, such as whether an entity is shared among processors or is a ghost entity, are stored and retrieved in the form of tags. Convenience functions for accessing parallel mesh information are provided by the ParallelComm class in MOAB.
While considerable effort has been dedicated to making the implementation of iMeshP complete, it can never be entirely complete, due to the specific differences described above.
These differences are all that are known currently, though there are discussions underway regarding iMeshP that could result in further differences. If you have further questions, please send them to the MOAB mailing list (moab-dev _at_ mcs.anl.gov).