MOAB HDF5 reader
Pseudo code for MOAB's parallel HDF5-based reader class ReadHDF5, starting from load_file.
set_up_read
- parse options, allocate buffer
- if bcastsummary && root:
- get file info (vert/elem/set/tag table structures, etc.)
- close file
- bcast size of file info
- bcast file info
- open file with set_fapl_mpio, set_dxpl_mpio(collective)
- if options given, set hyperslab selection limit / append behavior
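The open step above is the standard parallel-HDF5 idiom; a minimal sketch (illustrative names, not MOAB's actual code), assuming an MPI-enabled HDF5 build:

```cpp
#include <mpi.h>
#include <hdf5.h>

// Open a file for parallel reading and build a collective transfer property
// list, mirroring set_fapl_mpio / set_dxpl_mpio(collective) above.
hid_t open_parallel(const char* filename, MPI_Comm comm, hid_t& collective_dxpl)
{
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);      // file access property list
    H5Pset_fapl_mpio(fapl, comm, MPI_INFO_NULL);  // route file I/O through MPI-IO
    hid_t file = H5Fopen(filename, H5F_ACC_RDONLY, fapl);
    H5Pclose(fapl);

    collective_dxpl = H5Pcreate(H5P_DATASET_XFER);           // dataset transfer list
    H5Pset_dxpl_mpio(collective_dxpl, H5FD_MPIO_COLLECTIVE);  // all ranks read together
    return file;
}
```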
read_all_set_meta
- if root, mhdf_readSetMetaWithOpt (read the n x 4 set description table: end indices into the contents/parent/child tables plus set flags, for n sets)
- bcast set meta info
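The read-on-root/broadcast pattern used for the set meta (and for the file summary in set_up_read) is a plain MPI_Bcast of a buffer read by rank 0; a sketch with the actual table read behind a hypothetical helper:

```cpp
#include <mpi.h>
#include <vector>

// Hypothetical helper: root reads the n x 4 set description table
// (mhdf_readSetMetaWithOpt in the real reader).
void read_set_meta_from_file(long* buffer, long num_sets);

std::vector<long> bcast_set_meta(long num_sets, MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);

    std::vector<long> meta(num_sets * 4);  // 4 metadata columns per set
    if (rank == 0)
        read_set_meta_from_file(meta.data(), num_sets);
    MPI_Bcast(meta.data(), (int)meta.size(), MPI_LONG, 0, comm);
    return meta;
}
```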
load_file_partial
- get_subset_ids:
- if (values not specified): get_tagged_entities:
- for dense indices (entity type/#verts sequences with that tag), add all entities to file ids
- if sparse data:
- open sparse data table
- while (remaining) read chunks, store in file ids
- get_partition: filter file ids to just ones read on this proc
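get_partition amounts to each rank keeping a share of the requested file ids; a naive contiguous split for illustration (the real reader's distribution logic, e.g. by partition sets, is more involved):

```cpp
#include <cstddef>
#include <vector>

// Keep only this rank's contiguous slice of the (sorted) requested file ids.
std::vector<long> my_partition(const std::vector<long>& file_ids, int rank, int nprocs)
{
    std::size_t n     = file_ids.size();
    std::size_t begin = n * rank / nprocs;
    std::size_t end   = n * (rank + 1) / nprocs;
    return std::vector<long>(file_ids.begin() + begin, file_ids.begin() + end);
}
```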
- read_set_ids_recursive (sets):
- open set data table
- ReadHDF5Dataset.init(content_handle) (initialize window into dataset?)
- ReadHDF5Dataset.init(child_handle) (initialize window into dataset?)
- do: read_set_data while more children are read
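The recursion in read_set_ids_recursive is effectively a fixed-point loop: keep reading the contained/child set lists of newly discovered sets until no new set ids turn up. A sketch with the chunked table read behind a hypothetical callback:

```cpp
#include <functional>
#include <set>
#include <vector>

// Expand an initial group of set ids to its closure under containment/children.
std::set<long> set_closure(const std::set<long>& initial,
                           const std::function<std::vector<long>(long)>& read_child_sets)
{
    std::set<long> all = initial;
    std::vector<long> frontier(initial.begin(), initial.end());
    while (!frontier.empty()) {
        std::vector<long> next;
        for (long id : frontier)
            for (long child : read_child_sets(id))  // hypothetical chunked table read
                if (all.insert(child).second)       // true only for newly seen set ids
                    next.push_back(child);
        frontier.swap(next);
    }
    return all;
}
```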
- get_set_contents (sets):
- read_set_data (content), passing file ids out:
- construct range of offsets into data table for the set
- read contents in chunks determined by reader
- convert file ids to entity handles, add to set
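The chunked reads in read_set_data reduce to selecting a hyperslab of the 1-D contents table a buffer at a time; an illustrative sketch with plain HDF5 calls (the real reader goes through ReadHDF5Dataset):

```cpp
#include <hdf5.h>
#include <algorithm>
#include <vector>

// Read `count` entries of a 1-D table starting at `start`, in buffer-sized chunks.
void read_table_range(hid_t dset, hsize_t start, hsize_t count,
                      hsize_t buffer_entries, std::vector<long>& out)
{
    hid_t file_space = H5Dget_space(dset);
    std::vector<long> buffer;
    for (hsize_t done = 0; done < count; ) {
        hsize_t chunk  = std::min(buffer_entries, count - done);
        hsize_t offset = start + done;
        H5Sselect_hyperslab(file_space, H5S_SELECT_SET, &offset, NULL, &chunk, NULL);
        hid_t mem_space = H5Screate_simple(1, &chunk, NULL);
        buffer.resize(chunk);
        H5Dread(dset, H5T_NATIVE_LONG, mem_space, file_space, H5P_DEFAULT, buffer.data());
        H5Sclose(mem_space);
        out.insert(out.end(), buffer.begin(), buffer.end());  // file ids; converted to handles next
        done += chunk;
    }
    H5Sclose(file_space);
}
```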
- compute maximum element dimension, MPI_Allreduce to get the global maximum
- for all element tables, get polyhedra, read_elems
- for each element type/num_verts combination that’s not polyhedra type, read_elems:
- mhdf_openConnectivitySimple: open this connectivity table
- allocate entities, get ptr to connectivity
- set file ids, with chunk size determined by buffer size
- read in chunks
- if node_ids array passed to read_elems, also copy read node ids to passed array
- insert elem ids into map
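The map that "insert elem ids into map" refers to associates contiguous ranges of file ids with contiguous ranges of newly created handles; a small stand-in for the reader's range-based id map (RangeMap in MOAB):

```cpp
#include <map>
#include <utility>

// Maps contiguous file-id ranges to contiguous handle ranges.
struct IdMap {
    // key: first file id of a range -> (range length, first handle)
    std::map<long, std::pair<long, unsigned long>> ranges;

    void insert(long first_file_id, long count, unsigned long first_handle) {
        ranges[first_file_id] = std::make_pair(count, first_handle);
    }
    // Return the handle for one file id, or 0 if the id was not read.
    unsigned long find(long file_id) const {
        auto it = ranges.upper_bound(file_id);
        if (it == ranges.begin()) return 0;
        --it;
        long offset = file_id - it->first;
        return offset < it->second.first ? it->second.second + offset : 0;
    }
};
```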
- read_nodes (range of file ids for nodes):
- open node coordinates table
- allocate nodes, get ptrs to coords arrays
- if (blocked)
- for each dimension 0..dim-1:
- set column #, file ids, with chunk size determined by reader
- read in chunks
- else (interleaved, default)
- set file ids, with chunk size determined by buffer size
- read in chunks determined by buffer size, assign into (blocked) storage in MOAB
- insert node ids in id map
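The blocked branch reads one coordinate column at a time (a strided column hyperslab of the n x dim coordinate table), while the interleaved branch reads whole rows and scatters them into MOAB's blocked storage; a sketch of the column read with plain HDF5 calls, chunking omitted:

```cpp
#include <hdf5.h>

// Read one coordinate column (`column`) for rows [row, row+count) of the
// n x dim node coordinate table into a blocked coordinate array.
void read_coord_column(hid_t dset, int column, hsize_t row, hsize_t count,
                       double* coords_out)
{
    hid_t file_space  = H5Dget_space(dset);
    hsize_t start[2]  = { row, (hsize_t)column };
    hsize_t counts[2] = { count, 1 };
    H5Sselect_hyperslab(file_space, H5S_SELECT_SET, start, NULL, counts, NULL);

    hid_t mem_space = H5Screate_simple(1, &count, NULL);
    H5Dread(dset, H5T_NATIVE_DOUBLE, mem_space, file_space, H5P_DEFAULT, coords_out);
    H5Sclose(mem_space);
    H5Sclose(file_space);
}
```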
- for each elem sequence with dim between 1 and max_dim-1: read_node_adjacent_elems:
- while (remaining):
- read chunk of connectivity
- for each element, check whether all of its nodes were read; if not, mark its start vertex so the element is not created, else increment the number of entities to create
- create number of entities
- go back through connectivity, copying connectivity into new connect array
- insert created entities into output handle list
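The two-pass filtering above (count the creatable elements, then copy their connectivity) looks roughly like this in plain C++, assuming the read node file ids are available as a hash set (the real code marks the first vertex rather than keeping a separate flag array):

```cpp
#include <unordered_set>
#include <vector>

// Keep only elements whose nodes were all read; returns their packed connectivity.
std::vector<long> filter_adjacent_elems(const std::vector<long>& conn_chunk,
                                        int verts_per_elem,
                                        const std::unordered_set<long>& read_nodes)
{
    std::size_t num_elems = conn_chunk.size() / verts_per_elem;
    std::vector<char> create(num_elems, 1);
    std::size_t num_create = 0;

    // Pass 1: decide, per element, whether every node was read.
    for (std::size_t e = 0; e < num_elems; ++e) {
        for (int v = 0; v < verts_per_elem; ++v)
            if (!read_nodes.count(conn_chunk[e * verts_per_elem + v])) { create[e] = 0; break; }
        if (create[e]) ++num_create;
    }

    // Pass 2: copy connectivity of the elements that will be created.
    std::vector<long> out;
    out.reserve(num_create * verts_per_elem);
    for (std::size_t e = 0; e < num_elems; ++e)
        if (create[e])
            out.insert(out.end(), conn_chunk.begin() + e * verts_per_elem,
                       conn_chunk.begin() + (e + 1) * verts_per_elem);
    return out;
}
```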
- update_connectivity:
- convert all stored connectivity lists from file id to vertex handles
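update_connectivity then just walks every stored connectivity list and swaps each file id for a vertex handle via the id map; sketched with the lookup passed in as a callable:

```cpp
#include <cstddef>
#include <functional>

// Replace file ids with vertex handles in-place in one connectivity array.
// `lookup` is whatever file-id -> handle map the reader built (e.g. the IdMap
// sketch above); a zero return would mean an unresolved id.
void convert_connectivity(unsigned long* conn, std::size_t len,
                          const std::function<unsigned long(long)>& lookup)
{
    for (std::size_t i = 0; i < len; ++i)
        conn[i] = lookup((long)conn[i]);
}
```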
- read_adjacencies
- find_sets_containing (sets):
- scan set contents, using either read-bcast or collective read
- for each set, if contents ids intersects with file ids, add set id to file ids
- read_sets:
- create sets
- read_set_data (contents):
- construct range of offsets into data table for the set
- read contents in chunks determined by reader
- convert file ids to entity handles, add to set
- read_set_data (children)
- read_set_data (parents)
- for each tag, read_tag:
- create the tag
- if sparse, read_sparse_tag:
- read_sparse_tag_indices (read ids in chunks, convert to handles)
- else if var length, read_var_len_tag
- else, for each dense index (index=entity type + #verts):
- get ptr to node/set/element description in file info
- read_dense_tag:
- get entities in this table actually read
- create reader for this table, set file ids of items to read
- while !done:
- read data into buffer
- if handle type tag, convert to handles
- compute handles of entities read in this chunk
- set tag data for those entities
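The dense-tag loop is the same chunked-read pattern again, plus handle conversion for handle-type tags; a hedged sketch with the actual reads and the MOAB tag_set_data call behind hypothetical helpers:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical helpers standing in for the ReadHDF5Dataset chunk read and the
// moab::Interface::tag_set_data call.
bool read_next_chunk(std::vector<long>& values, std::vector<unsigned long>& handles);
void set_tag_on_entities(const std::vector<unsigned long>& handles, const void* data);

void read_dense_tag(bool is_handle_tag,
                    const std::function<unsigned long(long)>& file_id_to_handle)
{
    std::vector<long> values;            // one chunk of tag values
    std::vector<unsigned long> handles;  // handles of the entities in this chunk
    while (read_next_chunk(values, handles)) {             // while !done: read data into buffer
        if (is_handle_tag)
            for (std::size_t i = 0; i < values.size(); ++i)
                values[i] = (long)file_id_to_handle(values[i]);  // file id -> handle
        set_tag_on_entities(handles, values.data());        // set tag data for those entities
    }
}
```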