This repository has been archived by the owner on Feb 8, 2022. It is now read-only.

Test Cases ucvm2mesh

meihuisu edited this page May 29, 2019 · 19 revisions

ucvm2mesh

A general solution for creating a large mesh is to partition a target 3D mesh into a regular grid with smaller segments. The segments are then processed in a divide-and-conquer manner and the results are gathered into a single file.

UCVMC provides three mesh-building programs: ucvm2mesh (serial), ucvm2mesh_mpi, and ucvm2mesh_mpi_layer (both MPI-based). All three use the same configuration file, but the serial version ignores the settings that describe the cores.

The MPI versions use two inter-related files. The first is ucvm2mesh.conf, which is shared with the serial version of ucvm2mesh; the second is the job submission script that runs ucvm2mesh_mpi or ucvm2mesh_mpi_layer on a cluster. Examples of both are shown below:

ucvm2mesh.conf

A key constraint: the ucvm2mesh configuration file specifies the partitioning of the grid along the x, y, and z directions (as px, py, pz). The number of cells in each dimension must be divisible by the corresponding partition value. A process, or rank, is defined as one unit within this grid partition. For ucvm2mesh_mpi, the number of processes requested in the MPI job submission must equal the product of these three values (in the example below, 2 * 2 * 5 = 20). For ucvm2mesh_mpi_layer, the number of processes requested can be less than this product, as long as the product is divisible by the number of processes requested. If the job submission script requests a number of processes that does not satisfy these requirements, the program exits with a configuration error.
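These constraints can be checked before submitting a job. Below is a minimal sketch of the rules just described; the function name check_partition is hypothetical and not part of UCVMC:

```python
def check_partition(nx, ny, nz, px, py, pz, nprocs, layered=False):
    """Validate ucvm2mesh partition settings against a requested process count.

    Hypothetical helper illustrating the constraints; not part of UCVMC.
    """
    # Each cell dimension must be divisible by its partition value.
    if nx % px or ny % py or nz % pz:
        raise ValueError("nx, ny, nz must be divisible by px, py, pz")
    total_ranks = px * py * pz
    if layered:
        # ucvm2mesh_mpi_layer: fewer processes are allowed,
        # but the product must be divisible by the process count.
        if total_ranks % nprocs:
            raise ValueError("px*py*pz must be divisible by the process count")
    else:
        # ucvm2mesh_mpi: process count must equal the product exactly.
        if nprocs != total_ranks:
            raise ValueError("request exactly px*py*pz processes")
    return total_ranks

# The example configuration below: 2 * 2 * 5 = 20 processes.
print(check_partition(384, 248, 25, 2, 2, 5, 20))  # → 20
```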

The origin of the grid is the southwest corner, and the X axis is horizontal before rotation.

Example of ucvm2mesh.conf,

# List of CVMs to query
ucvmlist=cvmsi

# UCVM conf file
ucvmconf=/auto/scec-00/maechlin/ucvmc/conf/ucvm.conf

# Gridding cell centered or vertex (CENTER, VERTEX)
gridtype=CENTER

# Spacing of cells
spacing=2000.0

# Projection
proj=+proj=utm +datum=WGS84 +zone=11
rot=-40.0
x0=-122.3
y0=34.7835
z0=0.0

# Number of cells along each dim
nx=384
ny=248
nz=25

# Partitioning of grid among processors (request px*py*pz processes in mpi submit)
px=2
py=2
pz=5

# Vs/Vp minimum
vp_min=200
vs_min=200

# Mesh and grid files. Meshtype must be one of valid formats (IJK-12, IJK-20, IJK-32, SORD)
meshfile=/auto/scec-00/maechlin/ucvmc_mpi_testing/mesh_cvms5_sord_2000m_mpi.media
gridfile=/auto/scec-00/maechlin/ucvmc_mpi_testing/mesh_cvms5_sord_2000m_mpi.grid
meshtype=SORD

# Location of scratch dir
scratch=/staging/scratch
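The configuration file is plain key=value text with # comment lines, so it can be read generically. A minimal sketch (parse_conf is a hypothetical helper, not part of UCVMC); note that splitting on the first "=" only is needed because the proj value itself contains "=" characters:

```python
def parse_conf(text):
    """Parse ucvm2mesh-style key=value lines, skipping blanks and # comments.

    Hypothetical helper for illustration; not part of UCVMC.
    """
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split on the first "=" only, so values like
        # "+proj=utm +datum=WGS84 +zone=11" survive intact.
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

sample = """
# Partitioning of grid among processors
px=2
py=2
pz=5
proj=+proj=utm +datum=WGS84 +zone=11
"""
conf = parse_conf(sample)
print(int(conf["px"]) * int(conf["py"]) * int(conf["pz"]))  # → 20
```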

ucvm2mesh_mpi

The following job submission script was tested on USC HPC using the Slurm job manager.

ucvm2mesh.slurm

#!/bin/bash
##SBATCH -p scec
#SBATCH --ntasks=20 #number of tasks with one per processor
#SBATCH -N 10
#SBATCH --mem 0 # Set to unlimited memory
#SBATCH --time=02:30:00
#SBATCH -o ucvm2mesh_mpi_large_%A.out
#SBATCH -e ucvm2mesh_mpi_large_%A.err
#SBATCH --export=NONE
#SBATCH --mail-user=maechlin@usc.edu
#SBATCH --mail-type=END

cd /auto/scec-00/maechlin/ucvmc_mpi_testing
srun -v --mpi=pmi2 /auto/scec-00/maechlin/ucvmc/bin/ucvm2mesh_mpi -f /auto/scec-00/maechlin/ucvmc_mpi_testing/ucvm2mesh.conf

ucvm2mesh_mpi_layer

When a mesh is enormous and cannot be built within the wall-time limit set by the cluster administrator, mesh generation needs to be decomposed into more manageable pieces. A second MPI-based tool, ucvm2mesh_mpi_layer, allows an MPI process to take on multiple rank tasks (one after another), and also allows the user to define a region of interest (ROI) so that only the rank tasks bounded by that ROI are processed.

A layer is defined as the group of tasks that falls within one z-partition slice; i.e., for pz partitions there are pz layers, and each layer contains px * py rank tasks.

rank_per_layer = px * py
starting_rank_layer = 1
number_of_layers = 1

Newly added command-line parameters:

     -l starting_rank_layer
     -c number_of_layers

start_rank = (starting_rank_layer - 1) * rank_per_layer
end_rank = start_rank + (number_of_layers * rank_per_layer) - 1
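The formulas above can be checked with a small sketch (layer_rank_range is a hypothetical helper, not part of UCVMC), using the La Habra configuration shown below, where px = py = 18 and each job step runs 16 of the 64 layers:

```python
def layer_rank_range(px, py, starting_rank_layer, number_of_layers):
    """Rank range covered by a ucvm2mesh_mpi_layer run, per the formulas above.

    Hypothetical helper for illustration; not part of UCVMC.
    """
    rank_per_layer = px * py
    start_rank = (starting_rank_layer - 1) * rank_per_layer
    end_rank = start_rank + number_of_layers * rank_per_layer - 1
    return start_rank, end_rank

# La Habra example: px = py = 18, so 324 rank tasks per layer.
print(layer_rank_range(18, 18, 1, 16))   # → (0, 5183)
print(layer_rank_range(18, 18, 17, 16))  # → (5184, 10367)
```

With 648 processes per aprun call and 16 * 324 = 5184 rank tasks per call, each process handles 8 rank tasks in turn, satisfying the divisibility requirement described earlier.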

Example of the La Habra mesh's ucvm2mesh.conf on NCSA's Bluewater,

#List of CVMs to query
ucvmlist=cvmsi

# UCVM conf file
ucvmconf=/u/sciteam/meisu/scratch/UCVMC_TEST/ucvm.conf

# Gridding cell centered or vertex
gridtype=CENTER

# Spacing of cells
spacing=20.0

# Projection
proj=+proj=utm +datum=WGS84 +zone=11
rot=-39.9
x0=-119.288842
y0=34.120549
z0=0.0

# Number of cells along each dim
nx=9000
ny=6750
nz=3072

# Partitioning of grid among processors
px=18
py=18
pz=64

# Vs/Vp minimum
vp_min=0
vs_min=0

# Mesh and grid files, format
meshfile=/u/sciteam/meisu/scratch/UCVM_REVIEW/RESULT/bw_la_habra_mesh.media
gridfile=/u/sciteam/meisu/scratch/UCVM_REVIEW/RESULT/bw_la_habra_mesh.grid
meshtype=IJK-12

# Location of scratch dir
scratch=/u/sciteam/meisu/scratch

and the submit script is,

#!/bin/bash
#PBS -l walltime=24:00:00,nodes=81:ppn=8:xe
#PBS -A baln
#PBS -e /u/sciteam/meisu/scratch/UCVM_REVIEW/RESULT/bw_la_habra_mesh.err
#PBS -o /u/sciteam/meisu/scratch/UCVM_REVIEW/RESULT/bw_la_habra_mesh.out

export UCVM_INSTALL_PATH=/projects/sciteam/baln/meisu/TARGET_UCVMC/install
cp ${UCVM_INSTALL_PATH}/bin/ucvm2mesh_mpi_layer .

aprun -n 648 ./ucvm2mesh_mpi_layer -f bw_la_habra_mesh.conf -l 1  -c 16 
aprun -n 648 ./ucvm2mesh_mpi_layer -f bw_la_habra_mesh.conf -l 17 -c 16
aprun -n 648 ./ucvm2mesh_mpi_layer -f bw_la_habra_mesh.conf -l 33 -c 16
aprun -n 648 ./ucvm2mesh_mpi_layer -f bw_la_habra_mesh.conf -l 49 -c 16 

echo "Jobs done"
exit 0

Run Summary

A useful summary is produced when ucvm2mesh_mpi or ucvm2mesh_mpi_layer completes. The summary includes the maximum and minimum Vp, Vs, and density (rho), along with the grid point where each value was found.

With min Vp and min Vs set to 0 (ucvm2mesh.conf updated to set vp_min and vs_min to 0):
[0] Max Vp: 8530.947266 at
[0]	i,j,k : 237, 53, 14
[0] Max Vs: 5225.743652 at
[0]	i,j,k : 237, 54, 13
[0] Max Rho: 3174.164062 at
[0]	i,j,k : 119, 60, 15
[0] Min Vp: 283.636932 at
[0]	i,j,k : 204, 110, 0
[0] Min Vs: 138.720001 at
[0]	i,j,k : 194, 82, 0
[0] Min Rho: 1909.786255 at
[0]	i,j,k : 204, 110, 0
[0] Min Ratio: 1.414214 at
[0]	i,j,k : 125, 62, 0

Output file formats

Two output files are generated by ucvm2mesh, ucvm2mesh_mpi, and ucvm2mesh_mpi_layer: an output grid file and a mesh file (or media file).

grid file

The grid file contains the i,j indices and the corresponding lat/lon values of the mesh points.

The dump_grid.py utility reads in a grid file and prints the i,j indices and the lat/lon values:

./dumpgrid.py mesh_bbp1d_ijk12_2000m.grid 384 248

produces,

380, 247 : -112.506316, 33.825386
381, 247 : -112.490418, 33.813224
382, 247 : -112.474526, 33.801059
383, 247 : -112.458637, 33.788893
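A reader for the grid file can be sketched as follows, ASSUMING each record is a (lon, lat) pair of little-endian doubles stored in fast-X order; this layout is an assumption, not confirmed by the source, and read_grid is a hypothetical helper, so verify the record format against your UCVMC build before relying on it:

```python
import struct

def read_grid(path, nx, ny):
    """Read a grid file, ASSUMING each record is a (lon, lat) pair of
    little-endian doubles in fast-X order. Hypothetical helper; verify
    the layout against your UCVMC build before relying on it."""
    points = {}
    with open(path, "rb") as f:
        data = f.read()
    for idx, (lon, lat) in enumerate(struct.iter_unpack("<2d", data)):
        i, j = idx % nx, idx // nx  # fast-X: i varies fastest
        points[(i, j)] = (lon, lat)
    return points

# Round-trip demo with a tiny synthetic 2x2 grid file.
with open("demo.grid", "wb") as f:
    for lon, lat in [(-118.0, 34.0), (-117.9, 34.0),
                     (-118.0, 34.1), (-117.9, 34.1)]:
        f.write(struct.pack("<2d", lon, lat))
print(read_grid("demo.grid", 2, 2)[(1, 1)])  # → (-117.9, 34.1)
```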

media file

The .media file formats include IJK-12, IJK-20, IJK-32, and SORD. All mesh file formats are written in fast-X order, with x varying fastest, then y, then z (this is an assumption based on the grid file format). The formats differ in the contents stored for each mesh point. The size of each point is the size of the corresponding structure, and the structures contain different fields, as shown below:

typedef struct mesh_ijk12_t {
  float vp;
  float vs;
  float rho;
} mesh_ijk12_t;
typedef struct mesh_ijk20_t {
  float vp;
  float vs;
  float rho;
  float qp;
  float qs;
} mesh_ijk20_t;
typedef struct mesh_ijk32_t {
  int i;
  int j;
  int k;
  float vp;
  float vs;
  float rho;
  float qp;
  float qs;
} mesh_ijk32_t;
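The record sizes follow directly from the structs above (12, 20, and 32 bytes, since floats and ints are 4 bytes each), which gives the file offset of any point under the assumed fast-X ordering. A minimal sketch (point_offset is a hypothetical helper, not part of UCVMC):

```python
# Bytes per mesh point for each format, matching the structs above
# (3, 5, and 3+5 four-byte fields respectively).
RECORD_SIZE = {"IJK-12": 12, "IJK-20": 20, "IJK-32": 32}

def point_offset(i, j, k, nx, ny, meshtype):
    """File offset of point (i, j, k), assuming fast-X ordering
    (x varies fastest, then y, then z). Hypothetical helper."""
    index = (k * ny + j) * nx + i
    return index * RECORD_SIZE[meshtype]

# First point of the second z-slice in the 384 x 248 example mesh:
print(point_offset(0, 0, 1, 384, 248, "IJK-12"))  # → 1142784
```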

The SORD output format produces three files. Each file contains only one type of material property value and has no i,j,k values; it is written like this:

case MESH_FORMAT_SORD:
    for (i = 0; i < node_count; i++) {
      ptr1_sord[i].val = nodes[i].vp;
      ptr2_sord[i].val = nodes[i].vs;
      ptr3_sord[i].val = nodes[i].rho;
    }