Disclaimer

This is awp-odc-os's development wiki.

Remark: If you are looking for documentation on how to build and run awp-odc-os, this is probably the wrong place. Please find the user documentation at http://hpgeoc.github.io/awp-odc-os/doc/.

Repository Structure

Branches

The awp-odc-os repository structure is built around the two keywords release and development. release-versions are tested and stable. development-versions are branches of previous release-versions and include features planned for future releases. development-versions are experimental; use in production is discouraged. This structure results in a total of four default branches for awp-odc-os:

Tags

Tags are snapshots (git annotated tags) of the release branches master and gh-pages. They follow the nomenclature YYMM.V for master and YYMM.V-jekyll for gh-pages. YY is the year, e.g. 16, in which the tag was created and MM the respective month, e.g. 05. V is an increasing version number for bug fixes (no new features). The version tags always match: 1605.2-jekyll contains the pages for 1605.2. Examples are the two initial tags 1605.0 and 1605.0-jekyll.

This Wiki

This wiki is the developer wiki of awp-odc-os and is therefore kept in sync with the develop branch. Side effects:

New implementations to the develop branch

Receiver output writer

  • Implemented a new class ReceiverWriter in src/io/OutputWriter.hpp that extracts the velocity output at receivers, which are passed in via an input list.

  • Outputs are written to a temporary buffer which is flushed every 10 iterations (controlled by the m_buffSkip variable); a minimal sketch of this buffering scheme is shown after this list.

  • Namespace of this class is odc::io::ReceiverWriter.

  • An example of the input receiver-list format (txt file):

    2

    20000 20693 20000

    20000 25543 20000

    where 2 is the number of receivers and the following 2 lines contain the x, y, z coordinates of the receivers in meters. The z-coordinate is the depth of the receiver below the top free surface.

  • Two new arguments, INRCVR and OUTRCVR, were added to AWP-ODC-OS's command-line arguments to handle receiver output.

  • A sample run would look like:

mpiexec -n 1 bin/pmcl3d -X 480 -Y 480 -Z 460 -x 1 -y 1 --TMAX 5.0 --DH 100.0 --DT 0.001 --NVAR 5 --NSRC 1 --IFAULT 3 --MEDIASTART 4 --NTISKP 10 -c "path_to_checkpoint_file" --INSRC "path_to_input_source_file" --INVEL "path_to_input_mesh_file" --INRCVR "path_to_input_receiver_list" --OUTRCVR "path_to_output_receiver"
  • The parameter OUTRCVR specifies the prefix of each receiver output file. For example, if OUTRCVR is set to output/receiverOutput and 12 receivers are specified in the input receiver-list, the program writes 12 files in CSV format to the output folder (which must exist beforehand). Each file is suffixed with the receiver number, i.e. receiverOutput_1.csv and so on.
  • Receiver output is implemented only for the Vanilla and YASK kernels and runs in parallel with MPI+OpenMP. It has not been implemented for the GPU (CUDA) kernels.
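
The sketch below is a minimal, stand-alone illustration of the buffering scheme described above. It does not reproduce the actual odc::io::ReceiverWriter API; the class and member names (ReceiverSketch, readList, record, flush) are hypothetical. It reads a receiver list in the format shown above, buffers one velocity sample per receiver per time step, and appends the buffered samples to one CSV file per receiver every m_buffSkip steps.

    #include <cstddef>
    #include <fstream>
    #include <string>
    #include <vector>

    // hypothetical stand-in for odc::io::ReceiverWriter, for illustration only
    struct ReceiverSketch {
      struct Coord { double x,  y,  z;  };  // receiver position in meters
      struct Vel   { double vx, vy, vz; };  // one velocity sample at a receiver

      std::vector< Coord >              m_coords;        // receiver positions
      std::vector< std::vector< Vel > > m_buffer;        // buffered samples per receiver
      std::string                       m_prefix;        // e.g. "output/receiverOutput"
      int                               m_buffSkip = 10; // flush interval in time steps

      // parse the receiver list: first line is the count, then "x y z" per receiver
      bool readList( const std::string &i_path ) {
        std::ifstream l_in( i_path );
        std::size_t l_nRcvr = 0;
        if( !(l_in >> l_nRcvr) ) return false;
        m_coords.resize( l_nRcvr );
        m_buffer.resize( l_nRcvr );
        for( std::size_t l_r = 0; l_r < l_nRcvr; l_r++ )
          l_in >> m_coords[l_r].x >> m_coords[l_r].y >> m_coords[l_r].z;
        return static_cast< bool >( l_in );
      }

      // store one velocity sample per receiver; flush every m_buffSkip steps
      void record( int i_step, const std::vector< Vel > &i_vel ) {
        for( std::size_t l_r = 0; l_r < m_buffer.size(); l_r++ )
          m_buffer[l_r].push_back( i_vel[l_r] );
        if( (i_step + 1) % m_buffSkip == 0 ) flush();
      }

      // append buffered samples to <prefix>_<receiver number>.csv and clear the buffer
      void flush() {
        for( std::size_t l_r = 0; l_r < m_buffer.size(); l_r++ ) {
          std::ofstream l_out( m_prefix + "_" + std::to_string( l_r + 1 ) + ".csv",
                               std::ios::app );
          for( const Vel &l_v : m_buffer[l_r] )
            l_out << l_v.vx << "," << l_v.vy << "," << l_v.vz << "\n";
          m_buffer[l_r].clear();
        }
      }
    };

In this sketch the time loop would call record once per time step, after interpolating the velocity field to the receiver positions; the buffered values then end up in receiverOutput_1.csv, receiverOutput_2.csv, and so on, matching the OUTRCVR naming described above.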

Parallel Mesh reader

  • Important note on the coordinate system used internally (within AWP-ODC-OS):

    x-direction : towards the right

    y-direction : into the paper/screen

    z-direction : upwards

    origin : lower-left corner, towards the screen/paper

  • Implemented a parallel mesh reader for large meshes by splitting the mesh into separate smaller files.

  • This is useful for huge domains, where a single large mesh-file can cause issues with memory availability and run-time performance.

  • Refer to sismowine.md for further information on how to split the domain into separate mesh-files depending on the number of MPI ranks used.

  • For example, if we are using 4 MPI ranks across the x and y directions, we should have 4 mesh-files with the MPI rank as suffix, e.g. setup_mesh.bin.3, which is the mesh file read by MPI rank 3 (ranks start from 0); a minimal sketch of this naming scheme is shown after this list.

  • A new value (4) was added to the MEDIASTART command-line parameter of AWP, i.e. --MEDIASTART 4.

  • Pass the mesh-file name (without any MPI-rank suffix) to the INVEL option in order to use the parallel mesh reader, e.g. --INVEL path_to/setup_mesh.bin.

  • The parallel mesh reader has only been implemented for the Vanilla and YASK kernels.
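
The following is a minimal, self-contained sketch of the per-rank file naming described above; it is not the actual awp-odc-os mesh reader, and the variable names are hypothetical. Each MPI rank appends "." and its own rank number to the base name passed via --INVEL and reads only that file.

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>
    #include <vector>

    #include <mpi.h>

    int main( int i_argc, char **i_argv ) {
      MPI_Init( &i_argc, &i_argv );

      int l_rank = 0;
      MPI_Comm_rank( MPI_COMM_WORLD, &l_rank );

      // base name as passed to --INVEL, e.g. "path_to/setup_mesh.bin"
      std::string l_base = (i_argc > 1) ? i_argv[1] : "setup_mesh.bin";

      // every rank opens only its own file, e.g. setup_mesh.bin.3 for rank 3
      std::string l_file = l_base + "." + std::to_string( l_rank );
      std::ifstream l_in( l_file, std::ios::binary );
      if( !l_in ) {
        std::cerr << "rank " << l_rank << ": could not open " << l_file << std::endl;
        MPI_Abort( MPI_COMM_WORLD, 1 );
      }

      // read the rank-local bytes; the real reader interprets them as material parameters
      std::vector< char > l_bytes( ( std::istreambuf_iterator< char >( l_in ) ),
                                     std::istreambuf_iterator< char >() );
      std::cout << "rank " << l_rank << " read " << l_bytes.size()
                << " bytes from " << l_file << std::endl;

      MPI_Finalize();
      return 0;
    }

Compiled with an MPI wrapper (e.g. mpic++) and launched with mpiexec -n 4, each of the 4 ranks in the example above would read setup_mesh.bin.0 through setup_mesh.bin.3.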