OP2 port of MG-CFD. Provides MPI, full OpenMP, SIMD, and CUDA variants, and some pairings of them, with OpenACC and OpenMP 4.5 ports planned.

Dependencies:
- OP2
- HDF5 parallel
- ParMETIS
- PT-Scotch
- CUDA library, if using `mgcfd_cuda` or `mgcfd_mpi_cuda`
Build notes for each dependency:
- HDF5: configure with the `--enable-parallel` option, then standard compile.
- ParMETIS: standard compile.
- PT-Scotch: follow their build instructions. After linking `Makefile.inc`, edit it and remove the flag `-DSCOTCH_PTHREAD` from `CFLAGS` (see the sketch below). Then standard compile.
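The `CFLAGS` edit can be scripted; the following is only a sketch, assuming `Makefile.inc` has been linked into PT-Scotch's `src` directory and that `make ptscotch` is your usual build step:

```
# Sketch: strip -DSCOTCH_PTHREAD from CFLAGS in the linked Makefile.inc,
# then build PT-Scotch as usual. Paths and make target are assumptions.
$ cd /path/to/scotch/src
$ sed -i 's/-DSCOTCH_PTHREAD//' Makefile.inc
$ make ptscotch
```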
- OP2: several distinct libraries can be compiled, depending on the mix of parallelism and performance portability desired. The same is true of MG-CFD, so it is likely that you only need to compile a subset of the OP2 libraries. All MG-CFD variants need two particular OP2 libraries, created by executing these two OP2 make rules: `core` and `hdf5` (a sketch follows below). Variant-specific dependencies are listed in the next section.
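A rough sketch of building those two OP2 libraries is shown below; the environment variable names and the `op2/c` directory are assumptions about a typical OP2 checkout, so consult OP2's own documentation for the authoritative steps:

```
# Sketch only: point OP2 at the dependencies built above, then build the
# two libraries every MG-CFD variant needs. Variable names and paths are
# assumptions; adjust them to match your installation.
$ export HDF5_INSTALL_PATH=/path/to/hdf5
$ export PARMETIS_INSTALL_PATH=/path/to/parmetis
$ export PTSCOTCH_INSTALL_PATH=/path/to/pt-scotch
$ cd /path/to/OP2-Common/op2/c
$ make core hdf5
```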
Different binaries can be generated, depending on the mix of parallelism and performance portability desired:
| Intent | MG-CFD make rule | OP2 dependency make rule |
|---|---|---|
| Sequential | `seq` | `seq` |
| OpenMP | `openmp` | `openmp` |
| MPI | `mpi` | `mpi_seq` |
| MPI + OpenMP | `mpi_openmp` | `mpi_seq` |
| MPI + SIMD | `mpi_vec` | `mpi_seq` |
| CUDA | `cuda` | `cuda` |
| MPI + CUDA | `mpi_cuda` | `mpi_cuda` |
In the future, OpenACC and OpenMP 4.5 ports will also be available.
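As an example, building the MPI + OpenMP variant pairs the `mpi_openmp` MG-CFD rule with the `mpi_seq` OP2 rule from the table above. The sketch below assumes the directory layout used earlier; adjust the paths to your checkouts:

```
# Sketch: build the OP2 dependency library, then the matching MG-CFD binary.
# Make rule names come from the table above; directory paths are placeholders.
$ cd /path/to/OP2-Common/op2/c && make mpi_seq
$ cd /path/to/MG-CFD-app-OP2 && make mpi_openmp
```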
Want to execute immediately? Navigate to a folder containing input HDF5 files and execute:

```
$ ./path/to/mgcfd_* -i input.dat
```

MG-CFD has more command-line arguments to ease file/directory interaction and to control execution parameters. View the help page for more information:

```
$ ./path/to/mgcfd_* --help
```
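For example, a distributed run of the MPI variant might look like the sketch below; the binary name `mgcfd_mpi` and the rank count are assumptions, so substitute whichever variant you built:

```
# Sketch: run the MPI build on 4 ranks against an HDF5 input deck.
# The binary name mgcfd_mpi is an assumption based on the build table.
$ mpirun -np 4 ./path/to/mgcfd_mpi -i input.dat
```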
Batches of runs can be prepared and launched as follows:

- Prepare a json file detailing the run configuration. See `./run-inputs/annotated.json` for documentation on each option.
- Generate run batch scripts from the json file:

  ```
  $ python ./run-scripts/gen_job.py --json path/to/config.json
  ```

- The specified jobs directory will contain a subfolder for each run configuration, and a single `submit_all.sh` file. If a scheduler was specified in the json file, then `submit_all.sh` will compile locally, then submit each job to the scheduler for execution. If local execution was requested in the json file, then `submit_all.sh` will compile and execute locally.
- Each run will output .csv files containing performance data. These can be collated together using `aggregate-output-data.py` (an end-to-end sketch of this workflow follows the list):

  ```
  $ python ./run-scripts/aggregate-output-data.py \
      --output-dirpath path/to/desired-folder/for/collated-csv-files \
      --data-dirpath path/to/job-output
  ```
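Putting the above steps together, an end-to-end session might look like the following sketch (the config file name and jobs directory are placeholders, not prescribed by MG-CFD):

```
# Sketch of the batch workflow: generate jobs, submit/run them, then collate.
# my-config.json and the jobs directory path are placeholder names.
$ python ./run-scripts/gen_job.py --json ./run-inputs/my-config.json
$ cd /path/to/jobs-directory
$ ./submit_all.sh
# ...once all jobs have completed...
$ python ./run-scripts/aggregate-output-data.py \
    --output-dirpath /path/to/collated-csv-files \
    --data-dirpath /path/to/jobs-directory
```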
A release is provided that includes the Onera M6 wing. It consists of 300K nodes (930K edges), plus three additional multigrid meshes with respective node counts of 165K, 111K, and 81K.

Additional, larger meshes are available on request:
- Rotor 37, 1M nodes (multigrid)
- Rotor 37, 8M nodes (multigrid)
- Rotor 37, 25M nodes (multigrid)
- Rotor 37, 150M nodes (single level)
12/Jun/2019: added MPI + SIMD variant
Andrew Owenson: a.m.b.owenson@warwick.ac.uk
For more information on the design of MG-CFD, please refer to our publication: https://onlinelibrary.wiley.com/doi/10.1002/cpe.5443
If you wish to cite this work then please use the following:
- Owenson A.M.B., Wright S.A., Bunt R.A., Ho Y.K., Street M.J., and Jarvis S.A. (2019), An Unstructured CFD Mini-Application for the Performance Prediction of a Production CFD Code, Concurrency Computat: Pract Exper., 2019