The COMET compiler consists of a Domain-Specific Language (DSL) for sparse and dense tensor algebra computations, a progressive lowering process that maps high-level operations to low-level architectural resources, a series of optimizations performed during lowering, and various IR dialects that represent key concepts, operations, and types at each level of the multi-level IR. At each level of the IR stack, COMET performs different optimizations and code transformations. Domain-specific, hardware-agnostic optimizations that rely on high-level semantic information are applied at the high-level IRs. These include reformulating high-level operations into forms amenable to execution on heterogeneous devices (e.g., rewriting tensor contractions as Transpose-Transpose-GEMM-Transpose, TTGT) and automatically parallelizing high-level primitives (e.g., tiling for thread- and task-level parallelism).
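As an illustration of the TTGT idea (a standard formulation of the technique, written here with generic index names rather than COMET's actual internal rewrite), a contraction such as

$$C_{c,a,b} = \sum_{k} A_{a,k,b}\,B_{k,c}$$

can be evaluated by permuting and flattening $A$ into an $(a \cdot b) \times k$ matrix, multiplying it against the $k \times c$ matrix $B$ with a single GEMM, and permuting the $(a \cdot b) \times c$ result back into the requested $c \times a \times b$ layout. Mapping the contraction onto one large GEMM is what makes it possible to reuse highly tuned dense matrix-multiplication kernels.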
Comprehensive documentation of the COMET compiler can be found here.
The following commands can be used to set up the COMET project:
- Install Dependencies. To install COMET and LLVM/MLIR, the following dependencies need to be installed (a quick way to check them is shown after this step):
  - C++ compiler toolchain as mentioned here

  1.a [Optional but recommended] Create a new Python environment:
$ export PYTHON_EXECUTABLE=$(which python3.x) # Replace 3.x with your version
$ ${PYTHON_EXECUTABLE} -m venv "comet"
$ source comet/bin/activate
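A minimal sanity check of the toolchain before building (this assumes CMake, Ninja, a clang or GCC C++ compiler, and Python 3 are the tools referred to above; adjust to whichever versions you installed):
$ cmake --version
$ ninja --version
$ clang++ --version # or: g++ --version
$ python3 --version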
- Get submodules required for COMET. COMET contains LLVM and BLIS as git submodules. The LLVM repo here includes staged changes to MLIR which may be necessary to support COMET; it also represents the version of LLVM that has been tested. MLIR is still changing relatively rapidly, so feel free to use the current version of LLVM, but APIs may have changed. BLIS is an award-winning portable software framework for instantiating high-performance BLAS-like dense linear algebra libraries. COMET generates calls to the BLIS microkernel after some optimizations.
$ git clone https://github.com/pnnl/COMET.git
$ export COMET_SRC=`pwd`/COMET
$ cd $COMET_SRC
$ git submodule init
$ git submodule update --depth=1 # --depth=1 requires git>=1.8.4
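To confirm that both submodules were fetched (a quick check; the commit hashes shown will depend on the COMET revision you checked out):
$ git submodule status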
- Build and test LLVM/MLIR:
$ export PYTHON_EXECUTABLE=$(which python3.x) # Replace 3.x with your version. Skip if already run in step 1.a
$ cd $COMET_SRC
$ mkdir llvm/build
$ cd llvm/build
# NVPTX is only required if targeting Nvidia GPUs
$ cmake -G Ninja ../llvm \
-DLLVM_ENABLE_PROJECTS="mlir;openmp;clang" \
-DLLVM_TARGETS_TO_BUILD="AArch64;X86;NVPTX" \
-DCMAKE_OSX_ARCHITECTURES="arm64" \
-DPython3_EXECUTABLE=${PYTHON_EXECUTABLE} \
-DLLVM_ENABLE_ASSERTIONS=ON \
-DCMAKE_BUILD_TYPE=Release
$ ninja
$ ninja check-mlir
- Apply BLIS patch to meet COMET requirements:
$ cd $COMET_SRC
$ patch -s -p0 < comet-blis.patch
- Build and test BLIS:
$ cd $COMET_SRC
$ cd blis
$ ./configure --prefix=$COMET_SRC/install --disable-shared auto
$ make [-j]
$ make check [-j]
$ make install [-j]
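As a quick check that the install step worked (assuming the default BLIS install layout under the prefix given above):
$ ls $COMET_SRC/install/lib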
- (If targeting GPUs) Patch Triton:
$ cd $COMET_SRC
$ cd triton
$ git apply ${COMET_SRC}/triton.patch
- Build and test COMET:
$ cd $COMET_SRC
$ mkdir build
$ cd build
# Omit -DENABLE_GPU_TARGET, -DCUDA_COMPUTE_CAPABILITY and -DTRITON_PATH
# if only targeting CPUs
# In -DCUDA_COMPUTE_CAPABILITY=70, replace 70 with the compute capability of your GPU
$ cmake -G Ninja .. \
-DMLIR_DIR=$PWD/../llvm/build/lib/cmake/mlir \
-DLLVM_DIR=$PWD/../llvm/build/lib/cmake/llvm \
-DENABLE_GPU_TARGET=ON \
-DCUDA_COMPUTE_CAPABILITY=70 \
-DTRITON_PATH=$PWD/../triton/ \
-DLLVM_ENABLE_ASSERTIONS=ON \
-DCMAKE_BUILD_TYPE=Release
$ ninja
$ ninja check-comet-integration # Run the integration tests.
The -DCMAKE_BUILD_TYPE=Debug flag enables debug information, which makes the whole tree compile slower but allows you to step through code into the LLVM and MLIR frameworks. To get something that runs fast, use -DCMAKE_BUILD_TYPE=Release, or -DCMAKE_BUILD_TYPE=RelWithDebInfo if you want to go fast and optionally want debug info to go with it. Release mode makes a very large difference in performance.
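For example, to reconfigure an existing COMET build directory as RelWithDebInfo (a sketch that reuses the flags from the step above; keep the GPU-related options only if you enabled them originally):
$ cd $COMET_SRC/build
$ cmake -G Ninja .. \
    -DMLIR_DIR=$PWD/../llvm/build/lib/cmake/mlir \
    -DLLVM_DIR=$PWD/../llvm/build/lib/cmake/llvm \
    -DLLVM_ENABLE_ASSERTIONS=ON \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo
$ ninja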
This project is licensed under the Simplified BSD License. See the LICENSE file and the DISCLAIMER file for more details.
Issues with COMET can be reported through GitHub. We will do our best to address reported issues in a timely manner. The community is also welcome to discuss remedies or share experience that may help resolve issues.
Contributions to COMET are welcome. The community can get involved by contributing new features, reporting bugs, and/or improving documentation. Please feel free to create a pull request on GitHub for code contributions. We will do our best to incorporate user requests in a timely manner.
We encourage you to use GitHub’s tracking system to report issues or to submit code contributions as mentioned above. For any other queries, please feel free to contact us via email:
- Gokcen Kestor (email: first-name.last-name@pnnl.gov), Pacific Northwest National Laboratory (PNNL), United States.
- Zhen Peng (email: first-name.last-name@pnnl.gov), Pacific Northwest National Laboratory, United States.
- Polykarpos Thomadakis (email: first-name.last-name@pnnl.gov), Pacific Northwest National Laboratory, United States.
- Ryan Friese (email: first-name.last-name@pnnl.gov), Pacific Northwest National Laboratory, United States.
If you use COMET in your research or work, please cite any of the following relevant papers:
- Erdal Mutlu, Ruiqin Tian, Bin Ren, Sriram Krishnamoorthy, Roberto Gioiosa, Jacques Pienaar & Gokcen Kestor, COMET: A Domain-Specific Compilation of High-Performance Computational Chemistry, In: Chapman, B., Moreira, J. (eds) Languages and Compilers for Parallel Computing, LCPC 2020, Lecture Notes in Computer Science, vol 13149, Springer, Cham. DOI and BIB.
@InProceedings{COMET:LCPC-20,
author={Mutlu, Erdal and Tian, Ruiqin and Ren, Bin and Krishnamoorthy, Sriram and Gioiosa, Roberto and Pienaar, Jacques and Kestor, Gokcen},
editor={Chapman, Barbara and Moreira, Jos{\'e}},
title={COMET: A Domain-Specific Compilation of High-Performance Computational Chemistry},
booktitle={Languages and Compilers for Parallel Computing},
year={2022},
publisher={Springer International Publishing},
address={Cham},
pages={87--103}
}
- Ruiqin Tian, Luanzheng Guo, Jiajia Li, Bin Ren & Gokcen Kestor, A High Performance Sparse Tensor Algebra Compiler in MLIR, In: IEEE/ACM 7th Workshop on the LLVM Compiler Infrastructure in HPC, LLVM-HPC 2021, November 14, 2021, St. Louis, MO, United States. DOI
@InProceedings{COMET:LLVM-HPC-2021,
author={Tian, Ruiqin and Guo, Luanzheng and Li, Jiajia and Ren, Bin and Kestor, Gokcen},
booktitle={2021 IEEE/ACM 7th Workshop on the LLVM Compiler Infrastructure in HPC (LLVM-HPC)},
title={A High Performance Sparse Tensor Algebra Compiler in MLIR},
year={2021},
pages={27--38},
doi={10.1109/LLVMHPC54804.2021.00009}
}
The COMET compiler is supported in part by the Data-Model Convergence (DMC) initiative at the Pacific Northwest National Laboratory.
This work is also supported in part by the High Performance Data Analytics (HPDA) program at the Pacific Northwest National Laboratory.
This work is also supported in part by the U.S. Department of Energy’s (DOE) Office of Advanced Scientific Computing Research (ASCR) as part of the Center for Artificial Intelligence-focused Architectures and Algorithms (ARIAA).