Merge branch 'main' into cmake
NathanielMcVicar committed Sep 18, 2023
2 parents 0c8ab1c + 5aca454 commit 289c1d6
Showing 539 changed files with 29,675 additions and 26,169 deletions.
21 changes: 14 additions & 7 deletions .azure-pipelines/Windows-CI.yml
@@ -11,7 +11,8 @@ parameters:

jobs:
- job: Build_onnx_mlir_Windows
timeoutInMinutes: 240
# 4h timeout is sometimes a tiny bit short when llvm-project is rebuilt
timeoutInMinutes: 270
pool:
vmImage: 'windows-2019'

@@ -142,14 +143,20 @@ jobs:
- script: |
call "%ProgramFiles(x86)%\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x64
call onnx-mlir\utils\check-onnx-numerical.cmd
displayName: Run onnx-mlir numerical tests
call onnx-mlir\utils\check-docs.cmd
displayName: Run onnx-mlir doc tests
workingDirectory: $(Agent.BuildDirectory)
env:
CTEST_PARALLEL_LEVEL: ${{ parameters.CTEST_PARALLEL_LEVEL }}
- script: |
call "%ProgramFiles(x86)%\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x64
call onnx-mlir\utils\check-docs.cmd
displayName: Run onnx-mlir doc tests
call onnx-mlir\utils\check-unittest.cmd
displayName: Run onnx-mlir unit tests
workingDirectory: $(Agent.BuildDirectory)
- script: |
call "%ProgramFiles(x86)%\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x64
call onnx-mlir\utils\check-onnx-numerical.cmd
displayName: Run onnx-mlir numerical tests
workingDirectory: $(Agent.BuildDirectory)
env:
CTEST_PARALLEL_LEVEL: ${{ parameters.CTEST_PARALLEL_LEVEL }}
16 changes: 11 additions & 5 deletions .github/workflows/macos-amd64-build.yml
@@ -6,7 +6,7 @@ jobs:
build:
runs-on: macos-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
submodules: recursive
- uses: actions/setup-python@v4
@@ -24,12 +24,10 @@ jobs:
sh ~/work/onnx-mlir/onnx-mlir/utils/install-protobuf.sh
- name: cache MLIR directory
id: cache-mlir
uses: actions/cache@v2
env:
cache-name: cache-mlir-directory
uses: actions/cache@v3
with:
path: ~/work/onnx-mlir/llvm-project
key: V8-${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/clone-mlir.sh', '**/build-mlir.sh') }}
key: ${{ runner.os }}-mlir-${{ hashFiles('**/clone-mlir.sh', '**/build-mlir.sh') }}
- name: clone & build MLIR
if: steps.cache-mlir.outputs.cache-hit != 'true'
run: |
@@ -50,6 +48,14 @@ jobs:
run: |
cd ~/work/onnx-mlir
sh ~/work/onnx-mlir/onnx-mlir/utils/install-onnx-mlir.sh
- name: build and run docs/doc_example tests
run: |
cd ~/work/onnx-mlir
sh ~/work/onnx-mlir/onnx-mlir/utils/check-doc-example.sh
- name: build and run unit tests
run: |
cd ~/work/onnx-mlir
sh ~/work/onnx-mlir/onnx-mlir/utils/check-unittest.sh
- name: run onnx-mlir backend and numerical tests
run: |
cd ~/work/onnx-mlir
7 changes: 6 additions & 1 deletion CHANGELOG.md
@@ -11,10 +11,15 @@ Releases are extensively tested, including the following steps.

# Current releases

## Prerelease 0.4.1
## Prerelease 0.4.2

< current main branch >

## Prerelease 0.4.1

This prerelease was cut from a commit on August 26th, 2023.
There are no security issues that we know of.

## Prerelease 0.4.0

This prerelease was cut on March 24th, 2023.
2 changes: 1 addition & 1 deletion VERSION_NUMBER
@@ -1 +1 @@
0.4.1
0.4.2
17 changes: 10 additions & 7 deletions docker/Dockerfile.llvm-project
@@ -28,8 +28,8 @@ RUN distro=$(cat /etc/os-release|grep -Po '(?<=^ID=").*(?=")|(?<=^ID=)[^"].*[^"]'
ln -sf /usr/share/zoneinfo/${TZ} /etc/localtime && \
dpkg-reconfigure -f noninteractive tzdata && \
apt-get install -qq -y --no-install-recommends \
autoconf automake ca-certificates cmake cppcheck curl \
default-jdk-headless gcc g++ git libncurses-dev \
autoconf automake ca-certificates clang cmake cppcheck \
curl default-jdk-headless gcc g++ git libncurses-dev \
libtool make maven ninja-build openjdk-11-jdk-headless \
python3 python3-dev python3-distutils python3-numpy \
python3-pip python3-pytest-xdist python3-setuptools \
@@ -38,16 +38,17 @@ RUN distro=$(cat /etc/os-release|grep -Po '(?<=^ID=").*(?=")|(?<=^ID=)[^"].*[^"]'
ln -sf /usr/bin/pytest-3 /usr/bin/pytest; \
elif [ "${distro}" = "rhel" ] || [ "${distro}" = "fedora" ]; then \
ln -sf /usr/share/zoneinfo/${TZ} /etc/localtime && \
([ -x /usr/bin/microdnf ] && microdnf install -y yum) && \
([ -x /usr/bin/microdnf ] && microdnf install -y yum) && \
RHEL_VERSION=$(grep CPE_NAME /etc/os-release | cut -d':' -f5) && \
yum install -q -y \
https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
https://dl.fedoraproject.org/pub/epel/epel-release-latest-${RHEL_VERSION}.noarch.rpm && \
yum update -q -y && \
yum install -q -y \
autoconf automake ca-certificates cmake diffutils \
autoconf automake ca-certificates clang cmake diffutils \
file java-11-openjdk-devel java-11-openjdk-headless \
gcc gcc-c++ git libtool make ncurses-devel \
python39 python39-devel python39-numpy python39-pip \
python39-setuptools python39-wheel zlib-devel && \
python39-setuptools python39-wheel tzdata-java zlib-devel && \
# Use same versions as those in ubuntu:jammy
pip3 install -q \
Cython pytest==6.2.5 pytest-forked==1.4.0 \
@@ -83,7 +84,9 @@ RUN git clone -n https://github.com/llvm/llvm-project.git \
&& cd llvm-project \
&& git checkout ${LLVM_PROJECT_SHA1} \
&& mkdir -p build && cd build \
&& cmake -G Ninja ../llvm \
# Build with clang since gcc on ppc64le doesn't support __builtin_thread_pointer
&& CC=clang CXX=clang++ \
cmake -G Ninja ../llvm \
-DLLVM_ENABLE_PROJECTS=mlir \
-DLLVM_TARGETS_TO_BUILD="host" \
-DCMAKE_BUILD_TYPE=Release \
21 changes: 15 additions & 6 deletions docker/Dockerfile.onnx-mlir
@@ -25,6 +25,7 @@ RUN ONNX_ROOT=${WORK_DIR}/onnx-mlir/third_party/onnx \

ARG NPROC=4
ARG ACCEL=NNPA
ARG TEST_NOFLOAT16
ARG TEST_MCPU
ARG KEEPSRC

@@ -39,24 +40,32 @@ RUN LLVM_PROJECT_ROOT=${WORK_DIR}/llvm-project \
&& rm -rf build && mkdir -p build && cd build \
# NNPA accelerator is built on all archs to enable lit tests
# (dependent libzdnn is built on s390x only)
&& cmake -DMLIR_DIR=${LLVM_PROJECT_ROOT}/build/lib/cmake/mlir \
&& CC=clang CXX=clang++ \
cmake -DMLIR_DIR=${LLVM_PROJECT_ROOT}/build/lib/cmake/mlir \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_MESSAGE=NEVER \
-DONNX_MLIR_ACCELERATORS=${ACCEL} .. \
&& make -j${NPROC} \
&& make -j${NPROC} LIT_OPTS=-v check-onnx-lit \
# FLOAT16 backend tests only work on the ppc64le platform at the moment
&& TEST_NOFLOAT16=${TEST_NOFLOAT16:-$([ "$(uname -m)" = "s390x" ] && echo true || \
([ "$(uname -m)" = "x86_64" ] && echo true || \
([ "$(uname -m)" = "ppc64le" ] && echo || echo)))} \
# User image is built with SIMD (currently on s390x only)
&& TEST_MCPU=${TEST_MCPU:-$([ "$(uname -m)" = "s390x" ] && echo z14 || \
[ "$(uname -m)" = "x86_64" ] && echo || \
[ "$(uname -m)" = "ppc64le" ] && echo || echo)} \
([ "$(uname -m)" = "x86_64" ] && echo || \
([ "$(uname -m)" = "ppc64le" ] && echo || echo)))} \
&& TEST_ARGS="-mcpu=${TEST_MCPU}" \
&& make check-docs \
&& make check-unittest \
&& make check-multiple-models \
&& make NPROC=${NPROC} \
CTEST_PARALLEL_LEVEL=${NPROC} \
TEST_NOFLOAT16=${TEST_NOFLOAT16} \
TEST_MCPU=${TEST_MCPU} \
TEST_ARGS="${TEST_ARGS}" \
-j${NPROC} \
check-onnx-backend-numerical \
&& make check-docs \
&& make -j${NPROC} install \
&& echo -e "/usr/local/lib" > \
/etc/ld.so.conf.d/onnx-mlir.conf && ldconfig \
@@ -72,8 +81,8 @@ RUN LLVM_PROJECT_ROOT=${WORK_DIR}/llvm-project \
pip3 uninstall -q -y Cython pybind11 pytest pytest-forked \
pytest-xdist typing-extensions && \
yum remove -q -y \
adwaita-icon-theme autoconf automake cmake diffutils file \
git libtool make python39 && \
adwaita-icon-theme autoconf automake cmake file \
git libtool python39 && \
rm -rf /var/cache/dnf/* /usr/local/bin/ninja; \
fi \
&& rm -rf /tmp/* /usr/bin/python \
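
The nested `$([ ... ] && echo ... || ...)` defaulting idiom in the Dockerfile hunks above is dense; here is a minimal standalone sketch of the same pattern, with the architecture hard-coded in place of `$(uname -m)` purely for illustration:

```shell
# Stand-in for "$(uname -m)"; on a real build this would be the host arch.
arch=s390x
# Default TEST_NOFLOAT16 to "true" on s390x and x86_64, and leave it empty
# on ppc64le, unless the caller already exported a value.
TEST_NOFLOAT16=${TEST_NOFLOAT16:-$([ "$arch" = "s390x" ] && echo true || \
    ([ "$arch" = "x86_64" ] && echo true || echo))}
echo "TEST_NOFLOAT16=${TEST_NOFLOAT16}"   # → TEST_NOFLOAT16=true
```

The `${VAR:-default}` form keeps any value the user already set, while the chained subshells select a per-architecture default.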
17 changes: 13 additions & 4 deletions docker/Dockerfile.onnx-mlir-dev
@@ -19,6 +19,7 @@ RUN ONNX_ROOT=${WORK_DIR}/onnx-mlir/third_party/onnx \

ARG NPROC=4
ARG ACCEL=NNPA
ARG TEST_NOFLOAT16
ARG TEST_MCPU

RUN LLVM_PROJECT_ROOT=${WORK_DIR}/llvm-project \
@@ -28,7 +29,8 @@ RUN LLVM_PROJECT_ROOT=${WORK_DIR}/llvm-project \
&& rm -rf build && mkdir -p build && cd build \
# NNPA accelerator is built on all archs to enable lit tests
# (dependent libzdnn is built on s390x only)
&& cmake -DMLIR_DIR=${LLVM_PROJECT_ROOT}/build/lib/cmake/mlir \
&& CC=clang CXX=clang++ \
cmake -DMLIR_DIR=${LLVM_PROJECT_ROOT}/build/lib/cmake/mlir \
-DCMAKE_BUILD_TYPE=Debug \
-DONNX_MLIR_TEST_OPTLEVEL=0 \
-DONNX_MLIR_ACCELERATORS=${ACCEL} .. \
@@ -44,20 +46,27 @@ RUN LLVM_PROJECT_ROOT=${WORK_DIR}/llvm-project \
fi \
&& make -j${NPROC} \
&& make -j${NPROC} LIT_OPTS=-v check-onnx-lit \
# FLOAT16 backend tests only work on the ppc64le platform at the moment
&& TEST_NOFLOAT16=${TEST_NOFLOAT16:-$([ "$(uname -m)" = "s390x" ] && echo true || \
([ "$(uname -m)" = "x86_64" ] && echo true || \
([ "$(uname -m)" = "ppc64le" ] && echo || echo)))} \
# Dev image is built without SIMD, placeholder for easy SIMD enablement
&& TEST_MCPU=$([ "$(uname -m)" = "s390x" ] && echo || \
[ "$(uname -m)" = "x86_64" ] && echo || \
[ "$(uname -m)" = "ppc64le" ] && echo || echo) \
([ "$(uname -m)" = "x86_64" ] && echo || \
([ "$(uname -m)" = "ppc64le" ] && echo || echo))) \
&& TEST_ARGS="-mcpu=${TEST_MCPU}" \
&& TEST_OPTLEVEL=0 \
&& make check-docs \
&& make check-unittest \
&& make check-multiple-models \
&& make NPROC=${NPROC} \
CTEST_PARALLEL_LEVEL=${NPROC} \
TEST_NOFLOAT16=${TEST_NOFLOAT16} \
TEST_MCPU=${TEST_MCPU} \
TEST_ARGS="${TEST_ARGS}" \
TEST_OPTLEVEL=${TEST_OPTLEVEL} \
-j${NPROC} \
check-onnx-backend-numerical \
&& make check-docs \
&& rm -f Debug/bin/*Test Debug/bin/Perf* Debug/bin/Test* \
# When building for push event to publish the image, unshallow and
# rename origin to upstream to make the repo a bit more dev friendly.
9 changes: 3 additions & 6 deletions docs/AddCustomAccelerators.md
@@ -50,9 +50,6 @@ We provide a base class [onnx_mlir::accel::Accelerator](../src/Accelerators/Acce
// Hooks for onnx-mlir driver
//===--------------------------------------------------------------------===//

/// Load the MLIR dialects necessary to generate code for an accelerator.
virtual void getOrLoadDialects(mlir::MLIRContext &context) const = 0;

/// Add the transformations necessary to support the accelerator.
virtual void addPasses(mlir::OwningOpRef<mlir::ModuleOp> &module,
mlir::PassManager &pm,
@@ -65,9 +62,9 @@ virtual void addPasses(mlir::OwningOpRef<mlir::ModuleOp> &module,
/// Register the MLIR dialects required to support an accelerator.
virtual void registerDialects(mlir::DialectRegistry &registry) const = 0;

/// Initialize the transformation passes required to generate code for an
/// accelerator.
virtual void initPasses(int optLevel) const = 0;
/// Register accelerator transformation passes to make available as
/// command line options.
virtual void registerPasses(int optLevel) const = 0;

//===--------------------------------------------------------------------===//
// Hooks for onnx-to-krnl pass
32 changes: 28 additions & 4 deletions docs/BuildOnLinuxOSX.md
@@ -4,7 +4,8 @@

We provide here directions to install ONNX-MLIR on Linux and OSX.
On Mac, there are a couple of commands that are different.
These differences will be listed in the explanation below, when relevant. Installing ONNX-MLIR on Apple silicon is natively supported; we recommend using Homebrew to manage the prerequisites.
These differences will be listed in the explanation below, when relevant. Installing ONNX-MLIR on Apple silicon is natively supported and it is recommended to use brew to manage prerequisites.


## MLIR

@@ -14,7 +15,7 @@ Firstly, install MLIR (as a part of LLVM-Project):
``` bash
git clone -n https://github.com/llvm/llvm-project.git
# Check out a specific branch that is known to work with ONNX-MLIR.
cd llvm-project && git checkout 6cf7fe4a9a715bcdf3f4913753109e22dfc9940b && cd ..
cd llvm-project && git checkout 4acc3ffbb0af5631bc7916aeff3570f448899647 && cd ..
```

[same-as-file]: <> (utils/build-mlir.sh)
Expand All @@ -41,7 +42,7 @@ The `MLIR_DIR` cmake variable must be set before building onnx-mlir. It should p

This project uses lit ([LLVM's Integrated Tester](https://llvm.org/docs/CommandGuide/lit.html)) for unit tests. When running cmake, we can also specify the path to the lit tool from LLVM using the `LLVM_EXTERNAL_LIT` variable, but it is not required as long as `MLIR_DIR` points to a build directory of llvm-project. If `MLIR_DIR` points to an install directory of llvm-project, `LLVM_EXTERNAL_LIT` is required.

To build ONNX-MLIR, use the following commands:
To build ONNX-MLIR, use the following commands (optionally with the additional `-DCMAKE_CXX_FLAGS` argument described [below](#enable-cpu-optimizations)):

[same-as-file]: <> ({"ref": "utils/install-onnx-mlir.sh", "skip-doc": 2})
```bash
Expand Down Expand Up @@ -75,9 +76,16 @@ The environment variable `$pythonLocation` may be used to specify the base direc

After the above commands succeed, an `onnx-mlir` executable should appear in the `Debug/bin` or `Release/bin` directory.

### Enable CPU Optimizations

To make the compiler run faster (without any effect on the generated code),
you can pass `-DCMAKE_CXX_FLAGS=-march=native` to the `cmake -G Ninja ..` configuration step above to exploit all the features of your local CPU, at the expense of portability. Alternatively, you can enable a specific CPU feature, e.g. `-DCMAKE_CXX_FLAGS=-mf16c` to enable the F16C feature for native conversion between float16 and (32-bit) float. It is used in `src/Support/SmallFP.hpp`.
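
As a sketch of the configure step with such flags (the `MLIR_DIR` path is a placeholder, and the final command is echoed rather than run):

```shell
# Pick flags for compiling onnx-mlir itself; generated ONNX code is unaffected.
#   -march=native : all local CPU features, at the expense of portability
#   -mf16c        : just the F16C float16<->float conversion (SmallFP.hpp)
EXTRA_CXX_FLAGS="-march=native"
# Placeholder path; point MLIR_DIR at your own llvm-project build directory.
MLIR_DIR=${MLIR_DIR:-$HOME/llvm-project/build/lib/cmake/mlir}
echo cmake -G Ninja \
  -DCMAKE_CXX_FLAGS="${EXTRA_CXX_FLAGS}" \
  -DMLIR_DIR="${MLIR_DIR}" ..
```

Drop the `echo` to perform the actual configure from your build directory.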

### Known MacOS Issues

There is a known issue when building onnx-mlir. If you see a error of this sorts:
#### jsoniter issue

There is a known issue when building onnx-mlir. If you see an error of this sort:

``` shell
Cloning into '/home/user/onnx-mlir/build/src/Runtime/jni/jsoniter'...
@@ -91,6 +99,22 @@ make: *** [Makefile:146: all] Error 2.

The suggested workaround until jsoniter is fixed is as follows: install maven (e.g. `brew install maven`) and run `alias nproc="sysctl -n hw.logicalcpu"` in your shell.

#### Protobuf issue (Mac M1, specific to protobuf 3.20.3 which is currently required)

On Mac M1, you may have some issues building protobuf. In particular, you may fail to install onnx (via `pip install -e third_party/onnx`) or you may fail to compile `onnx-mlir` (no arm64 symbol for `InternalMetadata::~InternalMetadata`).

The first failure is likely caused by having multiple versions of protobuf installed.
Installing protobuf with `brew` was not helpful: version 3.20.3 has a known bug that can be corrected with the patch below.
Uninstall the brew version and make sure you install the right one with pip: `pip install protobuf==3.20.3`.

The second failure can be remediated by downloading the protobuf source code, applying a patch, and installing it on the local machine.
See [Dockerfile.llvm-project](../docker/Dockerfile.llvm-project) on line 66 for cloning instructions. After cloning the right version, download and apply this [patch](https://github.com/protocolbuffers/protobuf/commit/0574167d92a232cb8f5a9107aabda0aefbc39e8b).
Then follow the steps in the [Dockerfile.llvm-project](../docker/Dockerfile.llvm-project) file (the `ldconfig` step can be skipped without consequences).
You may have to install a couple of the tools with brew; see the `yum install` list in the `Dockerfile.llvm-project` file above.
You should then be able to successfully install protobuf and compile `onnx-mlir`.
As the dependencies between `third_party` and `onnx-mlir` might cause issues, it is always safe to delete the `third_party` directory, reinstall it using `git submodule update --init --recursive`, reinstall `onnx`, delete `onnx-mlir/build`, and rebuild `onnx-mlir` from scratch.
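
The recovery sequence above can be sketched as a dry run. The commit hash is the patch linked above; the repository tag, build steps, and tool availability are assumptions about a typical setup:

```shell
run() { echo "+ $*"; }   # dry-run helper: print each command instead of executing it
PATCH_COMMIT=0574167d92a232cb8f5a9107aabda0aefbc39e8b

run pip uninstall -y protobuf          # clear conflicting copies first
run git clone -b v3.20.3 --recursive https://github.com/protocolbuffers/protobuf.git
run curl -LO "https://github.com/protocolbuffers/protobuf/commit/${PATCH_COMMIT}.patch"
run git -C protobuf apply "../${PATCH_COMMIT}.patch"
# Assumed autotools build/install, mirroring Dockerfile.llvm-project
run sh -c 'cd protobuf && ./autogen.sh && ./configure && make -j4 && make install'
run pip install protobuf==3.20.3       # matching Python package
```

Redefine `run() { "$@"; }` to execute the steps for real once you have reviewed them.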


### Trouble shooting build issues

Check this [page](TestingHighLevel.md) for helpful hints.
2 changes: 1 addition & 1 deletion docs/BuildOnWindows.md
@@ -52,7 +52,7 @@ Install MLIR (as a part of LLVM-Project):
```shell
git clone -n https://github.com/llvm/llvm-project.git
# Check out a specific branch that is known to work with ONNX-MLIR.
cd llvm-project && git checkout 6cf7fe4a9a715bcdf3f4913753109e22dfc9940b && cd ..
cd llvm-project && git checkout 4acc3ffbb0af5631bc7916aeff3570f448899647 && cd ..
```

[same-as-file]: <> (utils/build-mlir.cmd)
1 change: 1 addition & 0 deletions docs/ConstPropagationPass.md
@@ -73,6 +73,7 @@ class Pattern<
dag sourcePattern,
list<dag> resultPatterns,
list<dag> additionalConstraints = [],
list<dag> supplementalPatterns = [],
dag benefitsAdded = (addBenefit 0)
>;
```
6 changes: 6 additions & 0 deletions docs/DebuggingNumericalError.md
@@ -62,6 +62,12 @@ optional arguments:
--upper-bound UPPER_BOUND Upper bound values for each data type. Used to generate random inputs. E.g. --upper-bound=int64:10,float32:0.2,uint8:9. Supported types are bool, uint8, int8, uint16, int16, uint32, int32, uint64, int64, float16, float32, float64
```

## Helper script to compare a model under two distinct compile options

Based on the above `utils/runONNXModel.py`, the `utils/checkONNXModel.py` script allows a user to run a given model twice, under two distinct compile options, and compare the results.
This lets a user simply test a new option by comparing a safe version of the compiler (e.g. `-O0` or `-O3`) with a more advanced version (e.g. `-O3` or `-O3 -march=x86-64`). Simply specify the compile options using the `--ref-compile-args` and `--test-compile-args` flags, a model using the `--model` flag, and possibly a `--shape-info` in the presence of dynamic shape inputs.
Full options are listed under the `--help` flag.
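
For instance, a hypothetical side-by-side check of `-O0` against `-O3 -march=x86-64` (the model file name is a placeholder, and the command is only assembled and echoed here, not executed):

```shell
MODEL=model.onnx   # placeholder; substitute your own ONNX model
CMD="python utils/checkONNXModel.py --model ${MODEL} \
--ref-compile-args='-O0' --test-compile-args='-O3 -march=x86-64'"
echo "$CMD"
```

Consult `--help` for the exact flag syntax before running against a real model.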

## Debugging the Code Generated for an Operator.

If you know, or suspect, that a particular ONNX MLIR operator produces an incorrect result, and want to narrow down the problem, we provide a couple of useful Krnl operators that allow printing (at runtime) the value of a tensor, or a value that has a primitive data type.