
[aarch64] Implement QGEMM kernels with UMMLA/SMMLA instructions #17160

Merged
merged 3 commits into microsoft:main on Oct 23, 2023

Conversation

snadampal
Contributor

@snadampal snadampal commented Aug 15, 2023

Description

This PR adds UMMLA- and SMMLA-based QGEMM kernels for aarch64. This covers:
(i) symmetric quantization (zero point is zero)
(ii) asymmetric quantization (zero point is non-zero)
(iii) per-channel as well as per-tensor quantization
(iv) signed weights (U8S8 GEMM)
(v) unsigned weights (U8U8 GEMM), and
(vi) signed activations and weights (S8S8 GEMM) scenarios
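
For context, a scalar reference of the computation these kernels accelerate is sketched below (illustrative only, not the MLAS code; it shows the U8S8 case with an optional per-column weight zero point, which is where the symmetric/asymmetric and per-tensor/per-channel distinctions above show up):

```cpp
// Scalar reference for the quantized GEMM these kernels implement (U8S8 shown):
//   C[m][n] = sum_k (A[m][k] - zpA) * (B[k][n] - zpB[n])
// "Symmetric" means the zero points are 0, so the correction terms vanish;
// "per-channel" means zpB (and the scale, not shown) varies per output column.
#include <cstddef>
#include <cstdint>
#include <vector>

void QGemmReference(const uint8_t* A, const int8_t* B, int32_t* C,
                    size_t M, size_t N, size_t K,
                    int32_t zpA, const std::vector<int32_t>& zpB /* size 1 or N */) {
    for (size_t m = 0; m < M; ++m) {
        for (size_t n = 0; n < N; ++n) {
            const int32_t zb = (zpB.size() == 1) ? zpB[0] : zpB[n];
            int32_t acc = 0;
            for (size_t k = 0; k < K; ++k) {
                acc += (static_cast<int32_t>(A[m * K + k]) - zpA) *
                       (static_cast<int32_t>(B[k * N + n]) - zb);
            }
            C[m * N + n] = acc;
        }
    }
}
```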

I've enabled the UMMLA/SMMLA kernels based on a cpuinfo check for I8MM support: the MMLA QGEMM kernels are used on all devices that support the I8MM instructions.
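
For reference, a minimal sketch of what such a runtime check can look like on Linux/aarch64 is below; the actual gating in this PR goes through ORT's cpuinfo-based platform initialization, and the fallback HWCAP2_I8MM value is only a convenience for older headers:

```cpp
// Minimal sketch of a runtime I8MM check on Linux/aarch64 (illustration only;
// the real gating in ORT goes through its cpuinfo-based platform detection).
#if defined(__linux__) && defined(__aarch64__)
#include <sys/auxv.h>   // getauxval, AT_HWCAP2
#include <asm/hwcap.h>  // HWCAP2_* feature bits
#endif

#ifndef HWCAP2_I8MM
#define HWCAP2_I8MM (1UL << 13)  // fallback for older headers (assumed value)
#endif

static bool CpuHasI8mm() {
#if defined(__linux__) && defined(__aarch64__)
    return (getauxval(AT_HWCAP2) & HWCAP2_I8MM) != 0;
#else
    return false;  // non-Linux / non-aarch64: assume not available
#endif
}
```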

Motivation and Context

This is to improve INT8 quantized MatMul performance on the aarch64 platform.
I have run the benchmarking script below (BERT, RoBERTa, and GPT-2 model inference) on an AWS Graviton3 based c7g.4xl instance and observed up to 1.33x performance improvement compared to the optimized UDOT QGEMM kernel performance.

cd onnxruntime/python/tools/transformers
python3 benchmark.py

I have also run the unit tests and made sure they all pass:

./build.sh --config RelWithDebInfo --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync 

@snadampal snadampal requested a review from a team as a code owner August 15, 2023 13:28
@snadampal snadampal changed the title [aarch64] Implement QGEMM kernel with UMMLA instructions [aarch64] Implement QGEMM kernel with UMMLA/SMMLA instructions Aug 16, 2023
@snadampal snadampal changed the title [aarch64] Implement QGEMM kernel with UMMLA/SMMLA instructions [aarch64] Implement QGEMM kernels with UMMLA/SMMLA instructions Aug 16, 2023
@snadampal
Contributor Author

Hi @snnn, would you be able to review and provide feedback on this PR? I appreciate your time.

@milpuz01
Contributor

I have checked out the changes and run performance and accuracy tests with and without the flag using onnxruntime_perf_test (with the binary modified to dump outputs for comparison) on AWS Graviton3 instances, and I can confirm that the performance uptick is as expected and the accuracy is unchanged.

Contributor

@chenfucn chenfucn left a comment


Windows assembly files are here:
https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/core/mlas/lib/arm64

Also, please make sure the MLAS unit tests pass when your new kernels are enabled.

onnxruntime/core/mlas/lib/platform.cpp (review thread: outdated, resolved)
@snadampal
Contributor Author

Hi @chenfucn, @skottmckay, I have addressed your feedback and tested it on Linux. Could you please help trigger the CI runs so that I can get the Windows results and fix any issues that come up? Thank you!

@snadampal
Contributor Author

Hi @chenfucn, as you know, the MMLA kernels are an alternative to the DOT kernels, so I have reused the existing tests and covered all the features that the QGEMM kernels support.

@chenfucn
Contributor

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, Linux QNN CI Pipeline, MacOS CI Pipeline

@chenfucn
Contributor

/azp run ONNX Runtime Web CI Pipeline, Windows ARM64 QNN CI Pipeline, Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed

@azure-pipelines

Azure Pipelines successfully started running 7 pipeline(s).

@azure-pipelines

Azure Pipelines successfully started running 9 pipeline(s).

chenfucn
chenfucn previously approved these changes Oct 13, 2023
@chenfucn
Contributor

Hi @skottmckay, this would increase binary size. Should we allow an exception, or exclude this from the minimal build?

@snadampal
Contributor Author

Thanks for approving the PR, @chenfucn.
Hi @skottmckay, the performance gains are substantial enough (up to 1.33x at the use-case level) to consider this feature even for a minimal build configuration. Please let me know the next steps to get it merged. Thank you!

@skottmckay
Contributor

Hi @skottmckay, this would increase binary size. Should we allow an exception, or exclude this from the minimal build?

I think the benefit justifies any binary size increase, so it's fine to include this in a minimal build.

@skottmckay
Contributor

Thanks for approving the PR, @chenfucn. Hi @skottmckay, the performance gains are substantial enough (up to 1.33x at the use-case level) to consider this feature even for a minimal build configuration. Please let me know the next steps to get it merged. Thank you!

Need to resolve the build errors from the minimal builds. They're not due to any binary size checks.

e.g.

FAILED: CMakeFiles/onnxruntime_mlas.dir/onnxruntime_src/onnxruntime/core/mlas/lib/aarch64/QgemmU8X8KernelUmmla.S.o 
/ndk_home/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -target aarch64-none-linux-android29 --sysroot=/ndk_home/toolchains/llvm/prebuilt/linux-x86_64/sysroot -DCPUINFO_SUPPORTED_PLATFORM=1 -DDISABLE_FLOAT8_TYPES -DDISABLE_ML_OPS -DEIGEN_MPL2_ONLY -DEIGEN_USE_THREADS -DJSON_NOEXCEPTION -DMLAS_NO_EXCEPTION -DNSYNC_ATOMIC_CPP11 -DONNX_NO_EXCEPTIONS -DORT_EXTENDED_MINIMAL_BUILD -DORT_MINIMAL_BUILD -DORT_NO_EXCEPTIONS -DORT_NO_RTTI -DPLATFORM_POSIX -DUSE_NNAPI=1 -D_GNU_SOURCE -I/build/7/MinSizeRel/_deps/utf8_range-src -I/onnxruntime_src/include/onnxruntime -I/onnxruntime_src/include/onnxruntime/core/session -I/build/7/MinSizeRel/_deps/pytorch_cpuinfo-src/include -I/build/7/MinSizeRel/_deps/google_nsync-src/public -I/build/7/MinSizeRel -I/onnxruntime_src/onnxruntime -I/build/7/MinSizeRel/_deps/abseil_cpp-src -I/onnxruntime_src/onnxruntime/core/mlas/inc -I/onnxruntime_src/onnxruntime/core/mlas/lib -I/build/7/MinSizeRel/_deps/gsl-src/include -g -DANDROID -fdata-sections -ffunction-sections -funwind-tables -fstack-protector-strong -no-canonical-prefixes -D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security  -Os -DNDEBUG -fPIC -Wall -Wextra -Wno-unused-parameter -Werror  -march=armv8.2-a+i8mm -MD -MT CMakeFiles/onnxruntime_mlas.dir/onnxruntime_src/onnxruntime/core/mlas/lib/aarch64/QgemmU8X8KernelUmmla.S.o -MF CMakeFiles/onnxruntime_mlas.dir/onnxruntime_src/onnxruntime/core/mlas/lib/aarch64/QgemmU8X8KernelUmmla.S.o.d -o CMakeFiles/onnxruntime_mlas.dir/onnxruntime_src/onnxruntime/core/mlas/lib/aarch64/QgemmU8X8KernelUmmla.S.o -c /onnxruntime_src/onnxruntime/core/mlas/lib/aarch64/QgemmU8X8KernelUmmla.S
<instantiation>:45:18: error: invalid operand for instruction
        mov x27, v18.2D[0]
                 ^
<instantiation>:2:9: note: while in macro instantiation
        OutputRow12Element Zero,x2,x13,16,17,18,19,(8 == 1)
        ^
<instantiation>:25:9: note: while in macro instantiation
        OutputBlock Zero,12,8
        ^
<instantiation>:34:9: note: while in macro instantiation
        ProcessRows Zero,8
        ^
FAILED: CMakeFiles/onnxruntime_mlas.dir/onnxruntime_src/onnxruntime/core/mlas/lib/qgemm_kernel_ummla.cpp.o 
/ndk_home/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++ --target=aarch64-none-linux-android29 --sysroot=/ndk_home/toolchains/llvm/prebuilt/linux-x86_64/sysroot -DCPUINFO_SUPPORTED_PLATFORM=1 -DDISABLE_FLOAT8_TYPES -DDISABLE_ML_OPS -DEIGEN_MPL2_ONLY -DEIGEN_USE_THREADS -DJSON_NOEXCEPTION -DMLAS_NO_EXCEPTION -DNSYNC_ATOMIC_CPP11 -DONNX_NO_EXCEPTIONS -DORT_EXTENDED_MINIMAL_BUILD -DORT_MINIMAL_BUILD -DORT_NO_EXCEPTIONS -DORT_NO_RTTI -DPLATFORM_POSIX -DUSE_NNAPI=1 -D_GNU_SOURCE -I/build/7/MinSizeRel/_deps/utf8_range-src -I/onnxruntime_src/include/onnxruntime -I/onnxruntime_src/include/onnxruntime/core/session -I/build/7/MinSizeRel/_deps/pytorch_cpuinfo-src/include -I/build/7/MinSizeRel/_deps/google_nsync-src/public -I/build/7/MinSizeRel -I/onnxruntime_src/onnxruntime -I/build/7/MinSizeRel/_deps/abseil_cpp-src -I/onnxruntime_src/onnxruntime/core/mlas/inc -I/onnxruntime_src/onnxruntime/core/mlas/lib -I/build/7/MinSizeRel/_deps/gsl-src/include -g -DANDROID -fdata-sections -ffunction-sections -funwind-tables -fstack-protector-strong -no-canonical-prefixes -D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security   -ffunction-sections -fdata-sections -fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables -DCPUINFO_SUPPORTED -Os -DNDEBUG -std=gnu++17 -fPIC -fno-rtti -Wall -Wextra -Wno-unused-parameter -Wno-deprecated-copy -Wno-tautological-pointer-compare -Wno-ambiguous-reversed-operator -Wno-deprecated-anon-enum-enum-conversion -Wno-undefined-var-template -Wshorten-64-to-32 -Werror -MD -MT CMakeFiles/onnxruntime_mlas.dir/onnxruntime_src/onnxruntime/core/mlas/lib/qgemm_kernel_ummla.cpp.o -MF CMakeFiles/onnxruntime_mlas.dir/onnxruntime_src/onnxruntime/core/mlas/lib/qgemm_kernel_ummla.cpp.o.d -o CMakeFiles/onnxruntime_mlas.dir/onnxruntime_src/onnxruntime/core/mlas/lib/qgemm_kernel_ummla.cpp.o -c /onnxruntime_src/onnxruntime/core/mlas/lib/qgemm_kernel_ummla.cpp
/onnxruntime_src/onnxruntime/core/mlas/lib/qgemm_kernel_ummla.cpp:796:17: error: implicit conversion loses integer precision: 'size_t' (aka 'unsigned long') to 'int' [-Werror,-Wshorten-64-to-32]
        int k = CountK;

and the lint warnings

https://github.com/microsoft/onnxruntime/actions/runs/6501885026/job/17685114761?pr=17160
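
For reference, the second failure is Clang's 64-to-32-bit narrowing diagnostic, which is typically resolved along the lines of the sketch below (the names are stand-ins, not the actual MLAS code); the assembler error is likely just vector-element operand syntax that Clang's integrated assembler is stricter about than GNU as:

```cpp
// Sketch of the usual ways to satisfy -Wshorten-64-to-32; CountK stands in for
// the kernel's size_t argument, and this is not the actual MLAS code.
#include <cstddef>

void PackLoopExample(size_t CountK) {
    int k_explicit = static_cast<int>(CountK);  // make the narrowing explicit
    size_t k_wide = CountK;                     // or keep the counter in the wide type
    (void)k_explicit;
    (void)k_wide;
}
```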

@snadampal
Contributor Author

snadampal commented Oct 19, 2023

Thanks, @skottmckay. I had tried the minimal build on a Linux system with native compilation and didn't observe any issues. Could you please share the steps to reproduce the above build errors? It looks like it was a cross-compilation on x86.
BTW, I used the following command from this page: https://onnxruntime.ai/docs/build/custom.html#linux

<ONNX Runtime repository root>/build.sh \
  --config=Release \
  --build_shared_lib \
  --minimal_build \
  --disable_ml_ops --disable_exceptions --disable_rtti \
  --include_ops_by_config <config file from model conversion> --enable_reduced_operator_type_support \
  --skip_tests

I will fix the lint warnings.

@skottmckay
Contributor

It's not actually the minimal build aspect that's failing. It seems to affect any Android build, which uses the Android NDK (and therefore clang) to build.

Are the new kernels expected to build for Android when the target architecture is arm64-v8a?

@snadampal
Contributor Author

I don't see a reason why they wouldn't; I will give it a try. Are these the instructions for Android cross-compilation? https://onnxruntime.ai/docs/build/android.html#cross-compiling-on-linux-and-macos

@skottmckay
Contributor

skottmckay commented Oct 19, 2023

Yeah. Unfortunately it requires installing the Android SDK and an NDK. The quickest and simplest option is to install Android Studio and point to the SDK/NDK in that install. The API level doesn't matter too much; I used 29 in my build. I'd use the latest NDK as well (26.0).

@snadampal
Contributor Author

I was able to set up the Android SDK/NDK using sdkmanager from the command-line tools and reproduced the compilation failures. I will update the PR with the fixes.

@skottmckay
Contributor

Awesome! Thank you.

FWIW XNNPACK has some i8mm kernels. Not sure if those could be used as a reference to see if there's any difference in how they are implemented in general.

@snadampal
Contributor Author

Thanks, @skottmckay. I was able to fix the Android clang build failures; the issue was only trivial instruction syntax errors. Next, I'm checking which linter and formatting tools are used so I can fix the cpplint warnings.

@skottmckay
Contributor

skottmckay commented Oct 20, 2023

https://github.com/microsoft/onnxruntime/blob/main/docs/Coding_Conventions_and_Standards.md#linting should have details on setting up the linting tools.

I would typically run lintrunner -a to fix lint errors once everything is set up.

The UMMLA QGEMM kernels are invoked if the hardware has support for I8MM instructions.
The SMMLA QGEMM kernels are invoked if the hardware has support for I8MM instructions.
@snadampal
Contributor Author

Hi @skottmckay, thanks! I have pushed the updated commits and verified them with (1) Linux native compilation for libraries and wheels, (2) a Linux native minimal build, and (3) an Android NDK build cross-compiled on x86.

Regarding the cpplint warnings, it looks like similar warnings exist in several existing files as well, and most of them have a low confidence score, so I didn't address some of them. But I have run the following formatting tools and made sure there are no errors or warnings:

lintrunner -a
git-clang-format

@skottmckay
Contributor

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

@skottmckay
Contributor

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-python-checks-ci-pipeline,onnxruntime-binary-size-checks-ci-pipeline

@azure-pipelines

Azure Pipelines successfully started running 8 pipeline(s).

@azure-pipelines

Azure Pipelines successfully started running 10 pipeline(s).

@skottmckay
Contributor

I think you can ignore the cpplint warnings for mlas files. That library was always special-cased when it came to formatting rules.

@snadampal
Contributor Author

The Windows_CI (CUDA and TVM) build failures don't seem to be related to this PR. Is it already a known issue, or do I need to look into it?

@skottmckay
Contributor

Unrelated and not required builds. Please ignore.

@snadampal
Contributor Author

Thank you. Please approve if there is no other feedback.

@snadampal
Contributor Author

Hi @chenfucn, thanks for the approval. I'd appreciate it if you could merge it.

@skottmckay skottmckay merged commit 780ee18 into microsoft:main Oct 23, 2023
61 of 64 checks passed
@snadampal
Contributor Author

Thanks for merging the PR, @skottmckay!

@snnn
Member

snnn commented Oct 23, 2023

@skottmckay , "post merge" pipeline found an issue from this PR:

https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=1179709&view=results

@snadampal
Contributor Author

It looks like an iOS build failure; I'm not sure how it's different from the CI build.

@skottmckay
Contributor

skottmckay commented Oct 24, 2023

@chenfucn Do the new kernels need to be treated the same as the special-cased fp16 ones (done in this PR) because a custom -march value is required?

For iOS the A11 chip was the first ARMv8.2-a model. That was in iPhone 8, 8+ and X which were released in 2017. https://en.wikipedia.org/wiki/Apple_A11

Building for iOS seems to primarily be based on the minimum iOS version to support, which makes it hard to exclude devices older than iPhone 8/X without greatly limiting the supported iOS version. Limiting to iOS 16 would do that, but it's the second most recent release. https://iosref.com/ios

If we only call the kernel after checking at runtime that the CPU supports i8mm, is it a problem to build it with armv8.2-a specific flags?
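
For what it's worth, the pattern under discussion usually looks like the sketch below (the names are illustrative, not the actual MLAS symbols): the MMLA translation units get the extra -march=armv8.2-a+i8mm flags at build time, but the kernels are only reachable through a dispatch pointer chosen after the runtime I8MM check, so older cores never execute the new instructions. The open question is only whether the iOS toolchain accepts those flags for the affected files at all.

```cpp
// Sketch of build-time flags + runtime dispatch (names are illustrative, not
// the actual MLAS symbols). The *Ummla stub stands in for code compiled with
// -march=armv8.2-a+i8mm; it is only reachable when the CPU reports I8MM.
#include <cstddef>
#include <cstdint>

using QGemmKernelFn = void (*)(const uint8_t* A, const uint8_t* B, int32_t* C,
                               size_t M, size_t N, size_t K);

void QGemmKernelUdot(const uint8_t*, const uint8_t*, int32_t*, size_t, size_t, size_t) {
    // baseline dot-product kernel (stub)
}
void QGemmKernelUmmla(const uint8_t*, const uint8_t*, int32_t*, size_t, size_t, size_t) {
    // I8MM (UMMLA/SMMLA) kernel (stub)
}

bool CpuHasI8mm() {
    return false;  // stub: substitute the HWCAP2/cpuinfo-based check sketched earlier
}

// Chosen once during platform initialization; every caller goes through the
// selected pointer, so CPUs without I8MM never execute the MMLA code path.
QGemmKernelFn SelectQGemmKernel() {
    return CpuHasI8mm() ? QGemmKernelUmmla : QGemmKernelUdot;
}
```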

@skottmckay
Contributor

This didn't show up in the CIs as we only ran the required ones, which did not include iOS builds. We should probably make that required so things like this don't get missed in the future.

@snadampal
Contributor Author

I realized that I had the sources under generic aarch64 but the ISA compiler flag settings only under (NOT APPLE). I'm not sure whether it would have compiled if I had set the I8MM flags for Apple builds as well.
Anyway, based on the fp16 CR, it looks like it wouldn't compile, so I have raised this PR to exclude the MMLA kernels from Apple builds:
#18082

snnn added a commit that referenced this pull request Oct 25, 2023
@snadampal
Contributor Author

Hi @snnn, could you please check #18082 before reverting the MMLA kernels?

wejoncy added a commit that referenced this pull request Oct 26, 2023
kleiti pushed a commit to kleiti/onnxruntime that referenced this pull request Mar 22, 2024