
[NNAdapter][QualcommQNN] Supports ERNIE nano fully quantized model #9618

Merged

Commits on Oct 31, 2022

  1. [NNAdapter][QualcommQNN] Fix matmul+add+softmax for ernie int8 model

    Change-Id: I49034a6a432717b917bd1eb2993e5e6b33d8fe5c
    hong19860320 committed Oct 31, 2022 · 942c97c

Commits on Nov 3, 2022

  1. fix int8

    Change-Id: Ia9dca6e3a504a5e6f1ab877a4c3c10b5edc795c7
    hong19860320 committed Nov 3, 2022 · 849fc2f

Commits on Nov 6, 2022

  1. Dump the variables without outscale test=huawei_ascend_npu

    Change-Id: If4cb513e3b55095107c6c1c302b2696475ad9916
    hong19860320 committed Nov 6, 2022 · 6d1caab
  2. Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle-Lite into hongming/fix_ernie_int8_qnn
    
    test=huawei_ascend_npu
    hong19860320 committed Nov 6, 2022 · 73d475b