x86 optimization for gemm int8 #5763
Merged
Conversation
Member
nihui commented on Oct 31, 2024 (edited)
- sse2 madd
- sse4.1 cvt epi16
- 64bit 16 registers
- xop maddd
- avx pack8
- avx2 madd
- avx2 gather
- avx512 pack16 + 32 registers
- avx512 scatter
- avx512 vnni
- avx vnni
- avx vnni int8 (depends on avx vnni int8, avx vnni int16, avx ne convert infrastructure #5749)
- avx10.1 + scatter + 32 registers (TODO infrastructure)
- avx10.2 + avx512 vnni int8 (TODO infrastructure)
- opt pack a
- opt pack at
- opt pack b
- opt pack bt
- opt unpack out
- opt unpack aligned load + cvt ps
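As a rough illustration of what the sse4.1 cvt epi16 and sse2 madd items above amount to, here is a minimal standalone sketch (not code from this PR): each int8 operand is sign-extended to int16, and _mm_madd_epi16 folds the pairwise products into int32 partial sums. The avx2 madd item is the same idea on 256-bit registers, and the avx512 vnni / avx vnni items fuse the multiply and the accumulate into a single dot-product instruction.

```cpp
// Minimal sketch of the sign-extend + madd int8 dot product (assumes SSE4.1).
// Not taken from this PR; n is assumed to be a multiple of 8 for brevity.
#include <smmintrin.h>
#include <stdint.h>

static int32_t dot_s8_madd(const int8_t* a, const int8_t* b, int n)
{
    __m128i acc = _mm_setzero_si128(); // 4 x int32 accumulators
    for (int i = 0; i + 8 <= n; i += 8)
    {
        __m128i va = _mm_loadl_epi64((const __m128i*)(a + i)); // 8 x int8
        __m128i vb = _mm_loadl_epi64((const __m128i*)(b + i));
        __m128i wa = _mm_cvtepi8_epi16(va); // sse4.1: sign-extend to 8 x int16
        __m128i wb = _mm_cvtepi8_epi16(vb);
        // sse2: multiply 16-bit pairs and add adjacent products -> 4 x int32
        acc = _mm_add_epi32(acc, _mm_madd_epi16(wa, wb));
    }
    int32_t tmp[4];
    _mm_storeu_si128((__m128i*)tmp, acc);
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}
```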
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

@@            Coverage Diff             @@
##           master    #5763      +/-   ##
==========================================
+ Coverage   94.93%   95.08%    +0.14%
==========================================
  Files         820      824        +4
  Lines      267315   276713     +9398
==========================================
+ Hits       253778   263111     +9333
- Misses      13537    13602       +65

☔ View full report in Codecov by Sentry.
nihui changed the title from [WIP] x86 optimization for gemm int8 to x86 optimization for gemm int8 on Dec 16, 2024
A simple gemm int8 benchmark test:

#include "benchmark.h"
#include "layer.h"
#include <stdio.h>
#include <vector>
// RandomMat is assumed to come from ncnn's test utilities (tests/testutil.h)

static void benchmark_gemm_int8(int M, int N, int K)
{
    ncnn::Mat A = RandomMat(K, M);
    ncnn::Mat BT = RandomMat(K, N);

    ncnn::ParamDict pd;
    pd.set(0, 1.f); // alpha
    pd.set(1, 1.f); // beta
    pd.set(2, 0);   // transA
    pd.set(3, 1);   // transB
    pd.set(4, 0);   // constantA
    pd.set(5, 0);   // constantB
    pd.set(6, 1);   // constantC
    pd.set(7, M);
    pd.set(8, N);
    pd.set(9, K);
    pd.set(10, -1); // broadcast_type_C
    pd.set(11, 0);  // output_N1M
    pd.set(13, 0);  // output_elemtype
    pd.set(14, 0);  // output_transpose
    pd.set(18, 2);  // int8_scale_term

    ncnn::Option opt;
    opt.num_threads = 1;

    ncnn::Layer* gemm = ncnn::create_layer("Gemm");
    gemm->load_param(pd);
    gemm->load_model(ncnn::ModelBinFromMatArray(0));
    gemm->create_pipeline(opt);

    std::vector<ncnn::Mat> inputs(2);
    inputs[0] = A;
    inputs[1] = BT;
    std::vector<ncnn::Mat> outputs(1);

    // report the best of 10 runs
    double mint = 999999999;
    for (int i = 0; i < 10; i++)
    {
        double t0 = ncnn::get_current_time();
        gemm->forward(inputs, outputs, opt);
        double t1 = ncnn::get_current_time();
        double t = t1 - t0;
        fprintf(stderr, "%.2f\n", t);
        if (t < mint)
            mint = t;
    }
    fprintf(stderr, "mint = %.2f\n", mint);

    ncnn::Mat out = outputs[0];

    gemm->destroy_pipeline(opt);
    delete gemm;
}
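A minimal driver for the snippet above would simply call it with the shape quoted in the next line; this main() is hypothetical and not part of the original comment:

```cpp
// Hypothetical entry point for the benchmark above (not from the PR);
// uses the M = N = K = 5000 shape reported below.
int main()
{
    benchmark_gemm_int8(5000, 5000, 5000);
    return 0;
}
```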
EPYC 9754, single thread, gemm int8 dynamic-quant, M = N = K = 5000