
Adapt to CUDA.jl profile changes #168

Merged: 1 commit merged from tf/new-cuda-at-profile into master on Nov 7, 2023
Conversation

thomasfaingnaert (Member) commented:

ref JuliaGPU/CUDA.jl#2139

Probably should add methods to CUDA.jl to access the kernel times instead of grabbing them directly from the dataframe.

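For reference, a minimal sketch of the kind of dataframe access this refers to, assuming the `CUDA.@profile` result exposes the device-side trace as a table under a `device` property with `start`/`stop` timestamp columns (these field and column names are an assumption and may differ between CUDA.jl versions):

```julia
using CUDA, Statistics

# Profile a GPU operation with CUDA.jl's integrated profiler
# (the interface introduced in JuliaGPU/CUDA.jl#2139).
a = CUDA.rand(Float32, 2048, 2048)
b = CUDA.rand(Float32, 2048, 2048)
prof = CUDA.@profile a * b

# Assumption: `prof.device` is a DataFrame-like table with one row per
# device-side activity, including per-kernel `start`/`stop` timestamps.
kernels = prof.device

# Per-kernel durations pulled straight from the trace -- the "grabbing them
# directly from the dataframe" pattern that dedicated accessor methods in
# CUDA.jl could eventually replace.
durations = kernels.stop .- kernels.start
println("minimum kernel time: ", minimum(durations))
```
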
@maleadt (Member) commented on Nov 7, 2023:

Benchmark results for commit a427f5e (comparing to 93ef046):

| test | master | PR | Δmin |
|---|---|---|---|
| WMMA diagonal GEMM diag(Float16)*Float16=Float32 (4096×4096) · (4096×4096) (NN) OP (16, 16, 16) | 480.0 μs ± 3.85% (456.0 … 506.0 μs), 255 regs | 432.0 μs ± 3.83% (414.0 … 462.0 μs), 136 regs | -9.2% ✅ |
| Tropical GEMM Float32*Float32=Float32 (128×256) · (256×128) (TN) OP (8, 16, 2), base shape (4, 8, 1) | 14.6 μs ± 0.908% (14.5 … 15.0 μs), 80 regs | 124.0 μs ± 0.153% (123.0 … 124.0 μs), 116 regs | +747.5% ❌ |
| WMMA diagonal GEMM diag(Float16)*Float16=Float32 (4096×4096) · (4096×4096) (NT) OP (16, 16, 16) | 474.0 μs ± 3.52% (451.0 … 495.0 μs), 255 regs | 430.0 μs ± 4.26% (405.0 … 460.0 μs), 138 regs | -10.2% ✅ |
| Tropical GEMM Float32*Float32=Float32 (128×128) · (128×128) (NN) OP (8, 16, 2), base shape (4, 8, 1) | 9.17 μs ± 1.65% (8.82 … 9.54 μs), 80 regs | 63.7 μs ± 0.27% (63.4 … 64.1 μs), 118 regs | +618.9% ❌ |
| Tropical GEMM Float32*Float32=Float32 (256×128) · (128×256) (TT) OP (8, 16, 2), base shape (4, 8, 1) | 9.54 μs ± 1.6% (9.3 … 9.78 μs), 82 regs | 64.0 μs ± 0.263% (63.7 … 64.4 μs), 123 regs | +584.6% ❌ |
| Tropical GEMM Float32*Float32=Float32 (256×256) · (256×256) (TT) OP (8, 16, 2), base shape (4, 8, 1) | 16.3 μs ± 0.985% (16.0 … 16.5 μs), 82 regs | 110.0 μs ± 0.144% (110.0 … 111.0 μs), 123 regs | +589.6% ❌ |
| WMMA GEMM Float16*Float16=Float32 (128×128) · (128×128) (NN) OP (16, 16, 16) | 9.15 μs ± 1.8% (8.82 … 9.3 μs) | 8.06 μs ± 2.14% (7.87 … 8.34 μs) | -10.8% ✅ |
| WMMA diagonal GEMM diag(Float16)*Float16=Float32 (128×128) · (128×128) (NN) OP (16, 16, 16) | 12.8 μs ± 1.13% (12.4 … 12.9 μs), 154 regs | 9.93 μs ± 1.29% (9.54 … 10.0 μs), 119 regs | -23.1% ✅ |
| Tropical GEMM Float32*Float32=Float32 (128×256) · (256×128) (NT) OP (8, 16, 2), base shape (4, 8, 1) | 18.3 μs ± 0.86% (18.1 … 18.6 μs), 80 regs | 127.0 μs ± 0.144% (127.0 … 128.0 μs), 111 regs | +598.7% ❌ |
| Tropical GEMM Float32*Float32=Float32 (128×256) · (256×128) (TT) OP (8, 16, 2), base shape (4, 8, 1) | 16.1 μs ± 0.872% (15.7 … 16.5 μs), 82 regs | 125.0 μs ± 0.123% (125.0 … 126.0 μs), 123 regs | +692.4% ❌ |
| WMMA diagonal GEMM diag(Float16)*Float16=Float32 (256×256) · (256×256) (NT) OP (16, 16, 16) | 16.9 μs ± 0.677% (16.7 … 17.2 μs), 255 regs | 14.3 μs ± 0.972% (14.1 … 14.5 μs), 121 regs | -15.7% ✅ |
| WMMA diagonal GEMM diag(Float16)*Float16=Float32 (128×128) · (128×128) (NT) OP (16, 16, 16) | 11.3 μs ± 1.04% (11.0 … 11.7 μs), 154 regs | 9.89 μs ± 1.49% (9.54 … 10.0 μs), 120 regs | -13.0% ✅ |
| Tropical GEMM Float32*Float32=Float32 (256×256) · (256×256) (NT) OP (8, 16, 2), base shape (4, 8, 1) | 18.4 μs ± 0.705% (18.1 … 18.8 μs), 80 regs | 127.0 μs ± 0.134% (127.0 … 128.0 μs), 111 regs | +600.0% ❌ |
| Tropical GEMM Float32*Float32=Float32 (256×128) · (128×256) (NT) OP (8, 16, 2), base shape (4, 8, 1) | 10.7 μs ± 1.3% (10.5 … 11.0 μs), 80 regs | 64.9 μs ± 0.267% (64.4 … 65.3 μs), 111 regs | +513.6% ❌ |
| Tropical GEMM Float32*Float32=Float32 (256×256) · (256×256) (TN) OP (8, 16, 2), base shape (4, 8, 1) | 14.8 μs ± 1.06% (14.5 … 15.0 μs), 80 regs | 109.0 μs ± 0.161% (109.0 … 110.0 μs), 116 regs | +650.8% ❌ |
| Tropical GEMM Float32*Float32=Float32 (256×128) · (128×256) (TN) OP (8, 16, 2), base shape (4, 8, 1) | 8.76 μs ± 1.77% (8.58 … 9.06 μs), 80 regs | 63.2 μs ± 0.23% (62.9 … 63.7 μs), 116 regs | +633.3% ❌ |
| Tropical GEMM Float32*Float32=Float32 (128×256) · (256×128) (NN) OP (8, 16, 2), base shape (4, 8, 1) | 15.7 μs ± 1.09% (15.5 … 16.0 μs), 80 regs | 125.0 μs ± 0.118% (124.0 … 125.0 μs), 118 regs | +703.1% ❌ |
| Tropical GEMM Float32*Float32=Float32 (128×128) · (128×128) (TT) OP (8, 16, 2), base shape (4, 8, 1) | 9.42 μs ± 1.59% (9.06 … 9.78 μs), 82 regs | 63.8 μs ± 0.27% (63.4 … 64.1 μs), 123 regs | +600.0% ❌ |
| WMMA diagonal GEMM diag(Float16)*Float16=Float32 (256×256) · (256×256) (NN) OP (16, 16, 16) | 17.0 μs ± 0.757% (16.7 … 17.2 μs), 255 regs | 14.2 μs ± 1.0% (14.1 … 14.5 μs), 121 regs | -15.7% ✅ |
| Tropical GEMM Float32*Float32=Float32 (128×128) · (128×128) (TN) OP (8, 16, 2), base shape (4, 8, 1) | 8.65 μs ± 1.82% (8.34 … 8.82 μs), 80 regs | 55.6 μs ± 0.214% (55.1 … 56.0 μs), 116 regs | +560.0% ❌ |
| Tropical GEMM Float32*Float32=Float32 (128×128) · (128×128) (NT) OP (8, 16, 2), base shape (4, 8, 1) | 10.5 μs ± 0.723% (10.3 … 11.0 μs), 80 regs | 57.0 μs ± 0.303% (56.7 … 57.5 μs), 111 regs | +453.5% ❌ |
| Tropical GEMM Float32*Float32=Float32 (256×256) · (256×256) (NN) OP (8, 16, 2), base shape (4, 8, 1) | 15.9 μs ± 0.93% (15.5 … 16.2 μs), 80 regs | 110.0 μs ± 0.153% (110.0 … 111.0 μs), 118 regs | +607.7% ❌ |
| WMMA GEMM Float16*Float16=Float16 (2048×2048) · (2048×2048) (NT) OP (16, 16, 16) | 277.0 μs ± 6.14% (256.0 … 298.0 μs) | 311.0 μs ± 4.91% (278.0 … 324.0 μs) | +8.7% ❌ |
| Tropical GEMM Float32*Float32=Float32 (256×128) · (128×256) (NN) OP (8, 16, 2), base shape (4, 8, 1) | 9.32 μs ± 1.81% (9.06 … 9.54 μs), 80 regs | 63.8 μs ± 0.276% (63.4 … 64.4 μs), 118 regs | +600.0% ❌ |

thomasfaingnaert marked this pull request as ready for review on November 7, 2023 at 17:18.
thomasfaingnaert merged commit 5dfb8ab into master on November 7, 2023 (1 check was pending).
thomasfaingnaert deleted the tf/new-cuda-at-profile branch on November 7, 2023 at 17:18.