
[WIP, Kernel] (3/N) Machete W4A8 #8046

Closed

Conversation

LucasWilkinson
Contributor

No description provided.


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build on the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

LucasWilkinson changed the title from [WIP, Kernel] Machete W4A8 to [WIP, Kernel] (3/N) Machete W4A8 on Aug 30, 2024
@cli99
Contributor

cli99 commented Sep 17, 2024

Hi @LucasWilkinson, I ran the W4A8 benchmark in the PR; the GEMM performance from Machete is 15-20% slower than the Marlin kernels. Is that expected? Thanks.

@LucasWilkinson
Contributor Author

@cli99 Thanks for your interest in the kernels! I'll preface this by saying it is a work-in-progress PR, so I still need to do some performance tuning (the updated heuristic in #7701 will likely help this PR once it's merged). We do expect Marlin to outperform this current PR for an M dim (batch-size * seq-len) <= 64, but you should see speedups for M > 64 and larger speedups at M >= 128 for most shapes. What shapes did you test?
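
A small illustration (not from the original comment, just a sketch based on the thresholds stated above) of how the M dimension relates to the batch shape and which kernel is expected to win:

```python
# Illustration only: "M" in the GEMM here is the number of tokens being processed,
# i.e. batch_size * seq_len during prefill. Per the comment above, Marlin is expected
# to win for M <= 64, while Machete should pull ahead for M > 64 and especially M >= 128.
for batch_size, seq_len in [(1, 16), (4, 16), (8, 16)]:
    M = batch_size * seq_len
    regime = "Marlin favored" if M <= 64 else "Machete favored"
    print(f"batch_size={batch_size}, seq_len={seq_len} -> M={M} ({regime})")
```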

@zkf331

zkf331 commented Oct 21, 2024

Hi, @LucasWilkinson, thank you for your amazing work! I wanted to install machete-w4a8 for testing, and the installation was successful. However, when I ran benchmark_machete.py, I encountered the following issue:

  File "~/vllm-0.6.3-machete-w4a8/./benchmarks/kernels/benchmark_machete.py", line 255, in <lambda>
    return lambda: ops.machete_mm(
  File "~/vllm-0.6.3-machete-w4a8/vllm/_custom_ops.py", line 43, in wrapper
    raise NotImplementedError(msg % (fn.__name__, e)) from e
NotImplementedError: Error in calling custom op machete_mm: Could not run '_C::machete_mm' with arguments from the 'CUDA' backend. This could be because the
operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee usi
ng PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. '_C::machete_mm' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastXPU, AutocastCUDA, FuncTorchBatched, Batched
NestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

Environment configuration:

  • GPU: H800 (80 GB)
  • Torch: 2.4.0+cu121
  • Python: 3.10.12

Could you please provide some suggestions?
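
A minimal diagnostic sketch (not from the original thread) that may help narrow this down, under the assumption that the usual cause of such an error is a build where the Machete kernels were not compiled for the GPU's architecture (Machete targets Hopper, compute capability 9.0):

```python
import torch

# Hypothetical diagnostic, not part of the PR: if the extension was built without sm_90
# support, the '_C::machete_mm' op will not be registered for the CUDA backend, producing
# the error above. These checks confirm the device and the torch/CUDA build in use.
print(torch.cuda.get_device_name(0))        # expect an H800 here
print(torch.cuda.get_device_capability(0))  # expect (9, 0) for Hopper
print(torch.version.cuda)                   # CUDA version the torch wheel was built against
print(torch.__version__)                    # 2.4.0+cu121 per the report above
```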

@LucasWilkinson
Contributor Author

Superseded by: #9855
