[Doc][AMD][ROCm] Added tips to refer to mi300x tuning guide for mi300x users (vllm-project#6754)
hongxiayang authored Jul 24, 2024
1 parent 1bfb015 commit 59af1d6
Showing 1 changed file with 7 additions and 0 deletions.
docs/source/getting_started/amd-installation.rst
@@ -142,3 +142,10 @@ Alternatively, wheels intended for vLLM use can be accessed under the releases.
- Triton flash attention does not currently support sliding window attention. If using half precision, please use CK flash-attention for sliding window support.
- To use CK flash-attention or PyTorch naive attention, set ``export VLLM_USE_TRITON_FLASH_ATTN=0`` to turn off Triton flash attention (see the sketch after this list).
- Ideally, the ROCm version of PyTorch should match the ROCm driver version.
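
As a minimal sketch of this flag in use (the server entrypoint and model name below are illustrative assumptions, not part of this commit), a launch with Triton flash attention disabled might look like:

.. code-block:: console

   $ # Fall back to CK flash-attention or PyTorch naive attention
   $ export VLLM_USE_TRITON_FLASH_ATTN=0
   $ python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-chat-hf

The environment variable must be visible to the process that launches the server, since vLLM reads it at startup.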


.. tip::
   - For MI300x (gfx942) users, please refer to the `MI300x tuning guide <https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/index.html>`_ for performance optimization and tuning tips at the system and workflow level.
     For vLLM specifically, please refer to `vLLM performance optimization <https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/workload.html#vllm-performance-optimization>`_.

