Add a GPU support matrix in documentation #18404

Open
fire opened this issue Aug 29, 2024 · 2 comments
Labels
codegen/nvvm: NVVM code generation compiler backend
codegen/rocm: ROCm code generation compiler backend (HIP/HSA)
codegen/spirv: SPIR-V code generation compiler backend
documentation ✏️: Improvements or additions to documentation
enhancement ➕: New feature or request
hal/cuda: Runtime CUDA HAL backend
hal/hip: Runtime HIP HAL backend
hal/vulkan: Runtime Vulkan GPU HAL backend

Comments

fire commented Aug 29, 2024

Request description

Add a GPU support matrix table so that independent software vendors know what is expected to work.

What component(s) does this issue relate to?

Runtime

Additional context

A question about GPU compatibility between AMD hardware and IREE:

I was curious about the Radeon RX 7900 XTX (https://www.techpowerup.com/gpu-specs/radeon-rx-7900-xtx.c3941), which has 24 GB of video RAM, the Radeon Pro W7900 (https://www.amd.com/en/products/graphics/workstations/radeon-pro/w7900.html), which has 48 GB, and also the Steam Deck's AMD GPU.

Scott Todd — Today at 13:51

What are you curious about specifically? General support for the hardware?

Jakub Kuderski — Today at 14:07
We support both the 7900 XTX and the W7900.

The Steam Deck is based on RDNA2 and should work, but there may be some sharp edges if nobody has tried it before.

Ben Vanik — Today at 14:12
I tried it ages ago, no clue if it still works though (both CPU and Vulkan)
last year around this time, apparently :P (in #vulkan)

iFire — Today at 14:14
Will you claim LLMs run on the Steam Deck? 😄
Like in a table.

Ben Vanik — Today at 14:15
I claim nothing. YMMV :P

iFire — Today at 14:18

So both the AMD 7900 XTX and W7900 will have IREE organization support. That's great!

Scott Todd — Today at 14:20

We have CI workflows running unit tests and some model tests/benchmarks on w7900 GPUs using both the ROCm/HIP and Vulkan APIs. That's mostly what we had on hand in sufficient supply for automated testing, but we'd like to expand the test device matrix eventually.

Some performance optimization work is shared across all backends or at least across CDNA and RDNA (via the LLVMGPU / ROCm / HIP code generation pipelines).

iFire — Today at 14:22

Both Windows and Linux, right?

Scott Todd — Today at 14:26

Yep. Related to that, we're evaluating HSA as a backend option for AMD hardware (see #18349); it has many nice features for the runtime to use, but it is (I think?) currently Linux-only. We're definitely pushing to keep the core project fully supported on all operating systems.

Hmm, that reminds me: I have a dual-boot machine with some W7900s in it now. I should run through our docs etc. on Windows and check that things work out of the box.
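
To make the request concrete, here is a rough sketch of what such a support matrix could look like, using only the hardware, APIs, and operating systems mentioned in this thread. The status wording is my own summary of the discussion above, an assumption rather than an official support statement:

| GPU                    | Architecture | HIP / ROCm   | Vulkan                       | Operating systems |
|------------------------|--------------|--------------|------------------------------|-------------------|
| AMD Radeon Pro W7900   | RDNA3        | Tested in CI | Tested in CI                 | Linux, Windows    |
| AMD Radeon RX 7900 XTX | RDNA3        | Supported    | Supported                    | Linux, Windows    |
| Steam Deck (AMD APU)   | RDNA2        | Untested     | Tried once; expected to work | Linux (SteamOS)   |

A real matrix in the docs would presumably also cover NVIDIA devices (CUDA / NVVM) and CPU targets, given the labels on this issue, but no specific devices for those were discussed here.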

fire added the enhancement ➕ New feature or request label Aug 29, 2024

ScottTodd commented Aug 29, 2024

ScottTodd added the documentation ✏️, hal/vulkan, codegen/spirv, hal/cuda, codegen/nvvm, codegen/rocm, and hal/hip labels Aug 29, 2024

fire commented Sep 2, 2024

There's a bug report about an AMD 7900 XTX card failure: nod-ai/SHARK-Studio#2165
