I currently see that CUDA is a prerequisite for running models with vLLM. We have a cluster with Intel PVC GPUs. Are there plans to abstract vLLM's hardware backend so that our pre-trained models can run without CUDA?
Hi @atanikan, thanks for your interest in vLLM. We are definitely interested in supporting more diverse hardware backends. However, we currently don't have expertise with Intel PVC GPUs, and we don't even have access to them. Therefore, we won't be able to do this ourselves, but we'd love to help if someone works on the integration.
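For what it's worth, one common shape for such an integration is a pluggable backend registry that probes each accelerator at startup. The sketch below is purely illustrative and is not vLLM's actual API; all names (`register_backend`, `select_backend`) are hypothetical, and the availability probes are stubbed where a real integration would call `torch.cuda.is_available()` or an XPU equivalent:

```python
# Hypothetical sketch of a pluggable hardware-backend registry.
# Names and structure are illustrative only, not vLLM's real design.
from typing import Callable, Dict

_BACKENDS: Dict[str, Callable[[], bool]] = {}

def register_backend(name: str, is_available: Callable[[], bool]) -> None:
    """Register a backend with a zero-arg availability probe."""
    _BACKENDS[name] = is_available

def select_backend(preferred: str = None) -> str:
    """Return the first available backend, honoring a preference."""
    order = [preferred] if preferred in _BACKENDS else []
    order += [n for n in _BACKENDS if n not in order]
    for name in order:
        if _BACKENDS[name]():
            return name
    raise RuntimeError("no supported accelerator found")

# Stubbed probes: a real integration would query the driver/runtime,
# e.g. via PyTorch's device APIs for CUDA or Intel XPU devices.
register_backend("cuda", lambda: False)  # pretend no NVIDIA GPU present
register_backend("xpu", lambda: True)    # pretend an Intel PVC is present

print(select_backend())  # → xpu
```

The point of the abstraction is that model code asks only for "the available device," so adding a new accelerator means adding one registry entry and its kernels rather than touching call sites that assume CUDA.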