Update to latest vLLM upstream and Support vLLM on CPU #149
base: master
Conversation
xwu99 commented Apr 23, 2024 (edited)
- Update models to pydantic v2, as the latest vLLM has adopted v2 models instead of v1
- Fix the AutoscalingConfig model, since it comes from Ray Serve, which is still based on pydantic v1
- Add CPU model YAML files for Llama 2 7B
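The pydantic v1 → v2 migration mostly means renaming call sites. A minimal sketch of the kind of changes involved (the model below is illustrative, not an actual RayLLM class), assuming pydantic v2 is installed:

```python
# Sketch of the pydantic v1 -> v2 call-site changes involved in the migration.
# `PromptFormat` here is a hypothetical example model, not RayLLM's own class.
from pydantic import BaseModel, field_validator  # v2 API


class PromptFormat(BaseModel):
    system: str = ""

    # In v1 this would be @validator("system"); v2 renames it to
    # @field_validator and requires a classmethod.
    @field_validator("system")
    @classmethod
    def strip_system(cls, v: str) -> str:
        return v.strip()


fmt = PromptFormat(system="  You are helpful.  ")
# v1's .dict() / .parse_obj() become .model_dump() / .model_validate() in v2.
data = fmt.model_dump()
restored = PromptFormat.model_validate(data)
```

The same renames (`.dict()` → `.model_dump()`, `.parse_obj()` → `.model_validate()`, `@validator` → `@field_validator`) recur throughout the codebase once vLLM's v2 models are pulled in.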
Were you able to run this locally? Does it work? I'm just looking forward to seeing how to update this project to support the latest vLLM.
I am working on this. Several packages (ray, vllm, pydantic, openai, etc.) have been updated since the last release of RayLLM. Hopefully I'll get it working soon.
Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>
Hey all, I also have similar updates on a fork, but I've struggled to get feedback from the maintainers on how to proceed here. I similarly updated RayLLM to pydantic v2 due to vLLM's migration to v2 proper (not using the v1 back-compat layer). The challenge this introduced is that it makes these changes incompatible with Ray, because Ray is still using the v1 compat layer. See: ray-project/ray#43908 (I haven't had a chance to go back and gather the further specifics requested to help convince the core Ray team to reconsider the pydantic upgrade.)

There are numerous other signature changes due to the tight coupling of Ray and vLLM, so while you may get RayLLM working directly with vLLM, I wonder what the mileage will be here on getting this contribution accepted if it excludes Ray support. Just food for thought. :)
No need for Ray to upgrade. I just upgraded AutoscalingConfig to v2 here.
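One way to sidestep the v1/v2 boundary is to keep a local pydantic-v2 model for the autoscaling settings and hand Ray Serve a plain dict, so no pydantic model objects cross between the two versions. A hedged sketch (the class below is illustrative, not the actual RayLLM code; field names mirror Ray Serve's AutoscalingConfig):

```python
# Local pydantic-v2 mirror of Ray Serve's autoscaling settings. Ray Serve's
# own AutoscalingConfig is still pydantic-v1 based, but it accepts a plain
# dict, so only a dict needs to cross the boundary.
from pydantic import BaseModel, Field


class AutoscalingConfigV2(BaseModel):
    min_replicas: int = Field(1, ge=0)
    max_replicas: int = Field(1, ge=1)
    target_num_ongoing_requests_per_replica: float = 1.0


cfg = AutoscalingConfigV2(min_replicas=1, max_replicas=8)
# Pass a plain dict instead of a v2 model instance to Ray Serve:
serve_kwargs = {"autoscaling_config": cfg.model_dump()}
```

This keeps RayLLM's own models uniformly on v2 while leaving Ray's pydantic pin untouched.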
Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>
Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>
@xwu99 comment says
You just need to follow the vLLM official guide.
@xwu99 I saw
vLLM for CPU does not support tensor parallelism yet. This PR should be revised later to support both CPU and GPU; right now it is adapted only for CPU.
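A later CPU-plus-GPU revision would need branching along these lines when building the engine arguments. This helper is a hypothetical sketch, not part of this PR; on CPU, tensor parallelism is unsupported, so `tensor_parallel_size` must stay at 1 regardless of what the model config requests:

```python
# Illustrative helper: pick engine kwargs based on the target device.
# On the CPU backend, force tensor_parallel_size=1 since vLLM's CPU
# support does not include tensor parallelism yet.
def engine_kwargs(device: str, requested_tp: int) -> dict:
    if device == "cpu":
        return {"device": "cpu", "tensor_parallel_size": 1}
    # GPU path: honor the tensor parallelism requested by the model config.
    return {"device": device, "tensor_parallel_size": requested_tp}
```

The resulting dict would then be splatted into the vLLM engine constructor by whichever code path builds the engine.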
Great, thanks for the clarification. I also tried to upgrade vLLM to the latest version, but for GPU I found it's not easy work: the main problem is that vLLM requires the driver process to also have GPU capability.