🚀 The feature, motivation and pitch
According to this post, https://lmsys.org/blog/2024-07-25-sglang-llama3/, vLLM appears to be less efficient for small models in both the online and offline benchmarks. What is the bottleneck for vLLM when serving small models, and are there plans to address it so vLLM can catch up with SGLang and TensorRT-LLM performance?
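For reference, a minimal offline throughput check with vLLM's `LLM` API is sketched below. This is not the exact benchmark script used in the blog post; the model name, prompt count, and output length are assumptions for illustration only.

```python
# Minimal offline throughput sketch with vLLM's LLM API.
# Model name, prompt count, and max_tokens are assumed values, not the
# blog post's benchmark configuration.
import time

from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # assumed small model
prompts = ["Explain KV caching in one sentence."] * 256  # assumed batch size
params = SamplingParams(temperature=0.0, max_tokens=128)

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{generated / elapsed:.1f} output tokens/s")
```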
Alternatives
No response
Additional context
No response