Your current environment

The output of `python collect_env.py`
🐛 Describe the bug
When launching a Qwen2-VL model server with vLLM, I set --enable-prefix-caching. The server raised an error as soon as a second request containing an image arrived. It appears that, at present, this flag is incompatible with multimodal models. Are there plans to fix this incompatibility?
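For reference, a minimal offline sketch along these lines should exercise the same engine path as the server (the checkpoint name, prompt template, and image file names here are illustrative assumptions, not taken from the report):

```python
# Hypothetical reproduction sketch: load Qwen2-VL with prefix caching enabled
# and issue two image requests; the second is the one reported to fail.
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2-VL-7B-Instruct",  # assumed checkpoint
    enable_prefix_caching=True,          # the flag reported to trigger the error
)
params = SamplingParams(max_tokens=64)

# Qwen2-VL expects its vision placeholder tokens in the prompt.
prompt = (
    "<|im_start|>user\n"
    "<|vision_start|><|image_pad|><|vision_end|>Describe this image."
    "<|im_end|>\n<|im_start|>assistant\n"
)

for path in ["image1.jpg", "image2.jpg"]:  # assumed local test images
    out = llm.generate(
        {"prompt": prompt, "multi_modal_data": {"image": Image.open(path)}},
        params,
    )
    print(out[0].outputs[0].text)
```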
Before submitting a new issue...
Make sure you have already searched for relevant issues and asked the chatbot living at the bottom right corner of the documentation page, which can answer many frequently asked questions.
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!