Which transformer version could be used with vLLM 0.6.2? #282
Comments
Same issue +1
Same here
Seems the root cause has been found: vllm-project/vllm#8829
We have just now fixed the issue in vllm-project/vllm#8837. Please install vLLM from source to resolve the config loading problem.
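As a quick sanity check after building from source, a minimal sketch to confirm which builds are actually active in the environment:

```python
# A source build of vLLM usually reports a dev/commit-suffixed version string
# rather than a plain release number like "0.6.2".
import transformers
import vllm

print("vllm:", vllm.__version__)
print("transformers:", transformers.__version__)
```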
vLLM still does not support multi-image or video question answering. Are there any plans to fix this?
Multi-image input is currently supported in both offline and online inference, while video input is only supported for offline inference at the moment. If you need to pass videos via the OpenAI API, you can instead provide multiple images for now. Please check the example in
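For reference, a rough offline multi-image sketch based on vLLM's documented `multi_modal_data` interface; the model name, file paths, and hand-written prompt below are illustrative, and in practice the tokenizer's chat template should be used to build the prompt:

```python
from PIL import Image
from vllm import LLM, SamplingParams

# Allow up to two images per prompt; model name and file paths are illustrative.
llm = LLM(model="Qwen/Qwen2-VL-7B-Instruct", limit_mm_per_prompt={"image": 2})

# One <|image_pad|> placeholder per image passed in multi_modal_data.
prompt = (
    "<|im_start|>user\n"
    "<|vision_start|><|image_pad|><|vision_end|>"
    "<|vision_start|><|image_pad|><|vision_end|>"
    "What differs between these two images?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

outputs = llm.generate(
    {
        "prompt": prompt,
        "multi_modal_data": {"image": [Image.open("a.jpg"), Image.open("b.jpg")]},
    },
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```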
Can we get a .post0 release for this? Installing from source is a lot more difficult.
use
I'd recommend against installing
vLLM 0.6.2 still reports this error at runtime: Unrecognized keys in rope_scaling for 'rope_type'='default': {'mrope_section'}
Please install vLLM from source to fix the issue. |
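To see why the error mentions `mrope_section`, it can help to look at the raw `rope_scaling` block the checkpoint ships in `config.json`; a small sketch, with the checkpoint path being illustrative:

```python
import json

# Print the on-disk rope_scaling block that trips the validation
# on incompatible vLLM/transformers combinations.
with open("Qwen2-VL-7B-Instruct/config.json") as f:
    cfg = json.load(f)

# Qwen2-VL checkpoints typically ship something like
# {"type": "mrope", "mrope_section": [16, 24, 24]}.
print(cfg.get("rope_scaling"))
```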
Hi all, I'm encountering the same error. It seems related to the rope_scaling["type"] settings in Qwen2VLConfig from the transformers library. You can check the relevant code here:

```python
if self.rope_scaling is not None and "type" in self.rope_scaling:
    if self.rope_scaling["type"] == "mrope":
        # self.rope_scaling["type"] = "default"
        pass
    self.rope_scaling["rope_type"] = self.rope_scaling["type"]
rope_config_validation(self, ignore_keys={"mrope_section"})
```

After commenting out that line, my program works well with
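A way to observe that rewrite without patching library code is to load the config through transformers and print what it ends up with; a minimal sketch, with the model name being illustrative:

```python
from transformers import AutoConfig

# After Qwen2VLConfig.__init__ runs, rope_scaling reveals whether "type" was
# rewritten to "default" or left as "mrope".
cfg = AutoConfig.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
print(cfg.rope_scaling)
```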
change
vLLM 0.6.2 was released just a few hours ago, and it now supports multi-image inference with Qwen2-VL.
I tried it, but it requires the newest transformers and installs it automatically.
When I start it with the following script (which worked with vLLM 0.6.1)
it reports an error like
If I go back to the old transformers with
it reports an error like
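Since the question is which transformers version a given vLLM build accepts, a hedged sketch that compares the pin declared by the installed vLLM package with what is actually installed:

```python
from importlib.metadata import requires, version

# Compare the transformers requirement the installed vLLM wheel declares
# with the transformers version actually present in the environment.
print("installed transformers:", version("transformers"))
pins = [r for r in (requires("vllm") or []) if r.lower().startswith("transformers")]
print("vllm requires:", pins)
```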