[Bug]: minicpmv crash on 1x1 image #8954
Are you able to process the image using their image processor on HuggingFace? If the problem lies with their image processor, you should raise the issue to their repository instead of ours. I doubt that most image processors are designed to deal with this case, though.
I'll test it later. @DarkLight1337
Hey, thanks for the bug report! I went ahead and reproduced this error with MiniCPM 2.6. Some repro cases below.

vLLM (the MiniCPM preprocessing changed a little in the last week, so it's not using the default mapper anymore, but it's still wrapping the same HF image processor, so it doesn't matter for this case):

```python
from vllm import LLM, SamplingParams
import numpy as np
from PIL import Image

model_name = "openbmb/MiniCPM-V-2_6"
llm = LLM(
    model=model_name,
    dtype="half",
    trust_remote_code=True,
)
sampling_params = SamplingParams()

IMG_DIMS = [1, 1, 3]
# Make a random image of the specified dims
img = Image.fromarray((np.random.rand(*IMG_DIMS) * 255).astype('uint8'))

outputs = llm.generate(
    {
        "prompt": "<image>./</image>",
        "multi_modal_data": {"image": img},
    },
    sampling_params=sampling_params,
)
```

In transformers, it's pretty much equivalent to this:

```python
import transformers
import numpy as np
from PIL import Image

model_name = "openbmb/MiniCPM-V-2_6"
proc = transformers.AutoImageProcessor.from_pretrained(model_name, trust_remote_code=True)

IMG_DIMS = [1, 1, 3]
# Make a random image of the specified dims
img = Image.fromarray((np.random.rand(*IMG_DIMS) * 255).astype('uint8'))
proc.preprocess(img, return_tensors="pt")
```

Running both gives the same error. I did open a PR to tweak the logging error for the default mapper a bit, although it won't be logged for this specific model now that there is a custom mapper.
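In the meantime, a caller-side guard can keep degenerate images from ever reaching the processor. This is just a minimal sketch under my own assumptions: the `ensure_min_size` helper and the `MIN_SIDE` threshold are hypothetical, and the real lower bound depends on the processor's patch/slicing configuration.

```python
from PIL import Image

MIN_SIDE = 28  # assumption: a safe lower bound, not a documented vLLM/HF constant

def ensure_min_size(img: Image.Image, min_side: int = MIN_SIDE) -> Image.Image:
    """Upscale images whose shorter side is below min_side, so 1x1 inputs don't crash preprocessing."""
    if min(img.size) >= min_side:
        return img
    # Scale up while preserving aspect ratio
    scale = min_side / min(img.size)
    new_size = (max(min_side, round(img.width * scale)), max(min_side, round(img.height * scale)))
    return img.resize(new_size, Image.Resampling.BICUBIC)

img = ensure_min_size(img)  # apply before passing the image to llm.generate(...)
```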
When the vLLM server encounters this error, the system stops processing new requests. I think vLLM should ignore this error and process new requests normally.
@njhill how can we let the LLM engine recover from such errors?
#9141
@DarkLight1337 I was looking into this in general before, and I will try to get back to it. There are some classes of errors where we don't need to fail the whole server, and in some cases not even the whole batch.
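Until something like that lands, one way to approximate "fail the request, not the batch" from the caller side is to validate inputs up front and submit requests one at a time, skipping the bad ones. A rough sketch building on the snippets above; `requests` and the reuse of `ensure_min_size` are assumptions for illustration, not vLLM API:

```python
# Rough sketch: assumes llm, sampling_params, and ensure_min_size from the snippets above.
results = []
for prompt, img in requests:  # hypothetical list of (prompt, PIL.Image) pairs
    try:
        safe_img = ensure_min_size(img)  # repair degenerate images before they hit the engine
        out = llm.generate(
            {"prompt": prompt, "multi_modal_data": {"image": safe_img}},
            sampling_params=sampling_params,
        )
        results.append(out)
    except Exception as exc:
        # Note: with the current offline LLM API, an exception raised inside the
        # engine can still leave it unusable; this only reliably helps when the
        # validation above catches the bad input first.
        results.append(exc)
```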
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!