[Bug] Qwen2VL Crashes on some inputs #2037

Closed
5 tasks done
jakep-allenai opened this issue Nov 14, 2024 · 0 comments

jakep-allenai commented Nov 14, 2024

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

When running a qwen2-vl vision model through sglang, occasionally (roughly 1 in 10,000 requests) a request fails with the stack trace listed below:

2024-11-14T22:16:37.097121828Z 2024-11-14 22:16:37,096 - sglang - INFO - [2024-11-14 22:16:37 TP0] Traceback (most recent call last):
2024-11-14T22:16:37.097179033Z 2024-11-14 22:16:37,096 - sglang - INFO -   File "/root/sglang/python/sglang/srt/managers/scheduler.py", line 1212, in run_scheduler_process
2024-11-14T22:16:37.097201943Z 2024-11-14 22:16:37,097 - sglang - INFO -     scheduler.event_loop_normal()
2024-11-14T22:16:37.097237634Z 2024-11-14 22:16:37,097 - sglang - INFO -   File "/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2024-11-14T22:16:37.097303640Z 2024-11-14 22:16:37,097 - sglang - INFO -     return func(*args, **kwargs)
2024-11-14T22:16:37.097354768Z 2024-11-14 22:16:37,097 - sglang - INFO -            ^^^^^^^^^^^^^^^^^^^^^
2024-11-14T22:16:37.097389482Z 2024-11-14 22:16:37,097 - sglang - INFO -   File "/root/sglang/python/sglang/srt/managers/scheduler.py", line 337, in event_loop_normal
2024-11-14T22:16:37.097450598Z 2024-11-14 22:16:37,097 - sglang - INFO -     result = self.run_batch(batch)
2024-11-14T22:16:37.097555578Z 2024-11-14 22:16:37,097 - sglang - INFO -              ^^^^^^^^^^^^^^^^^^^^^
2024-11-14T22:16:37.097577999Z 2024-11-14 22:16:37,097 - sglang - INFO -   File "/root/sglang/python/sglang/srt/managers/scheduler.py", line 788, in run_batch
2024-11-14T22:16:37.097664330Z 2024-11-14 22:16:37,097 - sglang - INFO -     logits_output, next_token_ids = self.tp_worker.forward_batch_generation(
2024-11-14T22:16:37.097692408Z 2024-11-14 22:16:37,097 - sglang - INFO -                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-14T22:16:37.097724398Z 2024-11-14 22:16:37,097 - sglang - INFO -   File "/root/sglang/python/sglang/srt/managers/tp_worker.py", line 139, in forward_batch_generation
2024-11-14T22:16:37.097796061Z 2024-11-14 22:16:37,097 - sglang - INFO -     logits_output = self.model_runner.forward(forward_batch)
2024-11-14T22:16:37.097841602Z 2024-11-14 22:16:37,097 - sglang - INFO -                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-14T22:16:37.097976965Z 2024-11-14 22:16:37,097 - sglang - INFO -   File "/root/sglang/python/sglang/srt/model_executor/model_runner.py", line 579, in forward
2024-11-14T22:16:37.098004625Z 2024-11-14 22:16:37,097 - sglang - INFO -     return self.forward_extend(forward_batch)
2024-11-14T22:16:37.098018594Z 2024-11-14 22:16:37,097 - sglang - INFO -            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-14T22:16:37.098062877Z 2024-11-14 22:16:37,097 - sglang - INFO -   File "/root/sglang/python/sglang/srt/model_executor/model_runner.py", line 563, in forward_extend
2024-11-14T22:16:37.098107300Z 2024-11-14 22:16:37,098 - sglang - INFO -     return self.model.forward(
2024-11-14T22:16:37.098164575Z 2024-11-14 22:16:37,098 - sglang - INFO -            ^^^^^^^^^^^^^^^^^^^
2024-11-14T22:16:37.098209416Z 2024-11-14 22:16:37,098 - sglang - INFO -   File "/root/sglang/python/sglang/srt/models/qwen2_vl.py", line 647, in forward
2024-11-14T22:16:37.098265643Z 2024-11-14 22:16:37,098 - sglang - INFO -     inputs_embeds[left_idx:right_idx] = image_embeds[
2024-11-14T22:16:37.098369226Z 2024-11-14 22:16:37,098 - sglang - INFO -     ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
2024-11-14T22:16:37.098492297Z 2024-11-14 22:16:37,098 - sglang - INFO - RuntimeError: The expanded size of the tensor (816) must match the existing size (962) at non-singleton dimension 0.  Target sizes: [816, 3584].  Tensor sizes: [962, 3584]
2024-11-14T22:16:37.098521353Z 2024-11-14 22:16:37,098 - sglang - INFO - 
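For context, the error is a shape mismatch when the image embeddings are written into the prompt's image-placeholder span. A minimal, self-contained sketch of that kind of assignment (not the actual sglang code; the sizes are the hypothetical ones taken from the error message) reproduces the same RuntimeError in isolation:

import torch

hidden_size = 3584
prompt_len = 2048                                   # hypothetical prompt length
inputs_embeds = torch.zeros(prompt_len, hidden_size)
image_embeds = torch.zeros(962, hidden_size)        # vision tower produced 962 image tokens

# Hypothetical placeholder span in the prompt with only 816 slots available.
left_idx, right_idx = 100, 100 + 816

# Mirrors the failing assignment in qwen2_vl.py: when the placeholder span is
# shorter than the number of image embeddings, this raises
# "The expanded size of the tensor (816) must match the existing size (962)".
inputs_embeds[left_idx:right_idx] = image_embeds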

@yizhang2077

Reproduction

sglang was launched with the following command:

python -m sglang.launch_server --model [local finetuned qwen2vl checkpointpath] --chat-template qwen2-vl --context-length 8192 --port 30004 --log-level-http warning

This is from a nightly build of sglang:
eff468d
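
The specific inputs that trigger the crash are not included in this report. For reference, a hedged sketch of how a vision request typically reaches this server through sglang's OpenAI-compatible endpoint (the model name, image URL, and prompt text are placeholders, not the failing inputs):

import openai

# Assumes the server from the launch command above, listening on port 30004.
client = openai.OpenAI(base_url="http://localhost:30004/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="default",  # placeholder; use the served model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/page.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)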

Environment

# python3 -m sglang.check_env
/bin/sh: 1: /usr/local/cuda/bin/nvcc: not found
Python: 3.11.10 (main, Oct  3 2024, 07:29:13) [GCC 11.2.0]
CUDA available: True
GPU 0: NVIDIA H100 80GB HBM3
GPU 0 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Not Available
CUDA Driver Version: 525.147.05
PyTorch: 2.4.0+cu121
sglang: 0.3.5
flashinfer: 0.1.6+cu121torch2.4
triton: 3.0.0
transformers: 4.46.2
requests: 2.32.3
tqdm: 4.67.0
numpy: 1.26.4
aiohttp: 3.10.10
fastapi: 0.115.5
hf_transfer: 0.1.8
huggingface_hub: 0.26.2
interegular: 0.3.3
packaging: 24.2
PIL: 10.4.0
psutil: 6.1.0
pydantic: 2.9.2
uvicorn: 0.32.0
uvloop: 0.21.0
zmq: 26.2.0
vllm: 0.6.3.post1
multipart: 0.0.17
openai: 1.54.4
anthropic: 0.39.0
NVIDIA Topology: 
        GPU0    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    CPU Affinity    NUMA Affinity
GPU0     X      SYS     SYS     SYS     SYS     PIX     NODE    48-95,144-191   1
NIC0    SYS      X      NODE    NODE    NODE    SYS     SYS
NIC1    SYS     NODE     X      PIX     NODE    SYS     SYS
NIC2    SYS     NODE    PIX      X      NODE    SYS     SYS
NIC3    SYS     NODE    NODE    NODE     X      SYS     SYS
NIC4    PIX     SYS     SYS     SYS     SYS      X      NODE
NIC5    NODE    SYS     SYS     SYS     SYS     NODE     X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5


ulimit soft: 1048576