[Usage]: BNB quantization not supported for Paligemma2 model #12216

Closed
1 task done
ken2190 opened this issue Jan 20, 2025 · 1 comment · Fixed by #12237
ken2190 commented Jan 20, 2025

Your current environment

The output of `docker run --runtime nvidia --gpus all -v /path/to/models:/data/vllm.model --env "HUGGING_FACE_HUB_TOKEN=hf_token" -p 8000:8000 --ipc=host vllm/vllm-openai:latest --model /path/to/model/dir/paligemma2-28b-pt-896 --quantization bitsandbytes --load-format bitsandbytes --served-model-name paligemma2-28b --max-model-len 8192 --distributed-executor-backend mp --tensor-parallel-size 2`

I adjusted some code to use 4-bit bitsandbytes quantization based on the model repo, and it tested fine in a Jupyter notebook, but I get the error below when running with vLLM.
https://huggingface.co/google/paligemma2-28b-pt-896
https://huggingface.co/blog/paligemma2
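
For reference, the notebook-side load was roughly along these lines (a sketch with placeholder ids, not my exact code):

```python
# Rough sketch of the 4-bit bitsandbytes load that works in plain transformers.
import torch
from transformers import (AutoProcessor, BitsAndBytesConfig,
                          PaliGemmaForConditionalGeneration)

model_id = "google/paligemma2-28b-pt-896"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # V100s have no bfloat16 support
)

model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
```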

(VllmWorkerProcess pid=70) INFO 01-20 01:25:05 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(VllmWorkerProcess pid=70) INFO 01-20 01:25:05 selector.py:129] Using XFormers backend.
ERROR 01-20 01:25:05 engine.py:366] Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla V100-PCIE-32GB GPU has compute capability 7.0. You can use float16 instead by explicitly setting the `dtype` flag in CLI, for example: --dtype=half.
ERROR 01-20 01:25:05 engine.py:366] Traceback (most recent call last):
ERROR 01-20 01:25:05 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 01-20 01:25:05 engine.py:366]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 01-20 01:25:05 engine.py:366]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:25:05 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
ERROR 01-20 01:25:05 engine.py:366]     return cls(ipc_path=ipc_path,
ERROR 01-20 01:25:05 engine.py:366]            ^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:25:05 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
ERROR 01-20 01:25:05 engine.py:366]     self.engine = LLMEngine(*args, **kwargs)
ERROR 01-20 01:25:05 engine.py:366]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:25:05 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
ERROR 01-20 01:25:05 engine.py:366]     self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 01-20 01:25:05 engine.py:366]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:25:05 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
ERROR 01-20 01:25:05 engine.py:366]     super().__init__(*args, **kwargs)
ERROR 01-20 01:25:05 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
ERROR 01-20 01:25:05 engine.py:366]     self._init_executor()
ERROR 01-20 01:25:05 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 82, in _init_executor
ERROR 01-20 01:25:05 engine.py:366]     self._run_workers("init_device")
ERROR 01-20 01:25:05 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 157, in _run_workers
ERROR 01-20 01:25:05 engine.py:366]     driver_worker_output = driver_worker_method(*args, **kwargs)
ERROR 01-20 01:25:05 engine.py:366]                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:25:05 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 140, in init_device
ERROR 01-20 01:25:05 engine.py:366]     _check_if_gpu_supports_dtype(self.model_config.dtype)
ERROR 01-20 01:25:05 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 479, in _check_if_gpu_supports_dtype
ERROR 01-20 01:25:05 engine.py:366]     raise ValueError(
ERROR 01-20 01:25:05 engine.py:366] ValueError: Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla V100-PCIE-32GB GPU has compute capability 7.0. You can use float16 instead by explicitly setting the `dtype` flag in CLI, for example: --dtype=half.
Process SpawnProcess-1:
ERROR 01-20 01:25:05 multiproc_worker_utils.py:123] Worker VllmWorkerProcess pid 70 died, exit code: -15
INFO 01-20 01:25:05 multiproc_worker_utils.py:127] Killing local vLLM worker processes
Traceback (most recent call last):                                                                                
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
    raise e
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
    return cls(ipc_path=ipc_path,
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
    self.engine = LLMEngine(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
    self.model_executor = executor_class(vllm_config=vllm_config, )
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
    self._init_executor()
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 82, in _init_executor
    self._run_workers("init_device")
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 157, in _run_workers
    driver_worker_output = driver_worker_method(*args, **kwargs)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 140, in init_device
    _check_if_gpu_supports_dtype(self.model_config.dtype)
  File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 479, in _check_if_gpu_supports_dtype
    raise ValueError(
ValueError: Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla V100-PCIE-32GB GPU has compute capability 7.0. You can use float16 instead by explicitly setting the `dtype` flag in CLI, for example: --dtype=half.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 774, in <module>
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 740, in run_server
    async with build_async_engine_client(args) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 118, in build_async_engine_clie
nt
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 223, in build_async_engine_clie
nt_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.

PS: I added `--dtype half` and got another error:

INFO 01-20 01:42:15 selector.py:129] Using XFormers backend.
(VllmWorkerProcess pid=70) INFO 01-20 01:42:15 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(VllmWorkerProcess pid=70) INFO 01-20 01:42:15 selector.py:129] Using XFormers backend.
WARNING 01-20 01:42:15 xformers.py:387] XFormers does not support logits soft cap. Outputs may be slightly off.
(VllmWorkerProcess pid=70) WARNING 01-20 01:42:15 xformers.py:387] XFormers does not support logits soft cap. Outputs may be slightly off.
ERROR 01-20 01:42:16 engine.py:366] Model PaliGemmaForConditionalGeneration does not support BitsAndBytes quantization yet.
ERROR 01-20 01:42:16 engine.py:366] Traceback (most recent call last):
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 01-20 01:42:16 engine.py:366]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 01-20 01:42:16 engine.py:366]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
ERROR 01-20 01:42:16 engine.py:366]     return cls(ipc_path=ipc_path,
ERROR 01-20 01:42:16 engine.py:366]            ^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
ERROR 01-20 01:42:16 engine.py:366]     self.engine = LLMEngine(*args, **kwargs)
ERROR 01-20 01:42:16 engine.py:366]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
ERROR 01-20 01:42:16 engine.py:366]     self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 01-20 01:42:16 engine.py:366]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
ERROR 01-20 01:42:16 engine.py:366]     super().__init__(*args, **kwargs)
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
ERROR 01-20 01:42:16 engine.py:366]     self._init_executor()
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 83, in _init_executor
ERROR 01-20 01:42:16 engine.py:366]     self._run_workers("load_model",
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 157, in _run_workers
ERROR 01-20 01:42:16 engine.py:366]     driver_worker_output = driver_worker_method(*args, **kwargs)
ERROR 01-20 01:42:16 engine.py:366]                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 155, in load_model
ERROR 01-20 01:42:16 engine.py:366]     self.model_runner.load_model()
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1096, in load_model
ERROR 01-20 01:42:16 engine.py:366]     self.model = get_model(vllm_config=self.vllm_config)
ERROR 01-20 01:42:16 engine.py:366]                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
ERROR 01-20 01:42:16 engine.py:366]     return loader.load_model(vllm_config=vllm_config)
ERROR 01-20 01:42:16 engine.py:366]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 1154, in load_model
ERROR 01-20 01:42:16 engine.py:366]     self._load_weights(model_config, model)
ERROR 01-20 01:42:16 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 1012, in _load_weights
ERROR 01-20 01:42:16 engine.py:366]     raise AttributeError(
ERROR 01-20 01:42:16 engine.py:366] AttributeError: Model PaliGemmaForConditionalGeneration does not support BitsAndBytes quantization yet.
Process SpawnProcess-1:
ERROR 01-20 01:42:16 multiproc_worker_utils.py:123] Worker VllmWorkerProcess pid 70 died, exit code: -15
INFO 01-20 01:42:16 multiproc_worker_utils.py:127] Killing local vLLM worker processes
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
    raise e
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
    return cls(ipc_path=ipc_path,
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
    self.engine = LLMEngine(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
    self.model_executor = executor_class(vllm_config=vllm_config, )
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
    self._init_executor()
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 83, in _init_executor
    self._run_workers("load_model",
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 157, in _run_workers
    driver_worker_output = driver_worker_method(*args, **kwargs)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 155, in load_model
    self.model_runner.load_model()
  File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1096, in load_model
    self.model = get_model(vllm_config=self.vllm_config)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
    return loader.load_model(vllm_config=vllm_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 1154, in load_model
    self._load_weights(model_config, model)
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 1012, in _load_weights
    raise AttributeError(
AttributeError: Model PaliGemmaForConditionalGeneration does not support BitsAndBytes quantization yet.
[rank0]:[W120 01:42:17.503600197 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
Task exception was never retrieved
future: <Task finished name='Task-2' coro=<MQLLMEngineClient.run_output_handler_loop() done, defined at /usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/client.py:178> exception=ZMQError('Operation not supported')>
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/client.py", line 184, in run_output_handler_loop
    while await self.output_socket.poll(timeout=VLLM_RPC_TIMEOUT
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/zmq/_future.py", line 400, in poll
    raise _zmq.ZMQError(_zmq.ENOTSUP)
zmq.error.ZMQError: Operation not supported
Task exception was never retrieved
future: <Task finished name='Task-3' coro=<MQLLMEngineClient.run_output_handler_loop() done, defined at /usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/client.py:178> exception=ZMQError('Operation not supported')>
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/client.py", line 184, in run_output_handler_loop
    while await self.output_socket.poll(timeout=VLLM_RPC_TIMEOUT
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/zmq/_future.py", line 400, in poll
    raise _zmq.ZMQError(_zmq.ENOTSUP)
zmq.error.ZMQError: Operation not supported
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 774, in <module>
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 740, in run_server
    async with build_async_engine_client(args) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 118, in build_async_engine_clie
nt
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 223, in build_async_engine_clie
nt_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
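
For completeness, going through vLLM's offline `LLM` API with the same settings should hit the same loader path (a sketch; the model path is a placeholder):

```python
# Equivalent offline-inference setup, mirroring the docker command above.
from vllm import LLM

llm = LLM(
    model="/path/to/model/dir/paligemma2-28b-pt-896",
    quantization="bitsandbytes",
    load_format="bitsandbytes",
    dtype="half",            # float16 for compute capability 7.0 GPUs (V100)
    max_model_len=8192,
    tensor_parallel_size=2,
)
```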

How would you like to use vllm

I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@ken2190 ken2190 added the usage How to use vllm label Jan 20, 2025
@DarkLight1337 DarkLight1337 changed the title [Usage]: Issue when running Paligemma2 model [Usage]: BNB quantization not supported for Paligemma2 model Jan 20, 2025
@jeejeelee (Collaborator) commented:

I'll complete this feature later.
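
For context, the BitsAndBytes loader only proceeds for model classes that declare a shard mapping for their fused linear layers, which is why the AttributeError above fires for PaliGemma. A toy illustration of that check (not vLLM code verbatim, and not the final patch, which lands in #12237):

```python
# Toy illustration of the opt-in check behind the AttributeError above.
class SupportedModel:
    # (checkpoint shard name) -> (fused vLLM param name, shard index),
    # as BnB-enabled model classes in vLLM declare it.
    bitsandbytes_stacked_params_mapping = {
        "q_proj": ("qkv_proj", 0),
        "k_proj": ("qkv_proj", 1),
        "v_proj": ("qkv_proj", 2),
        "gate_proj": ("gate_up_proj", 0),
        "up_proj": ("gate_up_proj", 1),
    }


class UnsupportedModel:
    # Stands in for PaliGemmaForConditionalGeneration today: no mapping declared.
    pass


def check_bnb_support(model) -> None:
    if not hasattr(model, "bitsandbytes_stacked_params_mapping"):
        raise AttributeError(
            f"Model {type(model).__name__} does not support "
            "BitsAndBytes quantization yet.")


check_bnb_support(SupportedModel())        # passes
try:
    check_bnb_support(UnsupportedModel())  # raises, like the traceback above
except AttributeError as err:
    print(err)
```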
