Your current environment

I adjusted some code to use 4-bit bitsandbytes quantization based on the model repo, and it tested fine in a Jupyter notebook, but I get the error below when running with vLLM.

https://huggingface.co/google/paligemma2-28b-pt-896
https://huggingface.co/blog/paligemma2
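The notebook test follows the standard 4-bit loading path from the model card; roughly like this (a reconstruction, so the exact arguments are approximate):

```python
# Approximate sketch of the notebook test that works outside vLLM:
# a 4-bit bitsandbytes load following the model card. Arguments are from memory.
import torch
from transformers import (
    AutoProcessor,
    BitsAndBytesConfig,
    PaliGemmaForConditionalGeneration,
)

model_id = "google/paligemma2-28b-pt-896"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # V100s cannot run bfloat16
)

processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Serving the same model through vLLM's OpenAI-compatible server fails at startup: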
(VllmWorkerProcess pid=70) INFO 01-20 01:25:05 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(VllmWorkerProcess pid=70) INFO 01-20 01:25:05 selector.py:129] Using XFormers backend.
ERROR 01-20 01:25:05 engine.py:366] Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla V100-PCIE-32GB GPU has compute capability 7.0. You can use float16 instead by explicitly setting the `dtype` flag in CLI, for example: --dtype=half.
ERROR 01-20 01:25:05 engine.py:366] Traceback (most recent call last):
ERROR 01-20 01:25:05 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 01-20 01:25:05 engine.py:366] engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 01-20 01:25:05 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:25:05 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
ERROR 01-20 01:25:05 engine.py:366] return cls(ipc_path=ipc_path,
ERROR 01-20 01:25:05 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:25:05 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
ERROR 01-20 01:25:05 engine.py:366] self.engine = LLMEngine(*args, **kwargs)
ERROR 01-20 01:25:05 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:25:05 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
ERROR 01-20 01:25:05 engine.py:366] self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 01-20 01:25:05 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:25:05 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
ERROR 01-20 01:25:05 engine.py:366] super().__init__(*args, **kwargs)
ERROR 01-20 01:25:05 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
ERROR 01-20 01:25:05 engine.py:366] self._init_executor()
ERROR 01-20 01:25:05 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 82, in _init_executor
ERROR 01-20 01:25:05 engine.py:366] self._run_workers("init_device")
ERROR 01-20 01:25:05 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 157, in _run_workers
ERROR 01-20 01:25:05 engine.py:366] driver_worker_output = driver_worker_method(*args, **kwargs)
ERROR 01-20 01:25:05 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:25:05 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 140, in init_device
ERROR 01-20 01:25:05 engine.py:366] _check_if_gpu_supports_dtype(self.model_config.dtype)
ERROR 01-20 01:25:05 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 479, in _check_if_gpu_supports_dtype
ERROR 01-20 01:25:05 engine.py:366] raise ValueError(
ERROR 01-20 01:25:05 engine.py:366] ValueError: Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla V100-PCIE-32GB GPU has compute capability 7.0. You can use float16 instead by explicitly setting the `dtype` flag in CLI, for example: --dtype=half.
Process SpawnProcess-1:
ERROR 01-20 01:25:05 multiproc_worker_utils.py:123] Worker VllmWorkerProcess pid 70 died, exit code: -15
INFO 01-20 01:25:05 multiproc_worker_utils.py:127] Killing local vLLM worker processes
Traceback (most recent call last):
File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
raise e
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
return cls(ipc_path=ipc_path,
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
self.engine = LLMEngine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
self.model_executor = executor_class(vllm_config=vllm_config, )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
super().__init__(*args, **kwargs)
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
self._init_executor()
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 82, in _init_executor
self._run_workers("init_device")
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 157, in _run_workers
driver_worker_output = driver_worker_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 140, in init_device
_check_if_gpu_supports_dtype(self.model_config.dtype)
File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 479, in _check_if_gpu_supports_dtype
raise ValueError(
ValueError: Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla V100-PCIE-32GB GPU has compute capability 7.0. You can use float16 instead by explicitly setting the `dtype` flag in CLI, for example: --dtype=half.
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 774, in <module>
uvloop.run(run_server(args))
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 740, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 118, in build_async_engine_clie
nt
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 223, in build_async_engine_clie
nt_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
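The first failure is just the V100 dtype limitation: the model config defaults to bfloat16, and compute capability 7.0 GPUs cannot run bfloat16, so vLLM has to be told to cast to float16 (`--dtype half` on the server command line). A minimal sketch of the equivalent through the offline Python API (hypothetical, not my actual server command; the tensor-parallel size is an assumption):

```python
# Hypothetical sketch: the Python-API equivalent of passing --dtype=half
# to the OpenAI-compatible server (my actual run uses the server entrypoint).
from vllm import LLM

llm = LLM(
    model="google/paligemma2-28b-pt-896",
    dtype="half",            # V100 (compute capability 7.0) has no bfloat16 support
    tensor_parallel_size=2,  # assumption: spread across two V100-32GB GPUs
)
```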
PS: I added --dtype half and got another error:
INFO 01-20 01:42:15 selector.py:129] Using XFormers backend.
(VllmWorkerProcess pid=70) INFO 01-20 01:42:15 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(VllmWorkerProcess pid=70) INFO 01-20 01:42:15 selector.py:129] Using XFormers backend.
WARNING 01-20 01:42:15 xformers.py:387] XFormers does not support logits soft cap. Outputs may be slightly off.
(VllmWorkerProcess pid=70) WARNING 01-20 01:42:15 xformers.py:387] XFormers does not support logits soft cap. Outputs may be slightly off.
ERROR 01-20 01:42:16 engine.py:366] Model PaliGemmaForConditionalGeneration does not support BitsAndBytes quantization yet.
ERROR 01-20 01:42:16 engine.py:366] Traceback (most recent call last):
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 01-20 01:42:16 engine.py:366] engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 01-20 01:42:16 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
ERROR 01-20 01:42:16 engine.py:366] return cls(ipc_path=ipc_path,
ERROR 01-20 01:42:16 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
ERROR 01-20 01:42:16 engine.py:366] self.engine = LLMEngine(*args, **kwargs)
ERROR 01-20 01:42:16 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
ERROR 01-20 01:42:16 engine.py:366] self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 01-20 01:42:16 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
ERROR 01-20 01:42:16 engine.py:366] super().__init__(*args, **kwargs)
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
ERROR 01-20 01:42:16 engine.py:366] self._init_executor()
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 83, in _init_executor
ERROR 01-20 01:42:16 engine.py:366] self._run_workers("load_model",
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 157, in _run_workers
ERROR 01-20 01:42:16 engine.py:366] driver_worker_output = driver_worker_method(*args, **kwargs)
ERROR 01-20 01:42:16 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 155, in load_model
ERROR 01-20 01:42:16 engine.py:366] self.model_runner.load_model()
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1096, in load_model
ERROR 01-20 01:42:16 engine.py:366] self.model = get_model(vllm_config=self.vllm_config)
ERROR 01-20 01:42:16 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
ERROR 01-20 01:42:16 engine.py:366] return loader.load_model(vllm_config=vllm_config)
ERROR 01-20 01:42:16 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 1154, in load_model
ERROR 01-20 01:42:16 engine.py:366] self._load_weights(model_config, model)
ERROR 01-20 01:42:16 engine.py:366] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 1012, in _load_weights
ERROR 01-20 01:42:16 engine.py:366] raise AttributeError(
ERROR 01-20 01:42:16 engine.py:366] AttributeError: Model PaliGemmaForConditionalGeneration does not support BitsAndBytes quantization yet.
Process SpawnProcess-1:
ERROR 01-20 01:42:16 multiproc_worker_utils.py:123] Worker VllmWorkerProcess pid 70 died, exit code: -15
INFO 01-20 01:42:16 multiproc_worker_utils.py:127] Killing local vLLM worker processes
Traceback (most recent call last):
File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
raise e
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
return cls(ipc_path=ipc_path,
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
self.engine = LLMEngine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
self.model_executor = executor_class(vllm_config=vllm_config, )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
super().__init__(*args, **kwargs)
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
self._init_executor()
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 83, in _init_executor
self._run_workers("load_model",
File "/usr/local/lib/python3.12/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 157, in _run_workers
driver_worker_output = driver_worker_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 155, in load_model
self.model_runner.load_model()
File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1096, in load_model
self.model = get_model(vllm_config=self.vllm_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
return loader.load_model(vllm_config=vllm_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 1154, in load_model
self._load_weights(model_config, model)
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 1012, in _load_weights
raise AttributeError(
AttributeError: Model PaliGemmaForConditionalGeneration does not support BitsAndBytes quantization yet.
[rank0]:[W120 01:42:17.503600197 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
Task exception was never retrieved
future: <Task finished name='Task-2' coro=<MQLLMEngineClient.run_output_handler_loop() done, defined at /usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/client.py:178> exception=ZMQError('Operation not supported')>
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/client.py", line 184, in run_output_handler_loop
while await self.output_socket.poll(timeout=VLLM_RPC_TIMEOUT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/zmq/_future.py", line 400, in poll
raise _zmq.ZMQError(_zmq.ENOTSUP)
zmq.error.ZMQError: Operation not supported
Task exception was never retrieved
future: <Task finished name='Task-3' coro=<MQLLMEngineClient.run_output_handler_loop() done, defined at /usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/client.py:178> exception=ZMQError('Operation not supported')>
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/client.py", line 184, in run_output_handler_loop
while await self.output_socket.poll(timeout=VLLM_RPC_TIMEOUT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/zmq/_future.py", line 400, in poll
raise _zmq.ZMQError(_zmq.ENOTSUP)
zmq.error.ZMQError: Operation not supported
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 774, in <module>
uvloop.run(run_server(args))
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 740, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 118, in build_async_engine_clie
nt
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 223, in build_async_engine_clie
nt_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
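So with the dtype issue out of the way, the actual blocker is the in-flight bitsandbytes quantization. What I am asking vLLM to do is roughly the equivalent of this (hypothetical sketch via the offline API; my actual run passes the corresponding flags to the OpenAI-compatible server, and the tensor-parallel size is an assumption):

```python
# Hypothetical sketch of the configuration that triggers the AttributeError:
# in-flight bitsandbytes quantization of a PaliGemma 2 checkpoint.
from vllm import LLM

llm = LLM(
    model="google/paligemma2-28b-pt-896",
    dtype="half",
    quantization="bitsandbytes",  # rejected for PaliGemmaForConditionalGeneration
    load_format="bitsandbytes",
    tensor_parallel_size=2,       # assumption: two V100-32GB GPUs
)
```

Is bitsandbytes simply not supported for PaliGemmaForConditionalGeneration in vLLM yet, and if so, is serving the unquantized weights in float16 the only option for now?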
How would you like to use vllm
I want to run inference of [google/paligemma2-28b-pt-896](https://huggingface.co/google/paligemma2-28b-pt-896), ideally with 4-bit bitsandbytes quantization. I don't know how to integrate it with vLLM.
Before submitting a new issue...
Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.