[Bug]: vllm serve --config.yaml - Order of arguments matters? #8947
Comments
What are you using to manage your Python environment?

Hi DarkLight, thanks for the help. So my current actual command is:

If I use your config, I get the following error:

I am using poetry and updated my vllm version today.
I see what you mean now, yeah the config parsing could definitely use some improvement...
Can you try wrapping

Hi @FloWsnr, could you please try my PR? @DarkLight1337 Could you please review my PR when you get a chance? 😊
Hi guys, thanks for helping!
Your PR seems to work for my use case. As you discussed in the PR, some more work on the argument parsing might be needed in the future to provide an elegant solution.
I think it might be a problem with the released version, since the content of `_version.py` is:

```python
# _version.py
__version__ = version = '0.6.1.dev238+ge2c6e0a82'
__version_tuple__ = version_tuple = (0, 6, 1, 'dev238', 'ge2c6e0a82')
```

Should I open a new issue to track this separately?
v0.6.2 (release) broke
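As an aside that is not part of the original thread, one quick way to see where such a version mismatch comes from is to compare the version string baked into the module with the installed-distribution metadata that pip/poetry records, for example with a snippet like this:

```python
# Sketch of a version sanity check (not from the original report).
# Compares the module's own version string with what the installed
# distribution metadata (the thing pip/poetry manages) claims.
import importlib.metadata

import vllm

print("vllm.__version__        :", vllm.__version__)
print("importlib.metadata says :", importlib.metadata.version("vllm"))
```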
I met the same issue.
Server command:

@youqugit It seems not to be an argument parsing issue of vLLM; please double-check your
Your current environment
The output of `python collect_env.py`
Model Input Dumps
No response
🐛 Describe the bug
When serving a vLLM server with `vllm serve path/to/model --config path/to/config.yaml`, the position of the argument `served-model-name` inside the config seems to be crucial for running the server successfully.

P.S. Why is `collect_env.py` showing `vLLM Version: 0.6.1.dev238+ge2c6e0a82`? I definitely used vllm==0.6.2 and my pip shows the same.
Config that works flawlessly:
Here, the server runs and I can call the model using the name "MyModel".
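Neither of the two config files was captured in this report, so the following is only an illustrative sketch of what a config of this shape could look like. Apart from `served-model-name: MyModel`, which the report confirms, every key and value below is an assumption; the keys simply mirror `vllm serve` CLI flags without the leading dashes.

```yaml
# Hypothetical example, not the reporter's actual config.yaml.
host: 0.0.0.0
port: 8000
served-model-name: MyModel
```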
Config that does not work:
With the latter config, I get the following error:
`vllm serve: error: the following arguments are required: model_tag`
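For context on why ordering could matter at all, the sketch below is not vLLM's actual parser; it is a minimal argparse reproduction of one plausible mechanism, assuming that the options expanded from the config can end up ahead of the `model_tag` positional and that `--served-model-name` accepts a variable number of values.

```python
# Minimal sketch (not vLLM's parser): a variable-arity option placed ahead of
# a positional can swallow it, producing the "model_tag is required" error.
import argparse

parser = argparse.ArgumentParser(prog="vllm serve")
parser.add_argument("model_tag")
# Assumption for illustration: --served-model-name takes one or more values.
parser.add_argument("--served-model-name", nargs="+")

# Positional first: parses as expected.
print(parser.parse_args(["path/to/model", "--served-model-name", "MyModel"]))
# Namespace(model_tag='path/to/model', served_model_name=['MyModel'])

# Option first: the greedy nargs="+" also consumes "path/to/model", so argparse
# exits with "vllm serve: error: the following arguments are required: model_tag".
parser.parse_args(["--served-model-name", "MyModel", "path/to/model"])
```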