My vLLM version is 0.5.0.post1, and I want to enable function calling for the Qwen2-7B-Instruct model. Following issue #5649, I ran:

python -m vllm.entrypoints.openai.api_server --model /home/asus/autodl-tmp/qwen/Qwen2-7B-Instruct --tool-use-prompt-template /home/asus/autodl-tmp/examples/chatml.jinja --enable-api-tools --enable-auto-tool-choice

But it fails with:

api_server.py: error: unrecognized arguments: --tool-use-prompt-template --enable-api-tools --enable-auto-tool-choice

#5649 is a draft pull request: a work in progress that has not been merged into vLLM's codebase. Any capabilities and features in that branch, finished or not, will not be available in a vLLM release unless and until the pull request is approved and merged from the Constellate AI fork into the vLLM project's codebase.

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
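As noted above, those three flags come from the unmerged draft, so the released 0.5.0.post1 argument parser simply does not recognize them. You can confirm which arguments your installed build actually accepts by printing the server's built-in help:

python -m vllm.entrypoints.openai.api_server --help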
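Until server-side tool support lands, one workaround against a stock 0.5.0.post1 server is to do the tool handling client-side: describe the tool in the prompt and parse the model's reply yourself. The sketch below is only an illustration, not code from #5649; the endpoint URL, port, and the get_weather tool are assumptions.

import json
from openai import OpenAI

# Stock vLLM OpenAI-compatible server; base URL and port are assumptions.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Hypothetical tool, described in the system prompt since the server
# itself has no tool-calling support in this version.
system = (
    "You may call the tool get_weather(city: str). To call it, reply "
    'with JSON only: {"tool": "get_weather", "args": {"city": "..."}}'
)

resp = client.chat.completions.create(
    model="/home/asus/autodl-tmp/qwen/Qwen2-7B-Instruct",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "What's the weather in Beijing?"},
    ],
)

reply = resp.choices[0].message.content or ""
try:
    call = json.loads(reply)  # model decided to call the tool
    print("tool call:", call["tool"], call["args"])
except json.JSONDecodeError:
    print("plain answer:", reply)  # model answered directly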