
[Frontend][Feature] support tool calling for internlm/internlm2_5-7b-chat model #8405

Merged
merged 25 commits on Oct 4, 2024

Conversation

sydnash
Contributor

@sydnash sydnash commented Sep 12, 2024

  1. Support tool calling for the internlm/internlm2_5-7b-chat model.
  2. Add a ToolParserManager to manage the tool parsers.
  3. Add a command-line option for specifying a custom tool parser, which can then be selected via --tool-call-parser.
  4. Add a parallel-test skip config for models that do not support parallel tool calls.
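The parser registry described in point 2 might look roughly like this. This is a minimal sketch: the class name ToolParserManager comes from the PR, but the decorator and method names shown here are illustrative assumptions, not the actual vLLM implementation.

```python
# Minimal sketch of a tool-parser registry (illustrative; not the actual
# vLLM implementation). Parser classes register under a name, and the
# --tool-call-parser flag selects one of them at startup.

class ToolParserManager:
    tool_parsers: dict = {}  # maps parser name -> parser class

    @classmethod
    def register_module(cls, name):
        """Decorator that registers a parser class under `name`."""
        def wrap(parser_cls):
            cls.tool_parsers[name] = parser_cls
            return parser_cls
        return wrap

    @classmethod
    def get_tool_parser(cls, name):
        """Look up a registered parser class by name."""
        try:
            return cls.tool_parsers[name]
        except KeyError as e:
            raise KeyError(f"unknown tool parser: {name}") from e


@ToolParserManager.register_module("internlm")
class InternLM2ToolParser:
    pass


parser_cls = ToolParserManager.get_tool_parser("internlm")
```

A plugin file loaded via --tool-parser-plugin would simply import this manager and apply the same decorator to its own classes.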

PR Checklist (Click to Expand)

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Model] for adding a new model or improving an existing model. Model name should appear in the title.
  • [Frontend] for changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
  • [Kernel] for changes affecting CUDA kernels or other compute kernels.
  • [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
  • [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • We adhere to the Google Python style guide and Google C++ style guide.
  • Pass all linter checks. Please use format.sh to format your code.
  • The code needs to be well-documented to ensure future contributors can easily understand it.
  • Include sufficient tests to ensure the project stays correct and robust. This includes both unit tests and integration tests.
  • Please add documentation to docs/source/ if the PR modifies user-facing behaviors of vLLM. This helps vLLM users understand and utilize the new features or changes.

Adding or changing kernels

Each custom kernel needs a schema and one or more implementations to be registered with PyTorch.

  • Make sure custom ops are registered following PyTorch guidelines: Custom C++ and CUDA Operators and The Custom Operators Manual
  • Custom operations that return Tensors require meta-functions. Meta-functions should be implemented and registered in python so that dynamic dims can be handled automatically. See above documents for a description of meta-functions.
  • Use torch.library.opcheck() to test the function registration and meta-function for any registered ops. See tests/kernels for examples.
  • When changing the C++ signature of an existing op, the schema must be updated to reflect the changes.
  • If a new custom type is needed, see the following document: Custom Class Support in PT2.

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not review the PR.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient, and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

  • After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability.
  • After the PR is assigned, the reviewer will provide status updates every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
  • After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.
  • Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@DarkLight1337
Member

DarkLight1337 commented Sep 12, 2024

@K-Mistele can you take a look at this and see if the changes look good to you? Since you mentioned that you will work on Llama3.1 tool parsing, it would be great to have an abstraction that works for your case as well.

@DarkLight1337 DarkLight1337 requested a review from mgoin September 12, 2024 08:39
@K-Mistele
Contributor

@K-Mistele can you take a look at this and see if the changes look good to you? Since you mentioned that you will work on Llama3.1 tool parsing, it would be great to have an abstraction that works for your case as well.

Thanks for the ping! Checking now.

Contributor

@K-Mistele K-Mistele left a comment


Included a couple thoughts.

Generally,

  1. Can you include documentation about the newly supported model(s) in docs/source/serving/openai_compatible_server.md?
  2. It would probably be good to still have the --tool-call-parser flag take a list of explicitly supported values for the sake of clarity and understandability, which can be superseded by the --tool-parser-plugin flag. This way, someone can easily understand which parsers are available out of the box.
  3. Documentation on the implementation of tool parser plugins would be nice to have.
  4. The tests seem to have been unnecessarily broadened with respect to the "state" option of the get_current_weather tool. If InternLM struggles to format this tool correctly, that should be fixed in the system prompt or chat template, instead of making the tests less robust for all models.
  5. The streaming implementation needs a fix with type="function" as noted, and the streaming parser is not really implemented correctly.
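For context on point 5: in OpenAI-style streaming, the first chunk of a tool call must carry the call's id and type="function", while later chunks only append argument fragments. A minimal sketch of assembling such deltas, using plain dicts whose shapes are illustrative assumptions (not vLLM's actual delta classes):

```python
# Illustrative sketch of assembling OpenAI-style streaming tool-call deltas.
# The first delta for a call must include `id` and `type: "function"`; later
# deltas only append to `function.arguments`. The dict shapes here are
# assumptions for illustration, not vLLM's actual classes.

def assemble_tool_calls(deltas):
    calls = {}
    for d in deltas:
        idx = d["index"]
        if idx not in calls:
            # First chunk for this call: id and type are required here.
            calls[idx] = {
                "id": d["id"],
                "type": d["type"],  # must be "function"
                "function": {"name": d["function"].get("name", ""),
                             "arguments": ""},
            }
        # Every chunk may contribute a fragment of the JSON arguments.
        calls[idx]["function"]["arguments"] += d["function"].get("arguments", "")
    return [calls[i] for i in sorted(calls)]


deltas = [
    {"index": 0, "id": "call_1", "type": "function",
     "function": {"name": "get_current_weather", "arguments": '{"city": '}},
    {"index": 0, "function": {"arguments": '"San Francisco"}'}},
]
result = assemble_tool_calls(deltas)
```

A streaming parser that omits type="function" on the first chunk produces deltas that strict OpenAI clients reject, which is the kind of bug point 5 refers to.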

examples/tool_chat_template_internlm2_tool.jinja
tests/tool_use/test_tool_calls.py
tests/tool_use/test_tool_calls.py
vllm/entrypoints/openai/cli_args.py
@@ -174,13 +174,21 @@ def make_arg_parser(parser: FlexibleArgumentParser) -> FlexibleArgumentParser:
parser.add_argument(
"--tool-call-parser",
type=str,
choices=["mistral", "hermes"],
Contributor


I think we still want to specify the choices of ["mistral", "hermes", "internlm2_5"], but make this optional in the event that --enable-auto-tool-choice is called with --tool-parser-plugin.

It would be good for people to know which tool call parsers are available by default, and this makes sure that the expected values get into the auto-generated documentation.

Contributor Author

@sydnash sydnash Sep 13, 2024


If we specify choices in add_argument, the user cannot pass a --tool-call-parser value that was registered via --tool-parser-plugin.

Maybe we can put the default choices into the help information instead.

Contributor


I think what I'm trying to say is that you could keep the choices of ["mistral", "hermes", "internlm2_5"] and then do something like this in serving_chat.py:

Current state:

if self.enable_auto_tools:
    try:
        self.tool_parser = ToolParserManager.get_tool_parser(tool_parser)
    except Exception as e:
        raise TypeError(
            "Error: --enable-auto-tool-choice requires "
            f"tool_parser:'{tool_parser}' which has not been registered") from e

Possible changes:

# if a plugin is not specified, we can do this already
if self.enable_auto_tools and not self.tool_parser_plugin:
    plugin_name = tool_parser  # one of the options from the CLI argument, e.g. hermes or mistral

# if a plugin is specified - this may require some refactoring to get the tool parser plugin loaded in serving chat
elif self.enable_auto_tools and self.tool_parser_plugin:
    # get the name of the plugin loaded from `--tool-parser-plugin`
    plugin_name = get_plugin_name_somehow_from_loaded_plugin()

# handle additional cases here
try:
    self.tool_parser = ToolParserManager.get_tool_parser(plugin_name)
except Exception as e:
    raise TypeError("You must specify a valid value for --tool-call-parser "
                    "OR a valid tool parser plugin") from e

Contributor Author

@sydnash sydnash Sep 13, 2024


But in my design, a plugin can register any number of tool parsers into vLLM, and the user can use --tool-call-parser to specify the one they want to use, just like the default tool parsers written into vLLM.

I added some documentation in docs/source/serving/openai_compatible_server.md; maybe you can take a look at that.

Contributor


Ohhhh, I see. Hmm. I'm not sure what the best pattern would be for the arguments here, then. @DarkLight1337 @mgoin do y'all have any thoughts?

Contributor Author


Maybe we can use metavar instead of choices to display help information.

    valid_tool_parsers = ["mistral", "hermes", "internlm2", "internlm2_5"]
    parser.add_argument(
        "--tool-call-parser",
        type=str,
        metavar="{" + ",".join(valid_tool_parsers) + "} or name registered in "
        "--tool-parser-plugin",
        default=None,
        help="Select the tool call parser depending on the model that you're "
        "using. This is used to parse the model-generated tool call into "
        "OpenAI API format. Required for --enable-auto-tool-choice.")

the help will look like this:

--tool-call-parser {mistral,hermes,internlm2,internlm2_5} or name registered in --tool-parser-plugin
                        Select the tool call parser depending on the model
                        that you're using. This is used to parse the model-
                        generated tool call into OpenAI API format. Required
                        for --enable-auto-tool-choice.
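The difference between choices and metavar can be seen in a small standalone sketch (illustrative; unrelated to vLLM's actual parser setup): choices rejects unlisted values at parse time, while metavar only affects the help text, so plugin-registered names still pass.

```python
import argparse

# With `choices`, argparse rejects any value not in the list at parse time.
strict = argparse.ArgumentParser()
strict.add_argument("--tool-call-parser", choices=["mistral", "hermes"])

# With `metavar`, the listed names appear only in the help text;
# any value is accepted, so plugin-registered parsers still work.
lenient = argparse.ArgumentParser()
lenient.add_argument(
    "--tool-call-parser",
    metavar="{mistral,hermes} or name registered in --tool-parser-plugin")

# A name that is not in the built-in list parses fine with `metavar`.
args = lenient.parse_args(["--tool-call-parser", "my_plugin_parser"])
```

This is why the value then has to be validated later (e.g. in run_server, as proposed below), after the plugins have been imported.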

We can also move the plugin import and the tool call parser check into run_server, so that an invalid tool call parser name is caught quickly (before the model loads).

    if args.tool_parser_plugin and len(args.tool_parser_plugin) > 3:
        ToolParserManager.import_tool_parser(args.tool_parser_plugin)

    if args.enable_auto_tool_choice:
        if args.tool_call_parser not in ToolParserManager.tool_parsers.keys():
            raise KeyError(
                f"invalid tool call parser: {args.tool_call_parser} "
                f"chose from {{ {','.join(ToolParserManager.tool_parsers.keys())} }}")

The error info looks like this:

Traceback (most recent call last):
  File "/LocalRun/jun.dai/conda/envs/vllm_env/bin/vllm", line 8, in <module>
    sys.exit(main())
  File "/LocalRun/jun.dai/code/github/sydnash/vllm/vllm/scripts.py", line 165, in main
    args.dispatch_function(args)
  File "/LocalRun/jun.dai/code/github/sydnash/vllm/vllm/scripts.py", line 37, in serve
    asyncio.run(run_server(args))
  File "/LocalRun/jun.dai/conda/envs/vllm_env/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/LocalRun/jun.dai/conda/envs/vllm_env/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/LocalRun/jun.dai/code/github/sydnash/vllm/vllm/entrypoints/openai/api_server.py", line 505, in run_server
    raise KeyError(f"invalid tool call parser: {args.tool_call_parser} "
KeyError: 'invalid tool call parser: internlm3 chose from { hermes,internlm2,internlm2_5,mistral,internlm }'

vllm/entrypoints/openai/serving_chat.py
@sydnash sydnash requested a review from K-Mistele September 18, 2024 02:13
@sydnash
Contributor Author

sydnash commented Sep 18, 2024

@khluu It seems that the CI failure was caused by an environment issue. Could you please grant me retry permissions to give it a try?

@DarkLight1337
Member

I have restarted the CI build.

@sydnash
Contributor Author

sydnash commented Sep 23, 2024

Included a couple thoughts.

Hi, these issues have been fixed. Please take some time to review it so that we can move on. @K-Mistele

tests/tool_use/test_tool_calls.py
docs/source/serving/openai_compatible_server.md
Contributor

@K-Mistele K-Mistele left a comment


LGTM :)

@sydnash
Contributor Author

sydnash commented Sep 26, 2024

Hello, I have completed the review with @K-Mistele. What should I do next? @DarkLight1337 @mgoin

@mgoin mgoin added the ready label (ONLY add when PR is ready to merge/full CI is needed) Sep 26, 2024
@sydnash
Contributor Author

sydnash commented Sep 29, 2024

@K-Mistele hi, can you help me look at this?
After merging from main, Mistral's test_chat_completion_with_tools case failed. It seems the model output is not correct.
input:

INFO 09-29 07:59:31 logger.py:36] Received request chat-431bfe95c7b34771b2b2a50d71c8fb93: prompt: '<s>[INST] Hi! How are you?[/INST] I\'m doing great! How can I assist you?</s>[AVAILABLE_TOOLS] [{"type": "function", "function": {"name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"city": {"type": "string", "description": "The city to find the weather for, e.g. \'San Francisco\'"}, "state": {"type": "string", "description": "must the two-letter abbreviation for the state that the city is in, e.g. \'CA\' which would mean \'California\'"}, "unit": {"type": "string", "description": "The unit to fetch the temperature in", "enum": ["celsius", "fahrenheit"]}}}}}][/AVAILABLE_TOOLS][INST] Can you tell me a joke please?[/INST]', params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=150, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [1, 3, 16127, 29576, 2370, 1228, 1136, 29572, 4, 1083, 29510, 29487, 3316, 2366, 29576, 2370, 1309, 1083, 6799, 1136, 29572, 2, 6, 1501, 7567, 1891, 2032, 1113, 3396, 1316, 1113, 3396, 2032, 10598, 1629, 2032, 1113, 1295, 29498, 3790, 29498, 1537, 1991, 1316, 1113, 7286, 2032, 1113, 2226, 1040, 2636, 8854, 1065, 1032, 2846, 5491, 1316, 1113, 12206, 2032, 10598, 1891, 2032, 1113, 3582, 1316, 1113, 11491, 2032, 10598, 19141, 2032, 10598, 1891, 2032, 1113, 2195, 1316, 1113, 7286, 2032, 1113, 1782, 3758, 1066, 2068, 1040, 8854, 1122, 29493, 1085, 29491, 29489, 29491, 1232, 18672, 10454, 29510, 8474, 1113, 2795, 2032, 10598, 1891, 2032, 1113, 2195, 1316, 1113, 7286, 2032, 1113, 20727, 
1040, 1757, 29501, 16635, 1302, 2920, 5987, 1120, 1122, 1040, 2433, 1137, 1040, 3758, 1117, 1065, 29493, 1085, 29491, 29489, 29491, 1232, 5962, 29510, 1458, 1450, 2840, 1232, 8386, 6767, 29510, 8474, 1113, 6074, 2032, 10598, 1891, 2032, 1113, 2195, 1316, 1113, 7286, 2032, 1113, 1782, 5796, 1066, 12470, 1040, 8409, 1065, 1316, 1113, 10825, 2032, 8135, 29485, 1958, 3938, 1316, 1113, 29490, 19425, 13075, 3010, 1743, 1743, 10925, 7, 3, 3186, 1136, 2680, 1296, 1032, 13783, 5433, 29572, 4], lora_request: None, prompt_adapter_request: None.

output:

model_output=[TOOL_CALLS] [{"name": "get_current_weather", "arguments": {"city": "San Francisco", "state": "CA", "unit": "fahrenheit"}}]

[{"name": "tell_joke"}]

I'm sorry, I don't have a function to tell jokes yet. But here's a classic one: Why don't scientists trust atoms? Because they make up everything! =end

The extracted info looks like this:

tool call info = tools_called=True tool_calls=[ToolCall(id='chatcmpl-tool-5607719ad00441eea537f35ceada4a5c', type='function', function=FunctionCall(name='get_current_weather', arguments='{"city": "San Francisco", "state": "CA", "unit": "fahrenheit"}'))] content=None = end

@K-Mistele
Contributor

@K-Mistele hi, can you help me look at this? After merging from main, Mistral's test_chat_completion_with_tools case failed. It seems the model output is not correct.

Yes, this is a known issue. Someone messed with the tokenizers for Mistral models on the main branch, which destabilized Mistral tool calling with the default config.

Seems related to #8301 and #8515.

Can you try setting the CLI args for Mistral in the tests' utils.py: --tokenizer-mode mistral --load-format mistral --config-format mistral, and see if that works? Alternatively, you may have to explicitly disable mistral for all of these settings and set them back to the default. I'm not sure which will work, but one of them should.

@sydnash
Contributor Author

sydnash commented Sep 29, 2024

Can you try setting the CLI args for Mistral in the tests' utils.py: --tokenizer-mode mistral --load-format mistral --config-format mistral, and see if that works?

After adding these parameters, it crashed in mistral_tool_parser.py.

  File "/LocalRun/jun.dai/code/github/sydnash/vllm/vllm/entrypoints/openai/tool_parsers/mistral_tool_parser.py", line 51, in __init__
    self.bot_token_id = self.model_tokenizer.vocab[self.bot_token]
TypeError: 'method' object is not subscriptable

It seems self.model_tokenizer.vocab is a method, not a dict, after setting these parameters.

@sydnash
Contributor Author

sydnash commented Sep 29, 2024

@K-Mistele hi, can help me to see this. After merge from main, the mistral's test_chat_completion_with_tools case failed. It seems the module output is not correct. input:

INFO 09-29 07:59:31 logger.py:36] Received request chat-431bfe95c7b34771b2b2a50d71c8fb93: prompt: '<s>[INST] Hi! How are you?[/INST] I\'m doing great! How can I assist you?</s>[AVAILABLE_TOOLS] [{"type": "function", "function": {"name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"city": {"type": "string", "description": "The city to find the weather for, e.g. \'San Francisco\'"}, "state": {"type": "string", "description": "must the two-letter abbreviation for the state that the city is in, e.g. \'CA\' which would mean \'California\'"}, "unit": {"type": "string", "description": "The unit to fetch the temperature in", "enum": ["celsius", "fahrenheit"]}}}}}][/AVAILABLE_TOOLS][INST] Can you tell me a joke please?[/INST]', params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=150, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [1, 3, 16127, 29576, 2370, 1228, 1136, 29572, 4, 1083, 29510, 29487, 3316, 2366, 29576, 2370, 1309, 1083, 6799, 1136, 29572, 2, 6, 1501, 7567, 1891, 2032, 1113, 3396, 1316, 1113, 3396, 2032, 10598, 1629, 2032, 1113, 1295, 29498, 3790, 29498, 1537, 1991, 1316, 1113, 7286, 2032, 1113, 2226, 1040, 2636, 8854, 1065, 1032, 2846, 5491, 1316, 1113, 12206, 2032, 10598, 1891, 2032, 1113, 3582, 1316, 1113, 11491, 2032, 10598, 19141, 2032, 10598, 1891, 2032, 1113, 2195, 1316, 1113, 7286, 2032, 1113, 1782, 3758, 1066, 2068, 1040, 8854, 1122, 29493, 1085, 29491, 29489, 29491, 1232, 18672, 10454, 29510, 8474, 1113, 2795, 2032, 10598, 1891, 2032, 1113, 2195, 1316, 1113, 7286, 2032, 1113, 20727, 
1040, 1757, 29501, 16635, 1302, 2920, 5987, 1120, 1122, 1040, 2433, 1137, 1040, 3758, 1117, 1065, 29493, 1085, 29491, 29489, 29491, 1232, 5962, 29510, 1458, 1450, 2840, 1232, 8386, 6767, 29510, 8474, 1113, 6074, 2032, 10598, 1891, 2032, 1113, 2195, 1316, 1113, 7286, 2032, 1113, 1782, 5796, 1066, 12470, 1040, 8409, 1065, 1316, 1113, 10825, 2032, 8135, 29485, 1958, 3938, 1316, 1113, 29490, 19425, 13075, 3010, 1743, 1743, 10925, 7, 3, 3186, 1136, 2680, 1296, 1032, 13783, 5433, 29572, 4], lora_request: None, prompt_adapter_request: None.

output:

model_output=[TOOL_CALLS] [{"name": "get_current_weather", "arguments": {"city": "San Francisco", "state": "CA", "unit": "fahrenheit"}}]

[{"name": "tell_joke"}]

I'm sorry, I don't have a function to tell jokes yet. But here's a classic one: Why don't scientists trust atoms? Because they make up everything! =end

the extract info like this:

tool call info = tools_called=True tool_calls=[ToolCall(id='chatcmpl-tool-5607719ad00441eea537f35ceada4a5c', type='function', function=FunctionCall(name='get_current_weather', arguments='{"city": "San Francisco", "state": "CA", "unit": "fahrenheit"}'))] content=None = end

Yes, this is a known issue. Someone messed with tokenizers for Mistral models on main branch, which destabilitized Mistral tool calling with the default config.
Seems related to #8301 and #8515
Can you try setting the CLI arg for mistral in the utils.py for tests: --tokenizer-mode mistral --load-format mistral --config-format mistral and see if that works? Alternatively, you may have to explicitly disable mistral for all of these things and set them back to the default. I'm not sure which will work, but one of them should.

After add these parameters, it crashed in mistral_tool_parser.py.

  File "/LocalRun/jun.dai/code/github/sydnash/vllm/vllm/entrypoints/openai/tool_parsers/mistral_tool_parser.py", line 51, in __init__
    self.bot_token_id = self.model_tokenizer.vocab[self.bot_token]
TypeError: 'method' object is not subscriptable

It seems self.model_tokenizer.vocab is a method, not a dict, after setting these parameters.

I fixed it in this commit 1db530d.

  1. I added a vocab property to MistralTokenizer in the vllm/transformers_utils/tokenizers/mistral.py file. @DarkLight1337 Can you help me check if this is OK?
  2. Removed the modification of self.model_tokenizer in vllm/entrypoints/openai/tool_parsers/mistral_tool_parser.py.

@DarkLight1337
Member

DarkLight1337 commented Sep 29, 2024

I added a vocab property to MistralTokenizer in the vllm/transformers_utils/tokenizers/mistral.py file. @DarkLight1337 Can you help me check if this is OK?
Removed the modification of self.model_tokenizer in vllm/entrypoints/openai/tool_parsers/mistral_tool_parser.py.

Looking at the definition of the tokenizer in HuggingFace (PreTrainedTokenizerBase), there doesn't seem to be a publicly defined property called vocab. Instead, there is a method called get_vocab. To maintain a consistent API, I suggest updating the tool parser to call that method instead.
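A minimal sketch of the suggested change: look up the token id via `get_vocab()` instead of a `vocab` attribute. The tokenizer class below is a hypothetical stand-in used only to illustrate the lookup, not vLLM's MistralTokenizer:

```python
class FakeTokenizer:
    """Stand-in exposing the HF-style get_vocab() method (a method, not a dict)."""

    def get_vocab(self):
        return {"<s>": 1, "[TOOL_CALLS]": 5}


tokenizer = FakeTokenizer()

# Before: tokenizer.vocab["[TOOL_CALLS]"] raised
#   TypeError: 'method' object is not subscriptable
# After: call get_vocab() to obtain the dict first, then index into it
bot_token_id = tokenizer.get_vocab()["[TOOL_CALLS]"]
print(bot_token_id)
```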

@sydnash
Contributor Author

sydnash commented Sep 29, 2024

I added a vocab property to MistralTokenizer in the vllm/transformers_utils/tokenizers/mistral.py file. @DarkLight1337 Can you help me check if this is OK?
Removed the modification of self.model_tokenizer in vllm/entrypoints/openai/tool_parsers/mistral_tool_parser.py.

Looking at the definition of the tokenizer in HuggingFace (PreTrainedTokenizerBase), there doesn't seem to be a publicly defined property called vocab. Instead, there is a method called get_vocab. To maintain a consistent API, I suggest updating the tool parser to call that method instead.

Thanks, I'll change it.

I only found a vocab property in the PreTrainedTokenizerFast class.

@sydnash
Contributor Author

sydnash commented Sep 29, 2024

After adding these CLI flags to select the MistralTokenizer, the chat_template no longer works, as the warning message here indicates.
The prompt generated by apply_chat_template is not correct even if I pass chat_template=chat_template to the apply_chat_template function.
@DarkLight1337 @K-Mistele

def apply_mistral_chat_template(
    tokenizer: MistralTokenizer,
    messages: List[ChatCompletionMessageParam],
    chat_template: Optional[str] = None,
    **kwargs: Any,
) -> List[int]:
    if chat_template is not None:
        logger.warning(
            "'chat_template' cannot be overridden for mistral tokenizer.")

    return tokenizer.apply_chat_template(
        messages=messages,
        **kwargs,
    )

@DarkLight1337
Member

@patrickvonplaten since you were the last one to update the mistral tokenizer, can you investigate this regression?

@sydnash
Contributor Author

sydnash commented Sep 29, 2024

Based on my current testing, there are three issues:

  1. The messages parameter of apply_mistral_chat_template needs to change from request.messages to conversation due to this error:
ERROR 09-29 17:36:44 serving_chat.py:153]   File "/LocalRun/jun.dai/conda/envs/vllm_env/lib/python3.10/site-packages/mistral_common/protocol/instruct/validator.py", line 147, in _validate_assistant_message
ERROR 09-29 17:36:44 serving_chat.py:153]     raise InvalidAssistantMessageException(
ERROR 09-29 17:36:44 serving_chat.py:153] mistral_common.exceptions.InvalidAssistantMessageException: Assistant message must have either content or tool_calls, but not both.
  2. The tool call id must be changed to fit the ^[a-zA-Z0-9]{9}$ regex due to this error:
ERROR 09-29 17:29:25 serving_chat.py:153]   File "/LocalRun/jun.dai/conda/envs/vllm_env/lib/python3.10/site-packages/mistral_common/protocol/instruct/validator.py", line 310, in _validate_tool_call
ERROR 09-29 17:29:25 serving_chat.py:153]     raise InvalidFunctionCallException(
ERROR 09-29 17:29:25 serving_chat.py:153] mistral_common.exceptions.InvalidFunctionCallException: Tool call id was chatcmpl-tool-03e6481b146e408e9523d9c956696295 but must be a-z, A-Z, 0-9, with a length of 9.
  3. The model cannot generate the correct tool call message even after making the above changes.
[{"name": "get_current_weather", "arguments": {"city": "Dallas", "state": "TX", "unit": "fahrenheit"}}, {"name": "get_current_weather", "arguments": {"city": "Orlando", "state": "FL", "unit": "fahrenheit"}}]

The output message has no [TOOL_CALLS], which is used to identify it as a tool call message.
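On the id-format issue: ids accepted by mistral_common's validator must match ^[a-zA-Z0-9]{9}$, so the default `chatcmpl-tool-<uuid>` id fails (hyphens plus wrong length). A hedged sketch of generating a conforming id; the helper name is made up, not an existing vLLM function:

```python
import random
import re
import string

ALPHANUM = string.ascii_letters + string.digits


def make_mistral_tool_call_id() -> str:
    """Generate a 9-char alphanumeric id matching ^[a-zA-Z0-9]{9}$."""
    return "".join(random.choices(ALPHANUM, k=9))


tool_call_id = make_mistral_tool_call_id()
# The default id format "chatcmpl-tool-<uuid>" fails this check, which is
# what raises InvalidFunctionCallException in the traceback above.
assert re.fullmatch(r"[a-zA-Z0-9]{9}", tool_call_id)
print(len(tool_call_id))
```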

@patrickvonplaten
Contributor

@patrickvonplaten since you were the last one to update the mistral tokenizer, can you investigate this regression?

Sorry, I'm not 100% following here - this PR seems to be related to internlm, not MistralTokenizer, no?

If there is a regression for MistralTokenizer, can we maybe open an issue about it? Happy to look into it

@DarkLight1337
Member

Based on my current testing, there are three issues:

  1. The messages parameter of apply_mistral_chat_template needs to change from request.messages to conversation due to this error:
ERROR 09-29 17:36:44 serving_chat.py:153]   File "/LocalRun/jun.dai/conda/envs/vllm_env/lib/python3.10/site-packages/mistral_common/protocol/instruct/validator.py", line 147, in _validate_assistant_message
ERROR 09-29 17:36:44 serving_chat.py:153]     raise InvalidAssistantMessageException(
ERROR 09-29 17:36:44 serving_chat.py:153] mistral_common.exceptions.InvalidAssistantMessageException: Assistant message must have either content or tool_calls, but not both.
  2. The tool call id must be changed to fit the ^[a-zA-Z0-9]{9}$ regex due to this error:
ERROR 09-29 17:29:25 serving_chat.py:153]   File "/LocalRun/jun.dai/conda/envs/vllm_env/lib/python3.10/site-packages/mistral_common/protocol/instruct/validator.py", line 310, in _validate_tool_call
ERROR 09-29 17:29:25 serving_chat.py:153]     raise InvalidFunctionCallException(
ERROR 09-29 17:29:25 serving_chat.py:153] mistral_common.exceptions.InvalidFunctionCallException: Tool call id was chatcmpl-tool-03e6481b146e408e9523d9c956696295 but must be a-z, A-Z, 0-9, with a length of 9.
  3. The model cannot generate the correct tool call message even after making the above changes.
[{"name": "get_current_weather", "arguments": {"city": "Dallas", "state": "TX", "unit": "fahrenheit"}}, {"name": "get_current_weather", "arguments": {"city": "Orlando", "state": "FL", "unit": "fahrenheit"}}]

The output message has no [TOOL_CALLS], which is used to identify it as a tool call message.

@sydnash does this occur without this PR? If so, please open a new issue and ping @patrickvonplaten about it.

@sydnash
Contributor Author

sydnash commented Oct 3, 2024

It only occurs when starting the server with the --tokenizer-mode mistral CLI flag. I don't know whether it's really an issue; it seems the Mistral tool parser and the OpenAI API are not compatible with the MistralTokenizer.
I'll remove it from the test case in this PR.

@sydnash
Contributor Author

sydnash commented Oct 3, 2024

@K-Mistele hi, can you help me take a look at this? After merging from main, Mistral's test_chat_completion_with_tools case failed with the same request and model output shown above; the model output is not correct.

The issue was that I accidentally deleted the Mistral system prompt during the merge.
I think the Mistral tool parser from @K-Mistele's PR does not support the MistralTokenizer.

@sydnash
Contributor Author

sydnash commented Oct 3, 2024

My work on InternLM is complete. Should I open a separate issue to discuss the compatibility of the MistralTokenizer with the OpenAI API implementation? Currently, the Mistral model can work without --tokenizer-mode mistral, but there is a warning at startup. @DarkLight1337 @patrickvonplaten @K-Mistele @mgoin

FutureWarning: It is strongly recommended to run mistral models with `--tokenizer_mode "mistral"` to ensure correct encoding and decoding.

@patrickvonplaten
Contributor

My work on InternLM is complete. Should I open a separate issue to discuss the compatibility of the MistralTokenizer with the OpenAI API implementation? Currently, the Mistral model can work without --tokenizer-mode mistral, but there is a warning at startup. @DarkLight1337 @patrickvonplaten @K-Mistele @mgoin

FutureWarning: It is strongly recommended to run mistral models with `--tokenizer_mode "mistral"` to ensure correct encoding and decoding.

I think the warning is expected :-) It would be super nice if you could open a separate issue about --tokenizer-mode mistral and ping me there.

Member

@DarkLight1337 DarkLight1337 left a comment


Sure, we should work on OpenAI compatibility in another PR. Let's get this merged.

@DarkLight1337 DarkLight1337 merged commit 3dbb215 into vllm-project:main Oct 4, 2024
54 checks passed
liuyanyi pushed a commit to liuyanyi/vllm that referenced this pull request Oct 6, 2024
shajrawi pushed a commit to ROCm/vllm that referenced this pull request Oct 9, 2024
* [Build/CI] Upgrade to gcc 10 in the base build Docker image (vllm-project#8814)

* [Docs] Add README to the build docker image (vllm-project#8825)

* [CI/Build] Fix missing ci dependencies (vllm-project#8834)

* [misc][installation] build from source without compilation (vllm-project#8818)

* [ci] Soft fail Entrypoints, Samplers, LoRA, Decoder-only VLM (vllm-project#8872)

Signed-off-by: kevin <kevin@anyscale.com>

* [Bugfix] Include encoder prompts len to non-stream api usage response (vllm-project#8861)

* [Misc] Change dummy profiling and BOS fallback warns to log once (vllm-project#8820)

* [Bugfix] Fix print_warning_once's line info (vllm-project#8867)

* fix validation: Only set tool_choice `auto` if at least one tool is provided (vllm-project#8568)

* [Bugfix] Fixup advance_step.cu warning (vllm-project#8815)

* [BugFix] Fix test breakages from transformers 4.45 upgrade (vllm-project#8829)

* [Installation] Allow lower versions of FastAPI to maintain Ray 2.9 compatibility (vllm-project#8764)

* [Feature] Add support for Llama 3.1 and 3.2 tool use (vllm-project#8343)

Signed-off-by: Max de Bayser <mbayser@br.ibm.com>

* [Core] rename`PromptInputs` and `inputs` (vllm-project#8876)

* [misc] fix collect env (vllm-project#8894)

* [MISC] Fix invalid escape sequence '\' (vllm-project#8830)

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>

* [Bugfix][VLM] Fix Fuyu batching inference with `max_num_seqs>1` (vllm-project#8892)

* [TPU] Update pallas.py to support trillium (vllm-project#8871)

* [torch.compile] use empty tensor instead of None for profiling (vllm-project#8875)

* [Kernel] AQ AZP 4/4: Integrate asymmetric quantization to linear method (vllm-project#7271)

* [Bugfix] fix for deepseek w4a16 (vllm-project#8906)

Co-authored-by: mgoin <michael@neuralmagic.com>

* [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (vllm-project#8378)

Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>

* [misc][distributed] add VLLM_SKIP_P2P_CHECK flag (vllm-project#8911)

* [Core] Priority-based scheduling in async engine (vllm-project#8850)

* [misc] fix wheel name (vllm-project#8919)

* [Bugfix][Intel] Fix XPU Dockerfile Build (vllm-project#7824)

Signed-off-by: tylertitsworth <tyler.titsworth@intel.com>
Co-authored-by: youkaichao <youkaichao@126.com>

* [Misc] Remove vLLM patch of `BaichuanTokenizer` (vllm-project#8921)

* [Bugfix] Fix code for downloading models from modelscope (vllm-project#8443)

* [Bugfix] Fix PP for Multi-Step (vllm-project#8887)

* [CI/Build] Update models tests & examples (vllm-project#8874)

Co-authored-by: Roger Wang <ywang@roblox.com>

* [Frontend] Make beam search emulator temperature modifiable (vllm-project#8928)

Co-authored-by: Eduard Balzin <nfunctor@yahoo.fr>

* [Bugfix] Support testing prefill throughput with benchmark_serving.py --hf-output-len 1 (vllm-project#8891)

* [doc] organize installation doc and expose per-commit docker (vllm-project#8931)

* [Core] Improve choice of Python multiprocessing method (vllm-project#8823)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: youkaichao <youkaichao@126.com>

* [Bugfix] Block manager v2 with preemption and lookahead slots (vllm-project#8824)

* [Bugfix] Fix Marlin MoE act order when is_k_full == False (vllm-project#8741)

Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

* [CI/Build] Add test decorator for minimum GPU memory (vllm-project#8925)

* [Build/CI] Set FETCHCONTENT_BASE_DIR to one location for better caching (vllm-project#8930)

* [Model] Support Qwen2.5-Math-RM-72B (vllm-project#8896)

* [Model][LoRA]LoRA support added for MiniCPMV2.5 (vllm-project#7199)

* [BugFix] Fix seeded random sampling with encoder-decoder models (vllm-project#8870)

Co-authored-by: Roger Wang <ywang@roblox.com>

* [Misc] Fix typo in BlockSpaceManagerV1 (vllm-project#8944)

* [Frontend] Added support for HF's new `continue_final_message` parameter (vllm-project#8942)

* [Kernel][Model] Varlen prefill + Prefill chunking support for mamba kernels and Jamba model (vllm-project#8533)

* [Model] support input embeddings for qwen2vl (vllm-project#8856)

* [Misc][CI/Build] Include `cv2` via `mistral_common[opencv]`  (vllm-project#8951)

* [Model][LoRA]LoRA support added for MiniCPMV2.6 (vllm-project#8943)

Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Model] Expose InternVL2 max_dynamic_patch as a mm_processor_kwarg (vllm-project#8946)

* [Core] Make scheduling policy settable via EngineArgs (vllm-project#8956)

* [Misc] Adjust max_position_embeddings for LoRA compatibility (vllm-project#8957)

* [ci] Add CODEOWNERS for test directories  (vllm-project#8795)

Signed-off-by: kevin <kevin@anyscale.com>

* [CI][SpecDecode] Fix spec decode tests, use flash attention backend for spec decode CI tests. (vllm-project#8975)

* [Frontend][Core] Move guided decoding params into sampling params (vllm-project#8252)

Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
Co-authored-by: Nick Hill <nickhill@us.ibm.com>

* [CI/Build] Fix machete generated kernel files ordering (vllm-project#8976)

Signed-off-by: kevin <kevin@anyscale.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>

* [torch.compile] fix tensor alias (vllm-project#8982)

* [Misc] add process_weights_after_loading for DummyLoader (vllm-project#8969)

* [Bugfix] Fix Fuyu tensor parallel inference (vllm-project#8986)

* [Bugfix] Fix Token IDs Reference for MiniCPM-V When Images are Provided With No Placeholders (vllm-project#8991)

Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

* [Core] [Frontend] Priority scheduling for embeddings and in the OpenAI-API (vllm-project#8965)

* [Doc] Update list of supported models (vllm-project#8987)

* Update benchmark_serving.py to read and write json-datasets, results in UTF8, for better compatibility with Windows (vllm-project#8997)

* [Spec Decode] (1/2) Remove batch expansion (vllm-project#8839)

* [Core] Combined support for multi-step scheduling, chunked prefill & prefix caching (vllm-project#8804)

Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: Andrew Feldman <afeld2012@gmail.com>

* [Misc] Update Default Image Mapper Error Log (vllm-project#8977)

Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>

* [Core] CUDA Graphs for Multi-Step + Chunked-Prefill (vllm-project#8645)

Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>

* [OpenVINO] Enable GPU support for OpenVINO vLLM backend (vllm-project#8192)

* [Model]  Adding Granite MoE. (vllm-project#8206)

Co-authored-by: Nick Hill <nickhill@us.ibm.com>

* [Doc] Update Granite model docs (vllm-project#9025)

* [Bugfix] example template should not add parallel_tool_prompt if tools is none (vllm-project#9007)

* [Misc] log when using default MoE config (vllm-project#8971)

* [BugFix] Enforce Mistral ToolCall id constraint when using the Mistral tool call parser (vllm-project#9020)

* [Core] Make BlockSpaceManagerV2 the default BlockManager to use. (vllm-project#8678)

* [Frontend] [Neuron] Parse literals out of override-neuron-config (vllm-project#8959)

Co-authored-by: Jerzy Zagorski <jzagorsk@amazon.com>

* [misc] add forward context for attention (vllm-project#9029)

* Fix failing spec decode test (vllm-project#9054)

* [Bugfix] Weight loading fix for OPT model (vllm-project#9042)

Co-authored-by: dvres <dvres@fri.uni-lj.si>

* [Frontend][Feature] support tool calling for internlm/internlm2_5-7b-chat model (vllm-project#8405)

* [CI/Build] Per file CUDA Archs (improve wheel size and dev build times) (vllm-project#8845)

* [Misc] Enable multi-step output streaming by default (vllm-project#9047)

* [Models] Add remaining model PP support (vllm-project#7168)

Signed-off-by: Muralidhar Andoorveedu <muralidhar.andoorveedu@centml.ai>
Signed-off-by: Murali Andoorveedu <muralidhar.andoorveedu@centml.ai>
Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Move registry to its own file (vllm-project#9064)

* [Bugfix] Reshape the dimensions of the input image embeddings in Qwen2VL (vllm-project#9071)

* [Bugfix] Flash attention arches not getting set properly (vllm-project#9062)

* [Model] add a bunch of supported lora modules for mixtral (vllm-project#9008)

Signed-off-by: Prashant Gupta <prashantgupta@us.ibm.com>

* Remove AMD Ray Summit Banner (vllm-project#9075)

* [Hardware][PowerPC] Make oneDNN dependency optional for Power (vllm-project#9039)

Signed-off-by: Varad Ahirwadkar <varad.ahirwadkar1@ibm.com>

* [Core][VLM] Test registration for OOT multimodal models (vllm-project#8717)

Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Adds truncate_prompt_tokens param for embeddings creation (vllm-project#8999)

Signed-off-by: Flavia Beo <flavia.beo@ibm.com>

* [Kernel] Zero point support in fused MarlinMoE kernel + AWQ Fused MoE (vllm-project#8973)

Co-authored-by: Dipika <dipikasikka1@gmail.com>
Co-authored-by: Dipika Sikka <ds3822@columbia.edu>

* [CI] Update performance benchmark: upgrade trt-llm to r24.07, and add SGLang (vllm-project#7412)

* [Misc] Improved prefix cache example (vllm-project#9077)

* [Misc] Add random seed for prefix cache benchmark (vllm-project#9081)

* [Misc] Fix CI lint (vllm-project#9085)

* [Hardware][Neuron] Add on-device sampling support for Neuron (vllm-project#8746)

Co-authored-by: Ashraf Mahgoub <ashymahg@amazon.com>

* [torch.compile] improve allreduce registration (vllm-project#9061)

* [Doc] Update README.md with Ray summit slides (vllm-project#9088)

* [Bugfix] use blockmanagerv1 for encoder-decoder (vllm-project#9084)

Co-authored-by: Roger Wang <ywang@roblox.com>

* [Bugfix] Fixes Phi3v & Ultravox Multimodal EmbeddingInputs (vllm-project#8979)

* [Model] Support Gemma2 embedding model (vllm-project#9004)

* [Bugfix] Deprecate registration of custom configs to huggingface (vllm-project#9083)

* [Bugfix] Fix order of arguments matters in config.yaml (vllm-project#8960)

* [core] use forward context for flash infer (vllm-project#9097)

* [Bugfix] Fix try-catch conditions to import correct Flash Attention Backend in Draft Model (vllm-project#9101)

* [Frontend] API support for beam search (vllm-project#9087)

Co-authored-by: youkaichao <youkaichao@126.com>

* [Misc] Remove user-facing error for removed VLM args (vllm-project#9104)

* [Model] PP support for embedding models and update docs (vllm-project#9090)

Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>

* [Bugfix] fix tool_parser error handling when serve a model not support it (vllm-project#8709)

* [Bugfix] Fix incorrect updates to num_computed_tokens in multi-step scheduling (vllm-project#9038)

Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>

* [Bugfix][Hardware][CPU] Fix CPU model input for decode (vllm-project#9044)

* [BugFix][Core] Fix BlockManagerV2 when Encoder Input is None (vllm-project#9103)

* [core] remove beam search from the core (vllm-project#9105)

* [Model] Explicit interface for vLLM models and support OOT embedding models (vllm-project#9108)

* [Hardware][CPU] Cross-attention and Encoder-Decoder models support on CPU backend (vllm-project#9089)

* [Core] Refactor GGUF parameters packing and forwarding (vllm-project#8859)

* [Model] Support NVLM-D and fix QK Norm in InternViT (vllm-project#9045)

Co-authored-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Doc]: Add deploying_with_k8s guide (vllm-project#8451)

* [CI/Build] Add linting for github actions workflows (vllm-project#7876)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Doc] Include performance benchmark in README (vllm-project#9135)

* [misc] fix comment and variable name (vllm-project#9139)

* Add Slack to README (vllm-project#9137)

* [misc] update utils to support comparing multiple settings (vllm-project#9140)

* [Intel GPU] Fix xpu decode input  (vllm-project#9145)

* [misc] improve ux on readme (vllm-project#9147)

* [Frontend] API support for beam search for MQLLMEngine (vllm-project#9117)

* [Core][Frontend] Add Support for Inference Time mm_processor_kwargs (vllm-project#9131)

Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

* Factor out common weight loading code

* Fix EAGLE model loading

* [Frontend] Add Early Validation For Chat Template / Tool Call Parser (vllm-project#9151)

Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

* Improve efficiency

* Rename

* Update LLaVA-NeXT-Video

* [CI/Build] Add examples folder into Docker image so that we can leverage the templates*.jinja when serving models (vllm-project#8758)

Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>

* [Bugfix] fix OpenAI API server startup with --disable-frontend-multiprocessing (vllm-project#8537)

* Automatic loading and save memory

* Rename

* Update docstring

* Simplify

* Cleanup

* Fully enable recursive loading

* Clarify

* [Doc] Update vlm.rst to include an example on videos (vllm-project#9155)

Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* Fix incorrect semantics

* Move function

* Update error message

* Fix Ultravox loading

* spacing

* [Doc] Improve contributing and installation documentation (vllm-project#9132)

Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>

* Fix server

* [Bugfix] Try to handle older versions of pytorch (vllm-project#9086)

---------

Signed-off-by: kevin <kevin@anyscale.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
Signed-off-by: tylertitsworth <tyler.titsworth@intel.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
Signed-off-by: Muralidhar Andoorveedu <muralidhar.andoorveedu@centml.ai>
Signed-off-by: Murali Andoorveedu <muralidhar.andoorveedu@centml.ai>
Signed-off-by: Prashant Gupta <prashantgupta@us.ibm.com>
Signed-off-by: Varad Ahirwadkar <varad.ahirwadkar1@ibm.com>
Signed-off-by: Flavia Beo <flavia.beo@ibm.com>
Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
Co-authored-by: fyuan1316 <yuanfang@alauda.io>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Kevin H. Luu <kevin@anyscale.com>
Co-authored-by: Pernekhan Utemuratov <pernekhan@deepinfra.com>
Co-authored-by: Chirag Jain <jain.chirag925@gmail.com>
Co-authored-by: Nick Hill <nickhill@us.ibm.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Maximilien de Bayser <mbayser@br.ibm.com>
Co-authored-by: Peter Pan <peter.pan@daocloud.io>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Brittany <24945384+bvrockwell@users.noreply.github.com>
Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com>
Co-authored-by: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: Sebastian Schoennenbeck <sebastian.schoennenbeck@comma-soft.com>
Co-authored-by: Tyler Titsworth <titswortht@gmail.com>
Co-authored-by: youkaichao <youkaichao@126.com>
Co-authored-by: tastelikefeet <58414341+tastelikefeet@users.noreply.github.com>
Co-authored-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Edouard B. <eduard.r.balzin@gmail.com>
Co-authored-by: Eduard Balzin <nfunctor@yahoo.fr>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: sroy745 <142070531+sroy745@users.noreply.github.com>
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Zilin Zhu <zilinzhu@tencent.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: juncheoll <127460634+juncheoll@users.noreply.github.com>
Co-authored-by: danieljannai21 <100521221+danieljannai21@users.noreply.github.com>
Co-authored-by: Mor Zusman <mor.zusmann@gmail.com>
Co-authored-by: whyiug <whyiug@hotmail.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
Co-authored-by: Lily Liu <lilyliupku@gmail.com>
Co-authored-by: Joe Runde <Joseph.Runde@ibm.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>
Co-authored-by: Divakar Verma <137818590+divakar-amd@users.noreply.github.com>
Co-authored-by: Alex Brooks <alex.brooks@ibm.com>
Co-authored-by: vlsav <vl_sav@mail.ru>
Co-authored-by: afeldman-nm <156691304+afeldman-nm@users.noreply.github.com>
Co-authored-by: Andrew Feldman <afeld2012@gmail.com>
Co-authored-by: Sergey Shlyapnikov <Sergeishlyapnikov@gmail.com>
Co-authored-by: Shawn Tan <shawn@wtf.sg>
Co-authored-by: Travis Johnson <tsjohnso@us.ibm.com>
Co-authored-by: Guillaume Calmettes <guillaume.calmettes@gmail.com>
Co-authored-by: xendo <xendoo@gmail.com>
Co-authored-by: Jerzy Zagorski <jzagorsk@amazon.com>
Co-authored-by: Domen Vreš <56541137+domenVres@users.noreply.github.com>
Co-authored-by: dvres <dvres@fri.uni-lj.si>
Co-authored-by: 代君 <sydnash@users.noreply.github.com>
Co-authored-by: Murali Andoorveedu <37849411+andoorve@users.noreply.github.com>
Co-authored-by: Prashant Gupta <prashantgupta@us.ibm.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Varad Ahirwadkar <86718090+varad-ahirwadkar@users.noreply.github.com>
Co-authored-by: Flávia Béo <119421251+flaviabeo@users.noreply.github.com>
Co-authored-by: Dipika <dipikasikka1@gmail.com>
Co-authored-by: Dipika Sikka <ds3822@columbia.edu>
Co-authored-by: Kuntai Du <kuntai@uchicago.edu>
Co-authored-by: Andy Dai <76841985+Imss27@users.noreply.github.com>
Co-authored-by: Chongming Ni <chongmni@amazon.com>
Co-authored-by: Ashraf Mahgoub <ashymahg@amazon.com>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Co-authored-by: hhzhang16 <54051230+hhzhang16@users.noreply.github.com>
Co-authored-by: Xin Yang <105740670+xyang16@users.noreply.github.com>
Co-authored-by: TJian <tunjian1996@gmail.com>
Co-authored-by: Brendan Wong <35351983+LunrEclipse@users.noreply.github.com>
Co-authored-by: Yanyi Liu <wolfsonliu@163.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: TimWang <7367474+haitwang-cloud@users.noreply.github.com>
Co-authored-by: Kunshang Ji <kunshang.ji@intel.com>
Co-authored-by: Daniele <36171005+dtrifiro@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Rafael Vasquez <rafvasq21@gmail.com>
Co-authored-by: bnellnm <49004751+bnellnm@users.noreply.github.com>
opus24 added a commit to Hyper-Accel/vllm that referenced this pull request Oct 11, 2024
commit 94bf9ae4e9b8199636668ccbe4dabcdc3b9e5ae6
Author: Andy Dai <76841985+Imss27@users.noreply.github.com>
Date:   Thu Oct 10 17:33:16 2024 -0700

    [Misc] Fix sampling from sonnet for long context case (#9235)

commit f990bab2a4198c4de6b5b349d35fc74bf0f36f3e
Author: omrishiv <327609+omrishiv@users.noreply.github.com>
Date:   Thu Oct 10 16:36:32 2024 -0700

    [Doc][Neuron] add note to neuron documentation about resolving triton issue (#9257)

    Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

commit e00c094f15e79c5a113fdf975df1ee9018cb65b3
Author: youkaichao <youkaichao@gmail.com>
Date:   Thu Oct 10 15:54:23 2024 -0700

    [torch.compile] generic decorators (#9258)

commit a78c6ba7c88a7bb42b38410f9dcfa5b342b95b57
Author: Kevin H. Luu <kevin@anyscale.com>
Date:   Thu Oct 10 15:45:09 2024 -0700

    [ci/build] Add placeholder command for custom models test (#9262)

commit fb870fd491482cfe5a41648b8c081d1bd6941205
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Thu Oct 10 13:30:46 2024 -0700

    Bump actions/setup-python from 3 to 5 (#9195)

    Signed-off-by: dependabot[bot] <support@github.com>
    Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

commit 270953bafb1ccf444f2018d1c0a88c51472de22e
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Thu Oct 10 13:30:35 2024 -0700

    Bump actions/checkout from 3 to 4 (#9196)

    Signed-off-by: dependabot[bot] <support@github.com>
    Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

commit 9cc811c4ff3d5200cc23f16709f540821531b77c
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Thu Oct 10 13:30:24 2024 -0700

    Bump actions/github-script from 6 to 7 (#9197)

    Signed-off-by: dependabot[bot] <support@github.com>
    Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

commit e4d652ea3ed9b2a60c1582cb2e2605695e61280f
Author: youkaichao <youkaichao@gmail.com>
Date:   Thu Oct 10 12:39:36 2024 -0700

    [torch.compile] integration with compilation control (#9058)

commit 78c0b4166cb097de749993970b51cb7b8becba58
Author: Simon Mo <simon.mo@hey.com>
Date:   Thu Oct 10 12:29:24 2024 -0700

    Suggest codeowners for the core componenets (#9210)

commit 21efb603f5f88a0d78ad11e4fbc6e18fe83916d4
Author: jordanyono <40174853+jyono@users.noreply.github.com>
Date:   Thu Oct 10 14:18:18 2024 -0400

    [CI/Build] Make the `Dockerfile.cpu` file's  `PIP_EXTRA_INDEX_URL` Configurable as a Build Argument (#9252)

commit 055f3270d40bbc492630d0f2c96ec8b64823ba34
Author: Rafael Vasquez <rafvasq21@gmail.com>
Date:   Thu Oct 10 13:48:51 2024 -0400

    [Doc] Improve debugging documentation (#9204)

    Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>

commit 18511aeda64b473314bb7727a97a220565e0af41
Author: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Date:   Thu Oct 10 13:39:56 2024 -0400

    [Bugfix] Fix Machete unittests failing with `NotImplementedError` (#9218)

commit 83ea5c72b9a287b65c9f7b95fbd868b3f613e6f5
Author: Ilya Lavrenov <ilya.lavrenov@intel.com>
Date:   Thu Oct 10 21:18:58 2024 +0400

    [OpenVINO] Use torch 2.4.0 and newer optimim version (#9121)

    Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

commit 04de9057ab8099291e66ad876e78693c7c2f2ce5
Author: whyiug <whyiug@hotmail.com>
Date:   Thu Oct 10 23:00:47 2024 +0800

    [Model] support input image embedding for minicpmv (#9237)

commit 07c11cf4d4b9a913fa52142fe134849f1e25e393
Author: Isotr0py <2037008807@qq.com>
Date:   Thu Oct 10 21:11:56 2024 +0800

    [Bugfix] Fix lm_head weights tying with lora for llama (#9227)

commit f3a507f1d31e13a99c4fc8ac02738a73c3e3136f
Author: sroy745 <142070531+sroy745@users.noreply.github.com>
Date:   Wed Oct 9 23:17:17 2024 -0700

    [Core] Add an environment variable which needs to be set explicitly to allow BlockSpaceManagerV1 (#9149)

commit a64e7b940734b68d849ed2b07ca1bc3824713555
Author: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Date:   Thu Oct 10 02:16:17 2024 -0400

    [Bugfix] Machete garbage results for some models (large K dim) (#9212)

commit ce00231a8bfb5eae85167b5a3def1b7304c723b6
Author: Michael Goin <michael@neuralmagic.com>
Date:   Thu Oct 10 02:15:40 2024 -0400

    [Bugfix] Fix Weight Loading Multiple GPU Test - Large Models (#9213)

commit de895f1697d22ea19a5a4d4ab3dc17037a3e9af3
Author: youkaichao <youkaichao@gmail.com>
Date:   Wed Oct 9 21:58:27 2024 -0700

    [misc] improve model support check in another process (#9208)

commit cf25b93bddb607077e52cbe4681332ca61aff189
Author: Russell Bryant <rbryant@redhat.com>
Date:   Thu Oct 10 00:10:09 2024 -0400

    [Core] Fix invalid args to _process_request (#9201)

    Signed-off-by: Russell Bryant <rbryant@redhat.com>

commit d5fbb8706d2c7fd00b64cff2efbe7c771fe82c3c
Author: Michael Goin <michael@neuralmagic.com>
Date:   Wed Oct 9 14:51:47 2024 -0400

    [CI/Build] Update Dockerfile install+deploy image to ubuntu 22.04 (#9130)

    Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

commit cdca8994bd856a234112875a92746c5782837768
Author: Russell Bryant <rbryant@redhat.com>
Date:   Wed Oct 9 13:15:28 2024 -0400

    [CI/Build] mypy: check vllm/entrypoints (#9194)

    Signed-off-by: Russell Bryant <rbryant@redhat.com>

commit ca77dd7a44f2bc103c668560818918ac0335835a
Author: Li, Jiang <jiang1.li@intel.com>
Date:   Thu Oct 10 00:28:08 2024 +0800

    [Hardware][CPU] Support AWQ for CPU backend (#7515)

commit 7dea289066eaed35538e74dfadafd1fea1dbe05d
Author: Ewout ter Hoeven <E.M.terHoeven@student.tudelft.nl>
Date:   Wed Oct 9 17:16:26 2024 +0200

    Add Dependabot configuration for GitHub Actions updates (#1217)

    Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

commit cfaa6008e666d4e9bb5131ece68f8609b6f94ee4
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Wed Oct 9 22:59:57 2024 +0800

    [Bugfix] Access `get_vocab` instead of `vocab` in tool parsers (#9188)

commit 21906a6f50ee0edf49ede856a82e8840bab41471
Author: Ahmad Fahadh Ilyas <37577369+fahadh4ilyas@users.noreply.github.com>
Date:   Wed Oct 9 05:10:44 2024 -0700

    [Bugfix] Fix lora loading for Compressed Tensors in #9120 (#9179)

commit dc4aea677ab0520d91ff4979e80340cb5a090095
Author: Jiangtao Hu <ycool@users.noreply.github.com>
Date:   Wed Oct 9 16:59:42 2024 +0800

    [Doc] Fix VLM prompt placeholder sample bug (#9170)

commit c8627cd41b10747da393b76c382de5ef0eb635a2
Author: youkaichao <youkaichao@gmail.com>
Date:   Wed Oct 9 00:38:40 2024 -0700

    [ci][test] use load dummy for testing (#9165)

commit 8bfaa4e31eb63d41499fec933e68969ebbedb01f
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Wed Oct 9 15:36:55 2024 +0800

    [Bugfix] fix composite weight loading and EAGLE weight loading (#9160)

commit 0b5b5d767e7fdc0b1070b37319de749e46a4d42a
Author: AlpinDale <52078762+AlpinDale@users.noreply.github.com>
Date:   Wed Oct 9 07:03:14 2024 +0000

    [Frontend] Log the maximum supported concurrency (#8831)

commit cdc72e3c80b7029c49de9667150f68481f386956
Author: Hui Liu <96135754+hliuca@users.noreply.github.com>
Date:   Tue Oct 8 23:43:06 2024 -0700

    [Model] Remap FP8 kv_scale in CommandR and DBRX (#9174)

commit 7627172bf42b9cd628402c98845c6ac3de80859a
Author: Joe Rowell <joerowell4@gmail.com>
Date:   Wed Oct 9 06:43:34 2024 +0100

    [Bugfix][Doc] Report neuron error in output (#9159)

commit 480b7f40cfa9a900e03ea4e825abc1a46b5d085b
Author: Travis Johnson <tsjohnso@us.ibm.com>
Date:   Tue Oct 8 22:54:48 2024 -0600

    [Misc] Improve validation errors around best_of and n (#9167)

    Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>

commit acce7630c1dd655ca95a9f1abff23d92ef76262c
Author: Yuan Tang <terrytangyuan@gmail.com>
Date:   Tue Oct 8 23:58:49 2024 -0400

    Update link to KServe deployment guide (#9173)

commit ffc4b27ea8924b4b5add13552063c93d0a14fb85
Author: Yuan Tang <terrytangyuan@gmail.com>
Date:   Tue Oct 8 22:30:48 2024 -0400

    Add classifiers in setup.py (#9171)

commit 2f4117c38e101ee63b65521c93b22efe3526f77e
Author: chenqianfzh <51831990+chenqianfzh@users.noreply.github.com>
Date:   Tue Oct 8 18:52:19 2024 -0700

    support bitsandbytes quantization with more models (#9148)

commit 9ba0bd6aa6a9a3cefa5c320800ea736a0abbaf36
Author: Michael Goin <michael@neuralmagic.com>
Date:   Tue Oct 8 21:22:31 2024 -0400

    Add `lm-eval` directly to requirements-test.txt (#9161)

commit 2a131965a8144d571a4a211a44d1fc32e202ae10
Author: Russell Bryant <rbryant@redhat.com>
Date:   Tue Oct 8 18:08:22 2024 -0400

    mypy: check additional directories (#9162)

    Signed-off-by: Russell Bryant <rbryant@redhat.com>

commit bd37b9fbe274e28e12c0687cb9a8111dda270936
Author: bnellnm <49004751+bnellnm@users.noreply.github.com>
Date:   Tue Oct 8 17:28:12 2024 -0400

    [Bugfix] Try to handle older versions of pytorch (#9086)

commit de24046fcd24e8faa81de34b17351887bcdfbe51
Author: Rafael Vasquez <rafvasq21@gmail.com>
Date:   Tue Oct 8 16:22:08 2024 -0400

    [Doc] Improve contributing and installation documentation (#9132)

    Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>

commit 1874c6a1b0ae0f9eb2b485653b4e17ed1d861a32
Author: Sayak Paul <spsayakpaul@gmail.com>
Date:   Tue Oct 8 23:42:29 2024 +0530

    [Doc] Update vlm.rst to include an example on videos (#9155)

    Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

commit 9a94ca4a5d31c0ba57ca67fc1c252233d3284012
Author: Daniele <36171005+dtrifiro@users.noreply.github.com>
Date:   Tue Oct 8 18:38:40 2024 +0200

    [Bugfix] fix OpenAI API server startup with --disable-frontend-multiprocessing (#8537)

commit cfba685bd462f360994da7ac0d33f9759589506e
Author: Peter Pan <peter.pan@daocloud.io>
Date:   Wed Oct 9 00:37:34 2024 +0800

    [CI/Build] Add examples folder into Docker image so that we can leverage the templates*.jinja when serving models (#8758)

    Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>

commit 069d3bd8d01a72e93c0a5b51f8b567e8aaddc6e9
Author: Alex Brooks <alex.brooks@ibm.com>
Date:   Tue Oct 8 08:31:26 2024 -0600

    [Frontend] Add Early Validation For Chat Template / Tool Call Parser (#9151)

    Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

commit a3691b6b5eb7e60039a8ff34550be5a7e8365394
Author: Alex Brooks <alex.brooks@ibm.com>
Date:   Tue Oct 8 08:12:56 2024 -0600

    [Core][Frontend] Add Support for Inference Time mm_processor_kwargs (#9131)

    Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

commit 8c746226c956f7c8a4672689fee91c7d22befed6
Author: Brendan Wong <35351983+LunrEclipse@users.noreply.github.com>
Date:   Mon Oct 7 22:51:43 2024 -0700

    [Frontend] API support for beam search for MQLLMEngine (#9117)

commit e1faa2a59876bba99d804c0a94d427cee87b0995
Author: youkaichao <youkaichao@gmail.com>
Date:   Mon Oct 7 22:26:25 2024 -0700

    [misc] improve ux on readme (#9147)

commit 80b57f00d554db8a2126d351bb5374c190b56699
Author: Kunshang Ji <kunshang.ji@intel.com>
Date:   Tue Oct 8 11:51:14 2024 +0800

    [Intel GPU] Fix xpu decode input  (#9145)

commit 04c12f81572be22c819018c2fcbddac5f08715d0
Author: youkaichao <youkaichao@gmail.com>
Date:   Mon Oct 7 19:51:49 2024 -0700

    [misc] update utils to support comparing multiple settings (#9140)

commit 8eeb85708428b7735bbd1156c81692431fd5ff34
Author: Simon Mo <simon.mo@hey.com>
Date:   Mon Oct 7 17:06:21 2024 -0700

    Add Slack to README (#9137)

commit fa45513a5189b3a9f73a59730c9ac65d061e1311
Author: youkaichao <youkaichao@gmail.com>
Date:   Mon Oct 7 16:07:05 2024 -0700

    [misc] fix comment and variable name (#9139)

commit c0d9a98d0c7182b73c2e7f88508e690a186bf0e3
Author: Kuntai Du <kuntai@uchicago.edu>
Date:   Mon Oct 7 15:04:06 2024 -0700

    [Doc] Include performance benchmark in README (#9135)

commit e0dbdb013dfe5cdbe044317b4d7d55644d6399b3
Author: Russell Bryant <rbryant@redhat.com>
Date:   Mon Oct 7 17:18:10 2024 -0400

    [CI/Build] Add linting for github actions workflows (#7876)

    Signed-off-by: Russell Bryant <rbryant@redhat.com>

commit 93cf74a8a7b0b483becdba95e3056adbf201b7b2
Author: TimWang <7367474+haitwang-cloud@users.noreply.github.com>
Date:   Tue Oct 8 04:31:45 2024 +0800

    [Doc]: Add deploying_with_k8s guide (#8451)

commit 151ef4efd2fb52554f4d30408aca619e181ea751
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Mon Oct 7 19:55:12 2024 +0800

    [Model] Support NVLM-D and fix QK Norm in InternViT (#9045)

    Co-authored-by: Roger Wang <ywang@roblox.com>
    Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>

commit f19da64871065510691cd4fcaa5f4096b661dcec
Author: Isotr0py <2037008807@qq.com>
Date:   Mon Oct 7 18:01:46 2024 +0800

    [Core] Refactor GGUF parameters packing and forwarding (#8859)

commit 4f95ffee6f40198911ee824ed06d645fe9678511
Author: Isotr0py <2037008807@qq.com>
Date:   Mon Oct 7 14:50:35 2024 +0800

    [Hardware][CPU] Cross-attention and Encoder-Decoder models support on CPU backend (#9089)

commit 8c6de96ea1e6e51e49a170c28ad3efc16db9413e
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Mon Oct 7 14:10:35 2024 +0800

    [Model] Explicit interface for vLLM models and support OOT embedding models (#9108)

commit 18b296fdb2248e8a65bf005e7193ebd523b875b6
Author: youkaichao <youkaichao@gmail.com>
Date:   Sun Oct 6 22:47:04 2024 -0700

    [core] remove beam search from the core (#9105)

commit c8f26bb63694adb4202ab275efb0759c13edcaa8
Author: sroy745 <142070531+sroy745@users.noreply.github.com>
Date:   Sun Oct 6 20:52:42 2024 -0700

    [BugFix][Core] Fix BlockManagerV2 when Encoder Input is None (#9103)

commit 487678d046fe56560ff5dc6c91c3f3c31af7de6f
Author: Isotr0py <2037008807@qq.com>
Date:   Mon Oct 7 10:14:27 2024 +0800

    [Bugfix][Hardware][CPU] Fix CPU model input for decode (#9044)

commit cb3b2b9ba4a95c413a879e30e2b8674187519a93
Author: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Date:   Sun Oct 6 15:48:11 2024 -0400

    [Bugfix] Fix incorrect updates to num_computed_tokens in multi-step scheduling (#9038)

    Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>

commit fdf59d30eaf1a62979b2a13016b4f47f28f12f88
Author: Yanyi Liu <wolfsonliu@163.com>
Date:   Sun Oct 6 20:51:08 2024 +0800

    [Bugfix] fix tool_parser error handling when serve a model not support it (#8709)

commit b22b79847153ae10710523cdb4a5fb98ac864cf4
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Sun Oct 6 16:35:27 2024 +0800

    [Model] PP support for embedding models and update docs (#9090)

    Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>

commit f22619fe96c842ee2406638678d2b60009d8ff14
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Sun Oct 6 16:33:52 2024 +0800

    [Misc] Remove user-facing error for removed VLM args (#9104)

commit 168cab6bbfb733f97defc8c1aa13df90c5319f19
Author: Brendan Wong <35351983+LunrEclipse@users.noreply.github.com>
Date:   Sat Oct 5 23:39:03 2024 -0700

    [Frontend] API support for beam search (#9087)

    Co-authored-by: youkaichao <youkaichao@126.com>

commit 23fea8714a1e90f018163e0eee59d73bc5a500e7
Author: TJian <tunjian1996@gmail.com>
Date:   Sat Oct 5 22:00:04 2024 -0700

    [Bugfix] Fix try-catch conditions to import correct Flash Attention Backend in Draft Model (#9101)

commit f4dd830e0945300dbe2039af79d1994f074ffcbb
Author: youkaichao <youkaichao@gmail.com>
Date:   Sat Oct 5 19:37:31 2024 -0700

    [core] use forward context for flash infer (#9097)

commit 5df183489537a155bbaad9232f25b8e57694d7b8
Author: Andy Dai <76841985+Imss27@users.noreply.github.com>
Date:   Sat Oct 5 10:35:11 2024 -0700

    [Bugfix] Fix order of arguments matters in config.yaml (#8960)

commit cfadb9c68798c0cc4d674de19970a8e3b5ea1273
Author: Chen Zhang <zhangch99@outlook.com>
Date:   Sat Oct 5 06:56:40 2024 -0700

    [Bugfix] Deprecate registration of custom configs to huggingface (#9083)

commit 15986f598c7b1f2969918c92f5c4cf7e28d5c0df
Author: Xin Yang <105740670+xyang16@users.noreply.github.com>
Date:   Fri Oct 4 23:57:05 2024 -0700

    [Model] Support Gemma2 embedding model (#9004)

commit 53b3a330273967a3c4124cbfef2cacac92f553ba
Author: hhzhang16 <54051230+hhzhang16@users.noreply.github.com>
Date:   Fri Oct 4 22:05:37 2024 -0700

    [Bugfix] Fixes Phi3v & Ultravox Multimodal EmbeddingInputs (#8979)

commit dac914b0d6bc36de4eb4bf70a9d20954560893ea
Author: Chen Zhang <zhangch99@outlook.com>
Date:   Fri Oct 4 21:45:38 2024 -0700

    [Bugfix] use blockmanagerv1 for encoder-decoder (#9084)

    Co-authored-by: Roger Wang <ywang@roblox.com>

commit a95354a36ee65523a499b3eb42f70a4a0ea4322d
Author: Zhuohan Li <zhuohan123@gmail.com>
Date:   Fri Oct 4 19:54:45 2024 -0700

    [Doc] Update README.md with Ray summit slides (#9088)

commit 663874e048d88aa7bf087628430d50f9f5245175
Author: youkaichao <youkaichao@gmail.com>
Date:   Fri Oct 4 16:43:50 2024 -0700

    [torch.compile] improve allreduce registration (#9061)

commit cc90419e89c358f906e17a5ec484fbe04092c277
Author: Chongming Ni <chongmni@amazon.com>
Date:   Fri Oct 4 16:42:20 2024 -0700

    [Hardware][Neuron] Add on-device sampling support for Neuron (#8746)

    Co-authored-by: Ashraf Mahgoub <ashymahg@amazon.com>

commit 27302dd5841d4b0fa4788076ad9ff2993e133409
Author: Cody Yu <hao.yu.cody@gmail.com>
Date:   Fri Oct 4 16:07:54 2024 -0700

    [Misc] Fix CI lint (#9085)

commit 0cc566ca8fd2d21a94f3a8e48bf5c5b60d42b59f
Author: Andy Dai <76841985+Imss27@users.noreply.github.com>
Date:   Fri Oct 4 14:58:57 2024 -0700

    [Misc] Add random seed for prefix cache benchmark (#9081)

commit 05c531be476e8a864a1ab83a65f7e056315ea1fc
Author: Andy Dai <76841985+Imss27@users.noreply.github.com>
Date:   Fri Oct 4 14:38:42 2024 -0700

    [Misc] Improved prefix cache example (#9077)

commit fbb74420e7018bf0cc1bc81e6fd71a2392347227
Author: Kuntai Du <kuntai@uchicago.edu>
Date:   Fri Oct 4 14:01:44 2024 -0700

    [CI] Update performance benchmark: upgrade trt-llm to r24.07, and add SGLang (#7412)

commit 05d686432f2e13296127962861b21c25cdcdfc8b
Author: ElizaWszola <eliza@neuralmagic.com>
Date:   Fri Oct 4 20:34:44 2024 +0200

    [Kernel] Zero point support in fused MarlinMoE kernel + AWQ Fused MoE (#8973)

    Co-authored-by: Dipika <dipikasikka1@gmail.com>
    Co-authored-by: Dipika Sikka <ds3822@columbia.edu>

commit 0dcc8cbe5abd4f2fafd495bd1c65fdd75d8dd919
Author: Flávia Béo <119421251+flaviabeo@users.noreply.github.com>
Date:   Fri Oct 4 15:31:40 2024 -0300

    Adds truncate_prompt_tokens param for embeddings creation (#8999)

    Signed-off-by: Flavia Beo <flavia.beo@ibm.com>

commit 26aa325f4ffe8bf1d9b921535cc02fb31d80a96d
Author: Roger Wang <136131678+ywang96@users.noreply.github.com>
Date:   Fri Oct 4 10:38:25 2024 -0700

    [Core][VLM] Test registration for OOT multimodal models (#8717)

    Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

commit e5dc713c2343b3549b43d6e2764a1036e4052bf8
Author: Varad Ahirwadkar <86718090+varad-ahirwadkar@users.noreply.github.com>
Date:   Fri Oct 4 22:54:42 2024 +0530

    [Hardware][PowerPC] Make oneDNN dependency optional for Power (#9039)

    Signed-off-by: Varad Ahirwadkar <varad.ahirwadkar1@ibm.com>

commit 36eecfbddb9ac2c491174c86b28ee83c4773eb5e
Author: Simon Mo <simon.mo@hey.com>
Date:   Fri Oct 4 10:17:16 2024 -0700

    Remove AMD Ray Summit Banner (#9075)

commit 9ade8bbc8dc63c03b9399f05e85a0d0ddc6f5788
Author: Prashant Gupta <prashantgupta@us.ibm.com>
Date:   Fri Oct 4 09:24:40 2024 -0700

    [Model] add a bunch of supported lora modules for mixtral (#9008)

    Signed-off-by: Prashant Gupta <prashantgupta@us.ibm.com>

commit 22482e495e00d409c9b5c78dade6e672ddf7fbc2
Author: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Date:   Fri Oct 4 11:43:15 2024 -0400

    [Bugfix] Flash attention arches not getting set properly (#9062)

commit 3d826d2c52242f4f78789adcb7c02938c84ed18b
Author: whyiug <whyiug@hotmail.com>
Date:   Fri Oct 4 22:34:58 2024 +0800

    [Bugfix] Reshape the dimensions of the input image embeddings in Qwen2VL (#9071)

commit 0e36fd4909780392a9c5d0e367b0a84250d55fa8
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Fri Oct 4 18:01:37 2024 +0800

    [Misc] Move registry to its own file (#9064)

commit 0f6d7a9a347944bffd2204cbf9686299e9dd6557
Author: Murali Andoorveedu <37849411+andoorve@users.noreply.github.com>
Date:   Thu Oct 3 19:56:58 2024 -0700

    [Models] Add remaining model PP support (#7168)

    Signed-off-by: Muralidhar Andoorveedu <muralidhar.andoorveedu@centml.ai>
    Signed-off-by: Murali Andoorveedu <muralidhar.andoorveedu@centml.ai>
    Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

commit 303d44790a2ccab86257f1b6097e67795f0845d4
Author: Michael Goin <michael@neuralmagic.com>
Date:   Thu Oct 3 22:55:42 2024 -0400

    [Misc] Enable multi-step output streaming by default (#9047)

commit aeb37c2a725554791ff6f258b1e18830867a3ab9
Author: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Date:   Thu Oct 3 22:55:25 2024 -0400

    [CI/Build] Per file CUDA Archs (improve wheel size and dev build times) (#8845)

commit 3dbb215b38c010c050f7fde3528fe2c6673f7a07
Author: 代君 <sydnash@users.noreply.github.com>
Date:   Fri Oct 4 10:36:39 2024 +0800

    [Frontend][Feature] support tool calling for internlm/internlm2_5-7b-chat model (#8405)

commit 2838d6b38e1e37b303b01f2af0a9ddee2dd66f39
Author: Domen Vreš <56541137+domenVres@users.noreply.github.com>
Date:   Fri Oct 4 01:53:29 2024 +0200

    [Bugfix] Weight loading fix for OPT model (#9042)

    Co-authored-by: dvres <dvres@fri.uni-lj.si>

commit 91add85ec409a3628d01a1e4d4b3230e0fd3aa3f
Author: sroy745 <142070531+sroy745@users.noreply.github.com>
Date:   Thu Oct 3 16:07:29 2024 -0700

    Fix failing spec decode test (#9054)

commit 9aaf14c62e16a7c74b5192a44d01a78125dab2fc
Author: youkaichao <youkaichao@gmail.com>
Date:   Thu Oct 3 12:09:42 2024 -0700

    [misc] add forward context for attention (#9029)

commit 63e39937f990818e2f22a9b821a4aa22387057a7
Author: xendo <xendoo@gmail.com>
Date:   Thu Oct 3 20:02:07 2024 +0200

    [Frontend] [Neuron] Parse literals out of override-neuron-config (#8959)

    Co-authored-by: Jerzy Zagorski <jzagorsk@amazon.com>

commit f5d72b2fc6771de19c351945f1fbbb0198d53b8e
Author: sroy745 <142070531+sroy745@users.noreply.github.com>
Date:   Thu Oct 3 09:44:21 2024 -0700

    [Core] Make BlockSpaceManagerV2 the default BlockManager to use. (#8678)

commit 83caf35e082b2657dce5f71ff965a13653a763b0
Author: Guillaume Calmettes <guillaume.calmettes@gmail.com>
Date:   Thu Oct 3 10:44:52 2024 +0200

    [BugFix] Enforce Mistral ToolCall id constraint when using the Mistral tool call parser (#9020)

commit 01843c89b8ddae00d4a0f0f56b8aa7fbaa3efc42
Author: Divakar Verma <137818590+divakar-amd@users.noreply.github.com>
Date:   Wed Oct 2 23:31:07 2024 -0500

    [Misc] log when using default MoE config (#8971)

commit 19a4dd09904975d121a10e5e3f707927f3e09faa
Author: Travis Johnson <tsjohnso@us.ibm.com>
Date:   Wed Oct 2 21:04:17 2024 -0600

    [Bugfix] example template should not add parallel_tool_prompt if tools is none (#9007)

commit 18c2e30c5754dc83f86d9b8c75af0499a77e4b3f
Author: Nick Hill <nickhill@us.ibm.com>
Date:   Thu Oct 3 03:42:24 2024 +0100

    [Doc] Update Granite model docs (#9025)

commit 19f0d2579695e518c9bfc166544cf23775772bf8
Author: Shawn Tan <shawn@wtf.sg>
Date:   Wed Oct 2 21:33:57 2024 -0400

    [Model]  Adding Granite MoE. (#8206)

    Co-authored-by: Nick Hill <nickhill@us.ibm.com>

commit f58d4fccc9b270838be438f5f0db71bea156a56d
Author: Sergey Shlyapnikov <Sergeishlyapnikov@gmail.com>
Date:   Thu Oct 3 01:50:01 2024 +0400

    [OpenVINO] Enable GPU support for OpenVINO vLLM backend (#8192)

commit afb050b29d0cac27c32c19c8206a9ac2a4662de2
Author: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Date:   Wed Oct 2 15:44:39 2024 -0400

    [Core] CUDA Graphs for Multi-Step + Chunked-Prefill (#8645)

    Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>

commit 7f60520deb05d2e097b408e3310f1d383fbf1de6
Author: Alex Brooks <alex.brooks@ibm.com>
Date:   Wed Oct 2 05:44:38 2024 -0600

    [Misc] Update Default Image Mapper Error Log (#8977)

    Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
    Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>

commit 563649aafe7d4b9cb0047bba60d6f58efa53fd28
Author: afeldman-nm <156691304+afeldman-nm@users.noreply.github.com>
Date:   Wed Oct 2 03:52:20 2024 -0400

    [Core] Combined support for multi-step scheduling, chunked prefill & prefix caching (#8804)

    Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
    Co-authored-by: Andrew Feldman <afeld2012@gmail.com>

commit 15702038642192002cd8973cf8948751b750fd07
Author: Lily Liu <lilyliupku@gmail.com>
Date:   Tue Oct 1 16:04:42 2024 -0700

    [Spec Decode] (1/2) Remove batch expansion (#8839)

commit 22f5851b807376a836eb3551903c7fc6c81eaa9b
Author: vlsav <vl_sav@mail.ru>
Date:   Tue Oct 1 21:07:06 2024 +0300

    Update benchmark_serving.py to read and write json-datasets, results in UTF8, for better compatibility with Windows (#8997)

commit 4f341bd4bf35c5b431dc523bab86e4ae210baaf8
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Wed Oct 2 00:35:39 2024 +0800

    [Doc] Update list of supported models (#8987)

commit 35bd2151684ffb20cdad825abe33e0e6f0cc005a
Author: Sebastian Schoennenbeck <sebastian.schoennenbeck@comma-soft.com>
Date:   Tue Oct 1 11:58:06 2024 +0200

    [Core] [Frontend] Priority scheduling for embeddings and in the OpenAI-API (#8965)

commit 1fe0a4264aa94ceeccc7e8d99ac0d72f0560f541
Author: Alex Brooks <alex.brooks@ibm.com>
Date:   Tue Oct 1 03:52:44 2024 -0600

    [Bugfix] Fix Token IDs Reference for MiniCPM-V When Images are Provided With No Placeholders (#8991)

    Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

commit bc4eb65b5492b4f84a1b714bfc14bcff73d401f1
Author: Isotr0py <2037008807@qq.com>
Date:   Tue Oct 1 17:51:41 2024 +0800

    [Bugfix] Fix Fuyu tensor parallel inference (#8986)

commit 82f3937e599a4f088a62e59abe81d51e11bb8f83
Author: Divakar Verma <137818590+divakar-amd@users.noreply.github.com>
Date:   Mon Sep 30 22:46:41 2024 -0500

    [Misc] add process_weights_after_loading for DummyLoader (#8969)

commit 7da2487591888da043254f8c7045a48d5dbcc753
Author: youkaichao <youkaichao@gmail.com>
Date:   Mon Sep 30 20:40:48 2024 -0700

    [torch.compile] fix tensor alias (#8982)

commit aaccca2b4d3895d64d34b123e61731404c8fc2c0
Author: Kevin H. Luu <kevin@anyscale.com>
Date:   Mon Sep 30 20:33:12 2024 -0700

    [CI/Build] Fix machete generated kernel files ordering (#8976)

    Signed-off-by: kevin <kevin@anyscale.com>
    Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>

commit 062c89e7c9c6fa9fd7fb2d28fd50321c6f78f389
Author: Joe Runde <Joseph.Runde@ibm.com>
Date:   Mon Sep 30 19:34:25 2024 -0600

    [Frontend][Core] Move guided decoding params into sampling params (#8252)

    Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
    Co-authored-by: Nick Hill <nickhill@us.ibm.com>

commit bce324487a8e36140143ea37f4b27d273a0fd661
Author: Lily Liu <lilyliupku@gmail.com>
Date:   Mon Sep 30 17:51:40 2024 -0700

    [CI][SpecDecode] Fix spec decode tests, use flash attention backend for spec decode CI tests. (#8975)

commit 1425a1bcf9c53e24fe5f4812acc5b656f2aa02f3
Author: Kevin H. Luu <kevin@anyscale.com>
Date:   Mon Sep 30 17:47:08 2024 -0700

    [ci] Add CODEOWNERS for test directories  (#8795)

    Signed-off-by: kevin <kevin@anyscale.com>

commit 1cabfcefb64a489c8ff9dcb289b4dd47cf8f89cf
Author: Jee Jee Li <pandaleefree@gmail.com>
Date:   Mon Sep 30 20:57:39 2024 +0800

    [Misc] Adjust max_position_embeddings for LoRA compatibility (#8957)

commit be76e5aabf8c026e1a82028ad70167e8c652cee9
Author: Sebastian Schoennenbeck <sebastian.schoennenbeck@comma-soft.com>
Date:   Mon Sep 30 14:28:44 2024 +0200

    [Core] Make scheduling policy settable via EngineArgs (#8956)

commit 2ae25f79cf1e8d21f7bcba097e4c039463c22be4
Author: Isotr0py <2037008807@qq.com>
Date:   Mon Sep 30 13:01:20 2024 +0800

    [Model] Expose InternVL2 max_dynamic_patch as a mm_processor_kwarg (#8946)

commit 8e60afa15eb9a0540ce6c453b974a945adff3320
Author: Jee Jee Li <pandaleefree@gmail.com>
Date:   Mon Sep 30 12:31:55 2024 +0800

    [Model][LoRA]LoRA support added for MiniCPMV2.6 (#8943)

    Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

commit b6d7392579286b6dbd8ca96c0bcb4cc6f7c3c4a0
Author: Roger Wang <136131678+ywang96@users.noreply.github.com>
Date:   Sun Sep 29 21:28:26 2024 -0700

    [Misc][CI/Build] Include `cv2` via `mistral_common[opencv]`  (#8951)

commit e01ab595d897698c9a5fe9eaebd983eb3e23470a
Author: whyiug <whyiug@hotmail.com>
Date:   Mon Sep 30 11:16:10 2024 +0800

    [Model] support input embeddings for qwen2vl (#8856)

commit f13a07b1f8c11ddbdc53b40f1fbb24bf3166b900
Author: Mor Zusman <mor.zusmann@gmail.com>
Date:   Mon Sep 30 00:35:58 2024 +0300

    [Kernel][Model] Varlen prefill + Prefill chunking support for mamba kernels and Jamba model (#8533)

commit 6c9ba48fdebe2f44c82eabfe136dc8dc6ad6f4ed
Author: danieljannai21 <100521221+danieljannai21@users.noreply.github.com>
Date:   Sun Sep 29 20:59:47 2024 +0300

    [Frontend] Added support for HF's new `continue_final_message` parameter (#8942)

commit 1fb9c1b0bf8e65e6576ff4c45f5623d233d7194b
Author: juncheoll <127460634+juncheoll@users.noreply.github.com>
Date:   Mon Sep 30 00:05:54 2024 +0900

    [Misc] Fix typo in BlockSpaceManagerV1 (#8944)

commit 31f46a0d35da80118bac5f80c533019cd50ddd9a
Author: Nick Hill <nickhill@us.ibm.com>
Date:   Sun Sep 29 10:43:14 2024 +0100

    [BugFix] Fix seeded random sampling with encoder-decoder models (#8870)

    Co-authored-by: Roger Wang <ywang@roblox.com>

commit 3d49776bbb25927abf91bb7c5537e0006c199c16
Author: Jee Jee Li <pandaleefree@gmail.com>
Date:   Sun Sep 29 14:59:45 2024 +0800

    [Model][LoRA]LoRA support added for MiniCPMV2.5 (#7199)

commit bc2ef1f77c1578612198f60ec392731efb3847c5
Author: Zilin Zhu <zilinzhu@tencent.com>
Date:   Sun Sep 29 12:19:39 2024 +0800

    [Model] Support Qwen2.5-Math-RM-72B (#8896)

commit 2e7fe7e79f41e294eeed2f484eeb791284ec48a2
Author: Tyler Michael Smith <tyler@neuralmagic.com>
Date:   Sat Sep 28 23:13:01 2024 -0400

    [Build/CI] Set FETCHCONTENT_BASE_DIR to one location for better caching (#8930)

commit 26a68d5d7e7dd47c7d8538a326493c8a171f5016
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Sun Sep 29 10:50:51 2024 +0800

    [CI/Build] Add test decorator for minimum GPU memory (#8925)

commit d081da0064b5cda9e344f0fd519d67523a437a39
Author: ElizaWszola <eliza@neuralmagic.com>
Date:   Sun Sep 29 03:19:40 2024 +0200

    [Bugfix] Fix Marlin MoE act order when is_k_full == False (#8741)

    Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

commit 5bf8789b2a28df1305f92b9999fe60264f839caa
Author: sroy745 <142070531+sroy745@users.noreply.github.com>
Date:   Sat Sep 28 18:17:45 2024 -0700

    [Bugfix] Block manager v2 with preemption and lookahead slots (#8824)

commit d1537039ce7e6018db510d0c0d9b0c0fccb62b63
Author: Russell Bryant <rbryant@redhat.com>
Date:   Sat Sep 28 21:17:07 2024 -0400

    [Core] Improve choice of Python multiprocessing method (#8823)

    Signed-off-by: Russell Bryant <rbryant@redhat.com>
    Co-authored-by: youkaichao <youkaichao@126.com>

commit cc276443b5ac0732b00a88472f4bc4330aa14606
Author: youkaichao <youkaichao@gmail.com>
Date:   Sat Sep 28 17:48:41 2024 -0700

    [doc] organize installation doc and expose per-commit docker (#8931)

commit e585b583a92903c9a5cc8055a444a208f4387891
Author: Chen Zhang <zhangch99@outlook.com>
Date:   Sat Sep 28 11:51:22 2024 -0700

    [Bugfix] Support testing prefill throughput with benchmark_serving.py --hf-output-len 1 (#8891)

commit 090e945e36cfe849b484db5414f64df96e97d678
Author: Edouard B. <eduard.r.balzin@gmail.com>
Date:   Sat Sep 28 20:30:21 2024 +0200

    [Frontend] Make beam search emulator temperature modifiable (#8928)

    Co-authored-by: Eduard Balzin <nfunctor@yahoo.fr>

commit e1a3f5e831a467b2867a66e0e56ac0f70ed44394
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Sun Sep 29 00:54:35 2024 +0800

    [CI/Build] Update models tests & examples (#8874)

    Co-authored-by: Roger Wang <ywang@roblox.com>

commit 19d02ff93812fb6a28f0f1a0a0f9233e9388d616
Author: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Date:   Sat Sep 28 11:52:46 2024 -0400

    [Bugfix] Fix PP for Multi-Step (#8887)

commit 39d3f8d94fd2691b70ee809e7565402f8a061c6b
Author: tastelikefeet <58414341+tastelikefeet@users.noreply.github.com>
Date:   Sat Sep 28 23:24:12 2024 +0800

    [Bugfix] Fix code for downloading models from modelscope (#8443)

commit b0298aa8cc4a54bde659e57271778630785abc9b
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Sat Sep 28 16:11:25 2024 +0800

    [Misc] Remove vLLM patch of `BaichuanTokenizer` (#8921)

commit 260024a3749fb6856625dfee28560a98a92dd339
Author: Tyler Titsworth <titswortht@gmail.com>
Date:   Fri Sep 27 23:45:50 2024 -0700

    [Bugfix][Intel] Fix XPU Dockerfile Build (#7824)

    Signed-off-by: tylertitsworth <tyler.titsworth@intel.com>
    Co-authored-by: youkaichao <youkaichao@126.com>

commit d86f6b2afb006ea4b4b14a49a58f64bf3b952de6
Author: youkaichao <youkaichao@gmail.com>
Date:   Fri Sep 27 22:10:44 2024 -0700

    [misc] fix wheel name (#8919)

commit bd429f2b75f3622fabaf9c9470ca2e921f6f56ca
Author: Sebastian Schoennenbeck <sebastian.schoennenbeck@comma-soft.com>
Date:   Sat Sep 28 00:07:10 2024 +0200

    [Core] Priority-based scheduling in async engine (#8850)

commit 18e60d7d1394541b48bf48b0a57a546a93607ac2
Author: youkaichao <youkaichao@gmail.com>
Date:   Fri Sep 27 14:27:56 2024 -0700

    [misc][distributed] add VLLM_SKIP_P2P_CHECK flag (#8911)

commit c2ec430ab5713d0626c1a7809718ef6c4eebf389
Author: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Date:   Fri Sep 27 16:32:07 2024 -0400

    [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (#8378)

    Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>

commit c5d55356f9d2b2075ac53cf20453358c1e2b7bde
Author: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Date:   Fri Sep 27 15:12:34 2024 -0400

    [Bugfix] fix for deepseek w4a16 (#8906)

    Co-authored-by: mgoin <michael@neuralmagic.com>

commit 172d1cd27634e9e7adc9cb9feac73552cfae1b24
Author: Luka Govedič <ProExpertProg@users.noreply.github.com>
Date:   Fri Sep 27 14:25:10 2024 -0400

    [Kernel] AQ AZP 4/4: Integrate asymmetric quantization to linear method (#7271)

commit a9b15c606fea67a072416ea0ea115261a2756058
Author: youkaichao <youkaichao@gmail.com>
Date:   Fri Sep 27 08:11:32 2024 -0700

    [torch.compile] use empty tensor instead of None for profiling (#8875)

commit 8df2dc3c8812c0abb97ce3e2913411d88524e59f
Author: Brittany <24945384+bvrockwell@users.noreply.github.com>
Date:   Fri Sep 27 01:16:55 2024 -0700

    [TPU] Update pallas.py to support trillium (#8871)

commit 6d792d2f31b2cfb335d1a4a7c45fe4ce143c203a
Author: Isotr0py <2037008807@qq.com>
Date:   Fri Sep 27 16:15:58 2024 +0800

    [Bugfix][VLM] Fix Fuyu batching inference with `max_num_seqs>1` (#8892)

commit 0e088750af2e8035c07d356b56c03393cfb56004
Author: Peter Pan <peter.pan@daocloud.io>
Date:   Fri Sep 27 16:13:25 2024 +0800

    [MISC] Fix invalid escape sequence '\' (#8830)

    Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>

commit dc4e3df5c23282b2ebaead95f179c25c9d7ec4d8
Author: youkaichao <youkaichao@gmail.com>
Date:   Fri Sep 27 00:26:38 2024 -0700

    [misc] fix collect env (#8894)

commit 3b00b9c26c91e9f9ada12975b613555698054e39
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Fri Sep 27 11:35:15 2024 +0800

    [Core] rename`PromptInputs` and `inputs` (#8876)

commit 344cd2b6f4c22bf278cff96066001d216ec1fe82
Author: Maximilien de Bayser <mbayser@br.ibm.com>
Date:   Thu Sep 26 21:01:42 2024 -0300

    [Feature] Add support for Llama 3.1 and 3.2 tool use (#8343)

    Signed-off-by: Max de Bayser <mbayser@br.ibm.com>

commit 1b49148e474d4d18731e159ea0460145ae52e220
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Fri Sep 27 07:54:09 2024 +0800

    [Installation] Allow lower versions of FastAPI to maintain Ray 2.9 compatibility (#8764)

commit 4b377d6febed7ddd964f1b96079d7e78c231325e
Author: Nick Hill <nickhill@us.ibm.com>
Date:   Fri Sep 27 00:46:43 2024 +0100

    [BugFix] Fix test breakages from transformers 4.45 upgrade (#8829)

commit 71d21c73abfb9b12ea402ce6b11c1b8e31eddf4c
Author: Tyler Michael Smith <tyler@neuralmagic.com>
Date:   Thu Sep 26 19:23:45 2024 -0400

    [Bugfix] Fixup advance_step.cu warning (#8815)

commit ee2da3e9efb38add804e2023d47e9f42f38bd638
Author: Chirag Jain <jain.chirag925@gmail.com>
Date:   Fri Sep 27 04:53:17 2024 +0530

    fix validation: Only set tool_choice `auto` if at least one tool is provided (#8568)

commit e2f6f26e8636b8a23e5c0cda533a70c40ade01ec
Author: Tyler Michael Smith <tyler@neuralmagic.com>
Date:   Thu Sep 26 19:18:26 2024 -0400

    [Bugfix] Fix print_warning_once's line info (#8867)

commit b28d2104dea6ba80c0f1f6c4596b5703d7ef923d
Author: Michael Goin <michael@neuralmagic.com>
Date:   Thu Sep 26 19:18:14 2024 -0400

    [Misc] Change dummy profiling and BOS fallback warns to log once (#8820)

commit 93d364da3406f5523e5e4772ffbc3c72dac7bbf4
Author: Pernekhan Utemuratov <pernekhan@deepinfra.com>
Date:   Thu Sep 26 15:47:00 2024 -0700

    [Bugfix] Include encoder prompts len to non-stream api usage response (#8861)

commit d9cfbc891e2e1d62d74c7aae93bde436a29bd574
Author: Kevin H. Luu <kevin@anyscale.com>
Date:   Thu Sep 26 15:02:16 2024 -0700

    [ci] Soft fail Entrypoints, Samplers, LoRA, Decoder-only VLM (#8872)

    Signed-off-by: kevin <kevin@anyscale.com>

commit 70de39f6b46f6b90aecba52358825127a50b3921
Author: youkaichao <youkaichao@gmail.com>
Date:   Thu Sep 26 13:19:04 2024 -0700

    [misc][installation] build from source without compilation (#8818)

commit 68988d4e0d8765901c51f07f9bfbda58f35f6f63
Author: fyuan1316 <yuanfang@alauda.io>
Date:   Fri Sep 27 02:04:39 2024 +0800

    [CI/Build] Fix missing ci dependencies (#8834)

commit 520db4dbc10cfc60be65e85ff4ef3a6aeeeb7836
Author: Michael Goin <michael@neuralmagic.com>
Date:   Thu Sep 26 14:02:52 2024 -0400

    [Docs] Add README to the build docker image (#8825)

commit f70bccac75a0aecc0a5fc934859158a3e1f019a5
Author: Tyler Michael Smith <tyler@neuralmagic.com>
Date:   Thu Sep 26 13:07:18 2024 -0400

    [Build/CI] Upgrade to gcc 10 in the base build Docker image (#8814)

commit 4bb98f2190aaf408cb063df5184829fb54ee5f81
Author: Roger Wang <136131678+ywang96@users.noreply.github.com>
Date:   Thu Sep 26 07:45:30 2024 -0700

    [Misc] Update config loading for Qwen2-VL and remove Granite (#8837)

commit 7193774b1ff8603ad5bf4598e5efba0d9a39b436
Author: Michael Goin <michael@neuralmagic.com>
Date:   Wed Sep 25 17:46:22 2024 -0400

    [Misc] Support quantization of MllamaForCausalLM (#8822)

commit e2c6e0a8291126c868b669f631837c7781646fdc
Author: Roger Wang <136131678+ywang96@users.noreply.github.com>
Date:   Wed Sep 25 13:29:48 2024 -0700

    [Doc] Update doc for Transformers 4.45 (#8817)

commit 770ec6024fc00cd696899f5c6fdc53b7148876e6
Author: Chen Zhang <zhangch99@outlook.com>
Date:   Wed Sep 25 13:29:32 2024 -0700

    [Model] Add support for the multi-modal Llama 3.2 model (#8811)

    Co-authored-by: simon-mo <xmo@berkeley.edu>
    Co-authored-by: Chang Su <chang.s.su@oracle.com>
    Co-authored-by: Simon Mo <simon.mo@hey.com>
    Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
    Co-authored-by: Roger Wang <ywang@roblox.com>

commit 4f1ba0844b83b4e7d0ff1672b7ba502ce8732f95
Author: Simon Mo <simon.mo@hey.com>
Date:   Wed Sep 25 10:36:26 2024 -0700

    Revert "rename PromptInputs and inputs with backward compatibility (#8760) (#8810)

commit 873edda6cf8a2902e8b08eea0bf8f8f6d73704a8
Author: Michael Goin <michael@neuralmagic.com>
Date:   Wed Sep 25 12:43:36 2024 -0400

    [Misc] Support FP8 MoE for compressed-tensors (#8588)

commit 64840dfae48621c5c2004eb8f1cb7fba49f9b24e
Author: 科英 <abatom@163.com>
Date:   Thu Sep 26 00:37:41 2024 +0800

    [Frontend] MQLLMEngine supports profiling. (#8761)

commit 28e1299e60e565a56a2db41396380f74b8d29e57
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Thu Sep 26 00:36:47 2024 +0800

    rename PromptInputs and inputs with backward compatibility (#8760)

commit 0c4d2ad5e641de145682674066a84ffc632e714e
Author: DefTruth <31974251+DefTruth@users.noreply.github.com>
Date:   Thu Sep 26 00:35:53 2024 +0800

    [VLM][Bugfix] internvl with num_scheduler_steps > 1 (#8614)

commit c6f2485c823b5cd76cca70798e653c6eadb811de
Author: Jee Jee Li <pandaleefree@gmail.com>
Date:   Thu Sep 26 00:35:23 2024 +0800

    [[Misc]] Add extra deps for openai server image (#8792)

commit 300da09177477d0a4d2b55790addefd971f52ae0
Author: bnellnm <49004751+bnellnm@users.noreply.github.com>
Date:   Wed Sep 25 10:35:52 2024 -0400

    [Kernel] Fullgraph and opcheck tests (#8479)

commit 1c046447a6d1ac3c99b9f453796f0d355d673deb
Author: Hongxia Yang <62075498+hongxiayang@users.noreply.github.com>
Date:   Wed Sep 25 10:26:37 2024 -0400

    [CI/Build][Bugfix][Doc][ROCm] CI fix and doc update after ROCm 6.2 upgrade (#8777)

commit 8fae5ed7f6bfd63b81310fcb24b310d9205c9687
Author: Woo-Yeon Lee <wooyeon0.lee@samsung.com>
Date:   Wed Sep 25 16:53:03 2024 +0900

    [Misc] Fix minor typo in scheduler (#8765)

commit 3368c3ab36436af1342a3156971412e9efdb6419
Author: David Newman <darthhexx@gmail.com>
Date:   Wed Sep 25 17:52:26 2024 +1000

    [Bugfix] Ray 2.9.x doesn't expose available_resources_per_node (#8767)

    Signed-off-by: darthhexx <darthhexx@gmail.com>

commit 1ac3de09cd87290f7494ce6337623d6edd3f8667
Author: Adam Tilghman <agt@ucsd.edu>
Date:   Wed Sep 25 00:49:26 2024 -0700

    [Frontend] OpenAI server: propagate usage accounting to FastAPI middleware layer (#8672)

commit 3e073e66f1790f7ce339dad71514983e6e402f30
Author: sohamparikh <sohamparikh47@gmail.com>
Date:   Wed Sep 25 02:16:30 2024 -0400

    [Bugfix] load fc bias from config for eagle (#8790)

commit c23953675f78bc85045d66fa98aea7d0581c2167
Author: Isotr0py <2037008807@qq.com>
Date:   Wed Sep 25 14:16:11 2024 +0800

    [Hardware][CPU] Enable mrope and support Qwen2-VL on CPU backend (#8770)

commit e3dd0692fa2c803cd6f59a88d2fdf8bca26d8d96
Author: zifeitong <zifeitong@gmail.com>
Date:   Tue Sep 24 22:53:43 2024 -0700

    [BugFix] Propagate 'trust_remote_code' setting in internvl and minicpmv (#8250)

commit fc3afc20df410dd523f94967b98836084f561ab7
Author: sroy745 <142070531+sroy745@users.noreply.github.com>
Date:   Tue Sep 24 21:26:36 2024 -0700

    Fix tests in test_chunked_prefill_scheduler which fail with BlockManager V2 (#8752)

commit b4522474a32b6e0bf5573a9b6a6830cb787dfb63
Author: sasha0552 <admin@sasha0552.org>
Date:   Wed Sep 25 04:26:33 2024 +0000

    [Bugfix][Kernel] Implement acquire/release polyfill for Pascal (#8776)

commit ee777d9c30418ffa9d98f98dd27c0ddea346c49c
Author: sroy745 <142070531+sroy745@users.noreply.github.com>
Date:   Tue Sep 24 21:26:18 2024 -0700

    Fix test_schedule_swapped_simple in test_scheduler.py (#8780)

commit 6e0c9d6bd07464b311eb098e2dac8196eed16721
Author: Joe Runde <Joseph.Runde@ibm.com>
Date:   Tue Sep 24 21:37:38 2024 -0600

    [Bugfix] Use heartbeats instead of health checks (#8583)

commit 6da1ab6b4134d76391a0c31a048e5d04b6283769
Author: Archit Patke <apatke@illinois.edu>
Date:   Tue Sep 24 21:50:50 2024 -0500

    [Core] Adding Priority Scheduling (#5958)

commit 01b6f9e1f0530a7cb81486ff34d3d935e4f75d28
Author: Travis Johnson <tsjohnso@us.ibm.com>
Date:   Tue Sep 24 18:29:56 2024 -0600

    [Core][Bugfix] Support prompt_logprobs returned with speculative decoding (#8047)

    Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>

commit 13f9f7a3d0373421ee9fd7498e450214e134aa6c
Author: Jee Jee Li <pandaleefree@gmail.com>
Date:   Wed Sep 25 08:08:55 2024 +0800

    [[Misc]Upgrade bitsandbytes to the latest version 0.44.0 (#8768)

commit 1e7d5c01f5c35424eede1bbe6f723dd8781120f0
Author: youkaichao <youkaichao@gmail.com>
Date:   Tue Sep 24 15:48:39 2024 -0700

    [misc] soft drop beam search (#8763)

commit 2467b642dd9bde32a334fe5967efd78a53aa49da
Author: Daniele <36171005+dtrifiro@users.noreply.github.com>
Date:   Tue Sep 24 21:38:12 2024 +0200

    [CI/Build] fix setuptools-scm usage (#8771)

commit 72fc97a0f100b92f1ff6c6a16e27d12f1c7569aa
Author: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Date:   Tue Sep 24 14:33:21 2024 -0400

    [Bugfix] Fix torch dynamo fixes caused by `replace_parameters` (#8748)

commit 2529d09b5a4a124a316b6976e7d782f54e0bddde
Author: Andy <37781802+aandyw@users.noreply.github.com>
Date:   Tue Sep 24 12:44:11 2024 -0400

    [Frontend] Batch inference for llm.chat() API  (#8648)

    Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
    Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
    Co-authored-by: Roger Wang <ywang@roblox.com>
    Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>

commit a928ded99519f803d4cf6389df6acc707239a5cc
Author: ElizaWszola <eliza@neuralmagic.com>
Date:   Tue Sep 24 18:31:42 2024 +0200

    [Kernel] Split Marlin MoE kernels into multiple files (#8661)

    Co-authored-by: mgoin <michael@neuralmagic.com>

commit cc4325b66ac49e403ed9e1a8c38156a5324e1174
Author: Hanzhi Zhou <hanzhi713@gmail.com>
Date:   Tue Sep 24 01:08:14 2024 -0700

    [Bugfix] Fix potentially unsafe custom allreduce synchronization (#8558)

commit 8ff7ced996d5dc8b682913471f36c9fefb0e843f
Author: Alex Brooks <alex.brooks@ibm.com>
Date:   Tue Sep 24 01:36:46 2024 -0600

    [Model] Expose Phi3v num_crops as a mm_processor_kwarg (#8658)

    Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
    Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
    Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

commit 3f06bae9079ee495a34cfadcd9c1ef2a23636084
Author: Peter Salas <peter@fixie.ai>
Date:   Tue Sep 24 00:14:15 2024 -0700

    [Core][Model] Support loading weights by ID within models (#7931)

commit b8747e8a7c318ab774862f94ccbdbba5b7d9dd4a
Author: Cody Yu <hao.yu.cody@gmail.com>
Date:   Mon Sep 23 23:10:03 2024 -0700

    [MISC] Skip dumping inputs when unpicklable (#8744)

commit 3185fb0ccae73816018d0936c03171b7cf1ba2f8
Author: Simon Mo <simon.mo@hey.com>
Date:   Mon Sep 23 22:45:20 2024 -0700

    Revert "[Core] Rename `PromptInputs` to `PromptType`, and `inputs` to `prompt`" (#8750)

commit 0250dd68c5df12ead29d2ec7d922855c9a257b06
Author: youkaichao <youkaichao@gmail.com>
Date:   Mon Sep 23 22:08:12 2024 -0700

    re-implement beam search on top of vllm core (#8726)

    Co-authored-by: Brendan Wong <bjwpokemon@gmail.com>

commit 88577ac92808cfd9468e4b54b757d5fcbe9aa486
Author: sroy745 <142070531+sroy745@users.noreply.github.com>
Date:   Mon Sep 23 21:43:13 2024 -0700

    Fix tests in test_scheduler.py that fail with BlockManager V2 (#8728)

commit 530821d00cb2beeb8dc62f74f0e4e0003868dc93
Author: Hongxia Yang <62075498+hongxiayang@users.noreply.github.com>
Date:   Mon Sep 23 21:52:39 2024 -0400

    [Hardware][AMD] ROCm6.2 upgrade (#8674)

commit 1a2aef3e59f5429299618bd3b242833cb377f554
Author: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com>
Date:   Mon Sep 23 18:38:04 2024 -0400

    Add output streaming support to multi-step + async while ensuring RequestOutput obj reuse (#8335)

commit 5f7bb584272ee15147a411b887e7ababd6b9b9d0
Author: jiqing-feng <107918818+jiqing-feng@users.noreply.github.com>
Date:   Tue Sep 24 03:32:27 2024 +0800

    Fix typical acceptance sampler with correct recovered token ids (#8562)

commit b05f5c9238c3e0c3a98080b4ffc90acfa33f9e1f
Author: Russell Bryant <rbryant@redhat.com>
Date:   Mon Sep 23 15:15:41 2024 -0400

    [Core] Allow IPv6 in VLLM_HOST_IP with zmq (#8575)

    Signed-off-by: Russell Bryant <rbryant@redhat.com>

commit 9b0e3ec970f6a19427be358848a2ed663fd735e1
Author: Jee Jee Li <pandaleefree@gmail.com>
Date:   Tue Sep 24 02:57:42 2024 +0800

    [Kernel][LoRA]  Add assertion for punica sgmv kernels (#7585)

commit 86e9c8df29a954a7a2fc46e9985fecc2a2e15ae8
Author: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Date:   Mon Sep 23 13:46:26 2024 -0400

    [Kernel] (2/N) Machete - Integrate into CompressedTensorsWNA16 and GPTQMarlin (#7701)

    Co-authored-by: mgoin <michael@neuralmagic.com>
    Co-authored-by: Divakar Verma <137818590+divakar-amd@users.noreply.github.com>
    Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

commit ee5f34b1c2c71b2d56054a5ca23fe1c50c1458bb
Author: Daniele <36171005+dtrifiro@users.noreply.github.com>
Date:   Mon Sep 23 18:44:26 2024 +0200

    [CI/Build] use setuptools-scm to set __version__ (#4738)

    Co-authored-by: youkaichao <youkaichao@126.com>

commit f2bd246c17ba67d7749a2560a30711f74cd19177
Author: Jani Monoses <jani.monoses@gmail.com>
Date:   Mon Sep 23 17:43:09 2024 +0300

    [VLM] Fix paligemma, fuyu and persimmon with transformers 4.45 : use config.text_config.vocab_size (#8707)

commit a79e5229843e2800956956d0668b1b4858dbb61e
Author: Yanyi Liu <wolfsonliu@163.com>
Date:   Mon Sep 23 21:46:59 2024 +0800

    [Model] Support pp for qwen2-vl (#8696)

commit 3e83c12b5caa466bf533b144a9ec7944a9ce9d49
Author: Li, Jiang <jiang1.li@intel.com>
Date:   Mon Sep 23 21:15:16 2024 +0800

    [Bugfix][CPU] fix missing input intermediate_tensors in the cpu_model_runner (#8733)

commit e551ca1555b64ba1ecb2310ea658f3e25c62571d
Author: Isotr0py <2037008807@qq.com>
Date:   Mon Sep 23 20:12:20 2024 +0800

    [Hardware][CPU] Refactor CPU model runner (#8729)

commit 9b8c8ba1198cbcd311d28b7647f0f8d5dcdc9212
Author: Alex Brooks <alex.brooks@ibm.com>
Date:   Mon Sep 23 01:44:48 2024 -0600

    [Core][Frontend] Support Passing Multimodal Processor Kwargs (#8657)

    Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

commit d23679eb9960ad2a876b88ebd0028dbe55c3172a
Author: Yan Ma <yan.ma@intel.com>
Date:   Mon Sep 23 13:54:18 2024 +0800

    [Bugfix] fix docker build for xpu (#8652)

commit 57a0702e63d9dc477ab7a82e686a30d14fb6c69d
Author: Luka Govedič <ProExpertProg@users.noreply.github.com>
Date:   Sun Sep 22 23:40:46 2024 -0400

    [Bugfix] Fix CPU CMake build (#8723)

    Co-authored-by: Yuan <yuan.zhou@intel.com>

commit 3dda7c22502033854e963fef3826c1f64627e33b
Author: Tyler Michael Smith <tyler@neuralmagic.com>
Date:   Sun Sep 22 22:24:59 2024 -0400

    [Bugfix] Avoid some bogus messages RE CUTLASS's revision when building (#8702)

commit 92ba7e7477619ec81464ccb64a17226f3d5047bb
Author: youkaichao <youkaichao@gmail.com>
Date:   Sun Sep 22 15:41:59 2024 -0700

    [misc] upgrade mistral-common (#8715)

commit d4a2ac830291305f202a85e157bff3a07b58e616
Author: youkaichao <youkaichao@gmail.com>
Date:   Sun Sep 22 12:47:54 2024 -0700

    [build] enable existing pytorch (for GH200, aarch64, nightly) (#8713)

commit c6bd70d7728b50f358cb5cb6e66e02b75aeb3d20
Author: Lily Liu <lilyliupku@gmail.com>
Date:   Sun Sep 22 12:34:14 2024 -0700

    [SpecDec][Misc] Cleanup, remove bonus token logic. (#8701)

commit 5b59532760c82a9d91f65a3e227524da2af7d4ef
Author: litianjian <45817262+litianjian@users.noreply.github.com>
Date:   Mon Sep 23 01:51:44 2024 +0800

    [Model][VLM] Add LLaVA-Onevision model support (#8486)

    Co-authored-by: litianjian <litianjian@bytedance.com>
    Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
    Co-authored-by: Roger Wang <ywang@roblox.com>
    Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

commit ca2b628b3c25b014b9951731c0331b75262a59e0
Author: Huazhong Ji <hzji210@gmail.com>
Date:   Mon Sep 23 01:44:09 2024 +0800

    [MISC] rename CudaMemoryProfiler to DeviceMemoryProfiler (#8703)

commit 8ca5051b9afb6f8d2b3ae1b71d45d84e5d1c6f57
Author: Alex Brooks <alex.brooks@ibm.com>
Date:   Sun Sep 22 06:56:20 2024 -0600

    [Misc] Use NamedTuple in Multi-image example (#8705)

    Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

commit 06ed2815e2be50e527839c7ab09ce2639b7910b6
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Sun Sep 22 20:24:21 2024 +0800

    [Model] Refactor BLIP/BLIP-2 to support composite model loading (#8407)

commit 0e40ac9b7b5d953dfe38933bc7d2fb0a6c8da53c
Author: youkaichao <youkaichao@gmail.com>
Date:   Sat Sep 21 23:24:58 2024 -0700

    [ci][build] fix vllm-flash-attn (#8699)

commit 13d88d4137f97b8cf3c79f39d7df5e4c8348603a
Author: Isotr0py <2037008807@qq.com>
Date:   Sun Sep 22 12:33:27 2024 +0800

    [Bugfix] Refactor composite weight loading logic (#8656)

commit d66ac62854e04c8fda83506dc93ef7971ebf593a
Author: Tyler Michael Smith <tyler@neuralmagic.com>
Date:   Sat Sep 21 19:45:02 2024 -0400

    [Kernel][Bugfix] Delete some more useless code in marlin_moe_ops.cu (#8643)

commit 9dc7c6c7f332ac6c08311c7a946c6945e0782701
Author: Divakar Verma <137818590+divakar-amd@users.noreply.github.com>
Date:   Sat Sep 21 16:09:39 2024 -0500

    [dbrx] refactor dbrx experts to extend FusedMoe class (#8518)

commit ec4aaad8124baadc7954e30c612ca9444b22d7e7
Author: rasmith <Randall.Smith@amd.com>
Date:   Sat Sep 21 04:20:54 2024 -0500

    [Kernel][Triton][AMD] Remove tl.atomic_add from awq_gemm_kernel, 2-5x speedup MI300, minor improvement for MI250 (#8646)

commit 4dfdf4319676c3dca72cdfba20470ac76d0cadf4
Author: Andy Dai <76841985+Imss27@users.noreply.github.com>
Date:   Sat Sep 21 00:24:12 2024 -0700

    [Doc] Fix typo in AMD installation guide (#8689)

commit 5e85f4f82a5b6eaad6869198d6ac76a0c12cf6d0
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Sat Sep 21 14:28:56 2024 +0800

    [VLM] Use `SequenceData.from_token_counts` to create dummy data (#8687)

commit 71c60491f287d8a23bed1743513b4b3e7927c69e
Author: Luka Govedič <ProExpertProg@users.noreply.github.com>
Date:   Sat Sep 21 02:27:10 2024 -0400

    [Kernel] Build flash-attn from source (#8245)

commit 0faab90eb006c677add65cd4c2d0f740a63e064d
Author: youkaichao <youkaichao@gmail.com>
Date:   Fri Sep 20 19:55:33 2024 -0700

    [beam search] add output for manually checking the correctness (#8684)

commit 0455c46ed434d70f0a6219204e89ee04f1d01336
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Sat Sep 21 10:30:39 2024 +0800

    [Core] Factor out common code in `SequenceData` and `Sequence` (#8675)

commit d4bf085ad064ba68a77862e2022f37c33a66e94a
Author: Kunshang Ji <kunshang.ji@intel.com>
Date:   Sat Sep 21 10:03:55 2024 +0800

    [MISC] add support custom_op check (#8557)

    Co-authored-by: youkaichao <youkaichao@126.com>

commit 0057894ef7f8db0d51385aa7254219d7fbd6c784
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Sat Sep 21 10:00:54 2024 +0800

    [Core] Rename `PromptInputs` and `inputs`(#8673)

commit 0f961b3ce9ac3d3fd13e201c4358884bc094905e
Author: zyddnys <zyddnys@outlook.com>
Date:   Fri Sep 20 18:48:32 2024 -0400

    [Bugfix] Fix incorrect llava next feature size calculation (#8496)

commit 7f9c8902e3d50a9d715b38e0531280a58d2bbe14
Author: omrishiv <327609+omrishiv@users.noreply.github.com>
Date:   Fri Sep 20 15:19:44 2024 -0700

    [Hardware][AWS] update neuron to 2.20 (#8676)

    Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

commit 7c8566aa4ff16b79a576436fbb50f03643febf07
Author: omrishiv <327609+omrishiv@users.noreply.github.com>
Date:   Fri Sep 20 15:04:37 2024 -0700

    [Doc] neuron documentation update (#8671)

    Signed-off-by: omrishiv <327609+omrishiv@users.noreply.github.com>

commit b4e4eda92e1d3a013fc4007db64b69d8604264ff
Author: Patrick von Platen <patrick.v.platen@gmail.com>
Date:   Fri Sep 20 23:33:03 2024 +0200

    [Bugfix][Core] Fix tekken edge case for mistral tokenizer (#8640)

commit 2874bac618052a079efd837fc82cf3f3519079c7
Author: Pastel! <1627301104@qq.com>
Date:   Sat Sep 21 05:00:45 2024 +0800

    [Bugfix] Config got an unexpected keyword argument 'engine' (#8556)

commit 035fa895ecedea87810889aabbe50ba8a2ad7d5d
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Sat Sep 21 04:52:19 2024 +0800

    [Misc] Show AMD GPU topology in `collect_env.py` (#8649)

commit b28298f2f4bd4ec6d1020c10b923a9eb7993dc89
Author: saumya-saran <saumya.saran@c3.ai>
Date:   Fri Sep 20 12:46:02 2024 -0700

    [Bugfix] Validate SamplingParam n is an int (#8548)

commit 2940afa04e39fa9f248c565687d9a2acf7401355
Author: Alexey Kondratiev(AMD) <143633163+alexeykondrat@users.noreply.github.com>
Date:   Fri Sep 20 13:27:44 2024 -0400

    [CI/Build] Removing entrypoints/openai/test_embedding.py test from ROCm build (#8670)

commit 3b63de9353ce51ba6c1c167ae8d4b87b8bcf9c9e
Author: Niklas Muennighoff <n.muennighoff@gmail.com>
Date:   Fri Sep 20 09:31:41 2024 -0700

    [Model] Add OLMoE (#7922)

commit 260d40b5ea48df9421325388abcc8d907a560fc5
Author: Jiaxin Shan <seedjeffwan@gmail.com>
Date:   Thu Sep 19 23:20:56 2024 -0700

    [Core] Support Lora lineage and base model metadata management (#6315)

commit 9e5ec35b1f8239453b1aaab28e7a02307db4ab1f
Author: William Lin <SolitaryThinker@users.noreply.github.com>
Date:   Thu Sep 19 20:49:54 2024 -0700

    [bugfix] [AMD] add multi-step advance_step to ROCmFlashAttentionMetadata (#8474)

commit 18ae428a0d8792d160d811a9cd5bb004d68ea8bd
Author: Amit Garg <mitgarg17495@gmail.com>
Date:   Thu Sep 19 17:54:02 2024 -0700

    [Bugfix] Fix Phi3.5 mini and MoE LoRA inference (#8571)

commit de6f90a13d7b98c4958ba107ec16cb6f95efb10f
Author: bnellnm <49004751+bnellnm@users.noreply.github.com>
Date:   Thu Sep 19 18:36:30 2024 -0400

    [Misc] guard against change in cuda library name (#8609)

commit 6cb748e190a94e20987314025614b8bd806602f2
Author: Alexey Kondratiev(AMD) <143633163+alexeykondrat@users.noreply.github.com>
Date:   Thu Sep 19 16:06:32 2024 -0400

    [CI/Build] Re-enabling Entrypoints tests on ROCm, excluding ones that fail (#8551)

commit 9e99407e3ccbb290bae77af230da38c70a52a055
Author: Simon Mo <simon.mo@hey.com>
Date:   Thu Sep 19 12:16:28 2024 -0700

    Create SECURITY.md (#8642)

commit ea4647b7d77c4738c5ed2ab77a2c9f5ad335f6fb
Author: Isotr0py <2037008807@qq.com>
Date:   Fri Sep 20 03:15:55 2024 +0800

    [Doc] Add documentation for GGUF quantization (#8618)

commit e42c634acbd1b86b5becca51e8b8108a32a438d5
Author: 盏一 <w@hidva.com>
Date:   Fri Sep 20 02:28:25 2024 +0800

    [Core] simplify logits resort in _apply_top_k_top_p (#8619)

commit 9cc373f39036af789fb1ffc1e06b23766996d3f4
Author: Charlie Fu <charlifu@amd.com>
Date:   Thu Sep 19 12:37:57 2024 -0500

    [Kernel][Amd] Add fp8 kv cache support for rocm custom paged attention (#8577)

commit 76515f303b44cb3ffc6de63c49148d5081a77119
Author: Nick Hill <nickhill@us.ibm.com>
Date:   Thu Sep 19 17:51:06 2024 +0100

    [Frontend] Use MQLLMEngine for embeddings models too (#8584)

commit 855c8ae2c9a4085b1ebd66d9a978fb23f47f822c
Author: Kunshang Ji <kunshang.ji@intel.com>
Date:   Thu Sep 19 13:33:20 2024 +0800

    [MISC] remove engine_use_ray in benchmark_throughput.py (#8615)

commit c52ec5f03471008fa1312d82fb17d40b95a3ca5d
Author: Kuntai Du <kuntai@uchicago.edu>
Date:   Wed Sep 18 22:24:24 2024 -0700

    [Bugfix] fixing sonnet benchmark bug in benchmark_serving.py (#8616)

commit 02c9afa2d04a85269faa2760e9af30527a61d7f6
Author: Roger Wang <136131678+ywang96@users.noreply.github.com>
Date:   Wed Sep 18 21:14:28 2024 -0700

    Revert "[Misc][Bugfix] Disable guided decoding for mistral tokenizer" (#8593)

commit 3118f63385c0d767fba8b6d2039fc35440678da9
Author: sroy745 <142070531+sroy745@users.noreply.github.com>
Date:   Wed Sep 18 19:24:15 2024 -0700

    [Bugfix] [Encoder-Decoder] Bugfix for encoder specific metadata construction during decode of encoder-decoder models.  (#8545)

commit 4c34ce8916da0e4967eadefcb7f91eb58dd7ac61
Author: Tyler Michael Smith <tyler@neuralmagic.com>
Date:   Wed Sep 18 21:42:49 2024 -0400

    [Kernel] Remove marlin moe templating on thread_m_blocks (#8573)

    Co-authored-by: lwilkinson@neuralmagic.com

commit 0d47bf3bf40edfe9fcfd7e5cd909388497535bc5
Author: Joe Runde <Joseph.Runde@ibm.com>
Date:   Wed Sep 18 16:10:01 2024 -0600

    [Bugfix] add `dead_error` property to engine client (#8574)

    Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>

commit d9cd78eb718c233ebc5b84377fc2226af7ef0fa2
Author: Nick Hill <nickhill@us.ibm.com>
Date:   Wed Sep 18 21:17:55 2024 +0100

    [BugFix] Nonzero exit code if MQLLMEngine startup fails (#8572)

commit db9120cdedba5033037432775417df0b6117495d
Author: Tyler Michael Smith <tyler@neuralmagic.com>
Date:   Wed Sep 18 16:05:06 2024 -0400

    [Kernel] Change interface to Mamba selective_state_update for continuous batching (#8039)

commit b3195bc9e4d57b6107af2222afea26c51475e262
Author: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Date:   Wed Sep 18 13:41:08 2024 -0400

    [AMD][ROCm]Quantization methods on ROCm; Fix _scaled_mm call (#8380)

    Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
    Co-authored-by: Michael Goin <michael@neuralmagic.com>

commit e18749ff09c277f7cdab278895ebdd9b1041b6e8
Author: Geun, Lim <shing100@Naver.com>
Date:   Thu Sep 19 02:04:00 2024 +0900

    [Model] Support Solar Model (#8386)

    Co-authored-by: Michael Goin <michael@neuralmagic.com>

commit d65798f78c76f03f068fc2f69a68cff430ee6b6f
Author: Russell Bryant <rbryant@redhat.com>
Date:   Wed Sep 18 12:10:27 2024 -0400

    [Core] zmq: bind only to 127.0.0.1 for local-only usage (#8543)

    Signed-off-by: Russell Bryant <rbryant@redhat.com>

commit a8c1d161a7d87dbc6c7cccfce303dcbe2e4ed6be
Author: afeldman-nm <156691304+afeldman-nm@users.noreply.github.com>
Date:   Wed Sep 18 11:38:43 2024 -0400

    [Core] *Prompt* logprobs support in Multi-step (#8199)

commit 7c7714d856eee6fa94aade729b67f00584f72a4c
Author: Alexander Matveev <59768536+alexm-neuralmagic@users.noreply.github.com>
Date:   Wed Sep 18 09:56:58 2024 -0400

    [Core][Bugfix][Perf] Introduce `MQLLMEngine` to avoid `asyncio` OH (#8157)

    Co-authored-by: Nick Hill <nickhill@us.ibm.com>
    Co-authored-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
    Co-authored-by: Robert Shaw <114415538+robertgshaw2-neuralmagic@users.noreply.github.com>
    Co-authored-by: Simon Mo <simon.mo@hey.com>

commit 9d104b5beb7bbb51c64b680e007f39169489ea86
Author: Aaron Pham <contact@aarnphm.xyz>
Date:   Wed Sep 18 07:00:56 2024 -0400

    [CI/Build] Update Ruff version (#8469)

    Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
    Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

commit 6ffa3f314c59e42238f1c5f923ff2839e0af9698
Author: Cyrus Leung <tlleungac@connect.ust.hk>
Date:   Wed Sep 18 18:38:11 2024 +0800

    [CI/Build] Avoid CUDA initialization (#8534)

commit e351572900f7d87e14fe203ea3a49c1c7ddae0d6
Author: Jiaxin Shan <seedjeffwan@gmail.com>
Date:   Wed Sep 18 02:51:59 2024 -0700

    [Misc] Add argument to disable FastAPI docs (#8554)

commit 95965d31b6ac2c9557816a6ffabe4a3117a5ccb2
Author: Daniele <36171005+dtrifiro@users.noreply.github.com>
Date:   Wed Sep 18 04:49:53 2024 +0200

    [CI/Build] fix Dockerfile.cpu on podman (#8540)

commit 8110e44529f431d54b02060528601c0d3e3f7d02
Author: Tyler Michael Smith <tyler@neuralmagic.com>
Date:   Tue Sep 17 19:44:27 2024 -0400

    [Kernel] Change interface to Mamba causal_conv1d_update for continuous batching (#8012)

commit 09deb4721f830602d0417604c7e18b7e384f9594
Author: Alexey Kondratiev(AMD) <143633163+alexeykondrat@users.noreply.github.com>
Date:   Tue Sep 17 19:40:29 2024 -0400

    [CI/Build] Excluding kernels/test_gguf.py from ROCm (#8520)

commit fa0c114fad4e2b807503e78d5110558cfee92ba4
Author: youkaichao <youkaichao@gmail.com>
Date:   Tue Sep 17 16:24:06 2024 -0700

    [doc] improve installation doc (#8550)

    Co-authored-by: Andy Dai <76841985+Imss27@users.noreply.github.com>

commit 98f9713399bd602ff954a83e6e6abcb4cf8b8864
Author: Joe Runde <Joseph.Runde@ibm.com>
Date:   Tue Sep 17 17:17:08 2024 -0600

    [Bugfix] Fix TP > 1 for new granite (#8544)

    Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>

commit 56c3de018c35580fd088655c2f9951cd4da5335d
Author: Nick Hill <nickhill@us.ibm.com>
Date:   Tue Sep 17 20:24:29 2024 +0100

    [Misc] Don't dump contents of kvcache tensors on errors (#8527)

commit a54ed8024953dc6b59906072a7a89cd4791ec4f0
Author: Patrick von Platen <patrick.v.platen@gmail.com>
Date:   Tue Sep 17 19:50:37 2024 +0200

    [Model] Add mistral function calling format to all models loaded with "mistral" format (#8515)

    Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

commit 9855b99502c7537db5ef018129e603650800ac46
Author: chenqianfzh <51831990+chenqianfzh@users.noreply.github.com>
Date:   Tue Sep 17 08:09:12 2024 -0700

    [Feature][kernel] tensor parallelism with bitsandbytes quantization (#8434)

commit 1009e93c5d634c724eeff3d4e453369337f502d4
Author: sroy745 <142070531+sroy745@users.noreply.github.com>
Date:   Tue Sep 17 07:35:01 2024 -0700

    [Encoder decoder] Add cuda graph support during decoding for encoder-decoder models (#7631)

commit 1b6de8352b878348974b3f117cbb68ed18daa609
Author: Isotr0py <2037008807@qq.com>
Date:   Tue Sep 17 15:34:27 2024 +0800

    [Benchmark] Support sample from HF datasets and image input for benchmark_serving (#8495)

commit cbdb25225914a04d94e8830f4e739faca8ff3b9d
Author: Rui Qiao <161574667+ruisearch42@users.noreply.github.com>
Date:   Tue Sep 17 00:06:26 2024 -0700

    [Misc] Limit to ray[adag] 2.35 to avoid backward incompatible change…