[Model] Phi-3 4k sliding window temp. fix #4380
Conversation
@caiom Should we apply this padding in
For a window size of 2047 we have:
This is true for all models that use a sliding window, regardless of this PR. For Mistral, block_manager_v1 will think the window size is smaller than it actually is (with the flash backend): the window size of mistralai/Mistral-7B-v0.1 is 4096, so the flash backend will use a window of 4097 tokens, while block_manager_v1 assumes 4096 tokens. For Phi-3, with this PR, block_manager_v1 may assume a larger number of tokens than the actual number. For instance, with a window size of 2047, the PR adds one, so block_manager_v1 sees 2048 tokens; with sdpa, however, that number is 2047. So there is a mismatch between block_manager_v1 and the actual number of tokens in the window, in one direction or the other depending on the backend, for all models. Now, I'm not sure whether any of these mismatches are a significant problem, as I don't really know how block_manager_v1 actually works. Hopefully my explanation is good enough :)
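For illustration, here is a minimal sketch of the token counts being discussed (the "window + 1 for flash, window for sdpa" interpretation is taken from the comment above and is an assumption, not vLLM's actual code):

```python
def tokens_seen_by_backend(window: int, backend: str) -> int:
    # Assumption (from the comment above): the flash backend attends to the
    # current token plus `window` previous tokens (window + 1 tokens total),
    # while sdpa/xformers attend to `window` tokens total.
    return window + 1 if backend == "flash" else window


def tokens_assumed_by_block_manager(window: int, plus_one_fix: bool) -> int:
    # The original version of this PR padded the window by one token so that
    # it divides the 16-token block size evenly.
    return window + 1 if plus_one_fix else window


# Mistral-7B (window 4096): block_manager_v1 under-counts vs. the flash backend.
print(tokens_seen_by_backend(4096, "flash"), tokens_assumed_by_block_manager(4096, False))  # 4097 4096

# Phi-3-mini-4k (window 2047) with the +1 fix: block_manager_v1 over-counts vs. sdpa.
print(tokens_seen_by_backend(2047, "sdpa"), tokens_assumed_by_block_manager(2047, True))    # 2047 2048
```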
@caiom Thanks for your detailed explanation. Looks good to me. @cadedaniel WDYT?
What if instead we round up the sliding window in the block manager to the nearest block size?
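For illustration, rounding the window up to a whole multiple of the block size is simple integer arithmetic (a sketch with generic names, not vLLM's actual code):

```python
def round_up_to_block(sliding_window: int, block_size: int) -> int:
    # Round up to the nearest multiple of block_size (ceiling division).
    return ((sliding_window + block_size - 1) // block_size) * block_size


print(round_up_to_block(2047, 16))  # 2048 -> Phi-3-mini-4k fits in 128 blocks
print(round_up_to_block(4096, 16))  # 4096 -> Mistral-7B is already block-aligned
```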
@cadedaniel I had the same thought before, but it seems that there are a few references to
I'm guessing that for PagedAttention, what matters is what is in the blocks_table. The max context length for mistralai/Mistral-7B-v0.1 is 4097, but PagedAttention will consider only 4096 tokens. This is fine, since the model can handle 4096 tokens. For Phi-3, using this PR, PagedAttention has the correct/maximum number of tokens in context. So, if we are going to round, we should always round down so that we never feed more tokens than the model was trained for. Unless we are just rounding by 1, in which case the +1 in my PR should be safe.
I may be totally wrong, though.
So the block manager's concern here (and its only concern) should be allocating enough space for the sliding window. How the model uses that space is up to the model, so from a north-star perspective the block manager should not be involved in the minutiae of context length for the sliding window. It's OK if the model_runner uses a subset of the blocks for context. While this PR is a working hotfix, in design terms it couples the exact sliding window length used by attention closely with the block manager, which isn't necessary. Ideally we avoid this! Can we test out my suggestion? There may be some other coupling between the block manager and the exact context length that I don't know about.
Thanks for the review @cadedaniel. I have implemented your suggestion and tested microsoft/Phi-3-mini-4k-instruct using several long prompts; everything looks good.
LGTM, thanks! Small things:
- Can you update the PR description with the latest changes?
- Can you run a manual test with mistralai/Mistral-7B-v0.1? I tried but my node is borked at the moment.
Done! I tested the mistralai/Mistral-7B-Instruct-v0.1 model by inputting Wikipedia articles, asking it to create summaries, and then eyeballing the results. Everything looks good. Additionally, the rounding should not affect Mistral models.
Thanks for the fix! We can merge once CI finishes its run.
Hi, I see Phi-3 medium has been released. When I try it with vLLM, the framework logs that it is using xformers because of the sliding window. My vLLM is built from the latest source. Do you have any idea how to use flash-attn instead of xformers?
@ShadowTeamCN Same problem here for phi3-mini. Cannot use FlashAttention-2 backend due to sliding window.
Hey, wanted to check: is this problem fixed? Both my Mistral and Phi-3 models cannot use FA2, and I'm using the latest version of vLLM.
I am also unable to use flash attention with
Same problem here for Phi-3.5-vision. Cannot use FlashAttention-2 backend due to sliding window. |
The model microsoft/Phi-3-mini-4k-instruct uses a sliding window of size 2047 (which should result in 2048 tokens of context), but due to issue #3385 the model loading breaks:
File "/usr/local/lib/python3.10/dist-packages/vllm/core/block_manager_v1.py", line 223, in __init__ assert sliding_window % block_size == 0, (sliding_window, AssertionError: (2047, 16)
My idea is to land this quick fix so that people can use the model; I can also create a draft PR for a definitive solution. However, a definitive solution could be involved: the issue affects the xformers and sdpa backends but not the flash backend, and it also affects block_manager_v1, which is the part where I would need some help.
Let me know your thoughts.
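For context, a quick check of the numbers in the traceback (a 2047-token window, 16-token blocks) shows why the assert fires and why padding the window by one avoids it:

```python
sliding_window, block_size = 2047, 16
print(sliding_window % block_size)        # 15 -> the assert fails
print((sliding_window + 1) % block_size)  # 0  -> the +1 quick fix passes
```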
RELATED: #3385
Update:
After some discussion, we decided to change only `block_manager_v1` by removing the assert and rounding up `block_sliding_window`. This change has no effect anywhere else (`model_runner` or actual model code).

Models tested by feeding long prompts:
- microsoft/Phi-3-mini-4k-instruct
- mistralai/Mistral-7B-Instruct-v0.1
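For readers skimming the thread, the change can be summarized roughly as follows (a sketch based on the description above; the exact code in block_manager_v1 may differ):

```python
import math


def block_sliding_window(sliding_window: int, block_size: int) -> int:
    # Previously block_manager_v1 asserted sliding_window % block_size == 0
    # and divided exactly. Rounding up instead allocates enough blocks even
    # when the window is not block-aligned, and the model/attention code
    # decides how much of that space to actually use.
    return math.ceil(sliding_window / block_size)


print(block_sliding_window(2047, 16))  # 128 blocks (enough for 2048 tokens)
print(block_sliding_window(4096, 16))  # 256 blocks (unchanged for Mistral)
```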
BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE
PR Checklist (Click to Expand)
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- `[Bugfix]` for bug fixes.
- `[CI/Build]` for build or continuous integration improvements.
- `[Doc]` for documentation fixes and improvements.
- `[Model]` for adding a new model or improving an existing model. Model name should appear in the title.
- `[Frontend]` for changes on the vLLM frontend (e.g., OpenAI API server, `LLM` class, etc.)
- `[Kernel]` for changes affecting CUDA kernels or other compute kernels.
- `[Core]` for changes in the core vLLM logic (e.g., `LLMEngine`, `AsyncLLMEngine`, `Scheduler`, etc.)
- `[Hardware][Vendor]` for hardware-specific changes. Vendor name should appear in the prefix (e.g., `[Hardware][AMD]`).
- `[Misc]` for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:
- Please use `format.sh` to format your code.
- Please add documentation to `docs/source/` if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with `rfc-required` and might not go through the PR.

What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:
- An `action-required` label will be put on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.

Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!