[V1][Spec Decode] Ngram Spec Decode #12193
base: main
Conversation
Signed-off-by: LiuXiaoxuanPKU <lilyliupku@gmail.com>
Surely I'm late here, but why is a speculative-decoding-aware scheduler needed? Wouldn't it be possible to just assume multi-token generation per step as the default?
Because the scheduler has to know how many KV-cache slots are needed for each request. In v0 we use lookahead slots, which always allocate k lookahead slots for each request when spec decode is enabled. However, that is inefficient when we don't have k spec tokens for every request, which can happen, for example, when the proposer cannot come up with k draft tokens for a request.
So in this design for v1, we first get the spec tokens and then let the target-model scheduler allocate the exact number of slots accordingly.
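A rough sketch of the difference in slot accounting, not the PR's actual scheduler code; the function and numbers below are made up for illustration:

```python
def blocks_needed(num_computed_tokens: int, num_new_tokens: int,
                  num_draft_tokens: int, block_size: int) -> int:
    """KV-cache blocks a request needs for this step."""
    total = num_computed_tokens + num_new_tokens + num_draft_tokens
    return -(-total // block_size)  # ceil division


# v0-style lookahead: always reserve k slots per request (here k = 5).
print(blocks_needed(60, 1, 5, block_size=16))  # 5 blocks
# v1-style: the proposer only produced 2 draft tokens for this request,
# so the scheduler can allocate exactly what is needed.
print(blocks_needed(60, 1, 2, block_size=16))  # 4 blocks
```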
```python
self._spec_token_ids = []

@property
def spec_token_ids(self) -> List[int]:
```
This function should return a `ConstantList` to be read-only. See `output_token_ids` and `all_token_ids` as references.
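A minimal sketch of that suggestion, assuming the `ConstantList` helper lives in `vllm.v1.utils` (the exact import path may differ across versions); the trimmed-down `Request` class here is only for illustration:

```python
from typing import List

from vllm.v1.utils import ConstantList  # assumed location of the helper


class Request:
    def __init__(self) -> None:
        self._spec_token_ids: List[int] = []

    @property
    def spec_token_ids(self) -> ConstantList:
        # Wrap the internal list so callers cannot mutate it in place,
        # mirroring how output_token_ids and all_token_ids are exposed.
        return ConstantList(self._spec_token_ids)
```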
```python
# When calculating new full blocks, we exclude speculative tokens.
# We only cache blocks where token_ids are valid. KV cache of
# speculative tokens will be valid once these tokens are accepted
# (tracked by num_computed_tokens).
num_cached_tokens = request.num_computed_tokens + num_tokens - len(
    request.spec_token_ids)
num_full_blocks_after_append = num_cached_tokens // self.block_size
```
I just realized that the current `_cache_full_blocks` may not support caching the same block twice. It may have issues in an edge case (block size 4, k=3):
Step 1: [0, 1, 2, 3] + [4, S0, S1, S2]
- The first block is already cached in the last step.
- Assuming all spec tokens are rejected.
Step 2: [0, 1, 2, 3] + [4, 5, S0, S1] + [S2]
- `5` is the bonus token.
- num_cached_tokens = 5 + 1 - 3 = 3
- num_full_blocks_after_append = 3 // 4 = 0
- So you attempt to cache the first block again.
#12415 should fix this.
```python
logger = init_logger(__name__)


class RejectionSampler(nn.Module):
```
Can we use the FlashInfer kernel for rejection sampling?
Will add a TODO saying we can replace it with FlashInfer in the future.
I see, thanks a lot for elaborating @comaniac!
```diff
@@ -621,6 +663,8 @@ class SchedulerOutput:
     num_scheduled_tokens: Dict[str, int]
     total_num_scheduled_tokens: int
     scheduled_encoder_inputs: Dict[str, List[int]]
+    use_spec_decode: bool
+    scheduled_spec_decode_tokens: Dict[str, List[int]]
```
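For context, a hypothetical sketch of how a consumer (e.g. the model runner) might use the two new fields; `MiniSchedulerOutput` and everything except the field names from the diff above is made up:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MiniSchedulerOutput:
    num_scheduled_tokens: Dict[str, int]
    use_spec_decode: bool = False
    scheduled_spec_decode_tokens: Dict[str, List[int]] = field(default_factory=dict)


def tokens_to_run(out: MiniSchedulerOutput,
                  new_token_ids: Dict[str, List[int]]) -> Dict[str, List[int]]:
    """Per-request token ids to feed the target model this step."""
    per_req: Dict[str, List[int]] = {}
    for req_id in out.num_scheduled_tokens:
        tokens = list(new_token_ids.get(req_id, []))
        if out.use_spec_decode:
            # Draft tokens follow the verified tokens so the target model can
            # score them all in the same forward pass.
            tokens.extend(out.scheduled_spec_decode_tokens.get(req_id, []))
        per_req[req_id] = tokens
    return per_req
```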
Maybe a more general approach:
- Not specific to spec decode: add a function to the scheduler so it can schedule n new tokens (with token ids).
- Specific to spec decode: rejection sampling, i.e. some tokens might be rejected, plus some way of rewinding.
Thanks for the PR!
```diff
@@ -361,11 +363,15 @@ def make_sampling_metadata(
             # TODO - Replace this with incremental update to output token
             # statistics.
             output_token_ids.append(req_id_output_token_ids[req_id])
+            if rejection_sampling:
+                assert req_id_to_spec_token_ids is not None
+                spec_token_ids.append(req_id_to_spec_token_ids[req_id])
```
Is it possible that there are no speculations for a req_id? If so, do we need to do `req_id_to_spec_token_ids.get(req_id, [])`?
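For reference, a self-contained illustration of that defensive lookup (the dictionaries and request ids are hypothetical):

```python
from typing import Dict, List

req_id_to_spec_token_ids: Dict[str, List[int]] = {"req-0": [7, 8]}
spec_token_ids: List[List[int]] = []

for req_id in ["req-0", "req-1"]:
    # "req-1" has no proposals, so fall back to an empty list instead of a KeyError.
    spec_token_ids.append(req_id_to_spec_token_ids.get(req_id, []))

print(spec_token_ids)  # [[7, 8], []]
```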
```python
request.append_output_token_ids(token_id)
num_new_tokens = 1
if request.num_computed_tokens >= request.num_tokens:
    request.clear_spec_tokens()
```
I am wondering if we should always clear the spec tokens. What happens if the target model does not accept any of the spec tokens? Is that case being handled here?
```diff
@@ -0,0 +1,47 @@
+import torch
```
Use torch tensors or numpy to implement the first version of rejection sampling.
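A minimal tensor-only sketch of greedy draft verification (accept a draft token only if it matches the target model's argmax), which is one simple way to start before moving to probabilistic rejection sampling or a FlashInfer kernel. This is not the PR's `RejectionSampler`; names and shapes are illustrative:

```python
import torch


def greedy_verify(draft_token_ids: torch.Tensor,
                  target_logits: torch.Tensor) -> torch.Tensor:
    """
    draft_token_ids: [k] proposed tokens for one request.
    target_logits:   [k + 1, vocab_size] target-model logits for the draft
                     positions plus one extra position for the bonus token.
    Returns the accepted draft tokens followed by one bonus/corrective token.
    """
    draft_token_ids = draft_token_ids.long()
    target_argmax = target_logits.argmax(dim=-1)      # [k + 1]
    k = draft_token_ids.shape[0]
    matches = (draft_token_ids == target_argmax[:k]).int()
    # Number of leading draft tokens that match the target's greedy choice.
    num_accepted = int(torch.cumprod(matches, dim=0).sum().item())
    # Accepted drafts + the target's token at the first mismatch, or the bonus
    # token when every draft was accepted.
    return torch.cat([
        draft_token_ids[:num_accepted],
        target_argmax[num_accepted:num_accepted + 1],
    ])
```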
This PR tries to add ngram spec decode to V1. Design doc: here.

Major changes:
- Add `_spec_token_ids` in `Request` to track speculated tokens.
- Changes in `model_runner`:
  3.1 Change `_prepare_input` to also return the logits of speculated tokens.
  3.2 Change `_prepare_input` to add speculated tokens as input tokens.
  3.3 Change `execute_model` to generate multiple tokens per call. Concretely, it will add more than one token to `input_batch` and `req_state`.

What is missing

Tasks out of the scope of this PR

[Update]
I will move the following two features into follow-up PRs:

Minor: There is a minimal example/test in `tests/v1/e2e/test_basic_specdecode.py`. You can check it for the current usage and verify correctness with `pytest -s tests/v1/e2e/test_basic_specdecode.py`.
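For context on the proposal side, a minimal sketch of the n-gram lookup idea (match the most recent n tokens against earlier occurrences in the sequence and copy the tokens that followed the match). It only illustrates the general technique; it is not the code added in this PR, and the function name is made up:

```python
from typing import List, Optional


def propose_ngram(token_ids: List[int], n: int, k: int) -> Optional[List[int]]:
    """Propose up to k draft tokens by matching the last n tokens
    against earlier occurrences in the same sequence."""
    if len(token_ids) < n + 1:
        return None
    pattern = token_ids[-n:]
    # Scan right-to-left so the most recent match wins.
    for start in range(len(token_ids) - n - 1, -1, -1):
        if token_ids[start:start + n] == pattern:
            follow = token_ids[start + n:start + n + k]
            return follow or None
    return None


# The context repeats "1 2 3", so the proposer drafts the continuation "4 6".
print(propose_ngram([5, 1, 2, 3, 4, 6, 1, 2, 3], n=3, k=2))  # -> [4, 6]
```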