[vLLM] add prefix caching support #2239

Merged 2 commits on Jul 30, 2024

@@ -61,6 +61,7 @@ class LmiDistRbProperties(Properties):
     max_logprobs: Optional[int] = 20
     enable_chunked_prefill: Optional[bool] = False
     cpu_offload_gb_per_gpu: Optional[int] = 0
+    enable_prefix_caching: Optional[bool] = False

     @model_validator(mode='after')
     def validate_mpi(self):

@@ -56,6 +56,7 @@ class VllmRbProperties(Properties):
     max_logprobs: Optional[int] = 20
     enable_chunked_prefill: Optional[bool] = False
     cpu_offload_gb_per_gpu: Optional[int] = 0
+    enable_prefix_caching: Optional[bool] = False

     @field_validator('engine')
     def validate_engine(cls, engine):
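
Both LmiDistRbProperties and VllmRbProperties gain the same `Optional[bool]` field. A minimal sketch (not the actual LMI Properties classes, which carry many more fields) of how pydantic turns the string values that typically arrive from serving.properties or environment variables into this typed flag, defaulting to `False` when unset:

```python
from typing import Optional

from pydantic import BaseModel


class PrefixCachingDemo(BaseModel):
    # Mirrors the field added to LmiDistRbProperties / VllmRbProperties.
    enable_prefix_caching: Optional[bool] = False


# Unset -> default False; string values such as "true"/"false" are coerced to bool.
print(PrefixCachingDemo().enable_prefix_caching)                              # False
print(PrefixCachingDemo(enable_prefix_caching="true").enable_prefix_caching)  # True
```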

@@ -83,6 +83,7 @@ def __init__(self, model_id_or_path: str, properties: dict, **kwargs):
             revision=self.lmi_dist_config.revision,
             enable_chunked_prefill=self.lmi_dist_config.enable_chunked_prefill,
             cpu_offload_gb=self.lmi_dist_config.cpu_offload_gb_per_gpu,
+            enable_prefix_caching=self.lmi_dist_config.enable_prefix_caching,
             **engine_kwargs)

         kwargs = {}

@@ -232,7 +232,8 @@ def get_engine_args_from_config(config: VllmRbProperties) -> EngineArgs:
         revision=config.revision,
         max_logprobs=config.max_logprobs,
         enable_chunked_prefill=config.enable_chunked_prefill,
-        cpu_offload_gb=config.cpu_offload_gb_per_gpu)
+        cpu_offload_gb=config.cpu_offload_gb_per_gpu,
+        enable_prefix_caching=config.enable_prefix_caching)


 def get_multi_modal_data(request: Request) -> dict:
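
For reference, a rough standalone equivalent of what the engine args above enable, using vLLM's Python API directly. The model id is an arbitrary example and defaults vary by vLLM version, so treat this as a sketch rather than the handler's exact behavior:

```python
from vllm import LLM, SamplingParams

# enable_prefix_caching is forwarded to EngineArgs and turns on automatic
# prefix caching, i.e. reuse of KV-cache blocks shared between requests.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
          enable_prefix_caching=True)

shared_prefix = "You are a helpful assistant. Answer concisely.\n\n"
params = SamplingParams(max_tokens=32)

# The KV cache computed for shared_prefix during the first request can be
# reused by the second one, shortening its prefill phase.
for question in ("What is vLLM?", "What does prefix caching do?"):
    out = llm.generate([shared_prefix + question], params)[0]
    print(out.outputs[0].text)
```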

serving/docs/lmi/user_guides/lmi-dist_user_guide.md (1 addition, 0 deletions)

@@ -137,3 +137,4 @@ Here are the advanced parameters that are available when using LMI-Dist.
 | option.max_cpu_loras | \>= 0.27.0 | Pass Through | This config determines the maximum number of LoRA adapters to cache in memory. All others will be evicted to disk. | Default: `None` |
 | option.enable_chunked_prefill | \>= 0.29.0 | Pass Through | This config enables chunked prefill support. With chunked prefill, longer prompts will be chunked and batched with decode requests to reduce inter-token latency. This option is EXPERIMENTAL and tested for llama and falcon models only. This does not work with LoRA and speculative decoding yet. | Default: `False` |
 | option.cpu_offload_gb_per_gpu | \>= 0.29.0 | Pass Through | This config allows offloading model weights to CPU memory so that large models can run with limited GPU memory. | Default: `0` |
+| option.enable_prefix_caching | \>= 0.29.0 | Pass Through | This config enables prefix caching, which lets the engine cache the KV cache of shared prompt prefixes and reuse it across requests to speed up inference. | Default: `False` |
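
A hypothetical serving.properties snippet showing how this option could be set for an lmi-dist deployment (the model id is an arbitrary example; keep whatever other options your setup already uses):

```
# Enable prefix caching for an lmi-dist rolling-batch deployment
option.rolling_batch=lmi-dist
option.model_id=TinyLlama/TinyLlama-1.1B-Chat-v1.0
option.enable_prefix_caching=true
```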

serving/docs/lmi/user_guides/vllm_user_guide.md (2 additions, 0 deletions)

@@ -126,3 +126,5 @@ In that situation, there is nothing LMI can do until the issue is fixed in the b
 | option.lora_extra_vocab_size | \>= 0.27.0 | Pass Through | This config determines the maximum additional vocabulary that can be added through a LoRA adapter. | Default: `256` |
 | option.max_cpu_loras | \>= 0.27.0 | Pass Through | This config determines the maximum number of LoRA adapters to cache in memory. All others will be evicted to disk. | Default: `None` |
 | option.enable_chunked_prefill | \>= 0.29.0 | Pass Through | This config enables chunked prefill support. With chunked prefill, longer prompts will be chunked and batched with decode requests to reduce inter-token latency. This option is EXPERIMENTAL and tested for llama and falcon models only. This does not work with LoRA and speculative decoding yet. | Default: `False` |
+| option.cpu_offload_gb_per_gpu | \>= 0.29.0 | Pass Through | This config allows offloading model weights to CPU memory so that large models can run with limited GPU memory. | Default: `0` |
+| option.enable_prefix_caching | \>= 0.29.0 | Pass Through | This config enables prefix caching, which lets the engine cache the KV cache of shared prompt prefixes and reuse it across requests to speed up inference. | Default: `False` |

@@ -71,6 +71,8 @@ public final class LmiConfigRecommender {
             Map.entry("phi3_v", "lmi-dist"),
             // vllm 0.5.3
             Map.entry("chameleon", "lmi-dist"),
+            Map.entry("deepseek", "lmi-dist"),
+            Map.entry("deepseek_v2", "lmi-dist"),
             Map.entry("fuyu", "lmi-dist"));

     private static final Set<String> OPTIMIZED_TASK_ARCHITECTURES =