
[Feature] Make vLLM optional in model code #1673

Open · 3 of 5 tasks
ByronHsu opened this issue Oct 15, 2024 · 1 comment
ByronHsu (Collaborator) commented Oct 15, 2024

Motivation

This issue tracks the removal of vLLM dependencies from general model code (quantization not considered). These are our current imports from vLLM, all of which we want to remove.

from vllm.config import CacheConfig
from vllm.distributed import get_tensor_model_parallel_world_size
from vllm.model_executor.layers.rotary_embedding import get_rope
from vllm.model_executor.layers.vocab_parallel_embedding import (
    ParallelLMHead,
    VocabParallelEmbedding,
)
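
One common pattern for making a dependency like this optional is to guard the import and fall back to a local implementation. A minimal sketch only; this issue does not prescribe the mechanism, and the sglang.srt.layers.rotary_embedding fallback path below is a hypothetical placeholder:

try:
    # Use vLLM's implementation when vLLM happens to be installed.
    from vllm.model_executor.layers.rotary_embedding import get_rope
except ImportError:
    # Hypothetical local fallback module; the actual replacement path
    # is whatever this tracker lands on.
    from sglang.srt.layers.rotary_embedding import get_rope

The same guard works for the other imports above, though a clean break (a local module with no vLLM branch at all) avoids keeping two code paths in sync.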

Tracker

vkc1vk commented Nov 10, 2024

Just curious: are the following imports in model_runner.py also being considered for removal in later stages?

from vllm.config import DeviceConfig, LoadConfig
from vllm.config import ModelConfig as VllmModelConfig
from vllm.distributed import (
    get_tp_group,
    init_distributed_environment,
    initialize_model_parallel,
    set_custom_all_reduce,
)
from vllm.distributed.parallel_state import in_the_same_node_as
from vllm.model_executor.model_loader import get_model
from vllm.model_executor.models import ModelRegistry
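
For ModelRegistry in particular, a local replacement could be as small as a name-to-class mapping. A minimal sketch under that assumption (register_model and resolve_model are hypothetical names, not SGLang's actual API):

from typing import Dict, Type

import torch.nn as nn

# Hypothetical stand-in for vllm.model_executor.models.ModelRegistry:
# a plain mapping from architecture name to model class.
_MODELS: Dict[str, Type[nn.Module]] = {}

def register_model(name: str, cls: Type[nn.Module]) -> None:
    # Record a model class under its architecture name, e.g. "LlamaForCausalLM".
    _MODELS[name] = cls

def resolve_model(name: str) -> Type[nn.Module]:
    # Look up a model class; raises KeyError for unknown architectures.
    return _MODELS[name]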
