Hi, I encountered the following error when trying to load a fine-tuned LLaVA model:
```
~$ python3 -m sglang.launch_server --model-path org/llava_1.5_13b_finetune --tokenizer-path llava-hf/llava-1.5-13b-hf --port 30000
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Process Process-1:
router init state: Traceback (most recent call last):
  File "/opt/conda/envs/llava_sglang/lib/python3.10/site-packages/sglang/srt/managers/router/manager.py", line 68, in start_router_process
    model_client = ModelRpcClient(server_args, port_args)
  File "/opt/conda/envs/llava_sglang/lib/python3.10/site-packages/sglang/srt/managers/router/model_rpc.py", line 448, in __init__
    self.model_server.exposed_init_model(0, server_args, port_args)
  File "/opt/conda/envs/llava_sglang/lib/python3.10/site-packages/sglang/srt/managers/router/model_rpc.py", line 54, in exposed_init_model
    self.model_runner = ModelRunner(
  File "/opt/conda/envs/llava_sglang/lib/python3.10/site-packages/sglang/srt/managers/router/model_runner.py", line 229, in __init__
    self.load_model()
  File "/opt/conda/envs/llava_sglang/lib/python3.10/site-packages/sglang/srt/managers/router/model_runner.py", line 275, in load_model
    model.load_weights(
  File "/opt/conda/envs/llava_sglang/lib/python3.10/site-packages/sglang/srt/models/llava.py", line 177, in load_weights
    self.language_model.load_weights(
  File "/opt/conda/envs/llava_sglang/lib/python3.10/site-packages/sglang/srt/models/llama2.py", line 306, in load_weights
    param = params_dict[name]
KeyError: 'model.vision_tower.vision_tower.vision_model.encoder.layers.0.self_attn.qkv_proj.weight'
```
It seems like an easy solution would be to skip the `vision_tower` entries when looking up weights in `params_dict`?
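To illustrate the idea, here is a minimal sketch of filtering out vision-tower weights before the language model's `load_weights` loop. This is not sglang's actual API; `filter_language_model_weights` and `VISION_TOWER_PREFIX` are illustrative names, and plain strings stand in for tensors:

```python
# Hypothetical sketch: drop checkpoint entries that belong to the CLIP
# vision tower, since llama2.py's params_dict only contains LLaMA
# (language model) parameters and raises KeyError on anything else.

VISION_TOWER_PREFIX = "model.vision_tower."

def filter_language_model_weights(named_weights):
    """Yield only (name, tensor) pairs for the language model,
    skipping everything under the vision-tower prefix."""
    for name, tensor in named_weights:
        if name.startswith(VISION_TOWER_PREFIX):
            continue  # loaded by the vision encoder, not the LLaMA loader
        yield name, tensor

# Example with strings standing in for tensors:
weights = [
    ("model.layers.0.self_attn.qkv_proj.weight", "tensor_a"),
    ("model.vision_tower.vision_tower.vision_model.encoder."
     "layers.0.self_attn.qkv_proj.weight", "tensor_b"),
]
kept = list(filter_language_model_weights(weights))
# kept contains only the language-model weight
```

The prefix check mirrors the key in the traceback, so applying a filter like this just before `param = params_dict[name]` would avoid the lookup failure.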