
Commit

Update version to 0.1.12 (#178)
merrymercy authored Feb 11, 2024
1 parent c51020c commit 624b21e
Showing 4 changed files with 15 additions and 3 deletions.
10 changes: 10 additions & 0 deletions docs/model_support.md
@@ -0,0 +1,10 @@
## How to Support a New Model

To support a new model in SGLang, you only need to add a single file under [SGLang Models Directory](https://github.com/sgl-project/sglang/tree/main/python/sglang/srt/models).

You can learn from existing model implementations and create new files for the new models. Most models are based on the transformer architecture, making them very similar.

Another valuable resource is the vLLM model implementations. vLLM has extensive model coverage, and SGLang reuses vLLM code for most parts of its model implementations, which makes it easy to port many models from vLLM to SGLang.

1. Compare these two files [SGLang LLaMA Implementation](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/llama2.py) and [vLLM LLaMA Implementation](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama.py). This comparison will help you understand how to convert a model implementation from vLLM to SGLang. The major difference is the replacement of PagedAttention with RadixAttention. The other parts are almost identical.
2. Convert models from vLLM to SGLang by visiting the [vLLM Models Directory](https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/models).
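
The steps above can be sketched as a single new file under `python/sglang/srt/models/`. The skeleton below is a hypothetical illustration of the general shape only: the class name, constructor arguments, method signatures, and stub bodies are assumptions, not the exact SGLang interfaces — consult `llama2.py` for the real ones. The `EntryClass` module-level attribute follows the convention used by existing model files.

```python
# Hypothetical skeleton for a new SGLang model file (e.g. my_new_model.py).
# Names and signatures are illustrative; copy the real structure from an
# existing file such as llama2.py when porting a model from vLLM.

class MyNewModelForCausalLM:
    """Mirrors the layout of existing SGLang model implementations."""

    def __init__(self, config):
        self.config = config
        # In a real port, construct embeddings, decoder layers, and the
        # lm_head here, replacing vLLM's PagedAttention modules with
        # SGLang's RadixAttention.

    def forward(self, input_ids, positions, input_metadata):
        # Compute hidden states and return logits; the real signature
        # mirrors the vLLM model being ported.
        raise NotImplementedError

    def load_weights(self, model_name_or_path):
        # Map checkpoint weights onto the modules built in __init__,
        # typically reusing the weight-loading loop from the vLLM model.
        raise NotImplementedError


# Existing SGLang model files expose the model class at module level.
EntryClass = MyNewModelForCausalLM
```

Because most of the module structure carries over from vLLM unchanged, the bulk of the porting work is usually in the attention replacement and the `forward` signature.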
4 changes: 3 additions & 1 deletion examples/quick_start/srt_example_llava.py
@@ -43,9 +43,11 @@ def batch():


if __name__ == "__main__":
-    runtime = sgl.Runtime(model_path="liuhaotian/llava-v1.5-7b",
+    runtime = sgl.Runtime(model_path="liuhaotian/llava-v1.6-vicuna-7b",
                          tokenizer_path="llava-hf/llava-1.5-7b-hf")
sgl.set_default_backend(runtime)
print(f"chat template: {runtime.endpoint.chat_template.name}")

# Or you can use API models
# sgl.set_default_backend(sgl.OpenAI("gpt-4-vision-preview"))
# sgl.set_default_backend(sgl.VertexAI("gemini-pro-vision"))
2 changes: 1 addition & 1 deletion python/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

[project]
name = "sglang"
-version = "0.1.11"
+version = "0.1.12"
description = "A structured generation langauge for LLMs."
readme = "README.md"
requires-python = ">=3.8"
2 changes: 1 addition & 1 deletion python/sglang/__init__.py
@@ -1,4 +1,4 @@
-__version__ = "0.1.11"
+__version__ = "0.1.12"

from sglang.api import *
from sglang.global_config import global_config
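
This bump touches the version string in two places, `python/pyproject.toml` and `python/sglang/__init__.py`, which must stay in sync. When checking such bumps, note that plain string comparison mis-orders dotted versions (`"0.1.12" < "0.1.9"` lexically). A small self-contained sketch of comparing them numerically; the helper name is illustrative, not part of SGLang:

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string like "0.1.12" into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Tuple comparison orders components numerically, so the bumped
# version sorts after both earlier releases.
assert version_tuple("0.1.12") > version_tuple("0.1.11")
assert version_tuple("0.1.12") > version_tuple("0.1.9")
```

For production code, `packaging.version.Version` handles pre-release and post-release segments that this minimal sketch does not.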
