example(LLM): update llm examples runtime req version (#2426)
tianweidut authored Jun 30, 2023
1 parent ec4f118 commit 8da0308
Showing 7 changed files with 11 additions and 13 deletions.
4 changes: 2 additions & 2 deletions example/LLM/belle-bloom/README.md
````diff
@@ -11,7 +11,7 @@ swcli runtime build --yaml runtime.yaml
 ## Build Starwhale Model
 
 ```bash
-python sw.py build bloom/4bit
+python sw.py build bloom-4bit
 ```
 
 ![model build](https://github.com/star-whale/starwhale/assets/590748/63249227-34eb-4331-9029-8789cb92e7c8)
@@ -41,4 +41,4 @@ swcli model serve -u belle-bloom-4bit --runtime belle
 
 - `sw.py`: build, model evaluation, model serving, model fine-tune python script with Starwhale SDK.
 - `runtime.yaml`: Starwhale Runtime spec.
-- `.swignore`: A file defines ignore pattern, same as .gitignore.
\ No newline at end of file
+- `.swignore`: A file defines ignore pattern, same as .gitignore.
````
4 changes: 1 addition & 3 deletions example/LLM/belle-bloom/runtime.yaml
```diff
@@ -5,8 +5,6 @@ environment:
   os: ubuntu:20.04
   cuda: 11.7
   python: 3.9
-  docker:
-    image: docker-registry.starwhale.cn/star-whale/starwhale:0.4.5-cuda11.7
 configs:
   pip:
     index_url: https://mirrors.aliyun.com/pypi/simple
@@ -30,7 +28,7 @@ dependencies:
     - deepspeed==0.9.0
     - safetensors==0.3.0
     # external starwhale dependencies
-    - starwhale >= 0.4.5
+    - starwhale[serve] >= 0.5.0
   - wheels:
     # quant_cuda is built from https://github.com/LianjiaTech/BELLE/blob/main/models/gptq/setup_cuda.py @ cf191f9d178326782e01dceacd8357d507b9aab8
     # because of the quant_cuda does not use setup.py script, so we cannot install it from git+https url.
```
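Net effect on the belle-bloom runtime spec: the pinned base image is dropped and the Starwhale dependency gains the `serve` extra with a 0.5.0 floor. An abridged sketch of the resulting shape (only fields touched by this commit; the full file lists more dependencies, and the exact nesting is inferred from the diff):

```yaml
environment:
  os: ubuntu:20.04
  cuda: 11.7
  python: 3.9
  # no "docker.image" pin any more; presumably the default base image is used
dependencies:
  - pip:
      - starwhale[serve] >= 0.5.0  # was: starwhale >= 0.4.5
```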
4 changes: 1 addition & 3 deletions example/LLM/belle-bloom/runtime_conda.yaml
```diff
@@ -5,8 +5,6 @@ environment:
   os: ubuntu:20.04
   cuda: 11.7
   python: 3.9
-  docker:
-    image: docker-registry.starwhale.cn/star-whale/starwhale:0.4.5-cuda11.7
 configs:
   pip:
     index_url: https://mirrors.aliyun.com/pypi/simple
@@ -46,7 +44,7 @@ dependencies:
     - deepspeed==0.9.0
     - safetensors==0.3.0
     # external starwhale dependencies
-    - starwhale >= 0.4.5
+    - starwhale[serve] >= 0.5.0
   - wheels:
     # quant_cuda is built from https://github.com/LianjiaTech/BELLE/blob/main/models/gptq/setup_cuda.py @ cf191f9d178326782e01dceacd8357d507b9aab8
     # because of the quant_cuda does not use setup.py script, so we cannot install it from git+https url.
```
1 change: 1 addition & 0 deletions example/LLM/belle-bloom/sw.py
```diff
@@ -142,6 +142,7 @@ def _do_pre_process(data: dict, external: dict) -> str:
         "mkqa-mini": "query",
         "z_bench_common": "prompt",
         "webqsp": "rawquestion",
+        "vicuna": "text",
     }
     ds_name = external["dataset_uri"].name
     keyword = "question"
```
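The `sw.py` hunk extends the dataset-name-to-prompt-field mapping so the `vicuna` dataset reads its prompt from a `text` column. A minimal, self-contained sketch of that lookup (the names here are illustrative, not the repo's exact code):

```python
# Illustrative sketch of the lookup this commit extends: map a dataset name
# to the record field that holds the prompt, defaulting to "question".
DATASET_KEYWORDS = {
    "mkqa-mini": "query",
    "z_bench_common": "prompt",
    "webqsp": "rawquestion",
    "vicuna": "text",  # the entry this commit adds
}

def pick_keyword(dataset_name: str) -> str:
    # Unknown datasets fall back to the "question" field.
    return DATASET_KEYWORDS.get(dataset_name, "question")

print(pick_keyword("vicuna"))        # text
print(pick_keyword("some-new-set"))  # question
```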
4 changes: 2 additions & 2 deletions example/LLM/guanaco/runtime.yaml
```diff
@@ -4,7 +4,7 @@ environment:
   arch: noarch
   os: ubuntu:20.04
   cuda: 11.7
-  python: 3.9
+  python: "3.10"
 configs:
   pip:
     index_url: https://mirrors.aliyun.com/pypi/simple
@@ -23,4 +23,4 @@ dependencies:
     # download repo from huggingface hub
     - huggingface-hub
     # external starwhale dependencies
-    - starwhale >= 0.4.6
+    - starwhale[serve] >= 0.5.0
```
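The version bump quotes `"3.10"` for a reason: in YAML, an unquoted `3.10` is a numeric scalar and loads as the float `3.1`, silently requesting the wrong interpreter. Python's own float parsing shows the same truncation:

```python
# Unquoted 3.10 in a YAML document resolves as a float, and floats drop the
# trailing zero -- the spec would effectively ask for Python "3.1".
print(float("3.10"))       # 3.1
print(str(float("3.10")))  # 3.1
```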
4 changes: 2 additions & 2 deletions example/LLM/llama/runtime.yaml
```diff
@@ -4,7 +4,7 @@ environment:
   arch: noarch
   os: ubuntu:20.04
   cuda: 11.7
-  python: 3.9
+  python: "3.10"
 configs:
   pip:
     index_url: https://mirrors.aliyun.com/pypi/simple
@@ -23,4 +23,4 @@ dependencies:
     # download repo from huggingface hub
     - huggingface-hub
     # external starwhale dependencies
-    - starwhale >= 0.4.6
+    - starwhale[serve] >= 0.5.0
```
3 changes: 2 additions & 1 deletion example/LLM/vicuna/runtime.yaml
```diff
@@ -4,7 +4,7 @@ environment:
   arch: noarch
   os: ubuntu:20.04
   cuda: 11.7
-  python: 3.9
+  python: "3.10"
 configs:
   pip:
     index_url: https://mirrors.aliyun.com/pypi/simple
@@ -16,3 +16,4 @@ dependencies:
     - transformers==4.28.0
     - peft==0.3.0
     - accelerate==0.20.3
+    - starwhale[serve] >= 0.5.0
```
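Every runtime now depends on `starwhale[serve] >= 0.5.0`: the `[serve]` pip extra pulls in the packages needed for model serving, and the minimum version rises from 0.4.x to 0.5.0. A toy, stdlib-only sketch of what such a version floor means (real resolvers implement full PEP 440 parsing, not this simplification):

```python
# Toy version-floor check for dotted-integer versions only; real tools
# handle pre-releases, epochs, and other PEP 440 details.
def parse(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

def satisfies_floor(installed: str, floor: str = "0.5.0") -> bool:
    return parse(installed) >= parse(floor)

print(satisfies_floor("0.4.6"))   # False: the old pins no longer suffice
print(satisfies_floor("0.5.0"))   # True
print(satisfies_floor("0.10.0"))  # True: tuple compare handles 10 > 5
```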
