[Docs] Add Modal to deployment frameworks (vllm-project#11907)
Signed-off-by: Fred Reiss <frreiss@us.ibm.com>
charlesfrye authored and frreiss committed Jan 10, 2025
1 parent ced0933 commit 4cb61e8
Showing 3 changed files with 9 additions and 1 deletion.
docs/source/deployment/frameworks/bentoml.md (2 changes: 1 addition & 1 deletion)

  @@ -2,6 +2,6 @@

    # BentoML

  - [BentoML](https://github.com/bentoml/BentoML) allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-complicant image and deploy it on Kubernetes.
  + [BentoML](https://github.com/bentoml/BentoML) allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-compliant image and deploy it on Kubernetes.

    For details, see the tutorial [vLLM inference in the BentoML documentation](https://docs.bentoml.com/en/latest/use-cases/large-language-models/vllm.html).
docs/source/deployment/frameworks/index.md (1 change: 1 addition & 0 deletions)

  @@ -8,6 +8,7 @@ cerebrium
    dstack
    helm
    lws
  + modal
    skypilot
    triton
    ```
docs/source/deployment/frameworks/modal.md (7 changes: 7 additions & 0 deletions)

  @@ -0,0 +1,7 @@
  + (deployment-modal)=
  +
  + # Modal
  +
  + vLLM can be run on cloud GPUs with [Modal](https://modal.com), a serverless computing platform designed for fast auto-scaling.
  +
  + For details on how to deploy vLLM on Modal, see [this tutorial in the Modal documentation](https://modal.com/docs/examples/vllm_inference).
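The new docs page points at Modal's own tutorial; as a rough illustration of what running vLLM on Modal looks like, here is a minimal hypothetical sketch. The app name, function name, GPU type, and model are illustrative assumptions, not part of this commit or the linked tutorial:

```python
# Hypothetical sketch of running vLLM on Modal's serverless GPUs.
# App/function names, GPU type, and model choice are assumptions.
import modal

# Container image with vLLM installed (unpinned here for brevity).
image = modal.Image.debian_slim().pip_install("vllm")

app = modal.App("vllm-sketch", image=image)

@app.function(gpu="A10G", timeout=600)
def generate(prompt: str) -> str:
    # Runs inside the container on a cloud GPU that Modal
    # provisions on demand and scales down when idle.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")  # small example model
    params = SamplingParams(max_tokens=32)
    outputs = llm.generate([prompt], params)
    return outputs[0].outputs[0].text

@app.local_entrypoint()
def main():
    # `modal run <this_file>.py` invokes the GPU function remotely.
    print(generate.remote("Hello from Modal!"))
```

This is a deployment sketch requiring a Modal account and cloud GPU access; the linked tutorial in the Modal documentation is the authoritative version.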
