Releases: bentoml/OpenLLM
v0.6.6
What's Changed
- ci: pre-commit autoupdate [pre-commit.ci] by @pre-commit-ci in #1043
- chore(deps): bump openai from 1.35.12 to 1.35.13 by @dependabot in #1041
- chore(deps): bump softprops/action-gh-release from 2.0.6 to 2.0.8 by @dependabot in #1046
- chore(deps): bump openai from 1.35.13 to 1.36.1 by @dependabot in #1045
- ci: pre-commit autoupdate [pre-commit.ci] by @pre-commit-ci in #1047
- chore(deps): bump openai from 1.36.1 to 1.37.1 by @dependabot in #1048
- docs: Update OpenLLM readme by @Sherlock113 in #1051
Full Changelog: v0.6.5...v0.6.6
v0.6.5
What's Changed
Full Changelog: v0.6.4...v0.6.5
v0.6.3
v0.6.0
We are thrilled to announce the release of OpenLLM 0.6, which marks a significant shift in our project's philosophy. This release introduces breaking changes to the codebase, reflecting our renewed focus on streamlining cloud deployment for LLMs.
In the previous releases, our goal was to provide users with the ability to fully customize their LLM deployment. However, we realized that the customization support in OpenLLM led to scope creep, deviating from our core focus on making LLM deployment simple. With the rise of open source LLMs and the growing emphasis on LLM-focused application development, we have decided to concentrate on what OpenLLM does best - simplifying LLM deployment.
We have completely revamped the architecture to make OpenLLM a tool that simplifies running LLMs as an API endpoint, prioritizing ease of use and performance. This means that 0.6 breaks away from many of the old Python APIs provided in 0.5, emphasizing itself as an easy-to-use CLI tool with cross-platform compatibility for users to deploy open source LLMs.
To learn more about the exciting features and capabilities of OpenLLM, visit our [GitHub](https://github.com/bentoml/OpenLLM) repository. We invite you to explore the new release, provide feedback, and join us in our mission to make cloud deployment of LLMs accessible and efficient for everyone.
Thank you for your continued support and trust in OpenLLM. We look forward to seeing the incredible applications you will build with the tool.
v0.5.7
Installation
pip install openllm==0.5.7
To upgrade from a previous version, use the following command:
pip install --upgrade openllm==0.5.7
Usage
To start an LLM: python -m openllm start HuggingFaceH4/zephyr-7b-beta
Find more information about this release in the CHANGELOG.md
Full Changelog: v0.5.6...v0.5.7
OpenLLM: v0.5
OpenLLM has undergone a significant upgrade in its v0.5 release to enhance compatibility with the BentoML 1.2 SDK. The CLI has also been streamlined to focus on delivering the easiest-to-use and most reliable experience for deploying open-source LLMs to production. However, version 0.5 introduces breaking changes.
Breaking changes, and the reasons why
After releasing version 0.4, we realized that while OpenLLM offered users a high degree of flexibility and power, they ran into numerous issues when attempting to deploy their models. OpenLLM had been trying to accomplish a lot by supporting multiple backends (mainly PyTorch for CPU inference and vLLM for GPU inference) and accelerators. Although this let users test quickly on their local machines, it also created confusion between running OpenLLM locally and running it in the cloud. The differences between local and cloud deployment made it difficult for users to understand and control how a packaged Bento would behave once deployed to the cloud.
The motivation for 0.5 is to focus on cloud deployment. Cloud deployments typically demand high-throughput, high-concurrency serving, and GPUs are the most common hardware choice for that workload. We have therefore simplified backend support to vLLM alone, which is the most suitable and reliable backend for serving LLMs on GPUs in the cloud.
Architecture changes and SDK.
For version 0.5, we have reduced the scope to support the backend that yields the best performance (in this case, vLLM). This means that pip install openllm now also depends on vLLM. In other words, CPU support is paused going forward.
All interactions with OpenLLM servers going forward should be done through clients (e.g., BentoML clients, the OpenAI client, etc.).
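As a minimal sketch, assuming a server started with openllm start microsoft/Phi-3-mini-4k-instruct and listening on the default port 3000 (the port and model name here are assumptions; adjust them to your setup), a chat request through the OpenAI client would look like:

```python
from openai import OpenAI

# OpenLLM exposes an OpenAI-compatible API, so the stock OpenAI client
# works by pointing base_url at the local server. The API key is unused.
client = OpenAI(base_url="http://localhost:3000/v1", api_key="na")

response = client.chat.completions.create(
    model="microsoft/Phi-3-mini-4k-instruct",  # must match the served model
    messages=[{"role": "user", "content": "What is a Bento?"}],
)
print(response.choices[0].message.content)
```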
CLI
The CLI has now been simplified to two commands: openllm start and openllm build.
HuggingFace models
openllm start
openllm start will continue to accept a HuggingFace model ID for supported model architectures:
openllm start microsoft/Phi-3-mini-4k-instruct --trust-remote-code
For any model that requires remote code execution, pass in --trust-remote-code.
openllm start will also accept serving from a local path directly. Make sure to also pass in --trust-remote-code if the model requires it:
openllm start path/to/custom-phi-instruct --trust-remote-code
openllm build
In previous versions, OpenLLM copied the local model cache into the generated Bento, leaving two copies of each model on the user's machine. From v0.5 onward, models are no longer packaged with the Bento; they are downloaded into the Hugging Face cache the first time the Bento is deployed.
openllm build microsoft/Phi-3-mini-4k-instruct --trust-remote-code
Successfully built Bento 'microsoft--phi-3-mini-4k-instruct-service:5fa34190089f0ee40f9cce3cafc396b89b2e5e83'.
██████╗ ██████╗ ███████╗███╗ ██╗██╗ ██╗ ███╗ ███╗
██╔═══██╗██╔══██╗██╔════╝████╗ ██║██║ ██║ ████╗ ████║
██║ ██║██████╔╝█████╗ ██╔██╗ ██║██║ ██║ ██╔████╔██║
██║ ██║██╔═══╝ ██╔══╝ ██║╚██╗██║██║ ██║ ██║╚██╔╝██║
╚██████╔╝██║ ███████╗██║ ╚████║███████╗███████╗██║ ╚═╝ ██║
╚═════╝ ╚═╝ ╚══════╝╚═╝ ╚═══╝╚══════╝╚══════╝╚═╝ ╚═╝.
📖 Next steps:
☁️ Deploy to BentoCloud:
$ bentoml deploy microsoft--phi-3-mini-4k-instruct-service:5fa34190089f0ee40f9cce3cafc396b89b2e5e83 -n ${DEPLOYMENT_NAME}
☁️ Update existing deployment on BentoCloud:
$ bentoml deployment update --bento microsoft--phi-3-mini-4k-instruct-service:5fa34190089f0ee40f9cce3cafc396b89b2e5e83 ${DEPLOYMENT_NAME}
🐳 Containerize BentoLLM:
$ bentoml containerize microsoft--phi-3-mini-4k-instruct-service:5fa34190089f0ee40f9cce3cafc396b89b2e5e83 --opt progress=plain
For quantized models, make sure to also pass the --quantize flag during build:
openllm build casperhansen/llama-3-70b-instruct-awq --quantize awq
See openllm build --help for more information.
Private models
openllm start
For private models, we recommend saving them to [BentoML’s Model Store](https://docs.bentoml.com/en/latest/guides/model-store.html#model-store) first before using openllm start:
import bentoml

# Save both the model weights and the tokenizer into one Bento model.
with bentoml.models.create(name="my-private-models") as model:
    PrivateTrainedModel.save_pretrained(model.path)
    MyTokenizer.save_pretrained(model.path)
Note: Make sure to also save your tokenizer in this Bento model.
You can then pass the private model name directly to openllm start:
openllm start my-private-models
openllm build
Similar to openllm start, openllm build will only accept private models from BentoML’s model store:
openllm build my-private-models
What's next?
Currently, the OpenAI-compatible API supports only the /chat/completions and /models endpoints. Support for /completions as well as function calling is coming soon, so stay tuned.
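As a small sketch of the currently supported /models endpoint (again assuming a local server on the default port 3000), the same OpenAI client can list the served models:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="na")

# /models returns the model(s) this OpenLLM server is serving.
for model in client.models.list():
    print(model.id)
```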
Thank you for your continued support and trust in us. We would love to hear your feedback on this release.
v0.5.5
Installation
pip install openllm==0.5.5
To upgrade from a previous version, use the following command:
pip install --upgrade openllm==0.5.5
Usage
To start an LLM: python -m openllm start HuggingFaceH4/zephyr-7b-beta
Find more information about this release in the CHANGELOG.md
What's Changed
- feat(models): command-r by @aarnphm in #1005
- ci: pre-commit autoupdate [pre-commit.ci] by @pre-commit-ci in #1007
- chore(deps): bump taiki-e/install-action from 2.33.34 to 2.34.0 by @dependabot in #1006
Full Changelog: v0.5.4...v0.5.5
v0.5.4
Installation
pip install openllm==0.5.4
To upgrade from a previous version, use the following command:
pip install --upgrade openllm==0.5.4
Usage
To start an LLM: python -m openllm start HuggingFaceH4/zephyr-7b-beta
Find more information about this release in the CHANGELOG.md
What's Changed
Full Changelog: v0.5.3...v0.5.4
v0.5.3
Installation
pip install openllm==0.5.3
To upgrade from a previous version, use the following command:
pip install --upgrade openllm==0.5.3
Usage
To start an LLM: python -m openllm start HuggingFaceH4/zephyr-7b-beta
Find more information about this release in the CHANGELOG.md
Full Changelog: v0.5.2...v0.5.3
v0.5.2
Installation
pip install openllm==0.5.2
To upgrade from a previous version, use the following command:
pip install --upgrade openllm==0.5.2
Usage
To start an LLM: python -m openllm start HuggingFaceH4/zephyr-7b-beta
Find more information about this release in the CHANGELOG.md
Full Changelog: v0.5.1...v0.5.2