This repository has been archived by the owner on May 28, 2024. It is now read-only.

Releases: ray-project/ray-llm

v0.5.0

18 Jan 20:38
5255abe

What's Changed

  • Added TensorRT-LLM backend (v0.6.1).
  • Added embedding backend.
  • Added Mixtral serve config.
  • Upgraded vLLM support to v0.2.5.
  • Upgraded Ray to v2.9.1.

Thanks to the following contributors:
@avnishn
@csivanich
@sihanwang41
@Yard1
@tterrysun

v0.4.0

28 Oct 00:09
c2a22af

This release introduces the following changes:

  • Renamed aviary to rayllm.
  • Added support for reading models from GCS in addition to AWS S3.
  • Increased test coverage for prompting.
  • Added new model configs for Falcon 7B and 40B.
  • Made the frontend compatible with Ray Serve 2.7.

Thanks to the following contributors:
@avnishn
@csivanich
@shrekris-anyscale
@sihanwang41
@richardliaw
@Yard1

v0.3.1

04 Oct 21:33


Full Changelog: v0.3.0...v0.3.1

v0.3.0

02 Oct 23:25
470a5e2

Please note that API stability is not expected until the 1.0 release. This update introduces breaking changes.

This release introduces a new vLLM backend and removes the dependency on TGI. TGI is no longer Apache 2.0 licensed, and its new license is too restrictive for most organizations to run in production. vLLM, on the other hand, is Apache 2.0 licensed and is a better foundation to build on. The new vLLM backend introduces some breaking changes to model configuration YAMLs.

Refer to the ray-llm/models/README.md file for details on the new configuration file format.

What's Changed

  • Documentation

    • Updated readme and documentation
  • API & SDK

    • Updated the format of model configuration yamls.
  • Backend

    • Completely replaced the text-generation-inference (TGI) based backend with a vLLM-based backend. This means RayLLM now supports all models that vLLM supports.
    • Improved observability and metrics.
    • Improved testing.

To use RayLLM, ensure you are using the official Docker image anyscale/aviary:latest.
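
For reference, here is a rough sketch of what a model configuration may look like under the new vLLM backend. The section and field names below are illustrative assumptions only; the authoritative schema is in ray-llm/models/README.md:

    deployment_config:            # Ray Serve deployment options (assumed section)
      autoscaling_config:
        min_replicas: 1
        max_replicas: 2
    engine_config:                # engine selection and model source (assumed section)
      model_id: meta-llama/Llama-2-7b-chat-hf   # illustrative model ID
      type: VLLMEngine
      engine_kwargs:              # assumed pass-through to vLLM
        max_num_batched_tokens: 4096
    scaling_config:               # per-replica resources (assumed section)
      num_workers: 1
      num_gpus_per_worker: 1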

v0.2.0

04 Aug 03:34
6fe00c9

What's Changed

  • Documentation

    • Updated readme and documentation
  • API & SDK

    • Full OpenAI API compatibility (Aviary can now be queried with the openai Python package; see the example after this list)
      • /v1/completions
        • Parameters not yet supported (will be ignored): suffix, n, logprobs, echo, best_of, logit_bias, user
        • Additional parameters not present in OpenAI API: top_k, typical_p, watermark, seed
      • /v1/chat/completions
        • Parameters not yet supported (will be ignored): n, logprobs, echo, logit_bias, user
        • Additional parameters not present in OpenAI API: top_k, typical_p, watermark, seed
      • /v1/models
      • /v1/models/<MODEL>
    • Added frequency_penalty and presence_penalty parameters
    • aviary run is now blocking by default, and the CLI now clarifies that rerunning aviary run will remove existing models
    • Streamlined model configuration YAMLs
    • Added model configuration YAMLs for llama-2
    • Frontend Gradio app will now be started on /frontend route to avoid conflicts with backend
    • openai package is now a dependency for Aviary
  • Backend

    • Refactor of multiple internal APIs
      • Replaced Predictor with Engine
        • Engine combines the functionality of initializers, predictors, and pipelines.
        • Removed the Predictor and Pipeline classes
      • Removed shallow classes and simplified abstractions
      • Removed dead code
      • Broke up large files & improved file structure
    • Removal of static batching
    • Added OpenAI-style frequency_penalty and presence_penalty parameters
    • Fixed generated special tokens not being returned correctly
    • Standardization of modelling code on an Apache 2.0 fork of text-generation-inference
    • Improved performance and stability
      • Added automatic warmup for supported models, ensuring that memory is used efficiently.
      • Made scheduler and scheduler policy less prone to errors.
    • Ensured that the HUGGING_FACE_HUB_TOKEN env var is propagated to all Aviary backend processes to allow access to gated models such as llama-2
    • Added unit testing for core Aviary components
    • Added validations for user supplied parameters
    • Improved error handling and reporting
      • Error responses will now have correct status codes
    • Added basic observability for tokens & requests through Ray Metrics (piped through to Prometheus/Grafana)
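
To illustrate the OpenAI compatibility above, here is a minimal sketch of querying an Aviary deployment with the openai Python package (pre-1.0 openai client; the base URL, API key, and model ID are placeholder assumptions for a local deployment):

    import openai

    # Point the client at the Aviary deployment instead of api.openai.com.
    # The URL and key below are placeholders for a local deployment.
    openai.api_base = "http://localhost:8000/v1"
    openai.api_key = "not-needed"

    # Chat completion using the frequency/presence penalty parameters
    # added in this release.
    response = openai.ChatCompletion.create(
        model="meta-llama/Llama-2-7b-chat-hf",  # illustrative model ID
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
        frequency_penalty=0.5,
        presence_penalty=0.5,
    )
    print(response["choices"][0]["message"]["content"])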

This update introduces breaking changes to model configuration YAMLs and the Aviary SDK. Refer to the migration guide below for more details.

To use the Aviary backend, ensure you are using the official Docker image anyscale/aviary:latest. Using the backend without Docker is not a supported use case. The anyscale/aviary:latest-tgi image has been superseded by anyscale/aviary:latest.

Migration Guide For Model YAMLs

The most recent version of Aviary introduces breaking changes to the model YAMLs. This guide will help you migrate your existing model YAMLs to the new format.

Changes

  1. Move any fields under model_config.initialization to be directly under model_config, then remove model_config.initialization.

    Also remove the following sections/fields and everything under them:
    - model_config.initializer
    - model_config.pipeline
    - model_config.batching

  2. Rename model_config to engine_config.

    In v0.2, we introduce Engine, the Aviary abstraction for interacting with a model. In short, Engine combines the functionality of initializers, pipelines, and predictors.

    Pipeline and initializer parameters are no longer configurable.
    In v0.2, we removed the option to specify static batching; continuous batching is now the default for improved performance.

  3. Add the Scheduler and Policy configs.

    The scheduler is the component of the engine that determines which requests to run inference on. The policy is the component of the scheduler that determines the scheduling strategy. These components previously existed in Aviary; however, they were not explicitly configurable.

    Previously, the following parameters were specified under model_config.generation:

    • max_batch_total_tokens
    • max_total_tokens
    • max_waiting_tokens
    • max_input_length
    • max_batch_prefill_tokens

    Rename max_waiting_tokens to max_iterations_curr_batch, and place these parameters under engine_config.scheduler.policy.

    For example:

    engine_config:
      scheduler:
        policy:
          max_iterations_curr_batch: 100
          max_batch_total_tokens: 100000
          max_total_tokens: 100000
          max_input_length: 100
          max_batch_prefill_tokens: 100000
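
Putting the steps together, here is a before/after sketch of the parameter migration (values are illustrative; only the renames and moves described above are taken from this guide):

    # Before (v0.1.x, illustrative values)
    model_config:
      generation:
        max_waiting_tokens: 100
        max_batch_total_tokens: 100000

    # After (v0.2)
    engine_config:
      scheduler:
        policy:
          max_iterations_curr_batch: 100   # renamed from max_waiting_tokens
          max_batch_total_tokens: 100000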

v0.1.2

28 Jul 18:03
56ab835

What's Changed

  • Updated TGI version
  • Fixed minor issues

Full Changelog: v0.1.1...v0.1.2

v0.1.1

15 Jul 23:14

What's Changed

  • Performance, reliability, and consistency improvements and fixes for continuous batching
  • Progress on OpenAI API compatibility
  • Added execution hooks
  • Fixed missing Ray Dashboard dependencies in Docker images

Note: This update requires changes to model config YAMLs

Full Changelog: v0.1.0...v0.1.1

v0.1.0

03 Jul 21:31

What's Changed

Note: This update breaks existing APIs and requires changes to model config YAMLs

Full Changelog: v0.0.3...v0.1.0

v0.0.3

21 Jun 20:57

What's Changed

  • Added streaming support in both the backend and frontend
  • Aviary now follows the multi-application Ray Serve convention
  • Refactored parts of the SDK (more changes are coming)
  • Added CI
  • Minor tweaks to the frontend
  • Added typing-extensions as a dependency to fix import issues on Python < 3.9


Full Changelog: v0.0.2...v0.0.3

v0.0.2

03 Jun 02:40

What's Changed

  • Slimmed down the Docker image, removed unnecessary requirements, and fixed the Ray Cluster Launcher configuration file that caused an infinite worker node initialization loop - 54d0ebb
  • Increased maximum input length and reduced batch size - f063f15
  • Added ability to query OpenAI models through CLI (for comparison purposes) - 8e4e965
  • Added static news ticker to Gradio app - fe670ae

Full Changelog: v0.0.1...v0.0.2