
Pinned

  1. vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list)

    Python · 35.8k stars · 5.4k forks

  2. llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (a quantization sketch follows this list)

    Python · 911 stars · 75 forks
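
To orient new readers, here is a minimal offline-inference sketch using vLLM's documented Python API. The model name is only an example; any Hugging Face-compatible checkpoint can be substituted.

    from vllm import LLM, SamplingParams

    # Load a Hugging Face-compatible checkpoint; vLLM manages KV-cache
    # memory with PagedAttention to sustain high throughput.
    llm = LLM(model="facebook/opt-125m")

    sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    # generate() takes a batch of prompts and returns one result per prompt.
    outputs = llm.generate(["The capital of France is"], sampling)
    for out in outputs:
        print(out.outputs[0].text)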
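
For llm-compressor, the sketch below follows the project's one-shot quantization pattern: a modifier describes the compression scheme and oneshot() applies it with calibration data. Module paths, the model, dataset, and parameter names are taken from the project's documented examples and may differ across versions.

    from llmcompressor.transformers import oneshot
    from llmcompressor.modifiers.quantization import GPTQModifier

    # Quantize weights and activations of all Linear layers to 8-bit with
    # GPTQ, leaving the output head in full precision. Model and dataset
    # names are illustrative placeholders.
    recipe = GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"])

    oneshot(
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        dataset="open_platypus",
        recipe=recipe,
        output_dir="TinyLlama-1.1B-Chat-W8A8",
        max_seq_length=2048,
        num_calibration_samples=512,
    )

The resulting checkpoint can then be served directly by vLLM.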

Repositories

Showing 10 of 13 repositories
  • vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 35,775 stars · Apache-2.0 · 5,429 forks · 1,166 issues (9 need help) · 474 PRs · Updated Feb 1, 2025
  • production-stack Public
    Python · 120 stars · Apache-2.0 · 16 forks · 9 issues · 3 PRs · Updated Feb 1, 2025
  • llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 911 stars · Apache-2.0 · 75 forks · 12 issues · 37 PRs · Updated Feb 1, 2025
  • buildkite-ci Public
    HCL · 8 stars · 17 forks · 0 issues · 3 PRs · Updated Jan 30, 2025
  • vllm-project.github.io Public
    HTML · 6 stars · MIT · 7 forks · 0 issues · 1 PR · Updated Jan 30, 2025
  • vllm-blog-source Public
    SCSS · 4 stars · MIT · 7 forks · 0 issues · 0 PRs · Updated Jan 30, 2025
  • vllm-spyre Public

    Community maintained hardware plugin for vLLM on Spyre

    2 stars · Apache-2.0 · 0 forks · 0 issues · 0 PRs · Updated Jan 29, 2025
  • vllm-ascend Public

    Community maintained hardware plugin for vLLM on Ascend

    12 stars · Apache-2.0 · 3 forks · 1 issue · 1 PR · Updated Jan 29, 2025
  • flash-attention Public (forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention

    C++ · 43 stars · BSD-3-Clause · 1,447 forks · 0 issues · 8 PRs · Updated Jan 26, 2025
  • vllm_allocator_adaptor Public

    An adaptor that allows a Python-level allocator to be used with PyTorch's pluggable-allocator interface (see the sketch after this list)

    C++ · 2 stars · Apache-2.0 · 0 forks · 0 issues · 0 PRs · Updated Jan 5, 2025
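
For context on vllm_allocator_adaptor: PyTorch's built-in pluggable-allocator hook loads a compiled shared library that exports C-ABI allocation functions, so a pure-Python allocator cannot plug in directly; the adaptor bridges that gap. Below is a minimal sketch of the stock PyTorch mechanism, where the library path and symbol names are placeholders.

    import torch

    # CUDAPluggableAllocator loads a shared library exporting C-ABI
    # malloc/free-style symbols; "alloc.so", "my_malloc", and "my_free"
    # are placeholder names for illustration.
    new_alloc = torch.cuda.memory.CUDAPluggableAllocator(
        "alloc.so", "my_malloc", "my_free"
    )
    torch.cuda.memory.change_current_allocator(new_alloc)

    # Subsequent CUDA allocations in this process route through the
    # custom allocator.
    x = torch.empty(1024, device="cuda")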

Sponsors

  • @terrytangyuan
  • @trianxy
  • @adheep04
  • @mhupfauer
  • @kiritoxkiriko
  • @AlpinDale
  • @HiddenPeak
  • @dvlpjrs
  • @vincentkoc
  • @mgoin
  • @robertgshaw2-redhat
  • Private Sponsor
