[Roboflow Inference banner]

🎬 pip install inference

Roboflow Inference is the easiest way to use and deploy computer vision models. Inference supports running object detection, classification, instance segmentation, and even foundation models (like CLIP and SAM). You can train and deploy your own custom model or use one of the 50,000+ fine-tuned models shared by the community.

There are three primary inference interfaces: a Python-native package (pip install inference), a self-hosted inference server (Docker), and the fully-managed Roboflow Hosted API.

πŸƒ Getting Started

Get up and running with inference on your local machine in 3 minutes.

pip install inference # or inference-gpu if you have CUDA

Set up your Roboflow Private API Key by exporting a ROBOFLOW_API_KEY environment variable or adding it to a .env file.

export ROBOFLOW_API_KEY=your_key_here
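
If you prefer the .env route, a common pattern is to load the file at startup. This is a minimal sketch assuming the python-dotenv package, which is not a dependency of inference:

import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from ./.env into os.environ
assert os.getenv("ROBOFLOW_API_KEY"), "ROBOFLOW_API_KEY is not set"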

Run an open-source Rock, Paper, Scissors model on your webcam stream:

import inference

inference.Stream(
    source="webcam", # or rtsp stream or camera id
    model="rock-paper-scissors-sxsw/11", # from Universe

    on_prediction=lambda predictions, image: (
        print(predictions) # now hold up your hand: 🪨 📄 ✂️
    )
)

Note

Currently, the stream interface only supports object detection models.

Now let's extend the example to use Supervision to visualize the predictions and display them on screen with OpenCV:

import cv2
import inference
import supervision as sv

annotator = sv.BoxAnnotator()

inference.Stream(
    source="webcam", # or rtsp stream or camera id
    model="rock-paper-scissors-sxsw/11", # from Universe

    output_channel_order="BGR",
    use_main_thread=True, # for opencv display
    
    on_prediction=lambda predictions, image: (
        print(predictions), # now hold up your hand: 🪨 📄 ✂️
        
        cv2.imshow(
            "Prediction", 
            annotator.annotate(
                scene=image, 
                detections=sv.Detections.from_roboflow(predictions)
            )
        ),
        cv2.waitKey(1)
    )
)

👩‍🏫 More Examples

The /examples directory contains code samples for working with and extending inference, including using foundation models like CLIP, HTTP and UDP clients, and an insights dashboard, along with community examples (PRs welcome)!

🎥 Inference in action

Check out Inference running on a video of a football game:

[video: inference.mp4]

💻 Why Inference?

Inference provides a scalable way to run and manage model inference across your vision projects.

Inference is composed of:

  • Thousands of pre-trained community models that you can use as a starting point.

  • Foundation models like CLIP and SAM, plus OCR capabilities.

  • A tight integration with Supervision.

  • An HTTP server, so you don't have to reimplement things like image processing and prediction visualization on every project. It also lets you scale your GPU infrastructure independently of your application code and access your model from whatever language your app is written in.

  • Standardized APIs for computer vision tasks, so switching out the model weights and architecture can be done independently of your application code.

  • A model registry, so your code stays independent of your model weights & you don't have to rebuild and redeploy every time you iterate on your model weights.

  • Active Learning integrations, so you can collect images of the edge cases your model encounters in the wild and use them to improve your dataset & model over time.

  • Seamless interoperability with Roboflow for creating datasets, training & deploying custom models.

And more!

📌 Use the Inference Server

You can learn more about building, pulling, and running the Roboflow Inference Docker images in our documentation.

  • Run on x86 CPU:
docker run -it --net=host roboflow/roboflow-inference-server-cpu:latest
  • Run on NVIDIA GPU:
docker run -it --network=host --gpus=all roboflow/roboflow-inference-server-gpu:latest
  • Run on arm64 CPU:
docker run -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu:latest
  • Run on NVIDIA Jetson with JetPack 4.x:
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson:latest
  • Run on NVIDIA Jetson with JetPack 5.x:
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson-5.1.1:latest
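
Once a container is running, a quick way to confirm the server is reachable is a plain HTTP request. A minimal sketch, assuming the default port 9001 and the requests package:

import requests

# Ping the local inference server started by one of the commands above.
response = requests.get("http://localhost:9001")
print(response.status_code)  # 200 means the server is up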

Extras:

Some functionality requires extra dependencies. These can be installed by specifying the desired extras during installation of Roboflow Inference.

extra  description
clip   Ability to use the core CLIP model (by OpenAI)
gaze   Ability to use the core Gaze model
http   Ability to run the HTTP interface
sam    Ability to run the core Segment Anything model (by Meta AI)

Note: Both CLIP and Segment Anything require PyTorch to run. It is included in their respective extras; however, PyTorch installs can be highly environment dependent. See the official PyTorch install page for instructions specific to your environment.

Example install with CLIP dependencies:

pip install "inference[clip]"

Inference Client

To consume predictions from an inference server in Python, you can use the inference-sdk package.

pip install inference-sdk

from inference_sdk import InferenceHTTPClient

image_url = "https://media.roboflow.com/inference/soccer.jpg"

# Replace ROBOFLOW_API_KEY with your Roboflow API Key
client = InferenceHTTPClient(
    api_url="http://localhost:9001", # or https://detect.roboflow.com for Hosted API
    api_key="ROBOFLOW_API_KEY"
)
with client.use_model("soccer-players-5fuqs/1"):
    predictions = client.infer(image_url)

print(predictions)
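
The response is a plain dict. Here is a hedged sketch of reading the detections, assuming the standard Roboflow detection schema with a top-level "predictions" list:

# Each prediction carries a class label, a confidence score, and a
# center-based bounding box (x, y, width, height) in pixels.
for p in predictions.get("predictions", []):
    print(p["class"], p["confidence"], p["x"], p["y"], p["width"], p["height"])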

Visit our documentation to discover the full capabilities of the inference-sdk library.

Single Image Inference

After installing inference via pip, you can run inference on a single image (vs the video stream example above) by instantiating a model and using the infer method (don't forget to set up your ROBOFLOW_API_KEY environment variable or .env file):

from inference.models.utils import get_roboflow_model

model = get_roboflow_model(
    model_id="soccer-players-5fuqs/1"
)

# you can also infer on local images by passing a file path,
# a PIL image, or a numpy array
results = model.infer(
    image="https://media.roboflow.com/inference/soccer.jpg",
    confidence=0.5,
    iou_threshold=0.5
)

print(results)
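
The same call works on local images. A minimal sketch using a numpy array loaded with OpenCV, assuming a soccer.jpg file on disk:

import cv2

frame = cv2.imread("soccer.jpg")  # BGR numpy array

results = model.infer(
    image=frame,
    confidence=0.5,
    iou_threshold=0.5
)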

Getting CLIP Embeddings

You can run inference with OpenAI's CLIP model using:

from inference.models import Clip

image_url = "https://media.roboflow.com/inference/soccer.jpg"

model = Clip()
embeddings = model.embed_image(image_url)

print(embeddings)
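
Embeddings are most useful when compared. A hedged sketch of scoring text prompts against the image, assuming Clip also exposes an embed_text method and that both methods return array-like vectors:

import numpy as np

texts = ["a soccer match", "a cooking show"]

image_vec = np.asarray(embeddings).reshape(-1)
for text in texts:
    text_vec = np.asarray(model.embed_text(text)).reshape(-1)
    # cosine similarity between the image and the prompt
    similarity = np.dot(image_vec, text_vec) / (
        np.linalg.norm(image_vec) * np.linalg.norm(text_vec)
    )
    print(f"{text}: {similarity:.3f}")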

Using SAM

You can run inference with Meta's Segment Anything model using:

from inference.models import SegmentAnything

image_url = "https://media.roboflow.com/inference/soccer.jpg"

model = SegmentAnything()
embeddings = model.embed_image(image_url)

print(embeddings)

πŸ—οΈ inference Process

To standardize the inference process across all of our models, Roboflow Inference uses a common structure for processing inference requests. The specifics can be found on each model's respective page, but for most models it works like this:

inference structure
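
In pseudocode, that flow looks roughly like this (a sketch with hypothetical method names, not the actual class):

class ModelSketch:
    def infer(self, image, **kwargs):
        inputs = self.preprocess(image, **kwargs)  # load, resize, normalize
        raw = self.predict(inputs)                 # run the model forward pass
        return self.postprocess(raw, **kwargs)     # e.g. NMS, rescale boxes, build response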

✅ Supported Models

Load from Roboflow

You can use models hosted on Roboflow with the following architectures through Inference:

  • YOLOv5 Object Detection
  • YOLOv5 Instance Segmentation
  • YOLOv8 Object Detection
  • YOLOv8 Classification
  • YOLOv8 Segmentation
  • YOLACT Segmentation
  • ViT Classification

Core Models

Core Models are foundation models and models that have not been fine-tuned on a specific dataset.

The following core models are supported:

  1. CLIP
  2. L2CS (Gaze Detection)
  3. Segment Anything (SAM)

πŸ“ License

The Roboflow Inference code is distributed under an Apache 2.0 license. The models supported by Roboflow Inference have their own licenses. View the licenses for supported models below.

model                    license
inference/models/clip    MIT
inference/models/gaze    MIT, Apache 2.0
inference/models/sam     Apache 2.0
inference/models/vit     Apache 2.0
inference/models/yolact  MIT
inference/models/yolov5  AGPL-3.0
inference/models/yolov7  GPL-3.0
inference/models/yolov8  AGPL-3.0

Inference CLI

We've created a CLI tool with useful commands to make using inference easier. Check out the docs.

🚀 Enterprise

With a Roboflow Inference Enterprise License, you can access additional Inference features, including:

  • Server cluster deployment
  • Device management
  • Active learning
  • YOLOv5 and YOLOv8 commercial license

To learn more, contact the Roboflow team.

📚 documentation

Visit our documentation for usage examples and reference for Roboflow Inference.

πŸ† contribution

We would love your input to improve Roboflow Inference! Please see our contributing guide to get started. Thank you to all of our contributors! 🙏

💻 explore more Roboflow open source projects

  • supervision: General-purpose utilities for use in computer vision projects, from predictions filtering and display to object tracking to model evaluation.
  • Autodistill: Automatically label images for use in training computer vision models.
  • Inference (this project): An easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
  • Notebooks: Tutorials for computer vision tasks, from training state-of-the-art models to tracking objects to counting objects in a zone.
  • Collect: Automated, intelligent data collection powered by CLIP.
