Inference Docs Overhaul #385

Merged · 3 commits · May 14, 2024
**docs/index.md** (39 additions, 41 deletions)

![Roboflow Inference banner](https://github.com/roboflow/inference/blob/main/banner.png?raw=true)

Roboflow Inference is an open-source platform designed to simplify the deployment of computer vision models. It enables developers to perform object detection, classification, instance segmentation, and keypoint detection, and to utilize foundation models like [CLIP](/foundation/clip), [Segment Anything](/foundation/sam), and [YOLO-World](/foundation/yolo_world) through a Python-native package, a self-hosted inference server, or a fully [managed API](https://docs.roboflow.com/).

Explore our [enterprise options](https://roboflow.com/sales) for advanced features like server deployment, device management, active learning, and commercial licenses for YOLOv5 and YOLOv8.

Here is an example of a model running on a video using Inference:
<source src="https://media.roboflow.com/football-video.mp4" type="video/mp4">
</video>


## 💻 install

The Inference package requires [**Python>=3.8,<=3.11**](https://www.python.org/). Click [here](/quickstart/docker/) to learn more about running Inference inside Docker.

```bash
pip install inference
```
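The supported interpreter range above can be checked programmatically before installing; the helper below is only a sketch of the documented `>=3.8,<=3.11` constraint, not part of the package:

```python
import sys

# The docs state Inference supports Python >= 3.8 and <= 3.11 (inclusive).
# Illustrative helper only -- not part of the `inference` package.
def python_supported(version_info=sys.version_info) -> bool:
    major, minor = version_info[0], version_info[1]
    return (3, 8) <= (major, minor) <= (3, 11)
```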
<details>
<summary>👉 additional considerations</summary>

### Hardware

Enhance model performance in GPU-accelerated environments by installing CUDA-compatible dependencies.

```bash
pip install inference-gpu
```

### Model-specific dependencies

The `inference` and `inference-gpu` packages install only the minimal shared dependencies. Install model-specific dependencies to ensure code compatibility and license compliance. Learn more about the [models](#extras) supported by Inference.

```bash
pip install "inference[yolo-world]"
```

</details>

<details>
<summary>👉 foundational models</summary>


- [CLIP Embeddings](/foundation/clip) - generate text and image embeddings that you can use for zero-shot classification or assessing image similarity.

```python
from inference.models import Clip
model = Clip()

embeddings_text = model.embed_text("a football match")
embeddings_image = model.embed_image("https://media.roboflow.com/inference/soccer.jpg")
```

- [Segment Anything](/foundation/sam) - segment all objects visible in the image or only those associated with selected points or boxes.

```python
from inference.models import SegmentAnything
model = SegmentAnything()

result = model.segment_image("https://media.roboflow.com/inference/soccer.jpg")
```

- [YOLO-World](/foundation/yolo_world) - an almost real-time zero-shot detector that enables the detection of any objects without any training.

```python
from inference.models import YOLOWorld

model = YOLOWorld(model_id="yolo_world/l")

result = model.infer(
image="https://media.roboflow.com/inference/dog.jpeg",
text=["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
)
```

</details>

You can also run Inference as a microservice with Docker.

### deploy server

The inference server is distributed via Docker. Behind the scenes, Inference will download and run the image that is appropriate for your hardware. [Here](/quickstart/docker/#advanced-build-a-docker-container-from-scratch), you can learn more about the supported images.

```bash
inference server start
```

### run client

Consume inference server predictions using the HTTP client available in the Inference SDK.

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<ROBOFLOW_API_KEY>",
)
with client.use_model(model_id="soccer-players-5fuqs/1"):
    predictions = client.infer("https://media.roboflow.com/inference/soccer.jpg")
```

If you're using the hosted API, change the local API URL to `https://detect.roboflow.com`. Accessing the hosted inference server and/or using any of the fine-tuned models requires a `ROBOFLOW_API_KEY`. For further information, visit the 🔑 keys section.
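The local-vs-hosted switch described above can be sketched as a small helper; the function name here is illustrative and not part of the Inference SDK:

```python
# Illustrative helper (not part of the Inference SDK): pick between the
# default local server URL and Roboflow's hosted endpoint described above.
def resolve_api_url(hosted: bool) -> str:
    return "https://detect.roboflow.com" if hosted else "http://localhost:9001"
```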

## 🎥 inference pipeline

The inference pipeline is an efficient method for processing static video files and streams. Select a model, define the video source, and set a callback action. You can choose from predefined callbacks that allow you to [display results](/docs/reference/inference/core/interfaces/stream/sinks/#inference.core.interfaces.stream.sinks.render_boxes) on the screen or [save them to a file](/docs/reference/inference/core/interfaces/stream/sinks/#inference.core.interfaces.stream.sinks.VideoFileSink).

```python
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="yolov8x-1280",
    video_reference="https://media.roboflow.com/inference/people-walking.mp4",
    on_prediction=render_boxes,
)
pipeline.start()
pipeline.join()
```

## 🔑 keys

Export your Roboflow API key as an environment variable:

```bash
export ROBOFLOW_API_KEY=<YOUR_API_KEY>
```
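Beyond the predefined callbacks, a sink is just a callable that receives the prediction payload and the video frame. A minimal custom sink might look like the sketch below; the function name and behavior are illustrative, not a documented API:

```python
# Sketch of a custom sink: Inference pipeline callbacks receive the prediction
# payload and the video frame. Here we simply count detected objects per frame
# (illustrative only -- not one of the predefined sinks).
def count_objects_sink(predictions: dict, video_frame=None) -> int:
    return len(predictions.get("predictions", []))
```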

## 📚 documentation

Visit our [documentation](/) to explore comprehensive guides, detailed API references, and a wide array of tutorials designed to help you harness the full potential of the Inference package.

## © license

The Roboflow Inference code is distributed under the [Apache 2.0](https://github.com/roboflow/inference/blob/master/LICENSE.md) license. However, each supported model is subject to its licensing. Detailed information on each model's license can be found [here](https://inference.roboflow.com/quickstart/licensing/#model-code-licenses).
**docs/models/from_local_weights.md** (1 addition, 1 deletion)

Your model weights will be uploaded to Roboflow. It may take a few minutes for your weights to be processed. Once your weights have been processed, your dataset version page will be updated to say that a model is available with your weights.

You can then use the model with Inference following our [Run a Private, Fine-Tuned Model](/quickstart/explore_models/#run-a-private-fine-tuned-model) guide.
**docs/quickstart/aliases.md** (57 additions, 5 deletions)

Inference supports running any of the 50,000+ pre-trained public models hosted on Roboflow Universe.

We have defined IDs for common models for ease of use. These models do not require an API key, unlike other public or private models.

Using it in `inference` is as simple as:

```python
from inference import get_model

model = get_model(model_id="yolov8n-640")

results = model.infer("https://media.roboflow.com/inference/people-walking.jpg")
```

!!! Tip

    See the [Use a fine-tuned model](/quickstart/explore_models) guide for an example on how to deploy your own model.

## Supported Pre-Trained Models

You can click the link associated with a model below to test the model in your browser, and use the ID with Inference to deploy the model to the edge.

<style>
table {
width: 100%;
}
</style>
</tr>
<tr>
<td>YOLOv8x Instance Segmentation</td>
<td>1280</td>
<td>Instance Segmentation</td>
<td>yolov8x-seg-1280</td>
<td><a href="https://universe.roboflow.com/microsoft/coco-dataset-vdnr1/model/11">Test in Browser</a></td>
</tr>
<tr>
<td>YOLOv8x Keypoint Detection</td>
<td>1280</td>
<td>Keypoint Detection</td>
<td>yolov8x-pose-1280</td>
<td><a href="https://universe.roboflow.com/microsoft/coco-pose-detection/6">Test in Browser</a></td>
</tr>
<tr>
<td>YOLOv8x Keypoint Detection</td>
<td>640</td>
<td>Keypoint Detection</td>
<td>yolov8x-pose-640</td>
<td><a href="https://universe.roboflow.com/microsoft/coco-pose-detection/5">Test in Browser</a></td>
</tr>
<tr>
<td>YOLOv8l Keypoint Detection</td>
<td>640</td>
<td>Keypoint Detection</td>
<td>yolov8l-pose-640</td>
<td><a href="https://universe.roboflow.com/microsoft/coco-pose-detection/4">Test in Browser</a></td>
</tr>
<tr>
<td>YOLOv8m Keypoint Detection</td>
<td>640</td>
<td>Keypoint Detection</td>
<td>yolov8m-pose-640</td>
<td><a href="https://universe.roboflow.com/microsoft/coco-pose-detection/3">Test in Browser</a></td>
</tr>
<tr>
<td>YOLOv8s Keypoint Detection</td>
<td>640</td>
<td>Keypoint Detection</td>
<td>yolov8s-pose-640</td>
<td><a href="https://universe.roboflow.com/microsoft/coco-pose-detection/2">Test in Browser</a></td>
</tr>
<tr>
<td>YOLOv8n Keypoint Detection</td>
<td>640</td>
<td>Keypoint Detection</td>
<td>yolov8n-pose-640</td>
<td><a href="https://universe.roboflow.com/microsoft/coco-pose-detection/1">Test in Browser</a></td>
</tr>
</table>
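The alias IDs in the table above end with the model's input resolution (e.g. `yolov8x-pose-1280`), so the resolution can be recovered directly from the ID string. The helper below is a convenience sketch, not part of the `inference` API:

```python
# Sketch: alias IDs listed above encode the input resolution as a trailing
# suffix, e.g. "yolov8x-pose-1280" -> 1280. Not an official inference helper.
def alias_resolution(model_alias: str) -> int:
    return int(model_alias.rsplit("-", 1)[-1])
```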
**docs/quickstart/explore_models.md** (2 additions, 85 deletions)
With Inference, you can run private, fine-tuned models that you have trained or uploaded to Roboflow.

All models run on your own hardware.

To run a model, first go to your <a href="https://app.roboflow.com" target="_blank">Roboflow dashboard</a>. Then, choose the model you want to run.

![Roboflow dashboard](https://media.roboflow.com/docs-models.png)
