move cli to extra file, macOS tests add documentation (#501)
* add documentation

* update files: for cli

* update cli definition

* update: openapi

* undo: infer changes

* loosen: openai-restrictions

* update docs, update mac unit tests

* update: cli / docs

* add cli / test

* refactor cli . inf server tests

* cli: remove defered typing

* improve tolerance of embedding compat
michaelfeil authored Jan 1, 2025
1 parent 944643b commit 2ed3884
Showing 14 changed files with 501 additions and 568 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -159,7 +159,7 @@ The cache path inside the docker container is set by the environment variable `H
Checkout `infinity_emb v2 --help` for all args and validation.

Multiple Model CLI Playbook:
- - 1. cli options can be repeated e.g. `v2 --model-id model/id1 --model-id/id2 --batch-size 8 --batch-size 4`. This will create two models `model/id1` and `model/id2`
+ - 1. cli options can be repeated e.g. `v2 --model-id model/id1 --model-id model/id2 --batch-size 8 --batch-size 4`. This will create two models `model/id1` and `model/id2`
- 2. or adapt the defaults by setting ENV Variables separated by `;`: `INFINITY_MODEL_ID="model/id1;model/id2;" && INFINITY_BATCH_SIZE="8;4;"`
- 3. single items are broadcasted to `--model-id` length, `v2 --model-id model/id1 --model-id/id2 --batch-size 8` making both models have batch-size 8.
- 4. Everything is broadcasted to the number of `--model-id` + API requests are routed to the `--served-model-name/--model-id`
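
For illustration, a minimal sketch of the two equivalent launch styles from the playbook; the model ids and batch sizes reuse the examples from items 1 and 2:

```bash
# Style 1: repeat CLI options; values pair up by position.
infinity_emb v2 \
  --model-id model/id1 --model-id model/id2 \
  --batch-size 8 --batch-size 4

# Style 2: the same two-model launch via `;`-separated ENV variables.
export INFINITY_MODEL_ID="model/id1;model/id2;"
export INFINITY_BATCH_SIZE="8;4;"
infinity_emb v2
```
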
2 changes: 1 addition & 1 deletion docs/assets/openapi.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/docs/cli_v2.md
@@ -11,7 +11,7 @@ $ infinity_emb v2 --help
Infinity API ♾️ cli v2. MIT License. Copyright (c) 2023-now Michael Feil
Multiple Model CLI Playbook:
- - 1. cli options can be overloaded i.e. `v2 --model-id model/id1 --model-id/id2 --batch-size 8 --batch-size 4`
+ - 1. cli options can be overloaded i.e. `v2 --model-id model/id1 --model-id model/id2 --batch-size 8 --batch-size 4`
- 2. or adapt the defaults by setting ENV Variables separated by `;`: INFINITY_MODEL_ID="model/id1;model/id2;" &&
INFINITY_BATCH_SIZE="8;4;"
- 3. single items are broadcasted to `--model-id` length, making `v2 --model-id model/id1 --model-id/id2 --batch-size
8 changes: 6 additions & 2 deletions docs/docs/contribution.md
@@ -10,18 +10,22 @@ cd libs/infinity_emb
poetry install --extras all --with test
```

- To ensure your contributions pass the Continuous Integration (CI) checks:
+ To ensure your contributions pass the Continuous Integration (CI), there are some useful local actions.
+ The `libs/infinity_emb/Makefile` is a useful entrypoint for this.
```bash
cd libs/infinity_emb
make format
make lint
make template-docker
poetry run pytest ./tests
```
- As an alternative, you can also use the following command:
+ As an alternative, you can also use the following command, which bundles a range of the above.
```bash
cd libs/infinity_emb
make precommit
```

## CLA
Infinity is developed as open source project.
All contributions must be made in a way to be compatible with the MIT License of this repo.
108 changes: 0 additions & 108 deletions docs/docs/index.md
@@ -1,108 +0,0 @@
# [Infinity](https://github.com/michaelfeil/infinity)

Infinity is a high-throughput, low-latency REST API for serving vector embeddings, supporting all sentence-transformer models and frameworks. Infinity is developed under [MIT License](https://github.com/michaelfeil/infinity/blob/main/LICENSE). Infinity powers inference behind [Gradient.ai](https://gradient.ai) and other Embedding API providers.

## Why Infinity

Infinity provides the following features:

* **Deploy any model from MTEB**: deploy the model you know from [SentenceTransformers](https://github.com/UKPLab/sentence-transformers/)
* **Fast inference backends**: The inference server is built on top of [torch](https://github.com/pytorch/pytorch), [optimum(onnx/tensorrt)](https://huggingface.co/docs/optimum/index) and [CTranslate2](https://github.com/OpenNMT/CTranslate2), using FlashAttention to get the most out of your **CUDA**, **ROCM**, **CPU** or **MPS** device.
* **Dynamic batching**: New embedding requests are queued while the GPU is busy with the previous ones. New requests are squeezed into your device as soon as it is ready, giving similar max throughput on GPU as text-embeddings-inference (see the sketch below).
* **Correct and tested implementation**: Unit and end-to-end tested. Embeddings via infinity are identical to [SentenceTransformers](https://github.com/UKPLab/sentence-transformers/) (up to numerical precision). Lets API users create embeddings till infinity and beyond.
* **Easy to use**: The API is built on top of [FastAPI](https://fastapi.tiangolo.com/), and [Swagger](https://swagger.io/) makes it fully documented. The API is aligned with [OpenAI's Embedding specs](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings). See below on how to get started.
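
To make the dynamic-batching idea concrete, here is a minimal, self-contained toy sketch of the technique; it illustrates the idea and is not infinity's actual implementation:

```python
# Toy dynamic batching: requests queue up while the device is busy,
# then are drained into one batch as soon as the worker is free.
import asyncio

def embed_batch(texts: list[str]) -> list[list[float]]:
    # Stand-in for one model forward pass over the whole batch.
    return [[float(len(t))] for t in texts]

async def batch_worker(queue: asyncio.Queue, max_batch_size: int = 4) -> None:
    while True:
        text, fut = await queue.get()          # wait for the first request
        texts, futures = [text], [fut]
        # Greedily drain whatever queued up while the "device" was busy.
        while len(texts) < max_batch_size and not queue.empty():
            text, fut = queue.get_nowait()
            texts.append(text)
            futures.append(fut)
        for fut, emb in zip(futures, embed_batch(texts)):
            fut.set_result(emb)                # answer each waiting request

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(batch_worker(queue))
    loop = asyncio.get_running_loop()
    futures = [loop.create_future() for _ in range(3)]
    for i, fut in enumerate(futures):
        queue.put_nowait((f"text {i}", fut))
    print(await asyncio.gather(*futures))      # three embeddings, one batch
    worker.cancel()

asyncio.run(main())
```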

## Getting started

Install `infinity_emb` via pip
```bash
pip install infinity-emb[all]
```

<details>
<summary>Install from source with Poetry</summary>

Advanced:
To install via Poetry, use Poetry 1.8.4 and Python 3.11 on Ubuntu 22.04:
```bash
git clone https://github.com/michaelfeil/infinity
cd infinity
cd libs/infinity_emb
poetry install --extras all
```
</details>

### Launch the CLI using a pre-built docker container (recommended)

```bash
port=7997
model1=michaelfeil/bge-small-en-v1.5
model2=mixedbread-ai/mxbai-rerank-xsmall-v1
volume=$PWD/data

docker run -it --gpus all \
-v $volume:/app/.cache \
-p $port:$port \
michaelf34/infinity:latest \
v2 \
--model-id $model1 \
--model-id $model2 \
--port $port
```
The cache path inside the docker container is set by the environment variable `HF_HOME`.
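
As a sketch, the cache location could be pinned explicitly; the `-e HF_HOME=/app/.cache` override is an assumption for illustration, not a flag taken from the example above:

```bash
# Hypothetical: point the HuggingFace cache at the mounted volume.
docker run -it --gpus all \
 -v $PWD/data:/app/.cache \
 -e HF_HOME=/app/.cache \
 -p 7997:7997 \
 michaelf34/infinity:latest \
 v2 --model-id michaelfeil/bge-small-en-v1.5 --port 7997
```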

### Or launch the CLI after the pip install
After your pip install, with your venv activated, you can run the CLI directly.
Check the `--help` command to get a description for all parameters.

```bash
infinity_emb --help
```
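
For example, a minimal single-model launch mirroring the docker example above:

```bash
# Same launch as the docker example, but directly in the venv.
infinity_emb v2 \
  --model-id michaelfeil/bge-small-en-v1.5 \
  --port 7997
```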

## Launch FAQ
<details>
<summary>What are embedding models?</summary>
Embedding models map any text to a low-dimensional dense vector, which can be used for tasks like retrieval, classification, clustering, or semantic search.
They can also be used in vector databases for LLMs.


The best-known architectures are encoder-only transformers such as BERT, and the most popular implementations include [SentenceTransformers](https://github.com/UKPLab/sentence-transformers/).
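
A minimal sketch of that idea with SentenceTransformers (assumes `pip install sentence-transformers`; the model id reuses the example from the docker section):

```python
# Map two texts to dense vectors and compare them by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("michaelfeil/bge-small-en-v1.5")
embeddings = model.encode(["a photo of a cat", "a picture of a kitten"])
print(util.cos_sim(embeddings[0], embeddings[1]))  # high score = semantically close
```
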
</details>

<details>
<summary>What models are supported?</summary>

All models of the sentence-transformers org (https://huggingface.co/sentence-transformers, sbert.net) are supported.
LLMs like LLAMA2-7B are not intended for deployment.


With the command `--engine torch`:
- the model must be compatible with https://github.com/UKPLab/sentence-transformers/.
- only models from Huggingface are supported.


With the command `--engine ctranslate2`:
- only `BERT` models are supported.
- only models from Huggingface are supported.


For the latest trends, you might want to check out the models on the MTEB leaderboard:
https://huggingface.co/spaces/mteb/leaderboard
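
For instance, a hypothetical launch pinning the engine; the model id here is illustrative:

```bash
# Hypothetical: serve a sentence-transformers model with the torch engine.
infinity_emb v2 --model-id sentence-transformers/all-MiniLM-L6-v2 --engine torch
```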

</details>


<details>
<summary>Using Langchain with Infinity</summary>
Now available under "Python Integrations" in the side panel.
</details>
<details>
<summary>Question not answered here?</summary>
There is a Discussions section on the GitHub of Infinity:
https://github.com/michaelfeil/infinity/discussions
</details>