Merge pull request #31 from supabase/or/docs-typos
Docs fixes
olirice authored Jul 31, 2023
2 parents 74351a2 + 43b73fa commit 4a001e9
Showing 4 changed files with 12 additions and 11 deletions.
README.md (4 changes: 2 additions & 2 deletions)
@@ -57,7 +57,7 @@ docs = vx.create_collection(name="docs", dimension=3)
 
 # add records to the *docs* collection
 docs.upsert(
-    vectors=[
+    records=[
         (
          "vec0",           # the vector's identifier
          [0.1, 0.2, 0.3],  # the vector. list or np.array
@@ -76,7 +76,7 @@ docs.create_index()
 
 # query the collection filtering metadata for "year" = 2012
 docs.query(
-    query_vector=[0.4,0.5,0.6],      # required
+    data=[0.4,0.5,0.6],              # required
     limit=1,                         # number of records to return
     filters={"year": {"$eq": 2012}}, # metadata filters
 )
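Taken together, the two README fixes above rename `vectors=` to `records=` in `upsert` and `query_vector=` to `data=` in `query`. A consolidated sketch of the corrected flow, assuming a placeholder connection string and example vectors that are not part of this diff:

```python
import vecs

# placeholder connection string; substitute your own database URL
vx = vecs.create_client("postgresql://user:password@localhost:5432/postgres")

docs = vx.create_collection(name="docs", dimension=3)

# add records to the collection (argument renamed from `vectors` to `records`)
docs.upsert(
    records=[
        ("vec0", [0.1, 0.2, 0.3], {"year": 1973}),
        ("vec1", [0.4, 0.5, 0.6], {"year": 2012}),
    ]
)

# build an index so queries use approximate nearest-neighbor search
docs.create_index()

# query the collection (argument renamed from `query_vector` to `data`)
docs.query(
    data=[0.4, 0.5, 0.6],
    limit=1,
    filters={"year": {"$eq": 2012}},
)
```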
docs/api.md (6 changes: 3 additions & 3 deletions)
@@ -184,14 +184,14 @@ Adapters are an optional feature to transform data before adding to or querying
 
 For a complete list of available adapters, see [built-in adapters](concepts_adapters.md#built-in-adapters).
 
-As an example, we'll create a collection with an adapter that chunks text into paragraphs and converts each chunk into an embedding vector using the `all-Mini-LM6-v2` model.
+As an example, we'll create a collection with an adapter that chunks text into paragraphs and converts each chunk into an embedding vector using the `all-MiniLM-L6-v2` model.
 
 First, install `vecs` with optional dependencies for text embeddings:
 ```sh
 pip install "vecs[text_embedding]"
 ```
 
-Then create a collection with an adapter to chunk text into paragraphs and embed each paragraph using the `all-Mini-LM6-v2` 384 dimensional text embedding model.
+Then create a collection with an adapter to chunk text into paragraphs and embed each paragraph using the `all-MiniLM-L6-v2` 384 dimensional text embedding model.
 
 ```python
 import vecs
@@ -206,7 +206,7 @@ docs = vx.get_or_create_collection(
     adapter=Adapter(
         [
             ParagraphChunker(skip_during_query=True),
-            TextEmbedding(model='all-Mini-LM6-v2'),
+            TextEmbedding(model='all-MiniLM-L6-v2'),
         ]
     )
 )
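For context, a collection created with the adapter shown above accepts raw text in place of vectors. A minimal usage sketch, assuming the post-0.3.0 `records`/`data` argument names from this PR; the record id, text, and metadata below are placeholders:

```python
# upsert raw text; ParagraphChunker splits it and TextEmbedding turns
# each paragraph into a 384-dimensional vector
docs.upsert(
    records=[
        ("post_1", "First paragraph of the post.\n\nSecond paragraph of the post.", {"year": 2023}),
    ]
)

docs.create_index()

# query with text; skip_during_query=True means the query string is embedded
# as-is rather than being chunked into paragraphs first
docs.query(data="What does the post say?", limit=3)
```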
docs/concepts_adapters.md (8 changes: 4 additions & 4 deletions)
@@ -5,14 +5,14 @@ Adapters are an optional feature to transform data before adding to or querying
 Additionally, adapter transformations are applied lazily and can internally batch operations which can make them more memory and CPU efficient compared to manually executing transforms.
 
 ## Example:
-As an example, we'll create a collection with an adapter that chunks text into paragraphs and converts each chunk into an embedding vector using the `all-Mini-LM6-v2` model.
+As an example, we'll create a collection with an adapter that chunks text into paragraphs and converts each chunk into an embedding vector using the `all-MiniLM-L6-v2` model.
 
 First, install `vecs` with optional dependencies for text embeddings:
 ```sh
 pip install "vecs[text_embedding]"
 ```
 
-Then create a collection with an adapter to chunk text into paragraphs and embed each paragraph using the `all-Mini-LM6-v2` 384 dimensional text embedding model.
+Then create a collection with an adapter to chunk text into paragraphs and embed each paragraph using the `all-MiniLM-L6-v2` 384 dimensional text embedding model.
 
 ```python
 import vecs
@@ -27,7 +27,7 @@ docs = vx.get_or_create_collection(
     adapter=Adapter(
         [
             ParagraphChunker(skip_during_query=True),
-            TextEmbedding(model='all-Mini-LM6-v2'),
+            TextEmbedding(model='all-MiniLM-L6-v2'),
         ]
     )
 )
@@ -111,7 +111,7 @@ vx.get_or_create_collection(
     name="docs",
     adapter=Adapter(
         [
-            TextEmbedding(model='all-Mini-LM6-v2')
+            TextEmbedding(model='all-MiniLM-L6-v2')
         ]
    )
 )
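The last hunk above configures an adapter with only `TextEmbedding`, so each record's full text is embedded as a single vector with no paragraph chunking. A hypothetical sketch under that assumption; the collection name and record values are placeholders:

```python
from vecs.adapter import Adapter, TextEmbedding

# assumes `vx` is the client created earlier on this page
sentences = vx.get_or_create_collection(
    name="sentences",
    adapter=Adapter([TextEmbedding(model='all-MiniLM-L6-v2')])
)

# each record's text becomes one 384-dimensional vector
sentences.upsert(records=[("s1", "The quick brown fox jumps over the lazy dog.", {})])

sentences.query(data="a fast animal leaping", limit=1)
```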
docs/support_changelog.md (5 changes: 3 additions & 2 deletions)
@@ -12,9 +12,10 @@
 - Feature: Uses (indexed) containment operator `@>` for metadata equality filters where possible
 - Docs: Added docstrings to all methods, functions and modules
 
-## master
+## 0.3.0
 
 - Feature: Collections can have `adapters` allowing upserting/querying by native media types
 - Breaking Change: Renamed argument `Collection.upsert(vectors, ...)` to `Collection.upsert(records, ...)` in support of adapters
 - Breaking Change: Renamed argument `Collection.query(query_vector, ...)` to `Collection.query(data, ...)` in support of adapters
-- Added
+
+## master
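For reference, the two breaking changes recorded under 0.3.0 above amount to renaming one keyword argument in each call. A minimal before/after sketch; the vectors and metadata are placeholders:

```python
# before 0.3.0
docs.upsert(vectors=[("vec0", [0.1, 0.2, 0.3], {"year": 2012})])
docs.query(query_vector=[0.4, 0.5, 0.6], limit=1)

# 0.3.0 and later
docs.upsert(records=[("vec0", [0.1, 0.2, 0.3], {"year": 2012})])
docs.query(data=[0.4, 0.5, 0.6], limit=1)
```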
