fix: add_texts method of Weaviate vector store creates wrong embeddings #4933

Merged · 4 commits · May 22, 2023
Changes from 1 commit
7 changes: 4 additions & 3 deletions langchain/vectorstores/weaviate.py
@@ -126,6 +126,8 @@ def json_serializable(value: Any) -> Any:
                 return value.isoformat()
             return value
 
+        embeddings = self._embedding.embed_documents(list(texts))
+
         with self._client.batch as batch:
             ids = []
             for i, doc in enumerate(texts):
@@ -137,19 +139,18 @@ def json_serializable(value: Any) -> Any:
                         data_properties[key] = json_serializable(metadatas[i][key])
 
                 # If the UUID of one of the objects already exists
-                # then the existing objectwill be replaced by the new object.
+                # then the existing object will be replaced by the new object.
                 if "uuids" in kwargs:
                     _id = kwargs["uuids"][i]
                 else:
                     _id = get_valid_uuid(uuid4())
 
                 if self._embedding is not None:
-                    embeddings = self._embedding.embed_documents(list(doc))
Contributor

Great catch! I wonder if we should just change list(doc) -> [doc] and leave the rest the same. Reason being the current implementation lets us lazily load in texts, whereas calling list(texts) up front would load all of them into memory.
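
For illustration only (not code from this PR), a minimal sketch of what the two spellings pass to embed_documents when doc is a single string, which is the root of the wrong-embeddings bug:

    doc = "hello world"

    # What the old code passed: list() on a string splits it into characters,
    # so embed_documents(list(doc)) embedded individual characters, and the
    # embeddings[0] used as the vector was the embedding of the character 'h'.
    list(doc)   # -> ['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']

    # What the suggested change would pass: a one-element batch per document,
    # keeping the per-iteration (lazy) structure of the loop.
    [doc]       # -> ['hello world']

    # What this commit does instead: embed the whole iterable up front
    # (embeddings = self._embedding.embed_documents(list(texts))) and index
    # into the result with the loop counter, vector=embeddings[i].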

Contributor Author

> Great catch! I wonder if we should just change list(doc) -> [doc] and leave the rest the same. Reason being the current implementation lets us lazily load in texts, whereas calling list(texts) up front would load all of them into memory.

I checked the add_texts methods of all vector stores. Here is what I found:

  1. Twelve vector stores turn the (potentially lazy) texts iterable into a list first and then embed everything at once:
  • chroma
  • pgvector
  • qdrant
  • supabase
  • analyticdb
  • atlas
  • deeplake
  • elastic_vector_search
  • lancedb
  • milvus
  • opensearch_vector_search
  • tair
  2. Four vector stores iterate over texts and embed each text one by one, lazily:
  • faiss
  • redis
  • pinecone
  • myscale

The behaviours are not consistent, but most vector stores simply turn the texts variable into a list first.
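
As a rough sketch of the two patterns listed above, assuming a generic embed_documents(List[str]) -> List[List[float]] callable and a hypothetical insert helper rather than any particular store's API:

    from typing import Callable, Iterable, List

    EmbedFn = Callable[[List[str]], List[List[float]]]

    def insert(doc: str, vector: List[float]) -> None:
        # Placeholder for whatever a given store does with one (text, embedding) pair.
        ...

    def add_texts_eager(texts: Iterable[str], embed_documents: EmbedFn) -> None:
        # Pattern used by the first group: materialize the iterable, embed
        # everything in a single call, then insert.
        docs = list(texts)                      # pulls the whole iterable into memory
        embeddings = embed_documents(docs)      # one embedding call for all documents
        for doc, vector in zip(docs, embeddings):
            insert(doc, vector)

    def add_texts_lazy(texts: Iterable[str], embed_documents: EmbedFn) -> None:
        # Pattern used by the second group: embed one document per iteration,
        # so texts can remain a generator and is never fully materialized.
        for doc in texts:
            vector = embed_documents([doc])[0]  # one embedding call per document
            insert(doc, vector)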

Contributor

Yeah, we definitely haven't been consistent about it, which is on us. And it's possible that for a lot of workloads / embedding methods, embedding all the texts at once is more efficient. But I think I'd prefer not to change behavior in an existing vector store until we've come up with a best practice that we apply everywhere. Meaning in this case a slight preference to keep it lazy for now (otherwise somebody who's using Weaviate today could see their memory usage change on the next update for seemingly no reason).
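
To make the memory concern concrete, a hypothetical caller (not from the PR) that streams documents from disk:

    from typing import Iterator

    def stream_docs(path: str) -> Iterator[str]:
        # Hypothetical generator: yields one document per line of a large file.
        with open(path) as f:
            for line in f:
                yield line.strip()

    docs = stream_docs("corpus.txt")  # nothing is read yet

    # With the lazy pattern, add_texts(docs) reads, embeds, and inserts one line
    # at a time. With the eager pattern, list(texts) inside add_texts reads the
    # entire file into memory before the first embedding call -- the behavior
    # change the comment above wants to avoid shipping silently.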

                     batch.add_data_object(
                         data_object=data_properties,
                         class_name=self._index_name,
                         uuid=_id,
-                        vector=embeddings[0],
+                        vector=embeddings[i],
                     )
                 else:
                     batch.add_data_object(