[Obs AI Assistant] Make content from Search connectors fully searchable #175434
Pinging @elastic/obs-knowledge-team (Team:obs-knowledge)
If we want to retrieve multiple passages from the same text document, we need to split the document before ingesting and store one document per passage.
Do you mean that we can only select a subset of passages if we split them up into separate documents?
Yes, at least that is my understanding after talking to the AI Search folks. Assuming you have a large document, and you create nested fields for each passage and create embeddings for each passage, a search still returns that document as a single hit. So to get multiple passage hits we need to store multiple documents in ES, which would then let us turn up the number of hits and retrieve several passages.
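To make the nested-passages layout being discussed concrete, here is a sketch of what such a mapping could look like. The index name, field names, and the `sparse_vector` field type are assumptions on my part, not something from this thread:

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function main() {
  // One top-level document; each passage is a nested object carrying its
  // own ELSER token embedding, so passages can be scored individually.
  await client.indices.create({
    index: 'search-demo-chunked', // illustrative index name
    mappings: {
      properties: {
        title: { type: 'text' },
        passages: {
          type: 'nested',
          properties: {
            text: { type: 'text' },
            // ELSER writes token/weight pairs here (`sparse_vector` in
            // 8.11+, `rank_features` on earlier versions).
            tokens: { type: 'sparse_vector' },
          },
        },
      },
    },
  });
}

main().catch(console.error);
```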
@dgieselaar The thing I said above is true for kNN search over dense vectors. So as long as we use ELSER (or rather, some model that produces sparse token weights), we can query the nested passages directly and get multiple passages back from a single document via inner_hits. Example query:
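A sketch of what such a query could look like, reusing the assumed `passages.tokens` field from the mapping above and the standard `.elser_model_2` model ID:

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function main() {
  const response = await client.search({
    index: 'search-demo-chunked', // illustrative index name
    query: {
      nested: {
        path: 'passages',
        query: {
          text_expansion: {
            'passages.tokens': {
              model_id: '.elser_model_2',
              model_text: 'how do I configure the APM agent?',
            },
          },
        },
        // inner_hits returns the individual matching passages, not just
        // the parent document, so several passages per document come back.
        inner_hits: {
          size: 3,
          _source: ['passages.text'],
        },
      },
    },
  });
  console.log(JSON.stringify(response.hits.hits, null, 2));
}

main().catch(console.error);
```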
Pseudo query for multi-model hybrid search:
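A sketch under the same assumptions, using the RRF syntax available around 8.13 to fuse an ELSER leg with a dense-vector kNN leg (the dense model ID and the `passages.vector` field are likewise assumptions):

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function main() {
  const question = 'how do I configure the APM agent?';
  const response = await client.search({
    index: 'search-demo-chunked', // illustrative index name
    // Sparse leg: ELSER tokens queried through the nested passages.
    query: {
      nested: {
        path: 'passages',
        query: {
          text_expansion: {
            'passages.tokens': {
              model_id: '.elser_model_2',
              model_text: question,
            },
          },
        },
      },
    },
    // Dense leg: assumes a passages.vector dense_vector field and a
    // deployed dense embedding model (both assumptions).
    knn: {
      field: 'passages.vector',
      k: 10,
      num_candidates: 50,
      query_vector_builder: {
        text_embedding: {
          model_id: 'sentence-transformers__all-minilm-l6-v2',
          model_text: question,
        },
      },
    },
    // Reciprocal rank fusion merges the two result sets into one ranking.
    rank: { rrf: { window_size: 50, rank_constant: 20 } },
  });
  console.log(response.hits.hits.map((hit) => hit._id));
}

main().catch(console.error);
```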
@miltonhultgren that sounds good AFAICT, do you see any concerns?
KNN supports multiple inner hits in 8.13 🚀 I haven't gotten to really trying these things out yet, but it seems the path is being paved for us here. For this issue I will stick to using ELSER, chunking into a nested object, and querying with a nested text_expansion query.

I have two small concerns for this ticket:

1. What to do when a connector index doesn't have any embeddings (fall back to regular text search?)
2. How to combine the nested semantic query with regular text search in a single query
Number 1 would be in case, for example, there aren't any embeddings in a given connector index.

I'm going to research number 2 next.
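For the 8.13 feature mentioned above: a sketch of a kNN search over nested dense vectors returning multiple passages via inner_hits (the field and model names are the same assumptions as before):

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function main() {
  const response = await client.search({
    index: 'search-demo-chunked', // illustrative index name
    knn: {
      field: 'passages.vector', // assumed nested dense_vector field
      k: 10,
      num_candidates: 50,
      query_vector_builder: {
        text_embedding: {
          model_id: 'sentence-transformers__all-minilm-l6-v2', // assumed model
          model_text: 'how do I configure the APM agent?',
        },
      },
      // New in 8.13: inner_hits on a nested kNN search surfaces each
      // matching passage instead of only the parent document.
      inner_hits: { size: 3, _source: ['passages.text'] },
    },
  });
  console.log(JSON.stringify(response.hits.hits, null, 2));
}

main().catch(console.error);
```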
Sample query combining the nested semantic query with regular text search:
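Presumably something along the lines of this sketch, which pairs the nested ELSER query with a plain match query so that indices without embeddings still match (the `body` field name is an assumption):

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function main() {
  const question = 'how do I configure the APM agent?';
  const response = await client.search({
    index: 'search-*', // the connector indices targeted by this issue
    query: {
      bool: {
        // Either leg can match on its own, so documents without
        // embeddings still come back via plain text search (concern 1).
        should: [
          {
            nested: {
              path: 'passages',
              ignore_unmapped: true, // indices without passages won't error
              query: {
                text_expansion: {
                  'passages.tokens': {
                    model_id: '.elser_model_2',
                    model_text: question,
                  },
                },
              },
              inner_hits: { size: 3, _source: ['passages.text'] },
            },
          },
          { match: { body: question } },
        ],
        minimum_should_match: 1,
      },
    },
  });
  console.log(response.hits.hits.length);
}

main().catch(console.error);
```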
Would it be desired/ideal to perform a single ranked search across text, dense, and sparse vectors, but also across all indices at once, rather than per source (knowledge base, Search connectors in different indices)? What are the trade-offs of that? How would one combine that with "API search", meaning searches that hit an API rather than Elasticsearch? Just thinking out loud here for the future.
@miltonhultgren yes it would be preferable (a single search), but we have different privilege models for the knowledge base versus the Search connector indices.
We're waiting for
Update: This is still blocked by
This will be solved by elastic/obs-ai-assistant-team#162
Just to clarify, this issue is about content ingested via Search connectors, which have their own mappings and ingest pipelines (i.e. different from the knowledge base index). The linked issue makes no mention of changing the Search connector mappings or ingest pipelines, so we'd need to verify whether they are already using semantic_text to generate embeddings.
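For reference, a semantic_text mapping looks roughly like the sketch below; this is the kind of field we'd be checking the connector mappings for. The index name and the inference endpoint ID are illustrative:

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function main() {
  await client.indices.create({
    index: 'search-demo-semantic', // illustrative index name
    mappings: {
      properties: {
        body: {
          // semantic_text performs chunking and embedding generation on
          // ingest; inference_id points at a deployed endpoint (assumed).
          type: 'semantic_text',
          inference_id: 'my-elser-endpoint',
        },
      },
    },
  });
}

main().catch(console.error);
```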
Today, if we ingest a large piece of text into a Knowledge base entry, only the first 512 word pieces are used to create the embeddings that ELSER matches on during semantic search.
This means that if the relevant parts for a query are not at the "start" of this big text, the document won't match, even though there may be critical information at the end of the text.
We should attempt to apply chunking to all documents ingested into the Knowledge base so that the recall search has a better chance of finding relevant hits, regardless of document size.
As a stretch goal, it would also be valuable to extract only the relevant chunk (512 word pieces?) from the matched document, in order to send less (and only relevant) text to the LLM.
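A minimal sketch of the kind of chunking this could mean, using plain word counts as a rough proxy for word pieces; the window sizes and the nested `passages` shape are assumptions (the elasticsearch-labs notebooks linked below show more complete strategies):

```ts
// Split a large text into overlapping word windows so that no chunk
// exceeds what ELSER can embed; whitespace word counts are used here as
// a rough stand-in for the 512 word-piece limit.
function chunkText(text: string, maxWords = 300, overlapWords = 50): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  for (let start = 0; start < words.length; start += maxWords - overlapWords) {
    chunks.push(words.slice(start, start + maxWords).join(' '));
    if (start + maxWords >= words.length) break;
  }
  return chunks;
}

// Each chunk becomes one nested passage, ready for per-passage inference
// (e.g. an ELSER ingest pipeline writing into passages.tokens).
const doc = {
  title: 'Some long runbook',
  passages: chunkText('…a large body of text…').map((text) => ({ text })),
};
console.log(doc.passages.length);
```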
AC

- Chunking is applied to documents ingested into the Knowledge base
- Recall works across `search-*` indices as well

More resources on chunking: https://github.com/elastic/elasticsearch-labs/tree/main/notebooks/document-chunking