
feat(rag): Auto-RAG #2301

Open
gaocegege opened this issue Apr 10, 2024 · 14 comments
Labels
area/llm (LLMs related content), kind/feature

Comments

@gaocegege
Member

/kind feature

Ref https://arxiv.org/pdf/2404.01037.pdf

Auto-RAG: The idea of automatically optimizing RAG systems, akin to Auto-ML’s approach in traditional machine learning, presents a significant opportunity for future exploration. Currently, selecting the optimal configuration of RAG components — e.g., chunking strategies, window sizes, and parameters within rerankers — relies on manual experimentation and intuition. An automated system could systematically explore a vast space of RAG configurations and select the very best model (Markr.AI, 2024).

RAG involves several hyperparameters, e.g. chunking strategies and window sizes for sentence-window retrieval. Selecting them should be done automatically.
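
For illustration, here is a rough sketch of where these knobs sit in a sentence-window retrieval pipeline (LlamaIndex API assumed; the `./data` path and default values are illustrative, not part of any existing example):

```python
# Illustrative sketch only: exposes the RAG hyperparameters mentioned above
# as function arguments so an outer tuner can search over them.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceWindowNodeParser


def build_rag_pipeline(window_size: int = 3, similarity_top_k: int = 5):
    # window_size: how many surrounding sentences are attached to each
    # retrieved sentence (the sentence-window retrieval hyperparameter).
    parser = SentenceWindowNodeParser.from_defaults(window_size=window_size)
    docs = SimpleDirectoryReader("./data").load_data()  # placeholder corpus path
    nodes = parser.get_nodes_from_documents(docs)
    index = VectorStoreIndex(nodes)
    # similarity_top_k: how many candidate chunks the retriever returns.
    return index.as_query_engine(similarity_top_k=similarity_top_k)
```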


Love this feature? Give it a 👍 We prioritize the features with the most 👍

@gaocegege
Member Author

Maybe we could add an example to showcase how to use Katib and LlamaIndex for Auto-RAG.

Not sure if there is any new feature to be implemented.
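
Something along these lines, using Katib's Python SDK `tune()` API (rough, untested sketch: `build_rag_pipeline` is the hypothetical helper sketched in the issue description, `evaluate_rag` is a hypothetical scorer, and in a real Trial both would have to be defined inside the objective or baked into the Trial image):

```python
import kubeflow.katib as katib


def objective(parameters):
    # Each Trial builds the RAG pipeline with the suggested hyperparameters,
    # evaluates it, and prints the score in the "name=value" form that
    # Katib's stdout metrics collector parses.
    engine = build_rag_pipeline(
        window_size=parameters["window_size"],
        similarity_top_k=parameters["similarity_top_k"],
    )
    score = evaluate_rag(engine)  # hypothetical retrieval/answer-quality metric
    print(f"rag_score={score}")


client = katib.KatibClient(namespace="kubeflow")
client.tune(
    name="auto-rag",
    objective=objective,
    parameters={
        "window_size": katib.search.int(min=1, max=6),
        "similarity_top_k": katib.search.int(min=2, max=10),
    },
    objective_metric_name="rag_score",
    max_trial_count=20,
    parallel_trial_count=2,
)
```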


@tariq-hasan
Contributor

Are you thinking of adding an example that uses the proposed tuning API for LLMs to demonstrate Auto-RAG?

@gaocegege
Member Author

@tariq-hasan It should work. But I do not have the bandwidth for it. I'm simply presenting the idea for consideration at this point.

@andreyvelich
Member

Thanks for creating this @gaocegege.
Are there any differences in optimizing these HPs for RAG (e.g. chunking strategies and window sizes) compared to our current optimization flow with Experiment -> Suggestion -> Trials?
I guess Trials can consume a prompt and produce the metrics.

@gaocegege
Member Author

The workflow should be similar, I think. We could make a demo based on LlamaIndex to see if there is anything we are missing.

@vkehfdl1

vkehfdl1 commented Jul 9, 2024

Hi!
I'm the developer of AutoRAG.
Are you still interested in implementing AutoRAG or using it? Or in making a demo for this?
We are open to any kind of collaboration.

@andreyvelich
Member

Nice to meet you @vkehfdl1!
Sure, that would be great. Maybe you can attend one of our upcoming AutoML and Training WG community calls to give a demo, and we can discuss how we can collaborate.
cc @kubeflow/wg-training-leads

@vkehfdl1

Hi @andreyvelich Nice to meet you.

First, it will be hard to attend the community call today because of the timezone; it is 2:00 a.m. here.
The other community call two weeks later at 2:00 UTC would work for me, or we can book another call.

Thanks!

@andreyvelich
Member

andreyvelich commented Jul 10, 2024

Sure, that sounds great! I added you to the meeting agenda on July 24th.

@andreyvelich
Member

Hi @vkehfdl1, just a reminder that our community call starts in 10 minutes, if you want to give an AutoRAG demo.

@andreyvelich
Member

/area llm

google-oss-prow bot added the area/llm (LLMs related content) label on Aug 21, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@andreyvelich
Member

/remove-lifecycle stale
