feat(rag): Auto-RAG #2301
Comments
Maybe we could add an example to showcase how to use Katib and LlamaIndex for Auto-RAG. Not sure if there is any new feature to be implemented.
Are you thinking of adding an example that uses the proposed tuning API for LLMs to demonstrate Auto-RAG?
@tariq-hasan It should work. But I do not have the bandwidth for it. I'm simply presenting the idea for consideration at this point.
Thanks for creating this @gaocegege.
The workflow should be similar, I think. We could build a demo based on LlamaIndex to see if there is anything we are missing.
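For reference, here is a rough sketch of what such a demo's trial objective could look like. It assumes LlamaIndex's `SentenceSplitter` / `VectorStoreIndex` APIs and Katib's default stdout metrics collection; the data path, the toy QA pairs, and the metric name `answer_quality` are hypothetical placeholders, not an agreed design.

```python
# Rough sketch of a trial objective for an Auto-RAG demo. Assumes LlamaIndex's
# SentenceSplitter / VectorStoreIndex APIs; the data path and the toy QA pairs
# below are hypothetical placeholders.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter


def objective(parameters):
    # Hyperparameters suggested by Katib for this trial.
    chunk_size = int(parameters["chunk_size"])
    top_k = int(parameters["similarity_top_k"])

    # Build the index with this trial's chunking configuration.
    documents = SimpleDirectoryReader("./data").load_data()
    splitter = SentenceSplitter(chunk_size=chunk_size)
    index = VectorStoreIndex.from_documents(documents, transformations=[splitter])
    query_engine = index.as_query_engine(similarity_top_k=top_k)

    # Toy evaluation: fraction of questions whose expected keyword appears in
    # the generated answer (a real demo would use a proper RAG evaluator).
    qa_pairs = [("What does Katib tune?", "hyperparameters")]
    hits = sum(
        1 for question, keyword in qa_pairs
        if keyword.lower() in str(query_engine.query(question)).lower()
    )
    score = hits / len(qa_pairs)

    # Katib's stdout metrics collector parses "name=value" lines.
    print(f"answer_quality={score}")
```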
Hi!
Nice to meet you @vkehfdl1!
Hi @andreyvelich, nice to meet you. It will be hard to attend the community call today because of the timezone; it is 2:00 a.m. here. Thanks!
Sure, that sounds great! I added you to the meeting agenda on July 24th.
Hi @vkehfdl1, just a reminder that our community call starts in 10 minutes, if you want to give the AutoRAG demo.
/area llm
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
/remove-lifecycle stale
/kind feature
Ref https://arxiv.org/pdf/2404.01037.pdf
RAG involves several hyperparameters, e.g. chunking strategies and window sizes for sentence-window retrieval. Tuning them should be done automatically.
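To make this concrete, a minimal sketch of how these hyperparameters could be searched with the kubeflow-katib SDK's `tune()` API, assuming a trial function like the `objective` sketch earlier in the thread and the hypothetical metric name `answer_quality`:

```python
# Minimal sketch, assuming the kubeflow-katib Python SDK and an `objective`
# trial function that prints "answer_quality=<value>" to stdout.
import kubeflow.katib as katib

client = katib.KatibClient()
client.tune(
    name="auto-rag-demo",
    objective=objective,
    parameters={
        # Chunking and retrieval hyperparameters to explore.
        "chunk_size": katib.search.int(min=128, max=1024),
        "similarity_top_k": katib.search.int(min=1, max=10),
    },
    objective_metric_name="answer_quality",
    objective_type="maximize",
    algorithm_name="bayesianoptimization",
    max_trial_count=12,
    parallel_trial_count=3,
)
```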
Love this feature? Give it a 👍 We prioritize the features with the most 👍