This project integrates an LLM chatbot with document loading and caching, letting users hold AI-powered conversations while the app fetches relevant documents from the web or from local directories.
Topics: concurrency-management, redis-caching, asyncio-python, document-loading, ollama-embeded-chat, llm-api-integration, ai-powered-research-tools
Updated Sep 18, 2024 · Python