[FEATURE] Integrate a chat bot in the community website #1204
Comments
Why OpenAI? Thanks for opening the issue; there are multiple folks in the community talking about the same thing. Basically, this is redefining a concept that we worked on before the ChatGPT AI boom -> https://github.com/asyncapi/chatbot. Pinging folks that started talking about it in Slack already. Regarding training: we should not just train the bot on data from the docs but also:
https://github.com/janhq/jan — this is an open-source alternative to OpenAI.
@derberg I thought we didn't have any chat bot before, thanks. Just a doubt: I found an issue about archiving the repo, asyncapi/chatbot#96. Why do we need to archive it? Can we continue working on it?
Yeah, its current shape (correct me if I'm wrong, @AceTheCreator) is subject to a complete rework. Even the existence of the repo ... maybe we don't need a separate repo; maybe the work can be done in some other existing repo. No strong opinion.
My experience with LLMs and GenAI is that, unless absolutely necessary, you shouldn't train your own model. It's costly and resource-consuming. Unless we need the model to learn some really specific patterns in the data (like doing specialized tasks such as categorizing emotions in reviews), we don't need to train a new model. Plus, our community always has new data (discussions, new PRs, new issues) coming in, so training a model is not preferable since it will not know any information beyond the data used to train it.

I think using RAG might be preferable (https://research.aimultiple.com/retrieval-augmented-generation/). We could use a vector database to store the Slack chat history, issues and PRs on GitHub, the specifications, and other documents. When a user or developer asks a question, the question would be embedded and used to search the vector database. The retrieved data would then be passed to the LLM to generate a response.

Although the workflow is clear, there are a few questions that we should discuss:
Some other potentially useful resources that I have found: BentoML; a tutorial on Stack Overflow; Slack Chat Bot with LLM. I am happy to work on this project :). Let me know if any of you have any thoughts! Let's discuss!
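The retrieval-augmented workflow described in the comment above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the project's implementation: the `VectorStore` class, the bag-of-words `embed` function, and the `answer` helper are hypothetical stand-ins. A real system would use a proper embedding model, a real vector database, and an actual LLM call in place of these toys.

```python
import math
from collections import Counter


def embed(text):
    """Toy bag-of-words 'embedding' (word counts).

    A real system would call an embedding model here.
    """
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class VectorStore:
    """Hypothetical in-memory stand-in for a vector database."""

    def __init__(self):
        self.docs = []  # list of (embedding, original_text) pairs

    def add(self, text):
        self.docs.append((embed(text), text))

    def search(self, query, k=2):
        """Return the k stored documents most similar to the query."""
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]


def answer(question, store):
    """Build the prompt a real system would send to an LLM.

    Here we just return the assembled prompt instead of calling a model.
    """
    context = store.search(question)
    return "Context:\n" + "\n".join(context) + f"\nQuestion: {question}"
```

The flow matches the comment: ingest documents (Slack history, issues, PRs, specs) into the store, embed the incoming question, retrieve the closest documents, and hand them to the LLM as context for generation.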
This issue has been automatically marked as stale because it has not had recent activity 😴 It will be closed in 120 days if no further activity occurs. To unstale this issue, add a comment with a detailed explanation. There can be many reasons why some specific issue has no activity. The most probable cause is lack of time, not lack of interest. AsyncAPI Initiative is a Linux Foundation project not owned by a single for-profit company. It is a community-driven initiative ruled under an open governance model. Let us figure out together how to push this issue forward. Connect with us through one of the many communication channels we established here. Thank you for your patience ❤️
Why do we need this improvement?
We can start integrating AI tools into our organisation, beginning with a chat bot that improves the contributor experience across the organisation. Later, we can train the bot on our documentation.
How will this change help?
Screenshots
No response
How could it be implemented/designed?
🚧 Breaking changes
No
👀 Have you checked for similar open issues?
🏢 Have you read the Contributing Guidelines?
Are you willing to work on this issue?
Yes I am willing to submit a PR!