Please use the 👍 reaction to show that you are interested in the same feature.
Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
Subscribe to receive notifications on status change and new comments.
Feature request
Which Nextcloud Version are you currently using:
Nextcloud 30.0.4
OpenAI and LocalAI integration 3.4.0
Talk 20.1.3
Assistant Talk Bot 3.0.1
The Assistant's batch process is slow to reply.
Even though the current Assistant Talk Bot runs as a daemon in a Docker container, the container only registers a batch-processing task of type ai_talk_bot_process_request.
That task is stored in Nextcloud's oc_taskprocessing_tasks table, and no reply is sent until the task is picked up by cron.
Only then is the message sent to the LLM and the answer posted back to /message?reply_to=XXX&token=TOKEN.
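For context, here is a minimal sketch of what posting that reply looks like against the documented Talk bot API: requests are signed with HMAC-SHA256 over a random value concatenated with the message text, keyed with the bot's shared secret. The NC_URL and BOT_SECRET names and the send_reply helper are placeholders for illustration, not the container's actual code:

```python
# Hedged sketch: posting a signed bot reply straight to Talk's bot API.
import hashlib
import hmac
import os
import secrets

import requests

NC_URL = "https://cloud.example.com"   # assumption: your Nextcloud base URL
BOT_SECRET = os.environ["BOT_SECRET"]  # shared secret from the bot registration

def send_reply(token: str, message: str, reply_to: int) -> None:
    """Post a signed reply into the conversation identified by `token`."""
    random_value = secrets.token_hex(32)
    # Talk bots sign the random value plus the message text with
    # HMAC-SHA256, keyed with the shared secret.
    signature = hmac.new(
        BOT_SECRET.encode(), (random_value + message).encode(), hashlib.sha256
    ).hexdigest()
    response = requests.post(
        f"{NC_URL}/ocs/v2.php/apps/spreed/api/v1/bot/{token}/message",
        json={"message": message, "replyTo": reply_to},
        headers={
            "X-Nextcloud-Talk-Bot-Random": random_value,
            "X-Nextcloud-Talk-Bot-Signature": signature,
            "OCS-APIRequest": "true",
        },
        timeout=30,
    )
    response.raise_for_status()
```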
Faster replies by responding directly instead of going through the batch process.
If the Docker container (ghcr.io/nextcloud/talk_bot_ai) were modified to call the AI service directly and reply to the message itself, there would be no need to wait for the cron task to run; see the sketch below.
I believe there is room for improvement in this AI Talk Bot.
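As a rough sketch of the proposed change, the webhook handler could call an OpenAI-compatible chat-completions endpoint directly and post the answer at once, reusing the hypothetical send_reply helper from the sketch above. The LLM_URL and LLM_MODEL environment variables and the handle_webhook signature are assumptions, not existing configuration:

```python
# Hedged sketch of the proposal: answer immediately, skipping
# oc_taskprocessing_tasks and the cron-driven batch process.
import os

import requests

LLM_URL = os.environ.get("LLM_URL", "http://localai:8080/v1/chat/completions")  # assumed endpoint
LLM_MODEL = os.environ.get("LLM_MODEL", "gpt-3.5-turbo")  # assumed model name

def handle_webhook(token: str, message_id: int, user_text: str) -> None:
    """Answer an incoming Talk message directly, without queueing a Nextcloud task."""
    completion = requests.post(
        LLM_URL,
        json={
            "model": LLM_MODEL,
            "messages": [{"role": "user", "content": user_text}],
        },
        timeout=120,
    )
    completion.raise_for_status()
    answer = completion.json()["choices"][0]["message"]["content"]
    # Reply immediately instead of waiting for cron to execute the queued task.
    send_reply(token, answer, reply_to=message_id)  # helper from the sketch above
```

This would leave the bot's webhook registration untouched; only the part that currently enqueues an ai_talk_bot_process_request task would be replaced by the direct call.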