Hey @shilv520, I'm here to help you with your coding conundrums! Let's squash those bugs and unravel the mysteries of code together. 🤖
A similar issue has been discussed in an open thread where enabling detailed logging was suggested as a way to diagnose the problem further. You can enable verbose logging by using the chatchat-config command with the --verbose option set to true:
chatchat-config basic --verbose true
Additionally, check the logs for any errors reported by xinference, as they might provide more insights into the issue [1].
Regarding your question about not calling the OpenAI API, Langchain-Chatchat can use local models as an alternative to the OpenAI API for generating completions. It supports various local model deployment frameworks like Xinference, Ollama, LocalAI, and FastChat, which allow you to run models locally without relying on external APIs [2][3].
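As a quick way to confirm that the local backend itself is healthy, you can call it directly with an OpenAI-compatible client instead of going through Langchain-Chatchat. The sketch below is a minimal example, assuming the Xinference server at http://127.0.0.1:9997/v1 shown in your logs is running; the model name "qwen-chat" is only a placeholder for whatever model UID you actually launched.

```python
# Minimal sketch: talk to a local Xinference server through its
# OpenAI-compatible /v1 endpoint, without contacting api.openai.com.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:9997/v1",  # local server from the logs above (assumption)
    api_key="not-needed",                  # local servers typically ignore the key
)

resp = client.chat.completions.create(
    model="qwen-chat",  # placeholder: replace with the model UID you launched in Xinference
    messages=[{"role": "user", "content": "Hello"}],
    stream=False,
)
print(resp.choices[0].message.content)
```

If this direct call also fails or streams an error, the problem is on the Xinference side rather than in Langchain-Chatchat, and its logs should show the underlying cause.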
An error occurred during streaming
INFO: 127.0.0.1:57108 - "POST /chat/chat/completions HTTP/1.1" 200 OK
2025-01-16 17:26:43,528 httpx 3854 INFO HTTP Request: POST http://127.0.0.1:7861/chat/chat/completions "HTTP/1.1 200 OK"
2025-01-16 17:26:43,536 httpx 3812 INFO HTTP Request: POST http://127.0.0.1:9997/v1/chat/completions "HTTP/1.1 200 OK"
2025-01-16 17:26:43.547 | ERROR | chatchat.server.api_server.openai_routes:generator:105 - openai request error: An error occurred during streaming
How can this be resolved? Is it possible to avoid calling OpenAI?