feature: add streaming response in production env. #27
Comments
Thanks for the feedback. I am aware that in production the response is not being streamed right now, and I am working on it. As for the other issue of the chat bubble being black, can you elaborate? Does it only happen when there is too much content?
What I mean is that it's the same issue. The AI's word-by-word generation is not shown, so users can only wait for all of the content to be generated before seeing the result.
This is something that was bugging me as well for #50; not a showstopper, though. It probably has to do with the post-processing of the LangChain.js stream in React.
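For context, here is a minimal sketch of what a streaming route handler could look like in a Next.js App Router app talking to Ollama's `/api/chat` endpoint. This is illustrative, not the project's actual code: the model name, endpoint URL, and handler shape are assumptions. The key points are opting out of static optimization (a common reason streaming works in `dev` but not in a production build) and forwarding each token as it arrives instead of buffering the whole reply:

```ts
// app/api/chat/route.ts — hypothetical sketch, not the repo's implementation
export const runtime = "nodejs";
export const dynamic = "force-dynamic"; // avoid static optimization buffering the response in prod

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Ollama streams newline-delimited JSON chunks when stream: true.
  const upstream = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mistral", messages, stream: true }),
  });

  const encoder = new TextEncoder();
  const decoder = new TextDecoder();

  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      const reader = upstream.body!.getReader();
      let buf = "";
      for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        buf += decoder.decode(value, { stream: true });
        const lines = buf.split("\n");
        buf = lines.pop() ?? ""; // keep any partial JSON line for the next chunk
        for (const line of lines) {
          if (!line.trim()) continue;
          // Each line looks like: {"message":{"role":"assistant","content":"..."},"done":false}
          const token = JSON.parse(line)?.message?.content ?? "";
          if (token) controller.enqueue(encoder.encode(token));
        }
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```

If the stream still arrives all at once in production, a reverse proxy buffering the response (e.g. nginx without `proxy_buffering off`) is another common culprit worth checking.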
@jakobhoeg Any idea when this will be worked on? This would probably bring the most benefit to users in the current state. If you host Ollama on a slow machine, and worse, one without a graphics card, generation can be very slow. With streaming you at least see that something is happening.
I've been very busy with other stuff these past few weeks, but I'll try to make it a priority and take a look at it this weekend.
I built the app, and the chat box cannot dynamically display one word at a time, but when I `run dev`, it works.
When there is too much content, the chat box stays black and waits for a long time.
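For reference, a minimal client-side sketch of the behavior being asked for: the chat box appending tokens as they arrive rather than waiting for the full reply. The `useStreamingReply` hook and the `/api/chat` path are hypothetical and assume the route handler above streams plain text:

```ts
"use client";

import { useState } from "react";

// Hypothetical hook: reads the streamed response body chunk by chunk and
// appends each decoded piece to state, so the UI re-renders word by word.
export function useStreamingReply() {
  const [reply, setReply] = useState("");

  async function send(messages: { role: string; content: string }[]) {
    setReply("");
    const res = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
    });
    const reader = res.body!.getReader();
    const decoder = new TextDecoder();
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      setReply((prev) => prev + decoder.decode(value, { stream: true }));
    }
  }

  return { reply, send };
}
```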