SimpleChat: Simple histogram/repeatMatching driven garbageTrimming, Settings UI, Streaming mode, OpenAi Compat (Model, Authorization Bearer), Save/Restore session, Auto Settings UI #7548
Conversation
Streaming support has been added, so that the end user can view the ai-model response as it is being generated, instead of having to wait until the end of generation. The user can toggle between stream and oneshot mode in settings. As part of the same change, the code has been cleaned up and restructured to better match the flow, and the helpers for handling the server response have been moved into the SimpleChat class itself.
Ah, good to hear. Is this still in draft?
The current commits were an initial check wrt streaming, which seems to work well enough, and also gave a reason to consolidate response handling into the SimpleChat class itself to make the overall structure cleaner. However, I may clean up the multipart/stream handling a bit, because I am not sure that under all circumstances across all sw stacks (os+lib+browser) the data will arrive and bubble up with line/data-block boundaries as cleanly maintained as what I seem to be seeing on my machine. At core I am not a web developer and rather keep jumping between Sys / Hw / Sw (asm-highlevel, preboot-apps), so I don't have enough data on how well behaved the different JS implementations are across browsers and platforms, but I can think of enough reasons why the clean boundaries may not always be maintained, so I may decouple that aspect a bit (in some ways similar to ChatParts in my ChatOn chat-templating PR).
Made streaming mode data line handling more robust. Added support in the settings ui for changing the server base url (including port) and for openai/equivalent compat fields (model, authorization bearer), along with the corresponding enabling logic.
Very long text generation can mean there is no user interaction for a while, so the machine may go into power saving mode, which on some platforms stops the network connection and raises a corresponding exception. The logic now traps this in streaming mode, so that text generated up to that moment is not lost.
Added support for saving and restoring chat sessions across browsing sessions, using the browser's localStorage.
@hanishkvc ah, editorconfig caught some trailing whitespace you should delete.
(force-pushed 47214ea to 48fc60e)
Hi @mofosyne, I am a bit surprised by that editorconfig failure. It wasn't clear which readme file had the issue, nor the line number. I cross-checked both of the readmes I had modified and didn't notice any issue. I also created a simple script to auto-check all files in this PR, and it didn't flag any ' ' or '\t' at the end of lines either. So I have rebased the code onto the latest upstream/master (ggerganov/llama.cpp) and force pushed for now.
@hanishkvc if you are getting strange stuff like that readme.md editorconfig error from things you didn't edit, don't forget to check the main branch. I'm guessing the [no ci] policy we recently encouraged is letting documentation lints slip through, so I'll stop promoting that policy and see if there is at least a way to reduce the time taken to process documentation-only commits. FYI: the latest branch seems to have failing CIs, so keep going, as it might not be on your side. I'll make an assessment when ready as to whether it's a CI issue on your side or on master.
Hi @mofosyne @ngxson @ggerganov There seems to be a bug in the way CI testing is set up wrt the server. The examples/server folder's deps.sh updates public/index.js from the net, while at the same time the index.js which is bundled by default is committed as part of the git repo. As of now, one or more of the js modules downloaded by deps.sh has apparently been updated on the net, so the CI test is failing, because the index.js it creates won't match what is committed to git. So either the committed index.js needs to be updated, or better still this whole CI test structure needs to be rethought, because just updating index.js from the net and committing it into the git repo won't stop the same issue from occurring in future whenever any of those js modules gets updated on the net, which is outside this project's control.
@hanishkvc ah, if you are talking about js ci issues, gg thinks it's something upstream. He's considering #7670 as the PR to fix it.
Hi @mofosyne, now that the issue with ci is fixed, those 4 failing ci tests should pass. This is ready for merging.
rerunning "Server / server (ADDRESS, RelWithDebInfo) (pull_request_target) In progress — This check has started..." test as it failed, just in case it's something else. Error doesn't make sense. If failed again, try rebasing against last known functional CI commit in main branch (a323ec6) |
Ah, I see it's failing at the same position @hanishkvc, and it is actually the same error that gg already corrected in the main branch. Please rebase onto a323ec6.
Use it to bring in a simple trim-garbage-at-end logic, which is used to trim the received response. Also, given that importmap assumes esm / standard js modules, global variables aren't implicitly available outside the modules, so add it as a member of document for now.
Don't try utf8-decode and newlines-add_append if there is no data to work on. If there is no more data to get (ie done is set), then let the NewLines instance return the line without a newline at the end, so that we don't miss out on any last-data-line-without-newline kind of scenario. Pass the stream flag wrt utf-8 decode, so that if any multi-byte char is only partly present in the passed buffer, it can be accounted for along with the subsequent buffer. At the same time, because of utf-8's characteristics there shouldn't be any unaccounted bytes at the end for a valid block of utf8 data split across chunks, so not bothering to call with stream set to false at the end. LATER: look at TextDecoder's implementation for any over-intelligence it may be doing; if needed, one can use the done flag to account for both cases.
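A minimal sketch of that stream-aware decode plus line handling (names here are illustrative, not the PR's actual helpers):

```js
// Decode streamed utf-8 chunks and split them into lines, holding back a
// partially received line (or multi-byte char) until more data arrives.
const decoder = new TextDecoder();
let pending = "";

function addChunk(chunkBytes, done) {
    if (chunkBytes && chunkBytes.length > 0) {
        // stream:true lets a multi-byte char split across chunks be completed later
        pending += decoder.decode(chunkBytes, { stream: true });
    }
    const lines = pending.split("\n");
    // Keep the last (possibly partial) line back, unless the stream is done,
    // so a final data line without a trailing newline is not lost.
    pending = done ? "" : lines.pop();
    return lines.filter((l) => l.length > 0);
}
```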
This should allow easy updating of the base url at runtime by the end user.
This ensures that if the user is running the server on a different port, or wants to try connecting to a server on a different machine, this can be used.
In turn, allow the Authorization header to be sent, if not empty.
Use it to set the placeholder wrt the Authorization header. Also fix a copy-paste oversight.
May help testing with openai/equivalent web services, if they require this field.
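A small sketch of the idea (field names are illustrative, not the PR's exact settings keys):

```js
// Settings the user can edit in the ui; the bearer token is only attached
// when non-empty, so a local llama.cpp server needs no extra setup.
const settings = {
    baseURL: "http://127.0.0.1:8080",
    model: "gpt-3.5-turbo",   // used by openai/equivalent services that require it
    bearer: "",               // API key; left empty for a local server
};

const headers = { "Content-Type": "application/json" };
if (settings.bearer.trim() !== "") {
    headers["Authorization"] = `Bearer ${settings.bearer}`;
}
```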
This helps ensure that data fetched till that point can be made use of, rather than being lost. On some platforms, the time taken to generate a long response may lead to the network connection being broken when the machine enters some no-user-interaction power saving mode.
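A rough sketch of how the streamed text can be collated so that a dropped connection still leaves the partial response usable (SSE data-line parsing omitted for brevity; names are illustrative):

```js
// Read a streamed fetch() response and keep whatever arrived before any failure.
async function readStreamed(response) {
    const decoder = new TextDecoder();
    let collated = "";
    try {
        const reader = response.body.getReader();
        while (true) {
            const { value, done } = await reader.read();
            if (done) break;
            collated += decoder.decode(value, { stream: true });
        }
    } catch (err) {
        // Connection dropped mid-generation (e.g. power saving kicked in):
        // keep whatever was collated so far instead of losing it.
        console.warn("stream interrupted, keeping partial response", err);
    }
    return collated;
}
```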
When the response handling was moved into SimpleChat, I had unnecessarily and carelessly changed a bit of the flow, which resulted in the non-trim flow failing to retain the ai assistant response. This has been fixed now.
This ensures that re-throwing the caught exception for higher-up logic doesn't lose the response collated till that time. Go through theResp.assistant in the catch block, just to keep simple consistency wrt backtracing, just in case. Update the readme file.
This allows the settings ui to be structured more cleanly.
Convert SystemPrompt into a textarea with 2 rows. Reduce the user-input textarea from 3 rows to 2, so that overall vertical space usage remains the same. Shorten the usage messages a bit and clean them up to sync with the settings ui.
In turn, whenever a chat message (system/user/model) is added, the chat will be saved into the browser's localStorage.
This is a temporary flow
This also allows the common system prompt ui element to be set to the loaded chat's system prompt.
(force-pushed 5831ab5 to c4141a5)
garbage trimming
Given the limited context size of local LLMs, many a time when the context gets filled up between the prompt and the response, it can lead to generation of repeating text garbage. And many a time, setting a repetition penalty leads to over-intelligent garbage repetition with slight variations. This garbage in turn overloads the available model context, leading to less valuable responses for subsequent prompts/queries, if chat history is sent to the ai model.
So two simple-minded garbage trimming logics are tried.
The char-histogram driven one is a bit more flexible in that it allows for some variation in the repetition. It tracks the chars and their frequency over a specified length of substring at the end of the generated text, and then checks whether moving further back from the end remains within the same char subset or goes beyond it; based on that it either trims the string at the end or not. This allows garbage at the end to be filtered out, even if there are certain kinds of small variations in the repeated text wrt the position of the seen chars.
The repeat-matching based trimming can be let loose on longer substring-based probing, given its more well-defined characteristics.
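A naive sketch of the histogram-driven idea (not the PR's exact algorithm; the names and thresholds below are made up for illustration):

```js
// Build the char set of the last probeLen chars, walk backwards while the
// text stays within that set, and trim only if that repeating-char stretch
// extends well beyond the probe window (i.e. it really looks like garbage).
function trimGarbageHistogram(text, probeLen = 32, minGarbage = 256) {
    if (text.length <= probeLen + minGarbage) return text;
    const tailSet = new Set(text.slice(-probeLen));
    let cut = text.length - probeLen;
    while (cut > 0 && tailSet.has(text[cut - 1])) {
        cut -= 1;
    }
    const garbageLen = text.length - cut;
    return garbageLen >= minGarbage ? text.slice(0, cut) : text;
}
```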
settings ui
A simple ui is added to change some of the behaviour without needing to open the browser's devel-tool/console.
Setting the option ChatHistoryInCtxt to Last0 stops any chat history from being sent to the server/ai-model, thus ensuring that the response is based purely on the set system-prompt and the current query.
Keeping ChatHistoryInCtxt at the default "Last1" mode and bTrimGarbage in the default enabled state can often allow the user to recover/continue a previous large response that had garbage at the end, from the point where the garbage was automatically removed, by asking the ai to continue the last response with something like "please continue".
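As a rough illustration of how such a LastN style setting could limit what gets sent (assuming messages is an array of { role, content } objects with the current user query last; this is not the PR's actual code):

```js
// Last0 -> only the system prompt plus the current query;
// LastN -> additionally the last N user/assistant exchanges.
function recentChat(messages, systemPrompt, lastN) {
    const tail = lastN > 0 ? messages.slice(-(2 * lastN + 1)) : messages.slice(-1);
    return [{ role: "system", content: systemPrompt }, ...tail];
}
```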
Streaming mode
Allow the user to choose between oneshot (shown at the end) and streamed (shown as it is being generated) viewing of the ai-model's generated text response. Streaming mode pushes more packets over the network, but for long responses it lets the user view the response as it becomes available, instead of having to wait till the end of response generation.
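A rough sketch of the toggle, assuming an OpenAI-style /chat/completions endpoint and reusing the readStreamed sketch shown earlier; SSE data-line parsing is skipped and the names are illustrative:

```js
// Send one chat request; in oneshot mode parse the final JSON, in streaming
// mode collate the chunks as they arrive.
async function queryModel(baseURL, body, stream) {
    const resp = await fetch(`${baseURL}/chat/completions`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ ...body, stream }),
    });
    if (!stream) {
        const data = await resp.json();          // oneshot: one final payload
        return data.choices[0].message.content;
    }
    return readStreamed(resp);                   // streamed: partials as they arrive
}
```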
OpenAi Compat
A basic skeleton is implemented to chat with an openai/equivalent (including llama.cpp's) server, at a basic level.
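For reference, the general shape of an OpenAI-style chat request body (values purely illustrative; a local llama.cpp server typically just serves whatever model it has loaded, while hosted services require the model field):

```js
const body = {
    model: "gpt-3.5-turbo",   // fed from the new model setting
    messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user",   content: "Hello!" },
    ],
};
```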
Save/Restore
Auto-saves the chat session locally using the browser's localStorage as the chat is occurring. In turn, on a fresh start, the option is given to restore a previously saved corresponding chat session.
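A simple sketch of such persistence via localStorage; the actual key naming and serialised shape used by the PR may differ:

```js
// Persist the message list per chat id, and read it back on a fresh start.
function saveChat(chatId, messages) {
    localStorage.setItem(`SimpleChat-${chatId}`, JSON.stringify(messages));
}

function restoreChat(chatId) {
    const raw = localStorage.getItem(`SimpleChat-${chatId}`);
    return raw ? JSON.parse(raw) : null;   // null when there is nothing to restore
}
```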
ChatRequestOptions auto Settings UI
String/numeric fields (including any added by the user at runtime) in gMe.chatRequestOptions automatically get entries in the Settings UI.
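An illustrative sketch of generating the inputs from the object's fields; the element creation details here are assumptions, not the PR's exact code:

```js
// Create one labelled input per string/number field and write edits back,
// preserving the original field's type.
function buildOptionsUI(container, chatRequestOptions) {
    for (const [key, value] of Object.entries(chatRequestOptions)) {
        if (typeof value !== "string" && typeof value !== "number") continue;
        const label = document.createElement("label");
        label.textContent = key;
        const input = document.createElement("input");
        input.value = value;
        input.addEventListener("change", () => {
            chatRequestOptions[key] = typeof value === "number"
                ? Number(input.value) : input.value;
        });
        label.appendChild(input);
        container.appendChild(label);
    }
}
```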
cleanup/structure
Move to a multi-file js code structure, so that some of the helpers can live in their own files. Also bring a bit more of the request and response handling into the SimpleChat class itself.
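A sketch of that split (the file name below is an assumption; NewLines is the line-reassembly helper mentioned in the commit messages):

```js
// datautils.mjs: helpers exported as a standard ES module, since importmap
// assumes esm and globals are no longer implicitly shared between files.
export class NewLines {
    constructor() { this.pending = ""; }
    // ...line (re)assembly helpers...
}
```

The main script then pulls it in with `import { NewLines } from "./datautils.mjs";` instead of relying on implicit globals.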