Releases: alexpinel/Dot
🔵 TTS, WHISPER, SETTINGS, AND MUCH MORE!!! 🔵
Hello!
Made some changes and added a few features! As always, Dot can be installed through the website, or the binaries can be found in this very release. Please note that the Windows update will release in a few days.
🦔 FEATURES:
- Added whisper.cpp: you can now use your voice to interact with Dot!
- Added text-to-speech! A TTS model can now read the LLM-generated answers aloud. TTS functionality is handled via sherpa-onnx; the default TTS model is a really cool GLaDOS voice. Changing the voice model is not fully supported yet, but I can provide instructions on how to do that if anyone is interested.
- Brand new settings button! I really regret not having done this before, but users can now change the LLM settings in multiple ways: things like context length, max tokens, and the default prompt can now be easily adjusted.
- LLM selection! It is now possible to use Dot with your local LLM of choice; any GGUF model should work without issues!
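Since any GGUF model should work, one quick sanity check before pointing Dot at a file is to verify the GGUF header. Per the GGUF format, a valid file starts with the 4-byte magic `GGUF` followed by a little-endian uint32 version. The `gguf_version` helper below is a minimal sketch, not part of Dot's actual code:

```python
import os
import struct
import tempfile

GGUF_MAGIC = b"GGUF"

def gguf_version(path):
    """Return the GGUF format version, or None if the file is not GGUF."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != GGUF_MAGIC:
        return None
    # The 4 bytes after the magic hold the format version (little-endian).
    (version,) = struct.unpack("<I", header[4:8])
    return version

# Demo with a fake header: magic "GGUF" + version 3.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(GGUF_MAGIC + struct.pack("<I", 3))
    fake = f.name

version = gguf_version(fake)
os.unlink(fake)
```

A file that fails this check was never going to load in llama.cpp anyway, so a check like this gives a friendlier error than a crash deep in the loader.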
(Demo video: Screen.Recording.2024-05-20.at.20.27.52.1.mov)
🦆 CHANGES:
- Phi-3 is now the default LLM used by Dot; it is not only extremely light but also works surprisingly well for RAG tasks.
- Minor UI changes
🐝 KNOWN BUGS:
- Sometimes answers that are too long are not displayed at all. This happens because the JSON output generated by Python gets split due to its length, and the JS render process is then unable to parse the truncated JSON. The issue can be avoided by decreasing the max tokens setting.
- The LLM can take an annoyingly long time to start up when the first message is sent, but it works fine once loaded.
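As a sketch of one possible fix for the truncated-JSON bug (not Dot's actual code): instead of parsing each stdout chunk on arrival, the receiving side could buffer chunks and only parse once the accumulated text forms valid JSON:

```python
import json

def feed(buffer, chunk):
    """Append a stdout chunk; return the decoded message once complete.

    Returns None while the accumulated text is still a truncated JSON
    fragment, so the caller simply waits for more chunks.
    """
    buffer.append(chunk)
    try:
        return json.loads("".join(buffer))
    except json.JSONDecodeError:
        return None

# A long answer serialized on the Python side...
payload = json.dumps({"answer": "word " * 20000})
# ...may arrive at the renderer split into several chunks:
chunks = [payload[i:i + 65536] for i in range(0, len(payload), 65536)]

buffer, message = [], None
for chunk in chunks:
    message = feed(buffer, chunk)
```

With this approach the max tokens workaround would no longer be needed, since messages of any length reassemble before parsing.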
🐓 WORKING ON:
- Docker image
- Linux version
- macOS Intel version
- Improved performance and overall speed
- Making it work on 8 GB RAM devices
- Text streaming
- It now talks and listens; maybe it should also see? 🤔
Anyway, hope you enjoy! Please let me know if there are any issues :)
🔵 New features, changes, and fixes! 🔵
Hello!
Made some changes and added a few features! As always, Dot can be installed through the website, or the binaries can be found in this very release.
🦔 FEATURES:
- Dot will now remember what documents you loaded after closing the app!
- Doc Dot will now display the texts from which it formulated its answers, and for PDFs it will also display the actual document at the correct page!
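A minimal sketch of how remembering loaded documents between sessions can work (the file name and helpers here are hypothetical, not Dot's actual implementation): write the list of document paths to a small JSON file on change, and read it back on startup.

```python
import json
import os
import tempfile

def save_documents(paths, state_file):
    # Persist the list of loaded documents so it survives app restarts.
    with open(state_file, "w", encoding="utf-8") as f:
        json.dump(paths, f)

def load_documents(state_file):
    # Restore the list on startup; empty on a fresh install.
    if not os.path.exists(state_file):
        return []
    with open(state_file, encoding="utf-8") as f:
        return json.load(f)

# Demo round-trip in a temporary directory:
state = os.path.join(tempfile.mkdtemp(), "loaded_documents.json")
save_documents(["report.pdf", "notes.docx"], state)
restored = load_documents(state)
```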
🦆 CHANGES:
- Dot will no longer ship with Mistral 7B preinstalled; instead, it will download the model when the app is opened for the first time.
- Changed the embedding model to BAAI llm-embedder; this should increase the quality of Doc Dot's semantic search.
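For context on why the embedding model matters: semantic search ranks document chunks by the similarity between their embedding vectors and the query embedding, typically cosine similarity, so a better embedder directly improves which chunks get retrieved. A toy illustration with made-up 3-d vectors (real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The query points almost the same way as chunk_1, so it should rank higher.
query = [0.9, 0.1, 0.0]
chunk_1 = [1.0, 0.0, 0.0]
chunk_2 = [0.0, 1.0, 0.0]
score_1 = cosine_similarity(query, chunk_1)
score_2 = cosine_similarity(query, chunk_2)
```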
🐝 KNOWN BUGS:
- Sometimes answers that are too long are not displayed at all. This happens because the JSON output generated by Python gets split due to its length, and the JS render process is then unable to parse the truncated JSON.
🐓 WORKING ON:
- Docker image
- Linux version
- macOS Intel version
- Improved performance
- Making it work on 8 GB RAM devices
Anyway, hope you enjoy! Please let me know if there are any issues :)
Windows fix
Hi! After some back and forth, it looks like I managed to fix the problems with the Windows version.
The new release (available on the website here) should address the issue where the app was stuck on "Dot is typing..." for Windows users. The cause of the issue was a few missing dependencies required for llama.cpp to function properly.
Also, I recently learned just how expensive it is to code-sign an app for Windows, and unfortunately that is waaaay beyond my budget right now, which means Windows Defender might not like Dot at first...
Please let me know if you are facing any issues!