Chat Circuit - Experimental UI for branching/forking conversations : r/LocalLLaMA #859
Labels
AI-Chatbots
Topics related to advanced chatbot platforms integrating multiple AI models
code-generation
Code generation models and tools like Copilot and aider
forum-discussion
Quotes clipped from forums
Git-Repo
Source code repositories like GitLab or GitHub
llm-applications
Topics related to practical applications of Large Language Models in various fields
llm-experiments
experiments with large language models
python
Python code, tools, info
Resources
I have been experimenting with a UI where you can branch/fork conversations and ask follow-up questions using any available LLM.
At the moment, it supports local LLMs running with Ollama, but it's possible to extend it to use other providers.
Here is a quick demo of the application and some of its capabilities.
It maintains context for a branch/fork and sends it to the LLM along with the last question.
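The post doesn't show the internals, but the branch-context idea can be sketched roughly like this: walk from the forked node back to the root, then replay that chain as a chat history sent with the latest question. The `Node` class and `branch_messages` helper here are hypothetical illustrations, not the actual Chat Circuit code.

```python
# Hypothetical sketch of per-branch context assembly; the real
# Chat Circuit implementation may differ.

class Node:
    def __init__(self, prompt, answer=None, parent=None):
        self.prompt = prompt    # user question at this node
        self.answer = answer    # LLM reply, once generated
        self.parent = parent    # fork point; None for the root

def branch_messages(node):
    """Collect the root-to-node chain as a chat-style message list,
    which would be sent to the LLM along with the last question."""
    chain = []
    while node is not None:
        chain.append(node)
        node = node.parent
    messages = []
    for n in reversed(chain):  # oldest turn first
        messages.append({"role": "user", "content": n.prompt})
        if n.answer is not None:
            messages.append({"role": "assistant", "content": n.answer})
    return messages

root = Node("Who is the author of The Martian?", answer="Andy Weir.")
fork = Node("Tell me about the movie", parent=root)
print(branch_messages(fork))
```

Each fork only carries its own ancestor chain, so sibling branches stay independent.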
The application is developed using Python/PyQt6 (just because of familiarity with the language/framework), and the source is available on GitHub.
Please try it out if you can and suggest improvements/ideas to make it better.
Comments
l33t-Mt
Awesome, I've built something extremely similar. Mine is just a Python Quart/Flask app that hosts the frontend in a web interface. It uses Ollama as the LLM backend.
Here is a little sample video. https://streamable.com/jzmnzh
namuan
This looks cool. Is this open source?
tronathan
What did you use for the node/graph generation on the frontend? Is it all custom or is it some kind of node-editor framework?
l33t-Mt
It's all custom HTML5/CSS/JavaScript.
l33t-Mt
Do you have the ability to pipe the output to multiple nodes? What about back to the original?
namuan
Not at the moment. Do you have some example use cases where it'll be useful?
l33t-Mt
If you wanted secondary or tertiary CoT (chain-of-thought), or if you wanted to repeat your content generation.
namuan
I see. Adding different prompting methods would be interesting. Repeating content generation is possible by re-generating each node, but that's not ideal for large branches.
phira
Might be worth considering another button, "Re-run downstream". The idea is that with your example you'd have:
1. Who is the author of The Martian?
2. Tell me about the movie
and if you changed card (1) to be "Who is the author of Twilight?" then hit re-run downstream, the "Tell me about the movie" card would update to be talking about the Twilight movie.
This case is a bit contrived, but if you imagine a developer situation you could have:
1. (schema for a database as content)
2. Come up with a plan to add a table that holds historical address & phone number for users
3. Implement the plan in Python
4. Write tests for this implementation
5. Review the given implementation
6. Correct the implementation given the review
Then any time you want to do a new thing with your schema, you modify the prompt in (2) and then hit "re-run downstream" and it runs 3-6 sequentially, giving you a reviewed and corrected implementation as the output.
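The "Re-run downstream" idea could be sketched as a recursive pass that regenerates a node's answer and then each descendant's, in order. The `Node` class and the `generate` stand-in below are hypothetical; a real version would call the LLM with the accumulated branch context.

```python
# Hypothetical sketch of a "Re-run downstream" action; generate() is a
# stand-in that just echoes, where a real version would query the LLM.

def generate(prompt, context):
    # Placeholder for an LLM call; context holds upstream (prompt, answer) pairs.
    return f"[answer to: {prompt}]"

class Node:
    def __init__(self, prompt, parent=None):
        self.prompt = prompt
        self.answer = None
        self.children = []
        self.parent = parent
        if parent:
            parent.children.append(self)

def rerun_downstream(node):
    """Regenerate this node's answer, then every descendant's, so an edit
    to an upstream card propagates through the whole branch."""
    context = []
    p = node.parent
    while p:
        context.append((p.prompt, p.answer))
        p = p.parent
    context.reverse()  # oldest ancestor first
    node.answer = generate(node.prompt, context)
    for child in node.children:
        rerun_downstream(child)

plan = Node("Come up with a plan to add a table that holds historical data")
impl = Node("Implement the plan in Python", parent=plan)
tests = Node("Write tests for this implementation", parent=impl)
rerun_downstream(plan)
print(tests.answer)
```

Because children are regenerated after their parent, each step always sees the freshly updated upstream answers.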
phira
Might also want nodes that are pure content, not prompts, just as modular context. You could also add "fetch" nodes that grab content from a URL, so you could pipeline the news for today or whatever.
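The two suggested node types could be sketched like this: a pure-content node that only supplies static context, and a fetch node that pulls its content from a URL at run time. Both classes are hypothetical illustrations, not part of Chat Circuit.

```python
# Hypothetical sketch of content-only and fetch node types.

import urllib.request

class ContentNode:
    """Static context: nothing is sent to the LLM as a prompt for this node;
    its text is just prepended to downstream context."""
    def __init__(self, text):
        self.text = text

    def content(self):
        return self.text

class FetchNode:
    """Context fetched from a URL each time the pipeline runs,
    e.g. today's news page."""
    def __init__(self, url):
        self.url = url

    def content(self):
        with urllib.request.urlopen(self.url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

schema = ContentNode("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);")
print(schema.content())
```

Giving both types the same `content()` interface would let downstream prompt nodes consume either interchangeably.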
ThePriceIsWrong_99
Interested in working on this? The bunch of us could probably knock something out this weekend.