
How to connect Lluminous to local llama.cpp? #5

Open
vojtapolasek opened this issue Aug 8, 2024 · 1 comment

@vojtapolasek

Hello,
I really like your app, it looks great!
However, I can't find any information about using local models through llama.cpp. I am running the packaged client + server on Linux and pass the "--llama" parameter with the path to my llama.cpp repository, which contains the compiled llama-server and other binaries, but no models show up. How should I set this up, please?
Thank you.

@zakkor
Owner

zakkor commented Aug 17, 2024

The directory where lluminous looks for models is hardcoded as "models" inside the llama.cpp directory you pass via the --llama parameter, so you'll need to create it and move your models in there.
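As a concrete illustration, a minimal shell sketch of that setup might look like the following. The llama.cpp path and model filename are placeholders, and the `lluminous` server invocation at the end is an assumption about what the packaged binary is called; only the --llama parameter and the "models" subdirectory come from this thread:

```sh
# Path to your local llama.cpp checkout (adjust to your system)
LLAMA_CPP_DIR=~/src/llama.cpp

# lluminous expects a "models" subdirectory inside that path
mkdir -p "$LLAMA_CPP_DIR/models"

# Move (or symlink) your GGUF model files into it
mv ~/Downloads/my-model.gguf "$LLAMA_CPP_DIR/models/"

# Then point the packaged server at the llama.cpp directory
# (binary name is assumed; use whatever your packaged build is called)
./lluminous --llama "$LLAMA_CPP_DIR"
```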


It's worth noting, though, that local model support currently only works with models that use the ChatML prompt template format.
The feature I'm currently working on is adding support for Ollama, which will make it work with any model, and hopefully with even less hassle. Sorry for the inconvenience!
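For reference, a model that "uses the ChatML template" expects its prompts wrapped in `<|im_start|>`/`<|im_end|>` turn markers. A generic sketch of the format (not lluminous-specific output) looks like this:

```text
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```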
