Replies: 2 comments
-
I have already downloaded an LLM file, but I don't know how to reference it in PRELOAD_MODELS. It only seems to accept URL values for downloading new ones (which sort of defeats the purpose).
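For reference, PRELOAD_MODELS takes a JSON list of gallery/URL entries, so it is not the mechanism for a file that is already on disk; a minimal sketch of both paths, assuming the Docker image and the default /models path (the model file name here is illustrative):

```sh
# PRELOAD_MODELS downloads/configures models at startup from URLs or gallery entries:
PRELOAD_MODELS='[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt4all-j"}]'

# For a file that is already downloaded, drop it into the models path instead;
# LocalAI scans that directory on startup (paths and file name are illustrative):
cp ./ggml-gpt4all-j.bin ./models/
docker run -p 8080:8080 -v $PWD/models:/models -ti quay.io/go-skynet/local-ai --models-path /models
```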
-
Hi, guys. Welcome. For @ebudmada: thanks for your idea, it is really cool. However, we have the repo go-llama.cpp, which is responsible for binding llama.cpp. We need to make sure features are tested before they are released, and some features need a specific version of llama.cpp. So I believe the binding is the most reliable solution for now. For @muunkky: have you tried …
-
Hi, thanks for your work, guys.
This feature may not be possible, but I think it would be simpler for LocalAI to use the existing llama.cpp installation on the current system and add a local API on top of it.
This would resolve a lot of the requests and bugs in the issues section.
Let me know your thoughts. Have a nice day!
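As a point of comparison, llama.cpp itself ships a server example that exposes a local HTTP API from a system build; a rough sketch of the suggested flow, assuming a local checkout and a compatible model file (paths are illustrative):

```sh
# Build the bundled server example from an existing llama.cpp checkout:
cd llama.cpp && make server

# Serve a local model over HTTP on localhost (model path is illustrative):
./server -m ./models/ggml-model.bin --host 127.0.0.1 --port 8080
```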