LLaMa.cpp support #49
According to the README, the LLaMa.cpp server has a special `/infill` endpoint for fill-in-the-middle (FIM) code completion.
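For reference, here is a minimal sketch of calling that endpoint from Python. The `input_prefix`/`input_suffix` field names match the llama.cpp server documentation at the time of writing, but the port, model, and sample code are assumptions; check your server version's docs.

```python
# Sketch: querying llama.cpp server's /infill endpoint for FIM completion.
# Assumes a server already running, e.g.:
#   llama-server -m qwen2.5-coder-7b-q4_k_m.gguf --port 8080
import json
import urllib.request

payload = {
    "input_prefix": "def fib(n):\n    if n < 2:\n        return n\n    ",  # code before the cursor
    "input_suffix": "\n\nprint(fib(10))\n",                               # code after the cursor
    "n_predict": 64,                                                      # max tokens to generate
}

req = urllib.request.Request(
    "http://localhost:8080/infill",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["content"])  # the generated "middle" section
```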
That is a good idea, and thank you for the information. Yes, it is possible to use llama.cpp as a backend, but one thing: for now QodeAssist works only with FIM models for code completion. I am working on extending that to instruct models; maybe today or tomorrow I will finish. And I am waiting for Qt Creator 15.0.1, because Qt has only published 15.0.1 and I can't build against 15.0.0 via GitHub Actions for all platforms. If you have time and patience, please wait, and I will add everything and release it in the next version.
I am back to this. @Alex20129, do you have a model, or a link to a model, that supports FIM? I need one for testing.
I don't know how to test the FIM function properly, because I've only used it in chat mode. Anyway, here are the models:
which makes me think that "qwen-coder" itself and all its derivatives were specifically trained with the FIM objective in mind.
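One way to exercise FIM outside of chat mode is to send a raw prompt containing the model's FIM special tokens to the plain `/completion` endpoint. The sketch below assumes the FIM token names published for Qwen2.5-Coder (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`); other FIM-capable models use different tokens, so verify against the model card.

```python
# Sketch: testing FIM by hand against llama.cpp's /completion endpoint,
# using the Qwen2.5-Coder FIM token format (an assumption -- check the
# model card; other FIM models define different special tokens).
import json
import urllib.request

prefix = "def is_even(n):\n    "
suffix = "\n\nassert is_even(4)\n"
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

payload = {"prompt": prompt, "n_predict": 32, "temperature": 0.1}
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["content"])  # should fill in the function body
```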
I also tried running …
I'm currently playing with LLaMa.cpp (a qwen-instruct GGUF model) in console chat mode, and I wondered whether it would be possible to integrate LLaMa.cpp seamlessly with Qt Creator. That's how I got here.
I would like to try QodeAssist with LLaMa.cpp as a backend.