
support new version of llamacpp #9

Open
ParisNeo opened this issue May 17, 2023 · 13 comments

@ParisNeo
Contributor

Hi Abdeladim, there are many new models that can't run on the pyllamacpp bindings because they use version 2 of the ggml format.
If you have some time, could you please try to add support for this?

@absadiki
Owner

Hi Saifeddine,

Yes, I am in the process of syncing the bindings with the latest llama.cpp progress, but there have been so many breaking changes to llama.cpp lately that it is taking time.

Meanwhile, could you please share some links to the models that do not work, so I can test them?

Thank you!

@ParisNeo
Contributor Author

Hi, thanks, here are some. TheBloke has up-to-date models, for example:
https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GGML

@absadiki
Owner

Yeah, those models were converted to the newest version of ggml after this breaking change: ggerganov/llama.cpp#1405.
The problem is that there is no backward compatibility with older models!

I think I will push a release up to that change, and then another release after it, so that people can choose which version to use based on the models they have.

What do you think?

@ParisNeo
Contributor Author

That could work. For now, I use the official llamacpp bindings for the new model format and yours for the previous one. But maybe having two releases is a good thing.

@absadiki
Owner

absadiki commented May 20, 2023

  • Version 2.3.0 is now built against the latest llama.cpp release (699b1ad) and works with the newest version of the models (I've tested it with TheBloke's model above, at least).

  • Version 2.2.0 can still be used for older models.

But yeah, feel free to use either one. The official bindings are great as well.
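
For reference, a minimal usage sketch against the updated bindings. It assumes the Model / generate() API shown in the pyllamacpp README; the exact keyword arguments may differ between 2.2.0 and 2.3.0, so treat the names here as assumptions:

```python
# Hedged sketch, not from this thread: assumes pyllamacpp's Model / generate()
# API as documented in its README; keyword names may differ between releases.
# Pick the release that matches your model format, e.g.:
#   pip install pyllamacpp==2.3.0   # models converted after ggerganov/llama.cpp#1405
#   pip install pyllamacpp==2.2.0   # older ggml models
from pyllamacpp.model import Model

# Placeholder path: point this at whichever ggml model you downloaded.
model = Model(model_path="./models/your-model.q4_0.bin")

# Stream tokens to stdout as they are generated.
for token in model.generate("Name the planets in the solar system.", n_predict=128):
    print(token, end="", flush=True)
```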

@ghost

ghost commented May 21, 2023

I used these steps to update to the new model format:

Follow these steps to acquire the (Alpaca/LLaMA) F16 model:

   1. Download and install the Alpaca-lora repo: https://github.com/tloen/alpaca-lora
   2. Once you've successfully downloaded the model weights, you should have them inside a folder like this (on Linux):
   3. Run: python convert-pth-to-ggml.py ~/.cache/huggingface/hub/models--decapoda-research--llama-7b-hf/snapshots/5f98eefcc80e437ef68d457ad7bf167c2c6a1348 1
   4. Once you get your f16 model, copy it into the llama.cpp/models folder.
   5. Run: ./quantize ./models/ggml-model-f16.bin ./models/ggml-model-q4_0.bin q4_0

Done.
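
A quick way to sanity-check which container format the converted .bin ends up in is to peek at its header. This is a hedged sketch based on how llama.cpp's convert scripts write a 4-byte magic followed by a 4-byte file version; the constants and byte order are assumptions worth verifying against your llama.cpp checkout:

```python
# Hedged sketch (assumptions, not from the steps above): llama.cpp's convert
# scripts write a 4-byte magic followed by a 4-byte file version at the start
# of a ggml model file. Verify the constants against your llama.cpp checkout.
import struct
import sys

MAGICS = {
    0x67676D6C: "ggml (legacy, unversioned)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-friendly)",
}

def describe_header(path: str) -> str:
    """Read the first 8 bytes and report the container magic and version."""
    with open(path, "rb") as f:
        magic, version = struct.unpack("<II", f.read(8))
    name = MAGICS.get(magic, f"unknown magic 0x{magic:08x}")
    if magic == 0x67676D6C:
        # The legacy format has no version field; the second u32 is hparams data.
        return name
    return f"{name}, file version {version}"

if __name__ == "__main__":
    print(describe_header(sys.argv[1]))  # e.g. ./models/ggml-model-q4_0.bin
```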

...but indeed, the pyllamacpp bindings are now broken.

I'll have a look and see if I can switch to the abetlen/llama-cpp-python bindings in the meantime and get it to work. But yeah, version upgrades are a real time waster, which is why developers should take note and either 1. make it easy to update or 2. make the app/framework/etc. backward compatible with older versions.

> • Version 2.3.0 is now built against the latest llama.cpp release (699b1ad) and works with the newest version of the models (I've tested it with TheBloke's model above, at least).
>
> • Version 2.2.0 can still be used for older models.
>
> But yeah, feel free to use either one. The official bindings are great as well.

It doesn't work for me. It is hallucinating 100% of the time, often in random languages, and not responding coherently to any of my prompts.

@ghost

ghost commented May 21, 2023

Ok, I have a workaround for now.

It seems that prompt_context, prompt_suffix and prompt_prefix are broken, so I have to add them manually into the prompt for now.
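
A minimal sketch of that workaround, using placeholder Alpaca-style marker strings (the actual prefix/suffix values aren't shown in this thread, so treat them as assumptions):

```python
# Hypothetical sketch of the manual workaround: wrap the user prompt in the
# prefix/suffix yourself instead of relying on prompt_prefix / prompt_suffix.
# The marker strings are placeholders, not values taken from this thread.
PROMPT_PREFIX = "### Instruction:\n"
PROMPT_SUFFIX = "\n### Response:\n"

def build_prompt(user_text: str, context: str = "") -> str:
    """Concatenate the context, prefix, user text and suffix by hand."""
    return f"{context}{PROMPT_PREFIX}{user_text}{PROMPT_SUFFIX}"

full_prompt = build_prompt("What is the capital of France?")
# full_prompt is then passed to the bindings' generate() call as usual.
```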

These Python bindings are the only ones working with the new update of llama.cpp, so well done!

UPDATE:

It seems to repeat a lot of answers, which I don't think used to happen before (or maybe I missed it?).

@Naugustogi

How do I upgrade a model to the new ggjt v2 format?
I'm using gpt4-x-alpaca-13b-native-ggml-model-q4_0.
(I'm now able to compile with CMake.)

@absadiki
Owner

> I used these steps to update to the new model format:
>
> Follow these steps to acquire the (Alpaca/LLaMA) F16 model:
>
>    1. Download and install the Alpaca-lora repo: https://github.com/tloen/alpaca-lora
>    2. Once you've successfully downloaded the model weights, you should have them inside a folder like this (on Linux):
>    3. Run: python convert-pth-to-ggml.py ~/.cache/huggingface/hub/models--decapoda-research--llama-7b-hf/snapshots/5f98eefcc80e437ef68d457ad7bf167c2c6a1348 1
>    4. Once you get your f16 model, copy it into the llama.cpp/models folder.
>    5. Run: ./quantize ./models/ggml-model-f16.bin ./models/ggml-model-q4_0.bin q4_0
>
> Done.
>
> ...but indeed, the pyllamacpp bindings are now broken.
>
> I'll have a look and see if I can switch to the abetlen/llama-cpp-python bindings in the meantime and get it to work. But yeah, version upgrades are a real time waster, which is why developers should take note and either 1. make it easy to update or 2. make the app/framework/etc. backward compatible with older versions.
>
> > • Version 2.3.0 is now built against the latest llama.cpp release (699b1ad) and works with the newest version of the models (I've tested it with TheBloke's model above, at least).
> >
> > • Version 2.2.0 can still be used for older models.
> >
> > But yeah, feel free to use either one. The official bindings are great as well.
>
> It doesn't work for me. It is hallucinating 100% of the time, often in random languages, and not responding coherently to any of my prompts.

@twinlizzie, where did you get the steps you described here?
Is convert-pth-to-ggml.py now updated to convert the models to the new ggjt v2 format?

Yeah, unfortunately, llama.cpp introduced some breaking changes and it is not backward compatible; the models need to be reconverted! I pushed a version (v2.2.0) to PyPI before updating to the latest, so it stays compatible with the older models. You can give it a try as well.

@absadiki
Owner

> Ok, I have a workaround for now.
>
> It seems that prompt_context, prompt_suffix and prompt_prefix are broken, so I have to add them manually into the prompt for now.
>
> These Python bindings are the only ones working with the new update of llama.cpp, so well done!
>
> UPDATE:
>
> It seems to repeat a lot of answers, which I don't think used to happen before (or maybe I missed it?).

What do you mean by it repeating answers? Do you mean you get the same answer every time you run the generation?

@absadiki
Owner

> How do I upgrade a model to the new ggjt v2 format? I'm using gpt4-x-alpaca-13b-native-ggml-model-q4_0. (I'm now able to compile with CMake.)

@Naugustogi, AFAIK you will need to get the PyTorch models and re-quantize them to a supported format.

@ghost

ghost commented May 22, 2023

> @twinlizzie, where did you get the steps you described here? Is convert-pth-to-ggml.py now updated to convert the models to the new ggjt v2 format?
>
> Yeah, unfortunately, llama.cpp introduced some breaking changes and it is not backward compatible; the models need to be reconverted! I pushed a version (v2.2.0) to PyPI before updating to the latest, so it stays compatible with the older models. You can give it a try as well.

Yep, convert-pth-to-ggml.py now works to convert larger models to ggjt v2. And I figured out the steps on my own.

> > Ok, I have a workaround for now.
> > It seems that prompt_context, prompt_suffix and prompt_prefix are broken, so I have to add them manually into the prompt for now.
> > These Python bindings are the only ones working with the new update of llama.cpp, so well done!
> > UPDATE:
> > It seems to repeat a lot of answers, which I don't think used to happen before (or maybe I missed it?).
>
> What do you mean by it repeating answers? Do you mean you get the same answer every time you run the generation?

Actually, I'm not entirely sure. I set the repeat_penalty to 1.2 and it seems to have fixed it, for now.

It would sometimes get stuck in a loop where you get the same type of answer no matter what you ask.

On the llama-cpp-python repo it seems to be even worse, because you always get the same answer 100% of the time (to the same question, that is). Or maybe I'm missing something about how to properly run the API...

llama.cpp itself does not have this problem and works perfectly, even with my DIY-upgraded model. I get a different answer to the same question, which is how I want it.
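
For reference, a hedged sketch of passing the repeat_penalty=1.2 setting mentioned above through the bindings. It assumes generate() forwards llama.cpp sampling parameters such as repeat_penalty; check help(Model.generate) for the exact keyword names in your installed pyllamacpp version:

```python
# Hedged sketch: assumes generate() accepts llama.cpp sampling parameters such
# as repeat_penalty; verify the exact keyword names for your pyllamacpp version.
from pyllamacpp.model import Model

model = Model(model_path="./models/your-model.q4_0.bin")  # placeholder path

for token in model.generate(
    "Write a short poem about the sea.",
    n_predict=128,
    repeat_penalty=1.2,  # the value that reportedly stopped the looping answers
):
    print(token, end="", flush=True)
```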

@absadiki
Owner

> > @twinlizzie, where did you get the steps you described here? Is convert-pth-to-ggml.py now updated to convert the models to the new ggjt v2 format?
> >
> > Yeah, unfortunately, llama.cpp introduced some breaking changes and it is not backward compatible; the models need to be reconverted! I pushed a version (v2.2.0) to PyPI before updating to the latest, so it stays compatible with the older models. You can give it a try as well.
>
> Yep, convert-pth-to-ggml.py now works to convert larger models to ggjt v2. And I figured out the steps on my own.
>
> > Ok, I have a workaround for now.
> > It seems that prompt_context, prompt_suffix and prompt_prefix are broken, so I have to add them manually into the prompt for now.
> > These Python bindings are the only ones working with the new update of llama.cpp, so well done!
> > UPDATE:
> > It seems to repeat a lot of answers, which I don't think used to happen before (or maybe I missed it?).
> >
> > What do you mean by it repeating answers? Do you mean you get the same answer every time you run the generation?
>
> Actually, I'm not entirely sure. I set the repeat_penalty to 1.2 and it seems to have fixed it, for now.
>
> It would sometimes get stuck in a loop where you get the same type of answer no matter what you ask.
>
> On the llama-cpp-python repo it seems to be even worse, because you always get the same answer 100% of the time (to the same question, that is). Or maybe I'm missing something about how to properly run the API...
>
> llama.cpp itself does not have this problem and works perfectly, even with my DIY-upgraded model. I get a different answer to the same question, which is how I want it.

I just tested it again on my end with the models above and I don't have this problem; every time I run it I get a different answer.
Could you please share your code? Maybe you are doing something wrong!
