[Question] Can I load the huggingface llama model as well? #708

I have downloaded the llama model from here. There it was converted to be compatible with PyTorch. But the biggest advantage is that it is actually available. The magnet link from that PR has no trackers, so it doesn't start downloading, at least for me. And the IPFS files always have a different checksum when I download them. So, since I only have the huggingface version, is it possible to use that model with llama.cpp somehow?
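If the first goal is just to confirm that the huggingface download itself is intact, a quick load-and-generate check works before attempting any conversion. A minimal sketch, assuming the decapoda-research/llama-7b-hf repo id mentioned in the comments below (note that, depending on the transformers version, that repo's tokenizer config may need patching before AutoTokenizer accepts it):

```python
# Sketch: sanity-check the downloaded HF checkpoint by loading it and
# generating a few tokens. The repo id is an example; a local path to the
# converted-for-pytorch weights works the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "decapoda-research/llama-7b-hf"  # example repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```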
Comments

With some modifications you might be able to use this: alpaca-convert-colab. I haven't tested it, however. You would only need to run the first two blocks up to …
Here's what I used last night. I'm not sure if this is the same thing KASR is mentioning or not.
@maxkraft7 I enjoy automatic tracker additions via https://github.com/c0re100/qBittorrent-Enhanced-Edition, but you can add these manually to any client: https://github.com/ngosang/trackerslist/blob/master/trackers_all.txt
I did it in two steps: I modified export_state_dict_checkpoint.py from alpaca-lora to create a consolidated file, then used a slightly modified conversion script. Here's a gist with my changes. Tested with the 7B–30B LLaMA models from decapoda-research. I mentioned it elsewhere, but if you're quantizing, I've had better results with …
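For anyone who can't follow the gist link (its text was lost in the page capture), here is a rough sketch of what such a consolidation step typically does: remap HF parameter names back to Meta's consolidated.00.pth layout so convert-pth-to-ggml.py can read the result. The 7B shapes, the repo id, and the HF-to-Meta name mapping are assumptions based on transformers' LLaMA implementation; multi-shard output for larger models is omitted:

```python
# Sketch: collapse an HF-format LLaMA checkpoint into Meta's
# consolidated.00.pth layout. 7B dimensions are hard-coded; other sizes
# need n_heads/dim adjusted and proper multi-shard handling.
import re
import torch
from transformers import AutoModelForCausalLM

n_heads, dim = 32, 4096  # 7B

def unpermute(w):
    # The HF conversion interleaves the rotary halves of wq/wk; undo it.
    return (w.view(n_heads, 2, dim // n_heads // 2, dim)
             .transpose(1, 2).reshape(dim, dim))

hf = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf",
                                          torch_dtype=torch.float16)
sd = hf.state_dict()

# HF name -> Meta name, per transformer layer
layer_map = {
    "self_attn.q_proj": "attention.wq",
    "self_attn.k_proj": "attention.wk",
    "self_attn.v_proj": "attention.wv",
    "self_attn.o_proj": "attention.wo",
    "mlp.gate_proj": "feed_forward.w1",
    "mlp.down_proj": "feed_forward.w2",
    "mlp.up_proj": "feed_forward.w3",
    "input_layernorm": "attention_norm",
    "post_attention_layernorm": "ffn_norm",
}

out = {
    "tok_embeddings.weight": sd["model.embed_tokens.weight"],
    "norm.weight": sd["model.norm.weight"],
    "output.weight": sd["lm_head.weight"],
}
for k, v in sd.items():
    m = re.match(r"model\.layers\.(\d+)\.(.+)\.weight", k)
    if not m:
        continue  # embeddings/norm/lm_head handled above; inv_freq dropped
    i, name = m.group(1), m.group(2)
    if name not in layer_map:
        continue
    if name in ("self_attn.q_proj", "self_attn.k_proj"):
        v = unpermute(v)
    out[f"layers.{i}.{layer_map[name]}.weight"] = v

torch.save(out, "models/7B/consolidated.00.pth")
```

From there the usual llama.cpp pipeline should apply, e.g. `python convert-pth-to-ggml.py models/7B/ 1` followed by the bundled `quantize` tool (paths and arguments per the llama.cpp README of that era).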
@MillionthOdin16, I got the following error via your script. Could you please help check?