Misc. bug: error when converting lora to gguf (ERROR:lora-to-gguf:Unexpected name 'base_model.model.lm_head.weight': Not a lora_A or lora_B tensor)
#11554
Open
Leeeef552 opened this issue on Jan 31, 2025 · 0 comments
Name and Version
$ ./llama-cli --version
version: 2999 (42b4109e)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
Python/Bash scripts
Command line
Problem description & steps to reproduce
Hi all, I was trying to convert my LoRA adapters to GGUF using the convert-lora-to-gguf.py script and ran into this error.
I fine-tuned using Axolotl.
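This tensor name usually shows up when the adapter stores full module weights next to the LoRA matrices, e.g. when lm_head/embed_tokens are listed under modules_to_save, or when new tokens were added and the embeddings resized. The Axolotl config is not included in this report, so the PEFT snippet below is only a hypothetical illustration of a setup that produces such tensors:

```python
# Hypothetical PEFT setup (not taken from this report) that stores full copies
# of lm_head/embed_tokens in the adapter in addition to lora_A/lora_B matrices.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Saving these modules in full places non-LoRA tensors such as the lm_head
    # weight into the adapter checkpoint, which convert-lora-to-gguf.py cannot
    # represent as a GGUF LoRA.
    modules_to_save=["embed_tokens", "lm_head"],
)
```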
First Bad Commit
No response
Relevant log output
INFO:lora-to-gguf:Loading base model from Hugging Face: NousResearch/Meta-Llama-3.1-8B-Instruct
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /NousResearch/Meta-Llama-3.1-8B-Instruct/resolve/main/config.json HTTP/1.1" 200 0
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:lora-to-gguf:Exporting model...
ERROR:lora-to-gguf:Unexpected name 'base_model.model.lm_head.weight': Not a lora_A or lora_B tensor
ERROR:lora-to-gguf:Embeddings is present in the adapter. This can be due to new tokens added during fine tuning
ERROR:lora-to-gguf:Please refer to https://github.com/ggerganov/llama.cpp/pull/9948
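One way to confirm what the adapter actually contains is to list its tensor names: anything that is not a lora_A/lora_B pair is what the converter rejects. A minimal sketch, assuming the adapter was saved as adapter_model.safetensors:

```python
# Minimal sketch: list adapter tensors and flag those that are not LoRA A/B
# matrices. Assumes the adapter file is named adapter_model.safetensors.
from safetensors import safe_open

with safe_open("adapter_model.safetensors", framework="pt", device="cpu") as f:
    for name in f.keys():
        if "lora_A" not in name and "lora_B" not in name:
            # Full-weight tensors (e.g. lm_head / embed_tokens) land here; these
            # are what convert-lora-to-gguf.py refuses, per the PR linked above.
            print("non-LoRA tensor:", name, tuple(f.get_tensor(name).shape))
```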