
Misc. bug: error when converting lora to gguf (ERROR:lora-to-gguf:Unexpected name 'base_model.model.lm_head.weight': Not a lora_A or lora_B tensor) #11554

Open
Leeeef552 opened this issue Jan 31, 2025 · 0 comments


Name and Version

$ ./llama-cli --version
version: 2999 (42b4109e)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

Python/Bash scripts

Command line

Problem description & steps to reproduce

Hi all, I was trying to convert my LoRA adapters to GGUF using the convert-lora-to-gguf.py script and ran into this error.

I fine-tuned using Axolotl.

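For context, the converter only accepts tensors whose names contain `lora_A` or `lora_B`; any full-weight tensor in the adapter (such as `lm_head` or `embed_tokens`, which Axolotl saves when new tokens are added or those modules are trained) triggers this error. A minimal sketch of that check, using a hypothetical helper name and example tensor names (the real logic lives in convert-lora-to-gguf.py):

```python
# Hypothetical helper mirroring the name check in convert-lora-to-gguf.py:
# a tensor is only accepted if it is part of a lora_A / lora_B pair.

def find_non_lora_tensors(tensor_names):
    """Return adapter tensor names that are neither lora_A nor lora_B."""
    return [n for n in tensor_names
            if "lora_A" not in n and "lora_B" not in n]

# Example names, as they might appear in an Axolotl-produced adapter
# (in practice you would read the keys from adapter_model.safetensors):
names = [
    "base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight",
    "base_model.model.model.layers.0.self_attn.q_proj.lora_B.weight",
    "base_model.model.lm_head.weight",
    "base_model.model.model.embed_tokens.weight",
]

# Any name printed here is a full-weight tensor the converter rejects.
print(find_non_lora_tensors(names))
```

If this lists `lm_head` or `embed_tokens` entries, the adapter contains merged embedding weights, which is exactly the case the error message points to.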

First Bad Commit

No response

Relevant log output

INFO:lora-to-gguf:Loading base model from Hugging Face: NousResearch/Meta-Llama-3.1-8B-Instruct
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /NousResearch/Meta-Llama-3.1-8B-Instruct/resolve/main/config.json HTTP/1.1" 200 0
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:lora-to-gguf:Exporting model...
ERROR:lora-to-gguf:Unexpected name 'base_model.model.lm_head.weight': Not a lora_A or lora_B tensor
ERROR:lora-to-gguf:Embeddings is present in the adapter. This can be due to new tokens added during fine tuning
ERROR:lora-to-gguf:Please refer to https://github.com/ggerganov/llama.cpp/pull/9948