safetensors in subfolder not supported #1154
Comments
Thanks for reporting. Could you please provide a full example of how to trigger the error, as well as the complete error message?
Hi @yxli2123

```python
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
)
from peft import PeftModel

config = AutoConfig.from_pretrained("LoftQ/Llama-2-13b-hf-4bit-64rank")
model = AutoModelForCausalLM.from_pretrained(
    "LoftQ/Llama-2-13b-hf-4bit-64rank",
    config=config,
    load_in_4bit=True,
)
peft_model = PeftModel.from_pretrained(model, "LoftQ/Llama-2-13b-hf-4bit-64rank", subfolder="loftq_init", is_trainable=True)
```

Running this, I get `__init__() got an unexpected keyword argument 'loftq_config'`. This is because the saved adapter config contains fields that peft does not support, such as: https://huggingface.co/LoftQ/Llama-2-13b-hf-4bit-64rank/blob/main/loftq_init/adapter_config.json#L11
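A possible manual workaround until this is fixed, sketched under the assumption that `loftq_config` is the only offending field; the local directory name is arbitrary and this is not an official peft API:

```python
import json
import os

from huggingface_hub import snapshot_download

# Download just the adapter subfolder to a local directory we can edit.
local_dir = snapshot_download(
    "LoftQ/Llama-2-13b-hf-4bit-64rank",
    allow_patterns=["loftq_init/*"],
    local_dir="llama-2-13b-loftq",
)

# Drop the field that the installed peft version does not recognize.
config_path = os.path.join(local_dir, "loftq_init", "adapter_config.json")
with open(config_path) as f:
    cfg = json.load(f)
cfg.pop("loftq_config", None)
with open(config_path, "w") as f:
    json.dump(cfg, f, indent=2)

# Then load from the patched local copy instead of the Hub:
# peft_model = PeftModel.from_pretrained(model, local_dir, subfolder="loftq_init", is_trainable=True)
```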
@younesbelkada This is based on #1150, so testing requires checking out that branch.
Oh I see, thanks!
You may need to set the HF hub to

```python
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
)
from peft import PeftModel

config = AutoConfig.from_pretrained("LoftQ/Mistral-7B-v0.1-4bit-32rank")
model = AutoModelForCausalLM.from_pretrained(
    "LoftQ/Mistral-7B-v0.1-4bit-32rank",
    config=config,
    load_in_4bit=True,
)
peft_model = PeftModel.from_pretrained(model, "LoftQ/Mistral-7B-v0.1-4bit-32rank", subfolder="loftq_init", is_trainable=True)
```

I still got error messages like:

```
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 270, in hf_raise_for_status
response.raise_for_status()
File "/opt/conda/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/LoftQ/Llama-2-13b-hf-4bit-64rank/resolve/main/loftq_init/adapter_model.bin
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 213, in load_peft_weights
filename = hf_hub_download(model_id, WEIGHTS_NAME, **hf_hub_download_kwargs)
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1247, in hf_hub_download
metadata = get_hf_file_metadata(
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1624, in get_hf_file_metadata
r = _request_wrapper(
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 402, in _request_wrapper
response = _request_wrapper(
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 426, in _request_wrapper
hf_raise_for_status(response)
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 280, in hf_raise_for_status
raise EntryNotFoundError(message, response) from e
huggingface_hub.utils._errors.EntryNotFoundError: 404 Client Error. (Request ID: Root=1-655d7fb9-2d52e81e1e58a55e4bb6b09a;9d8b8617-6b0b-4691-9e63-3929ecd15e7d)
Entry Not Found for url: https://huggingface.co/LoftQ/Llama-2-13b-hf-4bit-64rank/resolve/main/loftq_init/adapter_model.bin.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/amlt_code/examples/loftq_finetuning/train_gsm8k_llama.py", line 856, in <module>
main()
File "/mnt/amlt_code/examples/loftq_finetuning/train_gsm8k_llama.py", line 481, in main
model = PeftModel.from_pretrained(model, args.model_name_or_path, subfolder="loftq_init", is_trainable=True)
File "/opt/conda/lib/python3.10/site-packages/peft/peft_model.py", line 332, in from_pretrained
model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/peft/peft_model.py", line 629, in load_adapter
adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
File "/opt/conda/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 215, in load_peft_weights
raise ValueError(
ValueError: Can't find weights for LoftQ/Llama-2-13b-hf-4bit-64rank in LoftQ/Llama-2-13b-hf-4bit-64rank or in the Hugging Face Hub. Please check that the file adapter_model.bin or adapter_model.safetensors is present at LoftQ/Llama-2-13b-hf-4bit-64rank.
```
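Note that the 404 above is specifically for `adapter_model.bin`; the safetensors file described in this issue does exist in the subfolder. A minimal sketch of fetching and loading it manually, bypassing peft's loader (assumes the `safetensors` package is installed):

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Download adapter_model.safetensors directly from the loftq_init subfolder.
weights_path = hf_hub_download(
    "LoftQ/Llama-2-13b-hf-4bit-64rank",
    "adapter_model.safetensors",
    subfolder="loftq_init",
)
adapters_weights = load_file(weights_path)  # maps tensor names to torch.Tensors
```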
@yxli2123 is this still an issue? Are you able to repro on peft main?
So, how can this issue be addressed?
@yfangZhang can you share a reproducible snippet of the issue you are facing?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
System Info

When I put `adapter_model.safetensors` in a subfolder of a Hugging Face Hub repository, for example `LoftQ/Llama-2-7b-hf-4bit-64rank`, `PeftModel.from_pretrained(model, "LoftQ/Llama-2-7b-hf-4bit-64rank", subfolder="loftq_init")` is not able to find the `adapter_model.safetensors` file properly. It only supports `adapter_model.bin`. It would be great if you could support safetensors, since `PeftModel.save_pretrained()` automatically uses safetensors. Thank you~

Who can help?
No response
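For context on the "automatically uses safetensors" point above: a minimal sketch, assuming a peft version where `save_pretrained` exposes the `safe_serialization` flag (true by default in recent releases) and `peft_model` is an existing `PeftModel`:

```python
# Default serialization writes the safetensors file; opting out writes the .bin.
peft_model.save_pretrained("my_adapter")                            # -> adapter_model.safetensors
peft_model.save_pretrained("my_adapter", safe_serialization=False)  # -> adapter_model.bin
```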
Expected behavior

```python
peft_model = PeftModel.from_pretrained(model, "LoftQ/Llama-2-13b-hf-4bit-64rank", subfolder="loftq_init", is_trainable=True)
```

would find the `adapter_model.safetensors` file automatically.
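The expected lookup amounts to "try the safetensors name first, then fall back to the `.bin` name". A sketch of that logic using only public `huggingface_hub` calls; `find_adapter_weights` is a hypothetical helper for illustration, not peft's actual implementation:

```python
from huggingface_hub import hf_hub_download
from huggingface_hub.utils import EntryNotFoundError

def find_adapter_weights(model_id: str, subfolder: str | None = None) -> str:
    """Return a local path to the adapter weights, preferring safetensors."""
    for filename in ("adapter_model.safetensors", "adapter_model.bin"):
        try:
            return hf_hub_download(model_id, filename, subfolder=subfolder)
        except EntryNotFoundError:
            continue  # this filename is missing from the repo; try the next one
    raise ValueError(f"No adapter weights found for {model_id}")

# e.g. find_adapter_weights("LoftQ/Llama-2-13b-hf-4bit-64rank", subfolder="loftq_init")
```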