Is there any way to save lora-converted model? #12
This is implemented/fixed in #13, which has been merged. Please note that the weight naming is incompatible with peft at the moment. If this is a problem, please feel free to raise an issue and I will fix it.
Thank you very much! I tried this and got a 536KB file. Is that as expected? I also want to know how to apply the Lora tensors after loading a model.
No, the prefix was incorrect, but it should be fixed now. To load the Lora tensors, pass … That …
Really helpful, thanks again!
Glad to help!
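As a rough illustration of the saving step discussed above: candle provides `candle_core::safetensors::save`, which takes a map from tensor names to tensors. The sketch below (plain `Vec<f32>` stands in for candle's `Tensor`, and the key format is hypothetical — the maintainer notes the real naming is currently incompatible with peft) shows the idea of collecting the low-rank factors under prefixed names before handing them to such a save function.

```rust
use std::collections::HashMap;

// Hypothetical sketch: gather a layer's LoRA factors (ff_a, ff_b) into a
// name -> tensor map. In a real program the values would be candle Tensors
// and the map would be passed to candle_core::safetensors::save(&map, path).
// The key format shown here is an assumption, not candle-lora's actual one.
fn collect_lora_weights(
    layer: &str,
    ff_a: Vec<f32>,
    ff_b: Vec<f32>,
) -> HashMap<String, Vec<f32>> {
    let mut map = HashMap::new();
    map.insert(format!("{layer}.ff_a.weight"), ff_a);
    map.insert(format!("{layer}.ff_b.weight"), ff_b);
    map
}

fn main() {
    let map = collect_lora_weights("a", vec![0.1; 8], vec![0.2; 8]);
    // Print the keys that would become tensor names in the safetensors file.
    for key in map.keys() {
        println!("{key}");
    }
}
```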
I tried to fine-tune TinyLlama with this crate. I use `candle-lora/candle-lora-transformers/examples/llama.rs` to load `model.safetensors`, do the training, and eventually find that there is no way to save the model in safetensors format. I tried to implement a save method myself wrapping `candle_core::safetensors::save()`, but how can I get the weights of the lora part? All I can get is the raw model from before it was converted to a lora model.

For example, if you run `/candle-lora-macro/examples/linear.rs`, then `println!("{:?}", model.a);` prints a Linear struct, not a LoraLinear struct, and you cannot get `ff_a` and `ff_b` from `model.a`, even though the model has been converted to a lora model.
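To make the question concrete, here is a minimal sketch of what a LoRA-wrapped linear layer holds and computes: the frozen base weight plus the trainable low-rank factors `ff_a` (r × in) and `ff_b` (out × r), with forward pass y = Wx + scale · B(Ax). Plain `Vec<f32>` matrices stand in for candle Tensors; the field names mirror the issue, but this is a standalone illustration of the standard LoRA math, not candle-lora's actual implementation.

```rust
// Naive matrix-vector product over row-major Vec-of-rows matrices.
fn matvec(m: &[Vec<f32>], x: &[f32]) -> Vec<f32> {
    m.iter()
        .map(|row| row.iter().zip(x).map(|(a, b)| a * b).sum())
        .collect()
}

// Standard LoRA forward: frozen path W x plus scaled low-rank path B (A x).
// ff_a and ff_b are the two small matrices the issue asks how to save.
fn lora_forward(
    w: &[Vec<f32>],
    ff_a: &[Vec<f32>],
    ff_b: &[Vec<f32>],
    x: &[f32],
    scale: f32,
) -> Vec<f32> {
    let base = matvec(w, x);                   // W x
    let low = matvec(ff_b, &matvec(ff_a, x));  // B (A x)
    base.iter().zip(&low).map(|(b, l)| b + scale * l).collect()
}

fn main() {
    let w = vec![vec![1.0, 0.0], vec![0.0, 1.0]]; // 2x2 identity base weight
    let ff_a = vec![vec![1.0, 1.0]];              // rank r = 1, shape 1x2
    let ff_b = vec![vec![0.5], vec![0.5]];        // shape 2x1
    let y = lora_forward(&w, &ff_a, &ff_b, &[1.0, 2.0], 1.0);
    // base = [1, 2]; A x = [3]; B (A x) = [1.5, 1.5]; so y = [2.5, 3.5]
    println!("{y:?}");
}
```

Saving a converted model amounts to serializing exactly these `ff_a`/`ff_b` factors per wrapped layer, which is what the fix in #13 exposes.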