
v 0.0.7 #84

Merged (35 commits) on Dec 25, 2022

Conversation

cloneofsimo (Owner) commented:

A few new features:

  • Better find_modules and safetensors support, by @hafriedlander
  • Datasets added
  • inspect_lora and save_all utilities
  • Pivotal tuning CLI added. You can now run all three training methods from the CLI. Example:
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export INSTANCE_DIR="./data_example_captioned_small"
export OUTPUT_DIR="./exps/output_example_ffl_larger_wd_new"

lora_pti \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --train_text_encoder \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate_unet=1e-4 \
  --learning_rate_text=1e-5 \
  --learning_rate_ti=5e-4 \
  --color_jitter \
  --lr_scheduler="constant" \
  --lr_warmup_steps=100 \
  --placeholder_token="<krk>" \
  --learnable_property="object" \
  --initializer_token="girl" \
  --save_steps=500 \
  --max_train_steps_ti=2000 \
  --max_train_steps_tuning=2000 \
  --perform_inversion=True \
  --weight_decay_ti=0.1 \
  --weight_decay_lora=0.1 \
  --device="cuda:0"

I've also added lora_kiriko2.pt, which is the direct result of this CLI run.
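For reference, below is a minimal sketch of applying the resulting weights at inference time. It assumes the monkeypatch_lora, tune_lora_scale and inspect_lora helpers behave as in this repo's README of this era (treat the exact signatures and the file location as assumptions), and it only patches the UNet; restoring the learned <krk> token embedding and the text-encoder LoRA from the pivotal-tuning run is not shown here.

# Sketch: load a diffusers pipeline and patch in the LoRA produced by lora_pti.
# Helper names follow this repo's README; signatures/paths are assumptions.
import torch
from diffusers import StableDiffusionPipeline
from lora_diffusion import monkeypatch_lora, tune_lora_scale, inspect_lora

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda:0")

# Patch the UNet with the trained LoRA weights (path is a placeholder).
monkeypatch_lora(pipe.unet, torch.load("lora_kiriko2.pt"))

# Blend between the base model (0.0) and the fully tuned model (1.0).
tune_lora_scale(pipe.unet, 0.8)

# inspect_lora (one of the new utilities above) reports weight statistics
# of the injected LoRA layers.
print(inspect_lora(pipe.unet))

# Note: the learned <krk> embedding is not restored in this sketch, so a
# generic prompt is used here.
image = pipe("a photo of a girl", num_inference_steps=30).images[0]
image.save("sample.png")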


brian6091 and others added 30 commits December 14, 2022 07:30
* ENH: allow controlling rank of approximation

* Training script accepts lora_rank
* Develop (#34)

* Add parameter to control rank of decomposition (#28)

* ENH: allow controlling rank of approximation

* Training script accepts lora_rank

* feat : statefully monkeypatch different loras + example ipynb + readme

Co-authored-by: brian6091 <brian6091@gmail.com>

* release : version 0.0.4, now able to tune rank, now add loras dynamically

* readme : add brian6091's discussions

* fix:inject lora in to_out module list

* feat: added weight self apply lora

* chore: add import copy

* fix: readded r

Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
Co-authored-by: brian6091 <brian6091@gmail.com>
Co-authored-by: SimoRyu <cloneofsimo@korea.ac.kr>
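The rank referenced in these commits controls the width of the low-rank update injected into each attention linear layer. As a rough illustration, here is a self-contained sketch of the idea (not the repo's actual LoraInjectedLinear code):

# Sketch of the low-rank injection idea behind lora_rank: wrap a frozen
# nn.Linear and add a trainable rank-r update on top. Illustrative only.
import torch
import torch.nn as nn

class LoraLinearSketch(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base.requires_grad_(False)  # frozen pretrained weight
        self.lora_down = nn.Linear(base.in_features, r, bias=False)
        self.lora_up = nn.Linear(r, base.out_features, bias=False)
        self.scale = scale
        nn.init.normal_(self.lora_down.weight, std=1.0 / r)
        # Updates start at zero, so the freshly patched model equals the base model.
        nn.init.zeros_(self.lora_up.weight)

    def forward(self, x):
        # y = W x + scale * up(down(x)); r is the tunable rank of the update.
        return self.base(x) + self.scale * self.lora_up(self.lora_down(x))

Larger r gives the adapter more capacity (and a larger .pt file); that trade-off is what the lora_rank option exposes.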
* v 0.0.5 (#42)

* Add parameter to control rank of decomposition (#28)

* ENH: allow controlling rank of approximation

* Training script accepts lora_rank

* feat : statefully monkeypatch different loras + example ipynb + readme

* Fix lora inject, added weight self apply lora (#39)

* Develop (#34)

* Add parameter to control rank of decomposition (#28)

* ENH: allow controlling rank of approximation

* Training script accepts lora_rank

* feat : statefully monkeypatch different loras + example ipynb + readme

Co-authored-by: brian6091 <brian6091@gmail.com>

* release : version 0.0.4, now able to tune rank, now add loras dynamically

* readme : add brian6091's discussions

* fix:inject lora in to_out module list

* feat: added weight self apply lora

* chore: add import copy

* fix: readded r

Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
Co-authored-by: brian6091 <brian6091@gmail.com>
Co-authored-by: SimoRyu <cloneofsimo@korea.ac.kr>

* Revert "Fix lora inject, added weight self apply lora (#39)" (#40)

This reverts commit fececf3.

* fix : rank bug in monkeypatch

* fix cli fix

* visualization of the effect of LR

Co-authored-by: brian6091 <brian6091@gmail.com>
Co-authored-by: Davide Paglieri <paglieridavide@gmail.com>

* Fix save_steps, max_train_steps, and logging

Corrected indenting so that checking save_steps, checking max_train_steps, and updating logs are performed every step instead of at the end of an epoch.

Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
Co-authored-by: brian6091 <brian6091@gmail.com>
Co-authored-by: Davide Paglieri <paglieridavide@gmail.com>

* Enable resume training unet/text encoder (#48)

* Enable resume training unet/text encoder

New flags --resume_text_encoder and --resume_unet accept paths to .pt files to resume from.
Make sure to change the output directory from the previous training session, or the .pt files will be overwritten, since training does not resume from the previous global step.

* Load weights from .pt with inject_trainable_lora

Adds a new loras argument to the inject_trainable_lora function, which accepts a path to a .pt file containing previously trained weights (a usage sketch follows this entry).

Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
Co-authored-by: brian6091 <brian6091@gmail.com>
Co-authored-by: Davide Paglieri <paglieridavide@gmail.com>
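A hedged sketch of how a training script might use that loras argument to resume (the keyword name follows the commit description above; the exact signature, rank, and path are assumptions):

# Sketch: resume LoRA training from a previous run's .pt file via the new
# `loras` argument. Path and rank are placeholders; signature may differ.
import itertools
import torch
from diffusers import UNet2DConditionModel
from lora_diffusion import inject_trainable_lora

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
unet.requires_grad_(False)

# Passing the previous weights makes the freshly injected LoRA layers start
# from them instead of from scratch (this is what --resume_unet feeds in).
unet_lora_params, _ = inject_trainable_lora(
    unet, r=4, loras="./previous_run/lora_weight.pt"
)

optimizer = torch.optim.AdamW(itertools.chain(*unet_lora_params), lr=1e-4)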
* Learning rate switching & fix indent

Make learning rates switch from textual-inversion training to unet/text-encoder training after unfreeze_lora_step.
I think this is how it was explained in the linked paper(?)

Either way, it might be useful to add another parameter that activates unet/text-encoder training at a given step instead of at unfreeze_lora_step.
This would give the user more control.

Also fix indenting to make save_steps and logging work properly.
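To make the switching behavior concrete, here is a toy, self-contained sketch of the idea (names mirror the CLI flags above; the real train_lora_pti loop is more involved):

# Toy sketch of the two-phase learning-rate switch: textual-inversion LR
# until unfreeze_lora_step, then unet/text-encoder LR. Values are placeholders.
import torch

ti_embedding = torch.nn.Parameter(torch.zeros(768))      # stand-in for the new token embedding
lora_params = [torch.nn.Parameter(torch.zeros(4, 320))]  # stand-in for LoRA weights

learning_rate_ti, learning_rate_unet = 5e-4, 1e-4
unfreeze_lora_step, max_train_steps = 2000, 4000

optimizer = torch.optim.AdamW(
    [{"params": [ti_embedding]}, {"params": lora_params}], lr=learning_rate_ti
)

for step in range(max_train_steps):
    in_ti_phase = step < unfreeze_lora_step
    # Switch which parameter group trains, and at what LR, at unfreeze_lora_step.
    optimizer.param_groups[0]["lr"] = learning_rate_ti if in_ti_phase else 0.0
    optimizer.param_groups[1]["lr"] = 0.0 if in_ti_phase else learning_rate_unet
    # ... compute loss, loss.backward(), optimizer.step(), optimizer.zero_grad()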

* Fix indent

Fix indenting of accelerator.wait_for_everyone() according to the original DreamBooth training script.
* Refactor module replacement to work with nested Linears

* Make monkeypatch_remove_lora remove all LoraInjectedLinear instances
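For context, removal amounts to walking the module tree and putting the wrapped nn.Linear back in place of each LoraInjectedLinear, recursing so nested Linears are covered too. A hedged sketch of that pass, assuming the wrapper keeps the original layer as .linear (not the repo's exact code):

# Sketch of stripping injected LoRA layers by recursively restoring the
# wrapped nn.Linear. Mirrors the idea behind monkeypatch_remove_lora only.
import torch.nn as nn
from lora_diffusion import LoraInjectedLinear  # the wrapper class named in the commit

def remove_lora_sketch(module: nn.Module) -> None:
    for name, child in module.named_children():
        if isinstance(child, LoraInjectedLinear):
            # Restore the original (frozen) linear layer held by the wrapper.
            setattr(module, name, child.linear)
        else:
            remove_lora_sketch(child)  # recurse so nested Linears are handled too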
* Make image resize optional with --resize

Toggle off image resizing using --resize=False. The default is True, to maintain consistent operation.
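To illustrate what the flag toggles, here is a hypothetical transform builder in the style of the training scripts (torchvision only; not the scripts' actual code):

# Sketch of an optional-resize image transform, showing what --resize=False
# turns off. Hypothetical helper, not the training script's code.
from torchvision import transforms

def build_image_transforms(size: int = 512, resize: bool = True, center_crop: bool = False):
    ops = []
    if resize:
        # Default behavior: resize the shorter side to `size` before cropping.
        ops.append(transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR))
    ops.append(transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size))
    ops += [transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]
    return transforms.Compose(ops)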
* Add safetensors support

* Add some documentation for the safetensors load and save methods
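The sketch below shows the underlying format using the safetensors library directly; the repo's own load and save wrappers documented above build on calls like these, but their exact names and signatures are not reproduced here.

# Sketch: saving and loading LoRA tensors in the safetensors format using
# the safetensors library itself. Keys and shapes are illustrative only.
import torch
from safetensors.torch import save_file, load_file

lora_weights = {
    "unet.up.0.lora_down.weight": torch.randn(4, 320),
    "unet.up.0.lora_up.weight": torch.zeros(320, 4),
}

# Unlike torch.save, safetensors stores plain tensors only (no pickled code),
# so files are safer to share and fast to load.
save_file(lora_weights, "lora_example.safetensors", metadata={"lora_rank": "4"})

restored = load_file("lora_example.safetensors")
print({k: tuple(v.shape) for k, v in restored.items()})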
hafriedlander and others added 5 commits December 25, 2022 10:35

* Fix resize==False functionality

* Update train_lora_pt_caption.py

* Update train_lora_w_ti.py
* feat : save utils on lora

* fix : stochastic attribute

* feat : cleaner training code

* fix : bit of bugs on inspect and trainer

* fix : moved pti training to cli

* feat : patch now accepts target arg

* fix : gelu in target

* fix : gradient being way too large : autocast was the problem

* fix : hflip

* fix : example running well!