
Fix lora inject, added weight self apply lora #39

Merged

Conversation

DavidePaglieri
Contributor

Hi, awesome job!

I might be mistaken, but I think there might be a bug in the current version of inject_trainable_lora.

The cross attention module in the unet initially is:

CrossAttention(
              (to_q): Linear(in_features=1280, out_features=1280, bias=False)
              (to_k): Linear(in_features=768, out_features=1280, bias=False)
              (to_v): Linear(in_features=768, out_features=1280, bias=False)
              (to_out): ModuleList(
                (0): Linear(in_features=1280, out_features=1280, bias=True)
                (1): Dropout(p=0.0, inplace=False)
              )
            )

After injecting lora with the current method it becomes:

CrossAttention(
		[…]
              (to_out): ModuleList(
                (0): Linear(in_features=1280, out_features=1280, bias=True)
                (1): Dropout(p=0.0, inplace=False)
              )
              (to_out.0): LoraInjectedLinear(
                (linear): Linear(in_features=1280, out_features=1280, bias=True)
                (lora_down): Linear(in_features=1280, out_features=4, bias=False)
                (lora_up): Linear(in_features=4, out_features=1280, bias=False)
              )
            )

However, if you look at how CrossAttention works in diffusers, you can see that the forward calls self.to_out[0] and self.to_out[1], which I think would still call the original linear layer and dropout inside the ModuleList, and not the LoraInjectedLinear.
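To make this concrete, here is a minimal, hypothetical reproduction (not the repo's actual code) of how registering the replacement under the dotted name "to_out.0" ends up as a sibling module instead of swapping the element inside the ModuleList:

import torch.nn as nn

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.to_out = nn.ModuleList([nn.Linear(1280, 1280, bias=True), nn.Dropout(0.0)])

blk = Block()
# One way to end up with the duplicated tree shown above: setattr with a
# dotted name registers a new sibling module named "to_out.0" on the parent...
setattr(blk, "to_out.0", nn.Linear(1280, 1280, bias=True))
print(list(blk._modules.keys()))  # ['to_out', 'to_out.0']
# ...while the layer the forward pass actually uses stays untouched:
print(blk.to_out[0])              # still the original nn.Linear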

This PR should fix that; with it, the injected cross attention becomes:

CrossAttention(
		[…]
              (to_out): ModuleList(
                (0): LoraInjectedLinear(
                  (linear): Linear(in_features=1280, out_features=1280, bias=True)
                  (lora_down): Linear(in_features=1280, out_features=4, bias=False)
                  (lora_up): Linear(in_features=4, out_features=1280, bias=False)
                )
                (1): Dropout(p=0.0, inplace=False)
              )
            )

This should make it work correctly. I have also added weight_self_apply_lora, which applies the LoRA weights directly to a LoRA-injected model and returns the injected model in standard form.
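For illustration, here is a minimal sketch of the corrected injection, assuming a LoraInjectedLinear wrapper shaped like the one in the printouts (the real class may differ, e.g. with a scale factor):

import torch.nn as nn

class LoraInjectedLinear(nn.Module):
    # Illustrative wrapper matching the structure printed above.
    def __init__(self, in_features, out_features, bias=False, r=4):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias)
        self.lora_down = nn.Linear(in_features, r, bias=False)
        self.lora_up = nn.Linear(r, out_features, bias=False)
        nn.init.zeros_(self.lora_up.weight)  # typical LoRA init: the update starts at zero

    def forward(self, x):
        return self.linear(x) + self.lora_up(self.lora_down(x))

# Stand-in for CrossAttention.to_out from the printout:
to_out = nn.ModuleList([nn.Linear(1280, 1280, bias=True), nn.Dropout(0.0)])

old = to_out[0]
lora = LoraInjectedLinear(old.in_features, old.out_features, old.bias is not None)
lora.linear.weight = old.weight   # keep the pretrained weight
lora.linear.bias = old.bias
to_out[0] = lora                  # replace inside the list, so to_out[0] now hits the LoRA layer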

Let me know if this makes sense to you.
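For reference, "self applying" the LoRA weights usually means folding the low-rank update into the base weight (W <- W + up @ down) and putting a plain Linear back. A hedged sketch of that idea, assuming the injected module exposes .linear, .lora_up and .lora_down as in the printouts (not necessarily the exact weight_self_apply_lora implementation):

import torch
import torch.nn as nn

@torch.no_grad()
def collapse_lora_linear(injected) -> nn.Linear:
    # Fold the low-rank update into the base weight: W <- W + up @ down.
    fused = nn.Linear(
        injected.linear.in_features,
        injected.linear.out_features,
        bias=injected.linear.bias is not None,
    )
    fused.weight.copy_(
        injected.linear.weight + injected.lora_up.weight @ injected.lora_down.weight
    )
    if injected.linear.bias is not None:
        fused.bias.copy_(injected.linear.bias)
    return fused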

cloneofsimo and others added 7 commits December 14, 2022 18:12
* Add parameter to control rank of decomposition (#28)

* ENH: allow controlling rank of approximation

* Training script accepts lora_rank

* feat : statefully monkeypatch different loras + example ipynb + readme

Co-authored-by: brian6091 <brian6091@gmail.com>
@cloneofsimo
Owner

Amazing! How did you find this? I guess this isn't a critical issue, since qkv still gets tuned, but it could certainly be an underperforming factor. Thank you so much!! Can you switch the branch to develop?

@cloneofsimo cloneofsimo changed the base branch from master to develop December 15, 2022 04:01
@cloneofsimo cloneofsimo merged commit fececf3 into cloneofsimo:develop Dec 15, 2022
cloneofsimo added a commit that referenced this pull request Dec 15, 2022
cloneofsimo added a commit that referenced this pull request Dec 15, 2022
@cloneofsimo
Owner

@DavidePaglieri ah, so I found an issue regarding this PR. It assumes that to_out behaves in a specific way, which CLIPTextTransformer (or really any other module) might not agree with, since the injection takes a target_replace_module in general.

@cloneofsimo cloneofsimo added the enhancement label Dec 15, 2022
@cloneofsimo
Owner

I think this was an amazing issue that you've found. I'll mark this as an enhancement and work on making it more explicit / better later on.

@cloneofsimo
Owner

Maybe looking at submodules with depth 1 and adding a filter like lambda module: isinstance(module, nn.ModuleList) and it contains a Dropout might work.
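A rough sketch of that filter (illustrative only, not the repo's code):

import torch.nn as nn

def looks_like_to_out(module: nn.Module) -> bool:
    # depth-1 check: a ModuleList that contains a Dropout, like to_out above
    return isinstance(module, nn.ModuleList) and any(
        isinstance(child, nn.Dropout) for child in module
    )

# e.g.: [(name, m) for name, m in parent.named_children() if looks_like_to_out(m)]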

@cloneofsimo
Owner

Either way, this needs a major fix, as you've pointed out.

@DavidePaglieri
Contributor Author

I found it while implementing the weight self apply method, since there was a mismatch in the layers. Yes, I mostly checked it with the unet, and the fix I made is not super general. I agree that a condition based on whether it finds an nn.ModuleList could work well.

@cloneofsimo
Owner

Things are kind of a mess right now. It would probably be better to just reimplement stable diffusion (forked from diffusers or whatnot) and modify the alphas etc. in a non-hacky, non-monkey-patchy way, but that would be too much work haha

cloneofsimo added a commit that referenced this pull request Dec 15, 2022
cloneofsimo added a commit that referenced this pull request Dec 16, 2022
cloneofsimo added a commit that referenced this pull request Dec 16, 2022
cloneofsimo added a commit that referenced this pull request Dec 16, 2022
@hafriedlander
Collaborator

Ran across the same bug.

This is probably fixable by looking for "." in the module name, and recursing through the model tree if it's found.

I'm working on a more generic patching method (that can also remove the injected LoraInjectedLinears and replace them with normal Linears again) that will fix this. I'll link here once I'm done and you can comment if you like the approach.
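A hedged sketch of the dotted-name recursion idea, with hypothetical helper names (not necessarily how the more generic patcher ends up working):

import torch.nn as nn

def replace_by_dotted_name(root: nn.Module, dotted_name: str, new_module: nn.Module) -> None:
    # Walk down the tree along a dotted path like "to_out.0" (or deeper),
    # then replace the leaf on its immediate parent.
    *path, leaf = dotted_name.split(".")
    parent = root
    for part in path:
        parent = getattr(parent, part)  # also resolves ModuleList children ("0", "1", ...)
    if leaf.isdigit() and isinstance(parent, (nn.ModuleList, nn.Sequential)):
        parent[int(leaf)] = new_module
    else:
        setattr(parent, leaf, new_module)

# Un-injecting can reuse the same helper to put a plain nn.Linear back where a
# LoraInjectedLinear currently sits.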

@hafriedlander
Collaborator

hafriedlander commented Dec 19, 2022

Here's my rewrite of the injector (based off main - hopefully develop hasn't changed things too much). If this approach suits, I'll make a PR to add the same to this codebase.

https://github.com/hafriedlander/stable-diffusion-grpcserver/blob/dev/sdgrpcserver/pipeline/lora.py

cloneofsimo added a commit that referenced this pull request Dec 19, 2022
@hafriedlander
Collaborator

@cloneofsimo Take a look at the file above - is that an approach you'd accept a PR for?

cloneofsimo added a commit that referenced this pull request Dec 21, 2022
cloneofsimo added a commit that referenced this pull request Dec 25, 2022