Add support for torch.compile
#1024
Conversation
Torch compile is not available on Windows, right? Also, what improvements/changes does it bring?
Yes. In my small experiment I trained with these options. wandb: https://wandb.ai/p1atdev/sd-scripts-torch_compile/workspace?workspace=user-p1atdev. Training is very slow for the first few steps after compilation starts: https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html. Therefore, using …
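The slow first steps mentioned above come from `torch.compile` capturing and compiling the graph on the first call; later calls reuse it. A minimal sketch (the `"eager"` backend is chosen here only so the snippet runs without Triton/CUDA; training would normally use the default `"inductor"` backend):

```python
import time
import torch

def step(x):
    return torch.sin(x) + torch.cos(x)

# "eager" backend: no Triton/CUDA needed, still exercises the compile path
compiled_step = torch.compile(step, backend="eager")

x = torch.randn(1024)

t0 = time.perf_counter()
compiled_step(x)  # first call triggers graph capture/compilation (slow)
first_call = time.perf_counter() - t0

t0 = time.perf_counter()
compiled_step(x)  # later calls reuse the compiled graph (fast)
later_call = time.perf_counter() - t0

print(f"first: {first_call:.4f}s, later: {later_call:.4f}s")
```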
Thanks, can you tell the it/s difference? By the way, the best-looking example is xformers.
The following are screenshots taken during longer training runs:
wandb: https://wandb.ai/p1atdev/pvc-torch_compile. I'm not familiar with …
Sorry for the delay. Thank you so much for this great PR! I don't use Linux/WSL personally, but this is really nice!
I noticed that this uses not sd-scripts' accelerate 0.23.0 but 0.25.0.
I updated …
What does "second time of training" mean?
Can you please test speed with inductor backend? |
@p1atdev What version of PyTorch do you recommend: 2.1, or does 2.0 work fine? I would like to mention it in the documentation when updating.
I tested with PyTorch version 2.1.2+cu118 and it worked. Also, according to the PyTorch release notes, …
Thank you for clarification! |
@p1atdev What training script did you test with? With …
(However, I am using pytorch-rocm 2.3 nightly and don't know if …)
I got the same error on Mac, and also the same error on WSL with CUDA. The Mac's torch version is 2.2.0, and WSL has 2.1.2.
Same error on torch 2.2.0 cu118.
I updated my …
Thank you for the reply. This solved the non-hashable error on both Mac and WSL (updated einops to 0.7.0).
https://discuss.pytorch.org/t/how-to-save-load-a-model-with-torch-compile/179739/2
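As the linked thread explains, `torch.compile` wraps the original module under an `_orig_mod` attribute, so the compiled model's `state_dict` keys gain a `_orig_mod.` prefix and won't load into an uncompiled model. A sketch of stripping that prefix before saving (the helper name is ours, and the state dict below is simulated rather than produced by an actual compile):

```python
import torch

def strip_compile_prefix(state_dict):
    """Remove the '_orig_mod.' prefix that torch.compile adds to parameter names."""
    return {k.removeprefix("_orig_mod."): v for k, v in state_dict.items()}

# simulated state_dict as it would come from a compiled nn.Linear(4, 2)
sd = {"_orig_mod.weight": torch.zeros(2, 4), "_orig_mod.bias": torch.zeros(2)}

model = torch.nn.Linear(4, 2)
model.load_state_dict(strip_compile_prefix(sd))  # loads into the plain module
```

Equivalently, saving `compiled_model._orig_mod.state_dict()` avoids the prefix in the first place.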
Also, saving LoRA probably needs to do the same, or it is structurally broken.
Added:
- `--torch_compile` and `--dynamo_backend` options.
- `--torch_compile`: Enables `torch.compile`. Default is False. Cannot be used with `--xformers`; please use the `--sdpa` option instead.
- `--dynamo_backend`: The backend used with `torch.compile`. Default is `"inductor"`. `"eager"`, `"aot_eager"`, `"inductor"`, `"aot_ts_nvfuser"`, `"nvprims_nvfuser"`, `"cudagraphs"`, `"ofi"`, `"fx2trt"`, and `"onnxrt"` are available, but most are not tested. `inductor` and `eager` worked.

Changed:
- `einops` version from `0.6.0` to `0.6.1` to be compatible with `torch.compile`. (more information)

Related:
- accelerate: torch.compile() support for faster training on Pytorch 2.0 #65
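The option names above could be wired up roughly as follows. This is a sketch under our own assumptions, not sd-scripts' actual implementation, and `maybe_compile` is a hypothetical helper:

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--torch_compile", action="store_true",
                    help="enable torch.compile (not compatible with --xformers)")
parser.add_argument("--dynamo_backend", type=str, default="inductor",
                    help="backend passed to torch.compile")

def maybe_compile(model, args):
    """Hypothetical helper: wrap the model with torch.compile when requested."""
    if args.torch_compile:
        return torch.compile(model, backend=args.dynamo_backend)
    return model

args = parser.parse_args(["--torch_compile", "--dynamo_backend", "eager"])
model = maybe_compile(torch.nn.Linear(4, 4), args)
```

With `--torch_compile` omitted, the model is returned unwrapped, so the options stay opt-in.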