Thank you for your excellent work and the impressive results presented in your paper.
I have attempted to train and fine-tune the model using the provided scripts (process/train_animediffusion.py and ft_animediffusion.py), with 300 epochs of training and 10 epochs of fine-tuning. The colorization results look good when the line art is extracted from the reference image itself. However, when the line art comes from a different image, the model struggles to reach comparable quality. All line art images are preprocessed with the XDoG function (a sketch of my preprocessing is included below).
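For reference, this is roughly the XDoG preprocessing I apply; the helper name and the parameter values (sigma, k, gamma, epsilon, phi) are just the ones I am experimenting with, not necessarily the settings used in the paper:

```python
import cv2
import numpy as np

def xdog(gray, sigma=0.8, k=1.6, gamma=0.98, epsilon=-0.1, phi=200.0):
    """eXtended Difference of Gaussians.

    gray: single-channel float image scaled to [0, 1].
    Returns a float image in [0, 1] with dark lines on a light background.
    """
    # Two Gaussian blurs at different scales; ksize=(0, 0) lets OpenCV
    # derive the kernel size from sigma.
    g1 = cv2.GaussianBlur(gray, (0, 0), sigma)
    g2 = cv2.GaussianBlur(gray, (0, 0), sigma * k)

    # Weighted difference of Gaussians followed by a soft threshold.
    dog = g1 - gamma * g2
    out = np.where(dog >= epsilon, 1.0, 1.0 + np.tanh(phi * (dog - epsilon)))
    return np.clip(out, 0.0, 1.0)

# Example usage: turn a color image into a line-art map before training.
# "example.png" is just a placeholder path.
img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
lineart = (xdog(img) * 255.0).astype(np.uint8)
cv2.imwrite("example_lineart.png", lineart)
```

The gamma, epsilon, and phi values change line thickness and contrast quite a lot, so if the settings you used differ from these, it would help to know them.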
Could you provide some guidance on how to improve the colorization performance with different line art inputs, or share any tips for using the model?
Thank you very much for your assistance!