
About Textual Inversion Embedding #1

Open
YouPassTheButter opened this issue Jun 19, 2023 · 3 comments

Comments

@YouPassTheButter

Hi, great job! Would you mind sharing the code or pipeline for training the textual inversion embedding?

@shunk031

Hi, @VSAnimator

I'd like to know about training textual inversion too. In particular, I'd appreciate details regarding the FineTuneConcept class shown in the notebook. Thanks!

@cangzihan

Maybe we can follow this example to train a Textual Inversion embedding with a size of [1024]:
https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion
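For reference, the core of that diffusers example comes down to adding a placeholder token and optimizing only its embedding row. Below is a minimal sketch of that setup, not code from this repository; the base model id and the `<my-concept>` / `painting` tokens are placeholders, and the [1024] size corresponds to SD 2.x's OpenCLIP text encoder (768 for SD 1.x):

```python
# Minimal sketch of the standard textual-inversion setup (diffusers-style):
# add a placeholder token and train only its embedding vector.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"  # assumed base model, swap as needed
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

# Register the new concept token and initialize its embedding row
# from a semantically similar existing token.
placeholder, initializer = "<my-concept>", "painting"
tokenizer.add_tokens(placeholder)
text_encoder.resize_token_embeddings(len(tokenizer))

embeddings = text_encoder.get_input_embeddings().weight
placeholder_id = tokenizer.convert_tokens_to_ids(placeholder)
initializer_id = tokenizer.convert_tokens_to_ids(initializer)
with torch.no_grad():
    embeddings[placeholder_id] = embeddings[initializer_id].clone()

# Freeze the text encoder; only the input-embedding matrix gets gradients.
# The training loop would also restore all rows except the placeholder's after
# each step, so effectively a single 768-d (or 1024-d) vector is learned.
text_encoder.requires_grad_(False)
text_encoder.get_input_embeddings().requires_grad_(True)
optimizer = torch.optim.AdamW(text_encoder.get_input_embeddings().parameters(), lr=5e-4)
```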

@diptanshu-singh

The textual inversion used in FTC seems to be different from the original textual_inversion. I used the original textual inversion code base to learn the embeddings, but it would take a lot of time for each layer. In the shared assets, it looks like they are not learning a new token but instead a modifier on top of the layer token. I am not sure how this is being done, but it seems like a very cool thing to try out.
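One way such a modifier could be implemented, purely as a guess at the idea described above and not necessarily what this repository does, is to keep the layer token's embedding frozen and learn only an additive offset on top of it:

```python
# Hypothetical sketch: a "modifier" learned on top of an existing token
# embedding. The base embedding stays frozen; only a small offset is trained.
import torch
import torch.nn as nn

class TokenModifier(nn.Module):
    def __init__(self, base_embedding: torch.Tensor):
        super().__init__()
        # Frozen copy of the existing token's embedding (e.g. 768-d or 1024-d).
        self.register_buffer("base", base_embedding.clone())
        # Learned offset, initialized to zero so training starts at the base token.
        self.delta = nn.Parameter(torch.zeros_like(base_embedding))

    def forward(self) -> torch.Tensor:
        # The effective embedding substituted for the token in the text encoder.
        return self.base + self.delta

# Only `delta` is optimized, so the learned concept stays anchored to the
# original token's meaning instead of being a brand-new vocabulary entry.
base = torch.randn(768)  # stand-in for the frozen token embedding
modifier = TokenModifier(base)
optimizer = torch.optim.AdamW(modifier.parameters(), lr=1e-3)
```

Because the offset starts at zero and is the only trained parameter, this would be cheaper per layer than learning a full new token from scratch, which might explain the speed difference noted above.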
