Test custom scene graph #12
I also want to test with my own scene graphs. I modified testset_ddim_sampler.py so that it only performs the loading steps, and then created my own data:
to use it for the sampler (image generation):
The result is worse, but I don't know whether the cause is my model (only trained to epoch 35) or the image. I also wonder why I need to add an image to the data for the generation process.
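Since the custom data itself is elided above, here is a minimal sketch of what VG-style scene-graph input typically looks like, assuming the (objects, triples) convention used by sg2im-derived code. All object names, predicate names, and vocabulary indices below are illustrative, not taken from this repository; the real pipeline would build these lists from the dataset vocabulary and convert them to torch tensors.

```python
# Assumed vocabulary lookup tables; index 0 is reserved here for the
# special __image__ node that every real object is connected to.
object_idx = {"__image__": 0, "sky": 1, "tree": 2, "grass": 3}
pred_idx = {"__in_image__": 0, "above": 1, "on": 2}

# One vocabulary index per node, plus the dummy __image__ node.
objs = [object_idx["sky"], object_idx["tree"],
        object_idx["grass"], object_idx["__image__"]]

# Triples are (subject, predicate, object), where subject and object
# are positions into `objs`, not vocabulary indices.
triples = [
    [0, pred_idx["above"], 1],         # sky above tree
    [1, pred_idx["on"], 2],            # tree on grass
    [0, pred_idx["__in_image__"], 3],  # every object links to __image__
    [1, pred_idx["__in_image__"], 3],
    [2, pred_idx["__in_image__"], 3],
]
```

The dummy __image__ node may also explain why an image field appears in the data: samplers written against the training dataloader often expect the full batch layout (including a ground-truth image slot) even when only the graph is actually used for generation.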
You need to train the model for much longer if you want to obtain good results; it took me roughly 8 days and 335 epochs to reproduce the authors' results, see #7 (comment). You will also need to carefully design your custom scene graphs: the original VG dataset is highly unbalanced, so the diffusion model does not learn efficient representations for all types of relations. In my experiments, it works relatively well to reconstruct images from graphs composed of spatial relations, but it will fail with more complex relations (such as semantic relations, e.g. "person eating sandwich", "person drinking wine", etc.).
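To see which relations are well represented before designing a custom graph, one can count predicate frequencies in the VG-style annotations. This is a hedged sketch: the `relationships`/`predicate` key names follow the common Visual Genome JSON layout and may differ in this repository, and the toy `graphs` list stands in for the real annotation file.

```python
from collections import Counter

def predicate_frequencies(scene_graphs):
    """Count how often each predicate appears across all scene graphs."""
    counts = Counter()
    for sg in scene_graphs:
        for rel in sg.get("relationships", []):
            counts[rel["predicate"]] += 1
    return counts

# Toy stand-in for the real annotation file:
graphs = [
    {"relationships": [{"predicate": "on"}, {"predicate": "on"},
                       {"predicate": "eating"}]},
    {"relationships": [{"predicate": "on"}, {"predicate": "above"}]},
]
print(predicate_frequencies(graphs).most_common())
# [('on', 3), ('eating', 1), ('above', 1)]
```

Predicates near the bottom of this ranking (typically the semantic ones like "eating") are the relations the model has seen least and is most likely to fail on.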
Hi Ling Yang,
If I want to test generating an image from a custom scene graph, what data do I need to prepare and which part of the code should I change?