
Shape and Texture pretraining on the custom dataset to create 'sdf_rgb_pretrained' folder #13

Open
alihankeleser opened this issue Jan 26, 2023 · 5 comments
Labels
good first issue Good for newcomers

Comments

@alihankeleser

Hi @zubair-irshad,

How do I re-generate the folder 'sdf_rgb_pretrained' by pretraining the shape and texture network using my custom dataset?
Which folders from 'sdf_rgb_pretrained' do I exactly need to reproduce in order to train the network on my custom dataset? I am assuming I need all of them.

Can you please help me in reproducing this part?

Thanks a lot for your help in advance!

@zubair-irshad
Owner

Hi @alihankeleser,

Thanks for your interest in our work. Unfortunately, we may not be able to release the shape and appearance training part of the codebase, but I can certainly help with your questions and share some unofficial implementations here so you can reproduce it on your end seamlessly. Let me know if you have any specific questions; at a high level, here is how you can reproduce the shape and appearance training part of the codebase:

  • Our shape pre-training is similar to DeepSDF, so I highly encourage you to start there. We have released our decoder here. For the rest of training, we mainly follow this training script, which is heavily inspired by the original DeepSDF codebase, with two main changes: 1. we train all categories jointly on the NOCS ShapeNet dataset, and 2. we employ a contrastive loss to enable better regression of latent codes in the main downstream network. Please also note that the training script I shared above is an unofficial implementation and we do not plan to release it as part of our public repo, but it is the closest to what we implemented for ShAPO shape pretraining.

  • For our appearance pre-training, we start with the shape latent codes, the appearance latent codes, and points extracted from the DeepSDF decoder, and concatenate them before passing them to the RGB network as here, where rgbnet is defined here. For the loss, we use an MSE loss between the ground-truth RGB and the predicted RGB. It is important to note that we only define textures or colors at the surface, so shape pretraining is done first and appearance pretraining second. I have also shared our training script (please see train_rgbnet.py) for reference on how to train the appearance network.
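To make the shape-pretraining bullet concrete, here is a minimal, hypothetical sketch of a DeepSDF-style auto-decoder trained jointly across categories with an added contrastive loss on the latent codes. The names (`SDFDecoder`, `contrastive_loss`), network sizes, and loss weights are illustrative assumptions, not the repo's actual API:

```python
# Hypothetical sketch of ShAPO-style shape pretraining: a DeepSDF auto-decoder
# trained jointly over all categories, plus a contrastive loss on latent codes.
# All names and hyperparameters here are illustrative, not the official ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class SDFDecoder(nn.Module):
    """MLP mapping (latent code, xyz point) -> signed distance."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, latent, xyz):
        return self.net(torch.cat([latent, xyz], dim=-1))

def contrastive_loss(latents, labels, margin=1.0):
    """Pull codes of same-category shapes together, push different ones apart."""
    dists = torch.cdist(latents, latents)               # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = dists[same].pow(2).mean()                     # attract same category
    neg = F.relu(margin - dists[~same]).pow(2).mean()   # repel different category
    return pos + neg

# Auto-decoder setup: one learnable latent code per training shape,
# with all categories trained jointly (labels mark the category of each shape).
num_shapes, latent_dim = 8, 64
codes = nn.Embedding(num_shapes, latent_dim)
decoder = SDFDecoder(latent_dim)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
opt = torch.optim.Adam(list(decoder.parameters()) + list(codes.parameters()), lr=1e-3)

# One training step on toy SDF samples (points + ground-truth signed distances).
xyz = torch.randn(num_shapes, 512, 3)
sdf_gt = torch.randn(num_shapes, 512, 1).clamp(-0.1, 0.1)
z = codes(torch.arange(num_shapes))                     # (num_shapes, latent_dim)
pred = decoder(z.unsqueeze(1).expand(-1, 512, -1), xyz)
loss = F.l1_loss(pred.clamp(-0.1, 0.1), sdf_gt) + 0.1 * contrastive_loss(z, labels)
opt.zero_grad(); loss.backward(); opt.step()
```

The clamped L1 term mirrors DeepSDF's truncated-SDF regression; the contrastive term is the joint-category addition the comment above describes.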
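Likewise, the appearance-pretraining bullet could be sketched as follows: shape latents, appearance latents, and surface points from the (frozen) shape decoder are concatenated and fed to an MLP, trained with MSE against ground-truth colors. `RGBNet` and all dimensions are assumed names, standing in for the repo's actual `rgbnet`:

```python
# Hypothetical sketch of the appearance pretraining step: concatenate
# (shape latent, appearance latent, surface point) and regress RGB with MSE.
# Names and sizes are illustrative assumptions, not the repo's actual API.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class RGBNet(nn.Module):
    def __init__(self, shape_dim=64, app_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(shape_dim + app_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),   # RGB in [0, 1]
        )
    def forward(self, shape_code, app_code, surface_xyz):
        # Broadcast the per-shape codes to every sampled surface point.
        n = surface_xyz.shape[1]
        feat = torch.cat([
            shape_code.unsqueeze(1).expand(-1, n, -1),
            app_code.unsqueeze(1).expand(-1, n, -1),
            surface_xyz,
        ], dim=-1)
        return self.net(feat)

batch, n_pts = 4, 256
shape_code = torch.randn(batch, 64)                      # frozen, from shape pretraining
app_code = torch.randn(batch, 64, requires_grad=True)    # learnable appearance codes
surface_xyz = torch.randn(batch, n_pts, 3)               # points on the surface
rgb_gt = torch.rand(batch, n_pts, 3)

rgbnet = RGBNet()
opt = torch.optim.Adam([app_code] + list(rgbnet.parameters()), lr=1e-3)
pred_rgb = rgbnet(shape_code, app_code, surface_xyz)
loss = F.mse_loss(pred_rgb, rgb_gt)                      # MSE vs ground-truth RGB
opt.zero_grad(); loss.backward(); opt.step()
```

Note that only the appearance codes and `rgbnet` receive gradients here, reflecting the comment's point that colors are defined only at the surface, after shape pretraining is complete.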

Hope this helps, and please feel free to let me know if you have any additional questions.

@zubair-irshad zubair-irshad added the good first issue Good for newcomers label Jan 26, 2023
@Trulli99

Sorry to bother you, but would you mind detailing this a little more, please?

@zubair-irshad
Owner

Sure! Let me know what specific questions you have and what you need more help with.

@Trulli99

For the shape pre-training, do I just need to follow the instructions from DeepSDF and use your specs.json file?

@zubair-irshad
Owner

Apologies for the delayed response. Yes, I linked more scripts in my comment here. As you can see, the scripts borrow a lot of code from DeepSDF; we made some changes such as training one model for all categories and adding contrastive losses. Feel free to follow the links in my comment above to see those changes and to train the shape and appearance models. Hope it helps!
