Problem with reproducing results on CLEVR / CLEVRTEX #1
Comments
ok, thanks, we'll try and report back.
Hello, please do not consider my previous comment. It seems that the issue is simply caused by a bug in the handling of the background complexity flag for simple backgrounds in supervised mode, which is the case for CLEVR. I will soon update the repo to correct this bug.
Hello, thank you for the update. Having tried the updated code, we were able to reproduce the results for CLEVR. However, for CLEVRTEX with curriculum training, the mIoU does not surpass 36%. Could you please look into that as well?
Hello, thanks for the feedback. Could you please send me some samples of the images generated during training (stored in the directory defined by the "training_images_output_directory" path of the MF_config config file), for example at epochs 0, 4, 8, 20 and 60? Regards,
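For reference, a small hypothetical helper to gather such samples; the epoch-based filename pattern is an assumption and should be adapted to the actual layout of the output directory:

```python
# Hypothetical helper to collect a few generated images at selected epochs
# from the directory configured as "training_images_output_directory".
# The "epoch_<n>" filename pattern is an assumption -- adjust the glob to
# whatever naming the training script actually uses.
from pathlib import Path
import shutil

output_dir = Path("/path/to/training_images_output_directory")
dest = Path("samples_to_share")
dest.mkdir(exist_ok=True)

for epoch in (0, 4, 8, 20, 60):
    for img in sorted(output_dir.glob(f"*epoch_{epoch}_*"))[:3]:
        shutil.copy(img, dest / img.name)
```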
You may also check that the file paths in the MF_config.py file are updated in a way that is consistent with the GitHub template. For example, the key "train_dataset_background_path" should be associated with the path of the RGBA image folder generated by the background model, not the RGB image folder.
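For illustration, a minimal sketch of how these entries might look in MF_config.py; the paths below are placeholders, only the two key names come from this thread, and the actual file may organize them differently (e.g. as dictionary entries rather than plain assignments):

```python
# Hypothetical excerpt of MF_config.py -- paths are placeholders, adapt them
# to the GitHub template of the repository.

# Must point to the RGBA images produced by the background model,
# not to the original RGB image folder.
train_dataset_background_path = "/data/clevr/background_model_output/rgba/"

# Directory where images generated during training are written;
# useful for inspecting progress at epochs 0, 4, 8, 20, 60.
training_images_output_directory = "/data/clevr/training_outputs/images/"
```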
Hi Bruno, thank you for your message. Having checked everything, the model appears to work appropriately, but we still fail to reach the accuracies reported in the paper, i.e. 79.58 ± 0.54 for CLEVRTEX and 90.27 ± 0.20 for CLEVR. For CLEVR we get close to 87 and for CLEVRTEX close to 72.
Thanks for sharing the code of the article.
We tried re-running the experiments, first on CLEVR.
We ran it 3 times using the provided configuration.
Each time, after background training, during segmentation training the mIoU increases to at most 15% and plateaus there (compared to the 90.2% reported in the article).
Any idea of what we might need to change to reproduce the results?