Questions regarding mIoU, accuracy, FID #39
Hello,
Thank you so much for your quick response.
Hello! I resized the labels to match the generated images (512x256), and got 53.5 mIoU and 91.0 accuracy. A question for the authors: what sizes do you use for the labels and the generated images during evaluation?
@ZzzackChen @SolidShen Hi, guys. I have tested the pretrained models (Cityscapes and ADE20k) and got 64.07 and 43.02 mIoU, respectively. I downsampled the labels using nearest-neighbor interpolation as the authors suggested (512x256 for Cityscapes and 256x256 for the others). However, I am confused by these unexpectedly high scores compared with the paper, especially for ADE20k. @taesungp Would you please present more details about the evaluation?
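For anyone reimplementing this, here is a minimal sketch of the procedure being described (an assumption, not the authors' released code): downsample the ground-truth labels with nearest-neighbor interpolation to the generator's output size, run a segmentation network on the generated images, and accumulate a confusion matrix to derive pixel accuracy and mIoU. `NUM_CLASSES` and the helper names are placeholders.

```python
# Hypothetical evaluation helpers; not the authors' code.
import numpy as np
from PIL import Image

NUM_CLASSES = 19  # Cityscapes train IDs; ADE20k would use 150

def resize_label(label_path, size=(512, 256)):
    """Nearest-neighbor resize so label values stay valid class IDs."""
    label = Image.open(label_path)
    return np.array(label.resize(size, Image.NEAREST))

def update_confusion(conf, pred, gt, num_classes=NUM_CLASSES):
    """Accumulate a (num_classes x num_classes) confusion matrix."""
    valid = (gt >= 0) & (gt < num_classes)  # drop void/ignore pixels
    idx = num_classes * gt[valid].astype(np.int64) + pred[valid]
    conf += np.bincount(idx, minlength=num_classes ** 2).reshape(
        num_classes, num_classes)
    return conf

def miou_and_accuracy(conf):
    """Derive the two numbers quoted in this thread from the matrix."""
    pixel_acc = np.diag(conf).sum() / conf.sum()
    iou = np.diag(conf) / (conf.sum(0) + conf.sum(1) - np.diag(conf))
    return np.nanmean(iou), pixel_acc
```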
Hi, the link for "baseline-resnet101-upernet" is invalid now; could you share this model with me?
I have successfully downloaded it using the information from https://github.com/CSAILVision/semantic-segmentation-pytorch/blob/master/demo_test.sh and https://github.com/CSAILVision/semantic-segmentation-pytorch/blob/master/config/ade20k-resnet101-upernet.yaml
Hi,
Thank you for sharing this awesome code!
Based on this issue, I understand that you are not going to release the evaluation code, so I'm working on reimplementing it myself.
I have the following questions:
When computing the FID scores, do you compare the generated images to the original images or to cropped images (the same size as the generated ones)? (A sketch of one possible setup follows this list.)
What image sizes did you use for evaluation? Do you generate higher-resolution images for evaluation, or just use the default sizes (512x256 for Cityscapes, and 256x256 for the others)?
What pre-trained segmentation models and code bases do you use for each dataset? Based on the paper, I assume these are the ones you use. Could you please confirm them?
deeplabv2_resnet101_msc-cocostuff164k-100000.pth
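Regarding the FID question above, here is a hedged sketch of one way to set it up, assuming the commonly used pytorch-fid package (it is not confirmed that the authors used it) and placeholder directory names: resize the real validation images to the generated resolution so both sets match, then run the FID tool on the two directories.

```python
# Hypothetical FID setup; the authors' exact protocol is the open question here.
import os
from PIL import Image

def resize_dir(src, dst, size=(512, 256)):
    """Resize real images to the generated resolution (bicubic is an assumption)."""
    os.makedirs(dst, exist_ok=True)
    for name in os.listdir(src):
        img = Image.open(os.path.join(src, name)).convert("RGB")
        img.resize(size, Image.BICUBIC).save(os.path.join(dst, name))

resize_dir("val_images", "val_images_resized")
# With pytorch-fid installed (pip install pytorch-fid), compare the directories:
#   python -m pytorch_fid val_images_resized generated_images
```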
Thanks in advance.
Best,
Godo