
Questions regarding mIoU, accuracy, FID #39

Closed
Godo1995 opened this issue Apr 28, 2019 · 7 comments

Comments

@Godo1995

Hi,

Thank you for sharing this awesome code!
Based on this issue, I understand that you are not going to release the evaluation code, so I'm reimplementing it myself.
I have the following questions:

  1. When computing the FID scores, do you compare the generated images against the original images or against cropped images (the same size as the generated ones)?

  2. What image sizes do you use for evaluation? Do you generate higher-resolution images for evaluation, or just use the default sizes (512x256 for Cityscapes and 256x256 for the others)?

  3. What pre-trained segmentation models and code base do you use for each dataset? Based on the paper, I assume these are the ones you use. Could you please confirm?

  4. When you evaluate mIoU and accuracy, do you upsample the images or downsample the labels, and how do you interpolate them?

Thanks in advance.

Best,
Godo

@taesungp
Contributor

Hello,

  1. I did not crop; I resized the whole image to the same size as the generated ones (usually 256x256).
  2. I used the default sizes (512x256 for Cityscapes and 256x256 for the others).
  3. You are correct about the models.
  4. I downsampled the labels using nearest-neighbor interpolation.
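To make points 1 and 2 concrete, one way to prepare the real images before computing FID is to resize each whole image (no cropping) to the generated resolution. This is only a sketch under that interpretation; the directory names, the synthetic demo image, and the use of the pytorch-fid package are assumptions, not something the authors confirmed:

```python
from pathlib import Path
from PIL import Image

def resize_real_set(src: Path, dst: Path, size=(256, 256)):
    """Resize every real image to the generated resolution (whole image, no crop)."""
    dst.mkdir(parents=True, exist_ok=True)
    for p in sorted(src.glob("*.png")):
        Image.open(p).convert("RGB").resize(size, resample=Image.BICUBIC).save(dst / p.name)

# Demo with a synthetic image so the sketch runs end to end (hypothetical paths).
src, dst = Path("real_demo"), Path("real_demo_256")
src.mkdir(exist_ok=True)
Image.new("RGB", (2048, 1024), "gray").save(src / "demo_000000.png")
resize_real_set(src, dst)
assert Image.open(dst / "demo_000000.png").size == (256, 256)

# FID could then be computed against the generated folder, e.g. with pytorch-fid:
#   python -m pytorch_fid real_demo_256 generated_images
```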

@Godo1995
Author

Thank you so much for your quick response.

@SolidShen

Hello!
Can you achieve the mIoU score the paper reports (62.3 on Cityscapes)? I followed your guide and only got 48.7 mIoU on the Cityscapes val set.

@zkchen95

zkchen95 commented Jul 25, 2019

> Can you achieve the mIoU score the paper reports (62.3 on Cityscapes)? I followed your guide and only got 48.7 mIoU on the Cityscapes val set.

When I resize the labels to the generated photo size (512x256), the result is 53.5 mIoU and 91.0 accuracy; when I resize both the labels and the generated photos to 1024x512, the result is 58 mIoU and 92.9 accuracy.

A question for the author: what sizes do you use for the labels and the generated photos during evaluation?
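Since these scores shift with the evaluation size, it may help to pin down how mIoU itself is computed. A common protocol accumulates a confusion matrix over the whole val set and averages per-class IoU at the end; below is a minimal sketch of that protocol (the class count, ignore index, and array shapes are illustrative, not taken from the repo):

```python
import numpy as np

NUM_CLASSES = 19  # Cityscapes train classes; illustrative

def update_confusion(conf, pred, gt, ignore_index=255):
    # conf is a (C, C) matrix; rows = ground truth, cols = prediction.
    mask = gt != ignore_index
    idx = gt[mask].astype(np.int64) * NUM_CLASSES + pred[mask].astype(np.int64)
    conf += np.bincount(idx, minlength=NUM_CLASSES**2).reshape(NUM_CLASSES, NUM_CLASSES)
    return conf

def miou(conf):
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    valid = union > 0  # skip classes absent from both gt and pred
    return (inter[valid] / union[valid]).mean()

# Sanity check: a perfect prediction yields mIoU == 1.0
conf = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
gt = np.random.randint(0, NUM_CLASSES, size=(256, 512))
conf = update_confusion(conf, gt, gt)
assert abs(miou(conf) - 1.0) < 1e-9
```

Averaging over the accumulated matrix, rather than per image, avoids biasing the score toward images where rare classes happen to be absent.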

@ShihuaHuang95

@ZzzackChen @SolidShen Hi, guys. I have tested the pretrained models on Cityscapes and ADE20K and got 64.07 and 43.02 mIoU, respectively. I downsampled the labels using nearest-neighbor interpolation as the authors suggested (512x256 for Cityscapes and 256x256 for the others). However, I am confused that my scores are higher than those in the paper, especially for ADE20K. @taesungp would you share more details of the evaluation?
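For anyone reproducing the label downsampling mentioned above, the key detail is the resampling mode: nearest-neighbor keeps every pixel a valid class ID, whereas bilinear or bicubic would blend neighboring IDs into meaningless intermediate values. A minimal sketch with PIL, using a hypothetical 4-class label map:

```python
import numpy as np
from PIL import Image

# Hypothetical label map with discrete class IDs 0..3 (not from the repo).
label = Image.fromarray(np.arange(4, dtype=np.uint8).repeat(256).reshape(32, 32))

# Image.NEAREST preserves the discrete class IDs under downsampling.
small = label.resize((16, 16), resample=Image.NEAREST)

ids = set(np.unique(np.asarray(small)).tolist())
assert ids.issubset({0, 1, 2, 3})
```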

@fido20160817

fido20160817 commented May 29, 2023

Hi,


Hi, the link for "baseline-resnet101-upernet" is invalid now. Could you share this model with me?
