Questions regarding mIoU and accuracy #4
Hello,

Hi, @xh-liu
@ZzzackChen That's weird. I just tested the model again and it's still 82.3 pixel accuracy. I used the model and code from https://github.com/fyu/drn. The calculation of pixel accuracy is not provided in that code. How did you implement it?
@xh-liu Thanks a lot! Now I can reproduce the results :D
@ZzzackChen If you ignore the 255 labels, the result will be 93, as you calculated. If you count 255 in, the result will be 82.3. To keep consistent with the SPADE paper (https://arxiv.org/pdf/1903.07291.pdf), I chose the second calculation method for the Cityscapes dataset. For the COCO-Stuff and ADE datasets, pixel accuracy calculation is included in the evaluation code, and I used the calculation method in the original code.
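For anyone reproducing this, here is a minimal sketch of the two conventions described above, assuming `pred` and `gt` are HxW arrays of Cityscapes train IDs with 255 marking ignored pixels; it is an illustration, not the authors' actual evaluation script:

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray, count_255: bool) -> float:
    """Pixel accuracy under the two conventions discussed in this thread."""
    if not count_255:
        # Convention 1: drop pixels labeled 255 from both the numerator and
        # the denominator (the ~93 number above).
        valid = gt != 255
        return float((pred[valid] == gt[valid]).mean())
    # Convention 2 (used here, matching SPADE): keep 255 pixels in the
    # denominator; predictions never equal 255, so every such pixel counts
    # as an error (the ~82.3 number above).
    return float((pred == gt).mean())
```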
@xh-liu Hi, I found that the FID for the Cityscapes dataset in the original paper is 71.8 instead of the 53.53 you report. What explains this weird result?
@wjbKimberly The FID for Cityscapes reported in our paper is 54.3; 71.8 is the FID score reported in the SPADE paper (https://arxiv.org/abs/1903.07291).
@justin-hpcnt Do you know how to train on 8 GPUs? Thanks a lot. |
@xh-liu How do you count 255 in the result when choosing the second calculation (DRN) method for the Cityscapes dataset? Thanks.
Hi,
Thank you for sharing the code and for replying to my previous question!
While reproducing the metrics, I have some questions:
I'm referring to the SPADE issue to implement the evaluation code. Did you use the same repo and pre-trained weights for evaluation?
If so, regarding the COCO-Stuff dataset, the original DeepLab v2 gives 66.8 pixel accuracy and a 39.1 mIoU score on the ground-truth validation images. However, CC-FPSE reaches 70.7 pixel accuracy and a 41.6 mIoU score, which seems weird. I think the difference might come from the different input size to the DeepLab model. How did you feed inputs to the DeepLab network (for example, using the 256x256 image directly, or upsampling the 256x256 image to 321x321 with bilinear interpolation)?
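In case it helps, here is a hypothetical sketch of the second option mentioned above (bilinearly upsampling a 256x256 image to DeepLab v2's 321x321 input); the `deeplab` module and the normalization convention are assumptions, not confirmed details of the authors' pipeline:

```python
import torch
import torch.nn.functional as F

def segment_for_eval(image_256: torch.Tensor, deeplab: torch.nn.Module) -> torch.Tensor:
    """Run a (hypothetical) pre-trained DeepLab v2 on a batch of 256x256 images.

    image_256: (N, 3, 256, 256) tensor, already normalized as the model expects.
    Returns:   (N, 256, 256) tensor of predicted label IDs.
    """
    # Upsample to the 321x321 crop size DeepLab v2 is commonly trained with.
    x = F.interpolate(image_256, size=(321, 321), mode='bilinear', align_corners=False)
    with torch.no_grad():
        logits = deeplab(x)  # (N, num_classes, h, w)
    # Resize logits back to the label resolution before the argmax so the
    # prediction aligns pixel-for-pixel with the 256x256 ground truth.
    logits = F.interpolate(logits, size=(256, 256), mode='bilinear', align_corners=False)
    return logits.argmax(dim=1)
```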