Ground truth boxes for evaluation? #7

Open
Moqeet opened this issue Sep 9, 2021 · 2 comments


Moqeet commented Sep 9, 2021

Hi Bowen,

I really like that you made a metric that focuses on boundary quality rather than overall IoU. I was applying your method to my models and was not sure exactly how to use the ground-truth boxes for evaluation (Section 6.2, Table 4). My intuition is that instead of taking the regions predicted by the RPN, you use the ground-truth boxes' regions and apply an RoIAlign operation to get the regions of interest. Am I correct?

Also, can you suggest a clean way of doing this?

Thanks in advance.

Best,
Hamd ul Moqeet Riaz

@bowenc0221 (Owner)

You do not necessarily need to use ground-truth boxes for evaluation; you can simply calculate the Boundary IoU between your predicted masks and the ground-truth masks.
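As a rough illustration (not necessarily the exact code in this repo), the per-mask computation boils down to extracting the boundary region of each binary mask and taking the IoU of the two boundary regions. The sketch below assumes same-size binary (0/1) uint8 masks and the default dilation ratio of 0.02 × image diagonal:

```python
import cv2
import numpy as np

def mask_to_boundary(mask, dilation_ratio=0.02):
    """Return the boundary region of a binary (0/1) uint8 mask:
    the mask pixels within dilation_ratio * image diagonal of its contour."""
    h, w = mask.shape
    dilation = max(1, int(round(dilation_ratio * np.sqrt(h ** 2 + w ** 2))))
    # Pad by 1 px so erosion treats the image border as background
    padded = cv2.copyMakeBorder(mask, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
    kernel = np.ones((3, 3), dtype=np.uint8)
    eroded = cv2.erode(padded, kernel, iterations=dilation)[1:h + 1, 1:w + 1]
    # Boundary = mask pixels removed by the erosion
    return mask - eroded

def boundary_iou(gt_mask, pred_mask, dilation_ratio=0.02):
    """Boundary IoU between a ground-truth mask and a predicted mask."""
    gt_b = mask_to_boundary(gt_mask, dilation_ratio)
    pred_b = mask_to_boundary(pred_mask, dilation_ratio)
    intersection = np.count_nonzero(gt_b & pred_b)
    union = np.count_nonzero(gt_b | pred_b)
    return intersection / union if union > 0 else 0.0
```

You can then average `boundary_iou` over matched prediction/ground-truth pairs, or plug the boundary regions into your existing AP evaluation in place of the full masks.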

The reason we use ground-truth boxes in our paper is only for analysis purposes: many factors besides mask quality contribute to the performance of an instance segmentation algorithm, and localization and classification errors are also penalized by the AP metric. In order to study the segmentation (mask) quality on its own, we feed ground-truth boxes to the mask branch of Mask R-CNN to rule out the effect of localization and classification errors.
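To make that concrete, here is a rough sketch of how such an analysis can be set up with a Detectron2-style Mask R-CNN. It is only an illustration of the idea (not the exact script used for Table 4), and it assumes a `GeneralizedRCNN` model in eval mode and a dataloader whose batches keep the ground-truth instances transformed to the network input resolution:

```python
import torch
from detectron2.structures import Instances

@torch.no_grad()
def masks_from_gt_boxes(model, batched_inputs):
    """Run only the mask branch of a Detectron2 Mask R-CNN on ground-truth boxes."""
    images = model.preprocess_image(batched_inputs)
    features = model.backbone(images.tensor)

    proposals = []
    for inp, image_size in zip(batched_inputs, images.image_sizes):
        gt = inp["instances"].to(model.device)
        inst = Instances(image_size)
        inst.pred_boxes = gt.gt_boxes        # ground-truth boxes replace RPN proposals
        inst.pred_classes = gt.gt_classes    # ground-truth classes select the mask channel
        inst.scores = torch.ones(len(gt), device=model.device)
        proposals.append(inst)

    # forward_with_given_boxes applies RoIAlign on the provided boxes and runs
    # the mask head, skipping the box head and its classification/regression.
    return model.roi_heads.forward_with_given_boxes(features, proposals)
```

The returned instances carry RoI-space `pred_masks`; Detectron2's `detector_postprocess` can paste them back onto the full image before computing Boundary IoU against the ground-truth masks.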

Regarding your question: as I mentioned earlier, if you simply want to use Boundary IoU to evaluate your method, there is no need to use ground-truth boxes. You can simply take the predicted masks produced from the RPN boxes.


Moqeet commented Sep 12, 2021

Thanks for your answer. The model I trained generates more false-positive and false-negative predictions than the baseline, but its mask quality looks better. That is why I wanted to test exactly what you mentioned in your answer, i.e. the mask quality independent of classification and localization errors.
I think I already have my answer: you feed ground-truth boxes instead of predicted boxes to the mask branch during evaluation. I will do the same for my model.
Once again, thanks for your help.
