Quality issue #4
Deleted my comments. It turns out my download of the paperdoll dataset was bad. The trick was to re-extract the URLs from the Chictopia SQL rather than from the
Hi, regarding the polygon annotations: you should be able to generate the masks as we had in the paper by taking the group of the annotations. This needs some code to parse through the annotations, but it should be easy to do. There is also a way to parse through the annotations to fix the problem of getting a mask for each shoe in the same image. We will release the masks as we had in the paper soon.
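The grouping idea above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes COCO-style annotation dicts with `image_id`, `category_id`, and `segmentation` fields, and the helper name `group_polygons` is hypothetical.

```python
from collections import defaultdict

def group_polygons(annotations):
    """Group COCO-style polygon annotations by (image_id, category_id).

    Each annotation is assumed to be a dict with 'image_id', 'category_id',
    and 'segmentation' (a list of flat polygon coordinate lists). Returns a
    dict mapping (image_id, category_id) -> combined list of polygons, so
    every shoe (or other garment) in one image falls into one group that
    can then be rasterized into a single mask.
    """
    groups = defaultdict(list)
    for ann in annotations:
        key = (ann["image_id"], ann["category_id"])
        groups[key].extend(ann["segmentation"])
    return groups

# Example: two footwear annotations in image 1 end up in one group
anns = [
    {"image_id": 1, "category_id": 9, "segmentation": [[0, 0, 10, 0, 10, 10]]},
    {"image_id": 1, "category_id": 9, "segmentation": [[20, 0, 30, 0, 30, 10]]},
]
grouped = group_polygons(anns)
print(len(grouped[(1, 9)]))  # 2 polygons in the footwear group for image 1
```

Rasterizing each group (e.g. with pycocotools) would then yield one mask per category per image, as described in the paper.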
Thanks to a change in Japanese legislation, Kota has kindly released the full image dataset in paperdoll, so there should be no missing images in ModaNet: https://github.com/kyamagu/paperdoll/tree/master/data/chictopia
@mxk7721 Hi, I have met a similar problem. I want to do object detection. After downloading train.json, I drew the bboxes on the images. However, almost all footwear is labeled incorrectly. Could you tell me how you dealt with this situation?
I refined the bboxes' quality using the segmentation maps. |
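One common way to do such a refinement (a sketch of the general technique, not necessarily what was done here) is to recompute each bounding box tightly from its segmentation mask; the helper below is hypothetical and uses plain lists of 0/1 rows for clarity.

```python
def bbox_from_mask(mask):
    """Compute a tight COCO-style [x, y, width, height] bbox from a binary mask.

    `mask` is a list of rows of 0/1 values. Returns None if the mask is empty.
    """
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        return None
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    return [x0, y0, x1 - x0 + 1, y1 - y0 + 1]

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(bbox_from_mask(mask))  # [1, 1, 2, 2]
```

Replacing a bad bbox with the mask-derived one fixes boxes that drifted away from the actual garment pixels.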
Hi, can you show us an example of a refinement you made?
Hi guys, I have a much better version of the annotations JSON, which you can download here. It tries to merge and move the annotations of footwear and boots by deleting duplicate shapes present in multiple annotations and by moving wrong bounding boxes to fix them. Code is in my repo: https://github.com/cad0p/maskrcnn-modanet/blob/master/maskrcnn_modanet/fix_annotations.py If you install my code you can replicate instances_all.json. Let me know! @mxk7721 @Hliang1994
@cad0p the instances_all.json object that you provided seems to have duplicate annotation IDs which are associated with different image_ids. |
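A quick way to surface that kind of conflict is to scan the annotation list for IDs that recur under different images. This is a minimal sketch assuming COCO-style `id`/`image_id` fields; the checker name is hypothetical.

```python
def find_conflicting_ids(annotations):
    """Return annotation IDs that appear with more than one image_id.

    `annotations` is a COCO-style list of dicts with 'id' and 'image_id'.
    """
    seen = {}           # annotation id -> image_id first seen
    conflicts = set()
    for ann in annotations:
        prev = seen.setdefault(ann["id"], ann["image_id"])
        if prev != ann["image_id"]:
            conflicts.add(ann["id"])
    return conflicts

anns = [
    {"id": 7, "image_id": 1},
    {"id": 7, "image_id": 2},   # same annotation id, different image
    {"id": 8, "image_id": 3},
]
print(find_conflicting_ids(anns))  # {7}
```

Running this over instances_all.json would list exactly the duplicate IDs described above.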
By that, do you mean that I inadvertently copied some annotations from one image to another? You can reconstruct the file by using the original one and applying the script in my repo to it (you can apply it several times).
The dataset is poorly labelled, and it doesn't live up to the standards claimed in the paper!