
If we use a custom coco type dataset, what changes should we make? #10

Open
sure7018 opened this issue Apr 18, 2022 · 6 comments

sure7018 commented Apr 18, 2022

If we use a custom coco type dataset, what changes should we make?

sure7018 changed the title from "If you use a custom coco type dataset, what changes should you make?" to "If we use a custom coco type dataset, what changes should you make?" on Apr 18, 2022
sure7018 changed the title from "If we use a custom coco type dataset, what changes should you make?" to "If we use a custom coco type dataset, what changes should we make?" on Apr 18, 2022
songhwanjun (Collaborator) commented

If your dataset uses the same annotation format as COCO, there is nothing to do except write a new data loader.
We have not tried other data, so please let me know if any problems occur.
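To illustrate what "making a new data loader" involves at its simplest, here is a hedged, stdlib-only sketch of reading COCO-style annotations and grouping them per image. A real loader (such as the one in ViDT's datasets/coco.py) would also decode images and apply transforms; the function name and file layout here are illustrative assumptions, not part of the repository.

```python
# Minimal sketch of reading a COCO-format annotation file with the
# standard library only. Assumes the usual COCO json keys: "images",
# "annotations" (each with an "image_id"), and "categories".
import json
from collections import defaultdict

def load_coco_annotations(ann_file):
    """Return a dict mapping image_id -> list of annotation dicts."""
    with open(ann_file) as f:
        coco = json.load(f)
    by_image = defaultdict(list)
    for ann in coco["annotations"]:
        by_image[ann["image_id"]].append(ann)
    return dict(by_image)
```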

sure7018 (Author) commented Apr 21, 2022

> If your dataset uses the same annotation format as COCO, there is nothing to do except write a new data loader. We have not tried other data, so please let me know if any problems occur.

My dataset is in COCO format, but it has no mask annotations.
I modified num_classes in "methods/vidt/detector.py" and "methods/vidt_wo_neck/detector.py", changing it to the number of classes in my dataset + 1. I tried training both with and without the pre-trained model.
However, the following error is reported:

Load the backbone in the given path
number of params: 37006090
num of total trainable prams:37006090
Resolution: shortest at most 800
loading annotations into memory...
Done (t=0.15s)
creating index...
index created!
Resolution: shortest at most 800
800
loading annotations into memory...
Done (t=0.03s)
creating index...
index created!

train: 1451 , # val 362

load a checkpoint from swin_tiny_patch4_window7_224.pth
Start training
Traceback (most recent call last):
  File "main.py", line 314, in <module>
    main(args)
  File "main.py", line 208, in main
    args.clip_max_norm, n_iter_to_acc=args.n_iter_to_acc, print_freq=args.print_freq)
  File "/home1/lws/vidt/engine.py", line 51, in train_one_epoch
    for samples, targets in metric_logger.log_every(data_loader, print_freq, header):
  File "/home1/lws/vidt/util/misc.py", line 223, in log_every
    for obj in iterable:
  File "/home/user/anaconda3/envs/lou/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/user/anaconda3/envs/lou/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
    return self._process_data(data)
  File "/home/user/anaconda3/envs/lou/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
    data.reraise()
  File "/home/user/anaconda3/envs/lou/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/user/anaconda3/envs/lou/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/user/anaconda3/envs/lou/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/user/anaconda3/envs/lou/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home1/lws/vidt/datasets/coco.py", line 41, in __getitem__
    img, target = self.prepare(img, target)
  File "/home1/lws/vidt/datasets/coco.py", line 89, in __call__
    masks = convert_coco_poly_to_mask(segmentations, h, w)
  File "/home1/lws/vidt/datasets/coco.py", line 49, in convert_coco_poly_to_mask
    rles = coco_mask.frPyObjects(polygons, height, width)
  File "pycocotools/_mask.pyx", line 293, in pycocotools._mask.frPyObjects
IndexError: list index out of range

After reading your article, I ran the same experiment on DETR, Deformable DETR, and YOLOS with my dataset, using the same modifications as above, and they trained normally. The above error did not occur. What is the reason?

My training command is:
python -m torch.distributed.launch --nproc_per_node=2 --nnodes=1 --use_env main.py --method vidt --backbone_name swin_tiny --coco_path data/coco --output_dir output/ --pre_trained swin_tiny_patch4_window7_224.pth
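For context on the traceback: pycocotools' frPyObjects raises this IndexError when it is handed an empty polygon list, which matches a COCO file whose annotations have no (or empty) "segmentation" fields. One hedged workaround, assuming a DETR-style preparation step like the one the traceback shows, is to filter such annotations out before mask conversion; the function below is an illustrative helper, not code from the ViDT repository.

```python
# Hedged sketch: drop annotation dicts whose "segmentation" field is
# missing or empty, so convert_coco_poly_to_mask never sees an empty
# polygon list. Field names follow the standard COCO annotation format;
# where exactly to call this inside datasets/coco.py is an assumption.
def drop_annotations_without_masks(annotations):
    """Keep only annotations that carry a non-empty 'segmentation'."""
    return [ann for ann in annotations if ann.get("segmentation")]
```

Alternatively, if the codebase exposes a DETR-style flag for building the dataset without masks (often named return_masks), disabling it would skip the mask-conversion path entirely.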

PatelKrutarth commented

@sure7018: Were you able to do the training?

sure7018 (Author) commented May 3, 2022

> @sure7018: Were you able to do the training?

Not yet

ananthu-aniraj commented

Hi, I was able to do the training, but I am having some issues with inference. Can someone explain what the "vectors" field in the outputs contains for the ViDT+ models? I assume it is the instance segmentation mask; however, it is stored with shape [batch_size, num_queries, 256], which doesn't make sense to me. Is it a 256-point polygon?

PatelKrutarth commented

Hi @ananthu-aniraj, can you please describe the edits and steps you followed to train on a COCO-format dataset without segmentation labels? Or even if you were able to do it with segmentation mask labels. Thank you very much!
