
IndexError: index 1 is out of bounds for dimension 0 with size 1 #2

Closed
hieulv3 opened this issue Oct 18, 2024 · 6 comments

hieulv3 commented Oct 18, 2024

Thank you for your efforts,

[screenshot of the error traceback]

This issue occurs when I train the model with a custom dataset. I followed your guide.

Peterande added a commit that referenced this issue Oct 18, 2024
Fix class mapping. #2
Peterande (Owner) commented

It looks like you're trying to train on a new dataset using the pretrained weights (with fewer classes than both COCO and Objects365). Currently, the code maps the Objects365 classes to COCO during fine-tuning if the categories differ. If you want to map to your own custom classes, you can customize self.obj365_ids. Also, I've just updated the code in src/solver/_solver.py: mismatched class heads are now skipped automatically. You can choose the appropriate solution for your situation.
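For readers hitting the same error, here is a minimal sketch of the skip-mismatched-heads idea, assuming a standard PyTorch state_dict checkpoint. The helper name and the "model" key are illustrative assumptions, not the repo's actual API; the real logic lives in src/solver/_solver.py:

```python
import torch

def load_pretrained_skip_mismatched(model, ckpt_path):
    """Load pretrained weights, skipping any tensor whose shape differs
    from the current model (e.g. class heads sized for COCO/Objects365
    when your dataset has fewer classes). Hypothetical helper."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = ckpt.get("model", ckpt)  # assumed checkpoint layout
    model_state = model.state_dict()
    # Keep only tensors that exist in the model and match in shape.
    keep = {k: v for k, v in state.items()
            if k in model_state and v.shape == model_state[k].shape}
    skipped = sorted(set(state) - set(keep))
    model.load_state_dict(keep, strict=False)
    print(f"loaded {len(keep)} tensors, skipped {len(skipped)} mismatched keys")
    return model
```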


hieulv3 commented Oct 18, 2024

> It looks like you're trying to train on a new dataset using the pretrained weights (with fewer classes than both COCO and Objects365). Currently, the code maps the Objects365 classes to COCO during fine-tuning if the categories differ. If you want to map to your own custom classes, you can customize self.obj365_ids. Also, I've just updated the code in src/solver/_solver.py: mismatched class heads are now skipped automatically. You can choose the appropriate solution for your situation.

Thank you. Do these models support training on huge images (img_size: 5300x3300)? I got this error after fixing the one above:
[screenshot of the PIL decompression bomb error]

Peterande (Owner) commented

It seems that this issue is related to the PIL library rather than the model itself.

During training, the model randomly resizes all images to a range of 400x400 to 800x800 to improve generalization. However, during inference, as long as there is enough GPU memory, there are no specific size restrictions. The problem arises from PIL's limit on image size, as the error log clearly states.

I recommend checking the script located at tools/dataset/resize_obj365.py for preprocessing large images. This script can help you resize the images before training to avoid the decompression bomb error.
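For reference, here is a rough sketch of that preprocessing step. This is not the actual contents of tools/dataset/resize_obj365.py; the paths, the 1600 px cap, and the JPEG-only glob are illustrative assumptions:

```python
from pathlib import Path
from PIL import Image

# Allow opening very large source images; we shrink them immediately,
# so the decompression-bomb guard is not needed during preprocessing.
Image.MAX_IMAGE_PIXELS = None

def resize_longest_side(src_dir: str, dst_dir: str, max_side: int = 1600) -> None:
    """Downscale every image so its longest side is at most `max_side`,
    preserving aspect ratio. A stand-in for what a preprocessing script
    like tools/dataset/resize_obj365.py does."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        with Image.open(path) as img:
            scale = max_side / max(img.size)
            if scale < 1.0:
                new_size = (round(img.width * scale), round(img.height * scale))
                img = img.resize(new_size, Image.BILINEAR)
            img.save(dst / path.name, quality=95)
```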


hieulv3 commented Oct 18, 2024

> It seems that this issue is related to the PIL library rather than the model itself.
>
> During training, the model randomly resizes all images to a range of 400x400 to 800x800 to improve generalization. However, during inference, as long as there is enough GPU memory, there are no specific size restrictions. The problem arises from PIL's limit on image size, as the error log clearly states.
>
> I recommend checking the script located at tools/dataset/resize_obj365.py for preprocessing large images. This script can help you resize the images before training to avoid the decompression bomb error.

Yes, this issue is related to PIL. So I set MAX_IMAGE_PIXELS = None to bypass the check in PIL.Image's _decompression_bomb_check. Thank you and have a nice day.
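For anyone else, the same bypass does not require editing the installed library: Pillow exposes MAX_IMAGE_PIXELS as a documented module attribute, and the internal _decompression_bomb_check becomes a no-op when it is None. Only disable the check for images you trust, since it exists to block maliciously large files:

```python
from PIL import Image

# Disable (or raise) Pillow's decompression-bomb limit before opening
# large images. Setting the module attribute is equivalent to patching
# _decompression_bomb_check, without touching the library source.
Image.MAX_IMAGE_PIXELS = None  # or a large integer, e.g. 5300 * 3300
```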

hieulv3 closed this as completed Oct 18, 2024
Peterande (Owner) commented

You're welcome. Have a nice day~

Peterande (Owner) commented

An optional custom fine-tuning tutorial has been added: eed6df6
