-
Hi @leondgarse, I converted a custom dataset (#78) and trained with:

```sh
!CUDA_VISIBLE_DEVICES='0' python ./coco_train_script.py -p adamw -b 4 -i 512 --data_name data.json
```

Initially the following weights were loaded (and after unfreezing the backbone, 2 files were loaded):

```
Load pretrained from: .keras/models/efficientnetv2/efficientnetv1-b0-noisy_student.h5
```
Is it normal? After training, `coco_eval_script.py` produced an error:

```sh
!CUDA_VISIBLE_DEVICES='1' python ./coco_eval_script.py --data_name data.json -m /kecam/checkpoints/EfficientDetD0_512_adamw_data.json_batchsize_4_randaug_6_mosaic_0.5_color_random_hsv_position_rts_lr512_0.008_wd_0.02_anchors_mode_None_epoch_105_val_cls_acc_1.0000.h5
```

```
File "./coco_eval_script.py", line 122,
```

When trying manual inference:

```python
from PIL import Image

pretrained = '/kecam/checkpoints/EfficientDetD0_512_adamw_data.json_batchsize_4_randaug_6_mosaic_0.5_color_random_hsv_position_rts_lr512_0.008_wd_0.02_anchors_mode_None_epoch_105_val_cls_acc_1.0000.h5'
mm = efficientdet.EfficientDetD0(input_shape=(512, 512, 3), num_classes=1, pretrained=pretrained)
im = Image.open(img_file)
```

When loading weights this way, 2 sets of weights were loaded, but there were no predictions. The problem might be the score threshold. How do I pass it when making predictions? Any help is appreciated.

Edit: solved with

```python
bboxes, labels, confidences = mm.decode_predictions(preds, score_threshold=0.1)[0]
```
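For anyone hitting the same wall: the effect of `score_threshold` can be sketched without kecam at all. Decoded predictions are rows of `(left, top, right, bottom, score)`, and the threshold simply drops rows whose score is too low. The helper name and the numbers below are made up for illustration; only `mm.decode_predictions(preds, score_threshold=...)` from the thread is the real API.

```python
# Minimal sketch of score-threshold filtering, independent of kecam.
# Each prediction row is (left, top, right, bottom, score); rows whose
# score falls below the threshold are dropped, which is what a too-high
# threshold does to low-confidence detections.

def filter_by_score(preds, score_threshold=0.1):
    """Keep only predictions whose score >= score_threshold."""
    return [p for p in preds if p[4] >= score_threshold]

# Hypothetical decoded predictions: two confident boxes, one weak one.
preds = [
    (0.1, 0.1, 0.4, 0.4, 0.92),
    (0.5, 0.5, 0.9, 0.9, 0.35),
    (0.2, 0.6, 0.3, 0.7, 0.05),  # below threshold, dropped
]
kept = filter_by_score(preds, score_threshold=0.1)
print(len(kept))  # 2
```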
-
Did another round of training with augmentations turned off.

```python
preds = mm(mm.preprocess_input(imm2))
```

produced 0 bboxes. However, `preds.shape` is `[1, 49104, 5]`.

```python
dd = coco.decode_bboxes(preds[0], anchors).numpy()
```

The maximum score in `preds` (`dd[:, 4:]`) is 0.00593921.
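The sanity check above can be sketched in pure Python (made-up numbers, not kecam code): take the score column of the decoded `[num_anchors, 5]` array and look at its maximum. If even the best anchor scores far below any reasonable threshold, zero boxes is the expected outcome.

```python
# Decoded predictions shaped [num_anchors, 5] with columns
# (left, top, right, bottom, score). Values are invented for illustration.
dd = [
    [0.1, 0.1, 0.4, 0.4, 0.0021],
    [0.5, 0.5, 0.9, 0.9, 0.0059],
    [0.2, 0.6, 0.3, 0.7, 0.0008],
]

# Maximum over the score column: if this is far below the score
# threshold, decode_predictions will return no boxes at all.
max_score = max(row[4] for row in dd)
print(max_score)  # 0.0059
```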
-
Just fixed some issues:
Anyway, I think at least your prediction should work; most of these fixes are related to the evaluation process. You may also try whether reloading the entire COCO pretrained weights works.
-
Thank you! I've found the culprit. When converting from COCO, I put `category_id` into `indices_2_labels={1: "foo"}`; category_ids start from 1. Setting `indices_2_labels={0: "foo"}` fixed the training and validation issues. Checking for this situation (when the index doesn't start from 0) might help others.

3 small issues with the training script:

- There might be an option to speed up validation slightly. I got multiple messages from pycocotools like `creating index...` / `index created!`.
- When using an on-premises jupyter notebook, the epoch losses output is messed up. It's not happening in your colab notebook.

In case you decide to include COCO format conversion in your library, or if someone has similar needs, here is the working code (the `CocoDataset` body and the dict contents were truncated in the original post):

```python
import json

class CocoDataset():
    ...  # class body truncated in the original post

ds_train = CocoDataset()
ds_validation = CocoDataset()
cats = ds_train.coco.dataset['categories']
num_classes = len(indices_2_labels)
final_data = {
    ...  # truncated in the original post
}
with open('data.json', 'w') as f:
    ...  # truncated in the original post
```
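The culprit described above (1-based COCO `category_id` used directly as a class index) can be avoided with a small remapping step when converting. This is a hypothetical helper, not part of kecam; it builds a 0-based `indices_2_labels` from a COCO-style `categories` list, which may start at 1 and may be non-contiguous.

```python
# COCO category_ids start at 1 (and may be non-contiguous), while the
# training script expects 0-based class indices. This hypothetical helper
# builds both the id -> index remap and a 0-based indices_2_labels dict.

def remap_categories(categories):
    """categories: COCO-style [{'id': 1, 'name': 'foo'}, ...].
    Returns (id_to_index, indices_2_labels)."""
    ordered = sorted(categories, key=lambda c: c["id"])
    id_to_index = {cat["id"]: idx for idx, cat in enumerate(ordered)}
    indices_2_labels = {id_to_index[cat["id"]]: cat["name"] for cat in categories}
    return id_to_index, indices_2_labels

# Non-contiguous, 1-based ids collapse to contiguous, 0-based indices.
cats = [{"id": 1, "name": "foo"}, {"id": 3, "name": "bar"}]
id_to_index, indices_2_labels = remap_categories(cats)
print(indices_2_labels)  # {0: 'foo', 1: 'bar'}
```

Every annotation's `category_id` would then be passed through `id_to_index` before being written into `data.json`, so labels always start at 0.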
Environment: tensorflow 2.10.0, jupyter 1.0.0
…