-
I guess you are using:

from keras_cv_attention_models import yolor
from keras_cv_attention_models.coco import anchors_func, data
from PIL import Image
import numpy as np

# rescale_mode is "torch" if not specified in training; use "raw01" if inputs need a `[0, 1]` value range.
mm = yolor.YOLOR_P6((512, 512, 3), num_classes=1, num_anchors=9, use_object_scores=False, rescale_mode="torch")
mm.load_weights(...)
...
# Add efficientdet anchors
anchors = anchors_func.get_anchors(input_shape=(512, 512), pyramid_levels=[3, 6])  # YOLOR_P6 uses 4 feature levels
mm.decode_predictions.anchors = anchors

# Predict and decode
image_size = 512  # match the model input shape
input_data_type = "float32"
img = Image.open(fl)  # `fl` is the path to a test image
image = np.array(img.resize((image_size, image_size)), dtype=input_data_type)
pred = mm(mm.preprocess_input(image))
bboxes, labels, confidences = mm.decode_predictions(pred)[0]

# Show result
data.show_image_with_bboxes(image, bboxes, labels, confidences)
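As a follow-up, a minimal sanity check continuing from the snippet above (the 9-anchors-per-location count and the stride = 2**level relationship are my assumptions, not taken from the library docs): the number of anchor boxes generated for pyramid_levels=[3, 6] should line up with the anchor dimension of the model output, otherwise decode_predictions cannot match boxes to predictions.

# Rough sanity check: anchor boxes for a 512x512 input over levels 3..6 (strides 8/16/32/64),
# assuming 9 anchor boxes per feature-map location.
expected = sum((512 // stride) ** 2 * 9 for stride in (8, 16, 32, 64))  # 48960
print(expected)
print(anchors.shape, pred.shape)  # first / second dimensions are expected to match `expected`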
-
I did some training with YOLOR_P6 using coco_train_script.py on my own object detection dataset. I am trying to evaluate the trained model with the following code, but I get this error message:
ValueError: Cannot assign to variable head_1_2_conv/kernel:0 due to variable shape (1, 1, 256, 18) and value shape (45, 256, 1, 1) are incompatible
The error occurs on the mm.load_weights line. Is there another way I should be doing evaluations with YOLOR models?
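(For reference, the two conflicting channel counts are consistent with two different head layouts, assuming the head's output width is num_anchors * (num_classes + 5) with object scores and num_anchors * (num_classes + 4) without; this formula is an assumption used only for illustration, as sketched below.)

# Illustrative arithmetic only (assumed head-width formula):
num_classes = 1
default_yolor_head = 3 * (num_classes + 5)       # 3 anchors, with object scores -> 18 (the freshly built model)
efficientdet_style_head = 9 * (num_classes + 4)  # 9 anchors, no object scores   -> 45 (the trained checkpoint)
print(default_yolor_head, efficientdet_style_head)  # 18 45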