Yolov8 rejecting labels during model.training #94
Hello! Thank you for providing detailed information about your issue. It looks like you're encountering a problem with the label format during the training phase of your YOLOv8-pose model. Let's address your concerns step by step.

Label Format Issue
The error message indicates that your labels require 15 columns each, but the provided labels do not match this requirement. For pose estimation, the label format should follow the structure outlined in the Ultralytics documentation (https://docs.ultralytics.com/datasets/pose/):

Format with Dim = 2:
<class-index> <x> <y> <width> <height> <px1> <py1> <px2> <py2> ... <pxn> <pyn>

Format with Dim = 3:
<class-index> <x> <y> <width> <height> <px1> <py1> <p1-visibility> <px2> <py2> <p2-visibility> ... <pxn> <pyn> <pn-visibility>

Given your kpt_shape: [5, 2], each label should have:
* 1 class index
* 4 bounding box coordinates (x, y, width, height)
* 5 keypoints, each with 2 coordinates (px, py)

This totals 1 + 4 + (5 × 2) = 15 columns per label. Your sample labels include a visibility flag after each keypoint, which adds 5 extra columns and causes the mismatch.

Sample Label Correction
Your sample labels should look like this:
0 0.680863 0.595223 0.20369 0.0726488 0.579017 0.631547 0.687827 0.609047 0.748973 0.590758 0.782708 0.558898 0.6325 0.624151
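If you want to double-check the whole dataset, a minimal script along these lines can flag any file with the wrong width (the labels directory path is an assumption based on your dataset layout; adjust it as needed):

# Minimal sketch: count the columns in each YOLO pose label file.
# Assumes labels live under /content/datasets/echo-pose/labels/train2017
# and that kpt_shape: [5, 2] implies 1 + 4 + 5*2 = 15 columns per line.
from pathlib import Path

LABEL_DIR = Path("/content/datasets/echo-pose/labels/train2017")  # assumed path
EXPECTED = 1 + 4 + 5 * 2  # class + bbox + 5 keypoints * 2 dims = 15

for label_file in sorted(LABEL_DIR.glob("*.txt")):
    for line_no, line in enumerate(label_file.read_text().splitlines(), start=1):
        ncols = len(line.split())
        if line.strip() and ncols != EXPECTED:
            print(f"{label_file.name}:{line_no} has {ncols} columns, expected {EXPECTED}")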
YAML Configuration
Ensure your YAML files correctly reflect the dataset structure and keypoints configuration. Here's a simplified version of your data5.yaml:
# Data
path: /content/datasets/echo-pose
train: images/train2017
val: images/val2017
# Keypoints
kpt_shape: [5, 2] # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
flip_idx: [0, 1, 2, 3, 4] # no flipping
# Classes
names:
0: Class0
1: Class1
2: Class2
3: Class3
4: Class4

Next Steps
1. Verify Label Format: Ensure all your label files conform to the required format.
2. Update Packages: Make sure you are using the latest versions of torch and ultralytics. You can update them using:
pip install --upgrade torch ultralytics
You can confirm the installed versions with the short check below.
3. Minimum Reproducible Example: If the issue persists, please provide a minimum reproducible code example. This helps us investigate the problem more effectively. You can find guidelines here: https://docs.ultralytics.com/help/minimum_reproducible_example
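For reference, a minimal way to confirm the installed versions after upgrading (no project-specific assumptions) is:

# Print the installed torch and ultralytics versions to confirm the upgrade took effect.
import torch
import ultralytics

print("torch:", torch.__version__)
print("ultralytics:", ultralytics.__version__)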
Example Code
Here's a snippet to help you get started with training after ensuring your labels and configurations are correct:
from ultralytics import YOLO
# Load a pretrained model
model = YOLO('yolov8n-pose.pt')
# Train the model
results = model.train(data='/content/YAML/data5.yaml', epochs=100, imgsz=640)

Feel free to reach out if you have any more questions or need further assistance. Happy training! 🚀
Hi Paula,
Thanks again for the excellent guidance. I will look into these issues today and update the comments.
Best
Paul
Hi Paul,

Thank you for your kind words and for your continued efforts in resolving this issue. 😊 To ensure we can assist you effectively, please verify the points outlined in my previous comment.

Here's a quick example to get you started with training after ensuring your labels and configurations are correct:
from ultralytics import YOLO
# Load a pretrained model
model = YOLO('yolov8n-pose.pt')
# Train the model
results = model.train(data='/content/YAML/data5.yaml', epochs=100, imgsz=640)

Feel free to reach out if you have any more questions or need further assistance. The YOLO community and the Ultralytics team are here to help!
CASE: Yolov8-pose rejecting labels during training
QUESTION: Where does yolov8 determine how many columns are required for each label?
ERROR at model.train phase:
train: WARNING⚠️ /content/datasets/echo-pose/images/train2017/frame_000151.PNG: ignoring corrupt image/label: labels require 15 columns each
SAMPLE LABELS FROM train2017:
0 0.680863 0.595223 0.20369 0.0726488 0.579017 0.631547 2 0.687827 0.609047 2 0.748973 0.590758 2 0.782708 0.558898 2 0.6325 0.624151 2
0 0.659062 0.613906 0.23378 0.0539435 0.542172 0.640877 2 0.645104 0.61308 2 0.714791 0.617574 2 0.775952 0.586934 2 0.589449 0.637366 2
<etc.>
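For reference, a quick sketch comparing one of these lines against the width implied by kpt_shape: [5, 2] (1 class + 4 bbox + nkpt × ndim values, which matches the 15 in the warning):

# Quick sketch: compare the width of a sample label line with the width
# implied by kpt_shape = [5, 2] (1 class + 4 bbox + nkpt * ndim values).
sample = ("0 0.680863 0.595223 0.20369 0.0726488 0.579017 0.631547 2 "
          "0.687827 0.609047 2 0.748973 0.590758 2 0.782708 0.558898 2 "
          "0.6325 0.624151 2")
nkpt, ndim = 5, 2                 # kpt_shape from data5.yaml
required = 1 + 4 + nkpt * ndim    # 15 columns expected
actual = len(sample.split())      # 20 columns present (extra visibility flags)
print(f"required columns: {required}, actual columns: {actual}")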
BACKGROUND
#####C O D E#####
# LOAD MODEL
import os
from ultralytics import YOLO

ECHO_YAMAL = os.path.join("/content/YAML/yolov8n-echo-pose.yaml")
model = YOLO(ECHO_YAMAL, verbose=VERBOSE)  # VERBOSE defined earlier in the notebook
model = YOLO('yolov8n-pose.pt')  # load a pretrained model (recommended for training)
model = YOLO('yolov8n-pose.yaml').load('yolov8n-pose.pt')  # build from YAML and transfer pretrained weights

# TRAIN MODEL
YP = os.path.join("/content/YAML/data5.yaml")
model.train(data=YP, epochs=EPOCHS, imgsz=640)  # EPOCHS defined earlier in the notebook
HERE ARE THE TWO YAML FILES:
=====================
yolov8n-echo-pose.yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8-pose-p6 keypoints/pose estimation model. For Usage examples see https://docs.ultralytics.com/tasks/pose
# Parameters
nc: 5 # number of classes # UPDATED *
kpt_shape: [5, 2] # number of keypoints,* number of dims (2 for x,y or 3 for x,y,visible)
flip_indx: [0,1,2,3,4] # no flipping *
scales: # model compound scaling constants, i.e. 'model=yolov8n-p6.yaml' will call yolov8-p6.yaml with scale 'n'
# [depth, width, max_channels]
n: [0.33, 0.25, 1024]
s: [0.33, 0.50, 1024]
m: [0.67, 0.75, 768]
l: [1.00, 1.00, 512]
x: [1.00, 1.25, 512]
# YOLOv8.0x6 backbone
backbone:
# [from, repeats, module, args]
# YOLOv8.0x6 head
head:
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 8], 1, Concat, [1]] # cat backbone P5
- [-1, 3, C2, [768, False]] # 14
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 6], 1, Concat, [1]] # cat backbone P4
- [-1, 3, C2, [512, False]] # 17
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 4], 1, Concat, [1]] # cat backbone P3
- [-1, 3, C2, [256, False]] # 20 (P3/8-small)
- [-1, 1, Conv, [256, 3, 2]]
- [[-1, 17], 1, Concat, [1]] # cat head P4
- [-1, 3, C2, [512, False]] # 23 (P4/16-medium)
- [-1, 1, Conv, [512, 3, 2]]
- [[-1, 14], 1, Concat, [1]] # cat head P5
- [-1, 3, C2, [768, False]] # 26 (P5/32-large)
- [-1, 1, Conv, [768, 3, 2]]
- [[-1, 11], 1, Concat, [1]] # cat head P6
- [-1, 3, C2, [1024, False]] # 29 (P6/64-xlarge)
- [[20, 23, 26, 29], 1, Pose, [nc, kpt_shape]] # Pose(P3, P4, P5, P6)
# Classes
names:
0: Class0
1: Class1
2: Class2
3: Class3
4: Class4
==========
data5.yaml
#keypoint data sets
# Data
#path: D:POSE/data
path: /content/datasets/echo-pose
train: images/train2017 # subsets to D
val: images/val2017 # relative to path
# Keypoints
kpt_shape: [5,2] # number of keypoints, number of dim
flip_indx: [0,1,2,3,4] # no flipping
# backbone: using standard backbone
backbone:
# [from, repeats, module, args]
# using standard head
head:
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 8], 1, Concat, [1]] # cat backbone P5
- [-1, 3, C2, [768, False]] # 14
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 6], 1, Concat, [1]] # cat backbone P4
- [-1, 3, C2, [512, False]] # 17
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 4], 1, Concat, [1]] # cat backbone P3
- [-1, 3, C2, [256, False]] # 20 (P3/8-small)
- [-1, 1, Conv, [256, 3, 2]]
- [[-1, 17], 1, Concat, [1]] # cat head P4
- [-1, 3, C2, [512, False]] # 23 (P4/16-medium)
- [-1, 1, Conv, [512, 3, 2]]
- [[-1, 14], 1, Concat, [1]] # cat head P5
- [-1, 3, C2, [768, False]] # 26 (P5/32-large)
- [-1, 1, Conv, [768, 3, 2]]
- [[-1, 11], 1, Concat, [1]] # cat head P6
- [-1, 3, C2, [1024, False]] # 29 (P6/64-xlarge)
- [[20, 23, 26, 29], 1, Pose, [nc, kpt_shape]] # Pose(P3, P4, P5, P6)
# Classes
names:
0: Class0
1: Class1
2: Class2
3: Class3
4: Class4
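A quick way to sanity-check data5.yaml is to parse it and print the keypoint-related keys; a misspelled key such as flip_indx instead of the expected flip_idx then stands out immediately. This is a minimal sketch, assuming PyYAML is available (it is installed alongside ultralytics):

# Minimal sketch: parse data5.yaml and print the pose-related keys so that
# misspelled keys (e.g. flip_indx instead of flip_idx) are easy to spot.
import yaml  # PyYAML

with open("/content/YAML/data5.yaml") as f:
    cfg = yaml.safe_load(f)

print("kpt_shape:", cfg.get("kpt_shape"))
print("flip_idx:", cfg.get("flip_idx"))    # expected key name
print("flip_indx:", cfg.get("flip_indx"))  # present only if the key is misspelled
print("names:", cfg.get("names"))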