```yaml
# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license

# Parameters
nc: 2  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:

# YOLOv5 v6.0 backbone
backbone:
  [
    [-1, 1, Conv, [64, 3, 1]],               # Conv
    [-1, 1, MaxPool2d, [3, 2, 1]],           # MaxPool2d
    [-1, 1, BottleneckRES, [64, 64, 1]],     # BottleneckRES
    [-1, 2, BottleneckRES, [256, 64, 1]],    # BottleneckRES
    [-1, 1, BottleneckRES, [256, 128, 2]],   # BottleneckRES
    [-1, 4, BottleneckRES, [512, 128, 1]],   # BottleneckRES
    [-1, 6, BottleneckRES, [512, 256, 2]],   # BottleneckRES
    [-1, 3, BottleneckRES, [1024, 256, 1]],  # BottleneckRES
    [-1, 3, BottleneckRES, [1024, 512, 2]],  # BottleneckRES
    [-1, 1, BottleneckRES, [2048, 512, 1]],  # BottleneckRES
  ]

# YOLOv5 v6.0 head
head:
  [
    [-1, 1, BottleneckCSP, [2048, 512, 1]],  # BottleneckCSP
    [-1, 1, Conv, [256, 1, 1]],              # Conv
    [-1, 1, nn.Upsample, [None, 2, "nearest"]],
    [[-1, 6], 1, Concat, [1]],               # cat backbone P4
    [-1, 3, C3, [512, 256, 1]],              # C3
  ]
```
This is my YAML file. I have connected the ResNet50 backbone to the YOLOv5 head, and I defined the BottleneckRES module below to use ResNet50:
```python
class BottleneckRES(nn.Module):
    expansion = 4
    # ... (rest of the module omitted here)
```
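My BottleneckRES follows the standard ResNet50 bottleneck pattern (1x1 reduce, 3x3, 1x1 expand with `expansion = 4`, plus a projection shortcut when the shape changes). For reference, a minimal sketch of that pattern is below; treat it as an illustration, since my actual implementation in models/common.py may differ in details:

```python
import torch
import torch.nn as nn


class BottleneckRES(nn.Module):
    """Reference sketch of a ResNet50-style bottleneck block (not the exact code in models/common.py)."""

    expansion = 4

    def __init__(self, c1, c2, stride=1):  # c1: input channels, c2: bottleneck width
        super().__init__()
        self.conv1 = nn.Conv2d(c1, c2, 1, bias=False)  # 1x1 reduce
        self.bn1 = nn.BatchNorm2d(c2)
        self.conv2 = nn.Conv2d(c2, c2, 3, stride=stride, padding=1, bias=False)  # 3x3
        self.bn2 = nn.BatchNorm2d(c2)
        self.conv3 = nn.Conv2d(c2, c2 * self.expansion, 1, bias=False)  # 1x1 expand
        self.bn3 = nn.BatchNorm2d(c2 * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        # Projection shortcut when spatial size or channel count changes
        self.downsample = None
        if stride != 1 or c1 != c2 * self.expansion:
            self.downsample = nn.Sequential(
                nn.Conv2d(c1, c2 * self.expansion, 1, stride=stride, bias=False),
                nn.BatchNorm2d(c2 * self.expansion),
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        identity = x if self.downsample is None else self.downsample(x)
        return self.relu(out + identity)
```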
When I build the model, I get the following traceback:
```
Traceback (most recent call last):
  File "train.py", line 858, in <module>
    main(opt)
  File "train.py", line 632, in main
    train(opt.hyp, opt, device, callbacks)
  File "train.py", line 199, in train
    model = Model(cfg, ch=3, nc=nc, anchors=hyp.get("anchors")).to(device)  # create
  File "/home/firo-msi/RESNET/yolov5/models/yolo.py", line 253, in __init__
    m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))])  # forward
  File "/home/firo-msi/RESNET/yolov5/models/yolo.py", line 252, in <lambda>
    forward = lambda x: self.forward(x)[0] if isinstance(m, Segment) else self.forward(x)
  File "/home/firo-msi/RESNET/yolov5/models/yolo.py", line 268, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "/home/firo-msi/RESNET/yolov5/models/yolo.py", line 171, in _forward_once
    x = m(x)  # run
  File "/home/firo-msi/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/firo-msi/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/firo-msi/RESNET/yolov5/models/common.py", line 1163, in forward
    out = self.relu(self.bn1(self.conv1(x)))
  File "/home/firo-msi/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/firo-msi/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/firo-msi/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/firo-msi/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 64, 1, 1], expected input[1, 32, 128, 128] to have 64 channels, but got 32 channels instead
```
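For context, as far as I can tell, `parse_model()` in models/yolo.py multiplies the channel argument of modules it knows about by `width_multiple`, while custom modules such as BottleneckRES get their YAML args passed through unchanged. A minimal, self-contained sketch of that scaling arithmetic (paraphrased from my reading of models/yolo.py and utils/general.py; the exact code may differ by version):

```python
import math


def make_divisible(x, divisor=8):
    # Paraphrase of YOLOv5's make_divisible(): round up to the nearest multiple of divisor
    return math.ceil(x / divisor) * divisor


gw = 0.50                                 # width_multiple from the YAML above
first_conv_out = make_divisible(64 * gw)  # channel arg of the first Conv after width scaling
print(first_conv_out)                     # 32, which matches the 32 input channels in the error
```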
I am running into an error that seems to be caused by a channel mismatch. Is there a solution for this? PLEASE ㅠㅠ
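In case it helps, the mismatch can be reproduced outside of train.py with a standalone check like this (assuming BottleneckRES takes `(in_channels, mid_channels, stride)` in the same order as the YAML args; that signature is my reading of the config, not something taken from the YOLOv5 code):

```python
import torch

from models.common import BottleneckRES  # where the traceback says the module lives

block = BottleneckRES(64, 64, 1)   # built exactly as declared in the YAML backbone
x = torch.zeros(1, 32, 128, 128)   # the input shape reported in the error message
block(x)  # RuntimeError: expected input to have 64 channels, but got 32 channels
```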