🐛 Describe the bug
Currently, the MobileNet-V3 (Large) backbone option for DeepLabV3 (added in #3276) is configured with an output stride of 16 (Link); for more background on the terminology, please take a look at #7955. The authors of DeepLabV3 use atrous rates of (6, 12, 18) with an output stride of 16 and double those rates when the output stride is 8 (Link), and this link between the output stride, atrous rates, and input spatial resolution holds regardless of the backbone used. However, the Torchvision port of DeepLabV3 hardcodes the atrous rates to (12, 24, 36) (Link), which matches the ResNet-50 and ResNet-101 backbones (correctly configured with an output stride of 8) but not the MobileNet-V3 backbone. To verify this, I've written a short script:
```python
import torch
import torchvision

input = torch.rand(1, 3, 512, 512)

# A pre-forward hook to attach to one of the ASPP module's convolutional layers.
# Its input is the backbone-encoded feature map, so we just print its shape.
hook = lambda _, i: print(i[0].shape)

# Try out deeplabv3_resnet50() or deeplabv3_resnet101() :)
model = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large()
model.classifier[0].convs[0].register_forward_pre_hook(hook)
model.eval()

with torch.no_grad():
    out = model(input)["out"]  # Prints (N, C, 32, 32) instead of (N, C, 64, 64)!
```
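For context, the rate/stride pairing in the paper keeps the dilation constant when measured in input pixels, which is why (6, 12, 18) goes with an output stride of 16 and (12, 24, 36) with an output stride of 8. The quick arithmetic below is just my own illustration of that relationship, not code from the paper or Torchvision:

```python
# Effective dilation in input pixels = atrous rate * output stride.
for output_stride, rates in [(16, (6, 12, 18)), (8, (12, 24, 36))]:
    print(output_stride, [r * output_stride for r in rates])
# 16 [96, 192, 288]
# 8 [96, 192, 288]

# Current Torchvision combination: output stride 16 with rates (12, 24, 36)
print(16, [r * 16 for r in (12, 24, 36)])  # 16 [192, 384, 576] -- twice the intended context
```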
This is a deviation from the guidelines set in the paper more so than it is a bug. If there's a reason for the deviation, it isn't documented or otherwise made clear. Assuming this isn't just a documentation issue, there are a few possible resolutions: (1) modify the MobileNet backbone to have an output stride of 8, or (2) make the atrous rates configurable via the DeepLabHead constructor and set them to (6, 12, 18) (Link); a rough sketch of option (2) follows below. The latter is much easier to implement, and could even be extended to allow user-configurable atrous rates, which is loosely motivated in #7955; see the fourth footnote and its reference.
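For concreteness, here is a minimal sketch of option (2). It mirrors the layer structure of Torchvision's current DeepLabHead (ASPP followed by a 3×3 convolution, batch norm, ReLU, and a 1×1 classifier); the atrous_rates parameter and its default are the proposed change, not existing API, and the ASPP import path is based on the current module layout:

```python
from typing import Sequence

from torch import nn
from torchvision.models.segmentation.deeplabv3 import ASPP


class DeepLabHead(nn.Sequential):
    # Sketch only: same layers as today's DeepLabHead, but with the atrous
    # rates exposed instead of being hardcoded to (12, 24, 36).
    def __init__(
        self,
        in_channels: int,
        num_classes: int,
        atrous_rates: Sequence[int] = (12, 24, 36),  # proposed parameter
    ) -> None:
        super().__init__(
            ASPP(in_channels, list(atrous_rates)),
            nn.Conv2d(256, 256, 3, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(),
            nn.Conv2d(256, num_classes, 1),
        )
```

The MobileNet-V3 model builder could then pass (6, 12, 18) to match its output stride of 16, while the ResNet builders keep the existing (12, 24, 36) default.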
Versions
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 13.4 (arm64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:38:29) [Clang 13.0.1 ] (64-bit runtime)
Python platform: macOS-13.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.0
[pip3] pytorch-lightning==2.0.4
[pip3] pytorch-nlp==0.5.0
[pip3] torch==2.0.1
[pip3] torchmetrics==1.0.0
[pip3] torchvision==0.15.2
[conda] numpy 1.25.0 pypi_0 pypi
[conda] pytorch-lightning 2.0.4 pypi_0 pypi
[conda] pytorch-nlp 0.5.0 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchmetrics 1.0.0 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi