[Refactor]: Unified parameter initialization #622

Merged · 61 commits · Jul 1, 2021
b896597
support 3dssd
xiliu8006 Mar 24, 2021
010024e
support one-stage method
xiliu8006 Apr 7, 2021
d52c837
for lint
xiliu8006 Apr 7, 2021
d317b7c
support two_stage
xiliu8006 Apr 7, 2021
d42898e
Merge branch 'master' into add_initializer
xiliu8006 Apr 17, 2021
be73dd2
Support all methods
xiliu8006 Apr 18, 2021
f1838d7
remove init_cfg=[] in configs
xiliu8006 Apr 19, 2021
e2c74a2
test
xiliu8006 Apr 19, 2021
1781db5
Merge branch 'master' into add_initializer
xiliu8006 Apr 19, 2021
a649d1a
support h3dnet
xiliu8006 Apr 19, 2021
487823d
fix lint error
xiliu8006 Apr 20, 2021
2fa3dd6
fix isort
xiliu8006 Apr 20, 2021
69c6ce6
fix code style error
xiliu8006 Apr 20, 2021
7ba358f
fix imvotenet bug
xiliu8006 Apr 26, 2021
9301014
fix configs conflict
xiliu8006 Apr 26, 2021
953a4b7
rename init_weight->init_weights
xiliu8006 Apr 28, 2021
b824fcb
clean comma
xiliu8006 Apr 28, 2021
30e8195
Merge branch 'master' into add_initializer
xiliu8006 Apr 28, 2021
613f503
fix test_apis does not init weights
xiliu8006 Apr 29, 2021
46afb23
Merge branch 'master' into add_initializer
xiliu8006 May 5, 2021
458ec22
support newest mmdet and mmcv
xiliu8006 May 5, 2021
e9f6630
fix test_heads h3dnet bug
xiliu8006 May 5, 2021
af9524f
rm *.swp
xiliu8006 May 5, 2021
87b04f6
remove the wrong code in build.yml
xiliu8006 May 5, 2021
3f99ed3
fix ssn low map
xiliu8006 May 12, 2021
94924ad
modify docs
xiliu8006 May 12, 2021
188da1e
modified ssn init_config
xiliu8006 May 12, 2021
33292c0
modify params in backbone pointnet2_sa_ssg
xiliu8006 May 18, 2021
2f5382c
fix segmentor build
xiliu8006 May 19, 2021
455d2de
add ssn direction init_cfg
xiliu8006 May 19, 2021
d443ad4
support segmentor
xiliu8006 May 19, 2021
7f391e1
fix conflict
xiliu8006 May 19, 2021
6f926d7
add conv a=sqrt(5)
xiliu8006 May 20, 2021
3fad6aa
Merge branch 'add_initializer' of https://github.com/xiliu8006/mmdete…
xiliu8006 May 20, 2021
15d9188
Convmodule uses kaiming_init
xiliu8006 May 21, 2021
1eb9274
fix centerpointhead init bug
xiliu8006 May 21, 2021
a0eecb9
add second conv2d init cfg
xiliu8006 Jun 1, 2021
dc3f2fb
add unittest to confirm the input is not be modified
xiliu8006 Jun 7, 2021
6439d61
assert gt_bboxes_3d
xiliu8006 Jun 7, 2021
20dfaba
add compatibility
xiliu8006 Jun 7, 2021
4a5857c
rm .swag
xiliu8006 Jun 7, 2021
ff56ae9
modify docs mmdet version
xiliu8006 Jun 7, 2021
8830d63
Merge branch 'master' into add_initializer
xiliu8006 Jun 7, 2021
1082498
Merge branch 'bg_filter_unittest' into add_initializer
xiliu8006 Jun 7, 2021
d91e120
adopt fcosmono3d
xiliu8006 Jun 10, 2021
ca6e908
add fcos 3d original init method
xiliu8006 Jun 19, 2021
68eec71
fix mmseg version
xiliu8006 Jun 21, 2021
37f4aee
add init cfg in fcos_mono3d.py
xiliu8006 Jun 30, 2021
e2dc867
merge newest master
xiliu8006 Jun 30, 2021
17c2485
merge newest master
xiliu8006 Jun 30, 2021
d24204e
remove unused code
xiliu8006 Jun 30, 2021
b5f2da2
modify focs config due to changes of resnet
xiliu8006 Jun 30, 2021
0e43d8d
support imvoxelnet pointnet2
xiliu8006 Jun 30, 2021
19c2ef3
modified the dependencies version
xiliu8006 Jun 30, 2021
245aea0
support decode head
xiliu8006 Jun 30, 2021
85a2fbe
fix inference bug
xiliu8006 Jul 1, 2021
8315abe
modify the useless init_cfg
xiliu8006 Jul 1, 2021
e9aaeec
merge newest master
xiliu8006 Jul 1, 2021
6156cf9
fix multi_modality BC-breaking
xiliu8006 Jul 1, 2021
2d3a88b
fix error blank
xiliu8006 Jul 1, 2021
e142171
modify docs error
xiliu8006 Jul 1, 2021
4 changes: 2 additions & 2 deletions .github/workflows/build.yml
@@ -91,8 +91,8 @@ jobs:
- name: Install mmdet3d dependencies
run: |
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/${{matrix.torch_version}}/index.html
pip install mmdet==2.11.0
pip install mmsegmentation==0.14.0
pip install mmdet==2.14.0
pip install mmsegmentation==0.14.1
pip install -r requirements.txt
- name: Build and install
run: |
1 change: 0 additions & 1 deletion configs/_base_/models/fcos3d.py
@@ -16,7 +16,6 @@
out_channels=256,
start_level=1,
add_extra_convs=True,
extra_convs_on_inputs=False, # use P5
num_outs=5,
relu_before_extra_convs=True),
bbox_head=dict(
4 changes: 2 additions & 2 deletions configs/_base_/models/imvotenet_image.py
@@ -99,8 +99,8 @@
nms_across_levels=False,
nms_pre=1000,
nms_post=1000,
max_num=1000,
nms_thr=0.7,
max_per_img=1000,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
img_rcnn=dict(
score_thr=0.05,
2 changes: 1 addition & 1 deletion configs/imvoxelnet/imvoxelnet_kitti-3d-car.py
@@ -1,6 +1,5 @@
model = dict(
type='ImVoxelNet',
pretrained='torchvision://resnet50',
backbone=dict(
type='ResNet',
depth=50,
@@ -9,6 +8,7 @@
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=False),
norm_eval=True,
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
style='pytorch'),
neck=dict(
type='FPN',
9 changes: 9 additions & 0 deletions docs/compatibility.md
@@ -4,6 +4,15 @@ This document provides detailed descriptions of the BC-breaking changes in MMDet

## MMDetection3D 0.15.0

### Unified parameter initialization

To unify parameter initialization across OpenMMLab projects, MMCV introduces `BaseModule`, which accepts an `init_cfg` argument so that module parameters can be initialized in a flexible and unified manner. Users now need to explicitly call `model.init_weights()` in the training script to initialize the model (as done [here](https://github.com/open-mmlab/mmdetection3d/blob/master/tools/train.py#L183)); previously this was handled by the detector. Please refer to PR #622 for details.
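
As a rough illustration of the change (a simplified stand-in, not the actual `mmcv.runner.BaseModule` implementation):

```python
class BaseModule:
    """Simplified stand-in for mmcv.runner.BaseModule (illustration only)."""

    def __init__(self, init_cfg=None):
        self.init_cfg = init_cfg
        self._is_init = False  # weights are no longer initialized on construction

    def init_weights(self):
        # In mmcv this recursively initializes parameters according to
        # init_cfg; here we only record that it was called.
        self._is_init = True


model = BaseModule(init_cfg=dict(type='Kaiming', layer='Conv2d'))
model.init_weights()  # training scripts must now call this explicitly
```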

### BackgroundPointsFilter

We modified the data augmentation function `BackgroundPointsFilter` ([here](https://github.com/open-mmlab/mmdetection3d/blob/mmdet3d/datasets/pipelines/transforms_3d.py#L1101)). In previous versions of MMDetection3D, `BackgroundPointsFilter` changed the bottom center of `gt_bboxes_3d` to the gravity center. As of MMDetection3D 0.15.0, `BackgroundPointsFilter` no longer modifies it. Please refer to PR #609 for details.

### Enhance `IndoorPatchPointSample` transform

We enhanced the pipeline function `IndoorPatchPointSample` used in the point cloud segmentation task by adding more choices for patch selection. We also plan to remove the unused parameter `sample_rate` in the future. Please modify your code and config files accordingly if you use this transform.
4 changes: 2 additions & 2 deletions docs/getting_started.md
@@ -12,8 +12,8 @@ The required versions of MMCV, MMDetection and MMSegmentation for different vers

| MMDetection3D version | MMDetection version | MMSegmentation version | MMCV version |
|:-------------------:|:-------------------:|:-------------------:|:-------------------:|
| master | mmdet>=2.10.0, <=2.11.0| mmseg==0.14.0 | mmcv-full>=1.3.1, <=1.4|
| 0.14.0 | mmdet>=2.10.0, <=2.11.0| mmseg==0.14.0 | mmcv-full>=1.3.1, <=1.4|
| master | mmdet>=2.12.0 | mmseg>=0.14.1 | mmcv-full>=1.3.2, <=1.4|
| 0.14.0 | mmdet>=2.10.0, <=2.11.0| mmseg>=0.13.0 | mmcv-full>=1.3.1, <=1.4|
| 0.13.0 | mmdet>=2.10.0, <=2.11.0| Not required | mmcv-full>=1.2.4, <=1.4|
| 0.12.0 | mmdet>=2.5.0, <=2.11.0 | Not required | mmcv-full>=1.2.4, <=1.4|
| 0.11.0 | mmdet>=2.5.0, <=2.11.0 | Not required | mmcv-full>=1.2.4, <=1.4|
36 changes: 17 additions & 19 deletions docs/tutorials/customize_models.md
@@ -33,9 +33,6 @@ class HardVFE(nn.Module):

def forward(self, x): # should return a tuple
pass

def init_weights(self, pretrained=None):
pass
```

#### 2. Import the module
@@ -83,16 +80,13 @@ from ..builder import BACKBONES


@BACKBONES.register_module()
class SECOND(nn.Module):
class SECOND(BaseModule):

def __init__(self, arg1, arg2):
pass

def forward(self, x): # should return a tuple
pass

def init_weights(self, pretrained=None):
pass
```
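
To sketch the pattern above (with a stand-in `BaseModule`, not the real mmcv class): a custom backbone no longer defines `init_weights` itself and only forwards `init_cfg` to its parent:

```python
class BaseModule:
    """Stand-in for mmcv.runner.BaseModule, for illustration only."""

    def __init__(self, init_cfg=None):
        self.init_cfg = init_cfg


class SECOND(BaseModule):
    """Sketch of a backbone under the new scheme: no init_weights defined here."""

    def __init__(self, init_cfg=None, **kwargs):
        # The initialization strategy now travels with the config.
        super().__init__(init_cfg=init_cfg)


backbone = SECOND(init_cfg=dict(type='Kaiming', layer='Conv2d'))
```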

#### 2. Import the module
@@ -135,7 +129,7 @@ Create a new file `mmdet3d/models/necks/second_fpn.py`.
from ..builder import NECKS

@NECKS.register
class SECONDFPN(nn.Module):
class SECONDFPN(BaseModule):

def __init__(self,
in_channels=[128, 128, 256],
@@ -144,7 +138,8 @@ class SECONDFPN(nn.Module):
norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01),
upsample_cfg=dict(type='deconv', bias=False),
conv_cfg=dict(type='Conv2d', bias=False),
use_conv_for_no_stride=False):
use_conv_for_no_stride=False,
init_cfg=None):
pass

def forward(self, X):
@@ -198,7 +193,7 @@ from mmdet.models.builder import HEADS
from .bbox_head import BBoxHead

@HEADS.register_module()
class PartA2BboxHead(nn.Module):
class PartA2BboxHead(BaseModule):
"""PartA2 RoI head."""

def __init__(self,
@@ -224,11 +219,9 @@ class PartA2BboxHead(nn.Module):
type='CrossEntropyLoss',
use_sigmoid=True,
reduction='none',
loss_weight=1.0)):
super(PartA2BboxHead, self).__init__()

def init_weights(self):
# conv layers are already initialized by ConvModule
loss_weight=1.0),
init_cfg=None):
super(PartA2BboxHead, self).__init__(init_cfg=init_cfg)

def forward(self, seg_feats, part_feats):

@@ -242,15 +235,16 @@ from torch import nn as nn


@HEADS.register_module()
class Base3DRoIHead(nn.Module, metaclass=ABCMeta):
class Base3DRoIHead(BaseModule, metaclass=ABCMeta):
"""Base class for 3d RoIHeads."""

def __init__(self,
bbox_head=None,
mask_roi_extractor=None,
mask_head=None,
train_cfg=None,
test_cfg=None):
test_cfg=None,
init_cfg=None):

@property
def with_bbox(self):
@@ -333,9 +327,13 @@ class PartAggregationROIHead(Base3DRoIHead):
part_roi_extractor=None,
bbox_head=None,
train_cfg=None,
test_cfg=None):
test_cfg=None,
init_cfg=None):
super(PartAggregationROIHead, self).__init__(
bbox_head=bbox_head, train_cfg=train_cfg, test_cfg=test_cfg)
bbox_head=bbox_head,
train_cfg=train_cfg,
test_cfg=test_cfg,
init_cfg=init_cfg)
self.num_classes = num_classes
assert semantic_head is not None
self.semantic_head = build_head(semantic_head)
10 changes: 5 additions & 5 deletions mmdet3d/__init__.py
@@ -17,7 +17,7 @@ def digit_version(version_str):
return digit_version


mmcv_minimum_version = '1.3.1'
mmcv_minimum_version = '1.3.8'
mmcv_maximum_version = '1.4.0'
mmcv_version = digit_version(mmcv.__version__)

@@ -27,17 +27,17 @@ def digit_version(version_str):
f'MMCV=={mmcv.__version__} is used but incompatible. ' \
f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.'

mmdet_minimum_version = '2.10.0'
mmdet_maximum_version = '2.11.0'
mmdet_minimum_version = '2.14.0'
mmdet_maximum_version = '3.0.0'
mmdet_version = digit_version(mmdet.__version__)
assert (mmdet_version >= digit_version(mmdet_minimum_version)
and mmdet_version <= digit_version(mmdet_maximum_version)), \
f'MMDET=={mmdet.__version__} is used but incompatible. ' \
f'Please install mmdet>={mmdet_minimum_version}, ' \
f'<={mmdet_maximum_version}.'

mmseg_minimum_version = '0.14.0'
mmseg_maximum_version = '0.14.0'
mmseg_minimum_version = '0.14.1'
mmseg_maximum_version = '1.0.0'
mmseg_version = digit_version(mmseg.__version__)
assert (mmseg_version >= digit_version(mmseg_minimum_version)
and mmseg_version <= digit_version(mmseg_maximum_version)), \
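
The assertions above compare versions via the `digit_version` helper defined earlier in this file; a simplified sketch of its behavior (assuming plain `x.y.z` version strings):

```python
def digit_version(version_str):
    # Convert 'x.y.z' into a list of ints so that versions compare
    # numerically ('1.10' sorts after '1.3', unlike string comparison).
    return [int(x) for x in version_str.split('.') if x.isdigit()]


assert digit_version('1.3.8') <= digit_version('1.4.0')
assert digit_version('1.10.0') > digit_version('1.3.8')
```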
23 changes: 10 additions & 13 deletions mmdet3d/models/backbones/base_pointnet.py
@@ -1,23 +1,20 @@
import warnings
from abc import ABCMeta
from mmcv.runner import load_checkpoint
from torch import nn as nn
from mmcv.runner import BaseModule


class BasePointNet(nn.Module, metaclass=ABCMeta):
class BasePointNet(BaseModule, metaclass=ABCMeta):
"""Base class for PointNet."""

def __init__(self):
super(BasePointNet, self).__init__()
def __init__(self, init_cfg=None, pretrained=None):
super(BasePointNet, self).__init__(init_cfg)
self.fp16_enabled = False

def init_weights(self, pretrained=None):
"""Initialize the weights of PointNet backbone."""
# Do not initialize the conv layers
# to follow the original implementation
assert not (init_cfg and pretrained), \
'init_cfg and pretrained cannot be setting at the same time'
if isinstance(pretrained, str):
from mmdet3d.utils import get_root_logger
logger = get_root_logger()
load_checkpoint(self, pretrained, strict=False, logger=logger)
warnings.warn('DeprecationWarning: pretrained is a deprecated, '
'please use "init_cfg" instead')
self.init_cfg = dict(type='Pretrained', checkpoint=pretrained)

@staticmethod
def _split_point_feats(points):
21 changes: 11 additions & 10 deletions mmdet3d/models/backbones/multi_backbone.py
@@ -1,14 +1,15 @@
import copy
import torch
import warnings
from mmcv.cnn import ConvModule
from mmcv.runner import auto_fp16, load_checkpoint
from mmcv.runner import BaseModule, auto_fp16
from torch import nn as nn

from mmdet.models import BACKBONES, build_backbone


@BACKBONES.register_module()
class MultiBackbone(nn.Module):
class MultiBackbone(BaseModule):
"""MultiBackbone with different configs.
Args:
@@ -31,8 +32,10 @@ def __init__(self,
norm_cfg=dict(type='BN1d', eps=1e-5, momentum=0.01),
act_cfg=dict(type='ReLU'),
suffixes=('net0', 'net1'),
init_cfg=None,
pretrained=None,
**kwargs):
super().__init__()
super().__init__(init_cfg=init_cfg)
assert isinstance(backbones, dict) or isinstance(backbones, list)
if isinstance(backbones, dict):
backbones_list = []
@@ -77,14 +80,12 @@ def __init__(self,
bias=True,
inplace=True))

def init_weights(self, pretrained=None):
"""Initialize the weights of PointNet++ backbone."""
# Do not initialize the conv layers
# to follow the original implementation
assert not (init_cfg and pretrained), \
'init_cfg and pretrained cannot be setting at the same time'
if isinstance(pretrained, str):
from mmdet3d.utils import get_root_logger
logger = get_root_logger()
load_checkpoint(self, pretrained, strict=False, logger=logger)
warnings.warn('DeprecationWarning: pretrained is a deprecated, '
'please use "init_cfg" instead')
self.init_cfg = dict(type='Pretrained', checkpoint=pretrained)

@auto_fp16()
def forward(self, points):
4 changes: 2 additions & 2 deletions mmdet3d/models/backbones/nostem_regnet.py
@@ -57,8 +57,8 @@ class NoStemRegNet(RegNet):
(1, 1008, 1, 1)
"""

def __init__(self, arch, **kwargs):
super(NoStemRegNet, self).__init__(arch, **kwargs)
def __init__(self, arch, init_cfg=None, **kwargs):
super(NoStemRegNet, self).__init__(arch, init_cfg=init_cfg, **kwargs)

def _make_stem_layer(self, in_channels, base_channels):
"""Override the original function that do not initialize a stem layer
5 changes: 3 additions & 2 deletions mmdet3d/models/backbones/pointnet2_sa_msg.py
@@ -56,8 +56,9 @@ def __init__(self,
type='PointSAModuleMSG',
pool_mod='max',
use_xyz=True,
normalize_xyz=False)):
super().__init__()
normalize_xyz=False),
init_cfg=None):
super().__init__(init_cfg=init_cfg)
self.num_sa = len(sa_channels)
self.out_indices = out_indices
assert max(out_indices) < self.num_sa
5 changes: 3 additions & 2 deletions mmdet3d/models/backbones/pointnet2_sa_ssg.py
@@ -43,8 +43,9 @@ def __init__(self,
type='PointSAModule',
pool_mod='max',
use_xyz=True,
normalize_xyz=True)):
super().__init__()
normalize_xyz=True),
init_cfg=None):
super().__init__(init_cfg=init_cfg)
self.num_sa = len(sa_channels)
self.num_fp = len(fp_channels)

25 changes: 14 additions & 11 deletions mmdet3d/models/backbones/second.py
@@ -1,12 +1,13 @@
import warnings
from mmcv.cnn import build_conv_layer, build_norm_layer
from mmcv.runner import load_checkpoint
from mmcv.runner import BaseModule
from torch import nn as nn

from mmdet.models import BACKBONES


@BACKBONES.register_module()
class SECOND(nn.Module):
class SECOND(BaseModule):
"""Backbone network for SECOND/PointPillars/PartA2/MVXNet.
Args:
@@ -24,8 +25,10 @@ def __init__(self,
layer_nums=[3, 5, 5],
layer_strides=[2, 2, 2],
norm_cfg=dict(type='BN', eps=1e-3, momentum=0.01),
conv_cfg=dict(type='Conv2d', bias=False)):
super(SECOND, self).__init__()
conv_cfg=dict(type='Conv2d', bias=False),
init_cfg=None,
pretrained=None):
super(SECOND, self).__init__(init_cfg=init_cfg)
assert len(layer_strides) == len(layer_nums)
assert len(out_channels) == len(layer_nums)

@@ -61,14 +64,14 @@ def __init__(self,

self.blocks = nn.ModuleList(blocks)

def init_weights(self, pretrained=None):
"""Initialize weights of the 2D backbone."""
# Do not initialize the conv layers
# to follow the original implementation
assert not (init_cfg and pretrained), \
'init_cfg and pretrained cannot be setting at the same time'
if isinstance(pretrained, str):
from mmdet3d.utils import get_root_logger
logger = get_root_logger()
load_checkpoint(self, pretrained, strict=False, logger=logger)
warnings.warn('DeprecationWarning: pretrained is a deprecated, '
'please use "init_cfg" instead')
self.init_cfg = dict(type='Pretrained', checkpoint=pretrained)
else:
self.init_cfg = dict(type='Kaiming', layer='Conv2d')

Reviewer comment:

According to L66-67, why use Kaiming init here?

Author (xiliu8006) reply:

If we don't use a pretrained model, we want to use `kaiming_init` to initialize all `Conv2d` layers.


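The precedence between the deprecated `pretrained` argument and the new `init_cfg` (shown in the diff above) can be sketched as a standalone helper; `resolve_init_cfg` is a hypothetical name for illustration, not an actual mmdet3d function:

```python
import warnings


def resolve_init_cfg(init_cfg=None, pretrained=None):
    """Hypothetical helper mirroring the logic added to SECOND.__init__."""
    assert not (init_cfg and pretrained), \
        'init_cfg and pretrained cannot be set at the same time'
    if isinstance(pretrained, str):
        # Legacy path: translate the deprecated argument into an init_cfg.
        warnings.warn('DeprecationWarning: pretrained is deprecated, '
                      'please use "init_cfg" instead')
        return dict(type='Pretrained', checkpoint=pretrained)
    if init_cfg is None:
        # No pretrained weights given: fall back to Kaiming init for Conv2d.
        return dict(type='Kaiming', layer='Conv2d')
    return init_cfg
```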
def forward(self, x):
"""Forward function.