[Enhance] Support select gpu-ids in non-distribute testing time #6781

Merged: 25 commits, Dec 24, 2021

Changes from all commits (25 commits):
d1fb57f  Update README_zh-CN.md (#6652)  (LJoson, Dec 2, 2021)
bc97116  add abstract and sketch to the CONFIGS/algorithm/README.md (#6654)  (Czm369, Dec 2, 2021)
a9d32b0  [Fix]fix init when densehead contains dcn (#6625)  (jshilong, Dec 2, 2021)
95a3def  fix init of convfchead (#6624)  (jshilong, Dec 2, 2021)
82a3041  polish docs (#6623)  (jshilong, Dec 2, 2021)
cc8ceea  fix pseudosampler (#6622)  (jshilong, Dec 2, 2021)
339f0ce  [Fix]Add an example of combining swin and one-stage models (#6621)  (jshilong, Dec 2, 2021)
edd8248  add mmhuman3d in readme (#6699)  (ZCMax, Dec 6, 2021)
7277a25  [Fix] Fix init weights in Swin and PVT. (#6663)  (RangiLyu, Dec 7, 2021)
d5f40aa  [Fix] update metafile (#6717)  (BIGWangYuDong, Dec 8, 2021)
2461375  Release YOLOX model (#6698)  (hhaAndroid, Dec 8, 2021)
de60de7  Add 'get_ann_info' to dataset_wrappers (#6526)  (zhaoxin111, Dec 8, 2021)
3c91d21  [Enchance] Update FAQ docs (#6587)  (hhaAndroid, Dec 8, 2021)
926e457  Support keeping image ratio in the multi-scale training of YOLOX (#6732)  (GT9505, Dec 10, 2021)
43699a2  [Doc]Add doc for detect_anomalous_params (#6697)  (jshilong, Dec 10, 2021)
c9f4297  Fix dtype bug in base_dense_head  (shinya7y, Dec 12, 2021)
d3d42fd  Support `bbox_clip_border` for the augmentations of YOLOX (#6730)  (GT9505, Dec 13, 2021)
cc721ee  [Fix] Fix SimOTA with no valid bbox. (#6733)  (RangiLyu, Dec 13, 2021)
c870e8e  [Enhance] support select gpu-ids in testing time  (BIGWangYuDong, Dec 14, 2021)
cd0e6d6  fix conflict  (BIGWangYuDong, Dec 16, 2021)
83eea4b  [Fix] fix link (#6796)  (BIGWangYuDong, Dec 16, 2021)
55d60d1  Merge branch 'dev-v2.20.0' into support-gpus  (BIGWangYuDong, Dec 16, 2021)
a12ae5a  support select gpu-ids in testing time  (BIGWangYuDong, Dec 22, 2021)
0bc8cdd  Merge branch 'master' into support-gpus  (BIGWangYuDong, Dec 22, 2021)
1838f30  minor fix  (BIGWangYuDong, Dec 24, 2021)
tools/test.py: 18 additions, 1 deletion

@@ -33,6 +33,12 @@ def parse_args():
         action='store_true',
         help='Whether to fuse conv and bn, this will slightly increase'
         'the inference speed')
+    parser.add_argument(
+        '--gpu-ids',
+        type=int,
+        nargs='+',
+        help='ids of gpus to use '
+        '(only applicable to non-distributed testing)')
     parser.add_argument(
         '--format-only',
         action='store_true',
@@ -155,9 +161,20 @@ def main():
             for ds_cfg in cfg.data.test:
                 ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline)

+    if args.gpu_ids is not None:
+        cfg.gpu_ids = args.gpu_ids
+    else:
+        cfg.gpu_ids = range(1)
+
     # init distributed env first, since logger depends on the dist info.
     if args.launcher == 'none':
         distributed = False
+        if len(cfg.gpu_ids) > 1:
+            warnings.warn(
+                f'We treat {cfg.gpu_ids} as gpu-ids, and reset to '
+                f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in '
+                'non-distribute testing time.')
+            cfg.gpu_ids = cfg.gpu_ids[0:1]
     else:
         distributed = True
         init_dist(args.launcher, **cfg.dist_params)
@@ -195,7 +212,7 @@ def main():
     model.CLASSES = dataset.CLASSES

     if not distributed:
-        model = MMDataParallel(model, device_ids=[0])
+        model = MMDataParallel(model, device_ids=cfg.gpu_ids)
         outputs = single_gpu_test(model, data_loader, args.show, args.show_dir,
                                   args.show_score_thr)
     else:
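For context, here is a minimal standalone sketch of the behaviour this diff introduces in tools/test.py; it is not the PR's code, only a mirror of it. The trimmed-down parser and the example invocation value `2` are illustrative, while the `range(1)` fallback and the single-id reset follow the logic shown in the hunks above.

```python
# Illustrative sketch of the gpu-id resolution added to tools/test.py.
# It mirrors only the argument parsing and the fallback logic; the real
# script then wraps the model in MMDataParallel(device_ids=cfg.gpu_ids).
import argparse
import warnings

parser = argparse.ArgumentParser(description='sketch of the new --gpu-ids flag')
parser.add_argument(
    '--gpu-ids',
    type=int,
    nargs='+',
    help='ids of gpus to use (only applicable to non-distributed testing)')
parser.add_argument('--launcher', default='none')

# e.g. `python tools/test.py CONFIG CHECKPOINT --gpu-ids 2`
args = parser.parse_args(['--gpu-ids', '2'])

gpu_ids = args.gpu_ids if args.gpu_ids is not None else range(1)
if args.launcher == 'none' and len(gpu_ids) > 1:
    # Non-distributed testing drives a single GPU, so keep only the first id.
    warnings.warn(f'Using only {gpu_ids[0:1]} out of {gpu_ids} for '
                  'non-distributed testing.')
    gpu_ids = gpu_ids[0:1]

print(list(gpu_ids))  # -> [2]; this is what becomes cfg.gpu_ids / device_ids
```

With the real script, passing `--gpu-ids 2` therefore runs single-GPU testing on GPU 2 instead of always defaulting to GPU 0.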
tools/train.py: 6 additions, 0 deletions

@@ -112,6 +112,12 @@ def main():
     # init distributed env first, since logger depends on the dist info.
     if args.launcher == 'none':
         distributed = False
+        if len(cfg.gpu_ids) > 1:
+            warnings.warn(
+                f'We treat {cfg.gpu_ids} as gpu-ids, and reset to '
+                f'{cfg.gpu_ids[0:1]} as gpu-ids to avoid potential error in '
+                'non-distribute training time.')
+            cfg.gpu_ids = cfg.gpu_ids[0:1]
     else:
         distributed = True
         init_dist(args.launcher, **cfg.dist_params)
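tools/train.py already accepted `--gpu-ids`; this hunk only adds the same single-GPU guard. Below is a hedged sketch of that guard as a standalone helper; the name `resolve_gpu_ids` and the function form are illustrative and not part of the PR.

```python
# Illustrative helper mirroring the guard added to tools/train.py: when no
# distributed launcher is used, only the first of several gpu ids is kept.
import warnings
from typing import List, Sequence


def resolve_gpu_ids(gpu_ids: Sequence[int], launcher: str = 'none') -> List[int]:
    """Return the gpu ids that would actually be used."""
    gpu_ids = list(gpu_ids)
    if launcher == 'none' and len(gpu_ids) > 1:
        warnings.warn(f'Non-distributed training uses a single GPU; '
                      f'falling back from {gpu_ids} to {gpu_ids[0:1]}.')
        return gpu_ids[0:1]
    return gpu_ids


print(resolve_gpu_ids([0, 1, 2]))           # [0], with a UserWarning
print(resolve_gpu_ids([3]))                 # [3]
print(resolve_gpu_ids([0, 1], 'pytorch'))   # [0, 1], handled by init_dist instead
```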