Add webcam demo (open-mmlab#729)
This PR adds a webcam demo tool with the following features:

1. Read the video stream from a webcam or an offline video file
2. Asynchronous model inference (detection + human pose + animal pose) and video I/O
3. Optimized visualization functions for bounding boxes, object labels, and poses
4. Apply special effects (sunglasses or bug-eye)
5. Show statistics, e.g. FPS and CPU usage
6. Optionally, save the output video
ly015 authored Jul 2, 2021
1 parent 46bb615 commit cc1b52e
Showing 21 changed files with 1,175 additions and 48 deletions.
8 changes: 8 additions & 0 deletions demo/README.md
@@ -65,3 +65,11 @@ This page provides tutorials about running demos. Please click the caption for m

[3D hand_pose demo](docs/3d_hand_demo.md)
</div>

<br>

<div align="center">
<img src="https://user-images.githubusercontent.com/15977946/124059525-ce20c580-da5d-11eb-8e4a-2d96cd31fe9f.gif" width="600px" alt><br>

[Webcam demo](docs/webcam_demo.md)
</div>
49 changes: 49 additions & 0 deletions demo/docs/webcam_demo.md
@@ -0,0 +1,49 @@
## Webcam Demo

We provide a webcam demo tool that integrates detection and 2D pose estimation for humans and animals. You can simply run the following command:

```shell
python demo/webcam_demo.py
```

It will launch a window displaying the webcam video stream with detection and pose estimation results:

<div align="center">
<img src="https://user-images.githubusercontent.com/15977946/124059525-ce20c580-da5d-11eb-8e4a-2d96cd31fe9f.gif" width="600px" alt><br>
</div>

### Usage Tips

- **Which model is used in the demo tool?**

Please check the following default arguments in the script. You can also choose other models from the [MMDetection Model Zoo](https://github.com/open-mmlab/mmdetection/blob/master/docs/model_zoo.md) and the [MMPose Model Zoo](https://mmpose.readthedocs.io/en/latest/modelzoo.html#), or use your own models (see the example after the table).

| Model | Arguments |
| :--: | :-- |
| Detection | `--det-config`, `--det-checkpoint` |
| Human Pose | `--human-pose-config`, `--human-pose-checkpoint` |
| Animal Pose | `--animal-pose-config`, `--animal-pose-checkpoint` |
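
For example, to use a different detection model, you could point these arguments at your own files (the paths below are placeholders, not files shipped with this PR):

```shell
python demo/webcam_demo.py \
    --det-config path/to/my_det_config.py \
    --det-checkpoint path/to/my_det_checkpoint.pth
```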

- **Can this tool run without GPU?**

Yes, you can set `--device=cpu` and the model inference will be performed on the CPU. Of course, this may result in a much lower inference FPS than running on a GPU.
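
For example:

```shell
python demo/webcam_demo.py --device=cpu
```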

- **Why is there a time delay between the pose visualization and the video?**

The video I/O and the model inference run asynchronously, and the latter usually takes longer for a single frame. To alleviate the time delay, you can:

1. set `--display-delay=MILLISECONDS` to defer the video stream, according to the inference delay shown at the top-left corner, or

2. set `--synchronous-mode` to force the video stream to stay aligned with the inference results. This may reduce the video display FPS.
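
For example (the 500 ms value below is only an illustration; use the actual inference delay shown in the on-screen statistics):

```shell
# Option 1: defer the displayed video to match the inference delay
python demo/webcam_demo.py --display-delay=500

# Option 2: keep the video stream aligned with the inference results
python demo/webcam_demo.py --synchronous-mode
```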

- **Can this tool process video files?**

Yes. You can set `--cam_id=VIDEO_FILE_PATH` to run the demo tool in offline mode on a video file. Note that `--synchronous-mode` should also be set in this case.
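
For example (the video path is a placeholder):

```shell
python demo/webcam_demo.py --cam_id=/path/to/video.mp4 --synchronous-mode
```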

- **How to enable/disable the special effects?**

The special effects can be enabled/disabled at launch time by setting arguments like `--bugeye`, `--sunglasses`, *etc*. You can also toggle the effects with keyboard shortcuts like `b` and `s` after the tool starts.
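
For example, to launch with the sunglasses effect enabled:

```shell
python demo/webcam_demo.py --sunglasses
```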

- **What if my computer doesn't have a camera?**

You can use a smartphone as a webcam with apps like [Camo](https://reincubate.com/camo/) or [DroidCam](https://www.dev47apps.com/).
14 changes: 7 additions & 7 deletions demo/mmdetection_cfg/cascade_rcnn_x101_64x4d_fpn_1class.py
@@ -206,7 +206,7 @@
             max_per_img=100)))
 
 dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
+data_root = 'data/coco'
 img_norm_cfg = dict(
     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
 train_pipeline = [
@@ -239,17 +239,17 @@
     workers_per_gpu=2,
     train=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_train2017.json',
-        img_prefix=data_root + 'train2017/',
+        ann_file=f'{data_root}/annotations/instances_train2017.json',
+        img_prefix=f'{data_root}/train2017/',
         pipeline=train_pipeline),
     val=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_val2017.json',
-        img_prefix=data_root + 'val2017/',
+        ann_file=f'{data_root}/annotations/instances_val2017.json',
+        img_prefix=f'{data_root}/val2017/',
         pipeline=test_pipeline),
     test=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_val2017.json',
-        img_prefix=data_root + 'val2017/',
+        ann_file=f'{data_root}/annotations/instances_val2017.json',
+        img_prefix=f'{data_root}/val2017/',
         pipeline=test_pipeline))
 evaluation = dict(interval=1, metric='bbox')
14 changes: 7 additions & 7 deletions demo/mmdetection_cfg/cascade_rcnn_x101_64x4d_fpn_coco.py
@@ -207,7 +207,7 @@
             max_per_img=100)))
 
 dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
+data_root = 'data/coco'
 img_norm_cfg = dict(
     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
 train_pipeline = [
@@ -240,17 +240,17 @@
     workers_per_gpu=2,
     train=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_train2017.json',
-        img_prefix=data_root + 'train2017/',
+        ann_file=f'{data_root}/annotations/instances_train2017.json',
+        img_prefix=f'{data_root}/train2017/',
         pipeline=train_pipeline),
     val=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_val2017.json',
-        img_prefix=data_root + 'val2017/',
+        ann_file=f'{data_root}/annotations/instances_val2017.json',
+        img_prefix=f'{data_root}/val2017/',
         pipeline=test_pipeline),
     test=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_val2017.json',
-        img_prefix=data_root + 'val2017/',
+        ann_file=f'{data_root}/annotations/instances_val2017.json',
+        img_prefix=f'{data_root}/val2017/',
         pipeline=test_pipeline))
 evaluation = dict(interval=1, metric='bbox')
14 changes: 7 additions & 7 deletions demo/mmdetection_cfg/faster_rcnn_r50_fpn_1class.py
@@ -133,7 +133,7 @@
         ))
 
 dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
+data_root = 'data/coco'
 img_norm_cfg = dict(
     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
 train_pipeline = [
@@ -166,17 +166,17 @@
     workers_per_gpu=2,
     train=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_train2017.json',
-        img_prefix=data_root + 'train2017/',
+        ann_file=f'{data_root}/annotations/instances_train2017.json',
+        img_prefix=f'{data_root}/train2017/',
         pipeline=train_pipeline),
     val=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_val2017.json',
-        img_prefix=data_root + 'val2017/',
+        ann_file=f'{data_root}/annotations/instances_val2017.json',
+        img_prefix=f'{data_root}/val2017/',
         pipeline=test_pipeline),
     test=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_val2017.json',
-        img_prefix=data_root + 'val2017/',
+        ann_file=f'{data_root}/annotations/instances_val2017.json',
+        img_prefix=f'{data_root}/val2017/',
         pipeline=test_pipeline))
 evaluation = dict(interval=1, metric='bbox')
14 changes: 7 additions & 7 deletions demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py
@@ -133,7 +133,7 @@
         ))
 
 dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
+data_root = 'data/coco'
 img_norm_cfg = dict(
     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
 train_pipeline = [
@@ -166,17 +166,17 @@
     workers_per_gpu=2,
     train=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_train2017.json',
-        img_prefix=data_root + 'train2017/',
+        ann_file=f'{data_root}/annotations/instances_train2017.json',
+        img_prefix=f'{data_root}/train2017/',
         pipeline=train_pipeline),
     val=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_val2017.json',
-        img_prefix=data_root + 'val2017/',
+        ann_file=f'{data_root}/annotations/instances_val2017.json',
+        img_prefix=f'{data_root}/val2017/',
         pipeline=test_pipeline),
     test=dict(
         type=dataset_type,
-        ann_file=data_root + 'annotations/instances_val2017.json',
-        img_prefix=data_root + 'val2017/',
+        ann_file=f'{data_root}/annotations/instances_val2017.json',
+        img_prefix=f'{data_root}/val2017/',
         pipeline=test_pipeline))
 evaluation = dict(interval=1, metric='bbox')
140 changes: 140 additions & 0 deletions demo/mmdetection_cfg/yolov3_d53_320_273e_coco.py
@@ -0,0 +1,140 @@
# model settings
model = dict(
    type='YOLOV3',
    pretrained='open-mmlab://darknet53',
    backbone=dict(type='Darknet', depth=53, out_indices=(3, 4, 5)),
    neck=dict(
        type='YOLOV3Neck',
        num_scales=3,
        in_channels=[1024, 512, 256],
        out_channels=[512, 256, 128]),
    bbox_head=dict(
        type='YOLOV3Head',
        num_classes=80,
        in_channels=[512, 256, 128],
        out_channels=[1024, 512, 256],
        anchor_generator=dict(
            type='YOLOAnchorGenerator',
            base_sizes=[[(116, 90), (156, 198), (373, 326)],
                        [(30, 61), (62, 45), (59, 119)],
                        [(10, 13), (16, 30), (33, 23)]],
            strides=[32, 16, 8]),
        bbox_coder=dict(type='YOLOBBoxCoder'),
        featmap_strides=[32, 16, 8],
        loss_cls=dict(
            type='CrossEntropyLoss',
            use_sigmoid=True,
            loss_weight=1.0,
            reduction='sum'),
        loss_conf=dict(
            type='CrossEntropyLoss',
            use_sigmoid=True,
            loss_weight=1.0,
            reduction='sum'),
        loss_xy=dict(
            type='CrossEntropyLoss',
            use_sigmoid=True,
            loss_weight=2.0,
            reduction='sum'),
        loss_wh=dict(type='MSELoss', loss_weight=2.0, reduction='sum')),
    # training and testing settings
    train_cfg=dict(
        assigner=dict(
            type='GridAssigner',
            pos_iou_thr=0.5,
            neg_iou_thr=0.5,
            min_pos_iou=0)),
    test_cfg=dict(
        nms_pre=1000,
        min_bbox_size=0,
        score_thr=0.05,
        conf_thr=0.005,
        nms=dict(type='nms', iou_threshold=0.45),
        max_per_img=100))
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco'
img_norm_cfg = dict(mean=[0, 0, 0], std=[255., 255., 255.], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile', to_float32=True),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='PhotoMetricDistortion'),
    dict(
        type='Expand',
        mean=img_norm_cfg['mean'],
        to_rgb=img_norm_cfg['to_rgb'],
        ratio_range=(1, 2)),
    dict(
        type='MinIoURandomCrop',
        min_ious=(0.4, 0.5, 0.6, 0.7, 0.8, 0.9),
        min_crop_size=0.3),
    dict(type='Resize', img_scale=(320, 320), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(320, 320),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=8,
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        ann_file=f'{data_root}/annotations/instances_train2017.json',
        img_prefix=f'{data_root}/train2017/',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=f'{data_root}/annotations/instances_val2017.json',
        img_prefix=f'{data_root}/val2017/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=f'{data_root}/annotations/instances_val2017.json',
        img_prefix=f'{data_root}/val2017/',
        pipeline=test_pipeline))
# optimizer
optimizer = dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0.0005)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=2000,  # same as burn-in in darknet
    warmup_ratio=0.1,
    step=[218, 246])
# runtime settings
runner = dict(type='EpochBasedRunner', max_epochs=273)
evaluation = dict(interval=1, metric=['bbox'])

checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook')
    ])
# yapf:enable
custom_hooks = [dict(type='NumClassCheckHook')]

dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
@@ -202,7 +202,7 @@
                 std=[58.395, 57.12, 57.375],
                 to_rgb=True),
             dict(type='Pad', size_divisor=32),
-            dict(type='ImageToTensor', keys=['img']),
+            dict(type='DefaultFormatBundle', keys=['img']),
             dict(type='VideoCollect', keys=['img'])
         ])
 ]
@@ -272,7 +272,7 @@
                 std=[58.395, 57.12, 57.375],
                 to_rgb=True),
             dict(type='Pad', size_divisor=32),
-            dict(type='ImageToTensor', keys=['img']),
+            dict(type='DefaultFormatBundle', keys=['img']),
             dict(type='VideoCollect', keys=['img'])
         ])
 ]),
@@ -296,7 +296,7 @@
                 std=[58.395, 57.12, 57.375],
                 to_rgb=True),
             dict(type='Pad', size_divisor=32),
-            dict(type='ImageToTensor', keys=['img']),
+            dict(type='DefaultFormatBundle', keys=['img']),
             dict(type='VideoCollect', keys=['img'])
         ])
 ]))
Binary file added demo/resources/sunglasses.jpg