Robust Human Matting via Semantic Guidance (ACCV 2022)
SGHM is a robust and accurate method for automatic human matting that requires no trimap input. Semantic guidance is incorporated into our model, which first predicts a coarse mask and then a fine-grained alpha matte.
- Semantic Guided Network: A segmentation sub-network is first employed to predict a coarse human mask, and it is then reused to guide the matting process to focus on the area surrounding the mask. To improve performance and reduce computation, the semantic encoder is shared between the two tasks. Under the guidance of powerful semantic features, our matting module handles many challenging cases successfully (see the sketch after this list).
- Data Efficient: With only about 200 matting images, our method produces high-quality alpha details. Matting performance can be improved efficiently by collecting more coarse human masks, which are quick and easy to annotate, rather than paying the high cost of fine-detailed alpha annotation.
- SOTA Result: We conduct qualitative and quantitative comparisons on 5 benchmarks. SGHM outperforms other methods across all of them.
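The shared-encoder design can be illustrated with a minimal PyTorch sketch. Everything below (class names, layer sizes, and the way the coarse mask is fed back into the matting head) is an illustrative assumption, not the actual SGHM architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SGHMSketch(nn.Module):
    """Illustrative two-stage model: one shared encoder, two decoders.

    NOT the official SGHM code; this only sketches the idea of reusing
    a single semantic encoder for both segmentation and matting.
    """

    def __init__(self):
        super().__init__()
        # Shared semantic encoder (stand-in for a ResNet-style backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Segmentation head: predicts the coarse human mask.
        self.seg_decoder = nn.Conv2d(128, 1, 1)
        # Matting head: refines alpha from shared features + coarse mask.
        self.matting_decoder = nn.Sequential(
            nn.Conv2d(128 + 1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, image):
        feats = self.encoder(image)  # computed once, used by both heads
        coarse = torch.sigmoid(self.seg_decoder(feats))
        alpha = torch.sigmoid(
            self.matting_decoder(torch.cat([feats, coarse], dim=1)))
        # Upsample both predictions back to the input resolution.
        size = image.shape[-2:]
        coarse = F.interpolate(coarse, size=size, mode="bilinear", align_corners=False)
        alpha = F.interpolate(alpha, size=size, mode="bilinear", align_corners=False)
        return coarse, alpha

model = SGHMSketch()
coarse, alpha = model(torch.randn(1, 3, 256, 256))
print(coarse.shape, alpha.shape)  # both torch.Size([1, 1, 256, 256])
```

The point of the sketch is that the encoder runs once and feeds both heads, so the matting decoder gets strong semantic features essentially for free.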
- Install Segpeo from GitHub
```
pip install git+https://github.com/BreezeWhite/segpeo
```
```
usage: segpeo [-h] -i IMAGE_PATH [-o OUTPUT_DIR] [-c CHECKPOINT_PATH]

Test Images

options:
  -h, --help            show this help message and exit
  -i IMAGE_PATH, --image-path IMAGE_PATH
                        Could be a file path or a directory that contains images
  -o OUTPUT_DIR, --output-dir OUTPUT_DIR
                        Path to output the result image. Default to the same
                        folder of input image.
  -c CHECKPOINT_PATH, --checkpoint-path CHECKPOINT_PATH
                        Optionally provide your own checkpoint to use. Will
                        download and use the default model if not specified.
```
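For example, a full invocation with every flag might look like this (the paths are placeholders, and the checkpoint file name is an assumption based on the weight file used for video below):

```
segpeo -i ./photos -o ./results -c ./SGHM-ResNet50.pth
```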
- Test your own images
The output path defaults to the same folder as the input.
```
# Will automatically download the checkpoint upon first run.
segpeo --image-path "PATH_TO_FILE_OR_DIR"
```
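Once a matte has been produced, compositing onto a new background follows the standard equation C = αF + (1 − α)B. A minimal sketch, assuming the input photo and a grayscale alpha matte saved beside it (both file names are placeholders; check the actual output name segpeo writes):

```python
import numpy as np
from PIL import Image

# Placeholder paths; adjust to wherever segpeo wrote its result.
image = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32)
alpha = np.asarray(Image.open("photo_matte.png").convert("L"), dtype=np.float32) / 255.0

background = np.full_like(image, 255.0)  # plain white background
composite = alpha[..., None] * image + (1.0 - alpha[..., None]) * background
Image.fromarray(composite.astype(np.uint8)).save("composite.png")
```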
- Test your video (not yet incorporated into segpeo; please refer to the original repo)
```
python test_video.py \
    --video "PATH_TO_INPUT_VIDEO" \
    --output-video "PATH_TO_OUTPUT_VIDEO" \
    --pretrained-weight SGHM-ResNet50.pth
```
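Until video support is incorporated, a per-frame workaround is possible with OpenCV. The `predict_alpha` helper below is hypothetical; wire it to whatever image-level API you end up using, since the package's Python interface is not documented above:

```python
import cv2
import numpy as np

def predict_alpha(frame_bgr: np.ndarray) -> np.ndarray:
    """Hypothetical wrapper that returns a float32 alpha matte in [0, 1].

    Replace this stub with an actual call into segpeo / SGHM.
    """
    raise NotImplementedError

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("alpha.mp4", fourcc, fps, (w, h), isColor=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    alpha = predict_alpha(frame)               # shape (h, w), float32 in [0, 1]
    out.write((alpha * 255).astype(np.uint8))  # write matte as a grayscale frame

cap.release()
out.release()
```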
If you use this code in your research, please consider starring this repo and citing our paper:
```
@inproceedings{chen2022sghm,
  author    = {Chen, Xiangguang and Zhu, Ye and Li, Yu and Fu, Bingtao and Sun, Lei and Shan, Ying and Liu, Shan},
  title     = {Robust Human Matting via Semantic Guidance},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  year      = {2022}
}
```
In this project, parts of the code are adapted from BMV2 and MG. We thank the authors for sharing the code for their great work.