
Image Animation Turbo Boost

Aims to accelerate image animation model inference through inference frameworks such as ONNX, TensorRT, and OpenVINO.


Demo videos: 354.MP4, sd1665028774_2.MP4

FOMM

The model comes from FOMM.

Convert

  • Convert to ONNX:
python export_onnx.py --output-name-kp kp_detector.onnx --output-name-fomm fomm.onnx --config config/vox-adv-256.yaml --ckpt ./checkpoints/vox-adv-cpk.pth.tar
  • Convert to TensorRT:

dev environment: docker pull chaoyiyuan/tensorrt8:latest

Run:

onnx2trt fomm.onnx -o fomm.trt
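
If the conversion succeeds, the resulting engine can be sanity-checked by deserializing it with the TensorRT Python API. A minimal sketch (TensorRT 8.x; the file name follows the command above):

```python
# Sketch: deserialize fomm.trt and list its I/O bindings (TensorRT 8.x API).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("fomm.trt", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Print binding names and shapes so they can be matched against the ONNX graph.
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i), engine.binding_is_input(i))
```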

Demo


TPSMM

The model comes from TPSMM.

Convert

  • Convert to ONNX:
python export_onnx.py --output-name-kp kp_detector.onnx --output-name-tpsmm tpsmm.onnx --config config/vox-256.yaml --ckpt ./checkpoints/vox.pth.tar
  • Convert to OpenVINO:

dev environment: docker pull openvino/ubuntu18_dev:2021.4.2_src

python3 mo.py --input_model ./tpsmm.onnx  --output_dir ./openvino --data_type FP32
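
A quick way to confirm the converted IR loads is with the 2021.4 Inference Engine Python API (matching the dev container above; paths are assumptions based on the mo.py command):

```python
# Sketch: load the IR produced by mo.py and list its inputs/outputs
# (OpenVINO 2021.4 Inference Engine API).
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="./openvino/tpsmm.xml", weights="./openvino/tpsmm.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
print("inputs:", list(net.input_info.keys()))
print("outputs:", list(net.outputs.keys()))
```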

Demo

ONNXRuntime

To test the Python demo, run:

python demo/ONNXRuntime/python/demo.py --source ../assets/source.png --driving ../assets/driving.mp4 --onnx-file-tpsmm tpsmm.onnx --onnx-file-kp kp_detector.onnx
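
Roughly, the demo creates one ONNXRuntime session per graph and feeds 256x256 normalized frames. A minimal sketch of that setup (the NCHW layout, the [0, 1] value range, and the input names are assumptions that depend on the exported graphs):

```python
# Sketch: create the two sessions and preprocess the source image.
import cv2
import numpy as np
import onnxruntime as ort

kp_sess = ort.InferenceSession("kp_detector.onnx", providers=["CPUExecutionProvider"])
tpsmm_sess = ort.InferenceSession("tpsmm.onnx", providers=["CPUExecutionProvider"])

def to_tensor(img_bgr):
    # Resize to 256x256, convert BGR->RGB, scale to [0, 1], add batch dim (NCHW).
    img = cv2.cvtColor(cv2.resize(img_bgr, (256, 256)), cv2.COLOR_BGR2RGB)
    return img.astype(np.float32).transpose(2, 0, 1)[None] / 255.0

source = to_tensor(cv2.imread("../assets/source.png"))
kp_source = kp_sess.run(None, {kp_sess.get_inputs()[0].name: source})
```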

To test the C++ demo:

  • build
mkdir build && cd build
cmake ..
make -j8
./onnx_demo xxx/tpsmm.onnx xxx/kp_detector.onnx xxx/source.png xxx/driving.mp4 ./generated_onnx.mp4

OpenVINO

To test the Python demo, run:

python demo/OpenVINO/python/demo.py --source ../assets/source.png --driving ../assets/driving.mp4 --xml-kp xxxx/kp_detector_sim.xml --xml-tpsmm xxx/tpsmm_sim.xml --bin-kp xxx/kp_detector_sim.bin --bin-tpsmm xxx/tpsmm_sim.bin
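
For reference, one forward pass through an IR with the same Inference Engine API looks roughly like this (a sketch; the file names follow the command above and the dummy input is only for shape checking):

```python
# Sketch: run a dummy tensor through the kp_detector IR
# (OpenVINO 2021.4 Inference Engine API).
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="kp_detector_sim.xml", weights="kp_detector_sim.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
dummy = np.zeros(net.input_info[input_name].input_data.shape, dtype=np.float32)
outputs = exec_net.infer(inputs={input_name: dummy})
print({name: blob.shape for name, blob in outputs.items()})
```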

To test the C++ demo:

  • build
mkdir build && cd build
cmake ..
make -j8
./openvino_demo xxx/tpsmm.xml xxx/tpsmm.bin xxx/kp_detector.xml xxx/kp_detector.bin xxx/source.png xxx/driving.mp4 ./generated_onnx.mp4

Result

| Framework     | Elapsed (s) | Language |
|---------------|-------------|----------|
| PyTorch (CPU) | 6           | Python   |
| ONNXRuntime   | ~1.2        | Python   |
| ONNXRuntime   | ~1.6        | C++      |
| OpenVINO      | ~0.6        | Python   |
| OpenVINO      | ~0.6        | C++      |

The ONNXRuntime C++ demo is slower than the Python one, possibly because of the libraries I compiled myself.
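
The numbers above are wall-clock elapsed times. A minimal sketch of how such a measurement could be taken (infer_frame is a hypothetical stand-in for one kp_detector + generator pass):

```python
# Sketch: total and per-frame wall-clock time for an arbitrary inference callable.
import time

def elapsed_seconds(infer_frame, frames):
    start = time.perf_counter()
    for frame in frames:
        infer_frame(frame)
    total = time.perf_counter() - start
    return total, total / max(len(frames), 1)
```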


Generated by the Python ONNXRuntime demo.

Generated by the Python OpenVINO demo.

Generated by the C++ ONNXRuntime demo.

Generated by the C++ OpenVINO demo.

To Do

Conversion to TensorRT fails, possibly because the scatter ops are not supported. According to the related issues, this should be fixed in TensorRT 8.4 GA.

Pretrained Models

Please download the pre-trained models from the following links.

| Path           | Description                        |
|----------------|------------------------------------|
| FOMM           | Original pretrained PyTorch model. |
| TPSMM          | Original pretrained PyTorch model. |
| FOMM Onnx      | ONNX model of FOMM.                |
| FOMM TensorRT  | TensorRT model of FOMM.            |
| TPSMM Onnx     | ONNX model of TPSMM.               |
| TPSMM OpenVINO | OpenVINO model of TPSMM.           |

Acknowledgments

FOMM is AliaksandrSiarohin's work.

TPSMM is yoyo-nb's work.

Thanks for their excellent work!

My work was to modify part of the network so that the models can be converted to ONNX, OpenVINO, or TensorRT.
