The `character_model_mediapipe_puppeteer` program allows the user to control trained student models with their facial movements, which are captured by a web camera and processed by the MediaPipe FaceLandmarker model.
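For readers curious how such a capture-and-detect loop is typically wired up, here is a minimal sketch. It is illustrative only, not the demo's actual implementation; it assumes OpenCV and the `mediapipe` package are installed and that a `face_landmarker.task` model file sits in the working directory.

```python
# Minimal sketch (not the demo's code): read webcam frames and run
# MediaPipe FaceLandmarker on each one. Assumes `pip install opencv-python mediapipe`
# and a face_landmarker.task model file in the working directory.
import cv2
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=mp_python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,  # blendshape scores are what drive a character's pose
    num_faces=1)
landmarker = vision.FaceLandmarker.create_from_options(options)

video_capture = cv2.VideoCapture(0)
while True:
    ok, frame = video_capture.read()
    if not ok:
        break
    # OpenCV delivers BGR frames; MediaPipe expects RGB.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)
    result = landmarker.detect(mp_image)
    if result.face_blendshapes:
        # Each entry has a category_name (e.g., "jawOpen") and a score in [0, 1].
        scores = {b.category_name: b.score for b in result.face_blendshapes[0]}
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
video_capture.release()
cv2.destroyAllWindows()
```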
Before you invoke the program, please make sure your computer has a web camera plugged in. The program will use a web camera, but it does not let you specify which one. If your machine has more than one web camera, you can turn off all cameras except the one you want to use.
You can also inspect the source code and change the

```
video_capture = cv2.VideoCapture(0)
```

line to choose the particular camera you want to use.
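If you do not know which index maps to which camera, one quick way to find out is to probe a few indices with OpenCV. This is a generic helper, not code from the repository, and the range of 5 indices is an arbitrary assumption.

```python
# Hypothetical helper (not part of the repository): probe camera indices
# 0..4 and report which ones OpenCV can actually read a frame from.
import cv2

for index in range(5):
    capture = cv2.VideoCapture(index)
    ok, _frame = capture.read()
    capture.release()
    print(f"camera index {index}: {'available' if ok else 'not available'}")
```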
Make sure you have (1) created a Python environment and (2) downloaded the model files as instructed in the main README file.
To run the program under Linux or macOS:

- Open a shell.
- `cd` to the repository's directory.

  ```
  cd SOMEWHERE/talking-head-anime-4-demo
  ```

- Run the program.

  ```
  bin/run src/tha4/app/character_model_mediapipe_puppeteer.py
  ```
To run the program under Windows:

- Open a shell.
- `cd` to the repository's directory.

  ```
  cd SOMEWHERE\talking-head-anime-4-demo
  ```

- Run the program.

  ```
  bin\run.bat src\tha4\app\character_model_mediapipe_puppeteer.py
  ```