This repository contains the code for X2Face, presented at ECCV 2018.
The demo notebooks demonstrate the following:
- How to load the pre-trained models
- How to drive a face with another face in ./UnwrapMosaic/Face2Face_UnwrapMosaic.ipynb
- How to edit the embedded face with a drawing or tattoo in ./UnwrapMosaic/Face2Face_UnwrapMosaic.ipynb
- How to drive with pose in ./UnwrapMosaic/Pose2Face.ipynb
- How to drive with audio in ./UnwrapMosaic/Audio2Face.ipynb
Update: We have added updated code and installation instructions for running the demo notebooks with pytorch 0.4.1 and python 2.7 in the branch 'pytorch_0.4.1', and with pytorch 0.4.1 and python 3.7 in the branch 'py37_pytorch_0.4.1'.
To run the notebooks, you need:
- pytorch=0.2.0_4
- torchvision
- PIL
- numpy
- matplotlib
It is important to use this version of pytorch: the defaults for sampling (and some other operations) changed in more recent releases, and with those versions the pre-trained models will not work properly.
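As a quick sanity check before running the notebooks, the installed version can be verified at the top of a notebook (a minimal sketch based on the requirement above):

```python
import torch

# The pre-trained models depend on pytorch 0.2 behaviour (e.g. sampling
# defaults), so fail early if a different version is installed.
assert torch.__version__.startswith('0.2'), (
    'Expected pytorch 0.2.0_4, found %s' % torch.__version__)
```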
Once the environment is set up, download the pre-trained models from the project page and update the model paths in the notebooks accordingly (this should simply require setting BASE_MODEL_PATH in the notebook to the correct location).
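For example, the top of a notebook might then look like the following (a minimal sketch: the checkpoint filename below is a placeholder, not necessarily the name of the released file, so substitute the actual filename from the project page):

```python
import torch

# Hypothetical location of the downloaded pre-trained models.
BASE_MODEL_PATH = '/path/to/downloaded/models/'

# 'x2face_model.pth' is a placeholder filename; substitute the actual
# file downloaded from the project page.
checkpoint = torch.load(BASE_MODEL_PATH + 'x2face_model.pth')
```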
If you find this useful in your work, please cite the paper appropriately.
Training code requires:
- tensorboardX
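tensorboardX is used to log training progress to the results folder; the basic pattern looks like this (a minimal sketch with illustrative tag names and values, not code taken from the training script):

```python
from tensorboardX import SummaryWriter

# Event files written here can be viewed with: tensorboard --logdir <dir>
writer = SummaryWriter('/tmp/x2face_logs')  # illustrative results folder

for step in range(100):
    loss = 1.0 / (step + 1)  # placeholder standing in for the photometric loss
    writer.add_scalar('train/photometric_loss', loss, step)

writer.close()
```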
To train a model yourself, we have provided an example training file that uses only the photometric loss. To run this:
- Go to the website
- Download the images and training/testing splits from the data section
- Update the paths in ./UnwrapMosaic/VoxCelebData_withmask.py (see the sketch after these steps)
- Run the code with
python train_model.py --results_folder $WHERE_TO_SAVE_TENSORBOARD_FILES --model_epoch_path $WHERE_TO_SAVE_MODELS
(Note that training can be run with any version of pytorch; it is only important that you train and test with the same version.)
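The path update in ./UnwrapMosaic/VoxCelebData_withmask.py amounts to pointing the dataset code at the downloaded data. A hypothetical sketch (the variable names below are illustrative; the actual names in the file may differ):

```python
# In ./UnwrapMosaic/VoxCelebData_withmask.py -- illustrative names only;
# edit the corresponding paths that actually appear in the file.
VOXCELEB_IMAGE_ROOT = '/data/voxceleb/images/'        # downloaded images
TRAIN_SPLIT_FILE = '/data/voxceleb/splits/train.txt'  # training split file
TEST_SPLIT_FILE = '/data/voxceleb/splits/test.txt'    # testing split file
```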