
Dense-Face

This repo is the implementation of "Dense-Face: Personalized Face Generation Model via Dense Annotation Prediction" (see the arXiv paper and the project page).

[Teaser figure]

Quick Demo

  • To create your environment, run
    conda env create -f Dense-Face.yaml
    
    or manually install pytorch=1.12.1 and torchvision=0.13.1 under Python 3.8 (a quick environment check follows this list).
  • Download the pre-trained weights via the link, and put them in inference_code/ckpt.
  • Visualize the results by running
    cd ./inference_code
    bash inference.sh
    
    The results are dumped into ./inference_code/output.
  • More qualitative results can be found on the project page.
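
As a quick sanity check on the environment, the snippet below (a convenience sketch, not part of the demo) prints the installed versions against the ones pinned above:

    # Quick environment check; the expected versions are the ones pinned above.
    import torch
    import torchvision

    print(torch.__version__)          # expect 1.12.1
    print(torchvision.__version__)    # expect 0.13.1
    print(torch.cuda.is_available())  # True if a CUDA GPU is visible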

T2I-Dense-Face Dataset

  • T2I-Dense-Face contains face images from CASIA and CelebA; we release the CASIA portion.
  • We offer dataset_usage/*.ipynb to help you understand the dataset (a minimal loading sketch follows this list).
  • Download CASIA_tiny via the link and put it in CASIA_tiny. This is required to run the face-generation mode.
  • To browse 5%~10% of the proposed dataset, find CASIA_small via the (link); the download link for CASIA_full can be obtained by emailing guoxia11@msu.edu.
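
For a first look at the released images, the sketch below simply walks the downloaded CASIA_tiny folder with PIL; the folder layout and file extensions are our assumptions, and dataset_usage/*.ipynb documents the actual format and the dense annotations.

    # A minimal browsing sketch, assuming CASIA_tiny holds plain image files;
    # see dataset_usage/*.ipynb for the annotations that accompany them.
    from pathlib import Path
    from PIL import Image

    root = Path("CASIA_tiny")  # folder from the download link above
    image_paths = [p for p in sorted(root.rglob("*"))
                   if p.suffix.lower() in {".jpg", ".jpeg", ".png"}]
    for path in image_paths[:5]:  # peek at the first few images
        image = Image.open(path)
        print(path, image.size, image.mode)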

Detailed Method

[Method overview figure]

Pre-trained Weights

  • Download the three weights via the link, and put them in inference_code/ckpt.
  • These three weights are used for the text-editing mode (*.safetensors), the face-generation mode (epoch*.ckpt), and training from scratch (*init.ckpt).

Step 1: Dense-Face's text-editing mode.

  • Please refer to inference_code/stage_1_text_editing/stage_1_text_editing.ipynb (a minimal sketch of the call follows this list).
  • The generated results will be saved in inference_code/output_stage_1.
  • Do not forget to install diffusers first:
    pip install diffusers
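
For orientation, here is a minimal sketch of what the notebook does, assuming a recent diffusers release and that the released *.safetensors file is a Stable-Diffusion-style checkpoint; the checkpoint file name and the prompt are placeholders, and the notebook remains the authoritative interface:

    # A minimal sketch of the text-editing call, not the notebook's exact code.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file(
        "inference_code/ckpt/dense_face.safetensors",  # hypothetical file name
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe("a photo of a person reading a book").images[0]
    image.save("inference_code/output_stage_1/sample.png")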

Step 2: Generate the conditions (e.g., face region and head pose) based on Fig. 8; see annotation_toolbox/dense_annotation_demo.ipynb and the sketch below.
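
As one way to produce the face-region condition, the sketch below uses mediapipe's face detector (the library pinned by annotation_toolbox/Dense-Face-mediapipe.yml) to turn a detected bounding box into a binary mask; the input path is a placeholder, and head-pose estimation plus the exact cropping logic live in the annotation notebook:

    # A hedged sketch of building a face-region mask with mediapipe;
    # the annotation notebook implements the full condition pipeline.
    import cv2
    import numpy as np
    import mediapipe as mp

    image = cv2.imread("inference_code/output_stage_1/sample.png")  # placeholder path
    h, w = image.shape[:2]

    with mp.solutions.face_detection.FaceDetection(model_selection=1) as detector:
        results = detector.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    mask = np.zeros((h, w), dtype=np.uint8)
    if results.detections:  # fill the first detected face's bounding box
        box = results.detections[0].location_data.relative_bounding_box
        x0, y0 = int(box.xmin * w), int(box.ymin * h)
        x1, y1 = x0 + int(box.width * w), y0 + int(box.height * h)
        mask[max(y0, 0):y1, max(x0, 0):x1] = 255

    cv2.imwrite("inference_code/mask/sample.png", mask)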

Step 3: Dense-Face's face-generation mode.

  • Given the conditions generated in Step 2, we modify the results from Step 1. For example, we save reference images and their ArcFace features in inference_code/reference_id; the source image and its face-region mask are in inference_code/cropped_face/ and inference_code/mask/, respectively (a feature-extraction sketch follows this list). Then run:
    cd ../inference_code
    bash inference.sh
  • It uses the reference image's ArcFace feature (inference_code/reference_id) to inpaint the face region of inference_code/cropped_face/.
  • The results are dumped into inference_code/output.
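
If you need to prepare your own reference identity, the sketch below extracts an ArcFace embedding with the insightface package; that the repo expects insightface's 512-D normed embedding (and the file names used here) is our assumption, so check inference.py and the files shipped in inference_code/reference_id for the exact format:

    # A hedged sketch of reference-feature extraction via insightface.
    import cv2
    import numpy as np
    from insightface.app import FaceAnalysis

    app = FaceAnalysis(name="buffalo_l")  # bundles an ArcFace recognition model
    app.prepare(ctx_id=0, det_size=(640, 640))

    image = cv2.imread("inference_code/reference_id/reference.png")  # placeholder name
    face = app.get(image)[0]           # assumes one detectable face
    embedding = face.normed_embedding  # shape (512,), L2-normalized
    np.save("inference_code/reference_id/reference_arcface.npy", embedding)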

Source Code Structure

A quick view of the code structure:

./Dense-Face
    ├── Dense-Face.yaml 
    ├── inference_code
    │      ├── stage_1_text_editing/stage_1_text_editing.ipynb (the Hugging Face interface for the text-editing mode)
    │      ├── inference.py (demo inference code)
    │      ├── inference.sh (demo inference entry point)
    │      ├── main.py (preliminary training code)
    │      ├── main.sh (training entry point)
    │      ├── reference_id (reference image and ArcFace feature)
    │      ├── cropped_face (base image)
    │      ├── mask (face region mask)
    │      ├── output (output generated samples)
    │      └── ...
    ├── annotation_toolbox
    │      ├── dense_annotation_demo.ipynb (Crop the SD output image and produce the face region mask)
    │      ├── Dense-Face-mediapipe.yml (env. file for the annotation)
    │      └── ...
    ├── dataset_usage
    │      └── readCelebAFacesDataset.ipynb (instructions on how to use the dataset)
    └── test_samples (we offer 25 celebrity test samples)

Reference

If you find our work helpful, please cite:

@article{denseface,
  title={Dense-Face: Personalized Face Generation Model via Dense Annotation Prediction}, 
  author={Xiao Guo and Manh Tran and Jiaxin Cheng and Xiaoming Liu},
  journal={arXiv preprint arXiv:2412.18149},
  year={2024}
}
