ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving

[📄Paper]   [🚩Project Page]  

[📸Model Card]   [🤗Hugging Face]

🌠 Key Features:

  1. Portrait generation with extremely high ID fidelity, without sacrificing diversity or text controllability.
  2. Injection of FaceParsing and FaceID information into the diffusion model.
  3. Rapid customization within seconds, with no additional LoRA training.
  4. Can serve as an adapter alongside other base models and community LoRA modules.

🔥 Examples

🚩 To-Do List

Starring the repo helps us move this forward. The extended code and data will be published upon acceptance of our paper.

  • Release ConsistentID training, evaluation code, and demo!
  • Release the SDXL model trained with more data, with enhanced resolution and generalizability.
  • Release the multi-ID input version and the inpainting/ControlNet versions, etc., to further improve diversity.


🏷️ Introduction

ConsistentID is a work in the field of AIGC that injects FaceParsing and FaceID information into the diffusion model. Previous work focused mainly on whole-face ID preservation; even recently proposed fine-grained ID preservation models such as InstantID inject facial ID features in a fixed way. To maintain fine-grained ID consistency of facial features more flexibly, we reconstructed a dataset of 50,000 multimodal fine-grained ID samples to train the proposed FacialEncoder model, which supports common use cases such as personalized photos, gender/age changes, and identity mixing.

We also define FGIS, a unified benchmark for fine-grained identity preservation, covering several common facial personalization scenes and subjects, and construct a baseline fine-grained ID preservation model.

Finally, extensive experiments show that ConsistentID achieves SOTA performance on facial personalization tasks. They verify that ConsistentID improves ID consistency and can even modify individual facial features through finer-grained prompts, which opens up a direction for future research on fine-grained facial personalization.

🔧 Requirements

conda create --name ConsistentID python=3.8.10
conda activate ConsistentID
pip install -U pip

# Install requirements
pip install -r requirements.txt
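
As an optional sanity check after installation (the snippet below is our own illustration, not part of the repository), you can confirm that PyTorch was installed with CUDA support, which the training and inference scripts expect:

# sanity_check.py -- optional, not part of the repository
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))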

📦️ Data Preparation

Prepare the data in the following format:

├── data
|   ├── JSON_all.json 
|   ├── resize_IMG # Images 
|   ├── all_faceID  # FaceID
|   └── parsing_mask_IMG # Parsing Mask 

The .json file should look like this:

[
    {
        "IMG": "Path of image...",
        "parsing_mask_IMG": "...",
        "vqa_llva": "...",
        "id_embed_file_resize": "...",
        "vqa_llva_facial": "..."
    },
    ...
]
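
The following is a minimal sketch (our own illustration, not shipped with the repository) that loads JSON_all.json and checks that the image, parsing-mask, and FaceID files referenced by each entry exist under the layout above; the field names follow the example JSON:

# check_dataset.py -- optional sketch, not part of the repository
import json
import os

with open("data/JSON_all.json", "r") as f:
    entries = json.load(f)

missing = []
for entry in entries:
    # Keys taken from the example JSON above; paths are assumed to be
    # absolute or relative to the working directory.
    for key in ("IMG", "parsing_mask_IMG", "id_embed_file_resize"):
        path = entry.get(key, "")
        if path and not os.path.exists(path):
            missing.append(path)

print(f"{len(entries)} entries checked, {len(missing)} referenced files missing")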

🚀 Train

Ensure that the working directory is the project root.

bash train_bash.sh

🧪 Usage

Ensure that the working directory is the project root. Then run convert_weights.py to save the weights efficiently.
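
Assuming convert_weights.py reads its input and output paths from defaults defined inside the script (adjust them there if yours differ), the invocation is simply:

python convert_weights.py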

Infer

python infer.py

Infer Inpaint & Inpaint Controlnet

python -m demo.inpaint_demo
python -m demo.controlnet_demo

⏬ Model weights

The model weights are downloaded automatically by the following two lines:

from huggingface_hub import hf_hub_download
ConsistentID_path = hf_hub_download(repo_id="JackAILab/ConsistentID", filename="ConsistentID-v1.bin", repo_type="model")

The pre-trained weights can also be downloaded from Google Drive or Baidu Netdisk.
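
For reference, here is a minimal sketch (our own illustration; it assumes the checkpoint is a standard PyTorch-serialized dict saved as a .bin file) for downloading the weights and inspecting what they contain:

# inspect_weights.py -- optional sketch, not part of the repository
import torch
from huggingface_hub import hf_hub_download

# Fetch ConsistentID-v1.bin from the Hub (cached locally after the first call)
ckpt_path = hf_hub_download(
    repo_id="JackAILab/ConsistentID",
    filename="ConsistentID-v1.bin",
    repo_type="model",
)

# Assumption: the .bin file is a PyTorch-serialized dict of tensors
state = torch.load(ckpt_path, map_location="cpu")
print("checkpoint keys:", list(state.keys())[:10])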

Acknowledgement

Disclaimer

This project strives to impact the domain of AI-driven image generation positively. Users are granted the freedom to create images using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.

Citation

If you find this code helpful, please consider citing:

@article{huang2024consistentid,
  title={ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving},
  author={Huang, Jiehui and Dong, Xiao and Song, Wenhui and Li, Hanhui and Zhou, Jun and Cheng, Yuhao and Liao, Shutao and Chen, Long and Yan, Yiqiang and Liao, Shengcai and others},
  journal={arXiv preprint arXiv:2404.16771},
  year={2024}
}