
MAE-FAR

Code for Learning Prior Feature and Attention Enhanced Image Inpainting (ECCV 2022)

Paper and Supplemental Material (arXiv)

Updates

  • Code for MAE pre-training/inference
  • Code for ACR
  • Pre-trained MAE weights
  • Released weights trained on Places2

Preparation

You can download irregular/COCO masks from here. You can also use your own masks with a txt index, formatted as in link.
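If you build your own index, here is a minimal sketch in Python (a hypothetical helper; the format is assumed to be one mask path per line, matching the linked index files):

# Build a txt index for a custom mask set (assumption: one absolute mask
# path per line, mirroring the linked index files).
import glob
import os

mask_dir = "/path/to/your/masks"  # placeholder directory of *.png masks
out_txt = "custom_masks.txt"

paths = sorted(glob.glob(os.path.join(mask_dir, "*.png")))
with open(out_txt, "w") as f:
    for p in paths:
        f.write(os.path.abspath(p) + "\n")
print(f"wrote {len(paths)} mask paths to {out_txt}")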

Then download models for perceptual loss from LaMa:

mkdir -p ade20k/ade20k-resnet50dilated-ppm_deepsup/
wget -P ade20k/ade20k-resnet50dilated-ppm_deepsup/ http://sceneparsing.csail.mit.edu/model/pytorch/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth
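To verify the download, the checkpoint should load as a plain state dict (a quick sanity check, not part of the repo):

# Sanity check: the downloaded encoder checkpoint should load as a state dict.
import torch

ckpt = torch.load(
    "ade20k/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth",
    map_location="cpu",
)
print(type(ckpt), len(ckpt))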

MAE for Inpainting

Pre-trained MAE for Inpainting

FFHQ: link

Places2: link

Pre-training MAE

python -m torch.distributed.launch --nproc_per_node=2 --use_env mae_pretrain.py \
    --data_path ${IMAGE_FILES_TXT} \
    --mask_path ${IRR_MASK_TXT} ${COCO_MASK_TXT} \
    --batch_size 256 \
    --model mae_vit_base_patch16 \
    --mask_ratio 0.75 \
    --epochs 200 \
    --warmup_epochs 10 \
    --blr 1.5e-4 --weight_decay 0.05 \
    --num_workers 16 \
    --output_dir ./ckpts/mae_wo_cls_wo_pixnorm \
    --log_dir ./ckpts/mae_wo_cls_wo_pixnorm

--mask_path can also be set to a single file: --mask_path ${YOUR_MASK_TXT}.

You can also set --finetune and --random_mask for different MAE pre-training settings (not recommended for inpainting); details are discussed in the paper.
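For reference, the upstream MAE codebase derives the actual learning rate from --blr as lr = blr × effective batch size / 256. Assuming this repo keeps that convention, the command above (2 GPUs × batch 256) works out as follows:

# Effective LR under the standard MAE scaling rule (assumption: this repo
# follows the upstream MAE convention of lr = blr * eff_batch_size / 256).
blr = 1.5e-4
batch_size_per_gpu = 256
num_gpus = 2
eff_batch_size = batch_size_per_gpu * num_gpus  # 512
lr = blr * eff_batch_size / 256
print(lr)  # 0.0003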

Simple Inference

See simple_test.ipynb.
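If you prefer a script over the notebook, here is a minimal weight-loading sketch (the constructor name mirrors the --model flag above; the models_mae module path is an assumption carried over from the upstream MAE codebase):

# Minimal weight-loading sketch; module/constructor names are assumptions
# based on upstream MAE. See simple_test.ipynb for the actual flow.
import torch
import models_mae  # assumption: MAE model definitions as in upstream

model = models_mae.mae_vit_base_patch16()  # matches --model above
ckpt = torch.load("ckpts/mae_wo_cls_wo_pixnorm/checkpoint.pth", map_location="cpu")
state = ckpt.get("model", ckpt)  # upstream MAE saves weights under "model"
print(model.load_state_dict(state, strict=False))
model.eval()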

ACR

TIP: We now recommend using features from layer6 of the MAE instead of layer8 for better performance.
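In practice this means tapping the encoder's intermediate activations; here is a hypothetical sketch for a timm/MAE-style ViT encoder (the real wiring lives in the ACR code and configs):

# Hypothetical sketch: collect the output of encoder block 6 (blocks are
# 0-indexed, so blocks[:6] ends at "layer6"). Assumes a timm/MAE-style
# encoder with .patch_embed, .pos_embed, and .blocks.
import torch

def extract_layer_feature(encoder, x, layer=6):
    x = encoder.patch_embed(x)
    x = x + encoder.pos_embed[:, 1:, :]  # skip the cls position (assumption)
    for blk in encoder.blocks[:layer]:
        x = blk(x)
    return x  # (B, num_patches, dim)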

Ensure you have downloaded the pre-trained resnet50dilated model from LaMa (see Preparation above).

Training

If multiple GPUs (>1) are used, the code will run with DDP automatically.

python train.py --config configs/config_FAR_places2.yml \
                --exp_name ${EXP_NAME} \
                --resume_mae ${MAE_PATH}

Finetuning from 256x256 to 512x512

python finetune.py --config configs/config_FAR_places2_finetune_512.yml \
                   --exp_name ${EXP_NAME} \
                   --pl_resume ${PL_MODEL_PATH} \
                   --dynamic_size # if you need dynamic size training from 256 to 512
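--dynamic_size typically means sampling a new training resolution per batch; here is a hypothetical illustration of such a schedule (not this repo's actual scheduler):

# Hypothetical illustration of dynamic-size sampling between 256 and 512,
# restricted to multiples of 32 so patch grids stay valid.
import random

def sample_train_size(low=256, high=512, multiple=32):
    sizes = list(range(low, high + 1, multiple))
    return random.choice(sizes)

print([sample_train_size() for _ in range(5)])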

Testing

Download weights from link. This model was re-trained with the new code.

CUDA_VISIBLE_DEVICES=0 python test.py \
  --resume ${PL_MODEL_PATH} \
  --config ${CONFIG_PATH} \
  --output_path ${OUTPUT_PATH} \
  --image_size ${TEST_IMG_SCALE} \
  --load_pl

Inference for specific image/mask paths. IMAGE_PATH and MASK_PATH should contain corresponding images and masks with the same file names.

CUDA_VISIBLE_DEVICES=0 python test_custom.py \
  --resume ${PL_MODEL_PATH} \
  --config ${CONFIG_PATH} \
  --image_path ${IMAGE_PATH} \
  --mask_path ${MASK_PATH} \
  --output_path ${OUTPUT_PATH} \
  --image_size ${TEST_IMG_SCALE} \
  --load_pl
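Since test_custom.py pairs images and masks by file name, a quick pre-flight check helps (a hypothetical helper, not part of the repo):

# Hypothetical pre-flight check: every image should have a same-named mask.
import os

image_dir = "/path/to/images"  # placeholder paths
mask_dir = "/path/to/masks"

images = {os.path.splitext(f)[0] for f in os.listdir(image_dir)}
masks = {os.path.splitext(f)[0] for f in os.listdir(mask_dir)}
missing = images - masks
print("missing masks for:", sorted(missing) if missing else "none")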

Acknowledgments

Our code is based on LaMa and MAE.

Cite

If you find our work helpful, please consider citing:

@inproceedings{cao2022learning,
      title={Learning Prior Feature and Attention Enhanced Image Inpainting},
      author={Cao, Chenjie and Dong, Qiaole and Fu, Yanwei},
      booktitle={{ECCV}},
      year={2022}
}
