This repo tries to reproduce some results of the awesome DeepFill v2 (Project & Code) because I personally prefer PyTorch. I also (ab)use the awesome detectron2 library for the implementation, although it was originally designed for object detection, because I appreciate its well-organized code. If you know a more suitable tool, recommendations are welcome.
This repo is yet to be finished and tested.
- Build up the model.
- Translate the pretrained tensorflow model into pytorch.
- Fix the bug of converting tensorflow pretrained model to pytorch.
  Tensorflow behaves slightly differently from PyTorch in `Conv2d` when the stride is greater than 1 (e.g. 2), so I handle this issue by manually striding the convolutional feature map. Moreover, the original tensorflow code uses nearest-neighbor downsampling with `align_corners=True`, while the official PyTorch `interpolate` does not support `align_corners=True` when `mode="nearest"`. Therefore, I wrote my own downsampling function that supports `align_corners`.
- Evaluate the pretrained model on Places2 and CelebA-HQ.
- Train the model on Places2 and CelebA-HQ.
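The two workarounds above can be illustrated in plain Python. This is a minimal sketch, not the repo's actual implementation: `manual_stride` assumes the stride-2 conv is emulated by slicing a stride-1 output, and `nearest_indices_align_corners` shows the `align_corners=True` sampling grid that `torch.nn.functional.interpolate` refuses for `mode="nearest"`.

```python
def manual_stride(feature, stride=2):
    """Hypothetical sketch: emulate a strided conv by running the
    convolution with stride 1 and then keeping every `stride`-th
    row/column of the 2-D feature map (given here as nested lists)."""
    return [row[::stride] for row in feature[::stride]]


def nearest_indices_align_corners(src_size, dst_size):
    """Source indices for nearest-neighbor resampling with
    align_corners=True: the first and last outputs map exactly onto the
    first and last inputs, and intermediate positions are spaced by
    (src_size - 1) / (dst_size - 1), rounded to the nearest index."""
    if dst_size == 1:
        return [0]
    scale = (src_size - 1) / (dst_size - 1)
    return [int(round(i * scale)) for i in range(dst_size)]
```

For example, downsampling a length-7 axis to length 4 with `align_corners=True` samples indices `[0, 2, 4, 6]`, so both endpoints are preserved exactly.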
Now we can reproduce the demo results given in the original repo.
- Python==3.6
- Pytorch==1.3.0 (not yet tested on higher versions)
- detectron2==0.1
The pretrained model is converted from tensorflow to pytorch using `param_convertor.py`. You can download the tensorflow pretrained model for Places2 and convert the parameters yourself, or directly download the converted model. Make sure the folder containing the pretrained model looks like:
```
./output
./output/pretrained/
./output/pretrained/places2_256_deepfill_v2.pth
```
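The core of any such tensorflow-to-pytorch conversion is a kernel-axis transpose (this is a general fact about the two frameworks, not code taken from `param_convertor.py`): tensorflow stores conv weights as `(H, W, in_channels, out_channels)`, while `torch.nn.Conv2d` expects `(out_channels, in_channels, H, W)`. A sketch with hypothetical kernel sizes:

```python
import numpy as np

# Hypothetical example kernel in tensorflow layout:
# (height, width, in_channels, out_channels)
tf_kernel = np.arange(3 * 3 * 48 * 96, dtype=np.float32).reshape(3, 3, 48, 96)

# torch.nn.Conv2d weights use (out_channels, in_channels, height, width),
# so conversion is an axis transpose before loading into the state dict.
pt_kernel = tf_kernel.transpose(3, 2, 0, 1)

print(pt_kernel.shape)  # (96, 48, 3, 3)
```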
Run the jupyter notebook `./inpaint_demo.ipynb`. The results are dumped in the folder `./demo_outputs`.
TO BE COMPLETED
TO BE COMPLETED