This implementation is built on the PyTorch implementation of Faster R-CNN, jwyang/faster-rcnn.pytorch.
- Clone the code and create a folder
git clone https://github.com/TKKim93/DivMatch.git
cd DivMatch && mkdir data
- Build the Cython modules
cd lib
sh make.sh
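If compilation fails, the CUDA architecture list hard-coded in lib/make.sh (inherited from jwyang's build script; this is an assumption about that script's layout) may not cover your GPU:

```bash
# After editing the -gencode/CUDA_ARCH flags in make.sh to match your GPU's
# compute capability (e.g. sm_61 for a GTX 1080), rebuild from lib/:
sh make.sh
```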
- Python 3.6
- PyTorch 0.4.0 or 0.4.1
- CUDA 8.0 or higher
- cython, cffi, opencv-python, scipy, easydict, matplotlib, pyyaml
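With pip, the listed Python packages can be installed in one shot:

```bash
pip install cython cffi opencv-python scipy easydict matplotlib pyyaml
```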
You can download the pretrained VGG16 and ResNet101 models from jwyang's repository. The default location in my code is './data/pretrained_model/'.
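For example (the file names below follow jwyang's naming convention and are an assumption, not part of this repo):

```bash
mkdir -p data/pretrained_model
# After downloading the weights from jwyang's repository:
mv vgg16_caffe.pth resnet101_caffe.pth data/pretrained_model/
```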
DivMatch
├── cfgs
├── data
│ ├── pretrained_model
├── datasets
│ ├── clipart
│ │ ├── Annotations
│ │ ├── ImageSets
│ │ ├── JPEGImages
│ ├── clipart_CP
│ ├── clipart_CPR
│ ├── clipart_R
│ ├── comic
│ ├── comic_CP
│ ├── comic_CPR
│ ├── comic_R
│   ├── Pascal
│   ├── watercolor
│   ├── watercolor_CP
│ ├── watercolor_CPR
│ ├── watercolor_R
├── lib
├── models (save location)
Here are the simplest ways to generate shifted domains via CycleGAN. Some of them perform unnecessary computations, so you may want to revise the image-to-image translation code for faster training. A sketch of the modified generator loss follows the list.
- CP shift
Change line 177 in models/cycle_gan_model.py to
self.loss_G = self.loss_G_A + self.loss_G_B + self.loss_idt_A + self.loss_idt_B
- R shift
Change line 177 in models/cycle_gan_model.py to
self.loss_G = self.loss_G_A + self.loss_G_B + self.loss_cycle_A + self.loss_cycle_B
- CPR shift
Use the original CycleGAN model (keep all loss terms).
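For context, here is a minimal sketch of the surrounding backward_G for the CP shift, assuming the junyanz/pytorch-CycleGAN-and-pix2pix layout that models/cycle_gan_model.py comes from; attribute and option names follow that codebase and may differ across versions:

```python
# Sketch only (junyanz CycleGAN layout assumed): self.fake_A, self.fake_B,
# self.rec_A, self.rec_B are produced in forward(); option names may vary.
def backward_G(self):
    lambda_idt = self.opt.lambda_identity   # identity loss weight (assumed > 0)
    lambda_A = self.opt.lambda_A            # cycle loss weight, A -> B -> A
    lambda_B = self.opt.lambda_B            # cycle loss weight, B -> A -> B

    # Identity terms: G_A applied to real B should stay close to real B, and vice versa
    self.idt_A = self.netG_A(self.real_B)
    self.loss_idt_A = self.criterionIdt(self.idt_A, self.real_B) * lambda_B * lambda_idt
    self.idt_B = self.netG_B(self.real_A)
    self.loss_idt_B = self.criterionIdt(self.idt_B, self.real_A) * lambda_A * lambda_idt

    # Adversarial terms for both generators
    self.loss_G_A = self.criterionGAN(self.netD_A(self.fake_B), True)
    self.loss_G_B = self.criterionGAN(self.netD_B(self.fake_A), True)

    # Cycle-consistency terms (unused by the CP shift, but still computed here)
    self.loss_cycle_A = self.criterionCycle(self.rec_A, self.real_A) * lambda_A
    self.loss_cycle_B = self.criterionCycle(self.rec_B, self.real_B) * lambda_B

    # CP shift: adversarial + identity terms only.
    # R shift:   self.loss_G_A + self.loss_G_B + self.loss_cycle_A + self.loss_cycle_B
    # CPR shift: all six terms (the original CycleGAN objective).
    self.loss_G = self.loss_G_A + self.loss_G_B + self.loss_idt_A + self.loss_idt_B
    self.loss_G.backward()
```

Skipping the computation of the unused terms (not just dropping them from the sum) is the revision for faster training mentioned above.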
Here is an example of adapting from Pascal VOC to Clipart1k:
- You can prepare the Pascal VOC datasets following py-faster-rcnn and the Clipart1k dataset following cross-domain-detection; both should be in Pascal VOC data format.
- Shift the source domain through the domain shifter. I used a residual generator and a PatchGAN discriminator. As a shortcut, you can download some examples of shifted domains (Link) and put these datasets into the datasets folder.
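For instance (the archive name here is hypothetical; use whatever the download link provides):

```bash
# Unpack the downloaded shifted domains next to the original datasets.
unzip shifted_domains.zip -d datasets/
ls datasets/  # expect e.g. clipart_CP, clipart_CPR, clipart_R alongside clipart
```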
- Train the object detector through MRL (multi-domain-invariant representation learning) for the Pascal VOC -> Clipart1k adaptation task.
python train.py --dataset clipart --net vgg16 --cuda
- Test the model
python test.py --dataset clipart --net vgg16 --cuda