This is the official GitHub repository for the paper "DocSynth: A Layout Guided Approach for Controllable Document Image Synthesis".
The work was presented at the 16th International Conference on Document Analysis and Recognition (ICDAR 2021) in Lausanne, Switzerland.
Step 1: Clone this repository.
git clone https://github.com/biswassanket/synth_doc_generation.git
cd synth_doc_generation
Step 2: Make sure you have conda installed. If you do not, the following commands install Miniconda:
curl -o ./miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x ./miniconda.sh
./miniconda.sh -b -u -p .
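Because the installer prefix above is the current directory (`-p .`), the new conda binaries may not be on your PATH yet. A minimal sketch to expose them in the current shell (the bin/ location follows from the `-p .` flag and is otherwise an assumption):

```shell
# Put the freshly installed Miniconda's bin/ directory on PATH for this
# shell session (prefix "." matches the -p . flag used by the installer).
export PATH="$(pwd)/bin:$PATH"
# "conda" should now resolve from the new install.
```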
- To create the conda environment:
conda env create -f environment.yml
conda activate layout2im
- To download the PubLayNet dataset:
curl -o <YOUR_TARGET_DIR>/publaynet.tar.gz https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/publaynet.tar.gz
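After the download finishes, the archive needs to be unpacked where the test command below looks for it. A sketch, assuming the archive sits in the current directory and contains a top-level publaynet/ folder (both are assumptions; adjust paths to your setup):

```shell
# Unpack PubLayNet into datasets/ so that datasets/publaynet exists,
# matching the --coco_dir used by test.py below. Assumes the archive is
# in the current directory and contains a top-level publaynet/ folder.
mkdir -p datasets
tar -xzf publaynet.tar.gz -C datasets/
```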
- Download the trained models to checkpoints/pretrained/.
Testing on PubLayNet dataset:
$ python layout2im/test.py --dataset publaynet --coco_dir datasets/publaynet \
--saved_model checkpoints/pretrained/publaynet_netG.pkl \
--results_dir checkpoints/pretrained_results_publaynet
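The synthesized document images are written to the --results_dir given above. A quick way to inspect them (the directory name comes from the command itself; the presence and naming of image files inside it are assumptions):

```shell
# List the generated document images (directory matches --results_dir
# from the test command above; file contents are an assumption).
ls checkpoints/pretrained_results_publaynet
```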
Training on PubLayNet dataset:
$ python layout2im/train.py
If you find this code useful in your research, please cite:
@inproceedings{biswas2021docsynth,
title={DocSynth: A Layout Guided Approach for Controllable Document Image Synthesis},
author={Biswas, Sanket and Riba, Pau and Llad{\'o}s, Josep and Pal, Umapada},
booktitle={International Conference on Document Analysis and Recognition (ICDAR)},
year={2021}
}
Our project adapts and borrows its code structure from layout2im; we thank the authors. This research has been partially supported by the Spanish projects RTI2018-095645-B-C21 and FCT-19-15244, the Catalan project 2017-SGR-1783, the CERCA Program / Generalitat de Catalunya, and a PhD scholarship from AGAUR (2021FIB-10010).
Thank you and sorry for the bugs!