Torch implementation of "Pixel-Level Domain Transfer", a bug-fixed version of the original repo
- The original repo had too many bugs, and the code did not run.
- So I decided to release a bug-fixed version of "Pixel-Level Domain Transfer" with enhanced features.
- The majority of the code was brought from the original repo.
- Most importantly, the bugs are FIXED.
- Both 64 x 64 and 128 x 128 input sizes are supported.
- The network structure is slightly modified according to the input size.
- Generated images are saved as a grid in PNG format.
- A Torch logger is added.
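The image-grid feature above can be sketched in Python (the repo itself does this in Lua; the `make_grid` helper, the batch size, and the image shapes below are illustrative assumptions, not the repo's actual code):

```python
import numpy as np

def make_grid(images, rows, cols):
    """Tile a batch of HxWxC images into a single (rows*H) x (cols*W) x C grid."""
    n, h, w, c = images.shape
    assert n >= rows * cols, "not enough images to fill the grid"
    grid = np.zeros((rows * h, cols * w, c), dtype=images.dtype)
    for idx in range(rows * cols):
        r, col = divmod(idx, cols)  # fill row by row, left to right
        grid[r * h:(r + 1) * h, col * w:(col + 1) * w, :] = images[idx]
    return grid

# e.g. 16 generated 64x64 RGB images tiled into a 4x4 grid
batch = np.random.rand(16, 64, 64, 3).astype(np.float32)
grid = make_grid(batch, 4, 4)
print(grid.shape)  # (256, 256, 3)
```

The resulting array can then be written out once per epoch as a single PNG, which is much easier to eyeball than a folder of individual samples.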
- (step 1) Download the LOOKBOOK dataset: Here
- (step 2) Place the 'lookbook.tar' archive at the root dir.
- (step 3) run
sh setup.sh
- (step 4) Preprocess the dataset using
prepare_data.ipynb
(you may need to use Jupyter Notebook)
- Start training:
(change the training options in opts.lua beforehand)
python run.py
python run.py &    (to run in the background)
- Run the server for visualization:
(change the server_ip and server_port options in opts.lua beforehand)
th server.lua
th server.lua &    (to run in the background)
You can see the generated images and loss graphs in a web browser:
https://<server_ip>:<port>
Results so far (the model is still being trained at this moment):
training | Final |
---|---|
condition: 128x128, 0.8 epoch | condition: 128x128, 0.8 epoch |
It seems that Torch allocates memory across all GPUs if you have multiple GPUs (torch/cutorch#180).
To prevent this and use only a single GPU, I pass the 'CUDA_VISIBLE_DEVICES=n' option when running the Lua script.
If you are running with a single GPU, or do not want to disable Torch's multi-GPU memory allocation, remove this option in run.py:
(e.g.) os.system('CUDA_VISIBLE_DEVICES=0 th ./main.lua') --> os.system('th ./main.lua')
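As an alternative to prefixing the shell command, the same restriction can be expressed with `subprocess` and an explicit environment, which avoids mutating your own shell session. This is a sketch, not the repo's run.py: the `launch_training` helper and the GPU index are assumptions, and only the `./main.lua` path is taken from the example above.

```python
import os

def launch_training(gpu_id=None, script='./main.lua'):
    """Build the command and environment for launching the Torch script.

    If gpu_id is given, CUDA_VISIBLE_DEVICES limits Torch to that GPU;
    if None, the variable is cleared and Torch may allocate on all GPUs.
    """
    env = os.environ.copy()
    if gpu_id is not None:
        env['CUDA_VISIBLE_DEVICES'] = str(gpu_id)
    else:
        env.pop('CUDA_VISIBLE_DEVICES', None)
    return ['th', script], env

cmd, env = launch_training(gpu_id=0)
# To actually launch: subprocess.run(cmd, env=env)
print(cmd, env['CUDA_VISIBLE_DEVICES'])  # ['th', './main.lua'] 0
```

Passing `env=` to `subprocess.run` keeps the restriction local to the child process, so other scripts in the same shell still see every GPU.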
MinchulShin / @nashory
Bug reports or questions, however insane, are welcome. (min.stellastra[at]gmail.com) :-)