pixel-level-dt-torch

Torch implementation of "Pixel-Level Domain Transfer"; a bug-fixed version of the original repo.


Note (IMPORTANT):

What is different from the original repo?

  • Most importantly, bugs in the original are FIXED.
  • Both 64 x 64 and 128 x 128 input sizes are supported.
  • The network structure is slightly modified according to the input size.
  • Generated images are saved as a grid in PNG format (see the sketch after this list).
  • A Torch logger is added.
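
For reference, the grid-saving idea looks roughly like the sketch below. The repo implements it in Lua/Torch, so this numpy/PIL version is purely illustrative, and save_image_grid is a hypothetical name rather than a function from the repo.

    # Illustrative only: tile N generated images into a single PNG grid.
    # The repo does this in Lua/Torch; this sketch just shows the layout logic.
    import numpy as np
    from PIL import Image

    def save_image_grid(images, path, per_row=8, pad=2):
        """images: (N, H, W, 3) uint8 array; writes one tiled PNG to `path`."""
        n, h, w, c = images.shape
        rows = (n + per_row - 1) // per_row
        grid = np.zeros((rows * h + (rows + 1) * pad,
                         per_row * w + (per_row + 1) * pad, c), dtype=np.uint8)
        for i, img in enumerate(images):
            r, col = divmod(i, per_row)
            y = pad + r * (h + pad)
            x = pad + col * (w + pad)
            grid[y:y + h, x:x + w] = img
        Image.fromarray(grid).save(path)

    # e.g. save_image_grid(samples, 'samples.png') for a (64, 128, 128, 3) batch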

Prerequisites

Dataset preparation

  • (step 1) Download the LOOKBOOK dataset: Here
  • (step 2) Place the 'lookbook.tar' archive at the root dir.
  • (step 3) Run sh setup.sh (a rough Python equivalent is sketched after this list).
  • (step 4) Preprocess the dataset using prepare_data.ipynb (you may need Jupyter Notebook).
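
If you prefer not to use the shell script for step 3, a rough Python equivalent of the unpacking step is below. This assumes setup.sh does little more than extract the archive, which may not hold for the actual script.

    # Assumed equivalent of the extraction step only; setup.sh may do more.
    import tarfile

    with tarfile.open('lookbook.tar') as tar:  # 'lookbook.tar' from step 2
        tar.extractall('.')                    # unpack at the repo root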

How to run?

  • Start training (change the training options in the opts.lua file beforehand):
python run.py
python run.py & (to run in the background)
  • Run the server for visualization (change the server_ip and server_port options in the opts.lua file beforehand):
th server.lua
th server.lua & (to run in the background)

Visualization (display)

You can see the generated images and loss graphs in a web browser at https://<server_ip>:<port>.

Experimental Results

Results so far (the model is still being trained at the moment):

Training sample (condition: 128x128, 0.8 epoch)
Final sample (condition: 128x128, 0.8 epoch)

[!!] Troubleshooting the multi-GPU memory allocation issue in Torch

It seems that Torch allocates memory across all GPUs when you run on a multi-GPU machine (torch/cutorch#180). To prevent this and use only a single GPU, I set the 'CUDA_VISIBLE_DEVICES=n' option when running the Lua script. If you are running with a single GPU, or do not want to disable Torch's multi-GPU memory allocation, remove this option in run.py.

(e.g.) os.system('CUDA_VISIBLE_DEVICES=0 th ./main.lua') --> os.system('th ./main.lua')
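
Put together, the launcher logic in run.py might look like the sketch below. Only the os.system calls shown above come from the repo; the flag handling here is hypothetical.

    # Hypothetical run.py sketch; only the CUDA_VISIBLE_DEVICES line is from the repo.
    import os
    import sys

    if '--all-gpus' in sys.argv:
        os.system('th ./main.lua')                         # torch may allocate on every GPU
    else:
        os.system('CUDA_VISIBLE_DEVICES=0 th ./main.lua')  # expose only GPU 0 to torch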

Acknowledgement

Author

MinchulShin / @nashory
Any bug reports or questions are welcome. (min.stellastra[at]gmail.com) :-)
