This code implements depth-aware neural style transfer for videos.
It completes locally on my 2019 MacBook Pro, without a GPU, in under 15 minutes for a 1-minute video 🙆🏽‍♂️
To install the package, please clone the repository and install via pip:
git clone https://github.com/Aayushchou/depth-aware-style-transfer.git
cd depth-aware-style-transfer
pip install -e .
To run it on your own videos, run the following command in your terminal, updating the paths to your files (`-i` is the input video, `-sf` and `-sb` are the foreground and background style images, and `-o` is the output directory):
style-transfer -i "data/input/video/1am_trimmed.mp4" -sf "data/input/style/neon1.png" -sb "data/input/style/cyber.png" -o "data/output/test"
The style transfer is performed in the following steps (a rough code sketch follows the list):
- The video is split into frames using split.py.
- The frames from step 1 are passed through the style transfer model via transfer.py, which takes the frame directory as input and produces stylized frames.
- The style transfer can take two style images: one for the foreground and one for the background.
- The max_width parameter controls the size of the frames; 512 has worked best in my experience.
- There are also options to blur the background, which makes the main areas of focus in the video easier to distinguish.
- The stylized frames are joined back into a video using join.py.
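
For orientation, here is a minimal sketch of how the three stages fit together, using OpenCV for the video I/O. This is an illustration rather than the repo's actual split.py/transfer.py/join.py: the function names, the normalized-depth blend, and the blur kernel size are assumptions, and the neural style transfer model itself (which would produce the two stylized layers and the depth map) is left out.

```python
import cv2
import numpy as np
from pathlib import Path


def split(video_path: str, frames_dir: str) -> float:
    """Split a video into numbered PNG frames and return its FPS (split.py's job)."""
    Path(frames_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(str(Path(frames_dir) / f"frame_{idx:05d}.png"), frame)
        idx += 1
    cap.release()
    return fps


def blend_by_depth(fg_styled, bg_styled, depth, blur_background=True):
    """Composite two stylized frames using a depth map as the mask
    (a hypothetical stand-in for the blend inside transfer.py)."""
    if blur_background:
        # Blur the background layer so the in-focus subject stands out.
        bg_styled = cv2.GaussianBlur(bg_styled, (21, 21), 0)
    # Normalize depth to [0, 1]; larger values are assumed to mean "nearer".
    mask = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    mask = mask[..., None]  # add a channel axis so it broadcasts over BGR
    return (mask * fg_styled + (1.0 - mask) * bg_styled).astype(np.uint8)


def join(frames_dir: str, out_path: str, fps: float) -> None:
    """Re-encode the stylized frames as a video (join.py's job)."""
    frames = sorted(Path(frames_dir).glob("frame_*.png"))
    height, width = cv2.imread(str(frames[0])).shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for f in frames:
        writer.write(cv2.imread(str(f)))
    writer.release()
```

In the actual pipeline, transfer.py would stylize each frame twice (once per style image), estimate a depth map, and blend the two layers along the lines above before join.py re-encodes the result.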
The videos below demonstrate the input and one of the stylized outputs (song: Mohe - 1 am):
1am_trimmed.mp4
1am_blurred_hybrid.mov
Next steps:
- add main.py to orchestrate the end-to-end procedure
- improve depth recognition
- try out more style image examples
- add a front-end