Content Masked Loss

Peter Schaldenbrand, Jean Oh
Carnegie Mellon University
2021

Human-Like Brush Stroke Planning in a Reinforcement Learning Painting Agent.

Note: if you are interested in robot painting, we highly recommend our newer work, FRIDA: https://github.com/pschaldenbrand/Frida

Content Masked Loss is an enhancement to the reward function of a reinforcement learning painting agent that makes it paint in a manner more similar to how humans paint than existing state-of-the-art methods. The algorithm converts an image into a sequence of paint brush instructions, and the model receives the most reward for painting in regions that contain important features such as eyes, mouths, and edges.
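The core idea can be sketched as follows. This is a simplified illustration, not the code in this repository: it assumes PyTorch tensors and a per-pixel importance mask computed elsewhere (for example, from a pretrained network's activations), and it rewards a stroke by how much it reduces the masked pixel-wise error against the target image.

import torch

def content_masked_reward(canvas_before, canvas_after, target, mask):
    # canvas_before, canvas_after, target: (B, 3, H, W) tensors in [0, 1]
    # mask: (B, 1, H, W) per-pixel importance in [0, 1] (assumed to be given)
    err_before = (mask * (canvas_before - target) ** 2).mean(dim=(1, 2, 3))
    err_after = (mask * (canvas_after - target) ** 2).mean(dim=(1, 2, 3))
    # Positive when the stroke improved the masked (important) regions.
    return err_before - err_after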

Content Masked Loss Results

[Figure: example paintings produced with the L2 baseline versus Content Masked Loss (ours)]

Robot Painting

The brush stroke instructions can be fed to an Arduino Braccio robotic arm to be painted onto a canvas. The Arduino code for the robot is available here.

[Animation: the Braccio arm painting Humanoid Painter strokes]

Generate the stroke instructions, then have your robot arm paint them to produce results like this: [Image: a painting of a photo of director David Lynch]

Download Pre-Trained Models

The actor and renderer models can be downloaded from this Box account: https://cmu.box.com/s/ojydzfocwjhbm4tsjbgt4ju5uwd6013c

Generating Strokes for Robot Painting Arm

Run the generate_actions.py script on your desired image; the brush stroke instructions will be written to .csv files in a directory named arduino_actions/.

$ python generate_actions.py --img=[image to paint] --max_step=[number of brush strokes] \
--actor=pretrained_models/cml1/actor.pkl --renderer=renderer_constrained.pkl
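
For example (the image path and stroke count below are placeholders, not files shipped with the repository):

$ python generate_actions.py --img=images/portrait.jpg --max_step=200 \
--actor=pretrained_models/cml1/actor.pkl --renderer=renderer_constrained.pkl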

Run Arduino Braccio Code

Then load the Arduino with the code from here.

Send the instructions to the Robot Arm

A Python program parses the brush stroke instruction CSV files and sends them to the robot arm:

$ python arduino_paint.py

By default, this script sends the instructions from arduino_actions/actions_all.csv, but this can be changed to a file of your choice with the command-line argument --instructionsfile.
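
arduino_paint.py handles this step for you; purely as a reference, a sender along these lines could look like the following sketch, which assumes the pyserial package, a placeholder serial port, and a simplified one-stroke-per-row CSV layout (the real script's framing and handshaking with the Braccio firmware may differ):

import csv
import time
import serial  # pyserial

PORT = '/dev/ttyACM0'  # placeholder; set to your Arduino's port
INSTRUCTIONS = 'arduino_actions/actions_all.csv'

with serial.Serial(PORT, 9600, timeout=1) as arm, open(INSTRUCTIONS) as f:
    for row in csv.reader(f):
        # Send one comma-separated stroke instruction per line.
        arm.write((','.join(row) + '\n').encode('ascii'))
        time.sleep(0.1)  # crude pacing instead of waiting for an acknowledgement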

See AniPainter for more robot painting fun!

Train the model yourself

Monitor the training progress using: $ tensorboard --logdir=train_log --port=6006

Train Neural Renderer

$ python train_renderer.py --constrained=True

Train the Actor

Download the training data using RobotPainter.ipynb, then run:

$ python train.py --debug --batch_size=8 --max_step=120 --renderer=renderer.pkl --resume=pretrained_models/[gan|l1|l2|cm|cml1] --loss_fcn=[gan|l1|l2|cm|cml1]
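
For example, choosing the cml1 (content masked + L1) option for both the checkpoint and the loss:

$ python train.py --debug --batch_size=8 --max_step=120 --renderer=renderer.pkl \
--resume=pretrained_models/cml1 --loss_fcn=cml1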

Citation

Please cite our paper:

@inproceedings{schaldenbrand2021contentMaskedLoss,
  title={Content masked loss: Human-like brush stroke planning in a reinforcement learning painting agent},
  author={Schaldenbrand, Peter and Oh, Jean},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={1},
  pages={505--512},
  year={2021}
}

Images in the neisley repository are provided courtesy of Nick Eisley. Please give him credit:

@MISC{nickEisley,
    author={{Nicholas Eisley}},
    url={http://neisley.com/}
}

Acknowledgement

We used the code from Huang et al. 2019 as starter code for this project.

Fréchet Inception Distance code is from Heusel et al. 2017.
