DanceMuse is a bash tool for preprocessing data for music-to-dance generation, built on top of Slurm, OpenPose, and FFmpeg. The current version is customized for DanceRevolution and handles all preprocessing and testing for that specific music-to-dance model. DanceMuse aims to augment artistic thought using machine learning: by supplying their own data and audio, artists can explore the boundaries between technology and art.
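As a rough sketch of the kind of FFmpeg preprocessing such a pipeline automates (the filenames and parameters below are illustrative assumptions, not DanceMuse's actual defaults), one common step is standardizing a dance video's audio track before feature extraction:

```shell
#!/usr/bin/env bash
# Hypothetical example: extract mono 16 kHz audio from a dance video,
# a typical input format for audio feature extractors.
set -euo pipefail

INPUT_VIDEO="dance_clip.mp4"   # hypothetical input file
AUDIO_OUT="dance_clip.wav"     # hypothetical output file

# Build the ffmpeg command as an array; -vn drops video, -ac 1 forces mono,
# -ar 16000 resamples to 16 kHz. Printed here for inspection; run it
# directly on a machine where ffmpeg is installed.
CMD=(ffmpeg -y -i "$INPUT_VIDEO" -vn -ac 1 -ar 16000 "$AUDIO_OUT")
echo "${CMD[@]}"
```

On a Slurm cluster, a command like this would typically be wrapped in an `sbatch` job script rather than run interactively.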
DanceMuse is developed by Yuval Ofek, Jason Kurian, and Yuecen (Crystal) Wang to facilitate the interaction between artists, specifically choreographers and dancers, and the deep learning community.
See the project website here.
DanceMuse does not create art; rather, it offers suggestions and helps artists explore dance and audio in new ways. We propose a four-step process for interacting with the system, shown in the figure above, that lets artists iteratively refine their ideas.