From video to interpolated poses and formatted audio

tinydance/DanceMuse


DanceMuse is a bash tool for preprocessing data for music-to-dance generation, built on top of Slurm, OpenPose, and FFmpeg. The current version is customized for DanceRevolution, handling all preprocessing and testing for this specific music-to-dance model. DanceMuse is a tool to augment artistic thought using machine learning: by supplying user-defined data and audio, artists can explore the boundaries between technology and art.
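To give a sense of the FFmpeg side of this pipeline, here is a minimal sketch of an audio-formatting step. The `format_audio_cmd` helper, the file names, and the 16 kHz mono target are illustrative assumptions, not DanceMuse's confirmed settings; to keep the sketch self-contained, the function only prints the FFmpeg invocation it would run.

```shell
# Sketch of an FFmpeg audio-formatting step (flags and rates are assumptions):
#   -vn        drop the video stream, keeping only audio
#   -ac 1      downmix to a single (mono) channel
#   -ar 16000  resample to 16 kHz
format_audio_cmd() {
  # $1 = input video, $2 = output WAV; print the command instead of running it
  printf 'ffmpeg -i %s -vn -ac 1 -ar 16000 %s\n' "$1" "$2"
}

format_audio_cmd clip.mp4 clip.wav
# prints: ffmpeg -i clip.mp4 -vn -ac 1 -ar 16000 clip.wav
```

In a real pipeline this kind of command would be wrapped in a Slurm batch script so that many videos can be processed in parallel on a cluster.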

DanceMuse is developed by Yuval Ofek, Jason Kurian, and Yuecen (Crystal) Wang to facilitate the interaction between artists, specifically choreographers and dancers, and the deep learning community.

See the project website here.

Using DanceMuse

DanceMuse does not create art; rather, it offers suggestions and helps artists explore dance and audio in new ways. We propose a four-step process for interacting with the system, shown in the figure above, that allows artists to iteratively refine their ideas.

Dance Outputs
