Express yourself and dance: our computer vision app turns your body movements into music.
We believe music is fundamental to our lives: it lets us express our creativity, release tension and stress, and reconnect with ourselves and others.
That is why we created NeuralSynth, so that anyone can start making music, even without experience or an instrument. NeuralSynth runs a computer vision model that tracks your body movements and maps them to different audio parameters, letting you make music with your body.
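As a rough sketch of this idea (the actual mapping NeuralSynth uses may differ), a normalized landmark coordinate can be scaled linearly into an audio parameter's range:

```js
// Illustrative only: map a normalized body coordinate (0..1) into an
// audio parameter's range. NeuralSynth's real mappings may differ.
function mapToParam(coord, min, max) {
  const clamped = Math.min(Math.max(coord, 0), 1);
  // Screen y grows downward, so invert: a raised hand gives a higher value.
  return min + (1 - clamped) * (max - min);
}

// Example: a wrist at 30% from the top of the frame -> a pitch in 220..880 Hz.
const frequency = mapToParam(0.3, 220, 880); // 682 Hz
```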
You can try our web app at https://zephirl.github.io/NeuralSynth. For the best experience, please use Google Chrome on a computer.
Alternatively, a recorded demo video can be viewed here.
The NeuralSynth system is composed of three main components: live pose estimation with the MediaPipe API, audio synthesis through Tone.js, and a web app user interface built with React and JavaScript.
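For a concrete picture of how these three parts connect, here is a minimal sketch assuming the `@mediapipe/pose`, `@mediapipe/camera_utils`, and `tone` npm packages and a `<video>` element with id `input`; the wrist-to-pitch mapping is our own illustration, not necessarily NeuralSynth's actual implementation:

```js
import { Pose } from '@mediapipe/pose';
import { Camera } from '@mediapipe/camera_utils';
import * as Tone from 'tone';

// A continuously playing oscillator whose pitch we steer with the body.
const synth = new Tone.Oscillator(440, 'sine').toDestination();

const videoElement = document.getElementById('input');

const pose = new Pose({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/pose/${file}`,
});
pose.setOptions({ modelComplexity: 1, smoothLandmarks: true });

pose.onResults((results) => {
  if (!results.poseLandmarks) return;
  // Landmark 16 is the right wrist in MediaPipe's 33-point pose model.
  const wristY = results.poseLandmarks[16].y; // normalized 0 (top) .. 1 (bottom)
  // Raise your hand to raise the pitch.
  synth.frequency.value = 220 + (1 - wristY) * 660;
});

// Feed webcam frames to the pose model.
const camera = new Camera(videoElement, {
  onFrame: async () => { await pose.send({ image: videoElement }); },
  width: 640,
  height: 480,
});

// Browsers require a user gesture before audio can start.
document.body.addEventListener('click', async () => {
  await Tone.start();
  synth.start();
  camera.start();
}, { once: true });
```

In the full app, React would own the video element and component lifecycle, but the data flow (camera frames into MediaPipe, landmarks into Tone.js parameters) is the same.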
To learn more, you can read our paper detailing NeuralSynth, available in this repo here.