
# NeuralSynth A.I. - Make music uniquely you

Express yourself and dance: our computer vision app turns your body movements into music.

## Main idea

We believe music is fundamental to our lives: it lets us express our creativity, release tension and stress, and reconnect with ourselves and others.

That is why we created NeuralSynth, which lets anyone start creating music, even without prior experience or an instrument. NeuralSynth uses a computer vision model that tracks your body movements and maps them to different audio parameters, so you can make music with your body.


You can try our web app at https://zephirl.github.io/NeuralSynth. For the best experience, please use Google Chrome on a computer.

Alternatively, a recorded demo video can be viewed here.

## System structure

The NeuralSynth system is composed of three main components: live pose estimation with the MediaPipe API, audio synthesis through Tone.js, and the web app user interface built with React and JavaScript.
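To illustrate how these components could fit together, here is a minimal sketch of the core mapping step. MediaPipe Pose reports landmarks with normalized `x`/`y` coordinates in `[0, 1]` (origin at the top left), and the sketch converts a wrist landmark into parameters a Tone.js synth could consume. The function names and the specific mapping ranges below are illustrative assumptions, not the values NeuralSynth actually uses.

```javascript
// Map a normalized value in [0, 1] onto the range [min, max],
// clamping out-of-range inputs.
function scale(value, min, max) {
  const clamped = Math.min(1, Math.max(0, value));
  return min + clamped * (max - min);
}

// Hypothetical mapping: derive synth parameters from one wrist landmark.
// Raising the hand (smaller y, since y grows downward) raises the pitch;
// moving it left/right (x) controls stereo panning.
function landmarkToAudioParams(wrist) {
  return {
    frequency: scale(1 - wrist.y, 110, 880), // Hz, roughly A2 to A5
    pan: scale(wrist.x, -1, 1),              // -1 = left, +1 = right
  };
}

// In the browser, this would run inside MediaPipe's onResults callback
// on every video frame, feeding the result to a Tone.js synth,
// e.g. via synth.set({...}) and panner.pan.value = params.pan.
```

This keeps the gesture-to-sound mapping as a pure function, which makes it easy to unit-test and to swap mappings without touching the camera or audio pipeline.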

*(Figure: NeuralSynth system structure diagram)*

To learn more, you can read our paper detailing NeuralSynth, included in this repo here.