
Amadeus Logo

🎵 A new, more natural experience for digital music composition

Overview

Amadeus is a music composition app for iPad and Android tablets that aims to provide a better experience for composers and songwriters. Inspired by my own experiences with composition software, my goal for this project is to take a bottom-up approach to Optical Music Recognition (OMR) and let users compose with a stylus on a touchscreen. Given the user's strokes on the canvas, a machine learning model attempts to recognize the notes and convert them to a digital format.

Application Architecture

The architecture below is subject to change as features and fixes land:

Amadeus Application Architecture

Current Features

  • Drawing any number of notes on a canvas using a stylus (tested with an iPad and Apple Pencil).
  • Recognition of notes via AWS Lambda & Rekognition; the identified note or symbol is automatically placed on the canvas.
  • Moving previously placed notes or symbols using the stylus.
  • Clearing all drawings from the canvas in case of mistakes.
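As a sketch of the identification step: boto3's `rekognition` client returns Custom Labels results under a `CustomLabels` key, each with a `Name` and `Confidence`. The helper below (name and confidence threshold are hypothetical, values illustrative) picks the label the app would place on the canvas:

```python
def best_symbol(response, min_confidence=80.0):
    """Pick the highest-confidence label from a response in the shape
    returned by boto3's rekognition.detect_custom_labels.
    Returns None if nothing clears the confidence threshold."""
    labels = [l for l in response.get("CustomLabels", [])
              if l["Confidence"] >= min_confidence]
    if not labels:
        return None
    return max(labels, key=lambda l: l["Confidence"])["Name"]

# Illustrative response (label names and confidences are made up):
response = {"CustomLabels": [
    {"Name": "QuarterNote", "Confidence": 96.2},
    {"Name": "HalfNote", "Confidence": 41.7},
]}
print(best_symbol(response))  # QuarterNote
```

Filtering on confidence first means an ambiguous drawing yields no placement rather than a wrong symbol.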

AWS Rekognition

The original model was trained on the HOMUS dataset, split 80% for training and 20% for testing. The current estimated accuracy in production is 90%, but more testing is needed.

Rekognition Current Metrics

TensorFlow

A new model was trained on the HOMUS dataset, split 70% for training, 20% for testing, and 10% for validation. The current estimated accuracy in production is 83%, but more testing is needed.
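The 70/20/10 split described above can be sketched as follows (function name and seed are hypothetical; integer arithmetic keeps the partition exact):

```python
import random

def split_dataset(samples, seed=42):
    """Shuffle and partition samples 70/20/10 into
    train / test / validation sets, as described above."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    samples = list(samples)
    rng.shuffle(samples)
    n = len(samples)
    n_train = n * 70 // 100
    n_test = n * 20 // 100
    train = samples[:n_train]
    test = samples[n_train:n_train + n_test]
    val = samples[n_train + n_test:]  # remainder ≈ 10%
    return train, test, val

train, test, val = split_dataset(range(1000))
print(len(train), len(test), len(val))  # 700 200 100
```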

TensorFlow Current Metrics

Pending Tasks

Fixes needed:

  • Add support for Apple Pencil's built-in palm rejection.
  • Optimize the file upload: currently the entire canvas is uploaded, which adds to the SVG-to-PNG conversion time and the overall identification time. The drawing should be cropped before uploading.

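The cropping fix amounts to computing a bounding box around the drawn strokes and uploading only that region. A minimal sketch, assuming strokes are lists of (x, y) points (helper name and padding are hypothetical):

```python
def crop_box(strokes, pad=10):
    """Return a padded (left, top, right, bottom) bounding box around
    all stroke points, so only the drawn region is converted and uploaded."""
    xs = [x for stroke in strokes for x, _ in stroke]
    ys = [y for stroke in strokes for _, y in stroke]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)

# Two strokes of a hypothetical quarter note:
strokes = [[(120, 80), (125, 95), (130, 110)],
           [(118, 112), (132, 112)]]
print(crop_box(strokes))  # (108, 70, 142, 122)
```

The resulting box could be passed to the SVG-to-PNG step so the conversion and the upload both shrink with the drawing, not with the canvas.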
Planned Features:

  • Moving the Identify ("Brain") button closer to where the note is being drawn, for easier access.
  • Allowing the user to browse, select, and add notes via a menu (the "traditional" way).
  • Recognizing triplets, ties, slurs, chord markings, and other more complex structures.
  • A "snap-to-grid" feature for notes and symbols.
  • Storing & exporting user compositions as MusicXML or PDF.
  • A data-collection pipeline to gather more training data for the OMR model.
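For the MusicXML export, the core of the format is a `score-partwise` document whose parts contain measures of `note` elements. A minimal sketch of emitting a single quarter note (function name is hypothetical; a real exporter also needs clefs, time signatures, and divisions):

```python
import xml.etree.ElementTree as ET

def quarter_note_xml(step="C", octave=4):
    """Build a minimal one-measure MusicXML fragment with one quarter note."""
    score = ET.Element("score-partwise", version="3.1")
    part_list = ET.SubElement(score, "part-list")
    score_part = ET.SubElement(part_list, "score-part", id="P1")
    ET.SubElement(score_part, "part-name").text = "Music"
    part = ET.SubElement(score, "part", id="P1")
    measure = ET.SubElement(part, "measure", number="1")
    note = ET.SubElement(measure, "note")
    pitch = ET.SubElement(note, "pitch")
    ET.SubElement(pitch, "step").text = step
    ET.SubElement(pitch, "octave").text = str(octave)
    ET.SubElement(note, "duration").text = "1"
    ET.SubElement(note, "type").text = "quarter"
    return ET.tostring(score, encoding="unicode")

print(quarter_note_xml("G", 5))
```

Each symbol the OMR model identifies would map to one such `note` element, so export reduces to walking the canvas contents in measure order.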

Others:

  • More UI/UX research to gather input from potential users.

Media

G-Clef identification

Quarter Note identification

Barline identification

Final Result