
automatic-music-accompaniment

Masters Final Project @QMUL, August 2018
Student: Lele Liu
Student Number: 130800488
Supervisor: Emmanouil Benetos
Student Email: lele.liu@se13.qmul.ac.uk or liulelecherie@gmail.com

Project description

Here are the supporting materials for the master's project Automatic Music Accompaniment. This project uses a deep learning LSTM neural network to predict and generate a music accompaniment for a given melody. The model is designed with two LSTM layers and two time distributed dense layers. Dropout is used for the two dense layers. The model architecture can be seen here. For the user study of this project, please go to this page.
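
As a rough illustration of the described architecture, the Keras sketch below stacks two LSTM layers and two time distributed dense layers with dropout. The layer sizes, input dimensions and dropout rate are placeholders for illustration only, not the values used in the trained models.

    from keras.models import Sequential
    from keras.layers import LSTM, TimeDistributed, Dense, Dropout

    # Placeholder dimensions -- the real values depend on the project's encoding.
    timesteps, n_features = 64, 128

    model = Sequential()
    model.add(LSTM(256, return_sequences=True, input_shape=(timesteps, n_features)))
    model.add(LSTM(256, return_sequences=True))
    model.add(Dropout(0.3))
    model.add(TimeDistributed(Dense(128, activation='relu')))
    model.add(Dropout(0.3))
    model.add(TimeDistributed(Dense(n_features, activation='softmax')))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()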

System requirements

This project uses Python 3.6. The Python package pretty_midi is needed during data preparation for handling MIDI files. The deep model in this project is built using Keras with the TensorFlow backend. To run the code, please install TensorFlow first, and check that you have the following Python packages:

keras for building the deep learning network
pretty_midi for processing MIDI data
numpy for array functions
matplotlib for plotting figures during model training

You will need 3146 MB of free GPU memory for model training.
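
A minimal way to confirm the environment is ready is to check that all of the packages above import without error:

    # Quick sanity check that the required packages are importable.
    import tensorflow
    import keras
    import pretty_midi
    import numpy
    import matplotlib
    print("All required packages are available.")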

File description

In the root folder, there are three trained model files: final_model.hdf5, final_model2.hdf5, and simple_model.hdf5. The first two were trained on a network with two LSTM and two dense layers, while the last was trained on a simpler network with one LSTM layer and one dense layer. Only classical music pieces were used in the training dataset for the three models, so they work best for predicting classical music.

The Python code is in the code folder, and some segments of the generated music pieces are provided in the generated music segments folder. The scores of the generated MIDI files are provided as PDFs.

Running instructions

To use the code provided, please first save your MIDI dataset in a midis folder using the following path format:

    midis/composer/midifile.mid

Please make sure that all the MIDI files you add to your dataset have two music parts.
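
If you want to verify this before encoding, a small check along these lines (using pretty_midi, which the project already depends on) can flag files that do not have exactly two instrument parts. The midis/ layout matches the path format shown above.

    import glob
    import pretty_midi

    # Flag MIDI files under midis/ that do not contain exactly two parts.
    for path in glob.glob('midis/*/*.mid'):
        try:
            pm = pretty_midi.PrettyMIDI(path)
        except Exception as exc:
            print('Could not parse {}: {}'.format(path, exc))
            continue
        if len(pm.instruments) != 2:
            print('{} has {} parts, expected 2'.format(path, len(pm.instruments)))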

Next, run load_data_to_files.py; this will encode the MIDI files into data representations in .npy format. The encoded music will be monophonic and will contain only two music parts.
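
To get a feel for the encoded representation, you can load one of the produced .npy files with numpy. The file name below is a placeholder; the meaning of each axis is defined by load_data_to_files.py.

    import numpy as np

    # Inspect one encoded piece produced by load_data_to_files.py.
    data = np.load('encoded_piece.npy')  # placeholder file name
    print(data.shape, data.dtype)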

After that, please create the following folders under the code folder:

    data
    |---train
    |---validation
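
If you prefer to create these folders from Python (run from inside the code folder), something like the following is enough and is safe to re-run:

    import os

    # Create the expected data folders; exist_ok avoids errors on re-runs.
    for sub in ('data/train', 'data/validation'):
        os.makedirs(sub, exist_ok=True)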

Run divide_train_validation.py; this will copy the encoded .npy files into training and validation sets.

Add an experiment folder under code, and use train.py to train the complex model or simple_model.py to train the simple model. The training results will be saved in the created experiment folder, including a model at the end of each epoch and figures for the losses and accuracies.
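
The per-epoch saving described above is typically done with a Keras ModelCheckpoint callback. The sketch below shows the general pattern on dummy data; it is not copied from train.py, and the model, dimensions and file naming are placeholders.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, TimeDistributed, Dense
    from keras.callbacks import ModelCheckpoint

    # Dummy data with placeholder dimensions, just to make the sketch runnable.
    timesteps, n_features = 32, 16
    x = np.random.rand(100, timesteps, n_features)
    y = np.random.rand(100, timesteps, n_features)
    y = y / y.sum(axis=-1, keepdims=True)  # make the targets valid distributions

    model = Sequential()
    model.add(LSTM(64, return_sequences=True, input_shape=(timesteps, n_features)))
    model.add(TimeDistributed(Dense(n_features, activation='softmax')))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

    # Save a model file into the experiment folder at the end of every epoch.
    checkpoint = ModelCheckpoint('experiment/model-epoch{epoch:02d}.hdf5')
    model.fit(x, y, validation_split=0.2, epochs=3, callbacks=[checkpoint])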

You can use generate.py to generate music accompaniments. Run the file with the following command-line options:

    midifile.mid --model_file model.hdf5 --diversity div

The diversity is a float that is used when sampling notes while generating the accompaniment. If you are using the provided models final_model.hdf5 or final_model2.hdf5, the suggested diversity is around 0.8; if you are using simple_model.hdf5, try a diversity around 0.6.
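
For context, the diversity here plays the usual role of a temperature when sampling from the network's output distribution. The helper below illustrates that idea; it is not taken from generate.py.

    import numpy as np

    def sample_index(probabilities, diversity=0.8):
        """Reweight a probability vector by a diversity (temperature) and sample an index."""
        probs = np.asarray(probabilities, dtype=np.float64)
        logits = np.log(probs + 1e-8) / diversity
        weights = np.exp(logits)
        weights /= weights.sum()
        return np.random.choice(len(weights), p=weights)

    # Lower diversity makes sampling more conservative; higher makes it more varied.
    print(sample_index([0.7, 0.2, 0.1], diversity=0.6))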
