This project demonstrates a simple approach to music composition using machine learning. An LSTM model is trained on synthetic musical sequences to learn note patterns and generate new music. The generated sequence is then converted into a MIDI file, which you can listen to using any MIDI player.
This repository includes both a Jupyter Notebook (`Music_Composition_with_Machine_Learning.ipynb`) and a Python script (`music_composition_with_machine_learning.py`) that implement the complete music composition pipeline.
- `generated.mid`: A sample generated MIDI file.
- `Music_Composition_with_Machine_Learning.ipynb`: Jupyter Notebook containing the complete implementation.
- `music_composition_with_machine_learning.py`: Python script version of the project.
- `README.md`: This file.
- LSTM-Based Music Generation: Uses a Long Short-Term Memory (LSTM) network to learn and generate musical sequences.
- Synthetic Dataset: Generates synthetic sequences to mimic musical phrases.
- MIDI Conversion: Converts generated note sequences into a MIDI file using the `pretty_midi` library.
- Google Colab Compatibility: Designed to run on Google Colab for ease of experimentation and demonstration.
- Python 3.x
- TensorFlow
- pretty_midi
- NumPy
You can install the required packages using pip:

```bash
pip install tensorflow pretty_midi numpy
```
- Open the `Music_Composition_with_Machine_Learning.ipynb` notebook in Google Colab.
- Run the cells sequentially. The notebook installs dependencies, trains the model, and generates a MIDI file named `generated.mid`.
- Once the notebook finishes, you can download the `generated.mid` file from the file explorer in Colab.
- Clone the repository:

```bash
git clone https://github.com/mohiuddin-khan-shiam/Music-Composition-with-Machine-Learning.git
cd Music-Composition-with-Machine-Learning
```
- Install the required dependencies:

```bash
pip install tensorflow pretty_midi numpy
```
- Run the Python script:

```bash
python music_composition_with_machine_learning.py
```

The script will generate a MIDI file named `generated.mid` in the repository directory.
The project generates synthetic musical sequences using a random walk approach within a defined MIDI note range. These sequences serve as training data for the model.
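For illustration, a minimal sketch of such a random-walk generator is shown below. The note range, step size, and sequence length are illustrative assumptions, not the exact values used in the script.

```python
import numpy as np

def make_random_walk_sequence(length=50, low=48, high=72, seed=None):
    """Generate one synthetic 'melody' as a random walk over MIDI note numbers."""
    rng = np.random.default_rng(seed)
    notes = [int(rng.integers(low, high))]
    for _ in range(length - 1):
        step = int(rng.integers(-2, 3))  # small up/down moves mimic melodic motion
        notes.append(int(np.clip(notes[-1] + step, low, high)))
    return np.array(notes)

# Stack many sequences into a training dataset of shape (num_sequences, length).
dataset = np.stack([make_random_walk_sequence(seed=i) for i in range(100)])
```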
An LSTM network is built with an embedding layer, an LSTM layer, and a dense output layer. The model is trained to predict the next note in the sequence using the synthetic dataset.
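The snippet below sketches what such a model can look like in Keras. The vocabulary size (one class per MIDI note number), embedding width, and LSTM unit count are assumptions for illustration, not the repository's exact hyperparameters.

```python
import tensorflow as tf

VOCAB_SIZE = 128  # one class per possible MIDI note number (assumption)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),   # map note IDs to dense vectors
    tf.keras.layers.LSTM(128),                   # summarize the note sequence
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),  # next-note distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Next-note prediction: inputs are all notes but the last; the target is the last.
x, y = dataset[:, :-1], dataset[:, -1]
model.fit(x, y, epochs=20, batch_size=32)
```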
Using a seed sequence from the training data, the model generates new musical notes. Temperature sampling is applied to control the randomness of the predictions.
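A hedged sketch of temperature sampling follows; the temperature value and generation length are illustrative, and `model` carries over from the previous snippet.

```python
import numpy as np

def sample_with_temperature(probs, temperature=1.0):
    """Rescale a softmax distribution: <1.0 plays it safe, >1.0 takes more risks."""
    logits = np.log(probs + 1e-9) / temperature
    scaled = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    scaled /= scaled.sum()
    return int(np.random.choice(len(scaled), p=scaled))

def generate_notes(model, seed_sequence, n_notes=100, temperature=0.8):
    """Extend a seed sequence one predicted note at a time."""
    notes = list(seed_sequence)
    for _ in range(n_notes):
        window = np.array(notes[-len(seed_sequence):])[None, :]  # batch of 1
        probs = model.predict(window, verbose=0)[0]
        notes.append(sample_with_temperature(probs, temperature))
    return notes[len(seed_sequence):]
```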
The generated sequence of notes is converted into a MIDI file using the `pretty_midi` library. Each note is assigned a fixed duration to create a continuous musical piece.
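The export step could look roughly like the sketch below; the fixed note duration, instrument program, and velocity are illustrative assumptions.

```python
import pretty_midi

def notes_to_midi(notes, path="generated.mid", duration=0.5, velocity=100):
    """Write a list of MIDI note numbers to a .mid file with fixed-length notes."""
    midi = pretty_midi.PrettyMIDI()
    piano = pretty_midi.Instrument(program=0)  # program 0 = Acoustic Grand Piano
    start = 0.0
    for pitch in notes:
        piano.notes.append(pretty_midi.Note(
            velocity=velocity, pitch=int(pitch),
            start=start, end=start + duration,
        ))
        start += duration  # back-to-back notes form a continuous piece
    midi.instruments.append(piano)
    midi.write(path)
```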
This project is licensed under the MIT License. See the LICENSE file for more details.
- Thanks to the developers of TensorFlow and pretty_midi for providing the tools needed for machine learning-based music composition.
- Inspiration from various music generation projects and research papers in the field of deep learning and music.