
Learning Music Audio Representations Via Weak Language Supervision

Ilaria Manco¹ ², Emmanouil Benetos¹, Elio Quinton², György Fazekas¹
¹ Queen Mary University of London, ² Universal Music Group


This repository is the official implementation of "Learning Music Audio Representations Via Weak Language Supervision" (ICASSP 2022).

In this work, we introduce MuLaP, a framework for music-and-language pre-training to learn general-purpose music audio representations. MuLaP allows an audio backbone to learn from weakly aligned natural language descriptions of the audio content via a multimodal co-attention Transformer module. This audio-linguistic pre-training endows the model with good transfer learning capabilities, resulting in representations that are useful for a variety of music classification and regression downstream tasks.

We provide code for pre-training, downstream training and evaluation of MuLaP on 4 tasks: music auto-tagging, genre classification, instrument recognition and emotion recognition.

Installation

Clone the repository and install the dependencies. We recommend using a fresh virtual environment.

git clone https://github.com/ilaria-manco/mulap
cd mulap 
pip install -r requirements.txt
pip install -e .

Preparing the dataset

MuLaP is pre-trained on a multimodal dataset of (audio, text) pairs.

Annotations should be provided in JSON format and must include the following fields:

audio_id: the unique identifier for each audio track in the dataset

caption: a string with the textual description of the audio track

audio_path: path to the audio track, relative to the root audio directory

One JSON file per split must be provided and stored in the data/datasets directory, following this structure:

dataset_name
├── audio
│   ├── track_1.npy
│   ├── track_2.npy
│   └── ...
├── dataset_train.json
├── dataset_val.json
└── dataset_test.json

An illustrative example of the dataset is provided in data/datasets/audiocaption/.
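As a rough sketch, an annotation file with the required fields could be generated like this. The track names and captions below are made up, and the exact top-level layout (here a list of records) may differ from the repo's own example in data/datasets/audiocaption/:

```python
import json

# Hypothetical annotations with the three required fields. audio_path is
# relative to the root audio directory, matching the structure above.
annotations = [
    {
        "audio_id": "track_1",
        "caption": "A mellow acoustic guitar melody with soft percussion.",
        "audio_path": "audio/track_1.npy",
    },
    {
        "audio_id": "track_2",
        "caption": "An upbeat electronic track with a driving bassline.",
        "audio_path": "audio/track_2.npy",
    },
]

with open("dataset_train.json", "w") as f:
    json.dump(annotations, f, indent=2)
```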

Pre-training MuLaP

Dataset, model and training configurations are set in the respective yaml files in configs. You can also pass some options via the CLI, overriding the arguments in the config files. For more details on the CLI options, please refer to the training script.
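The override precedence (CLI beats YAML) can be sketched roughly as follows. The option names and config keys below are made up for illustration and may not match the actual flags in pretrain.py:

```python
import argparse

def merge_config(config: dict, cli_args: dict) -> dict:
    """Overlay CLI values on top of a parsed YAML config."""
    merged = dict(config)
    for key, value in cli_args.items():
        if value is not None:  # only override keys the user actually passed
            merged[key] = value
    return merged

# Stand-in for values parsed from a yaml file in configs/
yaml_config = {"batch_size": 32, "learning_rate": 1e-4, "epochs": 100}

parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=None)
parser.add_argument("--learning_rate", type=float, default=None)
args = parser.parse_args(["--batch_size", "64"])  # example CLI invocation

config = merge_config(yaml_config, vars(args))
print(config["batch_size"])     # CLI value wins
print(config["learning_rate"])  # YAML default kept
```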

To pre-train the model with the default configs, simply run

cd mulap/scripts/
python pretrain.py 

This will generate a pretrain_id and create a new folder in save/experiments/ where the output will be saved.

If you wish to resume pre-training from a saved checkpoint, run this command:

python pretrain.py --experiment_id <pretrain_id> 

Transferring MuLaP to downstream tasks

After pre-training, you can train a classifier on top of the audio backbone for one of the supported downstream tasks by running

cd mulap/scripts/
python downstream.py <pretrain_id> <downstream_task>

The downstream tasks supported are:

- music auto-tagging
- genre classification
- instrument recognition
- emotion recognition

You'll need to download the datasets inside the datasets/ folder and preprocess them before running downstream training. Dataset, model and training configurations for each task are set in the respective yaml files in configs/downstream.
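Conceptually, the transfer setup trains only a lightweight classifier on top of the frozen pre-trained backbone. The sketch below illustrates the idea with random vectors standing in for backbone embeddings and a softmax classifier fit by plain gradient descent; it is not the repo's actual downstream code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_classes = 200, 64, 4
features = rng.standard_normal((n, d))        # stand-in for frozen embeddings
labels = rng.integers(0, n_classes, size=n)   # e.g. genre labels
onehot = np.eye(n_classes)[labels]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(W):
    return -np.mean(np.log(softmax(features @ W)[np.arange(n), labels]))

W = np.zeros((d, n_classes))                  # only the classifier is trained
initial_loss = cross_entropy(W)
for _ in range(200):                          # backbone stays untouched
    grad = features.T @ (softmax(features @ W) - onehot) / n
    W -= 0.5 * grad
final_loss = cross_entropy(W)
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```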

Evaluating downstream performance

After downstream training, you can run the evaluation as follows:

cd mulap/scripts/
python eval.py <pretrain_id> <downstream_id> 
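Multi-label auto-tagging is commonly scored with macro-averaged ROC-AUC over tags (an assumption here; check the eval script for the exact metrics it reports). For one tag, ROC-AUC equals the probability that a random positive track is ranked above a random negative one, which the sketch below computes directly on made-up predictions:

```python
import numpy as np

def roc_auc(y_true, y_score):
    """ROC-AUC for a single binary tag via pairwise ranking."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 10))   # 100 tracks, 10 binary tags
y_score = rng.random(size=(100, 10))          # model's predicted tag scores

macro_auc = np.mean([roc_auc(y_true[:, t], y_score[:, t]) for t in range(10)])
print(f"macro ROC-AUC: {macro_auc:.3f}")      # near 0.5 for random predictions
```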

Cite

If you use the code in this repo, please consider citing our work:

@inproceedings{manco2022learning,
  title={Learning Music Audio Representations Via Weak Language Supervision}, 
  author={Manco, Ilaria and Benetos, Emmanouil and Quinton, Elio and Fazekas, György},
  booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, 
  year={2022},
  pages={456--460},
  doi={10.1109/ICASSP43922.2022.9746996}
}

License

This repository is released under the GNU General Public License v3.0. Please see the LICENSE file for more details.

Some of the code in this repository is adapted from other open-source projects.

Contact

If you have any questions, please get in touch: i.manco@qmul.ac.uk.
