AMT-APC is a method for training an automatic piano cover generation model by fine-tuning an AMT (Automatic Music Transcription) model.
- Project page: AMT-APC
- Paper: [AMT-APC: Automatic Piano Cover by Fine-Tuning an Automatic Music Transcription Model (arXiv:2409.14086)](https://arxiv.org/abs/2409.14086)
To generate piano covers with the pre-trained model (Python version: 3.10):

- Install dependencies

```sh
pip install -r requirements.txt
```
Alternatively, if you only need to run the inference code, you can install just the necessary packages.

```sh
pip install torch torchaudio pretty-midi tqdm
```
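As a quick sanity check (not part of the original steps), you can confirm that the minimal packages import cleanly and whether PyTorch can see a GPU:

```python
# Sanity check: confirm the minimal inference dependencies import cleanly
# and report whether a CUDA GPU is visible to PyTorch.
import torch
import torchaudio
import pretty_midi  # noqa: F401  (import check only)

print("torch:", torch.__version__)
print("torchaudio:", torchaudio.__version__)
print("CUDA available:", torch.cuda.is_available())
```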
- Download the pre-trained model

```sh
wget -P models/params/ https://github.com/misya11p/amt-apc/releases/download/beta/apc.pth
```
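If you want to verify the download, here is a minimal sketch for peeking into the checkpoint with PyTorch; it assumes `apc.pth` holds a plain dict of weights (e.g. a state dict), which may differ from how the release file is actually packaged:

```python
# Minimal sketch: peek into the downloaded checkpoint.
# Assumption: apc.pth holds a dict (e.g. a state_dict); adjust if the
# release file is packaged differently.
import torch

state = torch.load("models/params/apc.pth", map_location="cpu")
print(type(state))
if isinstance(state, dict):
    print(len(state), "top-level entries")
    for key in list(state)[:5]:  # first few keys only
        print(" ", key)
```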
- Run the inference code

```sh
python infer input.wav
```
You can also input a YouTube URL (requires `yt-dlp`).

```sh
python infer 'https://www.youtube.com/watch?v=...'
```
You can also specify a style (`level1`, `level2`, `level3`).

```sh
python infer input.wav --style level3
```
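To cover many files in one go, one option is to wrap the CLI in a small script. This sketch uses only the documented `python infer` interface; the `songs/` directory name is just an example:

```python
# Minimal sketch: batch-cover every .wav in a folder by invoking the
# documented CLI. The "songs" directory name is hypothetical.
import subprocess
from pathlib import Path

for wav in sorted(Path("songs").glob("*.wav")):
    print("Covering", wav)
    subprocess.run(["python", "infer", str(wav), "--style", "level2"], check=True)
```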
To train the model yourself (Python version: 3.10):

- Install dependencies

```sh
pip install -r requirements.txt
```
- Download the pre-trained AMT model

```sh
wget -P models/params/ https://github.com/misya11p/amt-apc/releases/download/beta/amt.pth
```
- Download the dataset

```sh
python download.py
```

The dataset directory is set to `dataset/` by default. You can change this directory by modifying `path.dataset` in `config.json`.
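A quick way to see what the download produced (assuming the default `dataset/` location):

```python
# Minimal sketch: count what the download produced under the default
# dataset directory (change the path if you edited path.dataset).
from pathlib import Path

root = Path("dataset")
files = [p for p in root.rglob("*") if p.is_file()]
print(f"{len(files)} files under {root.resolve()}")
```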
- Create the dataset

```sh
python data/sync.py
python data/transcribe.py
python data/sv/extract.py
python data/create_labels.py
python data/create_dataset.py
```
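Since each step builds on the previous one, you may prefer to run them from a single script that stops on the first failure; a minimal sketch:

```python
# Minimal sketch: run the dataset-creation scripts in order and abort on
# the first failure, so a broken intermediate step is caught immediately.
import subprocess

STEPS = [
    "data/sync.py",
    "data/transcribe.py",
    "data/sv/extract.py",
    "data/create_labels.py",
    "data/create_dataset.py",
]
for script in STEPS:
    print("Running", script)
    subprocess.run(["python", script], check=True)
```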
- Train the model

```sh
python train --n_gpus 1
```
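Before picking a value for `--n_gpus`, it can help to check how many devices PyTorch actually sees:

```python
# Check visible CUDA devices before choosing a value for --n_gpus.
import torch

n = torch.cuda.device_count()
print(f"{n} CUDA device(s) visible")
for i in range(n):
    print(f"  cuda:{i}: {torch.cuda.get_device_name(i)}")
```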
- Evaluate the model

Calculate the evaluation metrics used in the paper (this step requires ChromaCoverId):

```sh
git clone https://github.com/albincorreya/ChromaCoverId.git eval/ChromaCoverId
python eval/cover.py
python eval/distance.py
```
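If you re-run the evaluation, a small wrapper that clones ChromaCoverId only when it is missing keeps the step idempotent; a sketch using only the commands above:

```python
# Minimal sketch: idempotent evaluation runner. Clones ChromaCoverId only
# if it is not already present, then runs both evaluation scripts.
import subprocess
from pathlib import Path

if not Path("eval/ChromaCoverId").exists():
    subprocess.run(
        ["git", "clone",
         "https://github.com/albincorreya/ChromaCoverId.git",
         "eval/ChromaCoverId"],
        check=True,
    )
for script in ["eval/cover.py", "eval/distance.py"]:
    subprocess.run(["python", script], check=True)
```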
Detailed configuration can be done through `config.json` or via command-line options, which are explained with `--help`. The default values are those used in the experiments in the paper.
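If you script your experiments, `config.json` can also be edited programmatically. This sketch assumes the nested layout implied by the `path.dataset` key mentioned above; adjust if the file is structured differently:

```python
# Minimal sketch: change the dataset path in config.json from a script.
# Assumption: the file nests keys as {"path": {"dataset": ...}}, matching
# the path.dataset notation used above. The new path is just an example.
import json

with open("config.json") as f:
    config = json.load(f)
config["path"]["dataset"] = "/data/amt-apc-dataset/"
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```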
To cite this work:

```bibtex
@article{komiya2024,
  title={AMT-APC: Automatic Piano Cover by Fine-Tuning an Automatic Music Transcription Model},
  author={Komiya, Kazuma and Fukuhara, Yoshihisa},
  journal={arXiv preprint arXiv:2409.14086},
  year={2024}
}
```