Repositories
    • A curated list of Datasets, Models and Papers for Music Emotion Recognition (MER)
      Updated Sep 25, 2024
    • Python
      MIT License
      Updated Sep 4, 2024
    • Updated Sep 3, 2024
    • IAMM
      An exploration of how generative text-to-music AI models can be used for emotion guidance
      Updated Jul 31, 2024
    • Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model
      Python
      MIT License
      Updated Jul 30, 2024
    • mustango
      Mustango: Toward Controllable Text-to-Music Generation
      Python
      MIT License
      Updated Jul 24, 2024
    • MidiCaps
      A large-scale dataset of caption-annotated MIDI files.
      Python
      MIT License
      Updated Jul 23, 2024
    • Resources for DisfluencySpeech
      MIT License
      Updated Jul 15, 2024
    • Python
      Updated Jun 5, 2024
    • Conditional VAE for Accented Speech Generation
      HTML
      Updated Jun 4, 2024
    • CM-HRNN
      Hierarchical Recurrent Neural Networks for Conditional Melody Generation with Long-term Structure
      Python
      Updated May 31, 2024
    • Website for emotion guidance
      JavaScript
      Updated Mar 14, 2024
    • A list of demo websites for automatic music generation research
      Updated Nov 15, 2023
    • A list of speech, music, and sound-effect datasets that can provide training data for generative AI, AIGC, AI model training, intelligent audio tool development, and audio applications; mainly used for speech recognition, speech synthesis, singing voice synthesis, music information retrieval, music generation, etc.
      MIT License
      Updated Oct 31, 2023
    • Web interface for AI music generation models
      JavaScript
      Updated Oct 19, 2023
    • PreBit
      Repository accompanying the paper "A multimodal model with Twitter FinBERT embeddings for extreme price movement prediction of Bitcoin"
      Jupyter Notebook
      Updated Oct 19, 2023
    • Code for the paper "A dataset and classification model for Malay, Hindi, Tamil and Chinese music"
      Jupyter Notebook
      Updated Oct 19, 2023
    • Fundamental Music Embedding (FME)
      Python
      Updated Oct 16, 2023
    • MuVi
      Predicting emotion from music videos: exploring the relative contribution of visual and auditory information on affective responses
      Python
      Updated Oct 3, 2023
    • nnAudio
      Audio processing using a PyTorch 1D convolution network (see the minimal usage sketch after this list)
      Python
      MIT License
      Updated Sep 4, 2023
    • ReconVAT
      ReconVAT: a semi-supervised automatic music transcription (AMT) model
      Python
      Updated Aug 29, 2023
    • PyTorch Dataset for Speech and Music audio
      Python
      Updated Aug 18, 2023
    • Jointist
      Official implementation of Jointist
      Python
      Updated Jul 26, 2023
    • Demucs Lightning: a PyTorch Lightning version of Demucs with Hydra and TensorBoard features
      Python
      Updated May 3, 2023
    • DiffRoll
      PyTorch implementation of DiffRoll, a diffusion-based generative automatic music transcription (AMT) model
      Jupyter Notebook
      MIT License
      Updated Apr 24, 2023
    • Regression-based Music Emotion Prediction using Triplet Neural Networks
      Jupyter Notebook
      Updated Mar 24, 2023
    • A novel seq2seq framework where high-level musicalities (such as the valence of the chord progression) are fed to the Encoder and "translated" to lead sheet events in the Decoder. For further details, please read and cite our paper.
      Python
      Updated Jan 4, 2023
    • Conditional Drums Generation using Compound Word Representations
      Python
      Updated Jul 29, 2022
    • MusIAC
      Music inpainting control
      Jupyter Notebook
      Updated Feb 8, 2022
    • TeX
      Updated Jan 21, 2022
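
A minimal usage sketch for the nnAudio entry above, assuming nnAudio ≥ 0.3 (where the spectrogram layers live under nnAudio.features; older releases expose the same classes under nnAudio.Spectrogram). The sample rate, FFT, and hop parameters below are illustrative, not values taken from the repository.

```python
# Minimal sketch: nnAudio spectrogram layers are torch.nn.Modules, so they can
# be dropped into a model and run on GPU like any other layer.
# Assumption: nnAudio >= 0.3 (features module); older versions use nnAudio.Spectrogram.
import torch
from nnAudio import features

# Mel spectrogram front end (parameter values are illustrative).
mel_layer = features.MelSpectrogram(sr=22050, n_fft=2048, n_mels=128, hop_length=512)

waveform = torch.randn(4, 22050)   # batch of 4 one-second dummy clips
mel = mel_layer(waveform)          # -> (batch, n_mels, time_frames)
print(mel.shape)
```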