- SpeechGPT (2023/05) - Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities
- SpeechGPT-Gen (2024/01) - Scaling Chain-of-Information Speech Generation
- [2024/2/20] We proposed AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling. Check out the paper and GitHub.
- [2024/1/25] We released SpeechGPT-Gen: Scaling Chain-of-Information Speech Generation. Check out the paper and GitHub.
- [2024/1/9] We proposed SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems. Check out the paper and GitHub.
- [2023/9/15] We released the SpeechGPT code and checkpoints and the SpeechInstruct dataset.
- [2023/9/1] We proposed SpeechTokenizer: Unified Speech Tokenizer for Speech Language Models, and released its code and checkpoints. Check out the paper, demo, and GitHub.
- [2023/5/18] We released SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities, the first multi-modal LLM capable of perceiving and generating multi-modal content following multi-modal human instructions. Check out the paper and demo.
- We express our appreciation to Fuliang Weng and Rong Ye for their valuable suggestions and guidance.
If you find our work useful for your research or applications, please cite it using the following BibTeX:
    @misc{zhang2023speechgpt,
          title={SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities},
          author={Dong Zhang and Shimin Li and Xin Zhang and Jun Zhan and Pengyu Wang and Yaqian Zhou and Xipeng Qiu},
          year={2023},
          eprint={2305.11000},
          archivePrefix={arXiv},
          primaryClass={cs.CL}
    }