Open-Lyrics is a Python library that transcribes voice files using faster-whisper, and translates/polishes the resulting text into .lrc files in the desired language using LLMs, e.g. OpenAI GPT, Anthropic Claude.
- Well-preprocessed audio to reduce hallucination (Loudness Norm & optional Noise Suppression).
- Context-aware translation to improve translation quality. Check the prompt for details.
- Check here for an overview of the architecture.
- 2024.5.7:
  - Add custom endpoint (base_url) support for OpenAI & Anthropic:

    ```python
    lrcer = LRCer(base_url_config={'openai': 'https://api.chatanywhere.tech', 'anthropic': 'https://example/api'})
    ```

  - Generate bilingual subtitles:

    ```python
    lrcer.run('./data/test.mp3', target_lang='zh-cn', bilingual_sub=True)
    ```
- 2024.5.11: Add glossary into the prompt, which is confirmed to improve domain-specific translation. Check here for details.
- 2024.5.17: You can route the model to an arbitrary Chatbot SDK (either OpenAI or Anthropic) by setting `chatbot_model` to `provider: model_name` together with `base_url_config`:

  ```python
  lrcer = LRCer(chatbot_model='openai: claude-3-haiku-20240307', base_url_config={'openai': 'https://api.g4f.icu/v1/'})
  ```
- 2024.6.25: Support Gemini as the translation engine LLM, try using `gemini-1.5-flash`:

  ```python
  lrcer = LRCer(chatbot_model='gemini-1.5-flash')
  ```
- 2024.9.10: Now openlrc depends on a specific commit of faster-whisper, which is not published on PyPI. Install it from source:

  ```bash
  pip install "faster-whisper @ https://github.com/SYSTRAN/faster-whisper/archive/8327d8cc647266ed66f6cd878cf97eccface7351.tar.gz"
  ```
- 2024.12.19: Add `ModelConfig` for chat model routing, which is more flexible than a model name string. The `ModelConfig` can be `ModelConfig(provider='', model_name='', base_url='', proxy='')`, e.g.:

  ```python
  from openlrc import LRCer, ModelConfig, ModelProvider

  chatbot_model1 = ModelConfig(
      provider=ModelProvider.OPENAI,
      name='deepseek-chat',
      base_url='https://api.deepseek.com/beta',
      api_key='sk-APIKEY'
  )
  chatbot_model2 = ModelConfig(
      provider=ModelProvider.OPENAI,
      name='gpt-4o-mini',
      api_key='sk-APIKEY'
  )
  lrcer = LRCer(chatbot_model=chatbot_model1, retry_model=chatbot_model2)
  ```
- Please install CUDA 11.x and cuDNN 8 for CUDA 11 first according to https://opennmt.net/CTranslate2/installation.html to enable `faster-whisper`. `faster-whisper` also needs cuBLAS for CUDA 11 installed. (A quick environment sanity check is sketched after these installation steps.)

  For Windows users: download the libraries from Purfview's repository. Purfview's whisper-standalone-win provides the required NVIDIA libraries for Windows in a single archive. Decompress the archive and place the libraries in a directory included in the `PATH`.

- Add LLM API keys, you can either:
  - Add your OpenAI API key to environment variable `OPENAI_API_KEY`.
  - Add your Anthropic API key to environment variable `ANTHROPIC_API_KEY`.
  - Add your Google API key to environment variable `GOOGLE_API_KEY`.
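  If you prefer not to export the keys in your shell, a minimal sketch is to set them from Python before constructing `LRCer` (the placeholder values are hypothetical; set only the providers you actually use):

  ```python
  import os

  # Illustrative only: these placeholder keys are not real.
  os.environ['OPENAI_API_KEY'] = 'sk-...'
  os.environ['ANTHROPIC_API_KEY'] = 'sk-ant-...'
  os.environ['GOOGLE_API_KEY'] = 'AIza...'
  ```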
- Install ffmpeg and add its `bin` directory to your `PATH`.

- This project can be installed from PyPI:

  ```bash
  pip install openlrc
  ```

  or install directly from GitHub:

  ```bash
  pip install git+https://github.com/zh-plus/openlrc
  ```
- Install the latest faster-whisper from source:

  ```bash
  pip install "faster-whisper @ https://github.com/SYSTRAN/faster-whisper/archive/8327d8cc647266ed66f6cd878cf97eccface7351.tar.gz"
  ```
- Install PyTorch:

  ```bash
  pip install --force-reinstall torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
  ```
- Fix the `typing-extensions` issue:

  ```bash
  pip install typing-extensions -U
  ```
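After finishing the steps above, an optional sanity check (illustrative, not part of openlrc) is to confirm that ffmpeg is on the `PATH` and that PyTorch can see the GPU:

```python
# Illustrative environment check; openlrc does not require you to run this.
import shutil

import torch

print('ffmpeg found:', shutil.which('ffmpeg') is not None)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))
```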
```python
from openlrc import LRCer

if __name__ == '__main__':
    lrcer = LRCer()

    # Single file
    lrcer.run('./data/test.mp3',
              target_lang='zh-cn')  # Generate translated ./data/test.lrc with default translate prompt.

    # Multiple files
    lrcer.run(['./data/test1.mp3', './data/test2.mp3'], target_lang='zh-cn')
    # Note we run the transcription sequentially, but run the translation concurrently for each file.

    # Path can contain video
    lrcer.run(['./data/test_audio.mp3', './data/test_video.mp4'], target_lang='zh-cn')
    # Generate translated ./data/test_audio.lrc and ./data/test_video.srt

    # Use glossary to improve translation
    lrcer = LRCer(glossary='./data/aoe4-glossary.yaml')

    # To skip translation process
    lrcer.run('./data/test.mp3', target_lang='en', skip_trans=True)

    # Change asr_options or vad_options, check openlrc.defaults for details
    vad_options = {"threshold": 0.1}
    lrcer = LRCer(vad_options=vad_options)
    lrcer.run('./data/test.mp3', target_lang='zh-cn')

    # Enhance the audio using noise suppression (consumes more time).
    lrcer.run('./data/test.mp3', target_lang='zh-cn', noise_suppress=True)

    # Change the LLM model for translation
    lrcer = LRCer(chatbot_model='claude-3-sonnet-20240229')
    lrcer.run('./data/test.mp3', target_lang='zh-cn')

    # Clear temp folder after processing is done
    lrcer.run('./data/test.mp3', target_lang='zh-cn', clear_temp=True)

    # Change base_url
    lrcer = LRCer(base_url_config={'openai': 'https://api.g4f.icu/v1',
                                   'anthropic': 'https://example/api'})

    # Route model to arbitrary Chatbot SDK
    lrcer = LRCer(chatbot_model='openai: claude-3-sonnet-20240229',
                  base_url_config={'openai': 'https://api.g4f.icu/v1/'})

    # Bilingual subtitle
    lrcer.run('./data/test.mp3', target_lang='zh-cn', bilingual_sub=True)
```
Check more details in Documentation.
Add a glossary to improve domain-specific translation. For example, `aoe4-glossary.yaml`:

```yaml
{
  "aoe4": "帝国时代4",
  "feudal": "封建时代",
  "2TC": "双TC",
  "English": "英格兰文明",
  "scout": "侦察兵"
}
```

```python
lrcer = LRCer(glossary='./data/aoe4-glossary.yaml')
lrcer.run('./data/test.mp3', target_lang='zh-cn')
```

or directly use a dictionary to add the glossary:

```python
lrcer = LRCer(glossary={"aoe4": "帝国时代4", "feudal": "封建时代"})
lrcer.run('./data/test.mp3', target_lang='zh-cn')
```
Pricing data is from OpenAI and Anthropic.

| Model Name                 | Pricing for 1M Tokens (Input/Output) (USD) | Cost for 1 Hour Audio (USD) |
|----------------------------|--------------------------------------------|-----------------------------|
| gpt-3.5-turbo              | 0.5, 1.5                                   | 0.01                        |
| gpt-4o-mini                | 0.5, 1.5                                   | 0.01                        |
| gpt-4-0125-preview         | 10, 30                                     | 0.5                         |
| gpt-4-turbo-preview        | 10, 30                                     | 0.5                         |
| gpt-4o                     | 5, 15                                      | 0.25                        |
| claude-3-haiku-20240307    | 0.25, 1.25                                 | 0.015                       |
| claude-3-sonnet-20240229   | 3, 15                                      | 0.2                         |
| claude-3-opus-20240229     | 15, 75                                     | 1                           |
| claude-3-5-sonnet-20240620 | 3, 15                                      | 0.2                         |
| gemini-1.5-flash           | 0.175, 2.1                                 | 0.01                        |
| gemini-1.0-pro             | 0.5, 1.5                                   | 0.01                        |
| gemini-1.5-pro             | 1.75, 21                                   | 0.1                         |
| deepseek-chat              | 0.18, 2.2                                  | 0.01                        |
Note the cost is estimated based on the token count of the input and output text. The actual cost may vary depending on the language and audio speed.

For English audio, we recommend using `deepseek-chat`, `gpt-4o-mini` or `gemini-1.5-flash`.

For non-English audio, we recommend using `claude-3-5-sonnet-20240620`.
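As a rough illustration of how the per-hour figures relate to the per-token prices (the token counts below are assumptions made for the arithmetic, not measurements):

```python
# Hypothetical cost estimate: the token counts are assumed, not measured.
input_price, output_price = 0.5, 1.5        # gpt-4o-mini, USD per 1M tokens (from the table above)
input_tokens, output_tokens = 5_000, 5_000  # assumed tokens for one hour of audio

cost = input_tokens / 1e6 * input_price + output_tokens / 1e6 * output_price
print(f'Estimated cost: ${cost:.3f}')       # ~ $0.01 with these assumptions
```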
To maintain context between translation segments, the process is sequential for each audio file.
- [Efficiency] Batched translate/polish for GPT request (enable contextual ability).
- [Efficiency] Concurrent support for GPT request.
- [Translation Quality] Make translate prompt more robust according to https://github.com/openai/openai-cookbook.
- [Feature] Automatically fix JSON encoder errors using GPT.
- [Efficiency] Asynchronously perform transcription and translation for multiple audio inputs.
- [Quality] Improve batched translation/polish prompt according to gpt-subtrans.
- [Feature] Input video support.
- [Feature] Multiple output format support.
- [Quality] Speech enhancement for input audio.
- [Feature] Preprocessor: Voice-music separation.
- [Feature] Align ground-truth transcription with audio.
- [Quality] Use multilingual language model to assess translation quality.
- [Efficiency] Add Azure OpenAI Service support.
- [Quality] Use Claude for translation.
- [Feature] Add local LLM support.
- [Feature] Multiple translate engine (Anthropic, Microsoft, DeepL, Google, etc.) support.
- [Feature] Build an Electron + FastAPI GUI for cross-platform application.
- [Feature] Web-based streamlit GUI.
- Add fine-tuned whisper-large-v2 models for common languages.
- [Feature] Add custom OpenAI & Anthropic endpoint support.
- [Feature] Add local translation model support (e.g. SakuraLLM).
- [Quality] Construct translation quality benchmark test for each patch.
- [Quality] Split subtitles using LLM (ref).
- [Quality] Trim extra long subtitle using LLM (ref).
- [Others] Add transcribed examples.
- Song
- Podcast
- Audiobook
- https://github.com/guillaumekln/faster-whisper
- https://github.com/m-bain/whisperX
- https://github.com/openai/openai-python
- https://github.com/openai/whisper
- https://github.com/machinewrapped/gpt-subtrans
- https://github.com/MicrosoftTranslator/Text-Translation-API-V3-Python
- https://github.com/streamlit/streamlit
```bibtex
@book{openlrc2024zh,
    title = {zh-plus/openlrc},
    url = {https://github.com/zh-plus/openlrc},
    author = {Hao, Zheng},
    date = {2024-09-10},
    year = {2024},
    month = {9},
    day = {10},
}
```