Chat with an LLM that talks back to you over audio; supports a fully offline mode with LibreTranslate and Ollama.
Quickstart example:
npm install
docker compose --profile piper up -d
npm run start
For Piper, download a voice from the Piper initial release page on GitHub: extract the .onnx and .onnx.json files from the zip into the models folder, then set the voice name in the .env file, for example:

PIPER_VOICE_NAME=en-us-ryan-high
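Before starting the stack, it can help to confirm the voice files landed in the right place. This is a minimal sketch, not part of the project: the `models` directory and the `PIPER_VOICE_NAME` naming convention come from the steps above, and the helper function name is hypothetical.

```python
# Sketch: check that a Piper voice is installed as described above.
# Piper needs both the .onnx model and its .onnx.json config, named
# after the voice (e.g. en-us-ryan-high.onnx / en-us-ryan-high.onnx.json).
from pathlib import Path


def piper_voice_installed(models_dir: str, voice_name: str) -> bool:
    """Return True if both files Piper needs exist in models_dir."""
    base = Path(models_dir)
    return all(
        (base / f"{voice_name}{suffix}").is_file()
        for suffix in (".onnx", ".onnx.json")
    )
```

Run it against the `models` folder with the same value you put in `.env`; if it returns False, re-check the extracted file names.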
VOICEVOX (CPU):
- Start: docker compose --profile voicevox-cpu up -d
- Stop: docker compose --profile voicevox-cpu down

VOICEVOX (GPU):
- Start: docker compose --profile voicevox-gpu up -d
- Stop: docker compose --profile voicevox-gpu down

Piper:
- Start: docker compose --profile piper up -d
- Stop: docker compose --profile piper down
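Since `docker compose ... up -d` returns before the TTS container is actually ready, a small readiness probe can avoid confusing startup errors. The host and port below are assumptions, not project configuration: use whatever your compose file maps for the profile you started (the VOICEVOX engine commonly listens on 50021).

```python
# Sketch: wait for a TTS backend container to accept TCP connections
# after `docker compose --profile <name> up -d`.
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll a TCP port until it accepts a connection or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # something is listening
        except OSError:
            time.sleep(1)  # not up yet; retry until the deadline
    return False
```

For example, `wait_for_port("127.0.0.1", 50021)` before `npm run start` when using a VOICEVOX profile.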
- Write better docs
- Add memory/memory management for the LLM
- Improve code quality