Update docs and samples

leng-yue committed May 11, 2024
1 parent 56962cc commit b96ebdb
Showing 9 changed files with 320 additions and 179 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -23,3 +23,4 @@ asr-label*
/.cache
/fishenv
/.locale
/demo-audios
170 changes: 103 additions & 67 deletions docs/en/finetune.md
@@ -2,65 +2,22 @@

Obviously, when you opened this page, you were not satisfied with the performance of the few-shot pre-trained model. You want to fine-tune a model to improve its performance on your dataset.

`Fish Speech` consists of three modules: `VQGAN`, `LLAMA`and `VITS`.
`Fish Speech` consists of three modules: `VQGAN`, `LLAMA`, and `VITS`.

!!! info
You should first conduct the following test to determine if you need to fine-tune `VQGAN`:
You should first conduct the following test to determine if you need to fine-tune `VITS Decoder`:
```bash
python tools/vqgan/inference.py -i test.wav
python tools/vits_decoder/inference.py \
-ckpt checkpoints/vits_decoder_v1.1.ckpt \
-i fake.npy -r test.wav \
--text "The text you want to generate"
```
This test will generate a `fake.wav` file. If the timbre of this file differs from the speaker's original voice, or if the quality is not high, you need to fine-tune `VQGAN`.
This test will generate a `fake.wav` file. If the timbre of this file differs from the speaker's original voice, or if the quality is not high, you need to fine-tune `VITS Decoder`.

Similarly, you can refer to [Inference](inference.md) to run `generate.py` and evaluate if the prosody meets your expectations. If it does not, then you need to fine-tune `LLAMA`.

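For example, a quick prosody check could look like the sketch below. This is a hypothetical invocation — the script path and flags are assumptions, so consult [Inference](inference.md) for the exact options:

```bash
# Hypothetical sketch — verify the script path and flags against inference.md.
python tools/llama/generate.py \
    --config-name dual_ar_2_codebook_medium \
    --checkpoint-path "checkpoints/text2semantic-sft-medium-v1.1-4k.pth" \
    --text "A sentence whose prosody you want to evaluate"
```
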
It is recommended to fine-tune the LLAMA and VITS model first, then fine-tune the `VQGAN` according to your needs.

## Fine-tuning VQGAN
### 1. Prepare the Dataset

```
.
├── SPK1
│ ├── 21.15-26.44.mp3
│ ├── 27.51-29.98.mp3
│ └── 30.1-32.71.mp3
└── SPK2
└── 38.79-40.85.mp3
```

You need to format your dataset as shown above and place it under `data`. Audio files can have `.mp3`, `.wav`, or `.flac` extensions.

### 2. Split Training and Validation Sets

```bash
python tools/vqgan/create_train_split.py data
```

This command will create `data/vq_train_filelist.txt` and `data/vq_val_filelist.txt` in the `data/demo` directory, to be used for training and validation respectively.

!!!info
For the VITS format, you can specify a file list using `--filelist xxx.list`.
Please note that the audio files in `filelist` must also be located in the `data` folder.

### 3. Start Training

```bash
python fish_speech/train.py --config-name vqgan_finetune
```

!!! note
You can modify training parameters by editing `fish_speech/configs/vqgan_finetune.yaml`, but in most cases, this won't be necessary.

### 4. Test the Audio

```bash
python tools/vqgan/inference.py -i test.wav --checkpoint-path results/vqgan_finetune/checkpoints/step_000010000.ckpt
```

You can review `fake.wav` to assess the fine-tuning results.

!!! note
You may also try other checkpoints. We suggest using the earliest checkpoint that meets your requirements, as they often perform better on out-of-distribution (OOD) data.
It is recommended to fine-tune `LLAMA` first, then fine-tune the `VITS Decoder` according to your needs.

## Fine-tuning LLAMA
### 1. Prepare the dataset
@@ -168,8 +125,27 @@ After training is complete, you can refer to the [inference](inference.md) secti
By default, the model will only learn the speaker's speech patterns and not the timbre. You still need to use prompts to ensure timbre stability.
If you want to learn the timbre, you can increase the number of training steps, but this may lead to overfitting.

## Fine-tuning VITS
### 1. Prepare the dataset
#### Fine-tuning with LoRA

!!! note
LoRA can reduce the risk of overfitting in models, but it may also lead to underfitting on large datasets.

If you want to use LoRA, please add the following parameter: `+lora@model.lora_config=r_8_alpha_16`.
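
For instance, a full launch command with LoRA enabled might look like the sketch below. The config name `text2semantic_finetune` is an assumption here — substitute the config name used in the LLAMA training step above if it differs:

```bash
# Sketch: LLAMA fine-tuning with LoRA enabled via a Hydra override.
# `text2semantic_finetune` is assumed to be the LLAMA fine-tuning config name.
python fish_speech/train.py --config-name text2semantic_finetune \
    +lora@model.lora_config=r_8_alpha_16
```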

After training, you need to convert the LoRA weights to regular weights before performing inference.

```bash
python tools/llama/merge_lora.py \
--llama-config dual_ar_2_codebook_medium \
--lora-config r_8_alpha_16 \
--llama-weight checkpoints/text2semantic-sft-medium-v1.1-4k.pth \
--lora-weight results/text2semantic-finetune-medium-lora/checkpoints/step_000000200.ckpt \
--output checkpoints/merged.ckpt
```
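
The merged checkpoint (`checkpoints/merged.ckpt` in this example) can then be used in place of the original LLAMA weights during inference.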


## Fine-tuning VITS Decoder
### 1. Prepare the Dataset

```
.
@@ -184,32 +160,92 @@
├── 38.79-40.85.lab
└── 38.79-40.85.mp3
```

!!! note
The fine-tuning for VITS only support the .lab format files, please don't use .list file!
VITS fine-tuning currently only supports `.lab` as the label file and does not support the `filelist` format.

You need to convert the dataset to the format above, and move them to the `data` , the suffix of the files can be `.mp3`, `.wav` `.flac`, the label files' suffix are recommended to be `.lab`.
You need to format your dataset as shown above and place it under `data`. Audio files can have `.mp3`, `.wav`, or `.flac` extensions, and the annotation files should have the `.lab` extension.

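As an illustration, each `.lab` file is expected to contain the plain-text transcription of the audio clip that shares its name (the sentence below is made up):

```
The transcription of 21.15-26.44.mp3 goes here.
```
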
### 2.Start Training
### 2. Split Training and Validation Sets

```bash
python fish_speech/train.py --config-name vits_decoder_finetune
python tools/vqgan/create_train_split.py data
```

This command will create `data/vq_train_filelist.txt` and `data/vq_val_filelist.txt` in the `data` directory, to be used for training and validation respectively.

#### Fine-tuning with LoRA
!!! info
For the VITS format, you can specify a file list using `--filelist xxx.list`.
Please note that the audio files in `filelist` must also be located in the `data` folder.

### 3. Start Training

```bash
python fish_speech/train.py --config-name vits_decoder_finetune
```

!!! note
LoRA can reduce the risk of overfitting in models, but it may also lead to underfitting on large datasets.
You can modify training parameters by editing `fish_speech/configs/vits_decoder_finetune.yaml`, but in most cases, this won't be necessary.

If you want to use LoRA, please add the following parameter: `+lora@model.lora_config=r_8_alpha_16`.
### 4. Test the Audio

```bash
python tools/vits_decoder/inference.py \
--checkpoint-path results/vits_decoder_finetune/checkpoints/step_000010000.ckpt \
-i test.npy -r test.wav \
--text "The text you want to generate"
```

After training, you need to convert the LoRA weights to regular weights before performing inference.
You can review `fake.wav` to assess the fine-tuning results.


## Fine-tuning VQGAN (Not Recommended)

In version 1.1 we no longer recommend fine-tuning VQGAN; the VITS Decoder yields better results. If you still want to fine-tune VQGAN, you can refer to the following steps.

### 1. Prepare the Dataset

```
.
├── SPK1
│ ├── 21.15-26.44.mp3
│ ├── 27.51-29.98.mp3
│ └── 30.1-32.71.mp3
└── SPK2
└── 38.79-40.85.mp3
```

You need to format your dataset as shown above and place it under `data`. Audio files can have `.mp3`, `.wav`, or `.flac` extensions.

### 2. Split Training and Validation Sets

```bash
python tools/llama/merge_lora.py \
--llama-config dual_ar_2_codebook_medium \
--lora-config r_8_alpha_16 \
--llama-weight checkpoints/text2semantic-sft-medium-v1.1-4k.pth \
--lora-weight results/text2semantic-finetune-medium-lora/checkpoints/step_000000200.ckpt \
--output checkpoints/merged.ckpt
python tools/vqgan/create_train_split.py data
```

This command will create `data/vq_train_filelist.txt` and `data/vq_val_filelist.txt` in the `data` directory, to be used for training and validation respectively.

!!! info
For the VITS format, you can specify a file list using `--filelist xxx.list`.
Please note that the audio files in `filelist` must also be located in the `data` folder.

### 3. Start Training

```bash
python fish_speech/train.py --config-name vqgan_finetune
```

!!! note
You can modify training parameters by editing `fish_speech/configs/vqgan_finetune.yaml`, but in most cases, this won't be necessary.

### 4. Test the Audio

```bash
python tools/vqgan/inference.py -i test.wav --checkpoint-path results/vqgan_finetune/checkpoints/step_000010000.ckpt
```

You can review `fake.wav` to assess the fine-tuning results.

!!! note
You may also try other checkpoints. We suggest using the earliest checkpoint that meets your requirements, as they often perform better on out-of-distribution (OOD) data.
4 changes: 2 additions & 2 deletions docs/en/index.md
@@ -39,13 +39,13 @@ pip3 install torch torchvision torchaudio
# Install fish-speech
pip3 install -e .

#install sox
# (Ubuntu / Debian User) Install sox
apt install libsox-dev
```

## Changelog

- 2024/05/10: Updated Fish-Speech to 1.1 version, importing VITS as the Decoder part.
- 2024/05/10: Updated Fish-Speech to version 1.1, implementing a VITS decoder to reduce WER and improve timbre similarity.
- 2024/04/22: Finished Fish-Speech 1.0 version, significantly modified VQGAN and LLAMA models.
- 2023/12/28: Added `lora` fine-tuning support.
- 2023/12/27: Add `gradient checkpointing`, `causal sampling`, and `flash-attn` support.
42 changes: 38 additions & 4 deletions docs/en/inference.md
@@ -5,10 +5,12 @@ Inference support command line, HTTP API and web UI.
!!! note
Overall, inference consists of several steps:

1. Encode a given 5-10 seconds of voice using VQGAN.
1. Encode a given ~10 seconds of voice using VQGAN.
2. Input the encoded semantic tokens and the corresponding text into the language model as an example.
3. Given a new piece of text, let the model generate the corresponding semantic tokens.
4. Input the generated semantic tokens into VQGAN to decode and generate the corresponding voice.
4. Input the generated semantic tokens into VITS / VQGAN to decode and generate the corresponding voice.

In version 1.1, we recommend using VITS for decoding, as it performs better than VQGAN in both timbre and pronunciation.

## Command Line Inference

@@ -17,6 +19,7 @@ Download the required `vqgan` and `text2semantic` models from our Hugging Face r
```bash
huggingface-cli download fishaudio/fish-speech-1 vq-gan-group-fsq-2x1024.pth --local-dir checkpoints
huggingface-cli download fishaudio/fish-speech-1 text2semantic-sft-medium-v1.1-4k.pth --local-dir checkpoints
huggingface-cli download fishaudio/fish-speech-1 vits_decoder_v1.1.ckpt --local-dir checkpoints
```

### 1. Generate prompt from voice:
@@ -56,6 +59,16 @@ This command will create a `codes_N` file in the working directory, where N is a
If you are using your own fine-tuned model, please be sure to include the `--speaker` parameter to ensure stable pronunciation.

### 3. Generate vocals from semantic tokens:

#### VITS Decoder
```bash
python tools/vits_decoder/inference.py \
--checkpoint-path checkpoints/vits_decoder_v1.1.ckpt \
-i codes_0.npy -r ref.wav \
--text "The text you want to generate"
```
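
As with the test command in the fine-tuning guide, this should write the synthesized audio to a `fake.wav` file in the working directory.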

#### VQGAN Decoder (not recommended)
```bash
python tools/vqgan/inference.py \
-i "codes_0.npy" \
@@ -71,11 +84,20 @@ python -m tools.api \
--listen 0.0.0.0:8000 \
--llama-checkpoint-path "checkpoints/text2semantic-sft-medium-v1.1-4k.pth" \
--llama-config-name dual_ar_2_codebook_medium \
--vqgan-checkpoint-path "checkpoints/vq-gan-group-fsq-2x1024.pth"
--decoder-checkpoint-path "checkpoints/vq-gan-group-fsq-2x1024.pth" \
--decoder-config-name vqgan_pretrain
```

After that, you can view and test the API at http://127.0.0.1:8000/.

!!! info
You should use the following parameters to start the VITS decoder:

```bash
--decoder-config-name vits_decoder_finetune \
--decoder-checkpoint-path "checkpoints/vits_decoder_v1.1.ckpt" # or your own model
```

## WebUI Inference

You can start the WebUI using the following command:
@@ -84,7 +106,19 @@ python -m tools.webui \
python -m tools.webui \
--llama-checkpoint-path "checkpoints/text2semantic-sft-medium-v1.1-4k.pth" \
--llama-config-name dual_ar_2_codebook_medium \
--vqgan-checkpoint-path "checkpoints/vq-gan-group-fsq-2x1024.pth"
--vqgan-checkpoint-path "checkpoints/vq-gan-group-fsq-2x1024.pth" \
--vits-checkpoint-path "checkpoints/vits_decoder_v1.1.ckpt"
```

!!! info
You should use the following parameters to start the VITS decoder:

```bash
--decoder-config-name vits_decoder_finetune \
--decoder-checkpoint-path "checkpoints/vits_decoder_v1.1.ckpt" # or your own model
```

!!! note
You can use Gradio environment variables, such as `GRADIO_SHARE`, `GRADIO_SERVER_PORT`, and `GRADIO_SERVER_NAME`, to configure the WebUI.

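As an illustration, the environment variables can simply be prefixed to the launch command; the values below are examples, and the flags mirror the WebUI command above:

```bash
# Example values only — see the Gradio documentation for the exact semantics.
# Serves the WebUI on all interfaces, port 7861, with a public share link.
GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7861 GRADIO_SHARE=true \
python -m tools.webui \
    --llama-checkpoint-path "checkpoints/text2semantic-sft-medium-v1.1-4k.pth" \
    --llama-config-name dual_ar_2_codebook_medium \
    --vqgan-checkpoint-path "checkpoints/vq-gan-group-fsq-2x1024.pth" \
    --vits-checkpoint-path "checkpoints/vits_decoder_v1.1.ckpt"
```
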
Enjoy!