
Attempting to continue Finetuning #72

Closed

IIEleven11 opened this issue Nov 23, 2023 · 8 comments

Comments

IIEleven11 commented Nov 23, 2023

Hi, awesome stuff so far, thank you! I am running into a speed bump, though, while using your train_finetune.py script on my own dataset. I made it to epoch 109 and the model sounds pretty good, and I was hoping to continue tuning, but I cannot seem to get past epoch 110, step ~98/99. I started with a batch size of 10 and have tried lowering it a few times; I am currently working with a batch size of 4 in case there are memory issues.

Within my config I set "pretrained_model: Models/LibriTTS/epoch_2nd_00109.pth", made sure "second_stage_load_pretrained: true" was set, and changed "load_only_params" from true to false, since the "load_checkpoint" function in "train_finetune.py" otherwise loads only the model parameters and ignores the epoch and iteration numbers saved in the checkpoint.
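
For context, resuming needs more than the weights; here is a minimal, self-contained sketch of the difference (the checkpoint keys "net", "epoch", and "iters" are assumptions modeled on StyleTTS2-style checkpoints, not the script's exact layout):

```python
import torch
import torch.nn as nn

# Minimal sketch, not the actual train_finetune.py code: loading only the
# parameters restores the weights but loses the training counters.
net = nn.Linear(10, 10)
torch.save({"net": net.state_dict(), "epoch": 109, "iters": 98}, "/tmp/ckpt.pth")

ckpt = torch.load("/tmp/ckpt.pth", map_location="cpu")
net.load_state_dict(ckpt["net"])    # what "load only params" gives you
start_epoch = ckpt.get("epoch", 0)  # counters needed to actually resume
iters = ckpt.get("iters", 0)
print(start_epoch, iters)           # -> 109 98
```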

I am using a single RTX A6000. Any guidance would be greatly appreciated, thank you! Here is the complete traceback followed by my config:

Traceback (most recent call last):
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/train_finetune.py", line 714, in <module>
    main()
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/train_finetune.py", line 509, in main
    loss_gen_lm.backward()
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 1, 1]] is at version 3; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
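
For anyone else hitting this, the anomaly-detection switch the hint mentions is a single call; a sketch, with the placement (near the top of the training script) being a suggestion:

```python
import torch

# Makes backward() report the forward op that produced the failing gradient.
# Noticeably slows training, so enable it only while debugging.
torch.autograd.set_detect_anomaly(True)
```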

After adding "torch.autograd.set_detect_anomaly(True)"

Traceback (most recent call last):
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/train_finetune.py", line 716, in <module>
    main()
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/train_finetune.py", line 511, in main
    loss_gen_lm.backward()
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 1, 1]] is at version 3; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

config_ft.yml:

ASR_config: Utils/ASR/config.yml
ASR_path: Utils/ASR/epoch_00080.pth
F0_path: Utils/JDC/bst.t7
PLBERT_dir: Utils/PLBERT/
batch_size: 4
data_params:
  OOD_data: /home/Ubuntu/WORK/StyleTTS2/Data/OOD_texts.txt
  min_length: 50
  root_path: /home/Ubuntu/WORK/StyleTTS2/Data/wavs
  train_data: /home/Ubuntu/WORK/StyleTTS2/Data/train_list.txt
  val_data: /home/Ubuntu/WORK/StyleTTS2/Data/val_list.txt
device: cuda
epochs: 200
load_only_params: false
log_dir: Models/LJSpeech
log_interval: 10
loss_params:
  diff_epoch: 10
  joint_epoch: 110
  lambda_F0: 1.0
  lambda_ce: 20.0
  lambda_diff: 1.0
  lambda_dur: 1.0
  lambda_gen: 1.0
  lambda_mel: 5.0
  lambda_mono: 1.0
  lambda_norm: 1.0
  lambda_s2s: 1.0
  lambda_slm: 1.0
  lambda_sty: 1.0
max_len: 100
model_params:
  decoder:
    resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
    resblock_kernel_sizes: [3, 7, 11]
    type: hifigan
    upsample_initial_channel: 512
    upsample_kernel_sizes: [20, 10, 6, 4]
    upsample_rates: [10, 5, 3, 2]
  diffusion:
    dist: {estimate_sigma_data: true, mean: -3.0, sigma_data: 0.2, std: 1.0}
    embedding_mask_proba: 0.1
    transformer: {head_features: 64, multiplier: 2, num_heads: 8, num_layers: 3}
  dim_in: 64
  dropout: 0.2
  hidden_dim: 512
  max_conv_dim: 512
  max_dur: 50
  multispeaker: true
  n_layer: 3
  n_mels: 80
  n_token: 178
  slm: {hidden: 768, initial_channel: 64, model: microsoft/wavlm-base-plus, nlayers: 13, sr: 16000}
  style_dim: 128
optimizer_params: {bert_lr: 1.0e-05, ft_lr: 0.0001, lr: 0.0001}
preprocess_params:
  spect_params: {hop_length: 300, n_fft: 2048, win_length: 1200}
  sr: 24000
pretrained_model: Models/LibriTTS/epoch_2nd_00109.pth
save_freq: 20
second_stage_load_pretrained: true
slmadv_params: {batch_percentage: 0.5, iter: 10, max_len: 500, min_len: 400, scale: 0.01, sig: 1.5, thresh: 5}
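
Worth noting when reading this config: the failure starts at epoch 110, which coincides with joint_epoch: 110, presumably the point where the joint/SLM-adversarial stage kicks in. A quick way to inspect the parsed values (a sketch assuming PyYAML and config_ft.yml in the working directory):

```python
import yaml

with open("config_ft.yml") as f:
    cfg = yaml.safe_load(f)

print(cfg["loss_params"]["joint_epoch"])  # 110 -- matches the failing epoch
print(cfg["epochs"], cfg["batch_size"])   # 200, 4
```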

devidw (Contributor) commented Nov 23, 2023

I ran into the same issue during fine-tuning, around epoch 30, on a 36-minute dataset using an H100:

Traceback (most recent call last):
  File "./train_finetune.py", line 714, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "./train_finetune.py", line 509, in main
    loss_gen_lm.backward()
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 1, 1]] is at version 3; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
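
For readers unfamiliar with this error class: autograd keeps a version counter per tensor, and an in-place write to a tensor that backward() still needs trips the check. A minimal sketch that reproduces it, deliberately unrelated to the StyleTTS2 code:

```python
import torch

x = torch.randn(1, 1, 1, requires_grad=True)
y = x * 2
loss = (y ** 2).sum()  # pow saves y for its backward pass
y.mul_(3)              # in-place op bumps y's version counter
loss.backward()        # RuntimeError: ... modified by an inplace operation
```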

kmn1024 (Contributor) commented Nov 24, 2023

I had the same issue here: #73

The issue appears to be here: https://github.com/yl4579/StyleTTS2/blob/main/train_finetune.py#L504. Changing it to d_loss_slm.backward(retain_graph=True) seems to fix the problem.

retain_graph=True is necessary because the graph is freed after a backward() pass to save memory, but it is still needed for the subsequent loss_gen_lm.backward() call.
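
A self-contained illustration of that interaction, with stand-in losses rather than the StyleTTS2 graph:

```python
import torch

x = torch.randn(4, requires_grad=True)
shared = x * 2                      # intermediate node used by both losses

d_loss = (shared ** 2).sum()        # stands in for d_loss_slm
gen_loss = (shared + 1).sum()       # stands in for loss_gen_lm

d_loss.backward(retain_graph=True)  # keep the shared graph alive
gen_loss.backward()                 # would fail without retain_graph above
```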

IIEleven11 (Author) commented

I will try it now

IIEleven11 (Author) commented

Almost made it. It appears this is possibly because I am using a single GPU; the traceback points to lines 34-39 in train_finetune.py. I did move back to a batch size of 10, then tried sizes 6 and 8, which threw the original error.

class MyDataParallel(torch.nn.DataParallel):
    def __getattr__(self, name):
        try:
            return super().__getattr__(name)
        except AttributeError:
            return getattr(self.module, name)

Traceback (most recent call last):
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/train_finetune.py", line 714, in <module>
    main()
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/train_finetune.py", line 487, in main
    slm_out = slmadv(i,
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/Modules/slmadv.py", line 138, in forward
    y_pred = self.model.decoder(en, F0_fake, N_fake, sp[:, :128])
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 183, in forward
    return self.module(*inputs[0], **module_kwargs[0])
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/Modules/hifigan.py", line 474, in forward
    x = self.generator(x, s, F0_curve)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/Modules/hifigan.py", line 339, in forward
    xs = self.resblocks[i*self.num_kernels+j](x, s)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/Modules/hifigan.py", line 70, in forward
    xt = n2(xt, s)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/Modules/hifigan.py", line 24, in forward
    return (1 + gamma) * self.norm(x) + beta
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/instancenorm.py", line 87, in forward
    return self._apply_instance_norm(input)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/modules/instancenorm.py", line 36, in _apply_instance_norm
    return F.instance_norm(
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2523, in instance_norm
    return torch.instance_norm(
RuntimeError: NVML_SUCCESS == DriverAPI::get()->nvmlInit_v2_() INTERNAL ASSERT FAILED at "../c10/cuda/CUDACachingAllocator.cpp":1123, please report a bug to PyTorch.

Traceback with batch sizes 8 and 4:

Traceback (most recent call last):
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/train_finetune.py", line 714, in <module>
    main()
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/train_finetune.py", line 509, in main
    loss_gen_lm.backward()
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/home/Ubuntu/WORK/StyleTTS2/Colab/venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 1, 1]] is at version 3; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

IIEleven11 (Author) commented

The problem might be on my end; reinstalling the CUDA toolkit now.

kmn1024 (Contributor) commented Nov 24, 2023

Sorry, one more change is needed to get this to work! See #74.

IIEleven11 (Author) commented

That worked, thank you!

yl4579 added a commit that referenced this issue Nov 24, 2023
yl4579 (Owner) commented Nov 24, 2023

Should be solved now.

@yl4579 yl4579 closed this as completed Nov 24, 2023
Akito-UzukiP added a commit to Akito-UzukiP/StyleTTS2 that referenced this issue Jan 13, 2024
nawed2611 pushed a commit to team-listnr/StyleTTS2 that referenced this issue Feb 8, 2024