
Crash when launching new model #526

Closed
5 tasks done
ApatheticWrath opened this issue Sep 12, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@ApatheticWrath

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please submit issues in English, or they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Cloud or Self Hosted

Self Hosted (Source)

Steps to reproduce

I click start.bat and go to the inference configuration.
For Decoder Model Path I select:
checkpoints\fish-speech-1.4\firefly-gan-vq-fsq-8x1024-21hz-generator.pth
Decoder Model Config:
firefly_gan_vq
LLAMA Model Path:
checkpoints\fish-speech-1.4
Compile model:
No

Then I click Open Inference Server and I get this:

"HF_ENDPOINT: https://huggingface.co"
"NO_PROXY: "
model.pth already exists, skipping download.
README.md already exists, skipping download.
special_tokens_map.json already exists, skipping download.
tokenizer_config.json already exists, skipping download.
tokenizer.json already exists, skipping download.
config.json already exists, skipping download.
firefly-gan-vq-fsq-4x1024-42hz-generator.pth already exists, skipping download.
ffmpeg.exe already exists, skipping download.
ffprobe.exe already exists, skipping download.
asr-label-win-x64.exe already exists, skipping download.
Debug: flags =

Next launch the page...
['', 'C:\\Users\\J__mo\\Downloads\\fish-speech-1.2.1\\fish-speech-1.2.1\\fish_speech\\webui', 'C:\\Users\\J__mo\\Downloads\\fish-speech-1.2.1\\fish-speech-1.2.1', 'C:\\Users\\J__mo\\Downloads\\fish-speech-1.2.1\\fish-speech-1.2.1\\fishenv\\env\\python310.zip', 'C:\\Users\\J__mo\\Downloads\\fish-speech-1.2.1\\fish-speech-1.2.1\\fishenv\\env\\DLLs', 'C:\\Users\\J__mo\\Downloads\\fish-speech-1.2.1\\fish-speech-1.2.1\\fishenv\\env\\lib', 'C:\\Users\\J__mo\\Downloads\\fish-speech-1.2.1\\fish-speech-1.2.1\\fishenv\\env', 'C:\\Users\\J__mo\\Downloads\\fish-speech-1.2.1\\fish-speech-1.2.1\\fishenv\\env\\lib\\site-packages', '__editable__.fish_speech-0.1.0.finder.__path_hook__']
You are in  C:\Users\J__mo\Downloads\fish-speech-1.2.1\fish-speech-1.2.1
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
2024-09-11 20:50:33.852 | INFO     | __main__:clean_infer_cache:146 - C:\Users\J__mo\AppData\Local\Temp\gradio was not found
2024-09-11 20:50:49.634 | INFO     | __main__:<module>:584 - Loading Llama model...
2024-09-11 20:50:55.998 | INFO     | tools.llama.generate:load_model:352 - Restored model from checkpoint
2024-09-11 20:50:55.999 | INFO     | tools.llama.generate:load_model:356 - Using DualARTransformer
2024-09-11 20:50:56.001 | INFO     | __main__:<module>:591 - Llama model loaded, loading VQ-GAN model...
Traceback (most recent call last):
  File "C:\Users\J__mo\Downloads\fish-speech-1.2.1\fish-speech-1.2.1\tools\webui.py", line 593, in <module>
    decoder_model = load_decoder_model(
  File "C:\Users\J__mo\Downloads\fish-speech-1.2.1\fish-speech-1.2.1\tools\vqgan\inference.py", line 41, in load_model
    result = model.load_state_dict(state_dict, strict=False)
  File "C:\Users\J__mo\Downloads\fish-speech-1.2.1\fish-speech-1.2.1\fishenv\env\lib\site-packages\torch\nn\modules\module.py", line 2191, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for FireflyArchitecture:
        size mismatch for quantizer.residual_fsq.rvqs.0.project_in.weight: copying a param with shape torch.Size([4, 64]) from checkpoint, the shape in current model is torch.Size([4, 128]).
        size mismatch for quantizer.residual_fsq.rvqs.0.project_out.weight: copying a param with shape torch.Size([64, 4]) from checkpoint, the shape in current model is torch.Size([128, 4]).
        size mismatch for quantizer.residual_fsq.rvqs.0.project_out.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for quantizer.residual_fsq.rvqs.1.project_in.weight: copying a param with shape torch.Size([4, 64]) from checkpoint, the shape in current model is torch.Size([4, 128]).
        size mismatch for quantizer.residual_fsq.rvqs.1.project_out.weight: copying a param with shape torch.Size([64, 4]) from checkpoint, the shape in current model is torch.Size([128, 4]).
        size mismatch for quantizer.residual_fsq.rvqs.1.project_out.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for quantizer.residual_fsq.rvqs.2.project_in.weight: copying a param with shape torch.Size([4, 64]) from checkpoint, the shape in current model is torch.Size([4, 128]).
        size mismatch for quantizer.residual_fsq.rvqs.2.project_out.weight: copying a param with shape torch.Size([64, 4]) from checkpoint, the shape in current model is torch.Size([128, 4]).
        size mismatch for quantizer.residual_fsq.rvqs.2.project_out.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for quantizer.residual_fsq.rvqs.3.project_in.weight: copying a param with shape torch.Size([4, 64]) from checkpoint, the shape in current model is torch.Size([4, 128]).
        size mismatch for quantizer.residual_fsq.rvqs.3.project_out.weight: copying a param with shape torch.Size([64, 4]) from checkpoint, the shape in current model is torch.Size([128, 4]).
        size mismatch for quantizer.residual_fsq.rvqs.3.project_out.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).

The 1.2 model loads and works fine; only the 1.4 model fails.
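The traceback above boils down to a strict shape check: every parameter in the checkpoint must match the shape of the corresponding parameter in the model built from the local source tree. A minimal sketch of that check, with no torch dependency (the parameter name and the 64-vs-128 widths are taken from the error output; the function name is illustrative, not part of fish-speech):

```python
def find_shape_mismatches(checkpoint_shapes, model_shapes):
    """Return (name, ckpt_shape, model_shape) for params whose shapes differ,
    mimicking the size-mismatch errors load_state_dict raises."""
    mismatches = []
    for name, ckpt_shape in checkpoint_shapes.items():
        model_shape = model_shapes.get(name)
        if model_shape is not None and model_shape != ckpt_shape:
            mismatches.append((name, ckpt_shape, model_shape))
    return mismatches

# Shapes seen in the traceback: the 1.4 checkpoint stores a 4 -> 64 projection,
# while the model built from the 1.2.1 source expects 4 -> 128.
ckpt = {"quantizer.residual_fsq.rvqs.0.project_out.weight": (64, 4)}
model = {"quantizer.residual_fsq.rvqs.0.project_out.weight": (128, 4)}

print(find_shape_mismatches(ckpt, model))
```

With matching source and checkpoint the same check returns an empty list, which is why updating the repo (below) resolves the crash.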

✔️ Expected Behavior

For the inference server to launch with the new model.

❌ Actual Behavior

It crashed.

@ApatheticWrath ApatheticWrath added the bug Something isn't working label Sep 12, 2024
@PoTaTo-Mika
Contributor

PoTaTo-Mika commented Sep 12, 2024

Try the command with your virtual environment activated:
python tools/webui.py
Please run the command from the fish-speech folder.
Also, check your code version: you can't run the V1.4 model with V1.2 source code.
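Since the mismatched projection width is what distinguishes the two release lines in this thread, a quick heuristic can tell which generation a decoder checkpoint belongs to before loading it. A hedged sketch, assuming only the shapes observed above (64 for 1.4, 128 for 1.2.x); the mapping and function are illustrative, not an official fish-speech API:

```python
# Projection output widths observed in this issue, per release line.
KNOWN_PROJECT_DIMS = {64: "1.4", 128: "1.2.x"}

def guess_release(checkpoint_shapes):
    """Guess the fish-speech release line from quantizer projection shapes.

    checkpoint_shapes: mapping of parameter name -> shape tuple, e.g. as
    collected from a loaded state dict.
    """
    for name, shape in checkpoint_shapes.items():
        if name.endswith("project_out.weight"):
            out_dim = shape[0]
            return KNOWN_PROJECT_DIMS.get(out_dim, f"unknown (dim {out_dim})")
    return "no quantizer projection found"

print(guess_release({"quantizer.residual_fsq.rvqs.0.project_out.weight": (64, 4)}))
```

If the guessed line doesn't match the source checkout, update the repo (or pick the matching checkpoint) before launching the inference server.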

@ApatheticWrath
Author

I just grabbed the latest release without noticing the date, so I wasn't using the latest repo code. It works now. Thank you.
