
[Bug] Cannot use xtts on CPU #2980

Closed

violet17 opened this issue Sep 21, 2023 · 13 comments
Labels
bug Something isn't working

Comments

@violet17

violet17 commented Sep 21, 2023

Describe the bug

I use the following script to generate speech with xtts, but find that xtts cannot be loaded on CPU.


 > Using model: xtts
Traceback (most recent call last):
  File "/home/a/crystal/llm/test_xtts_cpu.py", line 6, in <module>
    tts_model = TTS(model_path="./xtts_v1/", config_path="./xtts_v1/config.json", progress_bar=True, gpu=False)
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/api.py", line 86, in __init__
    self.load_tts_model_by_path(
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/api.py", line 211, in load_tts_model_by_path
    self.synthesizer = Synthesizer(
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/utils/synthesizer.py", line 93, in __init__
    self._load_tts(tts_checkpoint, tts_config_path, use_cuda)
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/utils/synthesizer.py", line 192, in _load_tts
    self.tts_model.load_checkpoint(self.tts_config, tts_checkpoint, eval=True)
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/tts/models/xtts.py", line 645, in load_checkpoint
    self.load_state_dict(load_fsspec(model_path)["model"], strict=strict)
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/utils/io.py", line 86, in load_fsspec
    return torch.load(f, map_location=map_location, **kwargs)
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/torch/serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/torch/serialization.py", line 1172, in _load
    result = unpickler.load()
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/torch/serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/torch/serialization.py", line 1116, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/torch/serialization.py", line 217, in default_restore_location
    result = fn(storage, location)
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/torch/serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/torch/serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Then I set map_location="cpu" in /home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/utils/io.py.
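For reference, the essence of that workaround is just forcing torch.load to map CUDA-saved tensors onto the CPU. A minimal sketch (load_fsspec is TTS's thin wrapper around torch.load; the checkpoint path here is a placeholder):

import torch

# Map tensors saved from a CUDA device onto the CPU when no GPU is available
map_location = None if torch.cuda.is_available() else torch.device("cpu")
checkpoint = torch.load("xtts_v1/model.pth", map_location=map_location)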
After that I get:

Traceback (most recent call last):
  File "/home/a/crystal/llm/test_xtts_cpu.py", line 11, in <module>
    tts_model.tts_to_file(text.replace("\n",",").replace(" ","")+"。", language="zh-cn", file_path="audio_out.wav")
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/api.py", line 384, in tts_to_file
    self._check_arguments(speaker=speaker, language=language, speaker_wav=speaker_wav, **kwargs)
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/api.py", line 237, in _check_arguments
    if self.is_multi_lingual and language is None:
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/api.py", line 109, in is_multi_lingual
    if "xtts" in self.model_name:
TypeError: argument of type 'NoneType' is not iterable

Then I added

if "xtts" in self.model_path:
    return True

to /home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/api.py at line 108.

Running the original script again, I then get:

Traceback (most recent call last):
  File "/home/a/crystal/llm/test_xtts_cpu.py", line 11, in <module>
    tts_model.tts_to_file(text.replace("\n",",").replace(" ","")+"。", language="zh-cn", file_path="audio_out.wav")
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/api.py", line 386, in tts_to_file
    self._check_arguments(speaker=speaker, language=language, speaker_wav=speaker_wav, **kwargs)
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/TTS/api.py", line 239, in _check_arguments
    if self.is_multi_lingual and language is None:
  File "/home/a/miniconda3/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'TTS' object has no attribute 'is_multi_lingual'
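(For what it's worth, the third traceback is likely explained by TTS subclassing torch.nn.Module: when a property getter raises AttributeError internally, presumably because self.model_path is not set on the instance, Python falls back to nn.Module.__getattr__, which then re-reports the property itself as missing.) A hypothetical, more defensive version of the patch, with attribute names taken from the tracebacks above:

@property
def is_multi_lingual(self):
    # Guard against model_name being None when the model is loaded by path;
    # getattr avoids tripping nn.Module.__getattr__ if either attribute is absent
    name = getattr(self, "model_name", None) or ""
    path = getattr(self, "model_path", None) or ""
    return "xtts" in name or "xtts" in path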

Can you help to fix this issue? Thanks a lot!!!

To Reproduce

from TTS.api import TTS
import time
text = "游客可以登上观光球,欣赏上海全景和周边美景;或者在空中庭院中欣赏美景,感受大自然的气息;或者在旋转餐厅中品尝美食,享受美味与旋转的乐趣"

tts_model = TTS(model_path="./xtts_v1/", config_path="./xtts_v1/config.json", progress_bar=True, gpu=False)
#tts_model = TTS(model_name="tts_models/multilingual/multi-dataset/xtts_v1", progress_bar=True, gpu=False)
#print("tts.is_multi_lingual", tts_model.is_multi_lingual)
print("tts_model: ", tts_model)
t1 = time.time()
tts_model.tts_to_file(text.replace("\n",",").replace(" ","")+"。", language="zh-cn", file_path="audio_out.wav")
print("cost time: ", time.time()-t1)

Expected behavior

No response

Logs

No response

Environment

{
    "CUDA": {
        "GPU": [],
        "available": false,
        "version": null
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "2.0.1a0+cxx11.abi",
        "TTS": "0.17.4",
        "numpy": "1.22.0"
    },
    "System": {
        "OS": "Linux",
        "architecture": [
            "64bit",
            "ELF"
        ],
        "processor": "x86_64",
        "python": "3.9.18",
        "version": "#32~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Aug 18 10:40:13 UTC 2"
    }
}

Additional context

No response

@violet17 added the bug label on Sep 21, 2023
@rouseabout

I observe the same error running tts from the command line:

$ tts --model_name tts_models/multilingual/multi-dataset/xtts_v1 --text "Hello World" --use_cuda False
 > tts_models/multilingual/multi-dataset/xtts_v1 is already downloaded.
 > Using model: xtts
Traceback (most recent call last):
  File "/home/user/env/bin/tts", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/user/env/lib/python3.11/site-packages/TTS/bin/synthesize.py", line 401, in main
    synthesizer = Synthesizer(
                  ^^^^^^^^^^^^
  File "/home/user/env/lib/python3.11/site-packages/TTS/utils/synthesizer.py", line 109, in __init__
    self._load_tts_from_dir(model_dir, use_cuda)
  File "/home/user/env/lib/python3.11/site-packages/TTS/utils/synthesizer.py", line 164, in _load_tts_from_dir
    self.tts_model.load_checkpoint(config, checkpoint_dir=model_dir, eval=True)
  File "/home/user/env/lib/python3.11/site-packages/TTS/tts/models/xtts.py", line 645, in load_checkpoint
    self.load_state_dict(load_fsspec(model_path)["model"], strict=strict)
                         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/env/lib/python3.11/site-packages/TTS/utils/io.py", line 86, in load_fsspec
    return torch.load(f, map_location=map_location, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/env/lib/python3.11/site-packages/torch/serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/env/lib/python3.11/site-packages/torch/serialization.py", line 1172, in _load
    result = unpickler.load()
             ^^^^^^^^^^^^^^^^
  File "/home/user/env/lib/python3.11/site-packages/torch/serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/env/lib/python3.11/site-packages/torch/serialization.py", line 1116, in load_tensor
    wrap_storage=restore_location(storage, location),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/env/lib/python3.11/site-packages/torch/serialization.py", line 217, in default_restore_location
    result = fn(storage, location)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/env/lib/python3.11/site-packages/torch/serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/env/lib/python3.11/site-packages/torch/serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
$ pip list | grep TTS
TTS                       0.17.4

@Lenos500

(quoting @rouseabout's command-line traceback above)

Did you find any fix?

@Lenos500

(quoting @violet17's original report above)

Did you find any fix?

@audioclassify

The same issue was seen on Ubuntu 22.04.3 LTS running on a 5th-gen Intel i7 with no GPU (error: 'Attempting to deserialize object on a CUDA device'), using the Python example shown at https://github.com/coqui-ai/TTS/#running-a-multi-speaker-and-multi-lingual-model in a venv.

python3 --version
Python 3.10.12

pip3 --version
pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)

pip3 install TTS # this would not run with CPU only

However, using https://github.com/coqui-ai/TTS/pkgs/container/tts-cpu with docker runs fine:

sudo docker pull ghcr.io/coqui-ai/tts-cpu:da8b6bbce1040ce8a4a58b959faec14e969294a0

This page explains how to start tts-cpu:

https://tts.readthedocs.io/en/latest/docker_images.html#id4
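For example, the steps there look roughly like the following (the vits model name is just the docs' example; substitute the xtts model as needed):

sudo docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models   # list the available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits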

After starting the docker image and then the server.py app, the TTS server can be seen at:

http://[::1]:5002/

@violet17
Author

@Lenos500 hi, I think you can try this commit

@Lenos500

@Lenos500 hi, I think you can try this commit

I already did but with no luck

@Lenos500

(quoting @audioclassify's comment above)
So does XTTS run fine on Linux systems with docker installed and tts-cpu? Can you share the exact commands or script you used to run the model successfully using tts-cpu?

@erogol
Member

erogol commented Sep 25, 2023

You need a reference wav file. Please see the docs.
https://tts.readthedocs.io/en/latest/models/xtts.html
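For the failing command-line invocation above, that presumably means supplying a reference clip and a language id, along the lines of (paths are placeholders):

tts --model_name tts_models/multilingual/multi-dataset/xtts_v1 \
    --text "Hello World" \
    --speaker_wav /path/to/reference.wav \
    --language_idx en \
    --use_cuda False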

@Lenos500

You need a reference wav file. Please see the docs. https://tts.readthedocs.io/en/latest/models/xtts.html

Explain more

@erogol
Member

erogol commented Sep 28, 2023

read more

@Lenos500

read more

Actually, it now loads the model on CPU, but it hits a VoiceBpeTokenizer attribute error when trying to clone voices.

@violet17
Author

violet17 commented Oct 9, 2023

You need a reference wav file. Please see the docs. https://tts.readthedocs.io/en/latest/models/xtts.html

Yes, I used the following call with speaker_wav to generate audio:
tts_model.tts_to_file(text.replace("\n",",").replace(" ","")+"。", speaker_wav="0.wav", language="zh-cn", file_path="audio_out.wav")
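Spelled out as a complete CPU script (a sketch; the model directory and the 0.wav reference clip are placeholders from my setup):

from TTS.api import TTS

text = "游客可以登上观光球,欣赏上海全景和周边美景。"

# gpu=False keeps inference on the CPU (on TTS 0.17.4 this also needs the
# map_location fix described above)
tts_model = TTS(model_path="./xtts_v1/", config_path="./xtts_v1/config.json",
                progress_bar=True, gpu=False)
tts_model.tts_to_file(
    text,
    speaker_wav="0.wav",        # reference voice to clone
    language="zh-cn",
    file_path="audio_out.wav",
)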

Thanks.

@Innomen

Innomen commented Nov 28, 2024

read more

This behavior should not be encouraged. If you don't wanna answer, don't. Why sabotage the potential for others to answer?
