I am about to release v2 of AllTalk, at least as a BETA (#237 and #211).
V2 will support multiple TTS engines, each with varying capabilities and GPU support.
Because of how I have broken out the individual TTS engine loaders, it should be far easier for people to set up/code additional GPU support (where possible). Since I don't have an Intel GPU, AMD GPU, or Mac M-series computer to test with, I am putting out a request for anyone with a bit of coding experience (and the right type of system) to take a shot at adding support (where possible).
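As a starting point for anyone taking this on, here is a minimal, untested sketch of the kind of device-selection logic an engine loader could use. The helper name `pick_device` and the fallback order are my own assumptions for illustration, not AllTalk's actual API.

```python
# Hypothetical device-selection helper for a TTS engine loader.
# The function name and fallback order are assumptions, not AllTalk code.
import torch


def pick_device() -> torch.device:
    """Prefer CUDA, then Intel XPU (via IPEX), then Apple MPS, else CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    try:
        # Importing IPEX registers the "xpu" backend on older PyTorch builds;
        # newer PyTorch releases also expose torch.xpu natively.
        import intel_extension_for_pytorch as ipex  # noqa: F401
        if hasattr(torch, "xpu") and torch.xpu.is_available():
            return torch.device("xpu")
    except ImportError:
        pass
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")


device = pick_device()
# model = model.to(device)  # each engine loader would move its model here
```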
It would be nice to have this work with Intel Arc GPUs (A750, A770) using IPEX or Vulkan instead of CUDA; see the rough IPEX sketch after the links below.
Links to stuff related to this:
https://github.com/intel/intel-extension-for-deepspeed
https://github.com/intel/intel-extension-for-pytorch
https://www.intel.com/content/www/us/en/developer/tools/oneapi/overview.html
https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html
https://www.lunarg.com/vulkan-sdk/
https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/webui_quickstart.html
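For the Arc/IPEX route specifically, the rough pattern (based on the intel-extension-for-pytorch documentation, untested here since I have no Intel hardware) would look something like this. The small `nn.Sequential` model is just a placeholder standing in for whichever engine's model is being loaded.

```python
# Rough IPEX/XPU usage pattern, untested on real Arc hardware.
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Placeholder module standing in for a real TTS model; not AllTalk code.
model = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 80))
model.eval()

model = model.to("xpu")        # move weights to the Intel GPU
model = ipex.optimize(model)   # apply IPEX kernel/graph optimisations

with torch.no_grad():
    # Inputs also need to live on the "xpu" device before inference.
    x = torch.randn(1, 80, device="xpu")
    out = model(x)
```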