ERROR: Failed building wheel for llama-cpp-python for SYCL installation on Windows #1614
I think @abetlen should precompile a Windows version as he did on Linux. Windows is such a mess when compiling Python modules that use C++ code.
Hi @abetlen, can I assist you in any way to make this possible? Thank you, Sunil.
I'm running into the same issue, have you solved this?
Hi @kylo5aby, unfortunately no.
The last version I was able to build a SYCL wheel for was v0.2.44.
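For anyone trying to reproduce that last known-good build: a minimal sketch of a Windows SYCL source build pinned to v0.2.44, run from an Intel oneAPI command prompt. The installer path, compiler choice (`icx`), and CMake flags here are assumptions based on the llama.cpp SYCL documentation; older releases such as v0.2.44 used the `LLAMA_SYCL` flag, which upstream later renamed to `GGML_SYCL`.

```shell
:: Load the oneAPI environment (path is an assumption; adjust to your install)
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64

:: Tell scikit-build-core to compile the SYCL backend with Intel's icx compiler.
:: For newer releases, replace LLAMA_SYCL with GGML_SYCL.
set CMAKE_GENERATOR=Ninja
set CMAKE_ARGS=-DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx

:: Force a from-source rebuild of the pinned version
pip install llama-cpp-python==0.2.44 --force-reinstall --no-cache-dir --verbose
```

This requires the oneAPI Base Toolkit, Visual Studio build tools, and Ninja to already be installed, so treat it as a starting point rather than a verified recipe.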
I have run into the same issue today.
Hi! I also ran into this issue a few days ago. Is there a workaround for it? Out of desperation I already tried throwing the SYCL-flavored libraries from llama.cpp and their dependencies into the lib folder of the venv, but without success.
I was unable to proceed with this one. However, I went back and followed the instructions to build llama.cpp locally, and now I am able to run LLMs on my Arc A770 and it is running great.
Hi and thanks for the reply! Do you use "plain" llama.cpp for inferencing? For what I'm doing I need it integrated into a Python project, so I have used llama-cpp-python so far with CPU-only inferencing. But since I also need it working on Arc GPUs, I was wondering if someone has already managed to do so. (Btw. I'm building on Windows 11 with VS2022, CMake, and the oneAPI Toolkit installed.)
I am a noob, so I am just learning this stuff. I am running llama.cpp with SYCL on an A770 with GGUF models and so far it is running great. Sorry I can't help you with your Python question; I could not build it when I tried.
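For reference, the plain llama.cpp route described above roughly follows the steps in llama.cpp's SYCL guide. This is a sketch, not a verified recipe: the compiler pairing (`cl` for C, `icx` for C++), the binary names, and the `model.gguf` path are assumptions taken from the upstream documentation, and the oneAPI environment must be loaded first.

```shell
:: Run from an Intel oneAPI command prompt (or call setvars.bat first)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

:: Configure with the SYCL backend enabled, per docs/backend/SYCL.md upstream
cmake -B build -G Ninja -DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx
cmake --build build --config Release

:: Verify the Arc GPU is visible to SYCL, then offload layers during inference
:: (-ngl 33 and model.gguf are placeholders; tune for your model)
build\bin\llama-ls-sycl-device
build\bin\llama-cli -m model.gguf -ngl 33 -p "Hello"
```

If `llama-cli` reports layers being offloaded to the SYCL device, the build is working; that at least confirms the toolchain is sound before retrying the llama-cpp-python wheel.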
Why does SYCL with an Intel GPU or iGPU run some models on llama.cpp but not on llama-cpp-python?
Hardware: CPU Intel 14900K, GPU Intel arc a770
Software: Win 11 Pro