feat(sycl): Add support for Intel GPUs with sycl (#1647) #1660
Conversation
Dockerfile (outdated)
# oneapi requirements
RUN if [ "${BUILD_TYPE}" = "sycl_f16" ] || [ "${BUILD_TYPE}" = "sycl_f32" ]; then \
    wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/163da6e4-56eb-4948-aba3-debcec61c064/l_BaseKit_p_2024.0.1.46_offline.sh && \
    sh ./l_BaseKit_p_2024.0.1.46_offline.sh \
I'm expecting some fun here as this is going to be interactive
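For reference, a hedged sketch of running the Base Kit offline installer unattended; the `-a --silent --eula accept` flags follow Intel's documented silent-install options and are an assumption here, not something taken from this diff:

```sh
# Assumed non-interactive invocation of the oneAPI Base Kit offline installer.
# "-a" forwards the remaining arguments to the installer; "--silent --eula accept"
# skips the interactive TUI. Verify the flags against Intel's documentation.
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/163da6e4-56eb-4948-aba3-debcec61c064/l_BaseKit_p_2024.0.1.46_offline.sh
sh ./l_BaseKit_p_2024.0.1.46_offline.sh -a --silent --eula accept
rm ./l_BaseKit_p_2024.0.1.46_offline.sh
```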
it fails here:
friendly ping @abhilash1910 @NeoZhangJianyu, any chance you can help here? Any pointers would be appreciated, thanks! It looks like something related to linking, but I thought the installation steps should be general enough to apply to all the binaries in the example folder?
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
nevermind, it turns out I'm somehow not able to select the GPU, and that's what happens (the error, even if cryptic, is not about linking issues at build time, but rather about ops not supported by the device that was selected)
Could you share the whole log?
it looks like somehow I cannot see the GPU at all; I was confused, and that's why I picked the first device (listed as acc):
it must be related to my drivers; likely the ones I have on my openSUSE box don't work out of the box with sycl
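For reference, a quick way to check what the runtime can actually see (`sycl-ls` ships with the oneAPI Base Kit; the exact output format may vary):

```sh
# Confirm the DRI device nodes are present (and exposed to the container, if running in one).
ls -l /dev/dri

# List the devices visible to the SYCL runtime; an Intel GPU should show up as a
# "gpu" entry (level_zero/opencl) rather than only "acc"/"cpu" entries.
sycl-ls
```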
ok, going to follow up with images built from master. I think the changes here are correct, as I see sycl output all over; the problem is that I cannot select my GPU device
You can choose the GPU via the environment variable GGML_SYCL_DEVICE.
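As an illustrative sketch, setting that variable when running the container might look like this (device index 0 is an assumption; the image tag and model follow the v2.8.0 release-notes example quoted later in this thread):

```sh
# Hypothetical example: select SYCL device 0 via GGML_SYCL_DEVICE.
# The device index, image tag, and model name are only illustrative.
docker run -e DEBUG=true -e GGML_SYCL_DEVICE=0 -ti \
  -v $PWD/models:/build/models -p 8080:8080 \
  -v /dev/dri:/dev/dri --rm \
  quay.io/go-skynet/local-ai:master-sycl-f32-ffmpeg-core phi-2
```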
Correct, I've tried that, but in my case (see #1660 (comment)) there is no iGPU detected.
Currently, the GGML SYCL backend only supports GPUs. If there is no GPU, it can't work well. If you want to run on an Intel CPU, you could use the GGML oneMKL backend.
….0 by renovate (#18178)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.7.0-cublas-cuda11-ffmpeg-core` -> `v2.8.0-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.7.0-cublas-cuda11-core` -> `v2.8.0-cublas-cuda11-core` |
| [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.7.0-cublas-cuda12-ffmpeg-core` -> `v2.8.0-cublas-cuda12-ffmpeg-core` |
| [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.7.0-cublas-cuda12-core` -> `v2.8.0-cublas-cuda12-core` |
| [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.7.0-ffmpeg-core` -> `v2.8.0-ffmpeg-core` |
| [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.7.0` -> `v2.8.0` |

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.8.0`](https://github.com/mudler/LocalAI/releases/tag/v2.8.0)

[Compare Source](https://github.com/mudler/LocalAI/compare/v2.7.0...v2.8.0)

This release adds support for Intel GPUs, and it deprecates the old ggml-based backends, which are by now superseded by llama.cpp (which now supports more architectures out of the box). See also [https://github.com/mudler/LocalAI/issues/1651](https://github.com/mudler/LocalAI/issues/1651).

Images are now based on Ubuntu 22.04 LTS instead of Debian bullseye.

##### Intel GPUs

There are now images tagged with "sycl". There are sycl-f16 and sycl-f32 images indicating f16 or f32 support.

For example, to start phi-2 with an Intel GPU it is enough to use the container image like this:

    docker run -e DEBUG=true -ti -v $PWD/models:/build/models -p 8080:8080 -v /dev/dri:/dev/dri --rm quay.io/go-skynet/local-ai:master-sycl-f32-ffmpeg-core phi-2

##### What's Changed

##### Exciting New Features 🎉

- feat(sycl): Add support for Intel GPUs with sycl ([#​1647](https://github.com/mudler/LocalAI/issues/1647)) by [@​mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/1660](https://github.com/mudler/LocalAI/pull/1660)
- Drop old falcon backend (deprecated) by [@​mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/1675](https://github.com/mudler/LocalAI/pull/1675)
- ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1678](https://github.com/mudler/LocalAI/pull/1678)
- Drop ggml-based gpt2 and starcoder (supported by llama.cpp) by [@​mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/1679](https://github.com/mudler/LocalAI/pull/1679)
- fix(Dockerfile): sycl dependencies by [@​mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/1686](https://github.com/mudler/LocalAI/pull/1686)
- feat: Use ubuntu as base for container images, drop deprecated ggml-transformers backends by [@​mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/1689](https://github.com/mudler/LocalAI/pull/1689)

##### 👒 Dependencies

- ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1656](https://github.com/mudler/LocalAI/pull/1656)
- ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1665](https://github.com/mudler/LocalAI/pull/1665)
- ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1669](https://github.com/mudler/LocalAI/pull/1669)
- ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1673](https://github.com/mudler/LocalAI/pull/1673)
- ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1683](https://github.com/mudler/LocalAI/pull/1683)
- ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1688](https://github.com/mudler/LocalAI/pull/1688)
- ⬆️ Update mudler/go-stable-diffusion by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1674](https://github.com/mudler/LocalAI/pull/1674)

##### Other Changes

- ⬆️ Update docs version mudler/LocalAI by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1661](https://github.com/mudler/LocalAI/pull/1661)
- feat(mamba): Add bagel-dpo-2.8b by [@​richiejp](https://github.com/richiejp) in [https://github.com/mudler/LocalAI/pull/1671](https://github.com/mudler/LocalAI/pull/1671)
- fix (docs): fixed broken links `github/` -> `github.com/` by [@​Wansmer](https://github.com/Wansmer) in [https://github.com/mudler/LocalAI/pull/1672](https://github.com/mudler/LocalAI/pull/1672)
- Fix HTTP links in README.md by [@​vfiftyfive](https://github.com/vfiftyfive) in [https://github.com/mudler/LocalAI/pull/1677](https://github.com/mudler/LocalAI/pull/1677)
- ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1681](https://github.com/mudler/LocalAI/pull/1681)
- ci: cleanup worker before run by [@​mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/1685](https://github.com/mudler/LocalAI/pull/1685)
- Revert "fix(Dockerfile): sycl dependencies" by [@​mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/1687](https://github.com/mudler/LocalAI/pull/1687)
- ⬆️ Update ggerganov/llama.cpp by [@​localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1691](https://github.com/mudler/LocalAI/pull/1691)

##### New Contributors

- [@​richiejp](https://github.com/richiejp) made their first contribution in [https://github.com/mudler/LocalAI/pull/1671](https://github.com/mudler/LocalAI/pull/1671)
- [@​Wansmer](https://github.com/Wansmer) made their first contribution in [https://github.com/mudler/LocalAI/pull/1672](https://github.com/mudler/LocalAI/pull/1672)
- [@​vfiftyfive](https://github.com/vfiftyfive) made their first contribution in [https://github.com/mudler/LocalAI/pull/1677](https://github.com/mudler/LocalAI/pull/1677)

**Full Changelog**: mudler/LocalAI@v2.7.0...v2.8.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 10pm on monday" in timezone Europe/Amsterdam, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these updates again.

---

- [ ] If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate).
part of #1647
based on: ggerganov/llama.cpp#2690
Note on exposing the GPU with docker: ggerganov/llama.cpp#2690 (comment)
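As a sketch, the two common ways to expose the Intel GPU device nodes to a container (`<image>` is a placeholder; the examples in this thread use the bind-mount form, and the container user may additionally need to be in the render/video groups):

```sh
# Bind-mount the DRI device nodes (the form used elsewhere in this PR):
docker run --rm -v /dev/dri:/dev/dri <image> ...

# Or pass them through as devices:
docker run --rm --device /dev/dri <image> ...
```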
Testing with: