Ollama error after a few requests - ubatch must be set as the times of VS #12216
Comments
"Interesting thing is if I keep pressing the refresh button it works every time." I'd like to correct myself: I tried this with smaller models, and I can click refresh 10 times and it works every time, but now I tried it with Mistral-Small, which is a larger model, and it failed even with the refresh button, not just after a new message.
Screencast.From.2024-10-17.03-37-55.webm: you can see it fail a few times on the first tries (Ollama reloads the model every time), but then, by clicking only the refresh button, it works perfectly.
I encountered a similar problem: "ubatch must be set as the times of GS".
Thank you for your feedback; you may try the latest ipex-llm[cpp] (version number >= 10.17) tomorrow.
The original issue is fixed: no errors, and it doesn't reload, but after the first request the response is total nonsense, just random text. Same behavior with all models. If you need any more information, please let me know. I really appreciate the help.
Hi,
My config: A770 + Ollama + OpenWebui + intelanalytics/ipex-llm-inference-cpp-xpu:latest docker
After 2-3 chat messages I get this error:
ollama_llama_server: /home/runner/_work/llm.cpp/llm.cpp/llm.cpp/bigdl-core-xe/llama_backend/sdp_xmx_kernel.cpp:191: void sdp_causal_xmx_kernel(const void *, const void *, const void *, const void *, const void *, const void *, float *, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int64_t, const int, const int, const int, const int, const int, const float, sycl::queue &) [HD = 128, VS = 32, RepeatCount = 8, Depth = 16, ExecuteSize = 8]: Assertion `(context_length-seq_len)%VS==0 && "ubatch must be set as the times of VS\n"' failed.
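The failing assertion is `(context_length-seq_len)%VS==0`, i.e. the kernel requires the gap between the KV-cache context length and the current sequence length to be an exact multiple of VS (32 in this build, per the error text). A minimal sketch of that invariant, purely to illustrate the condition (the names `context_length`, `seq_len`, and `VS` come from the error message; the helper function is hypothetical, not part of ipex-llm):

```python
# Illustration of the invariant asserted by sdp_causal_xmx_kernel:
# (context_length - seq_len) must be a multiple of VS (reported as 32).
VS = 32  # vector size from the failing kernel instantiation in the log

def sdp_batch_is_valid(context_length: int, seq_len: int, vs: int = VS) -> bool:
    """Return True when (context_length - seq_len) is a multiple of vs."""
    return (context_length - seq_len) % vs == 0

# Once the cached context stops lining up on a 32-element boundary
# (e.g. after a few variable-length chat turns), the assertion fires:
print(sdp_batch_is_valid(2048, 0))   # aligned
print(sdp_batch_is_valid(1537, 24))  # misaligned -> would trip the assert
```

This also suggests why reloading the model "fixes" it: a fresh load resets the cache to an aligned state, and the mismatch only reappears after context accumulates again.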
If I click the 'refresh'/'again' button in the OpenWebui chat, Ollama reloads the model and it works, but again, after a few messages, it fails.
The interesting thing is that if I keep pressing the refresh button, it works every time.
I've tried multiple models, and OpenWebui both in the Intel docker and in the official latest version.
Can someone point me in the right direction? Thank you.