Error in Tabby deployment - llama_cpp_bindings::llama: crates/llama-cpp-bindings/src/llama.rs #1666
Comments
Hi, thanks for reporting the issue. Could you please upgrade to 0.9.0 and see if the problem still persists?
That would require significant effort; we'll keep it as a last resort.
Happens for me too, on 0.9.1, when running with
Could you also share the log output and your system info?
Seems related: @mprudra, could you share the model you were using when you encountered the issue?
Is it the case that DeepSeek-Coder models aren't yet supported?
ggerganov/llama.cpp#5981 is the latest issue opened to support DeepSeek in llama.cpp.
The DeepSeek series of models is now supported.
Describe the bug
I'm seeing the error below in our Tabby deployment; it looks like a memory error. I don't have any additional logs: we've modified our logging to mask input and output content, which was required for our production deployment.
The process exit code was 1.
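For context on the masking mentioned above, here is a minimal sketch of the idea; the helper name and log format are assumptions for illustration, not Tabby's actual logging code. The point is to redact prompt and completion payloads before they reach the log sink, keeping only their sizes:

```rust
/// Hypothetical helper (an assumption, not Tabby's actual code):
/// replace a sensitive payload with its byte length so production
/// logs never contain user prompts or completions.
fn redact(field: &str, value: &str) -> String {
    format!("{field}=<masked, {} bytes>", value.len())
}

fn main() {
    let prompt = "fn main() { /* proprietary code */ }";
    // Log the redacted form instead of the raw prompt.
    eprintln!("completion request: {}", redact("prompt", prompt));
}
```

A side effect of this kind of masking is exactly what the report describes: when a crash occurs, the surviving logs carry no request context to reproduce it from.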
Information about your version
0.5.5
Information about your GPU