Mistral-Nemo-Instruct-2407 Q8_0 GGUF: Model failed with error: ShapeMismatchBinaryOp { lhs: [1, 26, 4096], rhs: [26, 32, 160], op: "reshape" } #643
Comments
Hey @Remember20240719! Can you please run with
@Remember20240719 fixed it in #657! Please confirm that it works now after
@Remember20240719 closing as complete via #657.
That was quick, thanks! This time the command runs normally, but it crashes when given a prompt (git HEAD a9b8b2e).
Describe the bug
Hello, I found in PR #595 that Mistral Nemo Instruct 2407 is supported. It is working really well (using ISQ on HF safetensors).
Are GGUF models supported too?
Using the Q8_0 from https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/tree/main:
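For context on the error in the title: a reshape is only valid when the source and target shapes contain the same total number of elements, and the two shapes in the report do not. A minimal sketch of the check, using NumPy rather than the project's actual Rust/candle code:

```python
import numpy as np

# Shapes taken from the error message:
# ShapeMismatchBinaryOp { lhs: [1, 26, 4096], rhs: [26, 32, 160], op: "reshape" }
lhs = (1, 26, 4096)
rhs = (26, 32, 160)

# A reshape requires equal element counts; here they differ,
# which is why the op fails.
print(np.prod(lhs))  # 106496
print(np.prod(rhs))  # 133120

# Attempting the equivalent reshape in NumPy raises the same kind of error:
try:
    np.zeros(lhs).reshape(rhs)
except ValueError as e:
    print("reshape failed:", e)
```

This suggests the GGUF loading path was deriving per-head dimensions from the wrong metadata for this model (Mistral Nemo uses a head_dim of 128 with a 5120 hidden size, unlike earlier Mistral variants), which #657 addresses.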
Server:
Client:
Latest commit or version
Latest commit 38fb942