Convert-hf-to-gguf fails with command-r-plus #6488
Comments
I fixed this by including that line in config.json; however, it still fails after that for a different reason: Can not map tensor 'model.layers.0.self_attn.k_norm.weight'
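For context, "that line" refers to the max-length suggestion mentioned in the next comment. The exact key isn't preserved in this copy of the thread; a plausible reading, given that Command R+ advertises a 128k context window, is an entry like this in the model's config.json (key name and value are assumptions, not confirmed by the thread):

```json
{
  "model_max_length": 131072
}
```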
Yes, I just found the same thing after it was suggested to add the max length. New output in its entirety:
(output log omitted)
This fixes the tensor map error.
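The patch itself isn't preserved above. As a hedged sketch, the fix amounts to teaching gguf-py's tensor name map about the q_norm/k_norm tensor names that Command R+ uses, so the converter can translate them into GGUF names. The entries below are assumptions inferred from the error message, not the actual patch ("{bid}" is the layer-index placeholder used throughout gguf-py):

```python
from gguf.constants import MODEL_TENSOR

# Hypothetical excerpt of the extra name templates the converter needs:
# Command R+ stores per-layer query/key layernorms under these HF names.
extra_command_r_plus_mappings = {
    MODEL_TENSOR.ATTN_Q_NORM: ("model.layers.{bid}.self_attn.q_norm",),
    MODEL_TENSOR.ATTN_K_NORM: ("model.layers.{bid}.self_attn.k_norm",),
}
```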
Yeah, but it still fails per @bartowski1182's attempts. Everybody's chomping at the bit for this beast.
In the meantime, if you are on a Mac, there is https://huggingface.co/mlx-community/c4ai-command-r-plus-4bit
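For anyone trying that route, usage would look roughly like the standard mlx-lm flow below (this is an assumption about the MLX tooling, not something stated in the thread; requires Apple Silicon):

```sh
pip install mlx-lm
python -m mlx_lm.generate --model mlx-community/c4ai-command-r-plus-4bit \
    --prompt "Hello"
```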
The map error is probably caused by an outdated gguf-py. The recommendation is to install the local version. |
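The converter can otherwise pick up an older gguf package from PyPI that predates the new tensor mappings. A minimal sketch of installing the in-tree copy, assuming you are at the root of a llama.cpp checkout:

```sh
# Replace the PyPI gguf with the version bundled in this repo
pip install ./gguf-py
pip show gguf   # sanity check: confirm the installed version/location
```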
Trying to quantize the just-released Command R+ model. I know Command R support was added a while back, but there appears to be something different about this new, bigger model that is causing issues. With a fresh clone of llama.cpp from a few minutes ago, this is the failure I get when trying to convert.
https://huggingface.co/CohereForAI/c4ai-command-r-plus
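For reference, the conversion attempt looks roughly like this (paths and the output filename are placeholders; the script and flags are from the llama.cpp tree of that era):

```sh
python convert-hf-to-gguf.py /path/to/c4ai-command-r-plus \
    --outfile command-r-plus-f16.gguf --outtype f16
```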