Information in #4856 suggests that at 2.31 bpw (IQ2_XS) the ppl of Mixtral should be 4.514. However, the actual ppl obtained via perplexity calculation is much higher.
Below is the command I used to calculate the ppl. I get 5.275 as the result.
The quantized model was obtained directly from https://huggingface.co/ikawrakow/various-2bit-sota-gguf (mixtral-8x7b-2.34bpw.gguf), which should have been quantized with an imatrix. I also quantized a Mixtral model myself with an imatrix; the ppl is slightly different but is still around 5.27. @ikawrakow
Oops, I think I made a mistake. The ppl in #4856 was calculated using a 4096 ctx rather than the default. I haven't finished recalculating it yet, but from what I've got so far, it should be close to 4.5.
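For reference, here is a sketch of how the context size enters the perplexity run (flag names per llama.cpp's perplexity tool; the model path and test file below are placeholders, not the exact command from this thread):

```shell
# Hypothetical invocation: perplexity at ctx 4096 instead of the default.
# Paths are placeholders; substitute your own model and test set.
./perplexity -m ./mixtral-8x7b-2.34bpw.gguf \
             -f ./wikitext-2-raw/wiki.test.raw \
             -c 4096
```

Running with the default context instead of `-c 4096` explains the gap: ppl measured over shorter windows is systematically higher, so the 5.275 result is not comparable to the 4.514 figure in #4856.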
Update: Yes, the numbers match.
Just to add, the ppl of iq3_xxs matches #5196, at around 4.456. I haven't tested other models or quants yet.
In case you ask: no, it has nothing to do with koboldcpp. The models are in the koboldcpp folder, but the actual ppl calculation uses llama.cpp.