llama : fix qs.n_attention_wv for DeepSeek-V2 (#9156)
compilade authored Aug 27, 2024
1 parent a77feb5 commit 78eb487
Showing 1 changed file with 2 additions and 1 deletion.
src/llama.cpp: 3 changes (2 additions & 1 deletion)
@@ -16822,7 +16822,8 @@ static void llama_model_quantize_internal(const std::string & fname_inp, const s
 
         // TODO: avoid hardcoded tensor names - use the TN_* constants
         if (name.find("attn_v.weight") != std::string::npos ||
-            name.find("attn_qkv.weight") != std::string::npos) {
+            name.find("attn_qkv.weight") != std::string::npos ||
+            name.find("attn_kv_b.weight")!= std::string::npos) {
             ++qs.n_attention_wv;
         } else if (name == LLM_TN(model.arch)(LLM_TENSOR_OUTPUT, "weight")) {
             qs.has_output = true;
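For context: qs.n_attention_wv tallies the attention value-projection tensors seen while quantizing, and the quantizer later compares that tally against the model's layer count and uses it to vary the chosen quantization type across layers. DeepSeek-V2 carries its value projection inside attn_kv_b.weight rather than a standalone attn_v.weight, so without the extra pattern the counter stays too low for that architecture. Below is a minimal, self-contained sketch of the same name-matching tally; the struct and helper names are hypothetical stand-ins, not the llama.cpp API.

// Sketch of the tensor-name tally changed by this commit (hypothetical,
// standalone; llama.cpp keeps this state in its internal quantize state).
#include <cstdio>
#include <string>
#include <vector>

struct quantize_state_sketch {
    int  n_attention_wv = 0;     // attention value-projection tensors seen so far
    bool has_output     = false; // whether an output.weight tensor was seen
};

// Count a tensor toward n_attention_wv if its name matches one of the
// patterns handled in llama_model_quantize_internal, including the
// attn_kv_b.weight pattern added here for DeepSeek-V2.
static void count_tensor(quantize_state_sketch & qs, const std::string & name) {
    if (name.find("attn_v.weight")    != std::string::npos ||
        name.find("attn_qkv.weight")  != std::string::npos ||
        name.find("attn_kv_b.weight") != std::string::npos) {
        ++qs.n_attention_wv;
    } else if (name == "output.weight") { // stand-in for LLM_TN(model.arch)(LLM_TENSOR_OUTPUT, "weight")
        qs.has_output = true;
    }
}

int main() {
    // Two DeepSeek-V2-style layers: before this commit, neither attn_kv_b
    // name below would have incremented the counter.
    const std::vector<std::string> names = {
        "blk.0.attn_kv_b.weight",
        "blk.1.attn_kv_b.weight",
        "output.weight",
    };
    quantize_state_sketch qs;
    for (const auto & name : names) {
        count_tensor(qs, name);
    }
    std::printf("n_attention_wv = %d, has_output = %d\n", qs.n_attention_wv, qs.has_output);
    return 0;
}

For this two-layer example the sketch reports n_attention_wv = 2 and has_output = 1, i.e. one value-projection tensor per layer, which is the shape of count the quantizer expects; with only the old patterns the same names would have left the counter at 0.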
