
Bug: Gemma-2 not supported on b3262 #8195

Closed
nmandic78 opened this issue Jun 28, 2024 · 15 comments
Labels: bug-unconfirmed, medium severity

Comments

@nmandic78

nmandic78 commented Jun 28, 2024

What happened?

I pulled and built b3262, but when loading the model (with both the server and the CLI) I get a response that gemma2 is an unknown architecture.
$ git log -1 --oneline
38373cf (HEAD -> master, tag: b3262, origin/master, origin/HEAD) Add SPM infill support (#8016)

Looking at the release notes, I expected it to be supported as of two releases earlier:
b3259
llama: Add support for Gemma2ForCausalLM (#8156)
Inference support for Gemma 2 model family

Am I missing something (as I don't see anybody else complaining)?

Name and Version

version: 3262 (38373cf)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu

What operating system are you seeing the problem on?

Linux

Relevant log output

$ ./server -m /mnt/disk2/LLM_MODELS/models/gemma-2-9b-it-Q8_0.gguf -ngl 99 -c 4096
{"tid":"131173750738944","timestamp":1719588568,"level":"INFO","function":"main","line":2940,"msg":"build info","build":2964,"commit":"9b3d8331"}
{"tid":"131173750738944","timestamp":1719588568,"level":"INFO","function":"main","line":2945,"msg":"system info","n_threads":6,"n_threads_batch":-1,"total_threads":6,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "}
llama_model_loader: loaded meta data with 25 key-value pairs and 464 tensors from /mnt/disk2/LLM_MODELS/models/gemma-2-9b-it-Q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = gemma2
llama_model_loader: - kv   1:                               general.name str              = Gemma2 9B
llama_model_loader: - kv   2:                      gemma2.context_length u32              = 8192
llama_model_loader: - kv   3:                         gemma2.block_count u32              = 42
llama_model_loader: - kv   4:                    gemma2.embedding_length u32              = 3584
llama_model_loader: - kv   5:                 gemma2.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                gemma2.attention.head_count u32              = 16
llama_model_loader: - kv   7:             gemma2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                gemma2.attention.key_length u32              = 256
llama_model_loader: - kv   9:              gemma2.attention.value_length u32              = 256
llama_model_loader: - kv  10:    gemma2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                tokenizer.ggml.bos_token_id u32              = 2
llama_model_loader: - kv  13:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  14:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  15:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,256128]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,256128]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,256128]  = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:               general.quantization_version u32              = 2
llama_model_loader: - kv  20:                          general.file_type u32              = 7
llama_model_loader: - kv  21:                      quantize.imatrix.file str              = /models/gemma-2-9b-it-GGUF/gemma-2-9b...
llama_model_loader: - kv  22:                   quantize.imatrix.dataset str              = /training_data/calibration_datav3.txt
llama_model_loader: - kv  23:             quantize.imatrix.entries_count i32              = 294
llama_model_loader: - kv  24:              quantize.imatrix.chunks_count i32              = 128
llama_model_loader: - type  f32:  169 tensors
llama_model_loader: - type q8_0:  295 tensors
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma2'
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '/mnt/disk2/LLM_MODELS/models/gemma-2-9b-it-Q8_0.gguf'
{"tid":"131173750738944","timestamp":1719588569,"level":"ERR","function":"load_model","line":692,"msg":"unable to load model","model":"/mnt/disk2/LLM_MODELS/models/gemma-2-9b-it-Q8_0.gguf"}
free(): invalid pointer
Aborted (core dumped)
nmandic78 added the bug-unconfirmed and medium severity labels on Jun 28, 2024
@bartowski1182
Contributor

Can you show your build log? I can confirm it works for me (though not specifically b3262, will update and verify), so I'm wondering if it cached some build artifacts and you didn't actually get the latest.
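
For what it's worth, one way to rule out stale artifacts is a clean rebuild (a rough sketch assuming the default Makefile build; adjust if you build with CMake):

# from the llama.cpp checkout
make clean              # drop old objects and the previously named binaries
make -j                 # rebuild, producing llama-cli / llama-server etc.
./llama-cli --version   # should report the checked-out build (3262 here)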

@MoonRide303

gemma-2-9b-it works fine for me - Q6_K quant, converted and launched using llama.cpp b3259.

@slaren
Collaborator

slaren commented Jun 28, 2024

The name of the server binary has been changed to llama-server, you are probably using an old build.

@brittlewis12

@nmandic78 server & main have been renamed as of #7809. You may inadvertently be using stale compilation artifacts.

[2024 Jun 12] Binaries have been renamed w/ a `llama-` prefix. `main` is now `llama-cli`, `server` is `llama-server`, etc (https://github.com/ggerganov/llama.cpp/pull/7809)

try llama-server & llama-cli instead
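
For example, the command from the original report would become:

./llama-server -m /mnt/disk2/LLM_MODELS/models/gemma-2-9b-it-Q8_0.gguf -ngl 99 -c 4096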

@nmandic78
Author

Oh, I feel so stupid now :D I should have read that.
Indeed, when using the renamed binaries (the ones actually built with the new release), there is no problem.

Thank you!

@werruww

werruww commented Jun 29, 2024

gemma 9b does not work

llama.cpp b3259
pip install git+https://github.com/ggerganov/llama.cpp.git@b3259

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma2'
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "C:\Users\m\Desktop\ollama\1.py", line 4, in <module>
    llm = Llama(
          ^^^^^^
  File "C:\Users\m\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama.py", line 358, in __init__
    self._model = self._stack.enter_context(contextlib.closing(_LlamaModel(
                                                                ^^^^^^^^^^^^
  File "C:\Users\m\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\_internals.py", line 54, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: ./gemma-2-9b-it.Q4_K.gguf
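
Note that this traceback goes through llama-cpp-python, which bundles its own copy of llama.cpp, so the installed Python package itself has to be new enough to recognize the gemma2 architecture. A rough sketch of forcing a fresh install of the bindings (assuming a release with Gemma-2 support is available):

pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python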

@wesleysanjose

I still see the following when running the gemma-2 27b Q4_K_M from https://huggingface.co/bartowski/gemma-2-27b-it-GGUF after updating to the latest commits:

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma2'

@MoonRide303

I still see the following when running the gemma-2 27b Q4_K_M from https://huggingface.co/bartowski/gemma-2-27b-it-GGUF after updating to the latest commits:

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma2'

I just tested a build from current master (d0a7145) using a freshly downloaded Q3_K_S quant from this repo, launched like this:
llama-server.exe -v -ngl 99 -m gemma-2-27b-it-Q3_K_S.gguf -c 4096
and it seems to be working (at least in basic scope; stuff like interleaved SWA/full attention might still be missing):
[screenshot of the model's response omitted]

@bartowski1182
Contributor

@wesleysanjose make sure you're using llama-cli and have all the latest binaries

@wesleysanjose

@bartowski1182 I use server to launch an OpenAI-compatible server; what's the difference? I've always used it this way:

./server -m $1 --n-gpu-layers $2 -c $3 --host 192.168.0.184 --port 5000 -b 4096 -to 120 -ts 20,6

@bartowski1182
Contributor

The binary names were updated a few weeks ago, so you're using the old ones that have been sitting around.

It should be ./llama-server
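
In other words, the same launch script with only the binary name changed:

./llama-server -m $1 --n-gpu-layers $2 -c $3 --host 192.168.0.184 --port 5000 -b 4096 -to 120 -ts 20,6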

@wesleysanjose

That works, thank you so much @bartowski1182!

@jim-plus

jim-plus commented Aug 2, 2024

I'm getting this error when attempting to quantize a bf16 GGUF after building on Windows. I'm tacking this on here, as a fix may be related.

$ ./llama-cli --version
version: 3504 (e09a800)
built with MSVC 19.40.33812.0 for x64

And yet:
llama.cpp\build\bin\release\quantize temp.gguf ./text-generation-webui/models/%1.Q8_0.gguf q8_0

It eventually ends with:
llama_model_loader: - type f32: 105 tensors
llama_model_loader: - type bf16: 183 tensors
llama_model_quantize: failed to quantize: unknown model architecture: 'gemma2'
main: failed to quantize model from 'temp.gguf'

@brittlewis12

@jim-plus the quantize binary was also renamed alongside main & server, with the same llama- prefix:

give llama-quantize a shot.
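
For example, the invocation above with only the binary name changed (assuming the rebuilt binary lands in the same output directory):

llama.cpp\build\bin\release\llama-quantize temp.gguf ./text-generation-webui/models/%1.Q8_0.gguf q8_0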

@jim-plus

jim-plus commented Aug 3, 2024

Ah, that did it (along with clearing out all the old binaries for a clean rebuild). Thanks!
