
Partial GPU offload broken for certain number of offloaded layers #5137

Closed
ikawrakow opened this issue Jan 26, 2024 · 8 comments · Fixed by #5145


Steps to reproduce

  1. Quantize Mixtral-8x7B with a quantization type that fully fits on the available GPU. In my case (16 GB GPU) these are IQ2_XXS and IQ2_XS.
  2. Run a short perplexity calculation with the model fully offloaded to the GPU. A few chunks are enough.
  3. Now run the same calculation with -ngl 30 and observe that PPL is 2-3 times higher than in step 2.
  4. To verify that this is not due to a broken CPU kernel, make a build without CUDA support and run on the CPU. Notice that PPL is very similar to the result of step 2.

Here are some example runs

All layers on GPU:

main: build = 1971 (1182cf4d)
...
llm_load_print_meta: model ftype = IQ2_XSS - 2.0625 bpw
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 11.44 GiB (2.10 BPW)
llm_load_print_meta: general.name = hf
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.76 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloaded 32/33 layers to GPU
llm_load_tensors: CPU buffer size = 11712.97 MiB
llm_load_tensors: CUDA0 buffer size = 11586.00 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 64.00 MiB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 9.01 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 109.03 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 70.50 MiB
llama_new_context_with_model: graph splits (measure): 4

system_info: n_threads = 32 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 567.194 ms
perplexity: calculating perplexity over 642 chunks, batch_size=512
perplexity: 1.32 seconds per pass - ETA 14.08 minutes
[1]4.0990,[2]4.9914,[3]5.6483,[4]6.3020,[5]6.2826,[6]6.2130,[7]6.4030,[8]6.4265,[9]6.5435,[10]6.8596,[11]7.0488,[12]7.0107,[13]7.0517,[14]7.0914

30 layers offloaded to GPU:

main: build = 1971 (1182cf4d)
...
llm_load_print_meta: model ftype = IQ2_XSS - 2.0625 bpw
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 11.44 GiB (2.10 BPW)
llm_load_print_meta: general.name = hf
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.76 MiB
llm_load_tensors: offloading 30 repeating layers to GPU
llm_load_tensors: offloaded 30/33 layers to GPU
llm_load_tensors: CPU buffer size = 11712.97 MiB
llm_load_tensors: CUDA0 buffer size = 10806.75 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 4.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 60.00 MiB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 9.01 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 108.03 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 108.03 MiB
llama_new_context_with_model: graph splits (measure): 5

system_info: n_threads = 32 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 564.023 ms
perplexity: calculating perplexity over 642 chunks, batch_size=512
perplexity: 1.44 seconds per pass - ETA 15.45 minutes
[1]9.7855,[2]10.1005,[3]12.9574,[4]13.0298,[5]12.7318,[6]11.8905,[7]11.7408,[8]11.9335,[9]11.8980,[10]12.4120,[11]12.8212,[12]13.9232,[13]13.9312,[14]14.1171

All on CPU:

main: build = 1971 (1182cf4)
...
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = IQ2_XSS - 2.0625 bpw
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 11.44 GiB (2.10 BPW)
llm_load_print_meta: general.name = hf
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.38 MiB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/33 layers to GPU
llm_load_tensors: CPU buffer size = 11712.97 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 64.00 MiB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_new_context_with_model: CPU input buffer size = 9.01 MiB
llama_new_context_with_model: CPU compute buffer size = 114.53 MiB
llama_new_context_with_model: graph splits (measure): 1

system_info: n_threads = 32 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 566.337 ms
perplexity: calculating perplexity over 642 chunks, batch_size=512
perplexity: 72.40 seconds per pass - ETA 12 hours 54.68 minutes
[1]4.1341,[2]5.0092,[3]5.6687,[4]6.3300,[5]6.3044,[6]6.2292,[7]6.4185,[8]6.4343,[9]6.5516,[10]6.8710,[11]7.0630,[12]7.0260,[13]7.0671,

29 layers on GPU:

main: build = 1971 (1182cf4d)
...
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = IQ2_XSS - 2.0625 bpw
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 11.44 GiB (2.10 BPW)
llm_load_print_meta: general.name = hf
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.76 MiB
llm_load_tensors: offloading 29 repeating layers to GPU
llm_load_tensors: offloaded 29/33 layers to GPU
llm_load_tensors: CPU buffer size = 11712.97 MiB
llm_load_tensors: CUDA0 buffer size = 10417.12 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 6.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 58.00 MiB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 9.01 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 108.03 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 108.03 MiB
llama_new_context_with_model: graph splits (measure): 5

system_info: n_threads = 32 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 566.749 ms
perplexity: calculating perplexity over 642 chunks, batch_size=512
perplexity: 1.52 seconds per pass - ETA 16.28 minutes
[1]4.0521,[2]4.9624,[3]5.5985,[4]6.2678,[5]6.2614,[6]6.2038,[7]6.4056,[8]6.4241,[9]6.5409,[10]6.8630,[11]7.0622,[12]7.0257,[13]7.0661,[14]7.1006,

31 layers on GPU:

main: build = 1971 (1182cf4d)
...
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = IQ2_XSS - 2.0625 bpw
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 11.44 GiB (2.10 BPW)
llm_load_print_meta: general.name = hf
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.76 MiB
llm_load_tensors: offloading 31 repeating layers to GPU
llm_load_tensors: offloaded 31/33 layers to GPU
llm_load_tensors: CPU buffer size = 11712.97 MiB
llm_load_tensors: CUDA0 buffer size = 11196.38 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 2.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 62.00 MiB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 9.01 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 108.03 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 108.03 MiB
llama_new_context_with_model: graph splits (measure): 5

system_info: n_threads = 32 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 548.178 ms
perplexity: calculating perplexity over 642 chunks, batch_size=512
perplexity: 1.39 seconds per pass - ETA 14.82 minutes
[1]4.8836,[2]6.0415,[3]6.4471,[4]7.0981,[5]6.9666,[6]6.8581,[7]7.1009,[8]7.0858,[9]7.2431,[10]7.5545,[11]7.7723,[12]7.6741,[13]7.7159,[14]7.7463,

ikawrakow added the bug label on Jan 26, 2024
ikawrakow (Contributor, Author) commented:

The problem appears to be CUDA-specific. Repeating the same experiment on a Mac using Metal, I get a very similar PPL for full offload and for 30 layers offloaded to the GPU:

Full:
[1]4.0747,[2]5.0019,[3]5.6953,[4]6.3264,[5]6.2909,[6]6.2173,[7]6.3998,[8]6.4105,[9]6.5176,[10]6.8183,[11]7.0142,[12]6.9679,[13]7.0130,[14]7.0579
30 layers on GPU:
[1]4.0577,[2]4.9726,[3]5.6724,[4]6.2990,[5]6.2691,[6]6.1972,[7]6.3821,[8]6.3952,[9]6.5109,[10]6.8192,[11]7.0248,[12]6.9796,[13]7.0217,[14]7.0631

ikawrakow (Contributor, Author) commented:

The problem is related to the ggml-backend integration. If I check out 584d674, the last commit before PR #4766 was merged, I get a meaningful result with 30 layers offloaded to the GPU.

main: build = 1842 (584d674b)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1706261433
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4080, compute capability 8.9, VMM: yes
llama_model_loader: loaded meta data with 25 key-value pairs and 995 tensors from ../cuda/junk.bin (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = hf
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.expert_count u32 = 8
llama_model_loader: - kv 10: llama.expert_used_count u32 = 2
llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 13: general.file_type u32 = 19
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,58980] = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 21: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 32 tensors
llama_model_loader: - type q2_K: 33 tensors
llama_model_loader: - type q4_K: 32 tensors
llama_model_loader: - type q5_K: 1 tensors
llama_model_loader: - type iq2_xxs: 832 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 8
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = IQ2_XSS - 2.0625 bpw
llm_load_print_meta: model params = 46.70 B
llm_load_print_meta: model size = 11.44 GiB (2.10 BPW)
llm_load_print_meta: general.name = hf
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.38 MiB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: system memory used = 906.60 MiB
llm_load_tensors: VRAM used = 10806.75 MiB
llm_load_tensors: offloading 30 repeating layers to GPU
llm_load_tensors: offloaded 30/33 layers to GPU
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: VRAM kv self = 60.00 MB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_build_graph: non-view tensors processed: 1124/1124
llama_new_context_with_model: compute buffer total size = 117.72 MiB
llama_new_context_with_model: VRAM scratch buffer: 114.53 MiB
llama_new_context_with_model: total VRAM used: 10981.28 MiB (model: 10806.75 MiB, context: 174.53 MiB)

system_info: n_threads = 32 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 548.46 ms
perplexity: calculating perplexity over 642 chunks, batch_size=512
perplexity: 1.60 seconds per pass - ETA 17.07 minutes
[1]4.0965,[2]4.9995,[3]5.6524,[4]6.3036,[5]6.2862,[6]6.2144,[7]6.3981,[8]6.4203,[9]6.5414,[10]6.8581,[11]7.0507,[12]7.0140,[13]7.0551,[14]7.0926,

slaren (Collaborator) commented Jan 26, 2024:

Have you seen this issue in any model other than mixtral?

slaren (Collaborator) commented Jan 26, 2024:

As a workaround, increasing the alignment to 4096 in ggml_backend_cuda_buffer_type_get_alignment seems to fix it.
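
For reference, the workaround amounts to returning a larger alignment from that function. Below is a minimal sketch, not the actual ggml-cuda.cu code: only the function name and the 4096 value come from the comment above; the signature and the previous return value (128 here) are assumptions.

// Sketch of the workaround: pad CUDA buffer allocations to a 4096-byte alignment.
// The signature and the previous return value are assumed, not taken from the repo.
static size_t ggml_backend_cuda_buffer_type_get_alignment(ggml_backend_buffer_type_t buft) {
    (void) buft;   // the alignment does not depend on the buffer type here
    return 4096;   // was a smaller power of two (e.g. 128) before the workaround
}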

ikawrakow (Contributor, Author) commented:

No, I haven't seen this with any other model. Yes, changing the alignment returned by ggml_backend_cuda_buffer_type_get_alignment to 4096 fixes it. It also changes the perplexity results; fortunately, it seems to change them for the better. Why would the alignment influence the results?

Example: Mixtral-8x7B, Q4_K_S quantization, 18 layers offloaded to the GPU

  • PPL on current master: 4.1923
  • PPL with ggml_backend_cuda_buffer_type_get_alignment set to 4096: 4.1682

Are there any downsides to having this set to 4096?

I did see quite significant changes in PPL for Mixtral-8x7B after PR #4766; see my comments there.

slaren (Collaborator) commented Jan 26, 2024:

My reasoning when testing the increased alignment was that if there is a buffer overflow somewhere, adding a gap between the tensors may mask the issue by preventing it from corrupting the data of other tensors. Increasing the alignment effectively does that. I still don't know what the source of the issue is, but at least there are fewer possibilities now. The downside of increasing the alignment is a slight increase in memory usage, but ultimately this is not a solution; it just hides the real issue.
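
To make the masking effect concrete, here is a small standalone illustration (not ggml code): buffer offsets get rounded up to the alignment, so with a 4096-byte alignment a modest overrun past one tensor's end lands in padding instead of in the next tensor's data. All byte counts below are made up for the example.

// Standalone illustration of how a larger alignment can absorb a buffer overflow.
#include <cstddef>
#include <cstdio>
#include <initializer_list>

static size_t align_up(size_t offset, size_t alignment) {
    return (offset + alignment - 1) & ~(alignment - 1); // alignment must be a power of two
}

int main() {
    const size_t size_a  = 1000;  // bytes reserved for tensor A (made-up number)
    const size_t overrun = 200;   // hypothetical out-of-bounds write past A's end
    for (size_t alignment : {128u, 4096u}) {
        size_t offset_b   = align_up(size_a, alignment);  // where tensor B would start
        bool   corrupts_b = size_a + overrun > offset_b;  // does the overrun reach into B?
        printf("alignment %4zu: tensor B at offset %4zu, overflow %s tensor B\n",
               alignment, offset_b, corrupts_b ? "corrupts" : "misses");
    }
    return 0;
}

With a 128-byte alignment the 200-byte overrun reaches tensor B; with 4096 it only hits padding, which matches the observation that the larger alignment hides, but does not fix, the corruption.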

ikawrakow (Contributor, Author) commented:

Thanks. I didn't see the reported VRAM increase. Yes, I agree with you that there is an issue somewhere that leads to buffers being overwritten. It all started with me being curious about what happens if the number of experts used in a MoE model is changed from the default. I did a Q2_K_S quantization which, from memory, used to fit in my 16 GB GPU, but now it did not. It did fit with 30 layers offloaded, but the result was totally wrong. After wasting quite a bit of time because I thought I had somehow broken Q2_K_S, I tried the smaller models that do fully fit. But coming back to varying the number of used experts: if I change the number of experts to 1, the Q4_K_S run crashes with an illegal memory access:

./perplexity -m models/m8x7/q4ks.gguf -f tests/wiki.test.raw -t 32 -ngl 18 --override-kv llama.expert_used_count=int:1
main: build = 1970 (fe54033b)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1706276370
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4080, compute capability 8.9, VMM: yes
llama_model_loader: loaded meta data with 25 key-value pairs and 995 tensors from junk.bin (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = hf
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 14
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q4_K:  769 tensors
llama_model_loader: - type q5_K:   64 tensors
llama_model_loader: - type q6_K:    1 tensors
validate_override: Using metadata override (  int) 'llama.expert_used_count' = 1
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 1
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_K - Small
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 24.91 GiB (4.58 BPW) 
llm_load_print_meta: general.name     = hf
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.76 MiB
llm_load_tensors: offloading 18 repeating layers to GPU
llm_load_tensors: offloaded 18/33 layers to GPU
llm_load_tensors:        CPU buffer size = 25503.87 MiB
llm_load_tensors:      CUDA0 buffer size = 14122.69 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =    28.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =    36.00 MiB
llama_new_context_with_model: KV self size  =   64.00 MiB, K (f16):   32.00 MiB, V (f16):   32.00 MiB
llama_new_context_with_model:  CUDA_Host input buffer size   =     9.01 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =    81.03 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    72.03 MiB
llama_new_context_with_model: graph splits (measure): 5

system_info: n_threads = 32 / 64 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
perplexity: tokenizing the input ..
perplexity: tokenization took 656.153 ms
perplexity: calculating perplexity over 642 chunks, batch_size=512
CUDA error: an illegal memory access was encountered
  current device: 0, in function ggml_cuda_mul_mat_id at /home/iwan/other/llama.cpp/ggml-cuda.cu:9806
  cudaMemcpyAsync(ids_host.data(), ids_dev, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
GGML_ASSERT: /home/iwan/other/llama.cpp/ggml-cuda.cu:237: !"CUDA error"
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
Aborted (core dumped)

slaren (Collaborator) commented Jan 26, 2024:

So this was caused by an underestimation of the allocation size of non-contiguous tensors. Normally, the only non-contiguous tensors are views, and these don't have to be allocated because they share the memory of their parent tensor. However, when ggml_backend_sched copies data between backends, it is possible that the tensor that needs to be copied is a view, and in that case an exact copy with the same memory layout is allocated in the destination backend. As a result, in some cases, such as MoE models that have some non-contiguous tensors (and previously with -nkvo as well, which caused views of the KV cache to be copied between backends), not enough memory was allocated for these tensors, resulting in a buffer overflow and memory corruption. This should be fixed in #5145.
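
As a rough illustration of the underestimation described above (this is not the ggml allocator, and the numbers are hypothetical, loosely based on the model's n_embd = 4096 and n_ff = 14336 from the logs): for a non-contiguous view whose rows are strided like its parent tensor, the bytes that must be reserved span up to the end of the last row, which can be far more than the packed product of the element counts.

// Rough sketch, not ggml code: packed size vs. the span a strided (non-contiguous)
// 2D view actually needs. Allocating only the packed size overflows the buffer.
#include <cstddef>
#include <cstdio>

struct toy_view {
    size_t ne0, ne1;   // elements per row, number of rows
    size_t type_size;  // bytes per element
    size_t nb1;        // row stride in bytes (== ne0 * type_size only if contiguous)
};

static size_t packed_size(const toy_view & t) {
    return t.ne0 * t.ne1 * t.type_size;               // naive estimate
}

static size_t required_size(const toy_view & t) {
    return (t.ne1 - 1) * t.nb1 + t.ne0 * t.type_size; // span up to the last element
}

int main() {
    // hypothetical view: 4096 floats per row, 512 rows, strided as if the parent
    // row had 14336 floats (numbers chosen only to resemble the model dimensions)
    toy_view v = { 4096, 512, sizeof(float), 14336 * sizeof(float) };
    printf("packed size:   %zu bytes\n", packed_size(v));
    printf("required size: %zu bytes\n", required_size(v));
    return 0;
}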
