dev notes
These are general free-form notes with pointers to good jumping-off points for understanding the llama.cpp codebase.
(`@<symbol>` is VSCode jump-to-symbol syntax, included for your convenience. A feature request has also been made for VSCode to support jumping to a file and symbol via `<file>:@<symbol>`.)
All of the GGUF structures can be found in gguf.c unless stated otherwise.
| GGUF Structure Of Interest | gguf.c reference | vscode search line |
|---|---|---|
| Overall File Structure | `struct gguf_context` | `@gguf_context` |
| File Header Structure | `struct gguf_header` | `@gguf_header` |
| Key Value Structure | `struct gguf_kv` | `@gguf_kv` |
| Tensor Info Structure | `struct gguf_tensor_info` | `@gguf_tensor_info` |
Please use this as an index, not as a canonical reference. The purpose of this table is to allow you to quickly locate the major elements of the GGUF file standard. A simplified sketch of these structures is shown below.
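For orientation, here is a hedged, simplified sketch of how these structures relate. The field layout is abridged and paraphrased from gguf.c; consult the actual definitions for the authoritative layout.

```c
// Simplified sketch (abridged from gguf.c; not the authoritative definitions).
#include <stdint.h>

struct gguf_str {
    uint64_t n;    // string length in bytes
    char   * data; // string contents (not necessarily NUL-terminated)
};

struct gguf_header {
    uint8_t  magic[4];  // "GGUF"
    uint32_t version;   // format version
    uint64_t n_tensors; // number of tensor info entries
    uint64_t n_kv;      // number of key/value metadata entries
};

struct gguf_context {
    struct gguf_header        header;
    struct gguf_kv          * kv;    // array of header.n_kv metadata entries
    struct gguf_tensor_info * infos; // array of header.n_tensors tensor infos
    // ... internal fields (alignment, offset, size, data) follow; see below
};
```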
| Header Name | GGUF Elements Of Interest | c name | c type | vscode search line |
|---|---|---|---|---|
| GGUF Header | Magic | `magic` | `uint8_t[4]` | `gguf.c:@gguf_header` |
| GGUF Header | Version | `version` | `uint32_t` | `gguf.c:@gguf_header` |
| GGUF Header | Tensor Count | `n_tensors` | `uint64_t` | `gguf.c:@gguf_header` |
| GGUF Header | Key Value Count | `n_kv` | `uint64_t` | `gguf.c:@gguf_header` |
| GGUF Context | Key Value Array | `kv` | `gguf_kv *` | `gguf.c:@gguf_context` |
| GGUF Context | Tensor Info Array | `infos` | `gguf_tensor_info *` | `gguf.c:@gguf_context` |
| Key Value Entry | Key | `gguf_kv.key` | `gguf_str` | `gguf.c:@gguf_kv` |
| Key Value Entry | Type | `gguf_kv.type` | `gguf_type` | `gguf.c:@gguf_kv` |
| Key Value Entry | Value | `gguf_kv.value` | `gguf_value` | `gguf.c:@gguf_kv` |
| Tensor Info Entry | Name | `gguf_tensor_info.name` | `gguf_str` | `gguf.c:@gguf_tensor_info` |
| Tensor Info Entry | Tensor shape dimension count | `gguf_tensor_info.n_dims` | `uint32_t` | `gguf.c:@gguf_tensor_info` |
| Tensor Info Entry | Tensor shape sizing array | `gguf_tensor_info.ne` | `uint64_t[GGML_MAX_DIMS]` | `gguf.c:@gguf_tensor_info` |
| Tensor Info Entry | Tensor Encoding Scheme / Strategy | `gguf_tensor_info.type` | `ggml_type` | `gguf.c:@gguf_tensor_info` |
| Tensor Info Entry | Offset from start of 'data' | `gguf_tensor_info.offset` | `uint64_t` | `gguf.c:@gguf_tensor_info` |
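As a concrete illustration of the header fields above, here is a minimal, hedged sketch of reading them directly from a .gguf file. It assumes a little-endian host and skips most error handling; this is illustrative, not the gguf.c loader itself.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

// Read and print the fixed-size GGUF header fields (illustrative only).
int read_gguf_header(const char * fname) {
    FILE * f = fopen(fname, "rb");
    if (!f) return 1;

    uint8_t  magic[4];
    uint32_t version;
    uint64_t n_tensors, n_kv;

    size_t ok = 0;
    ok += fread(magic,      sizeof(magic),     1, f); // must be "GGUF"
    ok += fread(&version,   sizeof(version),   1, f);
    ok += fread(&n_tensors, sizeof(n_tensors), 1, f);
    ok += fread(&n_kv,      sizeof(n_kv),      1, f);
    fclose(f);

    if (ok != 4 || memcmp(magic, "GGUF", 4) != 0) return 1;

    printf("version=%u n_tensors=%llu n_kv=%llu\n",
           version, (unsigned long long) n_tensors, (unsigned long long) n_kv);
    return 0;
}
```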
Also note that the following values are not actually part of the GGUF file format; they exist for internal usage and are calculated during model loading. In other words, they are there for the writing/reading API.
| Header Name | GGML Elements Of Interest | c name | c type | vscode search line |
|---|---|---|---|---|
| GGUF Context | Alignment | `alignment` | `size_t` | `gguf.c:@gguf_context` |
| GGUF Context | Offset Of 'Data' From Beginning Of File | `offset` | `size_t` | `gguf.c:@gguf_context` |
| GGUF Context | Size Of 'Data' In Bytes | `size` | `size_t` | `gguf.c:@gguf_context` |
| Tensor Info Entry | Memory-mapped pointer to the tensor data | `data` | `void *` | `gguf.c:@gguf_tensor_info` |
| Tensor Info Entry | Memory-mapped size of the tensor data in bytes | `size` | `size_t` | `gguf.c:@gguf_tensor_info` |
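A hedged sketch of how `alignment` relates to the internal `offset`: the data section starts at the end of the tensor-info section rounded up to the alignment (commonly 32 bytes, overridable via the `general.alignment` KV), using the same power-of-two padding as ggml's `GGML_PAD` macro.

```c
#include <stddef.h>

// Round offs up to the next multiple of alignment
// (alignment must be a power of two). Mirrors ggml's GGML_PAD macro.
static size_t pad_to(size_t offs, size_t alignment) {
    return (offs + alignment - 1) & ~(alignment - 1);
}

// e.g. if the tensor-info section ends at byte 1234 and alignment is 32,
// the 'data' section begins at pad_to(1234, 32) == 1248; each tensor's
// own 'offset' is then relative to that data start and likewise aligned.
```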
There is a C++ example program in the repository that performs a test GGUF write and read.
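For reference, here is a hedged usage sketch of the public read API that such a program exercises (the declarations live in ggml.h, or in gguf.h in newer trees; exact integer widths of the accessors vary between ggml versions).

```c
#include <stdio.h>
#include "ggml.h"

// Open a GGUF file and print its metadata keys and tensor names.
int dump_gguf(const char * fname) {
    struct gguf_init_params params = {
        /*.no_alloc =*/ true, // read metadata only; do not allocate tensor data
        /*.ctx      =*/ NULL,
    };
    struct gguf_context * ctx = gguf_init_from_file(fname, params);
    if (!ctx) return 1;

    const int n_kv      = gguf_get_n_kv(ctx);
    const int n_tensors = gguf_get_n_tensors(ctx);
    printf("version: %d, kv pairs: %d, tensors: %d\n",
           gguf_get_version(ctx), n_kv, n_tensors);

    for (int i = 0; i < n_kv; ++i) {
        printf("kv[%d]: %s\n", i, gguf_get_key(ctx, i));
    }
    for (int i = 0; i < n_tensors; ++i) {
        printf("tensor[%d]: %s\n", i, gguf_get_tensor_name(ctx, i));
    }

    gguf_free(ctx);
    return 0;
}
```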
In ggml.c, refer to `static const ggml_type_traits_t type_traits[GGML_TYPE_COUNT]`, which is a lookup table containing enough information to deduce the size of a tensor layer in bytes, given an offset and element dimension count. One good entry is shown below (annotated for clarity):
```c
static const ggml_type_traits_t type_traits[GGML_TYPE_COUNT] = {
    ...
    [GGML_TYPE_F16] = {
        // General specs about this tensor encoding scheme
        .type_name            = "f16",
        .blck_size            = 1,
        .type_size            = sizeof(ggml_fp16_t),
        .is_quantized         = false,
        // C function methods for interpreting the blocks
        .to_float             = (ggml_to_float_t)   ggml_fp16_to_fp32_row,
        .from_float           = (ggml_from_float_t) ggml_fp32_to_fp16_row,
        .from_float_reference = (ggml_from_float_t) ggml_fp32_to_fp16_row,
        // C function methods plus extra specs required for dot product handling
        .vec_dot              = (ggml_vec_dot_t)    ggml_vec_dot_f16,
        .vec_dot_type         = GGML_TYPE_F16,
        .nrows                = 1,
    },
    ...
};
```
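To make the sizing claim concrete, here is a hedged sketch of the arithmetic (essentially what `ggml_row_size()` / `ggml_nbytes()` compute in ggml.c): elements are grouped into blocks of `blck_size` elements, and each block occupies `type_size` bytes.

```c
#include <stddef.h>
#include <stdint.h>

// Bytes in one row of ne0 elements for a given encoding
// (assumes ne0 is a multiple of blck_size, as ggml requires).
static size_t row_size(size_t type_size, int64_t blck_size, int64_t ne0) {
    return type_size * (size_t)(ne0 / blck_size);
}

// e.g. a 4096 x 32 F16 layer: blck_size = 1, type_size = 2
//  -> 8192 bytes per row, 8192 * 32 = 262144 bytes total
```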
So basically, these traits are used in various places to give the developers a sense of the tensor encoding spec and sizing, as you can see with the getter methods below. (Note: the other functions directly using these values within ggml.c were not fully traced; the few in this graph are just for illustrative purposes.)
```mermaid
graph LR;
    type_traits{"type_traits[]\n Lookup Table"}
    type_traits-->type_name
    type_traits-->blck_size
    type_traits-->type_size
    type_traits-->is_quantized
    %%type_traits-->to_float
    %%type_traits-->from_float
    %%type_traits-->from_float_reference
    %%type_traits-->vec_dot
    %%type_traits-->vec_dot_type
    %%type_traits-->nrows
    subgraph getters["getter functions / methods"]
        ggml_type_name(["ggml_type_name()"])
        ggml_blck_size(["ggml_blck_size()"])
        ggml_type_size(["ggml_type_size()"])
        ggml_is_quantized(["ggml_is_quantized()"])
    end
    type_name --> ggml_type_name
    blck_size --> ggml_blck_size
    type_size --> ggml_type_size
    is_quantized --> ggml_is_quantized
    blck_size --> ggml_type_sizef(["ggml_type_sizef()"])
    blck_size --> ggml_quantize_chunk(["ggml_quantize_chunk()"])
```
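A small hedged snippet showing those getters in use (all four are public accessors declared in ggml.h; the printf format details are illustrative):

```c
#include <stdio.h>
#include "ggml.h"

// Print the encoding specs that the type_traits[] getters expose for one type.
void print_type_specs(enum ggml_type type) {
    printf("%s: blck_size=%lld type_size=%zu quantized=%d\n",
           ggml_type_name(type),
           (long long) ggml_blck_size(type),
           ggml_type_size(type),
           (int) ggml_is_quantized(type));
}

// Usage: print_type_specs(GGML_TYPE_F16);
//  -> "f16: blck_size=1 type_size=2 quantized=0"
```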
This is how the LUT is used to convert a tensor data area to/from float for processing. (However, these methods are not used on the GPU, if I understand correctly, as those data areas are processed directly using GPU-specific instruction code. This is also why the tensor elements have to be packed in a certain way.)
The analysis below covers only connections within ggml.c:
```mermaid
graph LR;
    type_traits{"type_traits[]\n Lookup Table"}
    %%type_traits-->type_name
    %%type_traits-->blck_size
    %%type_traits-->type_size
    %%type_traits-->is_quantized
    type_traits-->to_float
    type_traits-->from_float
    type_traits-->from_float_reference
    %%type_traits-->vec_dot
    %%type_traits-->vec_dot_type
    %%type_traits-->nrows
    to_float --> ggml_compute_forward_add_q_f32(["ggml_compute_forward_add_q_f32()"])
    to_float --> ggml_compute_forward_out_prod_q_f32(["ggml_compute_forward_out_prod_q_f32()"])
    to_float --> ggml_compute_forward_get_rows_q(["ggml_compute_forward_get_rows_q()"])
    to_float --> ggml_compute_forward_flash_attn_ext_f16(["ggml_compute_forward_flash_attn_ext_f16()"])
    from_float --> ggml_compute_forward_dup_f16(["ggml_compute_forward_dup_f16()"])
    from_float --> ggml_compute_forward_dup_bf16(["ggml_compute_forward_dup_bf16()"])
    from_float --> ggml_compute_forward_dup_f32(["ggml_compute_forward_dup_f32()"])
    from_float --> ggml_compute_forward_add_q_f32
    from_float --> ggml_compute_forward_mul_mat(["ggml_compute_forward_mul_mat()"])
    from_float --> ggml_compute_forward_mul_mat_id(["ggml_compute_forward_mul_mat_id()"])
    from_float --> ggml_compute_forward_flash_attn_ext_f16
```
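As a hedged sketch of the `to_float` pattern those compute functions follow, here is how one row of a quantized tensor can be dequantized into a float buffer. It uses `ggml_internal_get_type_traits()`, the accessor exposed in ggml.h trees of this vintage (newer trees expose `ggml_get_type_traits()` instead).

```c
#include <stdint.h>
#include "ggml.h"

// Dequantize one row (ne0 elements) of a tensor into f32 via the traits LUT.
void dequantize_row(enum ggml_type type, const void * src_row, float * dst, int64_t ne0) {
    const ggml_type_traits_t traits = ggml_internal_get_type_traits(type);
    traits.to_float(src_row, dst, ne0); // converts ne0 elements to float
}
```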