
Releases: ggerganov/llama.cpp

b2419

14 Mar 14:49
68265eb
embedding : print all resulting embeddings (#899)

b2418

14 Mar 13:08
381da2d
metal : build metallib + fix embed path (#6015)

* metal : build metallib + fix embed path

ggml-ci

* metal : fix embed build + update library load logic

ggml-ci

* metal : fix embedded library build

ggml-ci

* ci : fix iOS builds to use embedded library

b2417

14 Mar 08:57
0fd6c1f
embedding : print cosine similarity (#899)
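
For reference, the cosine similarity printed here is the dot product of two embedding vectors divided by the product of their norms. A minimal standalone C++ sketch (illustrative only, not the actual code from the `embedding` example):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// cos(a, b) = (a . b) / (|a| * |b|)
static float cosine_similarity(const std::vector<float> & a, const std::vector<float> & b) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    if (na == 0.0 || nb == 0.0) {
        return 0.0f; // undefined for zero vectors; report 0 here
    }
    return (float)(dot / (std::sqrt(na) * std::sqrt(nb)));
}

int main() {
    const std::vector<float> a = {0.10f, 0.20f, 0.30f};
    const std::vector<float> b = {0.10f, 0.20f, 0.25f};
    std::printf("cosine similarity: %.4f\n", cosine_similarity(a, b));
    return 0;
}
```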

b2414

13 Mar 19:47
4636283
grammar : handle missing "root" node (#6004)
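
GBNF grammars in llama.cpp are anchored at a rule named `root`; this fix makes a grammar that never defines one fail cleanly instead of misbehaving. A hedged C++ sketch of that kind of guard (illustrative only, not the actual parser code):

```cpp
#include <cstdint>
#include <map>
#include <stdexcept>
#include <string>

// After parsing a grammar, verify that the required "root" rule exists
// before trying to use it as the start symbol.
static void check_root(const std::map<std::string, uint32_t> & symbol_ids) {
    if (symbol_ids.find("root") == symbol_ids.end()) {
        throw std::runtime_error("grammar error: missing 'root' rule");
    }
}
```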

b2413

13 Mar 19:45
f30ea47
llama : add pipeline parallelism support (#6017)

* llama : add pipeline parallelism support for batch processing with multiple CUDA GPUs

ggml-ci

* server : add -ub, --ubatch-size parameter (see the sketch after this entry)

* fix server embedding test

* llama : fix Mamba inference for pipeline parallelism

Tested to work correctly with both `main` and `parallel` examples.

* llama : limit max batch size to n_batch

* add LLAMA_SCHED_MAX_COPIES to configure the number of input copies for pipeline parallelism; the default is increased to 4 (from 2)

changing this value may improve performance for some systems, but increases memory usage

* fix hip build

* fix sycl build (disable cpy_tensor_async)

* fix hip build

* llama : limit n_batch and n_ubatch to n_ctx during context creation

* llama : fix norm backend

* batched-bench : sync after decode

* swiftui : sync after decode

* ggml : allow ggml_get_rows to use multiple threads if they are available

* check n_ubatch >= n_tokens with non-causal attention

* llama : do not limit n_batch to n_ctx with non-causal attn

* server : construct batch with size of llama_n_batch

* ggml_backend_cpu_graph_compute : fix return value when alloc fails

* llama : better n_batch and n_ubatch comment

* fix merge

* small fix

* reduce default n_batch to 2048

---------

Co-authored-by: Francis Couture-Harpin <git@compilade.net>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
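
This change splits batching into a logical batch size (`n_batch`, the most tokens a single `llama_decode` call accepts) and a physical micro-batch size (`n_ubatch`, what the new `-ub, --ubatch-size` flag controls). A hedged sketch of configuring both, assuming the `llama_context_params` fields this PR introduces (check the `llama.h` of your build for the exact API):

```cpp
#include "llama.h"

// Hedged sketch: n_batch is the logical batch limit per llama_decode call,
// n_ubatch the physical micro-batch actually run through the compute graph.
static llama_context * make_context(llama_model * model) {
    llama_context_params params = llama_context_default_params();
    params.n_ctx    = 4096; // context size
    params.n_batch  = 2048; // logical batch (matches the new default)
    params.n_ubatch = 512;  // physical micro-batch (-ub on the server)
    return llama_new_context_with_model(model, params);
}
```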

b2412

13 Mar 16:43
d8fd0cc
test-backend-ops : skip CPU backend by default (#6028)

b2411

13 Mar 16:42
b3d9786
Update get version (#6025)

b2410

13 Mar 13:34
99b71c0
Server: Use multi-task for embeddings endpoint (#6001)

* use multi-task for the embeddings endpoint

* specify types

* remove redundant {"n_predict", 0}

b2409

12 Mar 21:36
306d34b
ci : remove tidy-review (#6021)

b2408

12 Mar 16:39
8030da7
ggml : reuse quantum structs across backends (#5943)

* ggml : reuse quant blocks across backends

ggml-ci

* ggml : define helper constants only for CUDA and SYCL

ggml-ci

* ggml : define helper quantum constants for SYCL

ggml-ci
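
For context, the blocks being shared are ggml's per-type quantization structs, which each backend previously defined on its own. A simplified sketch of the Q4_0 block layout (`uint16_t` stands in for ggml's fp16 scale type so the sketch compiles standalone):

```cpp
#include <cstdint>

#define QK4_0 32 // weights per Q4_0 block

// Simplified sketch of the Q4_0 quant block: one fp16 scale plus 32
// weights packed as 4-bit nibbles. In ggml the scale is a ggml_fp16_t.
typedef struct {
    uint16_t d;             // per-block scale (fp16 in ggml)
    uint8_t  qs[QK4_0 / 2]; // 32 x 4-bit quantized weights
} block_q4_0;

static_assert(sizeof(block_q4_0) == sizeof(uint16_t) + QK4_0 / 2,
              "block_q4_0 must stay tightly packed");
```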