Issues: ggerganov/llama.cpp
- #11623 Misc. bug: Webui: Default light theme code blocks are not visible [bug-unconfirmed] (opened Feb 3, 2025 by mashdragon)
- #11612 Compile bug: Not compilable with MACOSX_DEPLOYMENT_TARGET < 10.15 [bug-unconfirmed] (opened Feb 3, 2025 by uwu-420)
- #11611 server : add support for file upload to the Web UI [enhancement, good first issue, help wanted, server/webui] (opened Feb 3, 2025 by ggerganov)
- #11603 Feature Request: Add TPU/Hardware Accelerator Support (e.g., Google Coral, Hailo) to llama.cpp [enhancement] (opened Feb 2, 2025 by FixeQyt)
- #11598 Misc. bug: Vulkan premature out-of-memory exception on AMD Instinct MI60 [bug-unconfirmed] (opened Feb 2, 2025 by dazipe)
- #11591 Eval bug: trivial grammar crashes (DeepSeek R1 Distill Llama 8B) [bug] (opened Feb 2, 2025 by ochafik)
- #11587 CUDA prompt processing performance is gimped by ~5% on Ampere or newer with GGML_NATIVE=OFF (opened Feb 2, 2025 by JohannesGaessler)
- #11586 Misc. bug: CTRL-ENTER not visibly shown in web UI prompt [bug-unconfirmed] (opened Feb 2, 2025 by gnusupport)
- #11584 Feature Request: allow running on CPU despite backend initialization failure [enhancement] (opened Feb 1, 2025 by leok7v)
- #11579 Feature Request: Ship llama.cpp binaries in AppImage format [enhancement] (opened Feb 1, 2025 by rgerganov)
- #11578 Misc. bug: Thread [N]: EXC_BAD_ACCESS (code=1, address=0x11c) [bug-unconfirmed] (opened Feb 1, 2025 by maksymmatviievskyi)
- #11577 Feature Request: resize an existing context [enhancement] (opened Feb 1, 2025 by giladgd)
- #11565 Feature Request: Web UI: Allow typing in textarea during generation [enhancement] (opened Feb 1, 2025 by mashdragon)
- #11563 Misc. bug: llama-server web interface doesn't work in Firefox [bug-unconfirmed] (opened Jan 31, 2025 by ppearson)
- #11562 Compile bug: Fails to compile with undefined references in libggml.so [bug-unconfirmed] (opened Jan 31, 2025 by sjwhitak)
- #11561 Misc. bug: File "train-text-from-scratch" missing [bug-unconfirmed] (opened Jan 31, 2025 by mikedesu)
- #11560 Misc. bug: Quantization of DeepSeek R1 Qwen models fails when using K quants [bug-unconfirmed] (opened Jan 31, 2025 by ramanrewati)
- #11559 Misc. bug: Vulkan Q4_K_M inference speed degradation [bug-unconfirmed] (opened Jan 31, 2025 by neilmehta24)
- #11544 Misc. bug: server API endpoint /completion ignores grammar parameter [bug-unconfirmed] (opened Jan 31, 2025 by norteo)
- #11541 Feature Request: support converting Qwen2.5-VL 7B/72B models to GGUF [enhancement] (opened Jan 31, 2025 by sooit)
- #11536 Feature Request: Prefix assistant answer [enhancement] (opened Jan 31, 2025 by 99991)