Issues: abetlen/llama-cpp-python
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name='nul' mode='w' encoding='cp932'>
#1828
opened Nov 11, 2024 by AkiraRy
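A minimal sketch of how this warning pattern can arise (not the library's actual code): a write handle to os.devnull, which is the file named 'nul' on Windows, is opened and then dropped without being closed, so the interpreter reports it at garbage collection.

```python
import gc
import os
import warnings

warnings.simplefilter("always", ResourceWarning)

def suppress_output():
    # Hypothetical stand-in for a verbose=False-style output redirect:
    # the devnull handle is returned but never closed.
    sink = open(os.devnull, "w")
    return sink

suppress_output()
gc.collect()  # the dropped handle is finalized here, emitting the ResourceWarning
```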
AttributeError: function 'llama_sampler_init_tail_free' not found after compiling llama.cpp with hipBLAS
#1818
opened Oct 30, 2024 by Micromanner
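Context sketch: the low-level bindings resolve C symbols from the compiled library via ctypes, and a libllama built from a llama.cpp revision that no longer ships tail-free sampling will not export this symbol, so the lookup fails. The library name and path below are placeholders.

```python
import ctypes

lib = ctypes.CDLL("libllama.so")  # platform-specific name/path (assumption)
try:
    fn = lib.llama_sampler_init_tail_free  # raises AttributeError if the symbol is absent
except AttributeError:
    print("symbol missing: bindings and libllama come from different llama.cpp revisions")
```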
Setting seed to -1 (random) or using default LLAMA_DEFAULT_SEED generates a deterministic reply chain
#1809
opened Oct 24, 2024 by m-from-space
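A hedged repro shape for this report, with a placeholder model path: two generations under seed=-1 would be expected to differ at temperature > 0, but per the issue they come out identical.

```python
from llama_cpp import Llama

llm = Llama(model_path="./model.gguf", seed=-1, verbose=False)  # -1 documented as "random"
a = llm("Tell me a joke.", max_tokens=32, temperature=0.8)["choices"][0]["text"]
b = llm("Tell me a joke.", max_tokens=32, temperature=0.8)["choices"][0]["text"]
print(a == b)  # per the issue, True even though the seed should be random
```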
Assistant message with tool_calls and without content raises an error
#1805
opened Oct 21, 2024 by feloy
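Hedged sketch of the failing input shape from the report: an assistant turn carrying tool_calls but no content field, which is legal in the OpenAI chat format. Model path and tool payloads are placeholders.

```python
from llama_cpp import Llama

llm = Llama(model_path="./model.gguf")  # placeholder path
messages = [
    {"role": "user", "content": "What is the weather in Paris?"},
    {
        "role": "assistant",  # note: no "content" key at all
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
        }],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": '{"temp_c": 18}'},
]
llm.create_chat_completion(messages=messages)  # reportedly raises on this input
```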
Low-level examples broken after feat: Update sampling API for llama.cpp (#1742)
#1803
opened Oct 20, 2024 by mite51
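For context, a rough sketch of the sampler-chain style that replaced the old per-call llama_sample_* helpers in that change; the function names are assumed from llama.cpp's llama.h and may differ in a given bindings version.

```python
import llama_cpp

# Build a sampler chain instead of calling individual sampling helpers.
params = llama_cpp.llama_sampler_chain_default_params()
chain = llama_cpp.llama_sampler_chain_init(params)
llama_cpp.llama_sampler_chain_add(chain, llama_cpp.llama_sampler_init_greedy())
# ... after llama_decode(ctx, batch):
# token = llama_cpp.llama_sampler_sample(chain, ctx, -1)
```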
Llama.from_pretrained should work with HF_HUB_OFFLINE=1
#1801
opened Oct 16, 2024 by davidgilbertson
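Hedged sketch of the expected behavior: with HF_HUB_OFFLINE=1 set and the file already present in the local Hugging Face cache, from_pretrained should resolve the cached copy instead of attempting a network call. Repo ID and filename are placeholders.

```python
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # must be set before the hub is queried

from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="someorg/some-model-GGUF",   # placeholder repo
    filename="some-model-q8_0.gguf",     # placeholder file, assumed cached locally
)
```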
server: chat completions returns wrong logprobs model
#1787
opened Oct 6, 2024 by domdomegg
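Hedged sketch of the kind of request involved: an OpenAI-format chat completion against a locally running llama-cpp-python server with logprobs requested. Host, port, and model name are placeholders.

```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": "Hi"}],
        "logprobs": True,     # OpenAI-style chat logprobs
        "top_logprobs": 1,
    },
)
print(resp.json()["choices"][0].get("logprobs"))  # shape disputed in the report
```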
Tool parser cannot parse tool call strings from Qwen2.5
#1784
opened Oct 5, 2024 by hpx502766238
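Context: Qwen2.5 emits Hermes-style tool calls wrapped in <tool_call> tags. A minimal extraction sketch (not the library's actual parser) to illustrate the format in question:

```python
import json
import re

# Example of the model output format the report is about.
output = '<tool_call>\n{"name": "get_weather", "arguments": {"city": "Paris"}}\n</tool_call>'

# Pull the JSON payload out of each <tool_call>...</tool_call> span.
calls = [json.loads(m) for m in
         re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", output, re.S)]
print(calls[0]["name"])  # -> get_weather
```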
Why is this not working for the current release? Unable to use GPU
#1781
opened Oct 2, 2024 by AnirudhJM24
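The usual checklist for this class of report, sketched with a placeholder model path: the wheel must be built against a GPU backend (for example, CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --no-cache-dir), and layers must be offloaded explicitly.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # placeholder
    n_gpu_layers=-1,            # -1 offloads all layers to the GPU
    verbose=True,               # startup log shows whether a GPU backend loaded
)
```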