Add steps to install from source for llama.cpp #1396
Conversation
oooh nice
@@ -37,9 +37,21 @@ You can quickly start a locally running chat-ui & LLM text-generation server tha

**Step 1 (Start llama.cpp server):**

Install llama.cpp w/ brew (for Mac):
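For context, a minimal sketch of what this brew-based step likely amounts to; the model path and port below are placeholders, not values taken from this PR:

```bash
# Install llama.cpp via Homebrew (the formula also exists on Linux,
# but provides a CPU-only build there)
brew install llama.cpp

# Start the bundled HTTP server with a local GGUF model
# (model path and port are placeholders)
llama-server -m ./models/your-model.gguf --port 8080
```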
brew also exists for Linux, no?
It does; however, it only installs a CPU-only build, which would be quite slow. So the recommended way would still be to clone + make, imo.
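A minimal sketch of the clone + make route recommended above, assuming a Unix-like system with git and a C/C++ toolchain; the GPU build flag in the comment is an illustrative assumption, not something specified in this thread:

```bash
# Clone the llama.cpp repository and build from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# The default target produces a CPU build
make

# GPU-accelerated builds need extra flags that vary by platform and
# version; see docs/build.md for the current options, e.g. (assumption):
# make GGML_CUDA=1
```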
CI is 🟢
I just found that the docs at llama.cpp have some really useful links for how to install on my platform. Maybe you could add a link? https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md
Yes! Great suggestion @dnouri - I added the suggestions here in line 47:
Looks great, thanks!
* Add steps to install from source for llama.cpp
* Formatting.
ref: https://x.com/dnouri/status/1821197502760030686?s=46