
termux build issues #389

Closed · SubhranshuSharma opened this issue Jun 17, 2023 · 24 comments

@SubhranshuSharma
Contributor

SubhranshuSharma commented Jun 17, 2023

pip install llama-cpp-python doesn't work on termux and gives this error; even if ninja is installed it gives the same error, so it might be a hardcoded absolute path problem.

I have tried a portable venv setup on my Linux machine, but running it on termux gave a dependency-not-found error, so maybe some paths in the source code were still absolute (even after the correction attempts in the blog post).

I tried using pyinstaller, but it doesn't support this library yet; same missing-dependency issue.

Another option is to use docker on termux, but that requires root privileges and a custom kernel.

I have tried to look into the source code of this repo but don't know where to start. Any hint on where to begin?

The original llama.cpp library works fine on termux but doesn't have a server inbuilt, and doesn't work well unless you use bash.

Should I make a pull request editing the README, linking to the docker workaround for rooted phones?

@aseok

aseok commented Jun 18, 2023

I resolved the ninja installation with termux-chroot. I also added the following to the project's CMakeLists.txt:
set(CMAKE_C_COMPILER "/data/data/com.termux/files/usr/bin/clang")
set(CMAKE_CXX_COMPILER "/data/data/com.termux/files/usr/bin/clang++")
And passed the OpenCL args:
CMAKE_ARGS="-DLLAMA_CLBLAST=on -DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
Here's the output error:
../CMake-src/Utilities/cmlibarchive/libarchive/archive.h:101:10: fatal error: 'android_lf.h' file not found
#include "android_lf.h"
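
(For reference, the same compiler override can usually be passed through CMAKE_ARGS instead of editing the project's CMakeLists.txt; a sketch only, assuming the standard Termux prefix:)

# untested sketch: set the compilers on the command line rather than patching CMakeLists.txt
CMAKE_ARGS="-DCMAKE_C_COMPILER=/data/data/com.termux/files/usr/bin/clang -DCMAKE_CXX_COMPILER=/data/data/com.termux/files/usr/bin/clang++ -DLLAMA_CLBLAST=on -DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python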

gjmulder changed the title from termux support to termux build issues Jun 18, 2023
gjmulder added the build label Jun 18, 2023
@abetlen
Owner

abetlen commented Jun 18, 2023

Looks to be related to scikit-build/cmake-python-distributions#223

The longer term solution seems to be migrating the project to scikit-build-core (something I'm in the process of doing). However, someone in that thread mentioned they were able to get their example to work by building cmake and ninja from source on the Android device.
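
For anyone attempting that workaround, here is a rough sketch of what building the two tools from source might look like (these are the upstream projects' documented bootstrap steps, untested on Termux; $PREFIX is the Termux prefix):

# ninja: bootstrap builds it with only a C++ compiler and Python
git clone https://github.com/ninja-build/ninja
cd ninja && ./configure.py --bootstrap && cp ninja $PREFIX/bin/

# cmake: bootstrap + make (slow on a phone)
git clone https://github.com/Kitware/CMake
cd CMake && ./bootstrap --prefix=$PREFIX && make && make install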

@aseok

aseok commented Jun 18, 2023

Also tried the following:
cmake .. -DLLAMA_CLBLAST=on -DLLAMA_OPENBLAS=on
make
make install

Here's the output:
CMake Error: Target DependInfo.cmake file not found
[100%] Built target run
Install the project...
-- Install configuration: "RelWithDebInfo"
-- Up-to-date: /usr/local/llama_cpp/libllama.so

Was the install successful?

Python:

>>> import llama_cpp
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'llama_cpp'

@SubhranshuSharma
Contributor Author

While trying to make it work on termux I ended up writing a small llama.cpp wrapper myself; instructions for using it are here. At the time, llama.cpp cache files could be read but not generated/edited in termux, but that problem is sorted now, so set cache_is_supported_trust_me_bro=True in discord/termux/settings.py to use it.

These highlighted lines are all you need to make a wrapper of your own.
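
For context, a wrapper of that kind presumably boils down to shelling out to the compiled main binary; a minimal sketch (the binary path, model path, and flags here are illustrative assumptions, not the actual code from that repo):

# minimal subprocess wrapper around a locally built llama.cpp (illustrative paths)
import subprocess

def ask_llama(prompt,
              binary="/data/data/com.termux/files/home/llama.cpp/main",
              model="/data/data/com.termux/files/home/models/7B/ggml-model-q4_0.bin",
              n_predict=128):
    # run llama.cpp once per prompt and return whatever it printed
    result = subprocess.run(
        [binary, "-m", model, "-p", prompt, "-n", str(n_predict)],
        capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(ask_llama("Q: What is the capital of France? A:"))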

@aseok

aseok commented Jun 29, 2023 via email

@SubhranshuSharma
Contributor Author

SubhranshuSharma commented Jun 29, 2023

Is it compatible with babyagi and other stuff that needs llama python wrapper?

Nope, my wrapper is tightly integrated with my use case and isn't a separate installable package. I was thinking of doing so, but it would be too much of a headache.

@abetlen
Owner

abetlen commented Jul 8, 2023

@SubhranshuSharma so for your wrapper you just built libllama.so separately, correct? Did you just build llama.cpp with the Makefile? Maybe one solution is to avoid building llama.cpp on install by setting an environment variable / path to a pre-built library.

@SubhranshuSharma
Contributor Author

SubhranshuSharma commented Jul 8, 2023

@abetlen Yeah, I run git clone https://github.com/ggerganov/llama.cpp ~/llama.cpp;cd ~/llama.cpp;make

I suggest trying to build llama.cpp but not crashing if you can't: just check whether the main binary exists at a predefined/user-supplied path, and if not, use if os.system('git clone https://github.com/ggerganov/llama.cpp ~/llama.cpp;cd ~/llama.cpp;make')!=0:print('windows is for pussies, install git and cmake');exit() to clone and build it. Way simpler and easier to maintain.

@Freed-Wu

The longer term solution seems to be migrating the project to scikit-build-core

Any progress on it?

@SubhranshuSharma
Contributor Author

Any progress on it?

seems related

@abetlen
Owner

abetlen commented Jul 23, 2023

@SubhranshuSharma @Freed-Wu implementing this in #499, but I still have some issues with macOS.

@SubhranshuSharma
Contributor Author

SubhranshuSharma commented Aug 9, 2023

Unrelated question: is there any way of storing cache files on disk for quick reboot in the API?

implementing this in #499, but I still have some issues with macOS.

I would still suggest treating this repo and llama.cpp as different things and not letting a failure in one stop the other (for as long as that's possible). Make the compilation a try/except: if the compile fails, require the user to set an environment variable pointing to llama.cpp. I would also suggest giving that environment variable first priority, i.e. if it is set, it is used before anything else. That way the project will be more robust, letting people find workarounds to issues that originate in llama.cpp. A rough sketch of that load order follows.
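
A rough sketch of that load order (not the library's actual code; the LLAMA_CPP_LIB name matches the variable mentioned later in this thread, everything else is illustrative):

# sketch: honour a user-supplied library path first, only then try to build llama.cpp
import ctypes, os, subprocess

def load_libllama():
    lib_path = os.environ.get("LLAMA_CPP_LIB")  # user override always wins
    if lib_path is None:
        lib_path = os.path.expanduser("~/llama.cpp/libllama.so")
        if not os.path.exists(lib_path):
            # best-effort build; a failure here should not take the whole package down
            subprocess.run(
                "git clone https://github.com/ggerganov/llama.cpp ~/llama.cpp "
                "&& cd ~/llama.cpp && make libllama.so",
                shell=True, check=False)
    return ctypes.CDLL(lib_path)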

@abetlen
Owner

abetlen commented Sep 13, 2023

@SubhranshuSharma sorry for this very late reply, but I finally merged #499.

You can now run CMAKE_ARGS="-DLLAMA_BUILD=OFF" pip install llama-cpp-python to avoid building from source, then just set LLAMA_CPP_LIB to the path of the shared library to use a pre-built library.
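
Put together, the whole workflow looks roughly like this (paths are illustrative):

# build libllama.so once, outside of pip
git clone https://github.com/ggerganov/llama.cpp ~/llama.cpp
cmake -S ~/llama.cpp -B ~/llama.cpp/build -DBUILD_SHARED_LIBS=ON
cmake --build ~/llama.cpp/build

# install the Python package without building llama.cpp, then point it at the library
CMAKE_ARGS="-DLLAMA_BUILD=OFF" pip install llama-cpp-python
export LLAMA_CPP_LIB=$HOME/llama.cpp/build/libllama.so
python -c 'import llama_cpp'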

@Freed-Wu

Freed-Wu commented Oct 1, 2023

So can we close this issue now?

@bishwenduk029

bishwenduk029 commented Oct 8, 2023

@Freed-Wu, @abetlen, @SubhranshuSharma
I also tried packaging a Python binary with llama-cpp-python, but when I try to run the binary standalone it fails with the error below:

INFO:root:Loading Model: TheBloke/synthia-7b-v1.3.Q5_K_M.gguf, on: cpu
INFO:root:This action can take a few minutes!
INFO:root:synthia-7b-v1.3.Q5_K_M.gguf
INFO:root:Using Llamacpp for GGUF/GGML quantized models
ERROR:root:Exception occurred: Shared library with base name 'llama' not found

But when I simply run python3 main.py it works fine.

For pyinstaller, do I need to generate the shared library separately and then use LLAMA_CPP_LIB?

But I am using termux, plain terminal.

@SubhranshuSharma
Contributor Author

You can now run CMAKE_ARGS="-DLLAMA_BUILD=OFF" pip install llama-cpp-python to avoid building from source, then just set LLAMA_CPP_LIB to the path of the shared library to use a pre-built library.

this is the error I still get in termux when running CMAKE_ARGS="-DLLAMA_BUILD=OFF" pip install llama-cpp-python

Am I missing something?

@romanovj

romanovj commented Oct 9, 2023

@SubhranshuSharma if you want to build the cmake module, see termux/termux-packages#10065

Or build without it:

  1. CMAKE_ARGS="-DLLAMA_BUILD=OFF" pip install llama-cpp-python --no-build-isolation -v

  2. check the error message and install the missing modules (typical candidates are sketched below)

repeat steps 1-2
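
For step 2: with --no-build-isolation the package's build backend has to be installed by hand. Around the time of this thread llama-cpp-python was moving to scikit-build-core (#499), so something like the following is a reasonable guess, though the exact names depend on the error output:

pip install scikit-build-core
# plus whatever else the errors name, e.g. the cmake / ninja Python modules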

@Freed-Wu

Freed-Wu commented Oct 9, 2023

@SubhranshuSharma
Contributor Author

SubhranshuSharma commented Oct 10, 2023

Is the Python library working for anyone?

@Freed-Wu is this related to adding the original llama.cpp to the termux package manager? If yes, llama.cpp was working out of the box on termux anyway; that's why I could make my own use-case-specific Python wrapper around it. To quote myself:

@abetlen Yeah, I run git clone https://github.com/ggerganov/llama.cpp ~/llama.cpp;cd ~/llama.cpp;make

@romanovj this solution of yours did install cmake without errors; pip list now returns cmake and a simple import cmake works, but pip install llama-cpp-python still gives an error saying No module named 'cmake', so maybe not all the references to installed libraries were updated.

And your second solution worked: pip list now returns llama_cpp_python, but import llama_cpp returns Shared library with base name 'llama' not found.

@romanovj

romanovj commented Oct 10, 2023

@SubhranshuSharma
still gives error (https://pastebin.com/FkXtBYcm) saying No module named 'cmake'

Reinstall cmake:
pkg rei cmake

Also, you can copy the compiled cmake wheel to a ~/wheels folder and install modules with
pip install ... --find-links ~/wheels


You need to build shared libs for llama.cpp like this:
cmake -S . -B build64 -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=$PREFIX -DLLAMA_CLBLAST=ON -DLLAMA_OPENBLAS=ON
cmake --build build64

Then export the path to libllama.so:

export LLAMA_CPP_LIB=/data/data/com.termux/files/home/llama.cpp/build64/libllama.so

python
>>> import llama_cpp

ok
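
Once the import succeeds, a quick smoke test of the high-level API (the model path here is an illustrative assumption):

# load a local GGUF model and run one completion
from llama_cpp import Llama

llm = Llama(model_path="/data/data/com.termux/files/home/models/llama-2-7b.Q4_K_M.gguf")
out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])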

@abetlen
Owner

abetlen commented Nov 13, 2023

Hey guys, I was finally able to put some time into this; the following worked for me:

pkg install python-pip python-cryptography cmake ninja autoconf automake libandroid-execinfo patchelf
MATHLIB=m CFLAGS=--target=aarch64-unknown-linux-android33 LDFLAGS=-lpython3.11 pip install numpy --force-reinstall --no-cache-dir
pip install llama-cpp-python --verbose

@SubhranshuSharma
Contributor Author

SubhranshuSharma commented Jan 27, 2024

@abetlen it works inconsistently; on a clean termux install with Python installed I usually also have to install libexpat, then openssl, then run pkg update && pkg upgrade.

This works more consistently for me (keep selecting the default answers to the prompts):

pkg install libexpat openssl python-pip python-cryptography cmake ninja autoconf automake libandroid-execinfo patchelf
pkg update && pkg upgrade 
pip install llama-cpp-python

Then run python -c 'import llama_cpp' to check whether it installed.

abetlen closed this as completed Feb 26, 2024
@pinyaras

Does the llama-cpp-server work in termux?

@SubhranshuSharma
Contributor Author

@pinyaras the original llama.cpp has its own server, which is stable.
The Python server additionally requires Rust, which is unstable on termux but is working for now. After installing the previous requirements, run

pkg install rust
pip install llama-cpp-python[server]
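
After that the server can be started the same way as on desktop (the model path is illustrative):

python -m llama_cpp.server --model /data/data/com.termux/files/home/models/llama-2-7b.Q4_K_M.gguf
# then open http://localhost:8000/docs for the interactive API docs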
