
Bug: docker GGML_CUDA=1 make [on llama-gen-docs] fails since arg refactor #9392

Closed
bartowski1182 opened this issue Sep 9, 2024 · 3 comments
Labels
bug-unconfirmed, low severity (used to report low severity bugs in llama.cpp, e.g. cosmetic issues, non-critical UI glitches)

Comments

@bartowski1182
Contributor

What happened?

The error given is:

./llama-gen-docs: error while loading shared libraries: libcuda.so.1: cannot open shared object file: No such file or directory

Since this was only recently added in #9308, I'm guessing that change is to blame.

I've been able to get around it by running:

# Symlink the CUDA toolkit's stub driver library so the dynamic linker can
# resolve libcuda.so.1 at build time (the real library ships with the NVIDIA
# driver, which isn't present in the build container)
RUN ln -s /usr/local/cuda/lib64/stubs/libcuda.so /usr/local/cuda/lib64/stubs/libcuda.so.1
# Build with the stubs directory on the library search path
RUN LD_LIBRARY_PATH=/usr/local/cuda/lib64/stubs/:$LD_LIBRARY_PATH GGML_CUDA=1 make -j64
# Remove the symlink so nothing shadows the real driver library at runtime
RUN rm /usr/local/cuda/lib64/stubs/libcuda.so.1

But my question is: why does it need this library at all, and why is this the only binary that fails?
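
For what it's worth, the dependency can be confirmed with ldd: libcuda.so.1 is installed by the NVIDIA driver rather than the CUDA toolkit, so a toolkit-only build container has no copy of it.

# list the binary's shared-library dependencies; in the build container only
# the stub in /usr/local/cuda/lib64/stubs/ can satisfy libcuda.so.1
ldd ./llama-gen-docs | grep libcuda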

Name and Version

b3707 ubuntu

What operating system are you seeing the problem on?

Linux

Relevant log output

./llama-gen-docs: error while loading shared libraries: libcuda.so.1: cannot open shared object file: No such file or directory
bartowski1182 added the bug-unconfirmed and low severity labels on Sep 9, 2024
@slaren
Member

slaren commented Sep 9, 2024

The current docker images use cmake and shouldn't be affected.

@bartowski1182
Contributor Author

I use make locally; is there a reason to prefer cmake?

@slaren
Member

slaren commented Sep 9, 2024

Only the cmake build has all the options and the best defaults. For local testing make is ok, but for cross-compiling and more advanced options cmake is necessary. The images were changed to use cmake to be able to link dynamically, and to set better defaults for the CUDA archs.
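
For reference, the cmake equivalent of the make invocation above looks roughly like this (a sketch; it assumes a checkout recent enough that the option is named GGML_CUDA, matching the GGML_CUDA=1 make flag used in the report):

# configure with the CUDA backend enabled; target GPU archs can be pinned
# explicitly with -DCMAKE_CUDA_ARCHITECTURES if needed
cmake -B build -DGGML_CUDA=ON
# build all targets in parallel
cmake --build build --config Release -j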
