feat: install CUDA support on Windows if available #339

Closed
jordanbtucker wants to merge 6 commits

Conversation

jordanbtucker (Collaborator) commented Sep 13, 2023

This PR installs official NVIDIA wheels for CUDA support so the CUDA Toolkit does not need to be installed.

It also installs precompiled llama-cpp-python wheels that support CUDA, so Visual Studio / dev tools don't need to be present on the computer.

The install is also much faster since nothing needs to be compiled.

Based on #338 which should be merged first.
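
For reference, a minimal sketch of what a wheel-based install could look like, assuming the nvidia-cuda-runtime-cu12 and nvidia-cublas-cu12 packages from PyPI and a prebuilt CUDA build of llama-cpp-python (the exact packages and wheel source used by this PR may differ):

import subprocess
import sys

def install_cuda_wheels():
    # The CUDA runtime and cuBLAS ship as ordinary pip packages, so no
    # system-wide CUDA Toolkit install is required.
    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "nvidia-cuda-runtime-cu12", "nvidia-cublas-cu12",
    ])
    # Prebuilt llama-cpp-python wheel with CUDA support; the extra index
    # URL is left as a placeholder for whichever wheel feed the PR uses.
    subprocess.check_call([
        sys.executable, "-m", "pip", "install", "llama-cpp-python",
        # "--extra-index-url", "<prebuilt-cuda-wheel-index>",
    ])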

Steps for testing:

git clone https://github.com/jordanbtucker/open-interpreter.git pr-339
cd pr-339
git checkout cuda
python -m venv venv
./venv/Scripts/activate     # Windows
source ./venv/bin/activate  # Linux/macOS
pip install poetry
poetry install
poetry run interpreter --local

Choose any model, choose yes for GPU, ensure llama-cpp-python installs without error, and ensure the GPU is utilized after the first request.

To test again, uninstall llama-cpp-python first:

pip uninstall -y llama-cpp-python
poetry run interpreter --local

jordanbtucker marked this pull request as draft September 13, 2023 22:57
jordanbtucker (Collaborator, Author)

Converting to a draft because my code is pretty hacky since I wrote it rather hastily. Would prefer someone to review it and test it before merging.

KillianLucas marked this pull request as ready for review September 14, 2023 06:34
jerzydziewierz (Contributor) commented Sep 14, 2023

result: fail:

(...)
Successfully installed diskcache-5.6.3 llama-cpp-python-0.1.85+cu122 numpy-1.25.2 typing-extensions-4.7.1

Traceback (most recent call last):
  File "/home/mib07150/git/private/testing-only/pr-339/interpreter/get_hf_llm.py", line 163, in get_hf_llm
    from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/mib07150/git/private/testing-only/pr-339/venv/lib/python3.11/site-packages/llama_cpp/llama_cpp.py", line 67, in _load_shared_library
    return ctypes.CDLL(str(_lib_path), **cdll_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mib07150/prog/miniconda3/envs/py311/lib/python3.11/ctypes/__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: libcudart.so.12: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/mib07150/git/private/testing-only/pr-339/interpreter/interpreter.py", line 323, in chat
    self.llama_instance = get_hf_llm(self.model, self.debug_mode, self.context_window)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mib07150/git/private/testing-only/pr-339/interpreter/get_hf_llm.py", line 227, in get_hf_llm
    from llama_cpp import Llama
  File "/home/mib07150/git/private/testing-only/pr-339/venv/lib/python3.11/site-packages/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/mib07150/git/private/testing-only/pr-339/venv/lib/python3.11/site-packages/llama_cpp/llama_cpp.py", line 80, in <module>
    _lib = _load_shared_library(_lib_base_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mib07150/git/private/testing-only/pr-339/venv/lib/python3.11/site-packages/llama_cpp/llama_cpp.py", line 69, in _load_shared_library
    raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")
RuntimeError: Failed to load shared library '/home/mib07150/git/private/testing-only/pr-339/venv/lib/python3.11/site-packages/llama_cpp/libllama.so': libcudart.so.12: cannot open shared object file: No such file or directory

Failed to install TheBloke/CodeLlama-7B-Instruct-GGUF.

Common Fixes: You can follow our simple setup docs at the link below to resolve common errors.

https://github.com/KillianLucas/open-interpreter/tree/main/docs
(...)

So, apparently, it is looking for libcudart.so.12 in a specific folder and doesn't find it.

I am on Ubuntu 20 and I can do this:

GUI |py311|~  ⧭ 
𝄞 find . | grep libcudart.so.12
./git/private/testing-only/pr-339/venv/lib/python3.11/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12

So the file is clearly there, installed locally, and yet the code still doesn't find it.

jerzydziewierz (Contributor) commented Sep 14, 2023

UPDATE #1

Adding the following to the command line helps, and it begins to work correctly:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/mib07150/git/private/testing-only/pr-339/venv/lib/python3.11/site-packages/nvidia/cuda_runtime/lib:/home/mib07150/git/private/testing-only/pr-339/venv/lib/python3.11/site-packages/nvidia/cublas/lib

That's of course because I happen to know precisely where my files are; this will have to be adapted to the particular install layout that you have used there.
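
For what it's worth, those paths could be discovered instead of hard-coded; a minimal sketch, assuming the NVIDIA wheels are installed in the active virtualenv and keep the site-packages/nvidia/<component>/lib layout seen above:

import glob
import os
import sysconfig

# The NVIDIA runtime wheels unpack their shared objects under
# site-packages/nvidia/<component>/lib on Linux.
site_packages = sysconfig.get_paths()["purelib"]
lib_dirs = glob.glob(os.path.join(site_packages, "nvidia", "*", "lib"))

# Print an export line that can be evaluated before launching the
# interpreter, mirroring the manual workaround above.
print("export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:" + ":".join(lib_dirs))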

jerzydziewierz (Contributor)

Update #2

Even though the process now finishes with no error, the model appears to be unresponsive: it does not actually produce any output. The only thing that interpreter --local does is ask whether I want to change to my current folder.

So, there is still something wrong.

jordanbtucker (Collaborator, Author)

@jerzydziewierz Thanks for testing. It looks like the libraries get placed in the bin directory on Windows but the lib directory on Linux (and probably macOS). I'll adjust the PR to account for that.
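
A minimal sketch of handling that platform difference, assuming the wheels keep the site-packages/nvidia/<component>/<bin|lib> layout (names here are illustrative, not the PR's actual code):

import os
import sys
import sysconfig

def register_cuda_library_dirs():
    # NVIDIA wheels place DLLs under nvidia/<component>/bin on Windows
    # and shared objects under nvidia/<component>/lib on Linux/macOS.
    subdir = "bin" if sys.platform == "win32" else "lib"
    nvidia_root = os.path.join(sysconfig.get_paths()["purelib"], "nvidia")
    if not os.path.isdir(nvidia_root):
        return
    for component in os.listdir(nvidia_root):
        lib_dir = os.path.join(nvidia_root, component, subdir)
        if not os.path.isdir(lib_dir):
            continue
        if sys.platform == "win32":
            # Make the DLLs visible to ctypes in this process.
            os.add_dll_directory(lib_dir)
        else:
            # Only affects child processes; the current process's dynamic
            # loader reads LD_LIBRARY_PATH at startup.
            os.environ["LD_LIBRARY_PATH"] = (
                os.environ.get("LD_LIBRARY_PATH", "") + os.pathsep + lib_dir
            )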

jordanbtucker (Collaborator, Author) commented Sep 15, 2023

Alright, I wasn't able to figure out Linux support the way I did for Windows, so I decided to just update this PR to only install precompiled CUDA and llama.cpp binaries on Windows when nvidia-smi is present.

A new change also detects whether the CPU supports AVX2, AVX, or neither and installs the appropriate precompiled llama-cpp-python package.

For now, Linux NVIDIA users will need to install CUDA themselves.
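
A sketch of the detection logic described above, assuming shutil.which for the nvidia-smi check and the third-party py-cpuinfo package for the CPU flags (the PR's actual implementation may differ):

import shutil

from cpuinfo import get_cpu_info  # third-party: pip install py-cpuinfo

def pick_llama_cpp_variant():
    # Only install the CUDA build when an NVIDIA driver is present.
    has_nvidia = shutil.which("nvidia-smi") is not None
    flags = set(get_cpu_info().get("flags", []))
    if "avx2" in flags:
        cpu_variant = "AVX2"
    elif "avx" in flags:
        cpu_variant = "AVX"
    else:
        cpu_variant = "basic"
    return ("cuda" if has_nvidia else "cpu"), cpu_variant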

jordanbtucker changed the title from "feat: install CUDA support if available" to "feat: install CUDA support on Windows if available" Sep 15, 2023
enikqi commented Oct 11, 2023

I got the same issue too, on macOS Ventura 13.2.1. Could someone please help fix this?

ericrallen (Collaborator)

I’m going to go ahead and close this one as stale and we’ll revisit it.

ericrallen closed this Nov 15, 2023