
fix: update image to support usage info #29


Triggered via pull request November 26, 2024 16:22
@jeffmaury
synchronize #13
GH-1730
Status Failure
Total duration 26m 19s
Artifacts

pr-check.yaml

on: pull_request
Matrix: Build image

Annotations

1 error
Build image (./chat/cuda/amd64/Containerfile, ai-lab-playground-chat-cuda, amd64)
Error: buildah exited with code 1
Trying to pull quay.io/opendatahub/workbench-images:cuda-ubi9-python-3.9-20231206...
Getting image source signatures
Copying blob sha256:bee86f8257632eeaa07cebb2436ccab03b967017e1ef485a4525ae5991f0ee33
Copying blob sha256:b824f4b30c465e487e640bdc22e46bafd6983e4e0eabf30085cacf945c261160
Copying blob sha256:7cb554c593ec96d7901a6f99e4c4d3b45976d92e0aa4f24db8f876ba68903fcb
Copying blob sha256:a64827a24ae8ee62038a21834d13766a06c1526b54c86ac6b260c699820220bc
Copying blob sha256:5f7d6ade1ce7871adb550730033e6696e928aaafea518b98a7f2c7cb89eda124
Copying blob sha256:2c6c6493f94e4d1f481a0976dec432e3b7c95f1ba764f7a0033b995670112ad7
Copying config sha256:1c79c1fc89c6ffbe76e2b62ee55f965fc87fdca64203dda12444b33b9bb1e147
Writing manifest to image destination
error: subprocess-exited-with-error

× Building wheel for llama_cpp_python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [205 lines of output]
    *** scikit-build-core 0.10.7 using CMake 3.31.1 (wheel)
    *** Configuring CMake...
    loading initial cache file /tmp/tmpi08_8qur/build/CMakeInit.txt
    -- The C compiler identification is GNU 13.3.1
    -- The CXX compiler identification is GNU 13.3.1
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Check for working C compiler: /opt/rh/gcc-toolset-13/root/usr/bin/gcc - skipped
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Check for working CXX compiler: /opt/rh/gcc-toolset-13/root/usr/bin/g++ - skipped
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    -- Found Git: /usr/bin/git (found version "2.39.3")
    fatal: detected dubious ownership in repository at '/locallm/llama-cpp-python/vendor/llama.cpp'
    To add an exception for this directory, call:
        git config --global --add safe.directory /locallm/llama-cpp-python/vendor/llama.cpp
    fatal: detected dubious ownership in repository at '/locallm/llama-cpp-python/vendor/llama.cpp'
    To add an exception for this directory, call:
        git config --global --add safe.directory /locallm/llama-cpp-python/vendor/llama.cpp
    -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
    -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
    -- Found Threads: TRUE
    -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
    -- CMAKE_SYSTEM_PROCESSOR: x86_64
    -- Found OpenMP_C: -fopenmp (found version "4.5")
    -- Found OpenMP_CXX: -fopenmp (found version "4.5")
    -- Found OpenMP: TRUE (found version "4.5")
    -- OpenMP found
    -- Using llamafile
    -- x86 detected
    -- Using runtime weight conversion of Q4_0 to Q4_0_x_x to enable optimized GEMM/GEMV kernels
    -- Including CPU backend
    -- Using AMX
    -- Including AMX backend
    -- Found CUDAToolkit: /usr/local/cuda/targets/x86_64-linux/include (found version "11.8.89")
    -- CUDA Toolkit found
    -- Using CUDA architectures: 52;61;70;75
    -- The CUDA compiler identification is NVIDIA 11.8.89 with host compiler GNU 11.4.1
    -- Detecting CUDA compiler ABI info
    -- Detecting CUDA compiler ABI info - done
    -- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
    -- Detecting CUDA compile features
    -- Detecting CUDA compile features - done
    -- CUDA host compiler is GNU 11.4.1
    -- Including CUDA backend
    CMake Warning (dev) at CMakeLists.txt:13 (install):
      Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
    Call Stack (most recent call first):
      CMakeLists.txt:80 (llama_cpp_python_install_target)
    This warning is for project developers. Use -Wno-dev to suppress it.
    CMake Warning (dev) at CMakeLists.txt:21 (install):
      Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION
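The build aborts because git's "dubious ownership" check rejects the vendored llama.cpp checkout: it is owned by a different user than the one running the CMake configure step inside the container. A minimal sketch of the workaround, taken directly from the error message's own suggestion (the path comes from the log above; where exactly to run it, e.g. as a `RUN` step in the Containerfile before `pip install`, is an assumption):

```shell
# Workaround suggested by the git error above: mark the vendored checkout as a
# safe.directory so git (and therefore scikit-build-core's CMake configure step)
# stops rejecting it. The path is the one reported in this build log.
git config --global --add safe.directory /locallm/llama-cpp-python/vendor/llama.cpp

# Show the recorded entries to confirm the exception took effect.
git config --global --get-all safe.directory
```

A broader variant, `git config --global --add safe.directory '*'`, is sometimes used in throwaway build containers, at the cost of disabling the ownership check entirely.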