
llama-cpp-python CMake,Failed building wheel for llama-cpp-python error on windows 11 pro #1619

Open
Shrutibajpeyi opened this issue Jul 24, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@Shrutibajpeyi

Prerequisites

Windows configuration - windows 11 pro
python version - 3.11.4
docker version - 27.0.3

Current Behavior

ninja: build stopped: subcommand failed
ERROR: Failed building wheel for llama-cpp-python

Environment and Context

Using WSL 2 on Windows 11 Pro.

  • Physical (or virtual) hardware:

$ lscpu : Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization features:
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
Caches (sum of all):
L1d: 480 KiB (10 instances)
L1i: 320 KiB (10 instances)
L2: 12.5 MiB (10 instances)
L3: 24 MiB (1 instance)
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Mitigation; Enhanced IBRS
Spec rstack overflow: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Srbds: Not affected
Tsx async abort: Not affected

  • Operating System: Ubuntu 22.04

$ uname -a : 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

  • SDK version: Windows SDK 10
$ python3 --version  : 3.11.4
$ make --version : 3.4
$ g++ --version : 11.4.0

Failure Information (for bugs)

Unable to build it with Docker.
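The log below shows ninja aborting inside pip's CMake build, which on a fresh WSL/Ubuntu or Docker base image is often a missing native toolchain rather than a pip problem. A minimal sanity-check sketch (the tool list is an assumption, not taken from this report):

```shell
# Hypothetical check: verify the native build tools that pip needs
# to compile llama-cpp-python are on PATH in the build environment.
for tool in cmake ninja g++ make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

On Ubuntu 22.04, missing pieces would typically come from `apt-get install build-essential cmake ninja-build` before the pip step.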

Failure Logs

290.1 ninja: build stopped: subcommand failed.
290.1
290.1 *** CMake build failed
290.1 [end of output]
290.1
290.1 note: This error originates from a subprocess, and is likely not a problem with pip.
290.1 ERROR: Failed building wheel for llama-cpp-python
290.1 Building wheel for quantile-python (pyproject.toml): started
290.4 Building wheel for quantile-python (pyproject.toml): finished with status 'done'
290.4 Created wheel for quantile-python: filename=quantile_python-1.1-py3-none-any.whl size=3443 sha256=9a8dba77926d8ad267b1c36c085e97282adf59b00642d597f11284061c0170f0
290.4 Stored in directory: /tmp/pip-ephem-wheel-cache-nfiruep_/wheels/67/a2/17/29e7169adf03a7e44b922abb6a42c2c1b0fda11f7bfbdb24a2
290.4 Successfully built quantile-python
290.4 Failed to build llama-cpp-python
290.4 ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
290.6
290.6 [notice] A new release of pip is available: 23.2.1 -> 24.1.2
290.6 [notice] To update, run: pip install --upgrade pip
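The pasted log only shows the tail of the failure; the actual compiler error is further up, above "ninja: build stopped: subcommand failed". One way to surface it is re-running the install with verbose output (a sketch, not from the report; the flags are standard pip options):

```shell
# Sketch: assemble the verbose reinstall command for the failing wheel.
# Echoed here rather than executed; run it inside the WSL/Docker build stage
# and read the first error line in the CMake/ninja output.
PKG="llama-cpp-python"
CMD="pip install --no-cache-dir --verbose $PKG"
echo "$CMD"
```

`--no-cache-dir` forces a fresh source build instead of reusing a cached failed wheel attempt.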

Example environment info (left over from the issue template; note it does not match the versions reported above):

llama-cpp-python$ git log | head -1
commit 47b0aa6e957b93dbe2c29d53af16fbae2dd628f2

llama-cpp-python$ python3 --version
Python 3.10.10

llama-cpp-python$ pip list | egrep "uvicorn|fastapi|sse-starlette|numpy"
fastapi                  0.95.0
numpy                    1.24.3
sse-starlette            1.3.3
uvicorn                  0.21.1

llama-cpp-python/vendor/llama.cpp$ git log | head -3
commit 66874d4fbcc7866377246efbcee938e8cc9c7d76
Author: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
Date:   Thu May 25 20:18:01 2023 -0600
@Shrutibajpeyi changed the title from "llama-cpp-python CMake error on windows 11 pro" to "llama-cpp-python CMake,Failed building wheel for llama-cpp-python error on windows 11 pro" on Jul 24, 2024
@abetlen added the bug ("Something isn't working") label on Aug 7, 2024