
The docker image does not build on M2 macbook air: failed to solve: executor failed running [/bin/sh -c cd llama.cpp && make && mv main llama]: exit code: 2 #64

Closed
2 tasks done
Alon-Yariv opened this issue Mar 25, 2023 · 3 comments

Comments

Alon-Yariv commented Mar 25, 2023

Bug description

Steps to reproduce

The steps I followed:

  1. Followed the steps in the README (running `docker compose up -d`).

The build then crashed while compiling llama.cpp.

Environment Information

docker --version
Docker version 23.0.1, build a5ee5b1dfc

OS:
sw_vers

ProductName: macOS
ProductVersion: 13.2.1
BuildVersion: 22D68

M2 chipset

Screenshots

No response

Relevant log output

docker compose up -d
[+] Building 35.3s (13/30)
 => [internal] load build definition from Dockerfile                       0.0s
 => => transferring dockerfile: 1.52kB                                     0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 71B                                           0.0s
 => [internal] load metadata for docker.io/library/ubuntu:22.04            2.6s
 => [internal] load metadata for docker.io/library/gcc:12                  2.6s
 => [internal] load build context                                          0.0s
 => => transferring context: 149.39kB                                      0.0s
 => [deployment  1/21] FROM docker.io/library/ubuntu:22.04@sha256:67211c1  3.1s
 => => resolve docker.io/library/ubuntu:22.04@sha256:67211c14fa74f070d27c  0.0s
 => => sha256:cd741b12a7eaa64357041c2d3f4590c898313a7f8 27.35MB / 27.35MB  2.6s
 => => sha256:67211c14fa74f070d27cc59d69a7fa9aeff8e28ea11 1.13kB / 1.13kB  0.0s
 => => sha256:537da24818633b45fcb65e5285a68c3ec1f3db25f5ae547 424B / 424B  0.0s
 => => sha256:bab8ce5c00ca3ef91e0d3eb4c6e6d6ec7cffa9574c4 2.32kB / 2.32kB  0.0s
 => => extracting sha256:cd741b12a7eaa64357041c2d3f4590c898313a7f8f65cd15  0.4s
 => [llama_builder 1/4] FROM docker.io/library/gcc:12@sha256:b12d1e7c37e  30.3s
 => => resolve docker.io/library/gcc:12@sha256:b12d1e7c37e101fd76848570b8  0.0s
 => => sha256:ba265c6e20b2489ecfef524fad8f28916c9d92a9e63 9.19kB / 9.19kB  0.0s
 => => sha256:7971239fe1d69763272ccc0b2527efa95547d37c536 5.15MB / 5.15MB  2.1s
 => => sha256:b2eeecc98d6bc3812474852a39ce0a97be52fc7b961 2.22kB / 2.22kB  0.0s
 => => sha256:8022b074731d9ecee7f4fba79b993920973811dda 53.70MB / 53.70MB  5.1s
 => => sha256:b12d1e7c37e101fd76848570b81352fe9546dd1caad 1.43kB / 1.43kB  0.0s
 => => sha256:26c861b53509d61c37240d2f80efb3a351d2f1d7f 10.87MB / 10.87MB  4.3s
 => => sha256:1714880ecc1c021a5f708f4369f91d3c2c53b998 54.68MB / 54.68MB  15.2s
 => => sha256:895a945a1f9ba441c2748501c4d46569edfbc2 189.73MB / 189.73MB  25.7s
 => => sha256:cd267d572e2202b3070cca7993eb424a4084c7844 16.13kB / 16.13kB  5.5s
 => => extracting sha256:8022b074731d9ecee7f4fba79b993920973811dda168bbc0  0.7s
 => => sha256:5f1a14b7155767f4a80c696309effd494189de 125.97MB / 125.97MB  23.5s
 => => extracting sha256:7971239fe1d69763272ccc0b2527efa95547d37c53630ed0  0.1s
 => => extracting sha256:26c861b53509d61c37240d2f80efb3a351d2f1d7f4f8e8ec  0.1s
 => => sha256:d29d4e33051b1fab13de7c854ee4fdac99d73675 10.02kB / 10.02kB  15.9s
 => => extracting sha256:1714880ecc1c021a5f708f4369f91d3c2c53b998a56d563d  0.7s
 => => sha256:f54184d767dfe3575b7a0f3411dec9c55dad00dc2e 1.89kB / 1.89kB  16.2s
 => => extracting sha256:895a945a1f9ba441c2748501c4d46569edfbc2bfbdb9b47d  2.2s
 => => extracting sha256:cd267d572e2202b3070cca7993eb424a4084c7844e7725d4  0.0s
 => => extracting sha256:5f1a14b7155767f4a80c696309effd494189dec7c5e06eba  1.9s
 => => extracting sha256:d29d4e33051b1fab13de7c854ee4fdac99d736756e704e57  0.0s
 => => extracting sha256:f54184d767dfe3575b7a0f3411dec9c55dad00dc2ea8d1e5  0.0s
 => [deployment  2/21] WORKDIR /usr/src/app                                0.1s
 => [deployment  3/21] RUN apt update                                      5.0s
 => CANCELED [deployment  4/21] RUN apt-get install -y python3-pip curl   24.4s
 => [llama_builder 2/4] WORKDIR /tmp                                       0.2s
 => [llama_builder 3/4] RUN git clone https://github.com/ggerganov/llama.  1.3s
 => ERROR [llama_builder 4/4] RUN cd llama.cpp &&     make &&     mv main  0.5s
------
 > [llama_builder 4/4] RUN cd llama.cpp &&     make &&     mv main llama:
#0 0.265 I llama.cpp build info:
#0 0.265 I UNAME_S:  Linux
#0 0.265 I UNAME_P:  unknown
#0 0.265 I UNAME_M:  aarch64
#0 0.265 I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -mcpu=native
#0 0.265 I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -mcpu=native
#0 0.265 I LDFLAGS:
#0 0.265 I CC:       cc (GCC) 12.2.0
#0 0.265 I CXX:      g++ (GCC) 12.2.0
#0 0.265
#0 0.265 cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -mcpu=native   -c ggml.c -o ggml.o
#0 0.484 In file included from ggml.c:137:
#0 0.484 /usr/local/lib/gcc/aarch64-linux-gnu/12.2.0/include/arm_neon.h: In function 'ggml_vec_dot_f16':
#0 0.485 /usr/local/lib/gcc/aarch64-linux-gnu/12.2.0/include/arm_neon.h:29182:1: error: inlining failed in call to 'always_inline' 'vfmaq_f16': target specific option mismatch
#0 0.485 29182 | vfmaq_f16 (float16x8_t __a, float16x8_t __b, float16x8_t __c)
#0 0.485       | ^~~~~~~~~
#0 0.485 ggml.c:799:37: note: called from here
#0 0.485   799 |     #define GGML_F16x8_FMA(a, b, c) vfmaq_f16(a, b, c)
#0 0.485       |                                     ^~~~~~~~~~~~~~~~~~
#0 0.485 ggml.c:823:41: note: in expansion of macro 'GGML_F16x8_FMA'
#0 0.485   823 |     #define GGML_F16_VEC_FMA            GGML_F16x8_FMA
#0 0.485       |                                         ^~~~~~~~~~~~~~
#0 0.485 ggml.c:1321:22: note: in expansion of macro 'GGML_F16_VEC_FMA'
#0 0.485  1321 |             sum[j] = GGML_F16_VEC_FMA(sum[j], ax[j], ay[j]);
#0 0.485       |                      ^~~~~~~~~~~~~~~~
#0 0.485 /usr/local/lib/gcc/aarch64-linux-gnu/12.2.0/include/arm_neon.h:29182:1: error: inlining failed in call to 'always_inline' 'vfmaq_f16': target specific option mismatch
#0 0.485 29182 | vfmaq_f16 (float16x8_t __a, float16x8_t __b, float16x8_t __c)
#0 0.485       | ^~~~~~~~~
#0 0.485 ggml.c:799:37: note: called from here
#0 0.485   799 |     #define GGML_F16x8_FMA(a, b, c) vfmaq_f16(a, b, c)
#0 0.485       |                                     ^~~~~~~~~~~~~~~~~~
#0 0.485 ggml.c:823:41: note: in expansion of macro 'GGML_F16x8_FMA'
#0 0.485   823 |     #define GGML_F16_VEC_FMA            GGML_F16x8_FMA
#0 0.485       |                                         ^~~~~~~~~~~~~~
#0 0.485 ggml.c:1321:22: note: in expansion of macro 'GGML_F16_VEC_FMA'
#0 0.485  1321 |             sum[j] = GGML_F16_VEC_FMA(sum[j], ax[j], ay[j]);
#0 0.485       |                      ^~~~~~~~~~~~~~~~
#0 0.485 /usr/local/lib/gcc/aarch64-linux-gnu/12.2.0/include/arm_neon.h:28760:1: error: inlining failed in call to 'always_inline' 'vaddq_f16': target specific option mismatch
#0 0.485 28760 | vaddq_f16 (float16x8_t __a, float16x8_t __b)
#0 0.485       | ^~~~~~~~~
#0 0.485 ggml.c:805:22: note: called from here
#0 0.485   805 |             x[2*i] = vaddq_f16(x[2*i], x[2*i+1]);                 \
#0 0.485       |                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~
#0 0.485 ggml.c:826:41: note: in expansion of macro 'GGML_F16x8_REDUCE'
#0 0.485   826 |     #define GGML_F16_VEC_REDUCE         GGML_F16x8_REDUCE
#0 0.485       |                                         ^~~~~~~~~~~~~~~~~
#0 0.485 ggml.c:1326:5: note: in expansion of macro 'GGML_F16_VEC_REDUCE'
#0 0.485  1326 |     GGML_F16_VEC_REDUCE(sumf, sum);
#0 0.485       |     ^~~~~~~~~~~~~~~~~~~
#0 0.485 /usr/local/lib/gcc/aarch64-linux-gnu/12.2.0/include/arm_neon.h:28760:1: error: inlining failed in call to 'always_inline' 'vaddq_f16': target specific option mismatch
#0 0.485 28760 | vaddq_f16 (float16x8_t __a, float16x8_t __b)
#0 0.485       | ^~~~~~~~~
#0 0.485 ggml.c:808:22: note: called from here
#0 0.485   808 |             x[4*i] = vaddq_f16(x[4*i], x[4*i+2]);                 \
#0 0.485       |                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~
#0 0.485 ggml.c:826:41: note: in expansion of macro 'GGML_F16x8_REDUCE'
#0 0.485   826 |     #define GGML_F16_VEC_REDUCE         GGML_F16x8_REDUCE
#0 0.485       |                                         ^~~~~~~~~~~~~~~~~
#0 0.485 ggml.c:1326:5: note: in expansion of macro 'GGML_F16_VEC_REDUCE'
#0 0.485  1326 |     GGML_F16_VEC_REDUCE(sumf, sum);
#0 0.485       |     ^~~~~~~~~~~~~~~~~~~
#0 0.485 /usr/local/lib/gcc/aarch64-linux-gnu/12.2.0/include/arm_neon.h:28760:1: error: inlining failed in call to 'always_inline' 'vaddq_f16': target specific option mismatch
#0 0.485 28760 | vaddq_f16 (float16x8_t __a, float16x8_t __b)
#0 0.485       | ^~~~~~~~~
#0 0.485 ggml.c:811:22: note: called from here
#0 0.485   811 |             x[8*i] = vaddq_f16(x[8*i], x[8*i+4]);                 \
#0 0.485       |                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~
#0 0.485 ggml.c:826:41: note: in expansion of macro 'GGML_F16x8_REDUCE'
#0 0.485   826 |     #define GGML_F16_VEC_REDUCE         GGML_F16x8_REDUCE
#0 0.485       |                                         ^~~~~~~~~~~~~~~~~
#0 0.485 ggml.c:1326:5: note: in expansion of macro 'GGML_F16_VEC_REDUCE'
#0 0.485  1326 |     GGML_F16_VEC_REDUCE(sumf, sum);
#0 0.485       |     ^~~~~~~~~~~~~~~~~~~
#0 0.501 make: *** [Makefile:221: ggml.o] Error 1
------
failed to solve: executor failed running [/bin/sh -c cd llama.cpp &&     make &&     mv main llama]: exit code: 2

Confirmations

  • I'm running the latest version of the main branch.
  • I checked existing issues to see if this has already been described.
@Alon-Yariv Alon-Yariv added the bug label Mar 25, 2023
@nsarrazin
Member

Is it still the case with the latest commit to main? We rolled back GCC to version 11 for Mac compatibility.
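For reference, the rollback the maintainer describes would amount to changing the base image of the builder stage. A minimal sketch of that stage (the stage name `llama_builder` and the build commands come from the log above; the exact Dockerfile layout is assumed):

```dockerfile
# Builder stage sketch: pin GCC to 11 instead of 12.
# GCC 12's arm_neon.h rejects the fp16 intrinsics (vfmaq_f16, vaddq_f16)
# that -mcpu=native selects when building under Docker on Apple Silicon,
# producing the "inlining failed in call to 'always_inline'" errors above.
FROM gcc:11 AS llama_builder   # was: FROM gcc:12

WORKDIR /tmp
RUN git clone https://github.com/ggerganov/llama.cpp
RUN cd llama.cpp && \
    make && \
    mv main llama
```

An alternative workaround, if staying on GCC 12, would be overriding the `-mcpu` flag passed to `make`, but pinning the compiler version is the simpler fix.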

@kenny-caldieraro

With the new version, it works on my MacBook Air M2.

@nsarrazin
Member

Great to hear! Thanks for reporting.
