#28 [api builder 3/10] RUN make get-sources
#28 0.394 git clone --recurse-submodules https://github.com/go-skynet/go-llama.cpp go-llama
#28 0.395 Cloning into 'go-llama'...
#28 CANCELED
#29 [api stage-3 4/15] RUN make prepare-sources
#29 0.394 touch get-sources
#29 0.398 go mod edit -replace github.com/nomic-ai/gpt4all/gpt4all-bindings/golang=/build/gpt4all/gpt4all-bindings/golang
#29 0.402 go mod edit -replace github.com/go-skynet/go-ggml-transformers.cpp=/build/go-ggml-transformers
#29 0.405 go mod edit -replace github.com/donomii/go-rwkv.cpp=/build/go-rwkv
#29 0.408 go mod edit -replace github.com/ggerganov/whisper.cpp=/build/whisper.cpp
#29 0.411 go mod edit -replace github.com/go-skynet/go-bert.cpp=/build/go-bert
#29 0.413 go mod edit -replace github.com/mudler/go-stable-diffusion=/build/go-stable-diffusion
#29 0.416 go mod edit -replace github.com/mudler/go-piper=/build/go-piper
#29 0.419 go mod download
#29 0.888 go: github.com/go-skynet/go-llama.cpp@v0.0.0-20231009155254-aeba71ee8428 (replaced by /unreal/git/LocalAI/go-llama-stable): reading /unreal/git/LocalAI/go-llama-stable/go.mod: open /unreal/git/LocalAI/go-llama-stable/go.mod: no such file or directory
#29 0.889 make: *** [Makefile:236: prepare-sources] Error 1
#29 ERROR: process "/bin/sh -c make prepare-sources" did not complete successfully: exit code: 2
------
> [api stage-3 4/15] RUN make prepare-sources:
0.398 go mod edit -replace github.com/nomic-ai/gpt4all/gpt4all-bindings/golang=/build/gpt4all/gpt4all-bindings/golang
0.402 go mod edit -replace github.com/go-skynet/go-ggml-transformers.cpp=/build/go-ggml-transformers
0.405 go mod edit -replace github.com/donomii/go-rwkv.cpp=/build/go-rwkv
0.408 go mod edit -replace github.com/ggerganov/whisper.cpp=/build/whisper.cpp
0.411 go mod edit -replace github.com/go-skynet/go-bert.cpp=/build/go-bert
0.413 go mod edit -replace github.com/mudler/go-stable-diffusion=/build/go-stable-diffusion
0.416 go mod edit -replace github.com/mudler/go-piper=/build/go-piper
0.419 go mod download
0.888 go: github.com/go-skynet/go-llama.cpp@v0.0.0-20231009155254-aeba71ee8428 (replaced by /unreal/git/LocalAI/go-llama-stable): reading /unreal/git/LocalAI/go-llama-stable/go.mod: open /unreal/git/LocalAI/go-llama-stable/go.mod: no such file or directory
0.889 make: *** [Makefile:236: prepare-sources] Error 1
------
failed to solve: process "/bin/sh -c make prepare-sources" did not complete successfully: exit code: 2
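The `go mod download` failure above is the key line: the replace directive points at /unreal/git/LocalAI/go-llama-stable, but that directory was never populated, so Go finds no go.mod there. A minimal pre-flight check, assuming a POSIX shell (the helper name is made up, not part of LocalAI), might look like:

```shell
# Hypothetical pre-flight check: a filesystem replace target in go.mod is only
# usable if it actually contains a go.mod file. Returns 0 when the target is
# populated, 1 when it still needs to be cloned (e.g. via 'make get-sources').
check_replace_target() {
    dir=$1
    if [ -f "$dir/go.mod" ]; then
        echo "ok: $dir"
    else
        echo "missing: $dir/go.mod (was the source tree cloned?)" >&2
        return 1
    fi
}
```

Running it against each path on the right-hand side of the `go mod edit -replace` lines above would flag go-llama-stable before `go mod download` fails.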
Now I'm uncertain why there are so many llamas and resulting ggml-cuda.o files, but if I go around building them all with something like:
WHISPER_CUBLAS=1 LLAMA_CUBLAS=1 make
Then I can make it a bit further, but it still seems to be looking for this ggml-cuda.o somewhere it is not:
CMake Error: The source directory "/unreal/git/LocalAI/backend/cpp/llama/llama.cpp" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
go mod edit -replace github.com/go-skynet/go-llama.cpp=/unreal/git/LocalAI/go-llama
make[1]: *** [Makefile:43: grpc-server] Error 1
make[1]: Leaving directory '/unreal/git/LocalAI/backend/cpp/llama'
make: *** [Makefile:435: backend/cpp/llama/grpc-server] Error 2
make: *** Waiting for unfinished jobs....
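The CMake error above ("does not appear to contain CMakeLists.txt") usually means the vendored llama.cpp tree under backend/cpp/llama was never fetched; if it is a git submodule, running `git submodule update --init --recursive` from the repo root normally repopulates it. A small guard you could script before invoking CMake (the function is a sketch, not part of LocalAI's Makefiles):

```shell
# Sketch: detect the exact condition CMake is complaining about in the log,
# namely a source directory that has no CMakeLists.txt in it.
is_cmake_source_dir() {
    [ -f "$1/CMakeLists.txt" ]
}

# Typical usage (path taken from the error message above):
#   if ! is_cmake_source_dir backend/cpp/llama/llama.cpp; then
#       git submodule update --init --recursive
#   fi
```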
go mod edit -replace github.com/go-skynet/go-bert.cpp=/unreal/git/LocalAI/go-bert
CGO_LDFLAGS="-lcublas -lcudart -L/opt/cuda/lib64/" C_INCLUDE_PATH=/unreal/git/LocalAI/go-llama LIBRARY_PATH=/unreal/git/LocalAI/go-llama \
go build -ldflags "-X "github.com/go-skynet/LocalAI/internal.Version=v1.40.0-13-g4e16bc2" -X "github.com/go-skynet/LocalAI/internal.Commit=4e16bc2f13f059991656402da2eea1b53a201436"" -tags "" -o backend-assets/grpc/llama ./cmd/grpc/llama/
go mod edit -replace github.com/mudler/go-stable-diffusion=/unreal/git/LocalAI/go-stable-diffusion
go mod edit -replace github.com/mudler/go-piper=/unreal/git/LocalAI/go-piper
I whisper.cpp build info:
I UNAME_S: Linux
I UNAME_P: unknown
I UNAME_M: x86_64
I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -D_XOPEN_SOURCE=600 -pthread -mavx2 -mfma -mf16c -mavx -msse3 -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/opt/cuda/targets/x86_64-linux/include
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_XOPEN_SOURCE=600 -pthread -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include -I/opt/cuda/targets/x86_64-linux/include
I LDFLAGS: -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L/opt/cuda/targets/x86_64-linux/lib
I CC: x86_64-pc-linux-gnu-gcc (GCC) 13.2.1 20230801
I CXX: g++ (GCC) 12.3.0
ar rcs libwhisper.a ggml.o ggml-cuda.o whisper.o
ar src libtransformers.a replit.o gptj.o mpt.o gptneox.o starcoder.o gpt2.o dolly.o falcon.o ggml.o common-ggml.o common.o ggml-cuda.o
ar: ggml-cuda.o: No such file or directory
make[1]: *** [Makefile:206: libtransformers.a] Error 1
make[1]: Leaving directory '/unreal/git/LocalAI/go-ggml-transformers'
make: *** [Makefile:197: go-ggml-transformers/libtransformers.a] Error 2
make[1]: Leaving directory '/unreal/git/LocalAI/whisper.cpp'
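The `ar` failure above is just the archiver noticing that ggml-cuda.o was never compiled in that tree; each vendored backend (go-llama, whisper.cpp, go-ggml-transformers, ...) builds its own copy, and only when the cuBLAS flags reach its sub-make. A quick way to see which trees actually produced the object (a diagnostic sketch, not a fix):

```shell
# List every ggml-cuda.o under a checkout; trees absent from the output are
# the ones whose sub-make never received the cuBLAS build flags.
find_cuda_objects() {
    find "$1" -name 'ggml-cuda.o' -print 2>/dev/null
}
```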
# github.com/go-skynet/go-llama.cpp
binding.cpp: In function ‘void llama_binding_free_model(void*)’:
binding.cpp:613:5: warning: possible problem detected in invocation of ‘operator delete’ [-Wdelete-incomplete]
613 | delete ctx->model;
| ^~~~~~~~~~~~~~~~~
binding.cpp:613:17: warning: invalid use of incomplete type ‘struct llama_model’
613 | delete ctx->model;
| ~~~~~^~~~~
In file included from go-llama-stable/llama.cpp/examples/common.h:5,
from binding.cpp:1:
go-llama-stable/llama.cpp/llama.h:70:12: note: forward declaration of ‘struct llama_model’
70 | struct llama_model;
| ^~~~~~~~~~~
binding.cpp:613:5: note: neither the destructor nor the class-specific ‘operator delete’ will be called, even if they are declared when the class is defined
613 | delete ctx->model;
| ^~~~~~~~~~~~~~~~~
# github.com/go-skynet/go-llama.cpp
binding.cpp: In function ‘void llama_binding_free_model(void*)’:
binding.cpp:809:5: warning: possible problem detected in invocation of ‘operator delete’ [-Wdelete-incomplete]
809 | delete ctx->model;
| ^~~~~~~~~~~~~~~~~
binding.cpp:809:17: warning: invalid use of incomplete type ‘struct llama_model’
809 | delete ctx->model;
| ~~~~~^~~~~
In file included from go-llama/llama.cpp/common/common.h:5,
from binding.cpp:1:
go-llama/llama.cpp/llama.h:60:12: note: forward declaration of ‘struct llama_model’
60 | struct llama_model;
| ^~~~~~~~~~~
binding.cpp:809:5: note: neither the destructor nor the class-specific ‘operator delete’ will be called, even if they are declared when the class is defined
809 | delete ctx->model;
| ^~~~~~~~~~~~~~~~~
nice -19 make BUILD_TYPE=cublas WHISPER_CUBLAS=1 LLAMA_CUBLAS=1 build 20.43s user 1.47s system 208% cpu 10.513 total
I might have uncovered multiple issues, but I might just have a bunch of PEBKAC ID::10T errors on my end, so I thought a discussion more appropriate than an issue.
I feel like I'm going in circles here. I can run everything just fine with latest; however, I run into problems trying to enable CUDA:
docker compose up --pull always
ends in: full log here
So then I tried a
full log here
But I got this error:
So I tried to just build locally, but it seemed to be missing this ggml-cuda.o:
Grepping around, I see there are a number of Makefiles that include it:
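That search can be reproduced with a plain find/grep combination (nothing LocalAI-specific here):

```shell
# Print every Makefile under a tree that mentions a given token, e.g. the
# ggml-cuda.o object the archiver is failing to find.
makefiles_mentioning() {
    find "$2" -name Makefile -exec grep -l "$1" {} +
}

# Example: makefiles_mentioning 'ggml-cuda.o' .
```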
full log here