
fix: handle grpc and llama-cpp with REBUILD=true #1328

Merged
merged 1 commit into master from fix_rebuild_container
Nov 25, 2023

Conversation

mudler (Owner)

@mudler mudler commented Nov 24, 2023

Description

This PR fixes REBUILD=true in the container, which is currently broken.

The breakage was caused by the llama.cpp source code not being handled and by the grpc dependencies not being installed in the final container.

This PR now explicitly adds a Makefile entry that clones the llama.cpp source code during source preparation, and it copies the built grpc artifacts into the running container and installs them.
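As a rough illustration of the kind of change described above, a Makefile target that clones the llama.cpp sources during preparation might look like the sketch below. The target name, clone URL, and destination path here are hypothetical placeholders, not the actual entries in the LocalAI Makefile:

```Makefile
# Hypothetical sketch: ensure the llama.cpp sources are present before a
# rebuild. Target name and paths are illustrative only.
sources/llama.cpp:
	git clone --recurse-submodules https://github.com/ggerganov/llama.cpp sources/llama.cpp

# A prepare step could then depend on the cloned sources:
prepare-sources: sources/llama.cpp
```

With the sources guaranteed to exist and the prebuilt grpc artifacts copied into (and installed in) the final image, a container started with REBUILD=true has everything it needs to recompile the backends.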

@mudler mudler added the bug Something isn't working label Nov 24, 2023
netlify bot commented Nov 24, 2023

Deploy Preview for localai canceled.

Name: localai
🔨 Latest commit: 0e949b9
🔍 Latest deploy log: https://app.netlify.com/sites/localai/deploys/65612a34c51b99000888b21f

@mudler mudler force-pushed the fix_rebuild_container branch 3 times, most recently from 6d8e371 to 162f82a Compare November 24, 2023 19:05
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
@mudler mudler merged commit 6d187af into master Nov 25, 2023
24 checks passed
@mudler mudler deleted the fix_rebuild_container branch November 25, 2023 07:48
@mudler mudler linked an issue Nov 25, 2023 that may be closed by this pull request
Labels
bug Something isn't working
Successfully merging this pull request may close these issues.

v1.40.0-cublas-cuda12 Container won't start