chore(deps): update container image docker.io/localai/localai to v2.19.1 by renovate #24152
This PR contains the following updates:
`v2.17.1-aio-cpu` -> `v2.19.1-aio-cpu`
`v2.17.1-aio-gpu-nvidia-cuda-11` -> `v2.19.1-aio-gpu-nvidia-cuda-11`
`v2.17.1-aio-gpu-nvidia-cuda-12` -> `v2.19.1-aio-gpu-nvidia-cuda-12`
`v2.17.1-cublas-cuda11-ffmpeg-core` -> `v2.19.1-cublas-cuda11-ffmpeg-core`
`v2.17.1-cublas-cuda11-core` -> `v2.19.1-cublas-cuda11-core`
`v2.17.1-cublas-cuda12-ffmpeg-core` -> `v2.19.1-cublas-cuda12-ffmpeg-core`
`v2.17.1-cublas-cuda12-core` -> `v2.19.1-cublas-cuda12-core`
`v2.17.1-ffmpeg-core` -> `v2.19.1-ffmpeg-core`
`v2.17.1` -> `v2.19.1`
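For instance, any of the bumped tags can be pulled directly to verify it resolves (the image reference and tag come from this PR; all tags above work the same way):

```sh
# Pull one of the updated tags from Docker Hub to confirm availability.
docker pull docker.io/localai/localai:v2.19.1-aio-cpu
```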
**Warning**: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
mudler/LocalAI (docker.io/localai/localai)
v2.19.1
Compare Source
LocalAI 2.19.1 is out! 📣
TL;DR: Summary spotlight
🖧 LocalAI Federation and AI swarms
LocalAI makes distributed AI workloads simpler and more accessible. No complex setups or Docker and Kubernetes configurations are required: LocalAI lets you create your own AI cluster with minimal friction. By auto-discovering peers and sharing work or LLM model weights across your existing devices, LocalAI aims to scale both horizontally and vertically with ease.
How does it work?
Starting LocalAI with `--p2p` generates a shared token for connecting multiple instances: that is all you need to create an AI cluster, with no intricate network setup required. Simply navigate to the "Swarm" section in the WebUI and follow the on-screen instructions.
For fully shared instances, start LocalAI with `--p2p --federated` and follow the Swarm section's guidance. This feature is still experimental and offers a tech-preview-quality experience.
Federated LocalAI
Launch multiple LocalAI instances and cluster them together to share requests across the cluster. The "Swarm" tab in the WebUI provides one-liner instructions for connecting LocalAI instances using a shared token. Instances auto-discover each other, even across different networks.
Check out a demonstration video: Watch now
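A minimal sketch of a two-node federated setup, assuming the flags from these notes, a `run` subcommand, and a `TOKEN` environment variable for passing the generated token between nodes (the token value and the image entrypoint details are assumptions, not spelled out in this PR):

```sh
# Node 1: start LocalAI with P2P federation enabled.
# On startup it generates and logs a shared token.
docker run -p 8080:8080 docker.io/localai/localai:v2.19.1 run --p2p --federated

# Node 2: join the same cluster by reusing node 1's token
# (TOKEN variable name assumed from LocalAI's distributed-inference docs).
docker run -p 8080:8080 -e TOKEN="<shared-token-from-node-1>" \
  docker.io/localai/localai:v2.19.1 run --p2p --federated
```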
LocalAI P2P Workers
Distribute model weights across nodes by starting multiple LocalAI workers. This is currently available only with the llama.cpp backend, with plans to expand to other backends soon.
Check out a demonstration video: Watch now
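A rough sketch of weight distribution with workers, assuming the `worker p2p-llama-cpp-rpc` subcommand and the same `TOKEN` variable from LocalAI's distributed-inference documentation (neither is spelled out in these notes):

```sh
# Main node: serves the OpenAI-compatible API and coordinates the swarm.
TOKEN="<shared-token>" local-ai run --p2p

# Additional nodes: contribute compute as llama.cpp RPC workers
# (subcommand name assumed; check `local-ai worker --help` on your build).
TOKEN="<shared-token>" local-ai worker p2p-llama-cpp-rpc
```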
What's Changed
Bug fixes 🐛
🖧 P2P area
Exciting New Features 🎉
`/scan` endpoint by @dave-gray101 in https://github.com/mudler/LocalAI/pull/2566 (see the sketch after this release's notes)
🧠 Models
📖 Documentation and examples
👒 Dependencies
`c25bc2a` to `1b2e139` by @dependabot in https://github.com/mudler/LocalAI/pull/2801
Other Changes
`check_and_update.py` script by @dave-gray101 in https://github.com/mudler/LocalAI/pull/2778
`git submodule update` with `--single-branch` by @dave-gray101 in https://github.com/mudler/LocalAI/pull/2847
New Contributors
Full Changelog: mudler/LocalAI@v2.18.1...v2.19.0
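As a very rough illustration of the new `/scan` endpoint referenced above (the path comes from the changelog entry; the HTTP method, port, and response shape are assumptions, not documented in this PR):

```sh
# Hypothetical call to the new /scan endpoint on a local instance;
# verify the actual method and parameters against LocalAI's documentation.
curl http://localhost:8080/scan
```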
v2.19.0
Compare Source
LocalAI 2.19.0 is out! 📣
The TL;DR summary spotlight and federation overview for this release are identical to the v2.19.1 notes above.
What's Changed
Bug fixes 🐛
🖧 P2P area
Exciting New Features 🎉
`/scan` endpoint by @dave-gray101 in https://github.com/mudler/LocalAI/pull/2566
🧠 Models
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about these updates again.
This PR has been generated by Renovate Bot.