feat(llama.cpp): Totally decentralized, private, distributed, p2p inference #2343
Conversation
As #2324 introduced distributed inferencing thanks to @rgerganov's implementation in ggerganov/llama.cpp#6829 in upstream llama.cpp, it is now possible to distribute the workload to remote llama.cpp gRPC servers.

This changeset uses mudler/edgevpn to establish a secure, distributed network between the nodes using a shared token. The token is generated automatically when starting the server with the `--p2p` flag, and can be used by starting the workers with `local-ai worker p2p-llama-cpp-rpc`, passing the token via an environment variable (TOKEN) or with args (--token). As per how mudler/edgevpn works, a network is established between the server and the workers with the DHT and mDNS discovery protocols; the llama.cpp rpc server is automatically started and exposed to the underlying p2p network so the API server can connect to it.

When the HTTP server is started, it discovers the workers in the network and automatically creates local port-forwards to the services. llama.cpp is then configured to use those services.

This feature is behind the "p2p" GO_FLAGS.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
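For reference, the two token-passing styles mentioned above can be sketched as follows; the token value is a placeholder and assumes the server has already generated one:

```bash
# Minimal sketch: both forms pass the same pre-shared token to a worker.
TOKEN=XXXXXXXXXXX local-ai worker p2p-llama-cpp-rpc

# or, equivalently, via CLI args:
local-ai worker p2p-llama-cpp-rpc --token XXXXXXXXXXX
```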
We need to build some safeguards to check whether llama.cpp's rpc-server breaks its CLI options, but there is a lot of room for optimization in a later batch of changes.
I've tried to attach a local-ai-cpu worker to a local-ai-gpu head and it failed because of a CUDA mismatch on the worker side. Also, I would like to ask what p2p discovery method you are using. Is it Kademlia DHT peer discovery?
Are you using libp2p GossipSub for routing?
Yes! It uses https://github.com/mudler/edgevpn behind the scenes, which, in turn, uses libp2p.
Good point - I'm not sure what the plans in llama.cpp are about that, but it looks like something we want to cover here.
It uses both DHT discovery (the repository you linked) and mDNS.
TLDR;
This PR introduces peer-to-peer distribution of the workload for inferencing models with llama.cpp (aka 'sharding' for some folks). There are now worker and server nodes, and workers are automatically discovered by the servers, even across different networks.
The work is not spread by sending each request to a single worker; rather, the workers all contribute to the same request together.
This was possible only because of the upstream work in llama.cpp (a big thank you from the LocalAI community!), so only gguf models are supported.
Here you can observe the computation split on the same node, but discovery is done with `mdns`.
Description
As #2324 introduced distributed inferencing thanks to @rgerganov's implementation (thank you!) in ggerganov/llama.cpp#6829 in upstream llama.cpp, it is now possible to distribute the workload to remote llama.cpp gRPC servers.
To share the workload across nodes, however, the services would have to be brought up manually and the IPs of the endpoints specified. The objective of this PR is to automatically set up a network of "workers" and an API server that leverages them, working in LANs but also across different networks.
For this, local-ai is now split into two "entrypoints": the HTTP server and the llama.cpp workers.
A token shared between the server and the workers is needed for the communication to happen over the p2p network. Both the local network (with mDNS discovery) and DHT are supported, so nodes can also communicate across different networks.
The token is generated automatically when starting the server with the `--p2p` flag, and can be used by starting the workers with `local-ai worker p2p-llama-cpp-rpc`, passing the token via an environment variable (TOKEN) or with args (--token). A network is established between the server and the workers with the DHT and mDNS discovery protocols; the llama.cpp rpc server is automatically started and exposed to the underlying p2p network so the API server can connect to it.
When the HTTP server is started, it discovers the workers in the network and automatically creates local port-forwards to the services. llama.cpp is then configured to use those services.
This feature is behind the "p2p" GO_FLAGS.
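Since the feature is gated behind a build flag, a build sketch might look like the following; the exact Makefile variable name and target are assumptions based on how other optional features are gated in the repository, and may differ from your setup:

```bash
# Sketch: build LocalAI with the p2p feature enabled.
# GO_TAGS=p2p is an assumption; check the Makefile for the authoritative variable.
make GO_TAGS=p2p build
```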
Usage
1. Start the server with `--p2p`. A token is displayed; copy it and press enter. You can re-use the same token later by restarting the server with `--p2ptoken` (or `P2P_TOKEN`).
2. Start the workers on the other hosts using that token (note that the token can also be supplied via args); see the sketch after this list.
3. At this point, you should see messages in the server logs stating that new workers have been found.
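A minimal sketch of the steps above; the token value is a placeholder for the one printed by the server, and the commands follow the forms described in this PR:

```bash
# Step 1: start the API server with p2p enabled; a token is generated and printed.
local-ai run --p2p

# Step 2: on each worker host, join the p2p network with the shared token.
TOKEN=XXXXXXXXXXX local-ai worker p2p-llama-cpp-rpc

# Later restarts of the server can re-use the same token.
local-ai run --p2ptoken XXXXXXXXXXX
```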
Notes
Technical implementation
The PR leverages https://github.com/mudler/edgevpn to establish a secure, distributed network between a set of nodes using a shared token. Edgevpn already implements a small protocol to establish a decentralized, peer-to-peer network between the nodes using libp2p. Edgevpn also offers a small shared ledger that is really useful for this purpose, as it allows information about the workers to be shared and discovered automatically. Despite the name, edgevpn is a library that is not necessarily a VPN: it can also be used only to establish a private p2p network.
Behind the scenes, this PR uses the `services` capabilities (https://github.com/mudler/edgevpn/blob/master/cmd/service.go) of edgevpn to expose and connect to services in the private p2p network. Edgevpn keeps a shared ledger between all the nodes of a p2p network (identified by the token): the worker creates a tunnel to the p2p network to expose a service, which is identified by a UUID. The UUID is announced to the ledger so that all the nodes know about the available service.
The HTTP server (started with the `--p2p` flag) periodically tracks and discovers service UUIDs in the ledger; when it finds a new service, it port-forwards to it by creating a local tunnel and populates the LLAMA_CPP_GRPC_SERVER environment variable as expected.
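For comparison, this is roughly what the p2p layer automates. With the plain distributed mode introduced in #2324 (no p2p), the llama.cpp RPC endpoints have to be wired up by hand; a sketch, where the worker command and the LLAMACPP_GRPC_SERVERS variable belong to that non-p2p mode and the addresses and ports are placeholders:

```bash
# Non-p2p sketch: expose a llama.cpp RPC worker manually on each host
# (listening address and port are placeholders).
local-ai llamacpp-worker 0.0.0.0 50052

# Then point the API server at the workers explicitly.
LLAMACPP_GRPC_SERVERS="worker1:50052,worker2:50052" local-ai run
```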