Releases: mudler/LocalAI
v2.8.0
This release adds support for Intel GPUs and deprecates the old ggml-based backends, which are by now superseded by llama.cpp (which now supports more architectures out-of-the-box). See also #1651.
Images are now based on Ubuntu 22.04 LTS instead of Debian bullseye.
Intel GPUs
There are now images tagged with "sycl": the sycl-f16 and sycl-f32 variants indicate f16 or f32 support respectively.
For example, to start phi-2 on an Intel GPU it is enough to run the container image like this:
docker run -e DEBUG=true -ti -v $PWD/models:/build/models -p 8080:8080 -v /dev/dri:/dev/dri --rm quay.io/go-skynet/local-ai:master-sycl-f32-ffmpeg-core phi-2
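If your GPU supports f16, the sycl-f16 image can be started the same way; a sketch, assuming the f16 tag follows the same layout as the f32 one above:
docker run -e DEBUG=true -ti -v $PWD/models:/build/models -p 8080:8080 -v /dev/dri:/dev/dri --rm quay.io/go-skynet/local-ai:master-sycl-f16-ffmpeg-core phi-2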
Note
First off, a massive thank you to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsorship program can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome, together.
Thanks a ton, and here's to more exciting times ahead with LocalAI! 🚀
What's Changed
Exciting New Features 🎉
- feat(sycl): Add support for Intel GPUs with sycl (#1647) by @mudler in #1660
- Drop old falcon backend (deprecated) by @mudler in #1675
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1678
- Drop ggml-based gpt2 and starcoder (supported by llama.cpp) by @mudler in #1679
- fix(Dockerfile): sycl dependencies by @mudler in #1686
- feat: Use ubuntu as base for container images, drop deprecated ggml-transformers backends by @mudler in #1689
👒 Dependencies
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1656
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1665
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1669
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1673
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1683
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1688
- ⬆️ Update mudler/go-stable-diffusion by @localai-bot in #1674
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1661
- feat(mamba): Add bagel-dpo-2.8b by @richiejp in #1671
- fix (docs): fixed broken links github/ -> github.com/ by @Wansmer in #1672
- Fix HTTP links in README.md by @vfiftyfive in #1677
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1681
- ci: cleanup worker before run by @mudler in #1685
- Revert "fix(Dockerfile): sycl dependencies" by @mudler in #1687
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1691
New Contributors
- @richiejp made their first contribution in #1671
- @Wansmer made their first contribution in #1672
- @vfiftyfive made their first contribution in #1677
Full Changelog: v2.7.0...v2.8.0
v2.7.0
This release adds LLM text-generation support to the transformers backend as well!
For instance, you can now run codellama-7b with transformers with:
docker run -ti -p 8080:8080 --gpus all localai/localai:v2.7.0-cublas-cuda12 codellama-7b
More examples are available in the quickstart: https://localai.io/basics/getting_started/#running-models.
Note: as llama.cpp is undergoing changes that could possibly cause breakage, this release does not include the changes from ggerganov/llama.cpp#5138 (future versions will).
What's Changed
Exciting New Features 🎉
- feat(transformers): support also text generation by @mudler in #1630
- transformers: correctly load automodels by @mudler in #1643
- feat(startup): fetch model definition remotely by @mudler in #1654
👒 Dependencies
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1642
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1644
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1652
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1655
Other Changes
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1632
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1631
Full Changelog: v2.6.1...v2.7.0
v2.6.1
This is a patch release containing bug fixes around parallel request support with llama.cpp models.
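If you serve multiple concurrent clients, parallel handling for llama.cpp models can be toggled through environment variables; a minimal sketch, assuming the PARALLEL_REQUESTS and LLAMACPP_PARALLEL variables from the LocalAI documentation and a v2.6.1-ffmpeg-core image tag:
docker run -ti -p 8080:8080 -e PARALLEL_REQUESTS=true -e LLAMACPP_PARALLEL=2 localai/localai:v2.6.1-ffmpeg-core phi-2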
What's Changed
Bug fixes 🐛
- fix(llama.cpp): Enable parallel requests by @tauven in #1616
- fix(llama.cpp): enable cont batching when parallel is set by @mudler in #1622
👒 Dependencies
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1623
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1619
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1620
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1626
Full Changelog: v2.6.0...v2.6.1
v2.6.0
What's Changed
Bug fixes 🐛
- move BUILD_GRPC_FOR_BACKEND_LLAMA logic to makefile: errors in this section now immediately fail the build by @dionysius in #1576
- prepend built binaries in PATH for BUILD_GRPC_FOR_BACKEND_LLAMA by @dionysius in #1593
Exciting New Features 🎉
- minor: replace shell pwd in Makefile with CURDIR for better windows compatibility by @dionysius in #1571
- Makefile: allow to build without GRPC_BACKENDS by @mudler in #1607
- feat: 🐍 add mamba support by @mudler in #1589
- feat(extra-backends): Improvements, adding mamba example by @mudler in #1618
👒 Dependencies
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1567
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1568
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1573
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1578
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1583
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1587
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1590
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1594
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1599
Other Changes
- Moving the how tos to self hosted by @lunamidori5 in #1574
- docs: missing golang requirement for local build for debian by @dionysius in #1596
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1597
- docs/examples: enhancements by @mudler in #1572
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1604
- Update README.md by @lunamidori5 in #1601
- docs: re-use original permalinks by @mudler in #1610
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1612
- Expanded and interlinked Docker documentation by @jamesbraza in #1614
- Modernized LlamaIndex integration by @jamesbraza in #1613
New Contributors
- @dionysius made their first contribution in #1571
Full Changelog: v2.5.1...v2.6.0
v2.5.1
Patch release to create /build/models in the container images.
What's Changed
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1562
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1563
Full Changelog: v2.5.0...v2.5.1
v2.5.0
What's Changed
This release adds more embedded models and shrinks image sizes.
You can now run phi-2 (see here for the full list) locally by starting LocalAI with:
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core phi-2
LocalAI now accepts as arguments a list of model short-hands and/or URLs pointing to valid YAML files. A popular way to host those files is GitHub Gists.
For instance, you can run llava by starting local-ai with:
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/llava.yaml
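Since the arguments form a list, short-hands and URLs can also be mixed in a single invocation; a sketch based on the "and/or" wording above:
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core phi-2 https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/llava.yaml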
👒 Dependencies
- deps(conda): use transformers-env with vllm,exllama(2) by @mudler in #1554
- deps(conda): use transformers environment with autogptq by @mudler in #1555
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1558
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1557
Full Changelog: v2.4.1...v2.5.0
v2.4.1
What's Changed
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1546
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1547
- docs: improve getting started by @mudler in #1553
Full Changelog: v2.4.0...v2.4.1
v2.4.0
What's Changed
Bug fixes 🐛
- fix(entrypoint): cd to backend dir before start by @mudler in #1530
- fix(diffusers): add omegaconf dependency by @mudler in #1540
Exciting New Features 🎉
- feat(preload): prepare models in galleries by @mudler in #1515
- feat: share models by url by @mudler in #1522
- feat(prepare): allow to specify additional files to download by @mudler in #1526
- feat: add trimsuffix by @mudler in #1528
👒 Dependencies
- deps(llama.cpp): update and sync grpc server by @mudler in #1527
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1531
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1517
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1518
- docs(phi-2): add example by @mudler in #1525
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1533
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1544
- ci(dockerhub): push images also to dockerhub by @mudler in #1542
Full Changelog: v2.3.1...v2.4.0
v2.3.1
What's Changed
Bug fixes 🐛
- Fix: Set proper Homebrew install location for x86 Macs by @cnatale in #1510
- fix(download): correctly check for not found error by @mudler in #1514
👒 Dependencies
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1509
Other Changes
- Update version.json (2.3.0) by @lunamidori5 in #1511
Full Changelog: v2.3.0...v2.3.1
v2.3.0
What's Changed
Bug fixes 🐛
- fix: exllama2 backend by @sozercan in #1484
- fix: guidance_scale not work in sd by @hibobmaster in #1488
- fix(image-gen): remove default to stablediffusion by @JZacharie in #1500
Exciting New Features 🎉
- env(conda): use transformers for vall-e-x by @mudler in #1481
- feat: add 🐸 coqui by @mudler in #1489
- feat: partial download by @muka in #1486
- feat: add tiny dream stable diffusion support by @M0Rf30 in #1283
👒 Dependencies
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1479
- ⬆️ Update donomii/go-rwkv.cpp by @localai-bot in #1478
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1469
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1482
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1483
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1492
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1495
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1496
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1501
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1503
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1477
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #1480
- docs: add langchain4j integration by @mudler in #1476
- ci(apple): speedups by @mudler in #1471
New Contributors
- @hibobmaster made their first contribution in #1488
- @muka made their first contribution in #1486
- @JZacharie made their first contribution in #1500
Full Changelog: v2.2.0...v2.3.0