
Releases: IntelLabs/fastRAG

v3.1.2

25 Nov 10:09 · 3abaae3

What's Changed

  • Updated the IPEX embedder to work with the new Haystack version (2.7) by @gadmarkovits in #74

Full Changelog: v3.1.1...v3.1.2

v3.1.1

24 Nov 14:22 · 1a6ab10

Full Changelog: v3.1.0...v3.1.1

v3.1.0

07 Nov 14:21 · 8cf1762

Full Changelog: v3.0.2...v3.1.0

v3.0.2

09 Jul 09:49 · 722860d

Full Changelog: v3.0.1...v3.0.2

v3.0.1

02 Jul 13:38 · 07ceba6

Full Changelog: v3.0...v3.0.1

v3.0.0

22 May 15:13 · 8087aea

Compatibility with Haystack v2

  • ⚡ All our classes are now compatible with 🤖 Haystack v2, including the example notebooks and YAML pipeline configurations; a minimal pipeline sketch follows this list.
  • 💻 We based our demos on the Chainlit UI library; examples include RAG chat with multi-modality! 🖼️
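
As a rough illustration of the new interface (the in-memory store, the toy document, and the prompt template below are placeholders, not fastRAG's shipped configuration), a minimal Haystack v2 pipeline looks like this:

```python
# A minimal Haystack v2 pipeline sketch; component choices here are
# illustrative placeholders, not fastRAG defaults.
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

store = InMemoryDocumentStore()
store.write_documents([Document(content="fastRAG adds Intel-optimized RAG components.")])

pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=store))
pipe.add_component("prompt", PromptBuilder(
    template="Context:\n{% for d in documents %}{{ d.content }}\n{% endfor %}"
             "Question: {{ question }}"))
pipe.connect("retriever.documents", "prompt.documents")

question = "What does fastRAG add?"
result = pipe.run({"retriever": {"query": question},
                   "prompt": {"question": question}})
print(result["prompt"]["prompt"])  # the rendered prompt, ready for an LLM
```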

❤️ Feel free to report any issue, bug or question!

v2.0.0

24 Dec 16:24 · 373e546

fastRAG 2.0: Let's do RAG Efficiently 🔥

fastRAG 2.0 includes new, highly anticipated efficiency-oriented components, an updated chat-like demo experience with multi-modality, and improvements to existing components.

The library now uses efficient Intel optimizations via Intel Extension for PyTorch (IPEX), 🤗 Optimum Intel, and 🤗 Optimum-Habana to run as efficiently as possible on Intel® Xeon® Processors and Intel® Gaudi® AI accelerators.

🚀 Intel Habana Gaudi 1 and Gaudi 2 Support

fastRAG is the first RAG framework to support Habana Gaudi accelerators for running LLMs efficiently; more details here.
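
For a feel of the device side, here is a rough sketch; it assumes a machine with the Habana SynapseAI software stack installed, and gpt2 stands in for a real LLM (neither is fastRAG's actual setup):

```python
# A rough sketch of running generation on a Gaudi device with PyTorch.
# Importing habana_frameworks.torch.core registers the "hpu" device.
import torch
import habana_frameworks.torch.core as htcore  # requires a SynapseAI install
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")               # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2").to("hpu")

inputs = tokenizer("fastRAG on Gaudi:", return_tensors="pt").to("hpu")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```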

🌀 Running LLMs with the ONNX Runtime and LlamaCPP Backends

Added support for running quantized LLMs on the ONNX Runtime and LlamaCPP backends, bringing higher efficiency and speed to all your RAG pipelines.
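
As a rough sketch of the LlamaCPP route (the llama-cpp-python bindings and the GGUF file path are assumptions here, not fastRAG's exact wiring):

```python
# A minimal llama-cpp-python sketch for running a quantized GGUF model on CPU.
# The model path is hypothetical; any quantized GGUF checkpoint works.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Answer using only the retrieved context:\n<context>\nQuestion: ...",
          max_tokens=128)
print(out["choices"][0]["text"])
```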

⚡ CPU Efficient Embedders

We added support for running bi-encoder embedders and cross-encoder rankers as efficiently as possible on Intel CPUs using Intel-optimized software.

We integrated the optimized embedders into two dedicated pipeline components; a sketch of the underlying CPU optimization follows.
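
As a hedged sketch of the idea (the MiniLM model name is a placeholder, and this is not fastRAG's component code), IPEX can optimize a bi-encoder for Xeon inference like so:

```python
# A minimal sketch: optimizing a Hugging Face bi-encoder for Intel CPUs
# with Intel Extension for PyTorch (IPEX); the model name is a placeholder.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModel, AutoTokenizer

name = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

# ipex.optimize applies operator fusion and bf16-friendly weight layouts.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer(["fastRAG runs embedders efficiently on Xeon."],
                   return_tensors="pt", padding=True, truncation=True)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    hidden = model(**inputs).last_hidden_state
embedding = hidden.mean(dim=1)  # simple mean pooling over tokens
```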

⏳ REPLUG

An implementation of REPLUG, an advanced technique for ensemble prompting of retrieved documents: the documents are processed in parallel, and their next-token predictions are combined for better results.
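
A minimal sketch of the idea (gpt2 stands in for a real LLM, the retrieval weights are assumed to be pre-normalized, and this is not fastRAG's exact component):

```python
# A REPLUG-style ensemble sketch: each retrieved document is prepended to the
# query, the per-document next-token distributions are computed in one batch,
# and the distributions are mixed with weights from the retrieval scores.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
tokenizer.pad_token = tokenizer.eos_token          # gpt2 has no pad token
tokenizer.padding_side = "left"                    # keep the last position real
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

query = "Q: Who wrote On the Origin of Species? A:"
docs = ["Charles Darwin published On the Origin of Species in 1859.",
        "The voyage of HMS Beagle shaped Darwin's thinking."]
weights = torch.tensor([0.8, 0.2])  # normalized retrieval scores (assumed)

batch = tokenizer([d + "\n" + query for d in docs],
                  return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**batch).logits[:, -1, :]          # next-token logits per doc
probs = torch.softmax(logits, dim=-1)                  # per-document distributions
ensemble = (weights.unsqueeze(-1) * probs).sum(dim=0)  # weighted mixture
print(tokenizer.decode(ensemble.argmax().item()))
```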

🏆 New Demos

We updated our demos (and the demo page) with two new demos that provide a chat-like experience and fuse multi-modal RAG.

🐠 Enhancements

  • Added documentation for most models and components, with ready-to-run examples and notebooks!
  • Support for the Fusion-in-Decoder (FiD) model using a dedicated invocation layer.
  • Various bug fixes and compatibility updates supporting the Haystack framework.

Full Changelog: v1.3.0...v2.0

v1.3.0

20 Jun 12:06 · 81b7a1a

Full Changelog: v1.2.1...v1.3.0

v1.2.1

13 Jun 10:57 · bf4df83

Full Changelog: v1.2.0...v1.2.1

v1.2.0: New: Retrieval Augmented Generation with LLM

21 May 11:05 · af86bc2
Retrieval Augmented Generation with LLM Demo (#16)

- Added a new RAG + prompt + LLM UI (demo).
- Added an example config and notebook.
- Updated the main README with an "updates" sub-section.
- Updated `run_demo.py` to include all options for running a demo (UI, UI + service, UI + <user_defined_service>).