The Aleph Alpha Intelligence Layer️ offers a comprehensive suite of development tools for crafting solutions that harness the capabilities of large language models (LLMs). With a unified framework for LLM-based workflows, it facilitates seamless AI product development, from prototyping and prompt experimentation to result evaluation and deployment.
The key features of the Intelligence Layer are:
- Composability: Streamline your journey from prototyping to scalable deployment. The Intelligence Layer SDK offers seamless integration with diverse evaluation methods, manages concurrency, and orchestrates smaller tasks into complex workflows.
- Evaluability: Continuously evaluate your AI applications against your quantitative quality requirements. With the Intelligence Layer SDK you can quickly iterate on different solution strategies, ensuring confidence in the performance of your final product. Take inspiration from the provided evaluations for summary and search when building custom evaluation logic for your own use case.
- Traceability: At the core of the Intelligence Layer is the belief that all AI processes must be auditable and traceable. We provide full observability by seamlessly logging each step of every workflow. This enhances your debugging capabilities and offers greater control post-deployment when examining model responses.
- Examples: Get started by following our hands-on examples, demonstrating how to use the Intelligence Layer SDK and interact with its API.
Table of contents
- Installation
- Getting started
- Models
- Example index
- References
- License
For Developers
Clone the Intelligence Layer repository from GitHub:
```bash
git clone git@github.com:Aleph-Alpha/intelligence-layer-sdk.git
```
The Intelligence Layer uses poetry as a package manager. Follow the official instructions to install it. Afterwards, simply run `poetry install` to install all dependencies in a virtual environment:
```bash
poetry install
```
The environment can be activated via `poetry shell`. See the official poetry documentation for more information.
After running the local installation steps, you can configure whether you are using the Aleph Alpha API or an on-prem setup via environment variables.
Using the Aleph-Alpha API
In the Intelligence Layer, the Aleph Alpha API (https://api.aleph-alpha.com) is set as the default host URL. However, you will need an Aleph Alpha access token to run the examples.
Set your access token with:
```bash
export AA_TOKEN=<YOUR TOKEN HERE>
```
Using an on-prem setup
In case you want to use an on-prem endpoint, you will have to change the host URL by setting the `CLIENT_URL` environment variable:
```bash
export CLIENT_URL=<YOUR_ENDPOINT_URL_HERE>
```
The program will warn you in case no `CLIENT_URL` is explicitly set.
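If you want to see how these two variables are typically consumed, the sketch below constructs the underlying Aleph Alpha API client by hand. The fallback-to-public-host logic is an assumption for illustration, not necessarily the SDK's exact internal behavior.
```python
import os

from aleph_alpha_client import Client  # the underlying Aleph Alpha API client

# Minimal sketch: read the token from AA_TOKEN and fall back to the public
# API host when CLIENT_URL is not set explicitly (assumed default).
client = Client(
    token=os.environ["AA_TOKEN"],
    host=os.getenv("CLIENT_URL", "https://api.aleph-alpha.com"),
)
```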
After correctly setting up the environment variables, you can run the Jupyter notebooks. For this, run `jupyter lab` inside the virtual environment and go to the examples directory:
```bash
cd src/documentation && poetry run jupyter lab
```
To install the Aleph-Alpha Intelligence Layer from the JFrog Artifactory in your project, you need an Artifactory identity token. To generate this, log into Artifactory in your browser and open the user menu by clicking in the top-right corner. Then select 'Edit Profile' from the resulting dropdown menu. On the following page you can generate an identity token by clicking the respective button after entering your password. Save the token in a secure place, e.g. your password manager.
With the token generated, you have to add this information to your poetry setup via the following four steps. First, add the Artifactory as a source to your project via:
```bash
poetry source add --priority=explicit artifactory https://alephalpha.jfrog.io/artifactory/api/pypi/python/simple
```
Second, to install the poetry environment, export your JFrog username and the generated token (NOT your actual password):
```bash
export POETRY_HTTP_BASIC_ARTIFACTORY_USERNAME=your@username.here
export POETRY_HTTP_BASIC_ARTIFACTORY_PASSWORD=your-token-here
```
Third, add the Intelligence Layer to the project:
```bash
poetry add --source artifactory intelligence-layer
```
Fourth, execute:
```bash
poetry install
```
Now the Intelligence Layer should be available as a Python package and ready to use:
```python
from intelligence_layer.core import Task
```
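As a quick smoke test, you can implement and run a toy `Task`. This is a minimal sketch based on the quickstart tutorial; it assumes that `Task`, `TaskSpan` and `NoOpTracer` are exported from `intelligence_layer.core` in your installed version.
```python
from intelligence_layer.core import NoOpTracer, Task, TaskSpan


class ReverseText(Task[str, str]):
    """Toy task that simply reverses its input string."""

    def do_run(self, input: str, task_span: TaskSpan) -> str:
        return input[::-1]


# Every task is executed against a tracer; NoOpTracer discards the trace.
print(ReverseText().run("hello", NoOpTracer()))  # -> "olleh"
```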
In VSCode, to enable auto-import up to the second depth, where all symbols are exported, add the following entry to your `./.vscode/settings.json`:
"python.analysis.packageIndexDepths": [
{
"name": "intelligence_layer",
"depth": 2
}
]
To use the Intelligence Layer in Docker, a few settings are needed so as not to leak your GitHub token. You will need your GitHub token set in your environment. In order to modify the `git config`, add the following to your Docker container:
```dockerfile
RUN apt-get -y update
RUN apt-get -y install git curl gcc python3-dev
RUN pip install poetry
RUN poetry install --no-dev --no-interaction --no-ansi \
    && rm -f ~/.gitconfig
```
Not sure where to start? Familiarize yourself with the Intelligence Layer SDK using the notebooks below as interactive tutorials. If you prefer, you can also read about the concepts first.
The tutorials aim to guide you through implementing several common use cases with the Intelligence Layer SDK. They introduce you to key concepts and enable you to create your own use cases. In general, the tutorials are built so that you can simply hop into the topic you are most interested in. However, for starters we recommend reading through the Summarization tutorial first. It explains the core concepts of the Intelligence Layer in more depth; for the other tutorials we assume that these concepts are known.
Order | Topic | Description | Notebook 📓 |
---|---|---|---|
1 | Summarization | Summarize a document | summarization.ipynb |
2 | Question Answering | Various approaches for QA | qa.ipynb |
3 | Classification | Learn about two methods of classification | classification.ipynb |
4 | Evaluation | Evaluate LLM-based methodologies | evaluation.ipynb |
5 | Quickstart Task | Build a custom Task for your use case | quickstart_task.ipynb |
6 | Document Index | Connect your proprietary knowledge base | document_index.ipynb |
7 | Human Evaluation | Connect to Argilla for manual evaluation | human_evaluation.ipynb |
8 | Performance tips | Contains some small tips for performance | performance_tips.ipynb |
9 | Deployment | Shows how to deploy a Task in a minimal FastAPI app. | fastapi_tutorial.ipynb |
10 | Issue Classification | Deploy a Task in Kubernetes to classify Jira issues | Found in adjacent repository |
The how-tos are quick lookups about how to do things. Compared to the tutorials, they are shorter and do not explain the concepts they use in depth.
How to... | Description |
---|---|
Tasks | |
...define a task | How to come up with a new task and formulate it |
...implement a task | Implement a formulated task and make it run with the Intelligence Layer |
...debug and log a task | Tools for logging and debugging in tasks |
...run the trace viewer | Downloading and running the trace viewer for debugging traces |
Analysis Pipeline | |
...implement a simple evaluation and aggregation logic | Basic examples of evaluation and aggregation logic |
...create a dataset | Create a dataset used for running a task |
...run a task on a dataset | Run a task on a whole dataset instead of single examples |
...evaluate multiple runs | Evaluate (multiple) runs in a single evaluation |
...aggregate multiple evaluations | Aggregate (multiple) evaluations in a single aggregation |
...retrieve data for analysis | Retrieve experiment data in multiple different ways |
...implement a custom human evaluation | Necessary steps to create an evaluation with humans as a judge via Argilla |
Currently, we support a number of models accessible via the Aleph Alpha API. Depending on your local setup, you may even have additional models available.
Model | Description |
---|---|
LuminousControlModel | Any control-type model based on the first Luminous generation, specifically luminous-base-control , luminous-extended-control and luminous-supreme-control . Multilingual support. |
Llama2InstructModel | Llama-2 based models prompted for one-turn instruction answering. Includes llama-2-7b-chat , llama-2-13b-chat and llama-2-70b-chat . Best suited for English tasks. |
Llama3InstructModel | Llama-3 based models prompted for one-turn instruction answering. Includes llama-3-8b-instruct and llama-3-70b-instruct . Best suited for English tasks and recommended over llama-2 models. |
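As an illustration, the model classes from the table are typically instantiated with the name of the concrete checkpoint they should use. The sketch below assumes a string model name as the first constructor argument and a default client created from the environment variables described above; check the reference documentation for the exact signatures.
```python
from intelligence_layer.core import Llama3InstructModel, LuminousControlModel

# Checkpoint names taken from the table above (assumed constructor arguments).
luminous = LuminousControlModel("luminous-base-control")
llama = Llama3InstructModel("llama-3-8b-instruct")
```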
To give you a starting point for using the Intelligence Layer, we provide some pre-configured `Task`s that are ready to use out of the box, as well as an accompanying "Getting started" guide in the form of Jupyter Notebooks.
Type | Task | Description |
---|---|---|
Classify | EmbeddingBasedClassify | Classify a short text by computing its similarity with example texts for each class. |
Classify | PromptBasedClassify | Classify a short text by assessing each class' probability using zero-shot prompting. |
Classify | PromptBasedClassifyWithDefinitions | Classify a short text by assessing each class' probability using zero-shot prompting. Each class is defined by a natural language description. |
Classify | KeywordExtract | Generate matching labels for a short text. |
QA | MultipleChunkRetrieverQa | Answer a question based on an entire knowledge base. Recommended for most RAG-QA use-cases. |
QA | LongContextQa | Answer a question based on one document of any length. |
QA | MultipleChunkQa | Answer a question based on a list of short texts. |
QA | SingleChunkQa | Answer a question based on a short text. |
QA | RetrieverBasedQa (deprecated) | Answer a question based on a document base using a BaseRetriever implementation. |
Search | Search | Search for texts in a document base using a BaseRetriever implementation. |
Search | ExpandChunks | Expand chunks retrieved with a BaseRetriever implementation. |
Summarize | SteerableLongContextSummarize | Condense a long text into a summary with a natural language instruction. |
Summarize | SteerableSingleChunkSummarize | Condense a short text into a summary with a natural language instruction. |
Summarize | RecursiveSummarize | Recursively condense a text into a summary. |
Note that we do not expect the above use cases to solve all of your issues. Instead, we encourage you to think of our pre-configured use cases as a foundation to fast-track your development process. By leveraging these tasks, you gain insights into the framework's capabilities and best practices.
We encourage you to copy and paste these use cases directly into your own project. From here, you can customize everything, including the prompt, model, and more intricate functional logic. For more information, check the tutorials and the how-tos.
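As a hedged sketch of what copying and using a pre-configured task can look like, the snippet below runs `SingleChunkQa` on a short text. The import path (`intelligence_layer.examples` vs. `intelligence_layer.use_cases`) and the exact input fields vary between versions, so treat both as assumptions and follow the QA tutorial for your installed release.
```python
from intelligence_layer.core import InMemoryTracer, TextChunk
from intelligence_layer.examples import SingleChunkQa, SingleChunkQaInput  # path may differ per version

# InMemoryTracer keeps the full trace of the run for later inspection.
tracer = InMemoryTracer()
qa = SingleChunkQa()
output = qa.run(
    SingleChunkQaInput(
        chunk=TextChunk("The Intelligence Layer is an SDK for building LLM-based applications."),
        question="What is the Intelligence Layer?",
    ),
    tracer,
)
print(output.answer)
```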
The full code documentation can be found in our read-the-docs here.
This project can only be used after signing the agreement with Aleph Alpha®. Please refer to the LICENSE file for more details.
We follow the PEP 8 – Style Guide for Python Code. In addition, there are the following naming conventions:
- Class method names:
  - Use only substantives for a method name that has no side effects and returns some object
    - E.g., `evaluation_overview`, which returns an evaluation overview object
  - Use a verb for a method name if it has side effects and returns nothing
    - E.g., `store_evaluation_overview`, which saves a given evaluation overview (and returns nothing)
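As a hypothetical illustration of this convention (the class and its methods below are made up for the example and are not part of the SDK):
```python
from dataclasses import dataclass, field


@dataclass
class EvaluationOverview:
    id: str
    score: float


@dataclass
class EvaluationRepository:
    """Hypothetical repository illustrating the naming convention above."""

    _overviews: dict[str, EvaluationOverview] = field(default_factory=dict)

    def evaluation_overview(self, evaluation_id: str) -> EvaluationOverview:
        # Substantive name: no side effects, returns an object.
        return self._overviews[evaluation_id]

    def store_evaluation_overview(self, overview: EvaluationOverview) -> None:
        # Verb name: has a side effect (persists the overview) and returns nothing.
        self._overviews[overview.id] = overview
```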
In VSCode:
- Sidebar > Testing
- Select pytest as the framework for the tests
- Select `intelligence_layer/tests` as the source of the tests
You can then run the tests from the sidebar.
In a terminal: in order to run a local proxy of the CI pipeline (required to merge), you can run
```bash
scripts/all.sh
```
This will run the linters and all tests. The scripts to run single steps can also be found in the `scripts` folder.