Concepts

The main focus of the Intelligence Layer is to enable developers to

  • implement their LLM use cases by building upon and composing existing functionalities
  • obtain insights into the runtime behavior of their implementations
  • iteratively improve their implementations or compare them to existing implementations by evaluating them against a given set of examples

How these focus points are realized in the Intelligence Layer is described in more detail in the following sections.

Task

At the heart of the Intelligence Layer is a Task. A task is a fairly generic concept: it simply transforms an input parameter into an output, much like a function in mathematics.

Task: Input -> Output

In Python this is realized by an abstract class with type parameters and the abstract method do_run, in which the actual transformation is implemented:

class Task(ABC, Generic[Input, Output]):

    @abstractmethod
    def do_run(self, input: Input, task_span: TaskSpan) -> Output:
        ...

Input and Output are normal Python datatypes that can be serialized from and to JSON. For this the Intelligence Layer relies on Pydantic. The permissible types are captured by the type alias PydanticSerializable.

The second parameter, task_span, is used for tracing, which is described below.

do_run is the method that implements a concrete task and has to be provided by the user. It is executed via the task's public run method:

class Task(ABC, Generic[Input, Output]):
    @final
    def run(self, input: Input, tracer: Tracer) -> Output:
      ...

The signatures of the do_run and run methods differ only in the tracing parameters.
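
To make this concrete, here is a minimal sketch of a custom task. The import path intelligence_layer.core is an assumption based on the library's package layout, and the greeting models are invented for illustration; a real task would call an LLM inside do_run.

from pydantic import BaseModel

# Assumed import path; adapt it to your installation.
from intelligence_layer.core import NoOpTracer, Task, TaskSpan


class GreetingInput(BaseModel):
    name: str


class GreetingOutput(BaseModel):
    greeting: str


class GreetingTask(Task[GreetingInput, GreetingOutput]):
    def do_run(self, input: GreetingInput, task_span: TaskSpan) -> GreetingOutput:
        # A real task would call an LLM here; this one just builds a string.
        return GreetingOutput(greeting=f"Hello {input.name}!")


# run is inherited from Task and wraps do_run with tracing.
output = GreetingTask().run(GreetingInput(name="World"), NoOpTracer())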

Levels of abstraction

Even though the concept is generic, the main purpose of a task is of course to make use of an LLM for the transformation. Tasks are defined at different levels of abstraction. There are higher-level tasks (also called Use Cases) that reflect a typical user problem, and there are lower-level tasks that are about interfacing with an LLM on a more generic or even technical level.

Examples for higher level tasks (Use Cases) are:

  • Answering a question based on a given document: QA: (Document, Question) -> Answer
  • Generating a summary of a given document: Summary: Document -> Summary

Examples for lower level tasks are:

  • Letting the model generate text based on an instruction and some context: Instruct: (Context, Instruction) -> Completion
  • Chunking a text into smaller pieces at optimized boundaries (typically to make it fit into an LLM's context size): Chunk: Text -> [Chunk]

Composability

Typically you would build higher-level tasks from lower-level tasks. Given a task, you can draw a dependency graph that illustrates which sub-tasks it uses and which sub-tasks those use in turn. This graph typically forms a hierarchy or, more generally, a directed acyclic graph. The following drawing shows this graph for the Intelligence Layer's RecursiveSummarize task:
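
In code, composition simply means that a task creates its sub-tasks and invokes their run methods from within its own do_run, passing the task_span along as the tracer. The following sketch is deliberately naive; the tasks and models are invented for illustration and are not the library's RecursiveSummarize implementation. The import path is assumed as before.

from pydantic import BaseModel

# Assumed import path; adapt it to your installation.
from intelligence_layer.core import Task, TaskSpan


class ChunkInput(BaseModel):
    text: str


class ChunkOutput(BaseModel):
    chunks: list[str]


class NaiveChunk(Task[ChunkInput, ChunkOutput]):
    """Lower-level task: splits a text into fixed-size pieces."""

    def do_run(self, input: ChunkInput, task_span: TaskSpan) -> ChunkOutput:
        size = 1000
        return ChunkOutput(
            chunks=[input.text[i : i + size] for i in range(0, len(input.text), size)]
        )


class SummarizeInput(BaseModel):
    document: str


class SummarizeOutput(BaseModel):
    summary: str


class NaiveSummarize(Task[SummarizeInput, SummarizeOutput]):
    """Higher-level task composed of the lower-level chunking task."""

    def __init__(self) -> None:
        super().__init__()
        self._chunk = NaiveChunk()

    def do_run(self, input: SummarizeInput, task_span: TaskSpan) -> SummarizeOutput:
        # The TaskSpan is itself a Tracer, so it is passed on to the sub-task.
        chunked = self._chunk.run(ChunkInput(text=input.document), task_span)
        # A real implementation would summarize each chunk with an LLM; here
        # the "summary" is just the first sentence of every chunk.
        summary = " ".join(chunk.split(".")[0] for chunk in chunked.chunks)
        return SummarizeOutput(summary=summary)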

Trace

A task implements a workflow: it processes its input, passes data on to sub-tasks, processes the outputs of the sub-tasks and builds its own output. This workflow can be represented in a trace. For this, a task's run method takes a Tracer that takes care of storing details about the steps of this workflow, such as the tasks that have been invoked along with their input, output and timing information. The following illustration shows the trace of a MultiChunkQa task:

To represent this, tracing defines the following concepts:

  • A Tracer is passed to a task's run method and provides methods for opening Spans or TaskSpans.
  • A Span is a Tracer and allows grouping multiple logs and runtime durations together as a single, logical step in the workflow.
  • A TaskSpan is a Span that allows grouping multiple logs together with the task's specific input and output. An opened TaskSpan is passed to Task.do_run. Since a TaskSpan is a Tracer, a do_run implementation can pass this instance on to the run methods of sub-tasks.

The following diagram illustrates their relationship:

Each of these concepts is implemented in the form of an abstract base class, and the Intelligence Layer provides several concrete implementations that store the actual traces in different backends. For each backend, each of the three abstract classes Tracer, Span and TaskSpan needs to be implemented. Here only the top-level Tracer implementations are listed:

  • The NoOpTracer can be used when tracing information should not be stored at all.
  • The InMemoryTracer stores all traces in an in-memory data structure and is most helpful in tests or Jupyter notebooks.
  • The FileTracer stores all traces in a JSON file.
  • The OpenTelemetryTracer uses an OpenTelemetry Tracer to store the traces in an OpenTelemetry backend.
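
As a sketch of how these are used, a tracer is simply passed to a task's run method. GreetingTask and GreetingInput refer to the toy example from the Task section; the import path and the no-argument constructors are assumptions, so check the API reference for details such as the FileTracer's target path or the OpenTelemetryTracer's setup.

# Assumed import path; adapt it to your installation.
from intelligence_layer.core import InMemoryTracer, NoOpTracer

task = GreetingTask()  # the toy task from the earlier sketch

# Store no tracing information at all.
task.run(GreetingInput(name="World"), NoOpTracer())

# Keep all spans and logs in memory, e.g. for inspection in a notebook.
tracer = InMemoryTracer()
task.run(GreetingInput(name="World"), tracer)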

Evaluation

An important part of the Intelligence Layer is tooling that helps to evaluate custom tasks. Evaluation measures how well the implementation of a task performs on real-world examples. The outcome of an entire evaluation process is an aggregated evaluation result that consists of metrics aggregated over all examples.

The evaluation process helps to:

  • optimize a task's implementation by verifying whether changes improve its performance.
  • compare the performance of one implementation of a task with that of other (already existing) implementations.
  • compare the performance of models for a given task implementation.
  • verify how changes to the environment (new model version, new finetuning version) affect the performance of a task.

Dataset

The basis of an evaluation is a set of examples for the specific task-type to be evaluated. A single Example consists of:

  • an instance of the Input for the specific task and
  • optionally an expected output, which can be anything that makes sense in the context of the specific evaluation (e.g. in case of classification this could be the correct classification result, in case of QA this could be a golden answer; if an evaluation only compares the results of different runs, it can also be empty)

To enable reproducible evaluations, datasets are immutable. A single dataset can be used to evaluate all tasks of the same type, i.e. with the same Input and Output types.
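
As an illustration, an Example for a QA task could look like the following sketch. The import path intelligence_layer.evaluation and the field names input and expected_output are assumptions; the QA input model is invented for the example.

from pydantic import BaseModel

# Assumed import path and field names; adapt them to your installation.
from intelligence_layer.evaluation import Example


class QaInput(BaseModel):
    document: str
    question: str


# The expected output is optional and can be anything that makes sense for
# the evaluation at hand, e.g. a golden answer for a QA task.
example = Example(
    input=QaInput(
        document="Paris is the capital of France.",
        question="What is the capital of France?",
    ),
    expected_output="Paris",
)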

Evaluation Process

The Intelligence Layer supports different kinds of evaluation techniques. The most important ones are:

  • Computing absolute metrics for a task, where the aggregated result can be compared with the aggregated results of previous evaluations in a way that they can be ordered. Text classification is a typical use case for this: the aggregated result could contain metrics like accuracy, which can easily be compared with other aggregated results.
  • Comparing the individual outputs of different runs (all based on the same dataset) in a single evaluation process and producing a ranking of all runs as an aggregated result. This technique is useful when it is hard to come up with an absolute metric for a single output, but it is easier to compare two outputs and decide which one is better. An example use case is summarization.

To support these techniques the Intelligence Layer differentiates between three consecutive steps:

  1. Run a task by feeding it all inputs of a dataset and collecting all outputs.
  2. Evaluate the outputs of one or several runs and produce an evaluation result for each example. Typically a single run is evaluated if absolute metrics can be computed, and several runs are evaluated when their outputs are to be compared.
  3. Aggregate the evaluation results of one or several evaluation runs into a single object containing the aggregated metrics. Aggregating over several evaluation runs supports amending a previous comparison result with comparisons of new runs without re-executing the previous comparisons.

The following table shows how these three steps are represented in code:

Step          Executor    Custom Logic      Repository
1. Run        Runner      Task              RunRepository
2. Evaluate   Evaluator   EvaluationLogic   EvaluationRepository
3. Aggregate  Aggregator  AggregationLogic  AggregationRepository

Columns explained

  • "Executor" lists concrete implementations provided by the Intelligence Layer.
  • "Custom Logic" lists abstract classes that need to be implemented with the custom logic.
  • "Repository" lists abstract classes for storing intermediate results. The Intelligence Layer provides different implementations for these. See the next section for details.

Data Storage

During an evaluation process a lot of intermediate data is created before the final aggregated result can be produced. To avoid repeating expensive computations when new results are produced on the basis of previous ones, all intermediate results are persisted. For this, the different executor classes make use of repositories.

There are the following Repositories:

  • The DatasetRepository offers methods to manage datasets. The Runner uses it to read all Examples of a dataset and feed them to the Task.
  • The RunRepository stores a task's output (in the form of an ExampleOutput) for each Example of a dataset; these outputs are created when a Runner runs a task on that dataset. At the end of a run, a RunOverview containing metadata about the run is stored. Given a list of runs it should evaluate, the Evaluator reads these outputs to create an evaluation result for each Example of the dataset.
  • The EvaluationRepository enables the Evaluator to store the evaluation result (in the form of an ExampleEvaluation) for each example, along with an EvaluationOverview. The Aggregator uses this repository to read the evaluation results.
  • The AggregationRepository stores the AggregationOverview containing the aggregated metrics on behalf of the Aggregator.
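
Putting the executors from the table above and these repositories together, a typical setup looks roughly like the sketch below. It is an outline rather than verified API usage: the in-memory repository class names, the constructor arguments and the method names (run_dataset, evaluate_runs, aggregate_evaluation) are assumptions based on the concepts described here, and the task, the examples and the EvaluationLogic/AggregationLogic implementations have to be provided by you. Consult the API reference for the exact signatures.

# All names and signatures below are assumptions; see the API reference.
from intelligence_layer.evaluation import (
    Aggregator,
    Evaluator,
    InMemoryAggregationRepository,
    InMemoryDatasetRepository,
    InMemoryEvaluationRepository,
    InMemoryRunRepository,
    Runner,
)

dataset_repository = InMemoryDatasetRepository()
run_repository = InMemoryRunRepository()
evaluation_repository = InMemoryEvaluationRepository()
aggregation_repository = InMemoryAggregationRepository()

# `examples`, `task`, `evaluation_logic` and `aggregation_logic` are the
# user-provided pieces from the "Custom Logic" column of the table above.
dataset = dataset_repository.create_dataset(examples=examples, dataset_name="my-dataset")

# 1. Run: produce an ExampleOutput per Example and a RunOverview.
runner = Runner(task, dataset_repository, run_repository, "my-task")
run_overview = runner.run_dataset(dataset.id)

# 2. Evaluate: produce an ExampleEvaluation per Example and an EvaluationOverview.
evaluator = Evaluator(
    dataset_repository, run_repository, evaluation_repository, "my-evaluation", evaluation_logic
)
evaluation_overview = evaluator.evaluate_runs(run_overview.id)

# 3. Aggregate: produce an AggregationOverview with the aggregated metrics.
aggregator = Aggregator(
    evaluation_repository, aggregation_repository, "my-aggregation", aggregation_logic
)
aggregation_overview = aggregator.aggregate_evaluation(evaluation_overview.id)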

The following diagrams illustrate how the different concepts play together for the different types of evaluation.

Process of an absolute Evaluation
  1. The Runner reads the Examples of a dataset from the DatasetRepository and runs a Task for each Example.input to produce Outputs.
  2. Each Output is wrapped in an ExampleOutput and stored in the RunRepository.
  3. The Evaluator reads the ExampleOutputs for a given run from the RunRepository and the corresponding Example from the DatasetRepository and uses the EvaluationLogic to compute an Evaluation.
  4. Each Evaluation gets wrapped in an ExampleEvaluation and stored in the EvaluationRepository.
  5. The Aggregator reads all ExampleEvaluations for a given evaluation and feeds them to the AggregationLogic to produce an AggregatedEvaluation.
  6. The AggregatedEvaluation is wrapped in an AggregationOverview and stored in the AggregationRepository.

The next diagram illustrates the more complex case of a relative evaluation.

Process of a relative Evaluation
  1. Multiple Runners read the same dataset and produce the corresponding Outputs for different Tasks.
  2. For each run all Outputs are stored in the RunRepository.
  3. The Evaluator gets as input previous evaluations (that were produced on the basis of the same dataset, but by different Tasks) and the new runs of the current task.
  4. Given the previous evaluations and the new runs, the Evaluator can read the ExampleOutputs of both the new runs and the runs associated with the previous evaluations, collect all that belong to a single Example, and pass them along with the Example to the EvaluationLogic to compute an Evaluation.
  5. Each Evaluation gets wrapped in an ExampleEvaluation and is stored in the EvaluationRepository.
  6. The Aggregator reads all ExampleEvaluations from all involved evaluations and feeds them to the AggregationLogic to produce an AggregatedEvaluation.
  7. The AggregatedEvaluation is wrapped in an AggregationOverview and stored in the AggregationRepository.