Docs Observability #2172

Merged · 14 commits · Nov 12, 2024
124 changes: 124 additions & 0 deletions docs/docs/observability/01-overview.mdx
@@ -0,0 +1,124 @@
---
title: Overview
description: Learn how to instrument your application with Agenta for enhanced observability. This guide covers the benefits of observability, how Agenta helps, and how to get started.
---

```mdx-code-block
import DocCard from '@theme/DocCard';
import clsx from 'clsx';
import Image from "@theme/IdealImage";

```

## Why Observability?

Observability is the practice of monitoring and understanding the behavior of your LLM application. With Agenta, you can add a few lines of code to start tracking all inputs, outputs, and metadata of your application.
This allows you to:

- **Debug Effectively**: View exact prompts sent and contexts retrieved. For complex workflows like agents, you can trace the data flow and quickly identify root causes.
- **Bootstrap Test Sets**: Track real-world inputs and outputs and use them to bootstrap test sets that you can continuously iterate on.
- **Find Edge Cases**: Identify latency spikes and cost increases. Understand performance bottlenecks to optimize your app's speed and cost-effectiveness.
- **Track Costs and Latency Over Time**: Monitor how your app's expenses and response times change.
- **Compare App Versions**: Compare the production behavior of different versions of your application to see which performs better.

<Image
style={{ display: "block", margin: "10px auto" }}
img={require("/images/observability/observability.png")}
alt="Illustration of observability"
loading="lazy"
/>

## Observability in Agenta

Agenta's observability features are built on **OpenTelemetry (OTel)**, an open-source standard for application observability. This provides several advantages:

- **Wide Library Support**: Use many supported libraries right out of the box.
- **Vendor Neutrality**: Send your traces to platforms like New Relic or Datadog without code changes. Switch vendors at will (see the configuration sketch after this list).
- **Proven Reliability**: Use a mature and actively maintained SDK that's trusted in the industry.
- **Ease of Integration**: If you're familiar with OTel, you already know how to instrument your app with Agenta. No new concepts or syntax to learn—Agenta uses familiar OTel concepts like traces and spans.
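
Because the traces are standard OTel data, switching backends is a configuration change rather than a code change. The snippet below is a generic OpenTelemetry sketch: the endpoint and header values are placeholders, not Agenta-specific settings, and how Agenta's own SDK routes traces is covered in the Observability SDK guide.

```python
import os

# Standard OpenTelemetry environment variables: any OTLP-compatible backend
# (an OTel Collector, New Relic, Datadog, ...) can receive the same traces
# without touching application code. The values below are placeholders.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://otlp.example-backend.com"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "api-key=YOUR_BACKEND_API_KEY"
```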

## Key Concepts

**Traces**: A trace represents the complete journey of a request through your application. In our context, a trace corresponds to a single request to your LLM application.

**Spans**: A span is a unit of work within a trace. Spans can be nested, forming a tree-like structure. The root span represents the overall operation, while child spans represent sub-operations. Agenta enriches each span with cost information and metadata when you make LLM calls.
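
To make these terms concrete, here is a minimal sketch using the plain OpenTelemetry Python API (not the Agenta SDK). The outer span is the root of the trace, and the nested spans are its children:

```python
from opentelemetry import trace

tracer = trace.get_tracer("docs-example")

# The root span represents the whole request (one trace per request).
with tracer.start_as_current_span("handle-user-request"):
    # Child spans are sub-operations nested under the root span.
    with tracer.start_as_current_span("retrieve-context"):
        pass  # e.g. fetch documents to build the prompt
    with tracer.start_as_current_span("call-llm"):
        pass  # e.g. the chat-completion request itself
```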

## Next Steps

<section className='row'>
<article key='1' className="col col--6 margin-bottom--lg">

<DocCard
item={{
type: "link",
href: "/observability/quickstart",
label: "Quick Start",
description: "Get started with observability in Agenta",
}}
/>
</article>

<article key='2' className="col col--6 margin-bottom--lg">
<DocCard
item={{
type: "link",
href: "/observability/observability-sdk",
label: "Observability SDK",
description: "Learn how to use the Agenta observability SDK",
}}
/>
</article>
</section>

### Integrations

<section className='row'>

<article key="1" className="col col--6 margin-bottom--lg">
<DocCard
item={{
type: "link",
href: "/observability/integrations/openai",
label: "OpenAI",
description:
"Learn how to instrument your OpenAI application with Agenta",
}}
/>
</article>

<article key='2' className="col col--6 margin-bottom--lg">
<DocCard
item={{
type: "link",
href: "/evaluation/sdk-evaluation",
label: "LiteLLM",
description: "Learn how to instrument your LiteLLM application with Agenta",
}}
/>
</article>
</section>
<section className='row'>

<article key="1" className="col col--6 margin-bottom--lg">
<DocCard
item={{
type: "link",
href: "/observability/integrations/langchain",
label: "LangChain",
description:
"Learn how to instrument your LangChain application with Agenta",
}}
/>
</article>

<article key='2' className="col col--6 margin-bottom--lg">
<DocCard
item={{
type: "link",
href: "/observability/integrations/instructor",
label: "Instructor",
description: "Learn how to instrument your Instructor application with Agenta",
}}
/>
</article>
</section>
107 changes: 107 additions & 0 deletions docs/docs/observability/02-quickstart.mdx
@@ -0,0 +1,107 @@
---
title: Quick Start
---

```mdx-code-block
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Image from "@theme/IdealImage";

```

Agenta enables you to capture all inputs, outputs, and metadata from your LLM applications, **whether they're hosted within Agenta or running in your own environment**.

This guide will walk you through setting up observability for an OpenAI application running locally.

:::note
If you create an application through the Agenta UI, tracing is enabled by default. No additional setup is required—simply go to the observability view to see all your requests.
:::

## Step-by-Step Guide

### 1. Install Required Packages

First, install the Agenta SDK, OpenAI, and the OpenTelemetry instrumentor for OpenAI:

```bash
pip install -U agenta openai opentelemetry-instrumentation-openai
```

### 2. Configure Environment Variables

<Tabs>
<TabItem value="cloud" label="Agenta Cloud or Enterprise">
If you're using Agenta Cloud or Enterprise Edition, you'll need an API key:

1. Visit the [Agenta API Keys page](https://cloud.agenta.ai/settings?tab=apiKeys).
2. Click on **Create New API Key** and follow the prompts.

```python
import os

os.environ["AGENTA_API_KEY"] = "YOUR_AGENTA_API_KEY"
os.environ["AGENTA_HOST"] = "https://cloud.agenta.ai"
```

</TabItem>
<TabItem value="oss" label="Agenta OSS Running Locally">

```python
import os

os.environ["AGENTA_HOST"] = "http://localhost"
```

</TabItem>
</Tabs>

### 3. Instrument Your Application

Below is a sample script to instrument an OpenAI application:

```python
# highlight-start
import agenta as ag
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
import openai
# highlight-end

# highlight-start
ag.init()
# highlight-end

# highlight-start
OpenAIInstrumentor().instrument()
# highlight-end

client = openai.OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short story about AI Engineering."},
    ],
)

print(response.choices[0].message.content)
```

**Explanation**:

- **Import Libraries**: Import Agenta, OpenAI, and the OpenTelemetry instrumentor.
- **Initialize Agenta**: Call `ag.init()` to initialize the Agenta SDK.
- **Instrument OpenAI**: Use `OpenAIInstrumentor().instrument()` to enable tracing for OpenAI calls.
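
If you want to group the call under a named parent span of your own (for example, to trace a whole workflow step), the Agenta SDK provides an `@ag.instrument()` decorator; see the Observability SDK guide for the exact usage. A minimal sketch, assuming the same setup as above and a hypothetical `generate_story` wrapper function:

```python
import agenta as ag
import openai
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

ag.init()
OpenAIInstrumentor().instrument()

# Hypothetical wrapper: the decorator opens a parent span for the function
# call, and the instrumented OpenAI request appears nested under it.
@ag.instrument()
def generate_story(topic: str) -> str:
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Write a short story about {topic}."}],
    )
    return response.choices[0].message.content

print(generate_story("AI Engineering"))
```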

### 4. View Traces in the Agenta UI

After running your application, you can view the captured traces in Agenta:

1. Log in to your Agenta dashboard.
2. Navigate to the **Observability** section.
3. You'll see a list of traces corresponding to your application's requests.

<Image
style={{ display: "block", margin: "10px auto" }}
img={require("/images/observability/observability.png")}
alt="Illustration of observability"
loading="lazy"
/>