
docs/AGE-1151-prompt-management-documentation #2149

Merged · 20 commits · Nov 12, 2024
Commits
585e0aa
docs(app): AGE-1151 restructuring tutorials
mmabrouk Oct 23, 2024
c18402d
docs(app): AGE-1151 updated quick start with new SDK
mmabrouk Oct 23, 2024
c873887
docs(app): AGE-1151 prompt management tutorial
mmabrouk Oct 23, 2024
a09a7d1
docs(app): AGE-1151 updated tutorial prompt mangement sdj
mmabrouk Oct 23, 2024
799f9e7
docs(app): AGE-1151 concept-motivation+link-fixes
mmabrouk Oct 23, 2024
de3fedd
docs(app): AGE-1151 update prompt management sdk howto
mmabrouk Oct 25, 2024
95b94ef
Update docs to match implementation, without testing in this case.
jp-agenta Nov 6, 2024
a8952ce
fix prompt variables for current state of template
jp-agenta Nov 7, 2024
77db657
Merge branch 'main' into mmabrouk/docs/AGE-1151-prompt-management-doc…
aybruhm Nov 12, 2024
7c84545
refactor (cli): add Prompt sdk type
aybruhm Nov 12, 2024
5740b57
refactor (docs): update prompt management docs based on new developme…
aybruhm Nov 12, 2024
037c956
minor refactor (docs): rename 'commit_variant' to 'commit'
aybruhm Nov 12, 2024
2477f46
minor refactor (docs): add 'await' keyword to execute coroutines in V…
aybruhm Nov 12, 2024
c812422
refactor (docs): update async method naming in config manager for con…
aybruhm Nov 12, 2024
064aaa5
fix (tests): resolve failing sdk tests
aybruhm Nov 12, 2024
a13becc
fix (cli:tests): resolve failing patched ConfigManager method
aybruhm Nov 12, 2024
186347f
Merge branch 'main' into mmabrouk/docs/AGE-1151-prompt-management-doc…
aybruhm Nov 12, 2024
46e3e8f
Merge branch 'mmabrouk/docs/AGE-1151-prompt-management-documentation'…
aybruhm Nov 12, 2024
0297b0b
Merge pull request #2247 from Agenta-AI/cleanup/prompt-management-doc
mmabrouk Nov 12, 2024
f664ae1
minor refactor (docs): update code example in 'fetching the prompt in…
aybruhm Nov 12, 2024
2 changes: 1 addition & 1 deletion docs/blog/main.mdx
@@ -196,7 +196,7 @@ We have worked extensively on improving the **reliability of evaluations**. Spec
- We fixed small UI issues with large output in human evaluations.
- We have added a new export button in the evaluation view to export the results as a CSV file.

Additionally, we have added a new [Cookbook for run evaluation using the SDK](/tutorials/sdk/evaluate-with-SDK).

In **observability**:

16 changes: 15 additions & 1 deletion docs/docs/concepts/01-concepts.mdx
@@ -4,10 +4,24 @@ title: "Core Concepts"

import Image from "@theme/IdealImage";

## Motivation

Building reliable LLM applications goes beyond crafting a single prompt or selecting a model. It's about experimentation—trying various prompts, models, configurations, and AI workflows to determine what works best. Without a robust system, managing these experiments can quickly become chaotic.

Agenta addresses this challenge by structuring experimentation in a manner similar to **git**'s code management. Rather than maintaining a single, linear history of changes, you can create multiple branches—called **variants** in Agenta. Each **variant** represents a distinct approach or solution you're exploring.

Within each **variant**, you can make changes that are saved as immutable **versions**. This means every change is recorded and you can always go back to a previous version if needed.

To move from experimentation to deployment, Agenta uses **environments** like development, staging, and production. You can deploy specific versions of your variants to these environments, controlling what gets tested and what goes live.

All the data you generate, whether in pre-production experiments or in post-production environments, is stored and linked to the **variant** and **version** used to generate it. This lets you trace any data point back to the **variant** and **version** that produced it and see which ones performed best.

## Concepts

Below are descriptions of the main terms and concepts used in Agenta.

<Image
  style={{ display: "block", margin: "10px auto" }}
  img={require("/images/prompt_management/taxonomy-concepts.png")}
  alt="Taxonomy of concepts in Agenta"
  loading="lazy"
142 changes: 105 additions & 37 deletions docs/docs/prompt-management/02-quick-start.mdx
@@ -6,95 +6,163 @@ import Image from "@theme/IdealImage";

## Introduction

In this tutorial, we will **create a prompt** in the web UI, **publish** it to a deployment, and **integrate** it with our codebase using the Agenta SDK.

:::note
If you want to do this whole process programmatically, jump to [this guide](/prompt-management/integration/how-to-integrate-with-agenta).
:::

## 1. Create a Prompt

We will create a prompt from the web UI. This can be done by going to the app overview and clicking on **Create a Prompt**. You have the choice between using a chat prompt or a text prompt:

- **Text Prompt**: Useful for single-turn LLM applications such as question answering, text generation, entity extraction, classification, etc.
- **Chat Application**: Designed for multi-turn applications like chatbots.

<Image
  style={{ width: "80%", display: "block", margin: "0 auto" }}
  img={require("/images/prompt_management/create-prompt-modal.png")}
/>
<br />

## 2. Publish a Variant

Within each LLM application, you can create multiple variants. Variants are like Git branches, allowing you to experiment with different configurations. Each variant is versioned, and each version has its own commit number and is immutable.

When you are satisfied with a variant's version (after evaluating it, for instance), you can publish it to a deployment. A deployment is tagged with an environment (`production`, `development`, or `staging`) and provides you with access to endpoints for both the published configuration and proxying the calls.

To publish a variant, go to the overview, click on the three dots on the **variant** that you want to publish, and select **Deploy** (see screenshot):

<Image
  style={{ width: "75%", display: "block", margin: "0 auto" }}
  img={require("/images/prompt_management/deploy-action.png")}
/>
<br />

You can now select which environments you want to publish the variant to:

<Image
  style={{ width: "75%", display: "block", margin: "0 auto" }}
  img={require("/images/prompt_management/deployment-modal.png")}
/>

<br />

:::caution

New changes to the **variant** will not be automatically published to the **deployment** unless you explicitly **publish it** again.

The reason is that we have published a specific **version**/**commit** of the **variant** to that deployment, not the **variant** itself!

:::

## 3. Integrate with Your Code

To use the prompt in your code, you can utilize the Agenta SDK to fetch the configuration. Here's how you can do it in Python:

### a. Initialize the SDK

First, import the `agenta` module and initialize the SDK using `ag.init()`. Make sure you have your API key set in your environment variables or configuration file.

```python
import agenta as ag

# Initialize the SDK (API key can be set in environment variables or passed directly)
ag.init(api_key="your_api_key") # Replace with your actual API key or omit if set elsewhere
```

### b. Fetch the Configuration

To fetch the configuration from the production environment, use the `ConfigManager.get_from_registry` method:

```python
# Fetch configuration from the production environment
config = ag.ConfigManager.get_from_registry(
    app_slug="your-app-slug"
)
```

- **Note**: If you do not provide a `variant_ref` or `environment_ref`, the SDK defaults to fetching the latest configuration deployed to the `production` environment.

### c. Use the Configuration

The `config` object will be a dictionary containing your prompt configuration. You can use it directly in your application:

```python
# Use the configuration
print(config)
```

**Example Output:**

```python
{
    'temperature': 1.0,
    'model': 'gpt-3.5-turbo',
    'max_tokens': -1,
    'prompt_system': 'You are an expert in geography.',
    'prompt_user': 'What is the capital of {country}?',
    'top_p': 1.0,
    'frequency_penalty': 0.0,
    'presence_penalty': 0.0,
    'force_json': 0
}
```
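Since `prompt_user` is a template with a `{country}` placeholder, you can fill it in with standard Python string formatting before calling your model. A minimal sketch using the example values above — the `build_messages` helper is illustrative, not part of the SDK:

```python
# Example configuration mirroring the output shown above; in practice
# this dict comes from ag.ConfigManager.get_from_registry.
config = {
    "model": "gpt-3.5-turbo",
    "prompt_system": "You are an expert in geography.",
    "prompt_user": "What is the capital of {country}?",
}

def build_messages(config: dict, **variables) -> list[dict]:
    """Fill the prompt templates and return OpenAI-style chat messages."""
    return [
        {"role": "system", "content": config["prompt_system"]},
        {"role": "user", "content": config["prompt_user"].format(**variables)},
    ]

messages = build_messages(config, country="France")
print(messages[1]["content"])  # What is the capital of France?
```

The resulting `messages` list can then be passed to your chat completion client of choice.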

### d. (Optional) Schema Validation with Pydantic

If you have a predefined schema for your configuration, you can use Pydantic to validate it:

```python
from pydantic import BaseModel

# Define your configuration schema
class ConfigSchema(BaseModel):
    temperature: float
    model: str
    max_tokens: int
    prompt_system: str
    prompt_user: str
    top_p: float
    frequency_penalty: float
    presence_penalty: float
    force_json: int

# Fetch configuration with schema validation
config = ag.ConfigManager.get_from_registry(
    app_slug="your-app-slug",
    schema=ConfigSchema
)

# Use the configuration
print(config)
```

- The `config` object will now be an instance of `ConfigSchema`, allowing you to access its fields directly with type checking.
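For illustration, here is the same schema validated against the example payload shown earlier — constructed inline in this sketch, whereas in practice `get_from_registry` returns it for you. A malformed payload (for example `temperature="hot"`) would raise a `ValidationError` instead of failing silently later:

```python
from pydantic import BaseModel

class ConfigSchema(BaseModel):
    temperature: float
    model: str
    max_tokens: int
    prompt_system: str
    prompt_user: str
    top_p: float
    frequency_penalty: float
    presence_penalty: float
    force_json: int

# Validate the example payload from the earlier output.
config = ConfigSchema(
    temperature=1.0,
    model="gpt-3.5-turbo",
    max_tokens=-1,
    prompt_system="You are an expert in geography.",
    prompt_user="What is the capital of {country}?",
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    force_json=0,
)

print(config.model)        # typed attribute access: gpt-3.5-turbo
print(config.temperature)  # 1.0
```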

### e. Asynchronous Fetching (Optional)

If your application is asynchronous, you can use the async version of the method:

```python
# Asynchronous fetching of configuration
config = await ag.ConfigManager.async_get_from_registry(
    app_slug="your-app-slug"
)
```

## 4. Revert to Previous Deployment (Optional)

:::note
This feature is only available in the cloud and enterprise versions.
:::

If you need to revert to a previously published commit, click on the deployment in the overview view, then click on **History**. You will see all the previously published versions. You can revert to a previous version by clicking on **Revert**.

<Image
  style={{ width: "75%", display: "block", margin: "0 auto" }}
  img={require("/images/prompt_management/revert-deployment.png")}
/>

## Next Steps

Now that you've created and published your first prompt, you can learn how to do [prompt engineering in the playground](/prompt-management/using-the-playground) or dive deeper into [the capabilities of the prompt management SDK](/prompt-management/creating-a-custom-template).
@@ -179,7 +179,7 @@ Agenta provides the flexibility to add any LLM application to the platform, so t

We've merely touched on what Agenta can do. You're not limited to apps that consist of a single file or function. You can create chains of prompts, or even agents. The SDK also allows you to track costs and log traces of your application.

More information about the SDK can be found in the [SDK section in the developer guide](/reference/sdk/quick_start). You can also explore a growing list of templates and tutorials in the [tutorials section](/tutorials/sdk/evaluate-with-SDK).

Finally, our team is always ready to assist you with any custom application. Simply reach out to us on [Slack](https://join.slack.com/t/agenta-hq/shared_invite/zt-1zsafop5i-Y7~ZySbhRZvKVPV5DO_7IA), or [book a call](https://cal.com/mahmoud-mabrouk-ogzgey/demo) to discuss your use case in detail.