Documentation: add grids, tables to make intro info easier to digest #276

Open · wants to merge 10 commits into base: main
13 changes: 8 additions & 5 deletions docs/agents.md
@@ -7,11 +7,14 @@ but multiple agents can also interact to embody more complex workflows.

The [`Agent`][pydantic_ai.Agent] class has full API documentation, but conceptually you can think of an agent as a container for:

* A [system prompt](#system-prompts) — a set of instructions for the LLM written by the developer
* One or more [function tools](tools.md) — functions that the LLM may call to get information while generating a response
* An optional structured [result type](results.md) — the structured datatype the LLM must return at the end of a run
* A [dependency](dependencies.md) type constraint — system prompt functions, tools and result validators may all use dependencies when they're run
* Agents may optionally also have a default [LLM model](api/models/base.md) associated with them; the model to use can also be specified when running the agent
| **Component** | **Description** |
|-------------------------------------------------|----------------------------------------------------------------------------------------------------------|
| [System prompt(s)](#system-prompts) | A set of instructions for the LLM written by the developer. |
| [Function tool(s)](tools.md) | Functions that the LLM may call to get information while generating a response. |
| [Structured result type](results.md) | The structured datatype the LLM must return at the end of a run, if specified. |
| [Dependency type constraint](dependencies.md) | System prompt functions, tools, and result validators may all use dependencies when they're run. |
| [LLM model](api/models/base.md) | Optional default LLM model associated with the agent. Can also be specified when running the agent. |
| [Model Settings](#additional-configuration) | Optional default model settings to help fine-tune requests. Can also be specified when running the agent. |

In typing terms, agents are generic in their dependency and result types, e.g., an agent which required dependencies of type `#!python Foobar` and returned results of type `#!python list[str]` would have type `#!python Agent[Foobar, list[str]]`. In practice, you shouldn't need to care about this; it should just mean your IDE can tell you when you have the right type, and if you choose to use [static type checking](#static-type-checking) it should work well with PydanticAI.
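To make the `Agent[Foobar, list[str]]` notation concrete, here is a minimal stdlib-only sketch of the idea — a stand-in `Agent` class (not PydanticAI's actual implementation) that is generic in its dependency and result types, so a type checker can relate the `deps` argument and the return value of a run:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

DepsT = TypeVar("DepsT")      # type of the dependencies a run needs
ResultT = TypeVar("ResultT")  # type of the value a run returns


class Agent(Generic[DepsT, ResultT]):
    """Stand-in showing how an agent can be generic in its
    dependency and result types (not PydanticAI's real class)."""

    def run_sync(self, prompt: str, deps: DepsT) -> ResultT:
        # A real agent would call an LLM here; this stub only
        # demonstrates the type relationship.
        raise NotImplementedError


@dataclass
class Foobar:
    api_key: str


# A type checker now knows: runs require a Foobar and return list[str].
agent: "Agent[Foobar, list[str]]" = Agent()
```

With this annotation, `mypy` or `pyright` would flag `agent.run_sync("hi", deps=42)` because `42` is not a `Foobar`.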

28 changes: 28 additions & 0 deletions docs/extra/tweaks.css
@@ -17,6 +17,34 @@ img.index-header {
max-width: 500px;
}

.pydantic-pink {
color: #FF007F;
}

.team-blue {
color: #0072CE;
}

.secure-green {
color: #00A86B;
}

.shapes-orange {
color: #FF7F32;
}

.puzzle-purple {
color: #652D90;
}

.wheel-gray {
color: #6E6E6E;
}

.vertical-middle {
vertical-align: middle;
}

.text-emphasis {
font-size: 1rem;
font-weight: 300;
40 changes: 29 additions & 11 deletions docs/index.md
@@ -13,14 +13,30 @@ We built PydanticAI with one simple aim: to bring that FastAPI feeling to GenAI

## Why use PydanticAI

* Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more)
* [Model-agnostic](models.md) — currently OpenAI, Anthropic, Gemini, Ollama, Groq, and Mistral are supported, and there is a simple interface to implement support for other models.
* [Type-safe](agents.md#static-type-checking)
* Control flow and agent composition is done with vanilla Python, allowing you to make use of the same Python development best practices you'd use in any other (non-AI) project
* [Structured response](results.md#structured-result-validation) validation with Pydantic
* [Streamed responses](results.md#streamed-results), including validation of streamed _structured_ responses with Pydantic
* Novel, type-safe [dependency injection system](dependencies.md), useful for testing and eval-driven iterative development
* [Logfire integration](logfire.md) for debugging and monitoring the performance and general behavior of your LLM-powered application
:material-account-group:{ .md .middle .team-blue }&nbsp;<strong class="vertical-middle">Built by the Pydantic Team</strong><br>
Built by the team behind [Pydantic](https://docs.pydantic.dev/latest/) (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more).

:fontawesome-solid-shapes:{ .md .middle .shapes-orange }&nbsp;<strong class="vertical-middle">Model-agnostic</strong><br>
Supports OpenAI, Anthropic, Gemini, Ollama, Groq, and Mistral, and there is a simple interface to implement support for [other models](models.md).

:logfire-logo:{ .md .middle }&nbsp;<strong class="vertical-middle">Pydantic Logfire Integration</strong><br>
Seamlessly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire) for real-time debugging, performance monitoring, and behavior tracking of your LLM-powered applications.

:material-shield-check:{ .md .middle .secure-green }&nbsp;<strong class="vertical-middle">Type-safe</strong><br>
Designed to make type checking as useful as possible for you, so it [integrates](agents.md#static-type-checking) well with static type checkers, like [`mypy`](https://github.com/python/mypy) and [`pyright`](https://github.com/microsoft/pyright).

:snake:{ .md .middle }&nbsp;<strong class="vertical-middle">Python-centric Design</strong><br>
Leverages Python’s familiar control flow and agent composition to build your AI-driven projects, making it easy to apply the standard Python best practices you'd use in any other (non-AI) project.

:simple-pydantic:{ .md .middle .pydantic-pink }&nbsp;<strong class="vertical-middle">Structured Responses</strong><br>
Harnesses the power of [Pydantic](https://docs.pydantic.dev/latest/) to [validate and structure](results.md#structured-result-validation) model outputs, ensuring responses are consistent across runs.

:material-puzzle-plus:{ .md .middle .puzzle-purple }&nbsp;<strong class="vertical-middle">Dependency Injection System</strong><br>
Offers an optional [dependency injection](dependencies.md) system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](tools.md) and [result validators](results.md#result-validators-functions).
This is useful for testing and eval-driven iterative development.

:material-sine-wave:{ .md .middle }&nbsp;<strong class="vertical-middle">Streamed Responses</strong><br>
Provides the ability to [stream](results.md#streamed-results) LLM outputs continuously, with immediate validation, ensuring rapid and accurate results.
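The structured-response idea above can be sketched without any dependencies: parse the model's raw JSON reply and check it against a declared schema before handing it to the caller. PydanticAI does this with Pydantic models; the `CityLocation` type and `validate_response` function below are hypothetical names used purely for illustration:

```python
import json
from dataclasses import dataclass, fields


@dataclass
class CityLocation:
    city: str
    country: str


def validate_response(raw: str) -> CityLocation:
    """Parse a model's JSON reply and reject it if the fields
    don't match the schema — a toy version of what Pydantic does."""
    data = json.loads(raw)
    expected = {f.name for f in fields(CityLocation)}
    if set(data) != expected:
        raise ValueError(f"fields do not match schema: got {sorted(data)}")
    if not all(isinstance(v, str) for v in data.values()):
        raise ValueError("all fields must be strings")
    return CityLocation(**data)


loc = validate_response('{"city": "London", "country": "United Kingdom"}')
```

Because validation happens at the boundary, downstream code can rely on `loc.city` and `loc.country` existing with the right types on every run.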

!!! example "In Beta"
PydanticAI is in early beta, the API is still subject to change and there's a lot more to do.
@@ -45,12 +61,14 @@ The first known use of "hello, world" was in a 1974 textbook about the C programming language:
"""
```

1. Define a very simple agent — here we configure the agent to use [Gemini 1.5's Flash](api/models/gemini.md) model, but you can also set the model when running the agent.
2. Register a static [system prompt](agents.md#system-prompts) using a keyword argument to the agent. For more complex dynamically-generated system prompts, see the example below.
3. [Run the agent](agents.md#running-agents) synchronously, conducting a conversation with the LLM. Here the exchange should be very short: PydanticAI will send the system prompt and the user query to the LLM, the model will return a text response.
1. We configure the agent to use [Gemini 1.5's Flash](api/models/gemini.md) model, but you can also set the model when running the agent.
2. Register a static [system prompt](agents.md#system-prompts) using a keyword argument to the agent.
3. [Run the agent](agents.md#running-agents) synchronously, conducting a conversation with the LLM.

_(This example is complete, it can be run "as is")_

The exchange should be very short: PydanticAI will send the system prompt and the user query to the LLM, the model will return a text response.

Not very interesting yet, but we can easily add "tools", dynamic system prompts, and structured responses to build more powerful agents.

## Tools & Dependency Injection Example
4 changes: 4 additions & 0 deletions docs/overrides/.icons/logfire/logo.svg
6 changes: 5 additions & 1 deletion mkdocs.yml
@@ -57,6 +57,7 @@ extra:

theme:
name: "material"
custom_dir: docs/overrides
palette:
- media: "(prefers-color-scheme)"
scheme: default
@@ -133,6 +134,9 @@ markdown_extensions:
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
options:
custom_icons:
- docs/overrides/.icons
- pymdownx.tabbed:
alternate_style: true
- pymdownx.tasklist:
@@ -145,7 +149,7 @@ watch:

plugins:
- search
- social
# - social
- glightbox
- mkdocstrings:
handlers: