Updating docs
joaomdmoura committed Aug 11, 2024
1 parent 7b63b6f commit a17fa70
Showing 20 changed files with 153 additions and 118 deletions.
10 changes: 6 additions & 4 deletions docs/core-concepts/Agents.md
@@ -26,14 +26,16 @@ description: What are crewAI Agents and how to use them.
| **Function Calling LLM** *(optional)* | `function_calling_llm` | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
| **Max Iter** *(optional)* | `max_iter` | Max Iter is the maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. |
| **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |

## Creating an Agent

@@ -72,7 +74,8 @@

```python
agent = Agent(
    # ... other attributes from the example above ...
    tools_handler=my_tools_handler,  # Optional
    cache_handler=my_cache_handler,  # Optional
    callbacks=[callback1, callback2],  # Optional
    allow_code_execution=True,  # Optional
    max_retry_limit=2,  # Optional
)
```
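For reference, a complete, self-contained construction using the newly documented attributes might look like the sketch below; the role, goal, and backstory values are illustrative, not taken from this commit.

```python
from crewai import Agent

# Minimal sketch of an agent that can execute code and retry on errors.
coding_agent = Agent(
    role="Senior Python Developer",
    goal="Write and review small utility scripts",
    backstory="An experienced developer who prefers concise, well-tested code.",
    max_iter=25,                 # default from the attribute table above
    allow_code_execution=True,   # newly documented attribute
    max_retry_limit=2,           # newly documented attribute
)
```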

@@ -144,6 +147,5 @@

```python
my_crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew = my_crew.kickoff(inputs={"input": "Mark Twain"})
```


## Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence.
2 changes: 2 additions & 0 deletions docs/core-concepts/Collaboration.md
@@ -28,6 +28,8 @@ The `Crew` class has been enriched with several attributes to support advanced f
- **Embedder Configuration (`embedder`)**: Specifies the configuration for the embedder to be used by the crew for understanding and generating language. This attribute supports customization of the language model provider.
- **Cache Management (`cache`)**: Determines whether the crew should use a cache to store the results of tool executions, optimizing performance.
- **Output Logging (`output_log_file`)**: Specifies the file path for logging the output of the crew execution.
- **Planning Mode (`planning`)**: Allows crews to plan their actions before executing tasks by setting `planning=True` when creating the `Crew` instance. This feature enhances coordination and efficiency; see the sketch after this list.
- **Replay Feature**: Introduces a new CLI for listing tasks from the last run and replaying from a specific task, enhancing task management and troubleshooting.
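As a rough sketch of how these attributes fit together (the agent, task, and log file name are illustrative placeholders, not values from this commit):

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Gather accurate background information",
    backstory="Thorough and methodical.",
)
research_task = Task(
    description="Summarize recent findings on the given topic.",
    expected_output="A short bullet-point summary.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    cache=True,                      # store tool execution results
    planning=True,                   # plan actions before executing tasks
    output_log_file="crew_log.txt",  # log execution output to this file
)
```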

## Delegation: Dividing to Conquer
Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability.
27 changes: 14 additions & 13 deletions docs/core-concepts/Crews.md
@@ -32,8 +32,8 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated, before each Crew iteration all Crew data is sent to an AgentPlanner that will plan the tasks, and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process (see the example after the note below). |

!!! note "Crew Max RPM"
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
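A sketch of planning with a dedicated planner model, under the assumption that `planning_llm` accepts a LangChain chat model as used elsewhere in the crewAI docs; `agent1`, `agent2`, `task1`, and `task2` are assumed to be defined as in the examples below.

```python
from crewai import Crew
from langchain_openai import ChatOpenAI  # assumed planner model wrapper

my_crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    planning=True,                            # send crew data to the AgentPlanner before each iteration
    planning_llm=ChatOpenAI(model="gpt-4o"),  # model used by the AgentPlanner
)
```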
@@ -183,14 +183,14 @@

```python
result = my_crew.kickoff()
print(result)
```

### Different Ways to Kick Off a Crew

Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.

- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes tasks for each agent individually.
- `kickoff_async()`: Initiates the workflow asynchronously.
- `kickoff_for_each_async()`: Executes tasks for each agent individually in an asynchronous manner.

```python
# Start the crew's task execution
...
```

@@ -215,33 +215,34 @@

```python
for async_result in async_results:
    print(async_result)
```

These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
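A rough sketch of the four variants, assuming `my_crew` is the crew assembled above and that the async variants are awaitable coroutines as their names suggest; the input dictionaries are illustrative.

```python
import asyncio

inputs_list = [{"topic": "AI agents"}, {"topic": "AI memory"}]

# Synchronous: a single run, then one run per input dictionary.
single_result = my_crew.kickoff(inputs={"topic": "AI agents"})
per_input_results = my_crew.kickoff_for_each(inputs=inputs_list)

# Asynchronous: the same two options, awaited inside an event loop.
async def run_async_kickoffs():
    one = await my_crew.kickoff_async(inputs={"topic": "AI agents"})
    many = await my_crew.kickoff_for_each_async(inputs=inputs_list)
    return one, many

async_results = asyncio.run(run_async_kickoffs())
print(single_result, per_input_results, async_results)
```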

### Replaying from a Specific Task

You can now replay from a specific task using our CLI command `replay`.

The replay feature in CrewAI allows you to replay from a specific task using the command-line interface (CLI). By running the command `crewai replay -t <task_id>`, you can specify the `task_id` for the replay process.

Kickoffs will now save the latest kickoff's returned task outputs locally so you can replay from them.

### Replaying from a Specific Task Using the CLI

To use the replay feature, follow these steps:

1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:

To view the latest kickoff task IDs, use:

```shell
crewai log-tasks-outputs
```

Then, to replay from a specific task, use:

```shell
crewai replay -t <task_id>
```

These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.
14 changes: 6 additions & 8 deletions docs/core-concepts/Memory.md
@@ -29,9 +29,9 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. The memory will use OpenAI Embeddings by default, but you can change it by setting `embedder` to a different model.

The `embedder` only applies to **Short-Term Memory**, which uses Chroma for RAG via the EmbedChain package.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location (determined by the appdirs package and the project name), which can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
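As a quick sketch of the defaults described above (the storage path is an illustrative assumption, and the commented embedder override mirrors the Cohere example further down):

```python
import os
from crewai import Crew

# Optional: override the platform-specific storage location described above.
os.environ["CREWAI_STORAGE_DIR"] = "./my_crew_storage"

my_crew = Crew(
    agents=[my_agent],  # assumes an Agent defined elsewhere
    tasks=[my_task],    # assumes a Task defined elsewhere
    memory=True,        # memory is disabled unless this is set
    # The embedder defaults to OpenAI embeddings; pass a different provider to override, e.g.
    # embedder={"provider": "cohere", "config": {"model": "embed-english-v3.0"}},
)
```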

### Example: Configuring Memory for a Crew
@@ -105,7 +105,7 @@

```python
my_crew = Crew(
    # ... other Crew attributes ...
    embedder={
        "provider": "azure_openai",
        "config": {
            "model": 'text-embedding-ada-002',
            "deployment_name": "your_embedding_model_deployment_name"
        }
    }
)
```
@@ -159,8 +159,8 @@

```python
my_crew = Crew(
    # ... other Crew attributes ...
    embedder={
        "provider": "cohere",
        "config": {
            "model": "embed-english-v3.0",
            "vector_dimension": 1024
        }
    }
)
```
@@ -197,12 +197,10 @@ crewai reset_memories [OPTIONS]
- **Type:** Flag (boolean)
- **Default:** False



## Benefits of Using crewAI's Memory System
- **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.

## Getting Started
Integrating crewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations, you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.
4 changes: 2 additions & 2 deletions docs/core-concepts/Pipeline.md
@@ -34,7 +34,7 @@ Each input creates its own run, flowing through all stages of the pipeline. Mult

| Attribute | Parameters | Description |
| :--------- | :--------- | :------------------------------------------------------------------------------------ |
| **Stages** | `stages` | A list of crews, lists of crews, or routers representing the stages to be executed in sequence. |
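As a rough illustration of the `stages` structure (the crew names are placeholders assumed to be defined elsewhere, the import path is an assumption, and the inner list represents crews that run in parallel within one stage):

```python
from crewai import Pipeline

my_pipeline = Pipeline(
    stages=[
        research_crew,                  # stage 1: a single crew
        [analysis_crew, summary_crew],  # stage 2: two crews run in parallel
        writing_crew,                   # stage 3: a single crew
    ]
)
```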

## Creating a Pipeline

@@ -79,7 +79,7 @@ my_pipeline = Pipeline(
## Pipeline Output

!!! note "Understanding Pipeline Outputs"
    The output of a pipeline in the crewAI framework is encapsulated within the `PipelineKickoffResult` class. This class provides a structured way to access the results of the pipeline's execution, including various formats such as raw strings, JSON, and Pydantic models.

### Pipeline Output Attributes

8 changes: 2 additions & 6 deletions docs/core-concepts/Planning.md