diff --git a/docs/core-concepts/Agents.md b/docs/core-concepts/Agents.md index 7b93fdde77..e23e77c55f 100644 --- a/docs/core-concepts/Agents.md +++ b/docs/core-concepts/Agents.md @@ -26,7 +26,7 @@ description: What are crewAI Agents and how to use them. | **Function Calling LLM** *(optional)* | `function_calling_llm` | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. | | **Max Iter** *(optional)* | `max_iter` | Max Iter is the maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. | | **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. | -| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the Maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. | +| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. | | **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. | | **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. | | **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. 
| @@ -34,6 +34,8 @@ description: What are crewAI Agents and how to use them. | **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. | | **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. | | **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. | +| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. | +| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. | ## Creating an Agent @@ -72,7 +74,8 @@ agent = Agent( tools_handler=my_tools_handler, # Optional cache_handler=my_cache_handler, # Optional callbacks=[callback1, callback2], # Optional - agent_executor=my_agent_executor # Optional + allow_code_execution=True, # Optional + max_retry_limit=2, # Optional ) ``` @@ -144,6 +147,5 @@ my_crew = Crew(agents=[agent1, agent2], tasks=[task1, task2]) crew = my_crew.kickoff(inputs={"input": "Mark Twain"}) ``` - ## Conclusion -Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence. +Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence. 
\ No newline at end of file diff --git a/docs/core-concepts/Collaboration.md b/docs/core-concepts/Collaboration.md index 31c35feac6..cccb820b68 100644 --- a/docs/core-concepts/Collaboration.md +++ b/docs/core-concepts/Collaboration.md @@ -28,6 +28,8 @@ The `Crew` class has been enriched with several attributes to support advanced f - **Embedder Configuration (`embedder`)**: Specifies the configuration for the embedder to be used by the crew for understanding and generating language. This attribute supports customization of the language model provider. - **Cache Management (`cache`)**: Determines whether the crew should use a cache to store the results of tool executions, optimizing performance. - **Output Logging (`output_log_file`)**: Specifies the file path for logging the output of the crew execution. +- **Planning Mode (`planning`)**: Allows crews to plan their actions before executing tasks by setting `planning=True` when creating the `Crew` instance. This feature enhances coordination and efficiency. +- **Replay Feature**: Introduces a new CLI for listing tasks from the last run and replaying from a specific task, enhancing task management and troubleshooting. ## Delegation: Dividing to Conquer Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability. diff --git a/docs/core-concepts/Crews.md b/docs/core-concepts/Crews.md index f43d6971ba..0da71bf1af 100644 --- a/docs/core-concepts/Crews.md +++ b/docs/core-concepts/Crews.md @@ -32,8 +32,8 @@ A crew in crewAI represents a collaborative group of agents working together to | **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. | | **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. 
| | **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. | -| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. -| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. | +| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. | +| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. | !!! note "Crew Max RPM" The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it. @@ -183,14 +183,14 @@ result = my_crew.kickoff() print(result) ``` -### Different ways to Kicking Off a Crew +### Different Ways to Kick Off a Crew Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`. -`kickoff()`: Starts the execution process according to the defined process flow. -`kickoff_for_each()`: Executes tasks for each agent individually. -`kickoff_async()`: Initiates the workflow asynchronously. -`kickoff_for_each_async()`: Executes tasks for each agent individually in an asynchronous manner. +- `kickoff()`: Starts the execution process according to the defined process flow. +- `kickoff_for_each()`: Executes tasks for each agent individually. +- `kickoff_async()`: Initiates the workflow asynchronously. 
+- `kickoff_for_each_async()`: Executes tasks for each agent individually in an asynchronous manner. ```python # Start the crew's task execution @@ -215,33 +215,34 @@ for async_result in async_results: print(async_result) ``` -These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs +These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs. +### Replaying from a Specific Task -### Replaying from specific task: -You can now replay from a specific task using our cli command replay. +You can now replay from a specific task using our CLI command `replay`. The replay feature in CrewAI allows you to replay from a specific task using the command-line interface (CLI). By running the command `crewai replay -t <task_id>`, you can specify the `task_id` for the replay process. Kickoffs will now save the latest kickoff's returned task outputs locally for you to replay from. +### Replaying from a Specific Task Using the CLI -### Replaying from specific task Using the CLI To use the replay feature, follow these steps: 1. Open your terminal or command prompt. 2. Navigate to the directory where your CrewAI project is located. 3. Run the following command: -To view latest kickoff task_ids use: +To view the latest kickoff task IDs, use: ```shell crewai log-tasks-outputs ``` +Then, to replay from a specific task, use: ```shell crewai replay -t <task_id> ``` -These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks. +These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks. 
\ No newline at end of file diff --git a/docs/core-concepts/Memory.md b/docs/core-concepts/Memory.md index 889e860bd9..f489af1ce8 100644 --- a/docs/core-concepts/Memory.md +++ b/docs/core-concepts/Memory.md @@ -29,9 +29,9 @@ description: Leveraging memory systems in the crewAI framework to enhance agent When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform. By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. The memory will use OpenAI Embeddings by default, but you can change it by setting `embedder` to a different model. -The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG using EmbedChain package. +The `embedder` only applies to **Short-Term Memory**, which uses Chroma for RAG via the EmbedChain package. The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations. -The data storage files are saved into a platform specific location found using the appdirs package +The data storage files are saved into a platform-specific location, determined using the appdirs package and the project name; this location can be overridden using the **CREWAI_STORAGE_DIR** environment variable. 
### Example: Configuring Memory for a Crew @@ -105,7 +105,7 @@ my_crew = Crew( "provider": "azure_openai", "config":{ "model": 'text-embedding-ada-002', - "deployment_name": "you_embedding_model_deployment_name" + "deployment_name": "your_embedding_model_deployment_name" } } ) @@ -159,8 +159,8 @@ my_crew = Crew( embedder={ "provider": "cohere", "config":{ - "model": "embed-english-v3.0" - "vector_dimension": 1024 + "model": "embed-english-v3.0", + "vector_dimension": 1024 } } ) @@ -197,12 +197,10 @@ crewai reset_memories [OPTIONS] - **Type:** Flag (boolean) - **Default:** False - - ## Benefits of Using crewAI's Memory System - **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks. - **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences. - **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights. ## Getting Started -Integrating crewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations, you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability. +Integrating crewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations, you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability. 
\ No newline at end of file diff --git a/docs/core-concepts/Pipeline.md b/docs/core-concepts/Pipeline.md index 208d941ac1..a601bcf754 100644 --- a/docs/core-concepts/Pipeline.md +++ b/docs/core-concepts/Pipeline.md @@ -34,7 +34,7 @@ Each input creates its own run, flowing through all stages of the pipeline. Mult | Attribute | Parameters | Description | | :--------- | :--------- | :------------------------------------------------------------------------------------ | -| **Stages** | `stages` | A list of crews or lists of crews representing the stages to be executed in sequence. | +| **Stages** | `stages` | A list of crews, lists of crews, or routers representing the stages to be executed in sequence. | ## Creating a Pipeline @@ -79,7 +79,7 @@ my_pipeline = Pipeline( ## Pipeline Output !!! note "Understanding Pipeline Outputs" -The output of a pipeline in the crewAI framework is encapsulated within two main classes: `PipelineOutput` and `PipelineRunResult`. These classes provide a structured way to access the results of the pipeline's execution, including various formats such as raw strings, JSON, and Pydantic models. +The output of a pipeline in the crewAI framework is encapsulated within the `PipelineKickoffResult` class. This class provides a structured way to access the results of the pipeline's execution, including various formats such as raw strings, JSON, and Pydantic models. ### Pipeline Output Attributes diff --git a/docs/core-concepts/Planning.md b/docs/core-concepts/Planning.md index 36ae34437d..57c9c341e6 100644 --- a/docs/core-concepts/Planning.md +++ b/docs/core-concepts/Planning.md @@ -41,13 +41,11 @@ my_crew = Crew( ) ``` - ### Example When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic to add to the Agents tasks. 
-```bash - +``` [2024-07-15 16:49:11][INFO]: Planning the crew execution **Step-by-Step Plan for Task Execution** @@ -133,6 +131,4 @@ A list with 10 bullet points of the most relevant information about AI LLMs. **Expected Output:** A fully-fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'. - ---- -``` +``` \ No newline at end of file diff --git a/docs/core-concepts/Tasks.md b/docs/core-concepts/Tasks.md index 8c2f5d9dd6..fbb83572eb 100644 --- a/docs/core-concepts/Tasks.md +++ b/docs/core-concepts/Tasks.md @@ -17,16 +17,17 @@ Tasks within crewAI can be collaborative, requiring multiple agents to work toge | **Description** | `description` | A clear, concise statement of what the task entails. | | **Agent** | `agent` | The agent responsible for the task, assigned either directly or by the crew's process. | | **Expected Output** | `expected_output` | A detailed description of what the task's completion looks like. | -| **Tools** _(optional)_ | `tools` | The functions or capabilities the agent can utilize to perform the task. | -| **Async Execution** _(optional)_ | `async_execution` | If set, the task executes asynchronously, allowing progression without waiting for completion. | +| **Tools** _(optional)_ | `tools` | The functions or capabilities the agent can utilize to perform the task. Defaults to an empty list. | +| **Async Execution** _(optional)_ | `async_execution` | If set, the task executes asynchronously, allowing progression without waiting for completion. Defaults to False. | | **Context** _(optional)_ | `context` | Specifies tasks whose outputs are used as context for this task. | -| **Config** _(optional)_ | `config` | Additional configuration details for the agent executing the task, allowing further customization. | +| **Config** _(optional)_ | `config` | Additional configuration details for the agent executing the task, allowing further customization. Defaults to None. 
| | **Output JSON** _(optional)_ | `output_json` | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. | | **Output Pydantic** _(optional)_ | `output_pydantic` | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. | | **Output File** _(optional)_ | `output_file` | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. | -| **Output** _(optional)_ | `output` | The output of the task, containing the raw, JSON, and Pydantic output plus additional details. | -| **Callback** _(optional)_ | `callback` | A Python callable that is executed with the task's output upon completion. | -| **Human Input** _(optional)_ | `human_input` | Indicates if the task requires human feedback at the end, useful for tasks needing human oversight. | +| **Output** _(optional)_ | `output` | An instance of `TaskOutput`, containing the raw, JSON, and Pydantic output plus additional details. | +| **Callback** _(optional)_ | `callback` | A callable that is executed with the task's output upon completion. | +| **Human Input** _(optional)_ | `human_input` | Indicates if the task requires human feedback at the end, useful for tasks needing human oversight. Defaults to False.| +| **Converter Class** _(optional)_ | `converter_cls` | A converter class used to export structured output. Defaults to None. | ## Creating a Task @@ -56,7 +57,7 @@ By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` | Attribute | Parameters | Type | Description | | :---------------- | :-------------- | :------------------------- | :------------------------------------------------------------------------------------------------- | | **Description** | `description` | `str` | A brief description of the task. | -| **Summary** | `summary` | `Optional[str]` | A short summary of the task, auto-generated from the description. 
| +| **Summary** | `summary` | `Optional[str]` | A short summary of the task, auto-generated from the first 10 words of the description. | | **Raw** | `raw` | `str` | The raw output of the task. This is the default format for the output. | | **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the task. | | **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. | @@ -311,4 +312,4 @@ save_output_task = Task( ## Conclusion -Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended. +Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended. \ No newline at end of file diff --git a/docs/core-concepts/Testing.md b/docs/core-concepts/Testing.md index 45ababafb0..93644fbc82 100644 --- a/docs/core-concepts/Testing.md +++ b/docs/core-concepts/Testing.md @@ -5,12 +5,11 @@ description: Learn how to test your crewAI Crew and evaluate their performance. ## Introduction -Testing is a crucial part of the development process, and it is essential to ensure that your crew is performing as expected. 
And with crewAI, you can easily test your crew and evaluate its performance using the built-in testing capabilities. +Testing is a crucial part of the development process, and it is essential to ensure that your crew is performing as expected. With crewAI, you can easily test your crew and evaluate its performance using the built-in testing capabilities. ### Using the Testing Feature -We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. -The parameters are `n_iterations` and `model` which are optional and default to 2 and `gpt-4o-mini` respectively. For now the only provider available is OpenAI. +We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model`, which are optional and default to 2 and `gpt-4o-mini`, respectively. For now, the only provider available is OpenAI. ```bash crewai test ``` @@ -22,9 +21,10 @@ If you want to run more iterations or use a different model, you can specify the crewai test --n_iterations 5 --model gpt-4o ``` -What happens when you run the `crewai test` command is that the crew will be executed for the specified number of iterations, and the performance metrics will be displayed at the end of the run. +When you run the `crewai test` command, the crew will be executed for the specified number of iterations, and the performance metrics will be displayed at the end of the run. 
A table of scores at the end will show the performance of the crew in terms of the following metrics: + ``` Task Scores (1-10 Higher is better) @@ -38,4 +38,3 @@ A table of scores at the end will show the performance of the crew in terms of t ``` The example above shows the test results for two runs of the crew with two tasks, with the average total score for each task and the crew as a whole. - diff --git a/docs/core-concepts/Tools.md b/docs/core-concepts/Tools.md index 1fa4a6fe15..caae28668b 100644 --- a/docs/core-concepts/Tools.md +++ b/docs/core-concepts/Tools.md @@ -80,11 +80,12 @@ write = Task( output_file='blog-posts/new_post.md' # The final blog post will be saved here ) -# Assemble a crew +# Assemble a crew with planning enabled crew = Crew( agents=[researcher, writer], tasks=[research, write], - verbose=True + verbose=True, + planning=True, # Enable planning feature ) # Execute tasks @@ -197,6 +198,5 @@ writer1 = Agent( #... ``` - ## Conclusion Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively. When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling, caching mechanisms, and the flexibility of tool arguments to optimize your agents' performance and capabilities. \ No newline at end of file diff --git a/docs/core-concepts/Training-Crew.md b/docs/core-concepts/Training-Crew.md index 1fae7ff4d3..6cd5f9e46c 100644 --- a/docs/core-concepts/Training-Crew.md +++ b/docs/core-concepts/Training-Crew.md @@ -16,9 +16,11 @@ To use the training feature, follow these steps: 3. Run the following command: ```shell -crewai train -n <n_iterations> +crewai train -n <n_iterations> <filename> ``` +!!! note "Replace `<n_iterations>` with the desired number of training iterations and `<filename>` with the appropriate filename ending with `.pkl`." 
+ ### Training Your Crew Programmatically To train your crew programmatically, use the following steps: @@ -27,21 +29,20 @@ To train your crew programmatically, use the following steps: 3. Execute the training command within a try-except block to handle potential errors. ```python - n_iterations = 2 - inputs = {"topic": "CrewAI Training"} +n_iterations = 2 +inputs = {"topic": "CrewAI Training"} +filename = "your_model.pkl" - try: - YourCrewName_Crew().crew().train(n_iterations= n_iterations, inputs=inputs) +try: + YourCrewName_Crew().crew().train(n_iterations=n_iterations, inputs=inputs, filename=filename) - except Exception as e: - raise Exception(f"An error occurred while training the crew: {e}") +except Exception as e: + raise Exception(f"An error occurred while training the crew: {e}") ``` -!!! note "Replace `` with the desired number of training iterations. This determines how many times the agents will go through the training process." - - ### Key Points to Note: - **Positive Integer Requirement:** Ensure that the number of iterations (`n_iterations`) is a positive integer. The code will raise a `ValueError` if this condition is not met. +- **Filename Requirement:** Ensure that the filename ends with `.pkl`. The code will raise a `ValueError` if this condition is not met. - **Error Handling:** The code handles subprocess errors and unexpected exceptions, providing error messages to the user. It is important to note that the training process may take some time, depending on the complexity of your agents and will also require your feedback on each iteration. 
diff --git a/docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md b/docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md index 3778fc0d55..eb4fc8da2f 100644 --- a/docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md +++ b/docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md @@ -7,10 +7,10 @@ description: A comprehensive guide to starting a new CrewAI project, including t Welcome to the ultimate guide for starting a new CrewAI project. This document will walk you through the steps to create, customize, and run your CrewAI project, ensuring you have everything you need to get started. -Before we start there are a couple of things to note: +Before we start, there are a couple of things to note: 1. CrewAI is a Python package and requires Python >=3.10 and <=3.13 to run. -2. The preferred way of setting up CrewAI is using the `crewai create` command.This will create a new project folder and install a skeleton template for you to work on. +2. The preferred way of setting up CrewAI is using the `crewai create crew` command. This will create a new project folder and install a skeleton template for you to work on. ## Prerequisites @@ -35,7 +35,7 @@ It is highly recommended that you use virtual environments to ensure that your C 3. Use Poetry (A Python package manager and dependency management tool): Poetry is an open-source Python package manager that simplifies the installation of packages and their dependencies. Poetry offers a convenient way to manage virtual environments and dependencies. - Poetry is CrewAI's prefered tool for package / dependancy management in CrewAI. + Poetry is the preferred tool for package / dependency management in CrewAI. ### Code IDEs @@ -48,24 +48,13 @@ Most users of CrewAI use a Code Editor / Integrated Development Environment (IDE Pick one that suits your style and needs. ## Creating a New Project -In this example we will be using Venv as our virtual environment manager. 
+In this example, we will be using Venv as our virtual environment manager. -To setup a virtual environment, run the following CLI command: +First, set up and activate a virtual environment using your preferred method. +Then, to create a new CrewAI project, run the following CLI command: ```shell -$ python3 -m venv <venv_name> -``` - -Activate your virtual environment by running the following CLI command: - -```shell -$ source <venv_name>/bin/activate -``` - -Now, to create a new CrewAI project, run the following CLI command: - -```shell -$ crewai create +$ crewai create crew ``` This command will create a new project folder with the following structure: @@ -128,13 +117,13 @@ research_candidates_task: {job_requirements} expected_output: > A list of 10 potential candidates with their contact information and brief profiles highlighting their suitability. - agent: researcher # THIS NEEDS TO MATCH THE AGENT NAME IN THE AGENTS.YAML FILE AND THE AGENT DEFINED IN THE Crew.PY FILE - context: # THESE NEED TO MATCH THE TASK NAMES DEFINED ABOVE AND THE TASKS.YAML FILE AND THE TASK DEFINED IN THE Crew.PY FILE + agent: researcher # THIS NEEDS TO MATCH THE AGENT NAME IN THE AGENTS.YAML FILE AND THE AGENT DEFINED IN THE crew.py FILE + context: # THESE NEED TO MATCH THE TASK NAMES DEFINED ABOVE AND THE TASKS.YAML FILE AND THE TASK DEFINED IN THE crew.py FILE - researcher ``` ### Referencing Variables: -Your defined functions with the same name will be used. For example, you can reference the agent for specific tasks from task.yaml file. Ensure your annotated agent and function name is the same otherwise your task wont recognize the reference properly. +Your defined functions with the same name will be used. For example, you can reference the agent for specific tasks from the task.yaml file. Ensure your annotated agent and function name are the same; otherwise, your task won't recognize the reference properly. 
#### Example References agent.yaml @@ -162,7 +151,7 @@ email_summarizer_task: - research_task ``` -Use the annotations are used to properly reference the agent and task in the crew.py file. +Use the annotations to properly reference the agent and task in the crew.py file. ### Annotations include: * @agent * @task * @crew * @llm * @tool * @callback * @output_json * @output_pydantic * @cache_handler - crew.py ```py -... +# ... @llm def mixtal_llm(self): return ChatGroq(temperature=0, model_name="mixtral-8x7b-32768") @@ -194,11 +182,9 @@ crew.py return Task( config=self.tasks_config["email_summarizer_task"], ) -... +# ... ``` - - ## Installing Dependencies To install the dependencies for your project, you can use Poetry. First, navigate to your project directory: @@ -254,6 +240,26 @@ $ poetry run my_project This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file. +### Replay Tasks from Latest Crew Kickoff + +CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run: + +```shell +$ crewai replay <task_id> +``` + +Replace `<task_id>` with the ID of the task you want to replay. + +### Reset Crew Memory + +If you need to reset the memory of your crew before running it again, you can do so by calling the reset memory feature: + +```shell +$ crewai reset-memory +``` + +This will clear the crew's memory, allowing for a fresh start. + ## Deploying Your Project The easiest way to deploy your crew is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your crew in a few clicks. diff --git a/docs/how-to/Coding-Agents.md b/docs/how-to/Coding-Agents.md index 6108cc6606..5ef72a79e7 100644 --- a/docs/how-to/Coding-Agents.md +++ b/docs/how-to/Coding-Agents.md @@ -22,11 +22,13 @@ coding_agent = Agent( ) ``` +**Note**: The `allow_code_execution` parameter defaults to `False`. 
+ ## Important Considerations 1. **Model Selection**: It is strongly recommended to use more capable models like Claude 3.5 Sonnet and GPT-4 when enabling code execution. These models have a better understanding of programming concepts and are more likely to generate correct and efficient code. -2. **Error Handling**: The code execution feature includes error handling. If executed code raises an exception, the agent will receive the error message and can attempt to correct the code or provide alternative solutions. +2. **Error Handling**: The code execution feature includes error handling. If executed code raises an exception, the agent will receive the error message and can attempt to correct the code or provide alternative solutions. The `max_retry_limit` parameter, which defaults to 2, controls the maximum number of retries for a task. 3. **Dependencies**: To use the code execution feature, you need to install the `crewai_tools` package. If not installed, the agent will log an info message: "Coding tools not available. Install crewai_tools." @@ -73,4 +75,4 @@ result = analysis_crew.kickoff() print(result) ``` -In this example, the `coding_agent` can write and execute Python code to perform data analysis tasks. \ No newline at end of file +In this example, the `coding_agent` can write and execute Python code to perform data analysis tasks. diff --git a/docs/how-to/Conditional-Tasks.md b/docs/how-to/Conditional-Tasks.md index 580565c434..76be82b1ae 100644 --- a/docs/how-to/Conditional-Tasks.md +++ b/docs/how-to/Conditional-Tasks.md @@ -7,9 +7,10 @@ description: Learn how to use conditional tasks in a crewAI kickoff Conditional Tasks in crewAI allow for dynamic workflow adaptation based on the outcomes of previous tasks. This powerful feature enables crews to make decisions and execute tasks selectively, enhancing the flexibility and efficiency of your AI-driven processes. 
+## Example Usage
+
 ```python
 from typing import List
-
 from pydantic import BaseModel
 from crewai import Agent, Crew
 from crewai.tasks.conditional_task import ConditionalTask
@@ -17,11 +18,10 @@ from crewai.tasks.task_output import TaskOutput
 from crewai.task import Task
 from crewai_tools import SerperDevTool
 
-
 # Define a condition function for the conditional task
 # if false task will be skipped, true, then execute task
 def is_data_missing(output: TaskOutput) -> bool:
-    return len(output.pydantic.events) < 10:  # this will skip this task
+    return len(output.pydantic.events) < 10  # the task runs only when fewer than 10 events were fetched
 
 # Define the agents
 data_fetcher_agent = Agent(
@@ -46,11 +46,9 @@ summary_generator_agent = Agent(
     verbose=True,
 )
 
-
 class EventOutput(BaseModel):
     events: List[str]
 
-
 task1 = Task(
     description="Fetch data about events in San Francisco using Serper tool",
     expected_output="List of 10 things to do in SF this week",
@@ -64,7 +62,7 @@ conditional_task = ConditionalTask(
         fetch more events using Serper tool so that we have a total of 10 events in SF this week..
         """,
-    expected_output="List of 10 Things to do in SF this week ",
+    expected_output="List of 10 Things to do in SF this week",
     condition=is_data_missing,
     agent=data_processor_agent,
 )
@@ -80,8 +78,10 @@ crew = Crew(
     agents=[data_fetcher_agent, data_processor_agent, summary_generator_agent],
     tasks=[task1, conditional_task, task3],
     verbose=True,
+    planning=True  # Enable planning feature
 )
 
+# Run the crew
 result = crew.kickoff()
 print("results", result)
-```
+```
\ No newline at end of file
diff --git a/docs/how-to/Force-Tool-Ouput-as-Result.md b/docs/how-to/Force-Tool-Ouput-as-Result.md
index ee812df234..94ea4e44fe 100644
--- a/docs/how-to/Force-Tool-Ouput-as-Result.md
+++ b/docs/how-to/Force-Tool-Ouput-as-Result.md
@@ -1,6 +1,6 @@
 ---
 title: Forcing Tool Output as Result
-description: Learn how to force tool output as the result in of an Agent's task in crewAI.
+description: Learn how to force tool output as the result of an Agent's task in CrewAI.
 ---
 
 ## Introduction
 
@@ -13,19 +13,20 @@ Here's an example of how to force the tool output as the result of an agent's ta
 
 ```python
 # ...
+from crewai.agent import Agent
+
 # Define a custom tool that returns the result as the answer
-coding_agent =Agent(
+coding_agent = Agent(
     role="Data Scientist",
-    goal="Product amazing reports on AI",
+    goal="Produce amazing reports on AI",
     backstory="You work with data and AI",
     tools=[MyCustomTool(result_as_answer=True)],
 )
-# ...
 ```
 
-### Workflow in Action
+## Workflow in Action
 
 1. **Task Execution**: The agent executes the task using the tool provided.
 2. **Tool Output**: The tool generates the output, which is captured as the task result.
-3. **Agent Interaction**: The agent my reflect and take learnings from the tool but the output is not modified.
+3. **Agent Interaction**: The agent may reflect and take learnings from the tool but the output is not modified.
 4. **Result Return**: The tool output is returned as the task result without any modifications.
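The four workflow steps above can be sketched in plain Python — `result_as_answer` effectively short-circuits the agent's rewriting step. The function and strings below are a hypothetical illustration, not CrewAI internals:

```python
def run_task(tool_output: str, result_as_answer: bool) -> str:
    """Toy sketch: with result_as_answer, the tool output becomes the task result verbatim."""
    if result_as_answer:
        return tool_output  # step 4: returned without any modification
    return f"Agent-rewritten summary of: {tool_output}"  # otherwise the agent may rephrase it

raw_output = "42 rows processed"
print(run_task(raw_output, result_as_answer=True))
print(run_task(raw_output, result_as_answer=False))
```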
diff --git a/docs/how-to/Hierarchical.md b/docs/how-to/Hierarchical.md
index 46693e3d51..ccebe04b4a 100644
--- a/docs/how-to/Hierarchical.md
+++ b/docs/how-to/Hierarchical.md
@@ -56,6 +56,7 @@ project_crew = Crew(
   process=Process.hierarchical,  # Specifies the hierarchical management approach
   memory=True,  # Enable memory usage for enhanced task execution
   manager_agent=None,  # Optional: explicitly set a specific agent as manager instead of the manager_llm
+  planning=True,  # Enable planning feature for pre-execution strategy
 )
 ```
 
diff --git a/docs/how-to/Human-Input-on-Execution.md b/docs/how-to/Human-Input-on-Execution.md
index e24a28fcdc..2cb5c4f323 100644
--- a/docs/how-to/Human-Input-on-Execution.md
+++ b/docs/how-to/Human-Input-on-Execution.md
@@ -83,6 +83,7 @@ crew = Crew(
   tasks=[task1, task2],
   verbose=True,
   memory=True,
+  planning=True  # Enable planning feature for the crew
 )
 
 # Get your crew to work!
diff --git a/docs/how-to/Kickoff-async.md b/docs/how-to/Kickoff-async.md
index e8386288a6..0a2d103447 100644
--- a/docs/how-to/Kickoff-async.md
+++ b/docs/how-to/Kickoff-async.md
@@ -9,6 +9,21 @@ CrewAI provides the ability to kickoff a crew asynchronously, allowing you to st
 
 ## Asynchronous Crew Execution
 To kickoff a crew asynchronously, use the `kickoff_async()` method. This method initiates the crew execution in a separate thread, allowing the main thread to continue executing other tasks.
 
+### Method Signature
+
+```python
+def kickoff_async(self, inputs: dict) -> CrewOutput:
+```
+
+### Parameters
+
+- `inputs` (dict): A dictionary containing the input data required for the tasks.
+
+### Returns
+
+- `CrewOutput`: An object representing the result of the crew execution.
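Since the crew run happens off the main thread, the calling pattern resembles handing blocking work to `asyncio`. The sketch below uses only the standard library; `kickoff`, `kickoff_async`, and the result string are stand-ins for illustration, not the CrewAI implementation:

```python
import asyncio

def kickoff(inputs: dict) -> str:
    # Stand-in for the blocking crew run; the result string is made up for illustration.
    return f"analyzed {len(inputs['ages'])} ages"

async def kickoff_async(inputs: dict) -> str:
    # Hand the blocking call to a worker thread so the event loop stays free.
    return await asyncio.to_thread(kickoff, inputs)

async def main() -> None:
    crew_result = await kickoff_async({"ages": [25, 30, 35]})
    print(crew_result)

asyncio.run(main())
```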
+
+## Example
 Here's an example of how to kickoff a crew asynchronously:
 
 ```python
@@ -34,7 +49,6 @@ analysis_crew = Crew(
   tasks=[data_analysis_task]
 )
 
-# Execute the crew
+# Execute the crew asynchronously
 result = analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
-```
-
+```
\ No newline at end of file
diff --git a/docs/how-to/LLM-Connections.md b/docs/how-to/LLM-Connections.md
index 162c5d197e..4acdbb3e32 100644
--- a/docs/how-to/LLM-Connections.md
+++ b/docs/how-to/LLM-Connections.md
@@ -9,7 +9,7 @@ description: Comprehensive guide on integrating CrewAI with various Large Langua
 
 By default, CrewAI uses OpenAI's GPT-4o model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
 By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can configure your agents to use a different model or API as described in this guide.
 
-CrewAI provides extensive versatility in integrating with various Language Models (LLMs), including local options through Ollama such as Llama and Mixtral to cloud-based solutions like Azure. Its compatibility extends to all [LangChain LLM components](https://python.langchain.com/v0.2/docs/integrations/llms/), offering a wide range of integration possibilities for customized AI applications.
+CrewAI provides extensive versatility in integrating with various Language Models (LLMs), ranging from local options through Ollama (such as Llama and Mixtral) to cloud-based solutions like Azure. Its compatibility extends to all [LangChain LLM components](https://python.langchain.com/v0.2/docs/integrations/llms/), offering a wide range of integration possibilities for customized AI applications.
 The platform supports connections to an array of Generative AI models, including:
 
@@ -37,6 +37,7 @@ example_agent = Agent(
   verbose=True
 )
 ```
+
 ## Ollama Local Integration
 Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, you will need the `langchain-ollama` package. You can then set the following environment variables to connect to your Ollama instance running locally on port 11434.
 
@@ -47,8 +48,8 @@ os.environ[OPENAI_API_KEY]=''  # No API Key required for Ollama
 ```
 
 ## Ollama Integration Step by Step (ex. for using Llama 3.1 8B locally)
-1. [Download and install Ollama](https://ollama.com/download). 
-2. After setting up the Ollama, Pull the Llama3.1 8B model by typing following lines into your terminal ```ollama run llama3.1```. 
+1. [Download and install Ollama](https://ollama.com/download).
+2. After setting up Ollama, pull the Llama 3.1 8B model by typing the following command into your terminal: ```ollama run llama3.1```.
 3. Llama3.1 should now be served locally on `http://localhost:11434`
 ```
 from crewai import Agent, Task, Crew
@@ -165,7 +166,7 @@ llm = ChatCohere()
 
 For Azure OpenAI API integration, set the following environment variables:
 ```sh
-os.environ[AZURE_OPENAI_DEPLOYMENT] = "You deployment"
+os.environ["AZURE_OPENAI_DEPLOYMENT"] = "Your deployment"
 os.environ["OPENAI_API_VERSION"] = "2023-12-01-preview"
 os.environ["AZURE_OPENAI_ENDPOINT"] = "Your Endpoint"
 os.environ["AZURE_OPENAI_API_KEY"] = ""
@@ -191,5 +192,6 @@ azure_agent = Agent(
   llm=azure_llm
 )
 ```
+
 ## Conclusion
-Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.
+Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.
\ No newline at end of file
diff --git a/docs/how-to/Replay-tasks-from-latest-Crew-Kickoff.md b/docs/how-to/Replay-tasks-from-latest-Crew-Kickoff.md
index 250990f0ce..b4e746a13c 100644
--- a/docs/how-to/Replay-tasks-from-latest-Crew-Kickoff.md
+++ b/docs/how-to/Replay-tasks-from-latest-Crew-Kickoff.md
@@ -11,14 +11,13 @@ You must run `crew.kickoff()` before you can replay a task. Currently, only the
 
 Here's an example of how to replay from a task:
 
-### Replaying from specific task Using the CLI
+### Replaying from a Specific Task Using the CLI
 To use the replay feature, follow these steps:
 
 1. Open your terminal or command prompt.
 2. Navigate to the directory where your CrewAI project is located.
 3. Run the following command:
-
-To view latest kickoff task_ids use:
+To view the latest kickoff task_ids use:
 ```shell
 crewai log-tasks-outputs
 ```
@@ -29,21 +28,25 @@
 crewai replay -t <task_id>
 ```
 
-### Replaying from a task Programmatically
+### Replaying from a Task Programmatically
 To replay from a task programmatically, use the following steps:
 
 1. Specify the task_id and input parameters for the replay process.
 2. Execute the replay command within a try-except block to handle potential errors.
 
 ```python
-  def replay():
+  def replay():
     """
     Replay the crew execution from a specific task.
     """
     task_id = '<task_id>'
-    inputs = {"topic": "CrewAI Training"} # this is optional, you can pass in the inputs you want to replay otherwise uses the previous kickoffs inputs
+    inputs = {"topic": "CrewAI Training"}  # This is optional; you can pass in the inputs you want to replay; otherwise, it uses the previous kickoff's inputs.
     try:
         YourCrewName_Crew().crew().replay(task_id=task_id, inputs=inputs)
+    except subprocess.CalledProcessError as e:
+        raise Exception(f"An error occurred while replaying the crew: {e}")
+
     except Exception as e:
-        raise Exception(f"An error occurred while replaying the crew: {e}")
\ No newline at end of file
+        raise Exception(f"An unexpected error occurred: {e}")
+```
\ No newline at end of file
diff --git a/docs/how-to/Sequential.md b/docs/how-to/Sequential.md
index ae351197b7..291a1746d8 100644
--- a/docs/how-to/Sequential.md
+++ b/docs/how-to/Sequential.md
@@ -18,7 +18,7 @@ The sequential process ensures tasks are executed one after the other, following
 To use the sequential process, assemble your crew and define tasks in the order they need to be executed.
 
 ```python
-from crewai import Crew, Process, Agent, Task
+from crewai import Crew, Process, Agent, Task, TaskOutput, CrewOutput
 
 # Define your agents
 researcher = Agent(
@@ -37,6 +37,7 @@ writer = Agent(
   backstory='A skilled writer with a talent for crafting compelling narratives'
 )
 
+# Define your tasks
 research_task = Task(description='Gather relevant data...', agent=researcher, expected_output='Raw Data')
 analysis_task = Task(description='Analyze the data...', agent=analyst, expected_output='Data Insights')
 writing_task = Task(description='Compose the report...', agent=writer, expected_output='Final Report')
@@ -50,6 +51,10 @@ report_crew = Crew(
 
 # Execute the crew
 result = report_crew.kickoff()
+
+# Accessing the type safe output
+task_output: TaskOutput = result.tasks[0].output
+crew_output: CrewOutput = result.output
 ```
 
 ### Workflow in Action
@@ -82,4 +87,4 @@ CrewAI tracks token usage across all tasks and agents. You can access these metr
 
 1. **Order Matters**: Arrange tasks in a logical sequence where each task builds upon the previous one.
 2. **Clear Task Descriptions**: Provide detailed descriptions for each task to guide the agents effectively.
 3. **Appropriate Agent Selection**: Match agents' skills and roles to the requirements of each task.
-4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones
+4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones.
\ No newline at end of file
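The sequential hand-off these best practices describe amounts to a linear pipeline where each task consumes its predecessor's output. A minimal stdlib sketch, with hypothetical task functions standing in for agents:

```python
from functools import reduce

def research(topic: str) -> str:
    return f"raw data on {topic}"

def analyze(data: str) -> str:
    return f"insights from {data}"

def write(insights: str) -> str:
    return f"report: {insights}"

# Tasks run strictly in order; each task's output becomes the next task's context.
tasks = [research, analyze, write]
final_report = reduce(lambda context, task: task(context), tasks, "AI trends")
print(final_report)
```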