Replay feat using db #930

Merged: 68 commits, Jul 15, 2024
Commits
ea5a784
Cleaned up task execution to now have separate paths for async and sy…
bhancockio Jun 20, 2024
26489ce
Consistently storing async and sync output for context
bhancockio Jun 20, 2024
5c504f4
outline tests I need to create going forward
bhancockio Jun 20, 2024
ee4a996
Major rehaul of TaskOutput and CrewOutput. Updated all tests to work …
bhancockio Jun 21, 2024
f86e4a1
Merge branch 'main' into feature/kickoff-consistent-output
bhancockio Jun 21, 2024
5f820ce
Encountering issues with callback. Need to test on main. WIP
bhancockio Jun 21, 2024
5775ed3
working on tests. WIP
bhancockio Jun 23, 2024
cc1c97e
WIP. Figuring out disconnect issue.
bhancockio Jun 25, 2024
be0a4c2
Cleaned up logs now that I've isolated the issue to the LLM
bhancockio Jun 25, 2024
764234c
more wip.
bhancockio Jun 27, 2024
5091712
WIP. It looks like usage metrics has always been broken for async
bhancockio Jul 1, 2024
1d2827e
Update parent crew who is managing for_each loop
bhancockio Jul 1, 2024
2efe16e
Merge in main to bugfix/kickoff-for-each-usage-metrics
bhancockio Jul 1, 2024
6a47eb4
Merge branch 'main' into bugfix/kickoff-for-each-usage-metrics
bhancockio Jul 1, 2024
60c8f86
Clean up code for review
bhancockio Jul 1, 2024
5a5276e
Add new tests
bhancockio Jul 1, 2024
1f9166f
Final cleanup. Ready for review.
bhancockio Jul 1, 2024
f36f73e
Moving copy functionality from Agent to BaseAgent
bhancockio Jul 1, 2024
68de393
Fix renaming issue
bhancockio Jul 1, 2024
5334e9e
Fix linting errors
bhancockio Jul 1, 2024
0bfa549
use BaseAgent instead of Agent where applicable
bhancockio Jul 1, 2024
053d8a0
Merge branch 'bugfix/kickoff-for-each-usage-metrics' into feature/kic…
bhancockio Jul 1, 2024
e745094
Fixing missing function. Working on tests.
bhancockio Jul 2, 2024
55af7e0
WIP. Needing team to review change
bhancockio Jul 3, 2024
bae9c70
Merge branch 'main' into bugfix/kickoff-for-each-usage-metrics
bhancockio Jul 3, 2024
a3bdc09
Merge branch 'bugfix/kickoff-for-each-usage-metrics' into feature/kic…
bhancockio Jul 3, 2024
10b8495
Merge branch 'main' into feature/kickoff-consistent-output
bhancockio Jul 8, 2024
363ce5e
Fixing issues brought about by merge
bhancockio Jul 8, 2024
1a44a34
WIP: need to fix json encoder
lorenzejay Jul 8, 2024
5c04c63
WIP need to fix encoder
lorenzejay Jul 8, 2024
fffe4df
WIP
bhancockio Jul 9, 2024
92fca9b
WIP: replay working with async. need to add tests
lorenzejay Jul 9, 2024
ecc3d91
Implement major fixes from yesterday's group conversation. Now working…
bhancockio Jul 9, 2024
7518cb9
The majority of tasks are working now. Need to fix converter class
bhancockio Jul 9, 2024
9fdaffc
Fix final failing test
bhancockio Jul 9, 2024
2abc971
Fix linting and type-checker issues
bhancockio Jul 9, 2024
0b575ae
Add more tests to fully test CrewOutput and TaskOutput changes
bhancockio Jul 9, 2024
6f6b02c
Add in validation for async cannot depend on other async tasks.
bhancockio Jul 9, 2024
626e30d
WIP: working replay feat fixing inputs, need tests
lorenzejay Jul 9, 2024
7c4b91b
WIP: core logic of seq and heir for executing tasks added into one
lorenzejay Jul 10, 2024
39d6a9a
Update validators and tests
bhancockio Jul 10, 2024
3613bd4
better logic for seq and hier
lorenzejay Jul 10, 2024
d7b765a
Merge branch 'feature/kickoff-consistent-output' of https://github.co…
lorenzejay Jul 10, 2024
fa530ea
replay working for both seq and hier just need tests
lorenzejay Jul 11, 2024
ce4e28f
Merge branch 'main' of github.com:joaomdmoura/crewAI into temp-featur…
lorenzejay Jul 11, 2024
28929e1
fixed context
lorenzejay Jul 11, 2024
3aa5d16
added cli command + code cleanup TODO: need better refactoring
lorenzejay Jul 11, 2024
c7bf609
refactoring for cleaner code
lorenzejay Jul 11, 2024
a55a835
added better tests
lorenzejay Jul 11, 2024
1cf4b47
removed todo comments and fixed some tests
lorenzejay Jul 11, 2024
a9873ff
fix logging now all tests should pass
lorenzejay Jul 11, 2024
e1589be
cleaner code
lorenzejay Jul 12, 2024
af4579f
ensure replay is declared when replaying specific tasks
lorenzejay Jul 12, 2024
8b70405
ensure hierarchical works
lorenzejay Jul 12, 2024
0e65091
better typing for stored_outputs and separated task_output_handler
lorenzejay Jul 12, 2024
b24304a
added better tests
lorenzejay Jul 12, 2024
b47d0c4
added replay feature to crew docs
lorenzejay Jul 12, 2024
010db77
easier cli command name
lorenzejay Jul 12, 2024
96af602
fixing changes
lorenzejay Jul 12, 2024
9eefa31
using sqlite instead of .json file for logging previous task_outputs
lorenzejay Jul 14, 2024
f27c8e7
Merge branch 'main' of github.com:joaomdmoura/crewAI into replay-feat…
lorenzejay Jul 15, 2024
d28ae85
tools fix
lorenzejay Jul 15, 2024
6bb909d
added to docs and fixed tests
lorenzejay Jul 15, 2024
79912a8
fixed .db
lorenzejay Jul 15, 2024
a0e59b7
fixed docs and removed unneeded comments
lorenzejay Jul 15, 2024
7178f8b
separating ltm and replay db
lorenzejay Jul 15, 2024
e802a2b
fixed printing colors
lorenzejay Jul 15, 2024
a567b54
added how to doc
lorenzejay Jul 15, 2024
3 changes: 2 additions & 1 deletion .gitignore
@@ -14,4 +14,5 @@ test.py
rc-tests/*
*.pkl
temp/*
.vscode/*
.vscode/*
crew_tasks_output.json
29 changes: 29 additions & 0 deletions docs/core-concepts/Crews.md
@@ -156,3 +156,32 @@ for async_result in async_results:
```

These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.


### Replaying from a specific task
You can now replay from a specific task using the `crewai replay` CLI command.

The `replay_from_task` feature in CrewAI lets you replay from a specific task via the command-line interface (CLI). Run `crewai replay -t <task_id>` to specify the `task_id` to replay from.

Kickoffs now save the task outputs of the latest run locally, so you can replay from them.


### Replaying from a specific task using the CLI
To use the replay feature, follow these steps:

1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following commands.

To view the task_ids from the latest kickoff, use:

```shell
crewai log-tasks-outputs
```

Once you have the task_id you want to replay from, use:

```shell
crewai replay -t <task_id>
```

These commands let you replay from your latest kickoff tasks while retaining the context from previously executed tasks.
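
For completeness, the same replay can be triggered from Python. A minimal sketch, assuming a generated crew class (`YourCrewName_Crew` is a placeholder; the full example lives in docs/how-to/Replay-tasks-from-latest-Crew-Kickoff.md):

```python
# Placeholder crew class from your own project; obtain the task_id
# from `crewai log-tasks-outputs`. The inputs argument is optional
# and defaults to the previous kickoff's inputs.
YourCrewName_Crew().crew().replay_from_task(
    task_id="<task_id>",
    inputs={"topic": "CrewAI Training"},
)
```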
49 changes: 49 additions & 0 deletions docs/how-to/Replay-tasks-from-latest-Crew-Kickoff.md
@@ -0,0 +1,49 @@
---
title: Replay Tasks from Latest Crew Kickoff
description: Replay tasks from the latest crew.kickoff(...)
---

## Introduction
CrewAI lets you replay from a specific task in the latest crew kickoff. This is particularly useful when a kickoff has finished and you want to retry certain tasks without refetching data: your agents already have the context saved from the kickoff execution, so you only need to replay the tasks you want.

## Note
You must run `crew.kickoff()` before you can replay a task. Currently, only the latest kickoff is supported, so if you use `kickoff_for_each`, it will only allow you to replay from the most recent crew run.

Here's an example of how to replay from a task:

### Replaying from a specific task using the CLI
To use the replay feature, follow these steps:

1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following commands.

To view the task_ids from the latest kickoff, use:
```shell
crewai log-tasks-outputs
```

Once you have the task_id you want to replay from, use:
```shell
crewai replay -t <task_id>
```


### Replaying from a task programmatically
To replay from a task programmatically, use the following steps:

1. Specify the task_id and input parameters for the replay process.
2. Execute the replay command within a try-except block to handle potential errors.

```python
def replay_from_task():
    """
    Replay the crew execution from a specific task.
    """
    task_id = '<task_id>'
    # Optional: pass the inputs you want to replay with;
    # otherwise the inputs from the previous kickoff are reused.
    inputs = {"topic": "CrewAI Training"}
    try:
        YourCrewName_Crew().crew().replay_from_task(task_id=task_id, inputs=inputs)

    except Exception as e:
        raise Exception(f"An error occurred while replaying the crew: {e}")
5 changes: 5 additions & 0 deletions docs/index.md
@@ -113,6 +113,11 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Kickoff a Crew for a List
</a>
</li>
<li>
<a href="./how-to/Replay-tasks-from-latest-Crew-Kickoff">
Replay from a Task
</a>
</li>
<li>
<a href="./how-to/AgentOps-Observability">
Agent Monitoring with AgentOps
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -145,6 +145,7 @@ nav:
- Human Input on Execution: 'how-to/Human-Input-on-Execution.md'
- Kickoff a Crew Asynchronously: 'how-to/Kickoff-async.md'
- Kickoff a Crew for a List: 'how-to/Kickoff-for-each.md'
- Replay Tasks from the Latest Crew Kickoff: 'how-to/Replay-tasks-from-latest-Crew-Kickoff.md'
- Agent Monitoring with AgentOps: 'how-to/AgentOps-Observability.md'
- Agent Monitoring with LangTrace: 'how-to/Langtrace-Observability.md'
- Tools Docs:
2 changes: 1 addition & 1 deletion src/crewai/agents/agent_builder/base_agent.py
@@ -180,7 +180,7 @@ def _parse_tools(self, tools: List[Any]) -> List[Any]:
        pass

    @abstractmethod
    def get_delegation_tools(self, agents: List["BaseAgent"]):
    def get_delegation_tools(self, agents: List["BaseAgent"]) -> List[Any]:
        """Set the task tools that init BaseAgenTools class."""
        pass

51 changes: 51 additions & 0 deletions src/crewai/cli/cli.py
@@ -1,8 +1,14 @@
import click
import pkg_resources

from crewai.memory.storage.kickoff_task_outputs_storage import (
    KickoffTaskOutputsSQLiteStorage,
)


from .create_crew import create_crew
from .train_crew import train_crew
from .replay_from_task import replay_task_command


@click.group()
@@ -48,5 +54,50 @@ def train(n_iterations: int):
    train_crew(n_iterations)


@crewai.command()
@click.option(
    "-t",
    "--task_id",
    type=str,
    help="Replay the crew from this task ID, including all subsequent tasks.",
)
def replay(task_id: str) -> None:
    """
    Replay the crew execution from a specific task.

    Args:
        task_id (str): The ID of the task to replay from.
    """
    try:
        click.echo(f"Replaying the crew from task {task_id}")
        replay_task_command(task_id)
    except Exception as e:
        click.echo(f"An error occurred while replaying: {e}", err=True)


@crewai.command()
def log_tasks_outputs() -> None:
    """
    Retrieve your latest crew.kickoff() task outputs.
    """
    try:
        storage = KickoffTaskOutputsSQLiteStorage()
        tasks = storage.load()

        if not tasks:
            click.echo(
                "No task outputs found. Only crew kickoff task outputs are logged."
            )
            return

        for index, task in enumerate(tasks, 1):
            click.echo(f"Task {index}: {task['task_id']}")
            click.echo(f"Description: {task['expected_output']}")
            click.echo("------")

    except Exception as e:
        click.echo(f"An error occurred while logging task outputs: {e}", err=True)


if __name__ == "__main__":
    crewai()
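
For reference, the storage that `log_tasks_outputs` reads can also be queried directly in Python. A minimal sketch based only on the calls shown above (no fields beyond `task_id` and `expected_output` are assumed):

```python
from crewai.memory.storage.kickoff_task_outputs_storage import (
    KickoffTaskOutputsSQLiteStorage,
)

# Load the task outputs persisted by the latest crew.kickoff();
# each entry carries at least 'task_id' and 'expected_output'.
storage = KickoffTaskOutputsSQLiteStorage()
for row in storage.load() or []:
    print(row["task_id"], row["expected_output"])
```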
24 changes: 24 additions & 0 deletions src/crewai/cli/replay_from_task.py
@@ -0,0 +1,24 @@
import subprocess
import click


def replay_task_command(task_id: str) -> None:
    """
    Replay the crew execution from a specific task.

    Args:
        task_id (str): The ID of the task to replay from.
    """
    command = ["poetry", "run", "replay", task_id]

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)
        if result.stderr:
            click.echo(result.stderr, err=True)

    except subprocess.CalledProcessError as e:
        click.echo(f"An error occurred while replaying the task: {e}", err=True)
        click.echo(e.output, err=True)

    except Exception as e:
        click.echo(f"An unexpected error occurred: {e}", err=True)
10 changes: 10 additions & 0 deletions src/crewai/cli/templates/main.py
@@ -21,3 +21,13 @@ def train():

    except Exception as e:
        raise Exception(f"An error occurred while training the crew: {e}")


def replay_from_task():
    """
    Replay the crew execution from a specific task.
    """
    try:
        {{crew_name}}Crew().crew().replay_from_task(task_id=sys.argv[1])

    except Exception as e:
        raise Exception(f"An error occurred while replaying the crew: {e}")
1 change: 1 addition & 0 deletions src/crewai/cli/templates/pyproject.toml
@@ -11,6 +11,7 @@ crewai = { extras = ["tools"], version = "^0.35.8" }
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:run"
train = "{{folder_name}}.main:train"
replay = "{{folder_name}}.main:replay_from_task"

[build-system]
requires = ["poetry-core"]