
Commit

Merge branch 'main' into DOCS/poetry.toml_update
joaomdmoura committed Sep 7, 2024
2 parents 2874ffb + 596491d commit d6e77bb
Showing 17 changed files with 164 additions and 101 deletions.
36 changes: 36 additions & 0 deletions README.md
@@ -284,3 +284,39 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se
## License

CrewAI is released under the MIT License.

## Frequently Asked Questions (FAQ)

### Q: What is CrewAI?
A: CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. It enables agents to work together seamlessly, tackling complex tasks through collaborative intelligence.

### Q: How do I install CrewAI?
A: You can install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```

### Q: Can I use CrewAI with local models?
A: Yes, CrewAI supports various LLMs, including local models. You can configure your agents to use local models via tools like Ollama and LM Studio. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
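
As a rough sketch, wiring an agent to a local Ollama model can look like the following; the `ChatOllama` import, model name, and URL are assumptions, and the exact wiring depends on your CrewAI version:

```python
# Minimal sketch (assumed API): route an agent through a local Ollama model.
from langchain_community.chat_models import ChatOllama

from crewai import Agent

local_llm = ChatOllama(model="llama3", base_url="http://localhost:11434")

researcher = Agent(
    role="Researcher",
    goal="Summarize recent findings on a topic",
    backstory="A meticulous analyst who prefers local models.",
    llm=local_llm,  # the agent now calls the local model instead of a hosted API
)
```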

### Q: What are the key features of CrewAI?
A: Key features include role-based agent design, autonomous inter-agent delegation, flexible task management, process-driven execution, output saving as files, and compatibility with both open-source and proprietary models.
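
A minimal sketch tying a few of these features together (roles, prompts, and file names below are illustrative):

```python
# Illustrative sketch: a role-based agent, a task that saves its output to a
# file, and a sequential, process-driven crew.
from crewai import Agent, Crew, Process, Task

writer = Agent(
    role="Senior Writer",
    goal="Turn research notes into a short report",
    backstory="An experienced technical writer.",
)

report_task = Task(
    description="Write a one-page report from the provided notes.",
    expected_output="A markdown report",
    agent=writer,
    output_file="report.md",  # output saving as a file
)

crew = Crew(agents=[writer], tasks=[report_task], process=Process.sequential)
result = crew.kickoff()
```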

### Q: How does CrewAI compare to other AI orchestration tools?
A: CrewAI is designed with production in mind, offering the flexibility of Autogen's conversational agents and the structured processes of ChatDev, but with greater adaptability for real-world applications.

### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and welcomes contributions from the community.

### Q: Does CrewAI collect any data?
A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (such as prompts, task descriptions, or API calls) is collected. Users can opt in to share more detailed data by setting `share_crew=True` on their Crews.
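
Opting in looks roughly like this (the agent and task are placeholders):

```python
# Sketch: opting in to fuller telemetry when assembling a crew.
from crewai import Agent, Crew, Task

analyst = Agent(role="Analyst", goal="Answer questions", backstory="Concise and careful.")
question = Task(
    description="Answer one question about CrewAI.",
    expected_output="A short answer",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[question], share_crew=True)  # opt in to Further Telemetry
```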

### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [crewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.

### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.
Binary file removed docs/assets/crewai-langtrace-spans.png
Binary file removed docs/assets/crewai-langtrace-stats.png
Binary file added docs/assets/langtrace1.png
Binary file added docs/assets/langtrace2.png
Binary file added docs/assets/langtrace3.png
45 changes: 10 additions & 35 deletions docs/how-to/Langtrace-Observability.md
@@ -7,10 +7,14 @@ description: How to monitor cost, latency, and performance of CrewAI Agents usin

Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases. While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents. This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.

![Overview of a select series of agent session runs](../assets/langtrace1.png)
![Overview of agent traces](../assets/langtrace2.png)
![Overview of LLM traces in detail](../assets/langtrace3.png)

## Setup Instructions

1. Sign up for [Langtrace](https://langtrace.ai/) by visiting [https://langtrace.ai/signup](https://langtrace.ai/signup).
2. Create a project and generate an API key.
2. Create a project, set the project type to crewAI, and generate an API key.
3. Install Langtrace in your CrewAI project using the following commands:

```bash
# (install commands collapsed in this diff view)
```

@@ -32,58 +36,29 @@ langtrace.init(api_key='<LANGTRACE_API_KEY>')

```python
from crewai import Agent, Task, Crew
```

2. Create your CrewAI agents and tasks as usual.

3. Use Langtrace's tracking functions to monitor your CrewAI operations. For example:

```python
with langtrace.trace("CrewAI Task Execution"):
    result = crew.kickoff()
```

### Features and Their Application to CrewAI

1. **LLM Token and Cost Tracking**

- Monitor the token usage and associated costs for each CrewAI agent interaction.
- Example:
```python
with langtrace.trace("Agent Interaction"):
    agent_response = agent.execute(task)
```

2. **Trace Graph for Execution Steps**

- Visualize the execution flow of your CrewAI tasks, including latency and logs.
- Useful for identifying bottlenecks in your agent workflows.

3. **Dataset Curation with Manual Annotation**

- Create datasets from your CrewAI task outputs for future training or evaluation.
- Example:
```python
langtrace.log_dataset_item(task_input, agent_output, {"task_type": "research"})
```

4. **Prompt Versioning and Management**

- Keep track of different versions of prompts used in your CrewAI agents.
- Useful for A/B testing and optimizing agent performance.

5. **Prompt Playground with Model Comparisons**

- Test and compare different prompts and models for your CrewAI agents before deployment.

6. **Testing and Evaluations**
- Set up automated tests for your CrewAI agents and tasks.
- Example:
```python
langtrace.evaluate(agent_output, expected_output, "accuracy")
```

## Monitoring New CrewAI Features

CrewAI has introduced several new features that can be monitored using Langtrace:

1. **Code Execution**: Monitor the performance and output of code executed by agents.
```python
with langtrace.trace("Agent Code Execution"):
    code_output = agent.execute_code(code_snippet)
```

2. **Third-party Agent Integration**: Track interactions with LlamaIndex, LangChain, and Autogen agents.
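
For example, reusing the `langtrace.trace` helper shown in the snippets above (an assumed API), a third-party-backed agent call might be traced like this; the agent and task names are placeholders:

```python
# Sketch (assumed API, mirroring the examples above): wrap a call to a
# LangChain- or LlamaIndex-backed agent so it shows up in the same trace.
with langtrace.trace("Third-party Agent Interaction"):
    response = langchain_backed_agent.execute(task)  # placeholder agent and task
```
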
5 changes: 4 additions & 1 deletion src/crewai/crew.py
@@ -584,7 +584,10 @@ def _create_manager_agent(self):
self.manager_agent.allow_delegation = True
manager = self.manager_agent
if manager.tools is not None and len(manager.tools) > 0:
- raise Exception("Manager agent should not have tools")
+ self._logger.log(
+ "warning", "Manager agent should not have tools", color="orange"
+ )
+ manager.tools = []
manager.tools = self.manager_agent.get_delegation_tools(self.agents)
else:
manager = Agent(
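
In practice, this change means a hierarchical crew whose manager agent was defined with its own tools no longer fails at kickoff. A hypothetical sketch of the new behavior (agent, task, and tool names are placeholders):

```python
# Hypothetical sketch: a manager agent defined with tools now triggers a
# warning, has those tools dropped, and receives delegation tools instead
# of raising "Manager agent should not have tools".
from crewai import Agent, Crew, Process

manager = Agent(
    role="Manager",
    goal="Coordinate the crew and delegate work",
    backstory="Keeps everyone on track.",
    tools=[some_tool],  # placeholder tool; previously this raised at kickoff
)

crew = Crew(
    agents=[researcher, writer],          # placeholder agents
    tasks=[research_task, writing_task],  # placeholder tasks
    process=Process.hierarchical,
    manager_agent=manager,
)
crew.kickoff()  # logs a warning and continues instead of raising
```
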
3 changes: 2 additions & 1 deletion src/crewai/project/annotations.py
@@ -103,7 +103,8 @@ def wrapper(self, *args, **kwargs):
for task_name in sorted_task_names:
task_instance = tasks[task_name]()
instantiated_tasks.append(task_instance)
- if hasattr(task_instance, "agent"):
+ agent_instance = getattr(task_instance, "agent", None)
+ if agent_instance is not None:
agent_instance = task_instance.agent
if agent_instance.role not in agent_roles:
instantiated_agents.append(agent_instance)
13 changes: 8 additions & 5 deletions src/crewai/task.py
@@ -6,7 +6,7 @@
from concurrent.futures import Future
from copy import copy
from hashlib import md5
- from typing import Any, Dict, List, Optional, Tuple, Type, Union
+ from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union

from opentelemetry.trace import Span
from pydantic import (
@@ -108,6 +108,7 @@ class Task(BaseModel):
description="A converter class used to export structured output",
default=None,
)
+ processed_by_agents: Set[str] = Field(default_factory=set)

_telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
_execution_span: Optional[Span] = PrivateAttr(default=None)
@@ -241,6 +242,8 @@ def _execute_core(
self.prompt_context = context
tools = tools or self.tools or []

+ self.processed_by_agents.add(agent.role)

result = agent.execute_task(
task=self,
context=context,
@@ -273,9 +276,7 @@
content = (
json_output
if json_output
- else pydantic_output.model_dump_json()
- if pydantic_output
- else result
+ else pydantic_output.model_dump_json() if pydantic_output else result
)
self._save_file(content)

@@ -310,8 +311,10 @@ def increment_tools_errors(self) -> None:
"""Increment the tools errors counter."""
self.tools_errors += 1

- def increment_delegations(self) -> None:
+ def increment_delegations(self, agent_name: Optional[str]) -> None:
"""Increment the delegations counter."""
+ if agent_name:
+ self.processed_by_agents.add(agent_name)
self.delegations += 1

def copy(self, agents: List["BaseAgent"]) -> "Task":
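
A short sketch of what the new `processed_by_agents` field surfaces after a run (roles and task text are illustrative):

```python
# Illustrative sketch: processed_by_agents records the role of every agent
# that worked on a task, including coworkers reached through delegation.
from crewai import Agent, Crew, Task

researcher = Agent(role="Researcher", goal="Gather facts", backstory="Thorough and concise.")
brief = Task(
    description="Write a short brief on topic X.",
    expected_output="Three bullet points",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[brief])
crew.kickoff()

print(brief.processed_by_agents)  # e.g. {"Researcher"}; delegations add coworker roles
```
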
2 changes: 1 addition & 1 deletion src/crewai/tools/tool_calling.py
@@ -1,8 +1,8 @@
from typing import Any, Dict, Optional

+ from pydantic import BaseModel, Field
from pydantic import BaseModel as PydanticBaseModel
from pydantic import Field as PydanticField
- from pydantic.v1 import BaseModel, Field


class ToolCalling(BaseModel):
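
A minimal sketch of the migrated model in use (field values are made up):

```python
# Minimal sketch: ToolCalling is now a plain pydantic (v2) model, so v2-style
# construction and serialization apply.
from crewai.tools.tool_calling import ToolCalling

calling = ToolCalling(tool_name="search", arguments={"query": "CrewAI docs"})
print(calling.model_dump())  # {'tool_name': 'search', 'arguments': {'query': 'CrewAI docs'}}
```
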
2 changes: 1 addition & 1 deletion src/crewai/tools/tool_output_parser.py
@@ -5,7 +5,7 @@
from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation
- from langchain_core.pydantic_v1 import ValidationError
+ from pydantic import ValidationError


class ToolOutputParser(PydanticOutputParser):
22 changes: 13 additions & 9 deletions src/crewai/tools/tool_usage.py
@@ -8,6 +8,7 @@
from langchain_openai import ChatOpenAI

from crewai.agents.tools_handler import ToolsHandler
+ from crewai.task import Task
from crewai.telemetry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.utilities import I18N, Converter, ConverterError, Printer
@@ -51,7 +52,7 @@ def __init__(
original_tools: List[Any],
tools_description: str,
tools_names: str,
- task: Any,
+ task: Task,
function_calling_llm: Any,
agent: Any,
action: Any,
@@ -154,7 +155,10 @@ def _use(
"Delegate work to coworker",
"Ask question to coworker",
]:
- self.task.increment_delegations()
+ coworker = (
+ calling.arguments.get("coworker") if calling.arguments else None
+ )
+ self.task.increment_delegations(coworker)

if calling.arguments:
try:
@@ -241,7 +245,7 @@ def _format_result(self, result: Any) -> None:
result = self._remember_format(result=result) # type: ignore # "_remember_format" of "ToolUsage" does not return a value (it only ever returns None)
return result

- def _should_remember_format(self) -> None:
+ def _should_remember_format(self) -> bool:
return self.task.used_tools % self._remember_format_after_usages == 0

def _remember_format(self, result: str) -> None:
@@ -353,10 +357,10 @@ def _tool_calling(
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_arguments_error")}'
)
- calling = ToolCalling( # type: ignore # Unexpected keyword argument "log" for "ToolCalling"
+ calling = ToolCalling(
tool_name=tool.name,
arguments=arguments,
- log=tool_string,
+ log=tool_string, # type: ignore
)
except Exception as e:
self._run_attempts += 1
@@ -404,19 +408,19 @@ def _validate_tool_input(self, tool_input: str) -> str:
'"' + value.replace('"', '\\"') + '"'
) # Re-encapsulate with double quotes
elif value.isdigit(): # Check if value is a digit, hence integer
- formatted_value = value
+ value = value
elif value.lower() in [
"true",
"false",
"null",
]: # Check for boolean and null values
- formatted_value = value.lower()
+ value = value.lower()
else:
# Assume the value is a string and needs quotes
- formatted_value = '"' + value.replace('"', '\\"') + '"'
+ value = '"' + value.replace('"', '\\"') + '"'

# Rebuild the entry with proper quoting
- formatted_entry = f'"{key}": {formatted_value}'
+ formatted_entry = f'"{key}": {value}'
formatted_entries.append(formatted_entry)

# Reconstruct the JSON string
11 changes: 5 additions & 6 deletions src/crewai/utilities/config.py
@@ -23,17 +23,16 @@ def process_config(
# Copy values from config (originally from YAML) to the model's attributes.
# Only copy if the attribute isn't already set, preserving any explicitly defined values.
for key, value in config.items():
- if key not in model_class.model_fields:
+ if key not in model_class.model_fields or values.get(key) is not None:
continue
- if values.get(key) is not None:
- continue
- if isinstance(value, (str, int, float, bool, list)):
- values[key] = value
- elif isinstance(value, dict):
+
+ if isinstance(value, dict):
if isinstance(values.get(key), dict):
values[key].update(value)
else:
values[key] = value
+ else:
+ values[key] = value

# Remove the config from values to avoid duplicate processing
values.pop("config", None)
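
The user-facing effect of this merge rule, roughly: values coming from `config` only fill attributes that were not set explicitly, and nested dicts are merged rather than replaced. A hypothetical sketch (field values are illustrative):

```python
# Hypothetical sketch of the config-merge rule: explicit keyword arguments win
# over values supplied through `config`.
from crewai import Agent

agent = Agent(
    config={
        "role": "Researcher",
        "goal": "Find relevant sources",
        "backstory": "Curious and systematic.",
    },
    goal="Find and rank relevant sources",  # explicit value wins over config["goal"]
)
print(agent.goal)  # "Find and rank relevant sources"
```
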
3 changes: 1 addition & 2 deletions src/crewai/utilities/crew_pydantic_output_parser.py
@@ -5,8 +5,7 @@
from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation
- from langchain_core.pydantic_v1 import ValidationError
- from pydantic import BaseModel
+ from pydantic import BaseModel, ValidationError


class CrewPydanticOutputParser(PydanticOutputParser):