diff --git a/docs/core-concepts/Pipeline.md b/docs/core-concepts/Pipeline.md
new file mode 100644
index 0000000000..208d941ac1
--- /dev/null
+++ b/docs/core-concepts/Pipeline.md
@@ -0,0 +1,267 @@
+---
+title: crewAI Pipelines
+description: Understanding and utilizing pipelines in the crewAI framework for efficient multi-stage task processing.
+---
+
+## What is a Pipeline?
+
+A pipeline in crewAI represents a structured workflow that allows for the sequential or parallel execution of multiple crews. It provides a way to organize complex processes involving multiple stages, where the output of one stage can serve as input for subsequent stages.
+
+## Key Terminology
+
+Understanding the following terms is crucial for working effectively with pipelines:
+
+- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
+- **Run**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
+- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
+- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.
+
+Example pipeline structure:
+
+```
+crew1 >> [crew2, crew3] >> crew4
+```
+
+This represents a pipeline with three stages:
+
+1. A sequential stage (crew1)
+2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
+3. Another sequential stage (crew4)
+
+In code, this shorthand corresponds to `Pipeline(stages=[crew1, [crew2, crew3], crew4])`.
+
+Each input creates its own run, flowing through all stages of the pipeline. Multiple runs can be processed concurrently, each following the defined pipeline structure.
+
+## Pipeline Attributes
+
+| Attribute  | Parameters | Description                                                                             |
+| :--------- | :--------- | :------------------------------------------------------------------------------------ |
+| **Stages** | `stages`   | A list of crews or lists of crews representing the stages to be executed in sequence.  |
+
+## Creating a Pipeline
+
+When creating a pipeline, you define a series of stages, each consisting of either a single crew or a list of crews for parallel execution. The pipeline ensures that each stage is executed in order, with the output of one stage feeding into the next.
+
+### Example: Assembling a Pipeline
+
+```python
+from crewai import Crew, Agent, Task, Process, Pipeline
+
+# Define your crews (the agents and tasks referenced here are assumed to be defined earlier)
+research_crew = Crew(
+    agents=[researcher],
+    tasks=[research_task],
+    process=Process.sequential
+)
+
+analysis_crew = Crew(
+    agents=[analyst],
+    tasks=[analysis_task],
+    process=Process.sequential
+)
+
+writing_crew = Crew(
+    agents=[writer],
+    tasks=[writing_task],
+    process=Process.sequential
+)
+
+# Assemble the pipeline
+my_pipeline = Pipeline(
+    stages=[research_crew, analysis_crew, writing_crew]
+)
+```
+
+## Pipeline Methods
+
+| Method           | Description                                                                                                                                                                      |
+| :--------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| **process_runs** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more runs through the pipeline, handling the flow of data between stages.  |
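+
+Since `process_runs` is a coroutine, it must be awaited. From synchronous code, a minimal invocation sketch (the input key here is illustrative) looks like this:
+
+```python
+import asyncio
+
+# Each input dict starts its own run through the pipeline assembled above.
+pipeline_output = asyncio.run(
+    my_pipeline.process_runs([{"initial_query": "Latest advancements in AI"}])
+)
+```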
+
+## Pipeline Output
+
+!!! note "Understanding Pipeline Outputs"
+    The output of a pipeline in the crewAI framework is encapsulated within two main classes: `PipelineOutput` and `PipelineRunResult`.
+    These classes provide a structured way to access the results of the pipeline's execution, including various formats such as raw strings, JSON, and Pydantic models.
+
+### Pipeline Output Attributes
+
+| Attribute       | Parameters    | Type                      | Description                                                                                                 |
+| :-------------- | :------------ | :------------------------ | :--------------------------------------------------------------------------------------------------------- |
+| **ID**          | `id`          | `UUID4`                   | A unique identifier for the pipeline output.                                                                |
+| **Run Results** | `run_results` | `List[PipelineRunResult]` | A list of `PipelineRunResult` objects, each representing the output of a single run through the pipeline.   |
+
+### Pipeline Output Methods
+
+| Method/Property    | Description                                             |
+| :----------------- | :----------------------------------------------------- |
+| **add_run_result** | Adds a `PipelineRunResult` to the list of run results.  |
+
+### Pipeline Run Result Attributes
+
+| Attribute         | Parameters      | Type                       | Description                                                                                     |
+| :---------------- | :-------------- | :------------------------- | :----------------------------------------------------------------------------------------------- |
+| **ID**            | `id`            | `UUID4`                    | A unique identifier for the run result.                                                          |
+| **Raw**           | `raw`           | `str`                      | The raw output of the final stage in the pipeline run.                                           |
+| **Pydantic**      | `pydantic`      | `Optional[BaseModel]`      | A Pydantic model object representing the structured output of the final stage, if applicable.    |
+| **JSON Dict**     | `json_dict`     | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the final stage, if applicable.                     |
+| **Token Usage**   | `token_usage`   | `Dict[str, Any]`           | A summary of token usage across all stages of the pipeline run.                                  |
+| **Trace**         | `trace`         | `List[Any]`                | A trace of the journey of inputs through the pipeline run.                                       |
+| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]`         | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline run.     |
+
+### Pipeline Run Result Methods and Properties
+
+| Method/Property | Description                                                                                                |
+| :-------------- | :---------------------------------------------------------------------------------------------------------- |
+| **json**        | Returns the JSON string representation of the run result if the output format of the final task is JSON.   |
+| **to_dict**     | Converts the JSON and Pydantic outputs to a dictionary.                                                     |
+| **\_\_str\_\_** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw.            |
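+
+As a rough sketch, the string conversion described above resolves in the following priority order (illustrative only, not the actual implementation):
+
+```python
+import json
+
+def run_result_str(run_result) -> str:
+    # Prefer structured outputs when present, falling back to the raw text.
+    if run_result.pydantic:
+        return str(run_result.pydantic)
+    if run_result.json_dict:
+        return json.dumps(run_result.json_dict)
+    return run_result.raw
+```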
+
+### Accessing Pipeline Outputs
+
+Once a pipeline has been executed, its output can be accessed through the `PipelineOutput` object returned by the `process_runs` method. The `PipelineOutput` class provides access to individual `PipelineRunResult` objects, each representing a single run through the pipeline.
+
+#### Example
+
+```python
+import json
+
+# Define input data for the pipeline
+input_data = [{"initial_query": "Latest advancements in AI"}, {"initial_query": "Future of robotics"}]
+
+# Execute the pipeline
+pipeline_output = await my_pipeline.process_runs(input_data)
+
+# Access the results
+for run_result in pipeline_output.run_results:
+    print(f"Run ID: {run_result.id}")
+    print(f"Final Raw Output: {run_result.raw}")
+    if run_result.json_dict:
+        print(f"JSON Output: {json.dumps(run_result.json_dict, indent=2)}")
+    if run_result.pydantic:
+        print(f"Pydantic Output: {run_result.pydantic}")
+    print(f"Token Usage: {run_result.token_usage}")
+    print(f"Trace: {run_result.trace}")
+    print("Crew Outputs:")
+    for crew_output in run_result.crews_outputs:
+        print(f"  Crew: {crew_output.raw}")
+    print("\n")
+```
+
+This example demonstrates how to access and work with the pipeline output, including individual run results and their associated data.
+
+## Using Pipelines
+
+Pipelines are particularly useful for complex workflows that involve multiple stages of processing, analysis, or content generation. They allow you to:
+
+1. **Sequence Operations**: Execute crews in a specific order, ensuring that the output of one crew is available as input to the next.
+2. **Parallel Processing**: Run multiple crews concurrently within a stage for increased efficiency.
+3. **Manage Complex Workflows**: Break down large tasks into smaller, manageable steps executed by specialized crews.
+
+### Example: Running a Pipeline
+
+```python
+# Define input data for the pipeline
+input_data = [{"initial_query": "Latest advancements in AI"}]
+
+# Execute the pipeline, initiating a run for each input
+pipeline_output = await my_pipeline.process_runs(input_data)
+
+# Access the results via the run_results of the returned PipelineOutput
+for result in pipeline_output.run_results:
+    print(f"Final Output: {result.raw}")
+    print(f"Token Usage: {result.token_usage}")
+    print(f"Trace: {result.trace}")  # Shows the path of the input through all stages
+```
+
+## Advanced Features
+
+### Parallel Execution within Stages
+
+You can define parallel execution within a stage by providing a list of crews, creating multiple branches:
+
+```python
+parallel_analysis_crew = Crew(agents=[financial_analyst], tasks=[financial_analysis_task])
+market_analysis_crew = Crew(agents=[market_analyst], tasks=[market_analysis_task])
+
+my_pipeline = Pipeline(
+    stages=[
+        research_crew,
+        [parallel_analysis_crew, market_analysis_crew],  # Parallel execution (branching)
+        writing_crew
+    ]
+)
+```
+
+### Routers in Pipelines
+
+Routers are a powerful feature in crewAI pipelines that allow for dynamic decision-making and branching within your workflow. They enable you to direct the flow of execution based on specific conditions or criteria, making your pipelines more flexible and adaptive.
+
+#### What is a Router?
+
+A router in crewAI is a special component that can be included as a stage in your pipeline. It evaluates the input data and determines which path the execution should take next. This allows for conditional branching, where different crews or sub-pipelines are executed based on the router's decision.
+
+#### Key Components of a Router
+
+1. **Routes**: A dictionary of named routes, each associated with a condition and a pipeline to execute if the condition is met.
+2. **Default Route**: A fallback pipeline that is executed if none of the defined route conditions are met.
+
+#### Creating a Router
+
+Here's an example of how to create a router:
+
+```python
+from crewai import Router, Route, Pipeline, Crew, Agent, Task
+
+# Define your agents
+classifier = Agent(name="Classifier", role="Email Classifier")
+urgent_handler = Agent(name="Urgent Handler", role="Urgent Email Processor")
+normal_handler = Agent(name="Normal Handler", role="Normal Email Processor")
+
+# Define your tasks
+classify_task = Task(description="Classify the email based on its content and metadata.")
+urgent_task = Task(description="Process and respond to urgent email quickly.")
+normal_task = Task(description="Process and respond to normal email thoroughly.")
+
+# Define your crews
+classification_crew = Crew(agents=[classifier], tasks=[classify_task])  # Scores email urgency on a 1-10 scale
+urgent_crew = Crew(agents=[urgent_handler], tasks=[urgent_task])
+normal_crew = Crew(agents=[normal_handler], tasks=[normal_task])
+
+# Create pipelines for different urgency levels
+urgent_pipeline = Pipeline(stages=[urgent_crew])
+normal_pipeline = Pipeline(stages=[normal_crew])
+
+# Create a router
+email_router = Router(
+    routes={
+        "high_urgency": Route(
+            condition=lambda x: x.get("urgency_score", 0) > 7,
+            pipeline=urgent_pipeline
+        ),
+        "low_urgency": Route(
+            condition=lambda x: x.get("urgency_score", 0) <= 7,
+            pipeline=normal_pipeline
+        )
+    },
+    default=normal_pipeline  # Fallback pipeline if no route condition matches
+)
+
+# Use the router in a main pipeline
+main_pipeline = Pipeline(stages=[classification_crew, email_router])
+
+inputs = [{"email": "..."}, {"email": "..."}]  # List of email data
+
+results = await main_pipeline.process_runs(inputs)
+```
+
+In this example, the router chooses between the urgent and normal pipelines based on the urgency score produced by the classification crew. If the urgency score is greater than 7, the input is routed to the urgent pipeline; otherwise, it takes the normal pipeline. Note that because `x.get("urgency_score", 0)` falls back to `0`, inputs without an urgency score also follow the low-urgency route; the `default` pipeline only runs if no route condition matches at all.
+
+#### Benefits of Using Routers
+
+1. **Dynamic Workflow**: Adapt your pipeline's behavior based on input characteristics or intermediate results.
+2. **Efficiency**: Route urgent tasks to quicker processes, reserving more thorough pipelines for less time-sensitive inputs.
+3. **Flexibility**: Easily modify or extend your pipeline's logic without changing the core structure.
+4. **Scalability**: Handle a wide range of email types and urgency levels with a single pipeline structure.
+
+### Error Handling and Validation
+
+The Pipeline class includes validation mechanisms to ensure the robustness of the pipeline structure:
+
+- Validates that stages contain only Crew instances or lists of Crew instances.
+- Prevents double nesting of stages to maintain a clear structure.
diff --git a/docs/index.md b/docs/index.md
index 54dfd59aa6..7fe2d224f6 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -46,6 +46,11 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Crews
+<li>
+
  • + + Pipeline + +
  • Training diff --git a/poetry.lock b/poetry.lock index 020255b02c..5bc872be59 100644 --- a/poetry.lock +++ b/poetry.lock @@ -1,14 +1,14 @@ -# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand. +# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand. [[package]] name = "agentops" -version = "0.3.2" +version = "0.3.4" description = "Python SDK for developing AI agent evals and observability" optional = true python-versions = ">=3.7" files = [ - {file = "agentops-0.3.2-py3-none-any.whl", hash = "sha256:b35988e04378624204572bb3d7a454094f879ea573f05b57d4e75ab0bfbb82af"}, - {file = "agentops-0.3.2.tar.gz", hash = "sha256:55559ac4a43634831dfa8937c2597c28e332809dc7c6bb3bc3c8b233442e224c"}, + {file = "agentops-0.3.4-py3-none-any.whl", hash = "sha256:126f7aed4ba43c1399b5488d67a03d10cb4c531e619c650776f826ca00c1aa24"}, + {file = "agentops-0.3.4.tar.gz", hash = "sha256:a92c9cb7c511197f0ecb8cb5aca15d35022c15a3d2fd2aaaa34cd7e5dc59393f"}, ] [package.dependencies] @@ -355,17 +355,17 @@ lxml = ["lxml"] [[package]] name = "boto3" -version = "1.34.146" +version = "1.34.149" description = "The AWS SDK for Python" optional = false python-versions = ">=3.8" files = [ - {file = "boto3-1.34.146-py3-none-any.whl", hash = "sha256:7ec568fb19bce82a70be51f08fddac1ef927ca3fb0896cbb34303a012ba228d8"}, - {file = "boto3-1.34.146.tar.gz", hash = "sha256:5686fe2a6d1aa1de8a88e9589cdcc33361640d3d7a13da718a30717248886124"}, + {file = "boto3-1.34.149-py3-none-any.whl", hash = "sha256:11edeeacdd517bda3b7615b754d8440820cdc9ddd66794cc995a9693ddeaa3be"}, + {file = "boto3-1.34.149.tar.gz", hash = "sha256:f4e6489ba9dc7fb37d53e0e82dbc97f2cb0a4969ef3970e2c88b8f94023ae81a"}, ] [package.dependencies] -botocore = ">=1.34.146,<1.35.0" +botocore = ">=1.34.149,<1.35.0" jmespath = ">=0.7.1,<2.0.0" s3transfer = ">=0.10.0,<0.11.0" @@ -374,13 +374,13 @@ crt = ["botocore[crt] (>=1.21.0,<2.0a0)"] [[package]] name = "botocore" -version = "1.34.146" +version = "1.34.149" description = "Low-level, data-driven core of boto 3." 
optional = false python-versions = ">=3.8" files = [ - {file = "botocore-1.34.146-py3-none-any.whl", hash = "sha256:3fd4782362bd29c192704ebf859c5c8c5189ad05719e391eefe23088434427ae"}, - {file = "botocore-1.34.146.tar.gz", hash = "sha256:849cb8e54e042443aeabcd7822b5f2b76cb5cfe33fe3a71f91c7c069748a869c"}, + {file = "botocore-1.34.149-py3-none-any.whl", hash = "sha256:ae6c4be52eeee96f68c116b27d252bab069cd046d61a17cfe8e9da411cf22906"}, + {file = "botocore-1.34.149.tar.gz", hash = "sha256:2e1eb5ef40102a3d796bb3dd05f2ac5e8fb43fe1ff114b4f6d33153437f5a372"}, ] [package.dependencies] @@ -1012,13 +1012,13 @@ idna = ">=2.0.0" [[package]] name = "embedchain" -version = "0.1.118" +version = "0.1.119" description = "Simplest open source retrieval (RAG) framework" optional = false python-versions = "<=3.13,>=3.9" files = [ - {file = "embedchain-0.1.118-py3-none-any.whl", hash = "sha256:38ead471df9d9234bf42e6f7a32cab26431d50d6f2f894f18a6cabc0b02bf31a"}, - {file = "embedchain-0.1.118.tar.gz", hash = "sha256:1fa1e799882a1dc4e63af344595b043f1c1f30fbd59461b6660b1934b85a1e4b"}, + {file = "embedchain-0.1.119-py3-none-any.whl", hash = "sha256:8ec3e7f139939fa1dc8fda898f8d8d9d31a5abfe08e184b607e38733d863d606"}, + {file = "embedchain-0.1.119.tar.gz", hash = "sha256:0f4f45e092b7f3192ea6fe82575726532573b1231d7af6c22edc695b701b4223"}, ] [package.dependencies] @@ -1032,7 +1032,7 @@ langchain = ">0.2,<=0.3" langchain-cohere = ">=0.1.4,<0.2.0" langchain-community = ">=0.2.6,<0.3.0" langchain-openai = ">=0.1.7,<0.2.0" -mem0ai = ">=0.0.5,<0.0.6" +mem0ai = ">=0.0.9,<0.0.10" openai = ">=1.1.1" posthog = ">=3.0.2,<4.0.0" pypdf = ">=4.0.1,<5.0.0" @@ -1061,20 +1061,6 @@ together = ["together (>=1.2.1,<2.0.0)"] vertexai = ["langchain-google-vertexai (>=1.0.6,<2.0.0)"] weaviate = ["weaviate-client (>=3.24.1,<4.0.0)"] -[[package]] -name = "eval-type-backport" -version = "0.2.0" -description = "Like `typing._eval_type`, but lets older Python versions use newer typing features." 
-optional = false -python-versions = ">=3.8" -files = [ - {file = "eval_type_backport-0.2.0-py3-none-any.whl", hash = "sha256:ac2f73d30d40c5a30a80b8739a789d6bb5e49fdffa66d7912667e2015d9c9933"}, - {file = "eval_type_backport-0.2.0.tar.gz", hash = "sha256:68796cfbc7371ebf923f03bdf7bef415f3ec098aeced24e054b253a0e78f7b37"}, -] - -[package.extras] -tests = ["pytest"] - [[package]] name = "exceptiongroup" version = "1.2.2" @@ -1402,13 +1388,13 @@ requests = ["requests (>=2.20.0,<3.0.0.dev0)"] [[package]] name = "google-cloud-aiplatform" -version = "1.59.0" +version = "1.60.0" description = "Vertex AI API client library" optional = false python-versions = ">=3.8" files = [ - {file = "google-cloud-aiplatform-1.59.0.tar.gz", hash = "sha256:2bebb59c0ba3e3b4b568305418ca1b021977988adbee8691a5bed09b037e7e63"}, - {file = "google_cloud_aiplatform-1.59.0-py2.py3-none-any.whl", hash = "sha256:549e6eb1844b0f853043309138ebe2db00de4bbd8197b3bde26804ac163ef52a"}, + {file = "google-cloud-aiplatform-1.60.0.tar.gz", hash = "sha256:782c7f1ec0e77a7c7daabef3b65bfd506ed2b4b1dc2186753c43cd6faf8dd04e"}, + {file = "google_cloud_aiplatform-1.60.0-py2.py3-none-any.whl", hash = "sha256:5f14159c9575f4b46335027e3ceb8fa57bd5eaa76a07f858105b8c6c034ec0d6"}, ] [package.dependencies] @@ -1430,8 +1416,8 @@ cloud-profiler = ["tensorboard-plugin-profile (>=2.4.0,<3.0.0dev)", "tensorflow datasets = ["pyarrow (>=10.0.1)", "pyarrow (>=14.0.0)", "pyarrow (>=3.0.0,<8.0dev)"] endpoint = ["requests (>=2.28.1)"] full = ["cloudpickle (<3.0)", "docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0)", "fastapi (>=0.71.0,<=0.109.1)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-cloud-logging (<4.0)", "google-vizier (>=0.1.6)", "httpx (>=0.23.0,<0.25.0)", "immutabledict", "lit-nlp (==0.4.0)", "mlflow (>=1.27.0,<=2.1.1)", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "pandas (>=1.0.0,<2.2.0)", "pyarrow (>=10.0.1)", "pyarrow (>=14.0.0)", "pyarrow (>=3.0.0,<8.0dev)", "pyarrow (>=6.0.1)", "pydantic (<2)", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<=2.9.3)", "ray[default] (>=2.5,<=2.9.3)", "requests (>=2.28.1)", "setuptools (<70.0.0)", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<3.0.0dev)", "tensorflow (>=2.3.0,<3.0.0dev)", "tensorflow (>=2.3.0,<3.0.0dev)", "tensorflow (>=2.4.0,<3.0.0dev)", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<2.1.0dev)"] -langchain = ["langchain (>=0.1.16,<0.3)", "langchain-core (<0.2)", "langchain-google-vertexai (<2)", "openinference-instrumentation-langchain (>=0.1.19,<0.2)", "tenacity (<=8.3)"] -langchain-testing = ["absl-py", "cloudpickle (>=3.0,<4.0)", "langchain (>=0.1.16,<0.3)", "langchain-core (<0.2)", "langchain-google-vertexai (<2)", "openinference-instrumentation-langchain (>=0.1.19,<0.2)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.6.3,<3)", "pytest-xdist", "tenacity (<=8.3)"] +langchain = ["langchain (>=0.1.16,<0.3)", "langchain-core (<0.3)", "langchain-google-vertexai (<2)", "openinference-instrumentation-langchain (>=0.1.19,<0.2)", "tenacity (<=8.3)"] +langchain-testing = ["absl-py", "cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "langchain (>=0.1.16,<0.3)", "langchain-core (<0.3)", "langchain-google-vertexai (<2)", "openinference-instrumentation-langchain (>=0.1.19,<0.2)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.6.3,<3)", "pytest-xdist", "tenacity (<=8.3)"] lit = ["explainable-ai-sdk (>=1.0.0)", "lit-nlp 
(==0.4.0)", "pandas (>=1.0.0)", "tensorflow (>=2.3.0,<3.0.0dev)"] metadata = ["numpy (>=1.15.0)", "pandas (>=1.0.0)"] pipelines = ["pyyaml (>=5.3.1,<7)"] @@ -1441,7 +1427,7 @@ private-endpoints = ["requests (>=2.28.1)", "urllib3 (>=1.21.1,<1.27)"] rapid-evaluation = ["pandas (>=1.0.0,<2.2.0)", "tqdm (>=4.23.0)"] ray = ["google-cloud-bigquery", "google-cloud-bigquery-storage", "immutabledict", "pandas (>=1.0.0,<2.2.0)", "pyarrow (>=6.0.1)", "pydantic (<2)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<=2.9.3)", "ray[default] (>=2.5,<=2.9.3)", "setuptools (<70.0.0)"] ray-testing = ["google-cloud-bigquery", "google-cloud-bigquery-storage", "immutabledict", "pandas (>=1.0.0,<2.2.0)", "pyarrow (>=6.0.1)", "pydantic (<2)", "pytest-xdist", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<=2.9.3)", "ray[default] (>=2.5,<=2.9.3)", "ray[train] (==2.9.3)", "scikit-learn", "setuptools (<70.0.0)", "tensorflow", "torch (>=2.0.0,<2.1.0)", "xgboost", "xgboost-ray"] -reasoningengine = ["cloudpickle (>=3.0,<4.0)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.6.3,<3)"] +reasoningengine = ["cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.6.3,<3)"] tensorboard = ["tensorboard-plugin-profile (>=2.4.0,<3.0.0dev)", "tensorflow (>=2.3.0,<3.0.0dev)", "tensorflow (>=2.4.0,<3.0.0dev)", "werkzeug (>=2.0.0,<2.1.0dev)"] testing = ["bigframes", "cloudpickle (<3.0)", "docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0)", "fastapi (>=0.71.0,<=0.109.1)", "google-api-core (>=2.11,<3.0.0)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-cloud-logging (<4.0)", "google-vizier (>=0.1.6)", "grpcio-testing", "httpx (>=0.23.0,<0.25.0)", "immutabledict", "ipython", "kfp (>=2.6.0,<3.0.0)", "lit-nlp (==0.4.0)", "mlflow (>=1.27.0,<=2.1.1)", "nltk", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "pandas (>=1.0.0,<2.2.0)", "pyarrow (>=10.0.1)", "pyarrow (>=14.0.0)", "pyarrow (>=3.0.0,<8.0dev)", "pyarrow (>=6.0.1)", "pydantic (<2)", "pyfakefs", "pytest-asyncio", "pytest-xdist", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<=2.9.3)", "ray[default] (>=2.5,<=2.9.3)", "requests (>=2.28.1)", "requests-toolbelt (<1.0.0)", "scikit-learn", "sentencepiece (>=0.2.0)", "setuptools (<70.0.0)", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<3.0.0dev)", "tensorflow (==2.13.0)", "tensorflow (==2.16.1)", "tensorflow (>=2.3.0,<3.0.0dev)", "tensorflow (>=2.3.0,<3.0.0dev)", "tensorflow (>=2.4.0,<3.0.0dev)", "torch (>=2.0.0,<2.1.0)", "torch (>=2.2.0)", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<2.1.0dev)", "xgboost"] tokenization = ["sentencepiece (>=0.2.0)"] @@ -1756,25 +1742,6 @@ files = [ backports-strenum = {version = ">=1.3", markers = "python_version < \"3.11\""} colorama = ">=0.4" -[[package]] -name = "groq" -version = "0.9.0" -description = "The official Python library for the groq API" -optional = false -python-versions = ">=3.7" -files = [ - {file = "groq-0.9.0-py3-none-any.whl", hash = "sha256:d0e46f4ad645504672bb09c8100af3ced3a7db0d5119dc13e4aca535fc455874"}, - {file = "groq-0.9.0.tar.gz", hash = "sha256:130ed5e35d3acfaab46b9e7a078eeaebf91052f4a9d71f86f87fb319b5fec332"}, -] - -[package.dependencies] -anyio = ">=3.5.0,<5" -distro = ">=1.7.0,<2" -httpx = ">=0.23.0,<1" -pydantic = ">=1.9.0,<3" -sniffio = "*" -typing-extensions = ">=4.7,<5" - [[package]] name = "grpc-google-iam-v1" version = 
"0.13.1" @@ -2077,13 +2044,13 @@ files = [ [[package]] name = "huggingface-hub" -version = "0.24.0" +version = "0.24.3" description = "Client library to download and publish models, datasets and other repos on the huggingface.co hub" optional = false python-versions = ">=3.8.0" files = [ - {file = "huggingface_hub-0.24.0-py3-none-any.whl", hash = "sha256:7ad92edefb93d8145c061f6df8d99df2ff85f8379ba5fac8a95aca0642afa5d7"}, - {file = "huggingface_hub-0.24.0.tar.gz", hash = "sha256:6c7092736b577d89d57b3cdfea026f1b0dc2234ae783fa0d59caf1bf7d52dfa7"}, + {file = "huggingface_hub-0.24.3-py3-none-any.whl", hash = "sha256:69ecce486dd6cdad69937ba76779e893c224a670a9d947636c1d5cbd049e44d8"}, + {file = "huggingface_hub-0.24.3.tar.gz", hash = "sha256:bfdc05cc9b64a0e24e8614a44222698799183268f6b68be209aa2df70cff2cde"}, ] [package.dependencies] @@ -2161,22 +2128,22 @@ files = [ [[package]] name = "importlib-metadata" -version = "7.1.0" +version = "8.0.0" description = "Read metadata from Python packages" optional = false python-versions = ">=3.8" files = [ - {file = "importlib_metadata-7.1.0-py3-none-any.whl", hash = "sha256:30962b96c0c223483ed6cc7280e7f0199feb01a0e40cfae4d4450fc6fab1f570"}, - {file = "importlib_metadata-7.1.0.tar.gz", hash = "sha256:b78938b926ee8d5f020fc4772d487045805a55ddbad2ecf21c6d60938dc7fcd2"}, + {file = "importlib_metadata-8.0.0-py3-none-any.whl", hash = "sha256:15584cf2b1bf449d98ff8a6ff1abef57bf20f3ac6454f431736cd3e660921b2f"}, + {file = "importlib_metadata-8.0.0.tar.gz", hash = "sha256:188bd24e4c346d3f0a933f275c2fec67050326a856b9a359881d7c2a697e8812"}, ] [package.dependencies] zipp = ">=0.5" [package.extras] -docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] +doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] perf = ["ipython"] -testing = ["flufl.flake8", "importlib-resources (>=1.3)", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-perf (>=0.9.2)", "pytest-ruff (>=0.2.1)"] +test = ["flufl.flake8", "importlib-resources (>=1.3)", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy", "pytest-perf (>=0.9.2)", "pytest-ruff (>=0.2.1)"] [[package]] name = "importlib-resources" @@ -2456,19 +2423,19 @@ tests = ["aiohttp", "duckdb", "pandas (>=1.4)", "polars (>=0.19)", "pytest", "py [[package]] name = "langchain" -version = "0.2.10" +version = "0.2.11" description = "Building applications with LLMs through composability" optional = false python-versions = "<4.0,>=3.8.1" files = [ - {file = "langchain-0.2.10-py3-none-any.whl", hash = "sha256:b4fb58c7faf4f4999cfe3325474979a7121a1737dd101655a723a1d957ef0617"}, - {file = "langchain-0.2.10.tar.gz", hash = "sha256:1f861c1b59ac9c91b02bb0fa58d3adad1c1d0686636872b5b357bbce3ce41d06"}, + {file = "langchain-0.2.11-py3-none-any.whl", hash = "sha256:5a7a8b4918f3d3bebce9b4f23b92d050699e6f7fb97591e8941177cf07a260a2"}, + {file = "langchain-0.2.11.tar.gz", hash = "sha256:d7a9e4165f02dca0bd78addbc2319d5b9286b5d37c51d784124102b57e9fd297"}, ] [package.dependencies] aiohttp = ">=3.8.3,<4.0.0" async-timeout = {version = ">=4.0.0,<5.0.0", markers = "python_version < \"3.11\""} -langchain-core = ">=0.2.22,<0.3.0" +langchain-core = ">=0.2.23,<0.3.0" langchain-text-splitters = ">=0.2.0,<0.3.0" langsmith = 
">=0.1.17,<0.2.0" numpy = [ @@ -2504,20 +2471,20 @@ langchain-community = ["langchain-community (>=0.2.4)"] [[package]] name = "langchain-community" -version = "0.2.9" +version = "0.2.10" description = "Community contributed LangChain integrations." optional = false python-versions = "<4.0,>=3.8.1" files = [ - {file = "langchain_community-0.2.9-py3-none-any.whl", hash = "sha256:b51d3adf9346a1161c1098917585b9e303cf24e2f5c71f5d232a0504edada5f2"}, - {file = "langchain_community-0.2.9.tar.gz", hash = "sha256:1e7c180232916cbe35fe00509680dd1f805e32d7c87b5e80b3a9ec8754ecae37"}, + {file = "langchain_community-0.2.10-py3-none-any.whl", hash = "sha256:9f4d1b5ab7f0b0a704f538e26e50fce45a461da6d2bf6b7b636d24f22fbc088a"}, + {file = "langchain_community-0.2.10.tar.gz", hash = "sha256:3a0404bad4bd07d6f86affdb62fb3d080a456c66191754d586a409d9d6024d62"}, ] [package.dependencies] aiohttp = ">=3.8.3,<4.0.0" dataclasses-json = ">=0.5.7,<0.7" langchain = ">=0.2.9,<0.3.0" -langchain-core = ">=0.2.22,<0.3.0" +langchain-core = ">=0.2.23,<0.3.0" langsmith = ">=0.1.0,<0.2.0" numpy = [ {version = ">=1,<2", markers = "python_version < \"3.12\""}, @@ -2530,13 +2497,13 @@ tenacity = ">=8.1.0,<8.4.0 || >8.4.0,<9.0.0" [[package]] name = "langchain-core" -version = "0.2.22" +version = "0.2.24" description = "Building applications with LLMs through composability" optional = false python-versions = "<4.0,>=3.8.1" files = [ - {file = "langchain_core-0.2.22-py3-none-any.whl", hash = "sha256:7731a86440c0958b3186c003fb9b26b2d5a682a6344bda7bfb9174e2898f8b43"}, - {file = "langchain_core-0.2.22.tar.gz", hash = "sha256:582d6f929a43b830139444e4124123cd415331ad62f25757b1406252958cdcac"}, + {file = "langchain_core-0.2.24-py3-none-any.whl", hash = "sha256:9444fc082d21ef075d925590a684a73fe1f9688a3d90087580ec929751be55e7"}, + {file = "langchain_core-0.2.24.tar.gz", hash = "sha256:f2e3fa200b124e8c45d270da9bf836bed9c09532612c96ff3225e59b9a232f5a"}, ] [package.dependencies] @@ -2552,13 +2519,13 @@ tenacity = ">=8.1.0,<8.4.0 || >8.4.0,<9.0.0" [[package]] name = "langchain-experimental" -version = "0.0.62" +version = "0.0.63" description = "Building applications with LLMs through composability" optional = false python-versions = "<4.0,>=3.8.1" files = [ - {file = "langchain_experimental-0.0.62-py3-none-any.whl", hash = "sha256:9240f9e3490e819976f20a37863970036e7baacb7104b9eb6833d19ab6d518c9"}, - {file = "langchain_experimental-0.0.62.tar.gz", hash = "sha256:9737fbc8429d24457ea4d368e3c9ba9ed1cace0564fb5f1a96a3027a588bd0ac"}, + {file = "langchain_experimental-0.0.63-py3-none-any.whl", hash = "sha256:cb4ae7a685bb3c077d138b4533ed02e8df1f5f784333c3e52dcae8c80f031ca2"}, + {file = "langchain_experimental-0.0.63.tar.gz", hash = "sha256:fc894599bfac43445004a9ff60d9a28751426b2fea1979e4b2fa453c847850c4"}, ] [package.dependencies] @@ -2567,17 +2534,17 @@ langchain-core = ">=0.2.10,<0.3.0" [[package]] name = "langchain-openai" -version = "0.1.17" +version = "0.1.19" description = "An integration package connecting OpenAI and LangChain" optional = false python-versions = "<4.0,>=3.8.1" files = [ - {file = "langchain_openai-0.1.17-py3-none-any.whl", hash = "sha256:30bef5574ecbbbb91b8025b2dc5a1bd81fd62157d3ad1a35d820141f31c5b443"}, - {file = "langchain_openai-0.1.17.tar.gz", hash = "sha256:c5d70ddecdcb93e146f376bdbadbb6ec69de9ac0f402cd5b83de50b655ba85ee"}, + {file = "langchain_openai-0.1.19-py3-none-any.whl", hash = "sha256:a7a739f1469d54cd988865420e7fc21b50fb93727b2e6da5ad30273fc61ecf19"}, + {file = "langchain_openai-0.1.19.tar.gz", hash = 
"sha256:3bf342bb302d1444f4abafdf01c467dbd9b248497e1133808c4bae70396c79b3"}, ] [package.dependencies] -langchain-core = ">=0.2.20,<0.3.0" +langchain-core = ">=0.2.24,<0.3.0" openai = ">=1.32.0,<2.0.0" tiktoken = ">=0.7,<1" @@ -2773,23 +2740,20 @@ files = [ [[package]] name = "mem0ai" -version = "0.0.5" +version = "0.0.9" description = "Long-term memory for AI Agents" optional = false python-versions = "<4.0,>=3.8" files = [ - {file = "mem0ai-0.0.5-py3-none-any.whl", hash = "sha256:6f6e5356fd522adf0510322cd581476ea456fd7ccefca11b5ac050e9a6f00f36"}, - {file = "mem0ai-0.0.5.tar.gz", hash = "sha256:f2ac35d15e4e620becb8d06b8ebeb1ffa85fac0b7cb2d3138056babec48dd5dd"}, + {file = "mem0ai-0.0.9-py3-none-any.whl", hash = "sha256:d4de435729af4fd3d597d022ffb2af89a0630d6c3b4769792bbe27d2ce816858"}, + {file = "mem0ai-0.0.9.tar.gz", hash = "sha256:e4374d5d04aa3f543cd3325f700e4b62f5358ae1c6fa5c44b2ff790c10c4e5f1"}, ] [package.dependencies] -boto3 = ">=1.34.144,<2.0.0" -groq = ">=0.9.0,<0.10.0" openai = ">=1.33.0,<2.0.0" posthog = ">=3.5.0,<4.0.0" pydantic = ">=2.7.3,<3.0.0" qdrant-client = ">=1.9.1,<2.0.0" -together = ">=1.2.1,<2.0.0" [[package]] name = "mergedeep" @@ -3338,13 +3302,13 @@ sympy = "*" [[package]] name = "openai" -version = "1.37.0" +version = "1.37.1" description = "The official Python library for the openai API" optional = false python-versions = ">=3.7.1" files = [ - {file = "openai-1.37.0-py3-none-any.whl", hash = "sha256:a903245c0ecf622f2830024acdaa78683c70abb8e9d37a497b851670864c9f73"}, - {file = "openai-1.37.0.tar.gz", hash = "sha256:dc8197fc40ab9d431777b6620d962cc49f4544ffc3011f03ce0a805e6eb54adb"}, + {file = "openai-1.37.1-py3-none-any.whl", hash = "sha256:9a6adda0d6ae8fce02d235c5671c399cfa40d6a281b3628914c7ebf244888ee3"}, + {file = "openai-1.37.1.tar.gz", hash = "sha256:faf87206785a6b5d9e34555d6a3242482a6852bc802e453e2a891f68ee04ce55"}, ] [package.dependencies] @@ -3361,42 +3325,42 @@ datalib = ["numpy (>=1)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)"] [[package]] name = "opentelemetry-api" -version = "1.25.0" +version = "1.26.0" description = "OpenTelemetry Python API" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_api-1.25.0-py3-none-any.whl", hash = "sha256:757fa1aa020a0f8fa139f8959e53dec2051cc26b832e76fa839a6d76ecefd737"}, - {file = "opentelemetry_api-1.25.0.tar.gz", hash = "sha256:77c4985f62f2614e42ce77ee4c9da5fa5f0bc1e1821085e9a47533a9323ae869"}, + {file = "opentelemetry_api-1.26.0-py3-none-any.whl", hash = "sha256:7d7ea33adf2ceda2dd680b18b1677e4152000b37ca76e679da71ff103b943064"}, + {file = "opentelemetry_api-1.26.0.tar.gz", hash = "sha256:2bd639e4bed5b18486fef0b5a520aaffde5a18fc225e808a1ac4df363f43a1ce"}, ] [package.dependencies] deprecated = ">=1.2.6" -importlib-metadata = ">=6.0,<=7.1" +importlib-metadata = ">=6.0,<=8.0.0" [[package]] name = "opentelemetry-exporter-otlp-proto-common" -version = "1.25.0" +version = "1.26.0" description = "OpenTelemetry Protobuf encoding" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_exporter_otlp_proto_common-1.25.0-py3-none-any.whl", hash = "sha256:15637b7d580c2675f70246563363775b4e6de947871e01d0f4e3881d1848d693"}, - {file = "opentelemetry_exporter_otlp_proto_common-1.25.0.tar.gz", hash = "sha256:c93f4e30da4eee02bacd1e004eb82ce4da143a2f8e15b987a9f603e0a85407d3"}, + {file = "opentelemetry_exporter_otlp_proto_common-1.26.0-py3-none-any.whl", hash = "sha256:ee4d8f8891a1b9c372abf8d109409e5b81947cf66423fd998e56880057afbc71"}, + {file = 
"opentelemetry_exporter_otlp_proto_common-1.26.0.tar.gz", hash = "sha256:bdbe50e2e22a1c71acaa0c8ba6efaadd58882e5a5978737a44a4c4b10d304c92"}, ] [package.dependencies] -opentelemetry-proto = "1.25.0" +opentelemetry-proto = "1.26.0" [[package]] name = "opentelemetry-exporter-otlp-proto-grpc" -version = "1.25.0" +version = "1.26.0" description = "OpenTelemetry Collector Protobuf over gRPC Exporter" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_exporter_otlp_proto_grpc-1.25.0-py3-none-any.whl", hash = "sha256:3131028f0c0a155a64c430ca600fd658e8e37043cb13209f0109db5c1a3e4eb4"}, - {file = "opentelemetry_exporter_otlp_proto_grpc-1.25.0.tar.gz", hash = "sha256:c0b1661415acec5af87625587efa1ccab68b873745ca0ee96b69bb1042087eac"}, + {file = "opentelemetry_exporter_otlp_proto_grpc-1.26.0-py3-none-any.whl", hash = "sha256:e2be5eff72ebcb010675b818e8d7c2e7d61ec451755b8de67a140bc49b9b0280"}, + {file = "opentelemetry_exporter_otlp_proto_grpc-1.26.0.tar.gz", hash = "sha256:a65b67a9a6b06ba1ec406114568e21afe88c1cdb29c464f2507d529eb906d8ae"}, ] [package.dependencies] @@ -3404,39 +3368,39 @@ deprecated = ">=1.2.6" googleapis-common-protos = ">=1.52,<2.0" grpcio = ">=1.0.0,<2.0.0" opentelemetry-api = ">=1.15,<2.0" -opentelemetry-exporter-otlp-proto-common = "1.25.0" -opentelemetry-proto = "1.25.0" -opentelemetry-sdk = ">=1.25.0,<1.26.0" +opentelemetry-exporter-otlp-proto-common = "1.26.0" +opentelemetry-proto = "1.26.0" +opentelemetry-sdk = ">=1.26.0,<1.27.0" [[package]] name = "opentelemetry-exporter-otlp-proto-http" -version = "1.25.0" +version = "1.26.0" description = "OpenTelemetry Collector Protobuf over HTTP Exporter" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_exporter_otlp_proto_http-1.25.0-py3-none-any.whl", hash = "sha256:2eca686ee11b27acd28198b3ea5e5863a53d1266b91cda47c839d95d5e0541a6"}, - {file = "opentelemetry_exporter_otlp_proto_http-1.25.0.tar.gz", hash = "sha256:9f8723859e37c75183ea7afa73a3542f01d0fd274a5b97487ea24cb683d7d684"}, + {file = "opentelemetry_exporter_otlp_proto_http-1.26.0-py3-none-any.whl", hash = "sha256:ee72a87c48ec977421b02f16c52ea8d884122470e0be573905237b540f4ee562"}, + {file = "opentelemetry_exporter_otlp_proto_http-1.26.0.tar.gz", hash = "sha256:5801ebbcf7b527377883e6cbbdda35ee712dc55114fff1e93dfee210be56c908"}, ] [package.dependencies] deprecated = ">=1.2.6" googleapis-common-protos = ">=1.52,<2.0" opentelemetry-api = ">=1.15,<2.0" -opentelemetry-exporter-otlp-proto-common = "1.25.0" -opentelemetry-proto = "1.25.0" -opentelemetry-sdk = ">=1.25.0,<1.26.0" +opentelemetry-exporter-otlp-proto-common = "1.26.0" +opentelemetry-proto = "1.26.0" +opentelemetry-sdk = ">=1.26.0,<1.27.0" requests = ">=2.7,<3.0" [[package]] name = "opentelemetry-instrumentation" -version = "0.46b0" +version = "0.47b0" description = "Instrumentation Tools & Auto Instrumentation for OpenTelemetry Python" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_instrumentation-0.46b0-py3-none-any.whl", hash = "sha256:89cd721b9c18c014ca848ccd11181e6b3fd3f6c7669e35d59c48dc527408c18b"}, - {file = "opentelemetry_instrumentation-0.46b0.tar.gz", hash = "sha256:974e0888fb2a1e01c38fbacc9483d024bb1132aad92d6d24e2e5543887a7adda"}, + {file = "opentelemetry_instrumentation-0.47b0-py3-none-any.whl", hash = "sha256:88974ee52b1db08fc298334b51c19d47e53099c33740e48c4f084bd1afd052d5"}, + {file = "opentelemetry_instrumentation-0.47b0.tar.gz", hash = "sha256:96f9885e450c35e3f16a4f33145f2ebf620aea910c9fd74a392bbc0f807a350f"}, ] 
[package.dependencies] @@ -3446,55 +3410,55 @@ wrapt = ">=1.0.0,<2.0.0" [[package]] name = "opentelemetry-instrumentation-asgi" -version = "0.46b0" +version = "0.47b0" description = "ASGI instrumentation for OpenTelemetry" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_instrumentation_asgi-0.46b0-py3-none-any.whl", hash = "sha256:f13c55c852689573057837a9500aeeffc010c4ba59933c322e8f866573374759"}, - {file = "opentelemetry_instrumentation_asgi-0.46b0.tar.gz", hash = "sha256:02559f30cf4b7e2a737ab17eb52aa0779bcf4cc06573064f3e2cb4dcc7d3040a"}, + {file = "opentelemetry_instrumentation_asgi-0.47b0-py3-none-any.whl", hash = "sha256:b798dc4957b3edc9dfecb47a4c05809036a4b762234c5071212fda39ead80ade"}, + {file = "opentelemetry_instrumentation_asgi-0.47b0.tar.gz", hash = "sha256:e78b7822c1bca0511e5e9610ec484b8994a81670375e570c76f06f69af7c506a"}, ] [package.dependencies] asgiref = ">=3.0,<4.0" opentelemetry-api = ">=1.12,<2.0" -opentelemetry-instrumentation = "0.46b0" -opentelemetry-semantic-conventions = "0.46b0" -opentelemetry-util-http = "0.46b0" +opentelemetry-instrumentation = "0.47b0" +opentelemetry-semantic-conventions = "0.47b0" +opentelemetry-util-http = "0.47b0" [package.extras] instruments = ["asgiref (>=3.0,<4.0)"] [[package]] name = "opentelemetry-instrumentation-fastapi" -version = "0.46b0" +version = "0.47b0" description = "OpenTelemetry FastAPI Instrumentation" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_instrumentation_fastapi-0.46b0-py3-none-any.whl", hash = "sha256:e0f5d150c6c36833dd011f0e6ef5ede6d7406c1aed0c7c98b2d3b38a018d1b33"}, - {file = "opentelemetry_instrumentation_fastapi-0.46b0.tar.gz", hash = "sha256:928a883a36fc89f9702f15edce43d1a7104da93d740281e32d50ffd03dbb4365"}, + {file = "opentelemetry_instrumentation_fastapi-0.47b0-py3-none-any.whl", hash = "sha256:5ac28dd401160b02e4f544a85a9e4f61a8cbe5b077ea0379d411615376a2bd21"}, + {file = "opentelemetry_instrumentation_fastapi-0.47b0.tar.gz", hash = "sha256:0c7c10b5d971e99a420678ffd16c5b1ea4f0db3b31b62faf305fbb03b4ebee36"}, ] [package.dependencies] opentelemetry-api = ">=1.12,<2.0" -opentelemetry-instrumentation = "0.46b0" -opentelemetry-instrumentation-asgi = "0.46b0" -opentelemetry-semantic-conventions = "0.46b0" -opentelemetry-util-http = "0.46b0" +opentelemetry-instrumentation = "0.47b0" +opentelemetry-instrumentation-asgi = "0.47b0" +opentelemetry-semantic-conventions = "0.47b0" +opentelemetry-util-http = "0.47b0" [package.extras] -instruments = ["fastapi (>=0.58,<1.0)"] +instruments = ["fastapi (>=0.58,<1.0)", "fastapi-slim (>=0.111.0,<0.112.0)"] [[package]] name = "opentelemetry-proto" -version = "1.25.0" +version = "1.26.0" description = "OpenTelemetry Python Proto" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_proto-1.25.0-py3-none-any.whl", hash = "sha256:f07e3341c78d835d9b86665903b199893befa5e98866f63d22b00d0b7ca4972f"}, - {file = "opentelemetry_proto-1.25.0.tar.gz", hash = "sha256:35b6ef9dc4a9f7853ecc5006738ad40443701e52c26099e197895cbda8b815a3"}, + {file = "opentelemetry_proto-1.26.0-py3-none-any.whl", hash = "sha256:6c4d7b4d4d9c88543bcf8c28ae3f8f0448a753dc291c18c5390444c90b76a725"}, + {file = "opentelemetry_proto-1.26.0.tar.gz", hash = "sha256:c5c18796c0cab3751fc3b98dee53855835e90c0422924b484432ac852d93dc1e"}, ] [package.dependencies] @@ -3502,43 +3466,44 @@ protobuf = ">=3.19,<5.0" [[package]] name = "opentelemetry-sdk" -version = "1.25.0" +version = "1.26.0" description = "OpenTelemetry Python SDK" optional = false 
python-versions = ">=3.8" files = [ - {file = "opentelemetry_sdk-1.25.0-py3-none-any.whl", hash = "sha256:d97ff7ec4b351692e9d5a15af570c693b8715ad78b8aafbec5c7100fe966b4c9"}, - {file = "opentelemetry_sdk-1.25.0.tar.gz", hash = "sha256:ce7fc319c57707ef5bf8b74fb9f8ebdb8bfafbe11898410e0d2a761d08a98ec7"}, + {file = "opentelemetry_sdk-1.26.0-py3-none-any.whl", hash = "sha256:feb5056a84a88670c041ea0ded9921fca559efec03905dddeb3885525e0af897"}, + {file = "opentelemetry_sdk-1.26.0.tar.gz", hash = "sha256:c90d2868f8805619535c05562d699e2f4fb1f00dbd55a86dcefca4da6fa02f85"}, ] [package.dependencies] -opentelemetry-api = "1.25.0" -opentelemetry-semantic-conventions = "0.46b0" +opentelemetry-api = "1.26.0" +opentelemetry-semantic-conventions = "0.47b0" typing-extensions = ">=3.7.4" [[package]] name = "opentelemetry-semantic-conventions" -version = "0.46b0" +version = "0.47b0" description = "OpenTelemetry Semantic Conventions" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_semantic_conventions-0.46b0-py3-none-any.whl", hash = "sha256:6daef4ef9fa51d51855d9f8e0ccd3a1bd59e0e545abe99ac6203804e36ab3e07"}, - {file = "opentelemetry_semantic_conventions-0.46b0.tar.gz", hash = "sha256:fbc982ecbb6a6e90869b15c1673be90bd18c8a56ff1cffc0864e38e2edffaefa"}, + {file = "opentelemetry_semantic_conventions-0.47b0-py3-none-any.whl", hash = "sha256:4ff9d595b85a59c1c1413f02bba320ce7ea6bf9e2ead2b0913c4395c7bbc1063"}, + {file = "opentelemetry_semantic_conventions-0.47b0.tar.gz", hash = "sha256:a8d57999bbe3495ffd4d510de26a97dadc1dace53e0275001b2c1b2f67992a7e"}, ] [package.dependencies] -opentelemetry-api = "1.25.0" +deprecated = ">=1.2.6" +opentelemetry-api = "1.26.0" [[package]] name = "opentelemetry-util-http" -version = "0.46b0" +version = "0.47b0" description = "Web util for OpenTelemetry" optional = false python-versions = ">=3.8" files = [ - {file = "opentelemetry_util_http-0.46b0-py3-none-any.whl", hash = "sha256:8dc1949ce63caef08db84ae977fdc1848fe6dc38e6bbaad0ae3e6ecd0d451629"}, - {file = "opentelemetry_util_http-0.46b0.tar.gz", hash = "sha256:03b6e222642f9c7eae58d9132343e045b50aca9761fcb53709bd2b663571fdf6"}, + {file = "opentelemetry_util_http-0.47b0-py3-none-any.whl", hash = "sha256:3d3215e09c4a723b12da6d0233a31395aeb2bb33a64d7b15a1500690ba250f19"}, + {file = "opentelemetry_util_http-0.47b0.tar.gz", hash = "sha256:352a07664c18eef827eb8ddcbd64c64a7284a39dd1655e2f16f577eb046ccb32"}, ] [[package]] @@ -3917,13 +3882,13 @@ test = ["coverage", "flake8", "freezegun (==0.3.15)", "mock (>=2.0.0)", "pylint" [[package]] name = "pre-commit" -version = "3.7.1" +version = "3.8.0" description = "A framework for managing and maintaining multi-language pre-commit hooks." 
optional = false python-versions = ">=3.9" files = [ - {file = "pre_commit-3.7.1-py2.py3-none-any.whl", hash = "sha256:fae36fd1d7ad7d6a5a1c0b0d5adb2ed1a3bda5a21bf6c3e5372073d7a11cd4c5"}, - {file = "pre_commit-3.7.1.tar.gz", hash = "sha256:8ca3ad567bc78a4972a3f1a477e94a79d4597e8140a6e0b651c5e33899c3654a"}, + {file = "pre_commit-3.8.0-py2.py3-none-any.whl", hash = "sha256:9a90a53bf82fdd8778d58085faf8d83df56e40dfe18f45b19446e26bf1b3a63f"}, + {file = "pre_commit-3.8.0.tar.gz", hash = "sha256:8bb6494d4a20423842e198980c9ecf9f96607a07ea29549e180eef9ae80fe7af"}, ] [package.dependencies] @@ -3952,22 +3917,22 @@ testing = ["google-api-core (>=1.31.5)"] [[package]] name = "protobuf" -version = "4.25.3" +version = "4.25.4" description = "" optional = false python-versions = ">=3.8" files = [ - {file = "protobuf-4.25.3-cp310-abi3-win32.whl", hash = "sha256:d4198877797a83cbfe9bffa3803602bbe1625dc30d8a097365dbc762e5790faa"}, - {file = "protobuf-4.25.3-cp310-abi3-win_amd64.whl", hash = "sha256:209ba4cc916bab46f64e56b85b090607a676f66b473e6b762e6f1d9d591eb2e8"}, - {file = "protobuf-4.25.3-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:f1279ab38ecbfae7e456a108c5c0681e4956d5b1090027c1de0f934dfdb4b35c"}, - {file = "protobuf-4.25.3-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:e7cb0ae90dd83727f0c0718634ed56837bfeeee29a5f82a7514c03ee1364c019"}, - {file = "protobuf-4.25.3-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:7c8daa26095f82482307bc717364e7c13f4f1c99659be82890dcfc215194554d"}, - {file = "protobuf-4.25.3-cp38-cp38-win32.whl", hash = "sha256:f4f118245c4a087776e0a8408be33cf09f6c547442c00395fbfb116fac2f8ac2"}, - {file = "protobuf-4.25.3-cp38-cp38-win_amd64.whl", hash = "sha256:c053062984e61144385022e53678fbded7aea14ebb3e0305ae3592fb219ccfa4"}, - {file = "protobuf-4.25.3-cp39-cp39-win32.whl", hash = "sha256:19b270aeaa0099f16d3ca02628546b8baefe2955bbe23224aaf856134eccf1e4"}, - {file = "protobuf-4.25.3-cp39-cp39-win_amd64.whl", hash = "sha256:e3c97a1555fd6388f857770ff8b9703083de6bf1f9274a002a332d65fbb56c8c"}, - {file = "protobuf-4.25.3-py3-none-any.whl", hash = "sha256:f0700d54bcf45424477e46a9f0944155b46fb0639d69728739c0e47bab83f2b9"}, - {file = "protobuf-4.25.3.tar.gz", hash = "sha256:25b5d0b42fd000320bd7830b349e3b696435f3b329810427a6bcce6a5492cc5c"}, + {file = "protobuf-4.25.4-cp310-abi3-win32.whl", hash = "sha256:db9fd45183e1a67722cafa5c1da3e85c6492a5383f127c86c4c4aa4845867dc4"}, + {file = "protobuf-4.25.4-cp310-abi3-win_amd64.whl", hash = "sha256:ba3d8504116a921af46499471c63a85260c1a5fc23333154a427a310e015d26d"}, + {file = "protobuf-4.25.4-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:eecd41bfc0e4b1bd3fa7909ed93dd14dd5567b98c941d6c1ad08fdcab3d6884b"}, + {file = "protobuf-4.25.4-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:4c8a70fdcb995dcf6c8966cfa3a29101916f7225e9afe3ced4395359955d3835"}, + {file = "protobuf-4.25.4-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:3319e073562e2515c6ddc643eb92ce20809f5d8f10fead3332f71c63be6a7040"}, + {file = "protobuf-4.25.4-cp38-cp38-win32.whl", hash = "sha256:7e372cbbda66a63ebca18f8ffaa6948455dfecc4e9c1029312f6c2edcd86c4e1"}, + {file = "protobuf-4.25.4-cp38-cp38-win_amd64.whl", hash = "sha256:051e97ce9fa6067a4546e75cb14f90cf0232dcb3e3d508c448b8d0e4265b61c1"}, + {file = "protobuf-4.25.4-cp39-cp39-win32.whl", hash = "sha256:90bf6fd378494eb698805bbbe7afe6c5d12c8e17fca817a646cd6a1818c696ca"}, + {file = "protobuf-4.25.4-cp39-cp39-win_amd64.whl", hash = "sha256:ac79a48d6b99dfed2729ccccee547b34a1d3d63289c71cef056653a846a2240f"}, + {file = 
"protobuf-4.25.4-py3-none-any.whl", hash = "sha256:bfbebc1c8e4793cfd58589acfb8a1026be0003e852b9da7db5a4285bde996978"}, + {file = "protobuf-4.25.4.tar.gz", hash = "sha256:0dc4a62cc4052a036ee2204d26fe4d835c62827c855c8a03f29fe6da146b380d"}, ] [[package]] @@ -4317,13 +4282,13 @@ torch = ["torch"] [[package]] name = "pymdown-extensions" -version = "10.8.1" +version = "10.9" description = "Extension pack for Python Markdown." optional = false python-versions = ">=3.8" files = [ - {file = "pymdown_extensions-10.8.1-py3-none-any.whl", hash = "sha256:f938326115884f48c6059c67377c46cf631c733ef3629b6eed1349989d1b30cb"}, - {file = "pymdown_extensions-10.8.1.tar.gz", hash = "sha256:3ab1db5c9e21728dabf75192d71471f8e50f216627e9a1fa9535ecb0231b9940"}, + {file = "pymdown_extensions-10.9-py3-none-any.whl", hash = "sha256:d323f7e90d83c86113ee78f3fe62fc9dee5f56b54d912660703ea1816fed5626"}, + {file = "pymdown_extensions-10.9.tar.gz", hash = "sha256:6ff740bcd99ec4172a938970d42b96128bdc9d4b9bcad72494f29921dc69b753"}, ] [package.dependencies] @@ -4388,13 +4353,13 @@ files = [ [[package]] name = "pyright" -version = "1.1.372" +version = "1.1.373" description = "Command line wrapper for pyright" optional = false python-versions = ">=3.7" files = [ - {file = "pyright-1.1.372-py3-none-any.whl", hash = "sha256:25b15fb8967740f0949fd35b963777187f0a0404c0bd753cc966ec139f3eaa0b"}, - {file = "pyright-1.1.372.tar.gz", hash = "sha256:a9f5e0daa955daaa17e3d1ef76d3623e75f8afd5e37b437d3ff84d5b38c15420"}, + {file = "pyright-1.1.373-py3-none-any.whl", hash = "sha256:b805413227f2c209f27b14b55da27fe5e9fb84129c9f1eb27708a5d12f6f000e"}, + {file = "pyright-1.1.373.tar.gz", hash = "sha256:f41bcfc8b9d1802b09921a394d6ae1ce19694957b628bc657629688daf8a83ff"}, ] [package.dependencies] @@ -4428,13 +4393,13 @@ files = [ [[package]] name = "pytest" -version = "8.3.1" +version = "8.3.2" description = "pytest: simple powerful testing with Python" optional = false python-versions = ">=3.8" files = [ - {file = "pytest-8.3.1-py3-none-any.whl", hash = "sha256:e9600ccf4f563976e2c99fa02c7624ab938296551f280835ee6516df8bc4ae8c"}, - {file = "pytest-8.3.1.tar.gz", hash = "sha256:7e8e5c5abd6e93cb1cc151f23e57adc31fcf8cfd2a3ff2da63e23f732de35db6"}, + {file = "pytest-8.3.2-py3-none-any.whl", hash = "sha256:4ba08f9ae7dcf84ded419494d229b48d0903ea6407b030eaec46df5e6a73bba5"}, + {file = "pytest-8.3.2.tar.gz", hash = "sha256:c132345d12ce551242c87269de812483f5bcc87cdbb4722e48487ba194f9fdce"}, ] [package.dependencies] @@ -4448,6 +4413,24 @@ tomli = {version = ">=1", markers = "python_version < \"3.11\""} [package.extras] dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "pygments (>=2.7.2)", "requests", "setuptools", "xmlschema"] +[[package]] +name = "pytest-asyncio" +version = "0.23.8" +description = "Pytest support for asyncio" +optional = false +python-versions = ">=3.8" +files = [ + {file = "pytest_asyncio-0.23.8-py3-none-any.whl", hash = "sha256:50265d892689a5faefb84df80819d1ecef566eb3549cf915dfb33569359d1ce2"}, + {file = "pytest_asyncio-0.23.8.tar.gz", hash = "sha256:759b10b33a6dc61cce40a8bd5205e302978bbbcc00e279a8b61d9a6a3c82e4d3"}, +] + +[package.dependencies] +pytest = ">=7.0.0,<9" + +[package.extras] +docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1.0)"] +testing = ["coverage (>=6.2)", "hypothesis (>=5.7.1)"] + [[package]] name = "pytest-vcr" version = "1.0.2" @@ -4883,22 +4866,22 @@ files = [ [[package]] name = "selenium" -version = "4.22.0" +version = "4.23.1" description = "Official Python bindings for Selenium WebDriver" optional = 
false python-versions = ">=3.8" files = [ - {file = "selenium-4.22.0-py3-none-any.whl", hash = "sha256:e424991196e9857e19bf04fe5c1c0a4aac076794ff5e74615b1124e729d93104"}, - {file = "selenium-4.22.0.tar.gz", hash = "sha256:903c8c9d61b3eea6fcc9809dc7d9377e04e2ac87709876542cc8f863e482c4ce"}, + {file = "selenium-4.23.1-py3-none-any.whl", hash = "sha256:3a8d9f23dc636bd3840dd56f00c2739e32ec0c1e34a821dd553e15babef24477"}, + {file = "selenium-4.23.1.tar.gz", hash = "sha256:128d099e66284437e7128d2279176ec7a06e6ec7426e167f5d34987166bd8f46"}, ] [package.dependencies] certifi = ">=2021.10.8" trio = ">=0.17,<1.0" trio-websocket = ">=0.9,<1.0" -typing_extensions = ">=4.9.0" +typing_extensions = ">=4.9,<5.0" urllib3 = {version = ">=1.26,<3", extras = ["socks"]} -websocket-client = ">=1.8.0" +websocket-client = ">=1.8,<2.0" [[package]] name = "semver" @@ -4913,13 +4896,13 @@ files = [ [[package]] name = "setuptools" -version = "71.1.0" +version = "72.1.0" description = "Easily download, build, install, upgrade, and uninstall Python packages" optional = false python-versions = ">=3.8" files = [ - {file = "setuptools-71.1.0-py3-none-any.whl", hash = "sha256:33874fdc59b3188304b2e7c80d9029097ea31627180896fb549c578ceb8a0855"}, - {file = "setuptools-71.1.0.tar.gz", hash = "sha256:032d42ee9fb536e33087fb66cac5f840eb9391ed05637b3f2a76a7c8fb477936"}, + {file = "setuptools-72.1.0-py3-none-any.whl", hash = "sha256:5a03e1860cf56bb6ef48ce186b0e557fdba433237481a9a625176c2831be15d1"}, + {file = "setuptools-72.1.0.tar.gz", hash = "sha256:8d243eff56d095e5817f796ede6ae32941278f542e0f941867cc05ae52b162ec"}, ] [package.extras] @@ -5268,34 +5251,6 @@ webencodings = ">=0.4" doc = ["sphinx", "sphinx_rtd_theme"] test = ["pytest", "ruff"] -[[package]] -name = "together" -version = "1.2.3" -description = "Python client for Together's Cloud Platform!" 
-optional = false -python-versions = "<4.0,>=3.8" -files = [ - {file = "together-1.2.3-py3-none-any.whl", hash = "sha256:bbafb4b8340e0f7e0ddb11ad447eb3467c591090910d0291cfbf74b47af045c1"}, - {file = "together-1.2.3.tar.gz", hash = "sha256:4ea7626a9581d16fbf293e3eaf91557c43dea044627cf6dbe458bbf43408a6b2"}, -] - -[package.dependencies] -aiohttp = ">=3.9.3,<4.0.0" -click = ">=8.1.7,<9.0.0" -eval-type-backport = ">=0.1.3,<0.3.0" -filelock = ">=3.13.1,<4.0.0" -numpy = [ - {version = ">=1.23.5", markers = "python_version < \"3.12\""}, - {version = ">=1.26.0", markers = "python_version >= \"3.12\""}, -] -pillow = ">=10.3.0,<11.0.0" -pyarrow = ">=10.0.1" -pydantic = ">=2.6.3,<3.0.0" -requests = ">=2.31.0,<3.0.0" -tabulate = ">=0.9.0,<0.10.0" -tqdm = ">=4.66.2,<5.0.0" -typer = ">=0.9,<0.13" - [[package]] name = "tokenizers" version = "0.19.1" @@ -6144,4 +6099,4 @@ tools = ["crewai-tools"] [metadata] lock-version = "2.0" python-versions = ">=3.10,<=3.13" -content-hash = "f5ad9babb3c57c405e39232020e8cbfaaeb5c315c2e7c5bb8fdf66792f260343" +content-hash = "8df022f5ec0997c0a0f5710476139d9117c1057889c158e958f2c8efd22a4756" diff --git a/pyproject.toml b/pyproject.toml index a174fc6692..c556d7b51e 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -52,6 +52,7 @@ crewai-tools = "^0.4.26" pytest = "^8.0.0" pytest-vcr = "^1.0.2" python-dotenv = "1.0.0" +pytest-asyncio = "^0.23.7" [tool.poetry.scripts] crewai = "crewai.cli.cli:crewai" @@ -59,7 +60,7 @@ crewai = "crewai.cli.cli:crewai" [tool.mypy] ignore_missing_imports = true disable_error_code = 'import-untyped' -exclude = ["cli/templates/main.py", "cli/templates/crew.py"] +exclude = ["cli/templates"] [build-system] requires = ["poetry-core"] diff --git a/src/crewai/__init__.py b/src/crewai/__init__.py index c4091ec546..8bc4613421 100644 --- a/src/crewai/__init__.py +++ b/src/crewai/__init__.py @@ -1,6 +1,7 @@ from crewai.agent import Agent from crewai.crew import Crew +from crewai.pipeline import Pipeline from crewai.process import Process from crewai.task import Task -__all__ = ["Agent", "Crew", "Process", "Task"] +__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline"] diff --git a/src/crewai/agents/agent_builder/utilities/base_token_process.py b/src/crewai/agents/agent_builder/utilities/base_token_process.py index ce0b446d3b..e971d018e7 100644 --- a/src/crewai/agents/agent_builder/utilities/base_token_process.py +++ b/src/crewai/agents/agent_builder/utilities/base_token_process.py @@ -1,4 +1,4 @@ -from typing import Any, Dict +from crewai.types.usage_metrics import UsageMetrics class TokenProcess: @@ -18,10 +18,10 @@ def sum_completion_tokens(self, tokens: int): def sum_successful_requests(self, requests: int): self.successful_requests = self.successful_requests + requests - def get_summary(self) -> Dict[str, Any]: - return { - "total_tokens": self.total_tokens, - "prompt_tokens": self.prompt_tokens, - "completion_tokens": self.completion_tokens, - "successful_requests": self.successful_requests, - } + def get_summary(self) -> UsageMetrics: + return UsageMetrics( + total_tokens=self.total_tokens, + prompt_tokens=self.prompt_tokens, + completion_tokens=self.completion_tokens, + successful_requests=self.successful_requests, + ) diff --git a/src/crewai/agents/executor.py b/src/crewai/agents/executor.py index 8a1ea4f60c..532ceca253 100644 --- a/src/crewai/agents/executor.py +++ b/src/crewai/agents/executor.py @@ -69,7 +69,7 @@ def _call( ) intermediate_steps: List[Tuple[AgentAction, str]] = [] # Allowing human input given task setting - if 
self.task.human_input: + if self.task and self.task.human_input: self.should_ask_for_human_input = True # Let's start tracking the number of iterations and time elapsed diff --git a/src/crewai/cli/cli.py b/src/crewai/cli/cli.py index 5ae9feb03c..9d748715a7 100644 --- a/src/crewai/cli/cli.py +++ b/src/crewai/cli/cli.py @@ -1,11 +1,12 @@ import click import pkg_resources +from crewai.cli.create_crew import create_crew +from crewai.cli.create_pipeline import create_pipeline from crewai.memory.storage.kickoff_task_outputs_storage import ( KickoffTaskOutputsSQLiteStorage, ) -from .create_crew import create_crew from .evaluate_crew import evaluate_crew from .replay_from_task import replay_task_command from .reset_memories_command import reset_memories_command @@ -19,10 +20,19 @@ def crewai(): @crewai.command() -@click.argument("project_name") -def create(project_name): - """Create a new crew.""" - create_crew(project_name) +@click.argument("type", type=click.Choice(["crew", "pipeline"])) +@click.argument("name") +@click.option( + "--router", is_flag=True, help="Create a pipeline with router functionality" +) +def create(type, name, router): + """Create a new crew or pipeline.""" + if type == "crew": + create_crew(name) + elif type == "pipeline": + create_pipeline(name, router) + else: + click.secho("Error: Invalid type. Must be 'crew' or 'pipeline'.", fg="red") @crewai.command() diff --git a/src/crewai/cli/create_crew.py b/src/crewai/cli/create_crew.py index c44d94c34c..510d4f4317 100644 --- a/src/crewai/cli/create_crew.py +++ b/src/crewai/cli/create_crew.py @@ -1,25 +1,35 @@ -import os from pathlib import Path import click +from crewai.cli.utils import copy_template -def create_crew(name): + +def create_crew(name, parent_folder=None): """Create a new crew.""" folder_name = name.replace(" ", "_").replace("-", "_").lower() class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "") - click.secho(f"Creating folder {folder_name}...", fg="green", bold=True) - - if not os.path.exists(folder_name): - os.mkdir(folder_name) - os.mkdir(folder_name + "/tests") - os.mkdir(folder_name + "/src") - os.mkdir(folder_name + f"/src/{folder_name}") - os.mkdir(folder_name + f"/src/{folder_name}/tools") - os.mkdir(folder_name + f"/src/{folder_name}/config") - with open(folder_name + "/.env", "w") as file: - file.write("OPENAI_API_KEY=YOUR_API_KEY") + if parent_folder: + folder_path = Path(parent_folder) / folder_name + else: + folder_path = Path(folder_name) + + click.secho( + f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...", + fg="green", + bold=True, + ) + + if not folder_path.exists(): + folder_path.mkdir(parents=True) + (folder_path / "tests").mkdir(exist_ok=True) + if not parent_folder: + (folder_path / "src" / folder_name).mkdir(parents=True) + (folder_path / "src" / folder_name / "tools").mkdir(parents=True) + (folder_path / "src" / folder_name / "config").mkdir(parents=True) + with open(folder_path / ".env", "w") as file: + file.write("OPENAI_API_KEY=YOUR_API_KEY") else: click.secho( f"\tFolder {folder_name} already exists. 
Please choose a different name.", @@ -28,53 +38,34 @@ def create_crew(name): return package_dir = Path(__file__).parent - templates_dir = package_dir / "templates" + templates_dir = package_dir / "templates" / "crew" # List of template files to copy - root_template_files = [ - ".gitignore", - "pyproject.toml", - "README.md", - ] + root_template_files = ( + [".gitignore", "pyproject.toml", "README.md"] if not parent_folder else [] + ) tools_template_files = ["tools/custom_tool.py", "tools/__init__.py"] config_template_files = ["config/agents.yaml", "config/tasks.yaml"] - src_template_files = ["__init__.py", "main.py", "crew.py"] + src_template_files = ( + ["__init__.py", "main.py", "crew.py"] if not parent_folder else ["crew.py"] + ) for file_name in root_template_files: src_file = templates_dir / file_name - dst_file = Path(folder_name) / file_name + dst_file = folder_path / file_name copy_template(src_file, dst_file, name, class_name, folder_name) - for file_name in src_template_files: - src_file = templates_dir / file_name - dst_file = Path(folder_name) / "src" / folder_name / file_name - copy_template(src_file, dst_file, name, class_name, folder_name) + src_folder = folder_path / "src" / folder_name if not parent_folder else folder_path - for file_name in tools_template_files: + for file_name in src_template_files: src_file = templates_dir / file_name - dst_file = Path(folder_name) / "src" / folder_name / file_name + dst_file = src_folder / file_name copy_template(src_file, dst_file, name, class_name, folder_name) - for file_name in config_template_files: - src_file = templates_dir / file_name - dst_file = Path(folder_name) / "src" / folder_name / file_name - copy_template(src_file, dst_file, name, class_name, folder_name) + if not parent_folder: + for file_name in tools_template_files + config_template_files: + src_file = templates_dir / file_name + dst_file = src_folder / file_name + copy_template(src_file, dst_file, name, class_name, folder_name) click.secho(f"Crew {name} created successfully!", fg="green", bold=True) - - -def copy_template(src, dst, name, class_name, folder_name): - """Copy a file from src to dst.""" - with open(src, "r") as file: - content = file.read() - - # Interpolate the content - content = content.replace("{{name}}", name) - content = content.replace("{{crew_name}}", class_name) - content = content.replace("{{folder_name}}", folder_name) - - # Write the interpolated content to the new file - with open(dst, "w") as file: - file.write(content) - - click.secho(f" - Created {dst}", fg="green") diff --git a/src/crewai/cli/create_pipeline.py b/src/crewai/cli/create_pipeline.py new file mode 100644 index 0000000000..b26acf818d --- /dev/null +++ b/src/crewai/cli/create_pipeline.py @@ -0,0 +1,107 @@ +import shutil +from pathlib import Path + +import click + + +def create_pipeline(name, router=False): + """Create a new pipeline project.""" + folder_name = name.replace(" ", "_").replace("-", "_").lower() + class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "") + + click.secho(f"Creating pipeline {folder_name}...", fg="green", bold=True) + + project_root = Path(folder_name) + if project_root.exists(): + click.secho(f"Error: Folder {folder_name} already exists.", fg="red") + return + + # Create directory structure + (project_root / "src" / folder_name).mkdir(parents=True) + (project_root / "src" / folder_name / "pipelines").mkdir(parents=True) + (project_root / "src" / folder_name / "crews").mkdir(parents=True) + (project_root / "src" / folder_name 
/ "tools").mkdir(parents=True) + (project_root / "tests").mkdir(exist_ok=True) + + # Create .env file + with open(project_root / ".env", "w") as file: + file.write("OPENAI_API_KEY=YOUR_API_KEY") + + package_dir = Path(__file__).parent + template_folder = "pipeline_router" if router else "pipeline" + templates_dir = package_dir / "templates" / template_folder + + # List of template files to copy + root_template_files = [".gitignore", "pyproject.toml", "README.md"] + src_template_files = ["__init__.py", "main.py"] + tools_template_files = ["tools/__init__.py", "tools/custom_tool.py"] + + if router: + crew_folders = [ + "classifier_crew", + "normal_crew", + "urgent_crew", + ] + pipelines_folders = [ + "pipelines/__init__.py", + "pipelines/pipeline_classifier.py", + "pipelines/pipeline_normal.py", + "pipelines/pipeline_urgent.py", + ] + else: + crew_folders = [ + "research_crew", + "write_linkedin_crew", + "write_x_crew", + ] + pipelines_folders = ["pipelines/__init__.py", "pipelines/pipeline.py"] + + def process_file(src_file, dst_file): + with open(src_file, "r") as file: + content = file.read() + + content = content.replace("{{name}}", name) + content = content.replace("{{crew_name}}", class_name) + content = content.replace("{{folder_name}}", folder_name) + content = content.replace("{{pipeline_name}}", class_name) + + with open(dst_file, "w") as file: + file.write(content) + + # Copy and process root template files + for file_name in root_template_files: + src_file = templates_dir / file_name + dst_file = project_root / file_name + process_file(src_file, dst_file) + + # Copy and process src template files + for file_name in src_template_files: + src_file = templates_dir / file_name + dst_file = project_root / "src" / folder_name / file_name + process_file(src_file, dst_file) + + # Copy tools files + for file_name in tools_template_files: + src_file = templates_dir / file_name + dst_file = project_root / "src" / folder_name / file_name + shutil.copy(src_file, dst_file) + + # Copy pipelines folders + for file_name in pipelines_folders: + src_file = templates_dir / file_name + dst_file = project_root / "src" / folder_name / file_name + process_file(src_file, dst_file) + + # Copy crew folders + for crew_folder in crew_folders: + src_crew_folder = templates_dir / "crews" / crew_folder + dst_crew_folder = project_root / "src" / folder_name / "crews" / crew_folder + if src_crew_folder.exists(): + shutil.copytree(src_crew_folder, dst_crew_folder) + else: + click.secho( + f"Warning: Crew folder {crew_folder} not found in template.", + fg="yellow", + ) + + click.secho(f"Pipeline {name} created successfully!", fg="green", bold=True) diff --git a/src/crewai/cli/templates/.gitignore b/src/crewai/cli/templates/crew/.gitignore similarity index 100% rename from src/crewai/cli/templates/.gitignore rename to src/crewai/cli/templates/crew/.gitignore diff --git a/src/crewai/cli/templates/README.md b/src/crewai/cli/templates/crew/README.md similarity index 100% rename from src/crewai/cli/templates/README.md rename to src/crewai/cli/templates/crew/README.md diff --git a/src/crewai/cli/templates/tools/__init__.py b/src/crewai/cli/templates/crew/__init__.py similarity index 100% rename from src/crewai/cli/templates/tools/__init__.py rename to src/crewai/cli/templates/crew/__init__.py diff --git a/src/crewai/cli/templates/config/agents.yaml b/src/crewai/cli/templates/crew/config/agents.yaml similarity index 100% rename from src/crewai/cli/templates/config/agents.yaml rename to 
src/crewai/cli/templates/crew/config/agents.yaml diff --git a/src/crewai/cli/templates/config/tasks.yaml b/src/crewai/cli/templates/crew/config/tasks.yaml similarity index 100% rename from src/crewai/cli/templates/config/tasks.yaml rename to src/crewai/cli/templates/crew/config/tasks.yaml diff --git a/src/crewai/cli/templates/crew.py b/src/crewai/cli/templates/crew/crew.py similarity index 100% rename from src/crewai/cli/templates/crew.py rename to src/crewai/cli/templates/crew/crew.py diff --git a/src/crewai/cli/templates/main.py b/src/crewai/cli/templates/crew/main.py similarity index 100% rename from src/crewai/cli/templates/main.py rename to src/crewai/cli/templates/crew/main.py diff --git a/src/crewai/cli/templates/pyproject.toml b/src/crewai/cli/templates/crew/pyproject.toml similarity index 100% rename from src/crewai/cli/templates/pyproject.toml rename to src/crewai/cli/templates/crew/pyproject.toml diff --git a/src/crewai/cli/templates/crew/tools/__init__.py b/src/crewai/cli/templates/crew/tools/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/crewai/cli/templates/tools/custom_tool.py b/src/crewai/cli/templates/crew/tools/custom_tool.py similarity index 100% rename from src/crewai/cli/templates/tools/custom_tool.py rename to src/crewai/cli/templates/crew/tools/custom_tool.py diff --git a/src/crewai/cli/templates/pipeline/.gitignore b/src/crewai/cli/templates/pipeline/.gitignore new file mode 100644 index 0000000000..d50a09fc91 --- /dev/null +++ b/src/crewai/cli/templates/pipeline/.gitignore @@ -0,0 +1,2 @@ +.env +__pycache__/ diff --git a/src/crewai/cli/templates/pipeline/README.md b/src/crewai/cli/templates/pipeline/README.md new file mode 100644 index 0000000000..60dc617e9d --- /dev/null +++ b/src/crewai/cli/templates/pipeline/README.md @@ -0,0 +1,57 @@ +# {{crew_name}} Crew + +Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities. + +## Installation + +Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience. + +First, if you haven't already, install Poetry: + +```bash +pip install poetry +``` + +Next, navigate to your project directory and install the dependencies: + +1. First lock the dependencies and then install them: +```bash +poetry lock +``` +```bash +poetry install +``` +### Customizing + +**Add your `OPENAI_API_KEY` into the `.env` file** + +- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents +- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks +- Modify `src/{{folder_name}}/crew.py` to add your own logic, tools and specific args +- Modify `src/{{folder_name}}/main.py` to add custom inputs for your agents and tasks + +## Running the Project + +To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project: + +```bash +poetry run {{folder_name}} +``` + +This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration. 
+ +This example, unmodified, will create a `report.md` file with the output of research on LLMs in the root folder. + +## Understanding Your Crew + +The {{name}} Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew. + +## Support + +For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI: +- Visit our [documentation](https://docs.crewai.com) +- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai) +- [Join our Discord](https://discord.com/invite/X4JWnZnxPb) +- [Chat with our docs](https://chatg.pt/DWjSBZn) + +Let's create wonders together with the power and simplicity of crewAI. diff --git a/src/crewai/cli/templates/pipeline/__init__.py b/src/crewai/cli/templates/pipeline/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/crewai/cli/templates/pipeline/crews/research_crew/config/agents.yaml b/src/crewai/cli/templates/pipeline/crews/research_crew/config/agents.yaml new file mode 100644 index 0000000000..f8cf1f5c11 --- /dev/null +++ b/src/crewai/cli/templates/pipeline/crews/research_crew/config/agents.yaml @@ -0,0 +1,19 @@ +researcher: + role: > + {topic} Senior Data Researcher + goal: > + Uncover cutting-edge developments in {topic} + backstory: > + You're a seasoned researcher with a knack for uncovering the latest + developments in {topic}. Known for your ability to find the most relevant + information and present it in a clear and concise manner. + +reporting_analyst: + role: > + {topic} Reporting Analyst + goal: > + Create detailed reports based on {topic} data analysis and research findings + backstory: > + You're a meticulous analyst with a keen eye for detail. You're known for + your ability to turn complex data into clear and concise reports, making + it easy for others to understand and act on the information you provide. diff --git a/src/crewai/cli/templates/pipeline/crews/research_crew/config/tasks.yaml b/src/crewai/cli/templates/pipeline/crews/research_crew/config/tasks.yaml new file mode 100644 index 0000000000..e78091842c --- /dev/null +++ b/src/crewai/cli/templates/pipeline/crews/research_crew/config/tasks.yaml @@ -0,0 +1,16 @@ +research_task: + description: > + Conduct thorough research about {topic} + Make sure you find any interesting and relevant information given + the current year is 2024. + expected_output: > + A list with 10 bullet points of the most relevant information about {topic} + agent: researcher + +reporting_task: + description: > + Review the context you got and expand each topic into a full section for a report. + Make sure the report is detailed and contains any and all relevant information. + expected_output: > + A fully fledged report with a title and main topics, each with a full section of information. + agent: reporting_analyst
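The `{topic}` placeholders in the YAML above are filled in from the inputs passed to `kickoff`; the template's `main.py` later in this diff passes `{"topic": "AI wearables"}`. Roughly, the interpolation behaves like Python's `str.format`; this is an illustrative sketch, not the framework's actual code path:

```python
# Hypothetical illustration of how kickoff inputs reach the YAML config strings.
goal_template = "Uncover cutting-edge developments in {topic}"
inputs = {"topic": "AI wearables"}

print(goal_template.format(**inputs))
# -> Uncover cutting-edge developments in AI wearables
```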
diff --git a/src/crewai/cli/templates/pipeline/crews/research_crew/research_crew.py b/src/crewai/cli/templates/pipeline/crews/research_crew/research_crew.py new file mode 100644 index 0000000000..f26ad712a2 --- /dev/null +++ b/src/crewai/cli/templates/pipeline/crews/research_crew/research_crew.py @@ -0,0 +1,58 @@ +from pydantic import BaseModel +from crewai import Agent, Crew, Process, Task +from crewai.project import CrewBase, agent, crew, task + +# Uncomment the following line to use an example of a custom tool +# from demo_pipeline.tools.custom_tool import MyCustomTool + +# Check our tools documentation for more information on how to use them +# from crewai_tools import SerperDevTool + + +class ResearchReport(BaseModel): + """Research Report""" + title: str + body: str + +@CrewBase +class ResearchCrew(): + """Research Crew""" + agents_config = 'config/agents.yaml' + tasks_config = 'config/tasks.yaml' + + @agent + def researcher(self) -> Agent: + return Agent( + config=self.agents_config['researcher'], + verbose=True + ) + + @agent + def reporting_analyst(self) -> Agent: + return Agent( + config=self.agents_config['reporting_analyst'], + verbose=True + ) + + @task + def research_task(self) -> Task: + return Task( + config=self.tasks_config['research_task'], + ) + + @task + def reporting_task(self) -> Task: + return Task( + config=self.tasks_config['reporting_task'], + output_pydantic=ResearchReport + ) + + @crew + def crew(self) -> Crew: + """Creates the Research Crew""" + return Crew( + agents=self.agents, # Automatically created by the @agent decorator + tasks=self.tasks, # Automatically created by the @task decorator + process=Process.sequential, + verbose=True, + ) \ No newline at end of file diff --git a/src/crewai/cli/templates/pipeline/crews/write_linkedin_crew/config/agents.yaml b/src/crewai/cli/templates/pipeline/crews/write_linkedin_crew/config/agents.yaml new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/crewai/cli/templates/pipeline/crews/write_linkedin_crew/config/tasks.yaml b/src/crewai/cli/templates/pipeline/crews/write_linkedin_crew/config/tasks.yaml new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/crewai/cli/templates/pipeline/crews/write_linkedin_crew/write_linkedin_crew.py b/src/crewai/cli/templates/pipeline/crews/write_linkedin_crew/write_linkedin_crew.py new file mode 100644 index 0000000000..4a40c3fb4b --- /dev/null +++ b/src/crewai/cli/templates/pipeline/crews/write_linkedin_crew/write_linkedin_crew.py @@ -0,0 +1,51 @@ +from crewai import Agent, Crew, Process, Task +from crewai.project import CrewBase, agent, crew, task + +# Uncomment the following line to use an example of a custom tool +# from {{folder_name}}.tools.custom_tool import MyCustomTool + +# Check our tools documentation for more information on how to use them +# from crewai_tools import SerperDevTool + +@CrewBase +class WriteLinkedInCrew(): + """Write LinkedIn Crew""" + agents_config = 'config/agents.yaml' + tasks_config = 'config/tasks.yaml' + + @agent + def researcher(self) -> Agent: + return Agent( + config=self.agents_config['researcher'], + verbose=True + ) + + @agent + def reporting_analyst(self) -> Agent: + return Agent( + config=self.agents_config['reporting_analyst'], + verbose=True + ) + + @task + def research_task(self) -> Task: + return Task( + config=self.tasks_config['research_task'], + ) + + @task + def reporting_task(self) -> Task: + return Task( + config=self.tasks_config['reporting_task'], + output_file='report.md' + ) + +
@crew + def crew(self) -> Crew: + """Creates the {{crew_name}} crew""" + return Crew( + agents=self.agents, # Automatically created by the @agent decorator + tasks=self.tasks, # Automatically created by the @task decorator + process=Process.sequential, + verbose=True, + ) \ No newline at end of file diff --git a/src/crewai/cli/templates/pipeline/crews/write_x_crew/config/agents.yaml b/src/crewai/cli/templates/pipeline/crews/write_x_crew/config/agents.yaml new file mode 100644 index 0000000000..1401dcbe06 --- /dev/null +++ b/src/crewai/cli/templates/pipeline/crews/write_x_crew/config/agents.yaml @@ -0,0 +1,14 @@ +x_writer_agent: + role: > + Expert Social Media Content Creator specializing in short form written content + goal: > + Create viral-worthy, engaging short form posts that distill complex {topic} information + into compelling 280-character messages + backstory: > + You're a social media virtuoso with a particular talent for short form content. Your posts + consistently go viral due to your ability to craft hooks that stop users mid-scroll. + You've studied the techniques of social media masters like Justin Welsh, Dickie Bush, + Nicolas Cole, and Shaan Puri, incorporating their best practices into your own unique style. + Your superpower is taking intricate {topic} concepts and transforming them into + bite-sized, shareable content that resonates with a wide audience. You know exactly + how to structure a post for maximum impact and engagement. diff --git a/src/crewai/cli/templates/pipeline/crews/write_x_crew/config/tasks.yaml b/src/crewai/cli/templates/pipeline/crews/write_x_crew/config/tasks.yaml new file mode 100644 index 0000000000..1ffbc207aa --- /dev/null +++ b/src/crewai/cli/templates/pipeline/crews/write_x_crew/config/tasks.yaml @@ -0,0 +1,22 @@ +write_x_task: + description: > + Using the research report provided, create an engaging short form post about {topic}. + Your post should have a great hook, summarize key points, and be structured for easy + consumption on a digital platform. The post must be under 280 characters. + Follow these guidelines: + 1. Start with an attention-grabbing hook + 2. Condense the main insights from the research + 3. Use clear, concise language + 4. Include a call-to-action or thought-provoking question if space allows + 5. Ensure the post flows well and is easy to read quickly + + Here is the title of the research report you will be using + + Title: {title} + Research: + {body} + + expected_output: > + A compelling X post under 280 characters that effectively summarizes the key findings + about {topic}, starts with a strong hook, and is optimized for engagement on the platform. 
+ agent: x_writer_agent diff --git a/src/crewai/cli/templates/pipeline/crews/write_x_crew/write_x_crew.py b/src/crewai/cli/templates/pipeline/crews/write_x_crew/write_x_crew.py new file mode 100644 index 0000000000..454aafdc01 --- /dev/null +++ b/src/crewai/cli/templates/pipeline/crews/write_x_crew/write_x_crew.py @@ -0,0 +1,36 @@ +from crewai import Agent, Crew, Process, Task +from crewai.project import CrewBase, agent, crew, task + +# Uncomment the following line to use an example of a custom tool +# from demo_pipeline.tools.custom_tool import MyCustomTool + +# Check our tools documentation for more information on how to use them +# from crewai_tools import SerperDevTool + + +@CrewBase +class WriteXCrew: + """Write X Crew""" + + agents_config = "config/agents.yaml" + tasks_config = "config/tasks.yaml" + + @agent + def x_writer_agent(self) -> Agent: + return Agent(config=self.agents_config["x_writer_agent"], verbose=True) + + @task + def write_x_task(self) -> Task: + return Task( + config=self.tasks_config["write_x_task"], + ) + + @crew + def crew(self) -> Crew: + """Creates the Write X Crew""" + return Crew( + agents=self.agents, # Automatically created by the @agent decorator + tasks=self.tasks, # Automatically created by the @task decorator + process=Process.sequential, + verbose=True, + ) diff --git a/src/crewai/cli/templates/pipeline/main.py b/src/crewai/cli/templates/pipeline/main.py new file mode 100644 index 0000000000..3766933309 --- /dev/null +++ b/src/crewai/cli/templates/pipeline/main.py @@ -0,0 +1,26 @@ +#!/usr/bin/env python +import asyncio +from {{folder_name}}.pipelines.pipeline import {{pipeline_name}}Pipeline + +async def run(): + """ + Run the pipeline. + """ + inputs = [ + {"topic": "AI wearables"}, + ] + pipeline = {{pipeline_name}}Pipeline() + results = await pipeline.kickoff(inputs) + + # Process and print results + for result in results: + print(f"Raw output: {result.raw}") + if result.json_dict: + print(f"JSON output: {result.json_dict}") + print("\n") + +def main(): + asyncio.run(run()) + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/src/crewai/cli/templates/pipeline/pipelines/__init__.py b/src/crewai/cli/templates/pipeline/pipelines/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/crewai/cli/templates/pipeline/pipelines/pipeline.py b/src/crewai/cli/templates/pipeline/pipelines/pipeline.py new file mode 100644 index 0000000000..8ca8dfa44f --- /dev/null +++ b/src/crewai/cli/templates/pipeline/pipelines/pipeline.py @@ -0,0 +1,87 @@ +""" +This pipeline file includes two different examples to demonstrate the flexibility of crewAI pipelines. + +Example 1: Two-Stage Pipeline +----------------------------- +This pipeline consists of two crews: +1. ResearchCrew: Performs research on a given topic. +2. WriteXCrew: Generates an X (Twitter) post based on the research findings. + +Key features: +- The ResearchCrew's final task uses output_pydantic to store all research findings in a structured object. +- This object is then passed to the WriteXCrew, where tasks can access the research findings. + +Example 2: Two-Stage Pipeline with Parallel Execution +------------------------------------------------------- +This pipeline consists of three crews: +1. ResearchCrew: Performs research on a given topic. +2. WriteXCrew and WriteLinkedInCrew: Run in parallel, using the research findings to generate posts for X and LinkedIn, respectively. + +Key features: +- Demonstrates the ability to run multiple crews in parallel. 
+- Shows how to structure a pipeline with both sequential and parallel stages. + +Usage: +- To switch between examples, comment/uncomment the respective code blocks below. +- Ensure that you have implemented all necessary crew classes (ResearchCrew, WriteXCrew, WriteLinkedInCrew) before running. +""" + +# Common imports for both examples +from crewai import Pipeline + + + +# Uncomment the crews you need for your chosen example +from ..crews.research_crew.research_crew import ResearchCrew +from ..crews.write_x_crew.write_x_crew import WriteXCrew +# from ..crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew # Uncomment for Example 2 + +# EXAMPLE 1: Two-Stage Pipeline +# ----------------------------- +# Uncomment the following code block to use Example 1 + +class {{pipeline_name}}Pipeline: + def __init__(self): + # Initialize crews + self.research_crew = ResearchCrew().crew() + self.write_x_crew = WriteXCrew().crew() + + def create_pipeline(self): + return Pipeline( + stages=[ + self.research_crew, + self.write_x_crew + ] + ) + + async def kickoff(self, inputs): + pipeline = self.create_pipeline() + results = await pipeline.kickoff(inputs) + return results + + +# EXAMPLE 2: Two-Stage Pipeline with Parallel Execution +# ------------------------------------------------------- +# Uncomment the following code block to use Example 2 + +# @PipelineBase +# class {{pipeline_name}}Pipeline: +# def __init__(self): +# # Initialize crews +# self.research_crew = ResearchCrew().crew() +# self.write_x_crew = WriteXCrew().crew() +# self.write_linkedin_crew = WriteLinkedInCrew().crew() + +# @pipeline
# def create_pipeline(self): +# return Pipeline( +# stages=[ +# self.research_crew, +# [self.write_x_crew, self.write_linkedin_crew] # Parallel execution +# ] +# ) + +# async def run(self, inputs): +# pipeline = self.create_pipeline() +# results = await pipeline.kickoff(inputs) +# return results \ No newline at end of file diff --git a/src/crewai/cli/templates/pipeline/pyproject.toml b/src/crewai/cli/templates/pipeline/pyproject.toml new file mode 100644 index 0000000000..f3f6432146 --- /dev/null +++ b/src/crewai/cli/templates/pipeline/pyproject.toml @@ -0,0 +1,17 @@ +[tool.poetry] +name = "{{folder_name}}" +version = "0.1.0" +description = "{{name}} using crewAI" +authors = ["Your Name "] + +[tool.poetry.dependencies] +python = ">=3.10,<=3.13" +crewai = { extras = ["tools"], version = "^0.46.0" } +asyncio = "*" + +[tool.poetry.scripts] +{{folder_name}} = "{{folder_name}}.main:main" + +[build-system] +requires = ["poetry-core"] +build-backend = "poetry.core.masonry.api" \ No newline at end of file diff --git a/src/crewai/cli/templates/pipeline/tools/__init__.py b/src/crewai/cli/templates/pipeline/tools/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/crewai/cli/templates/pipeline/tools/custom_tool.py b/src/crewai/cli/templates/pipeline/tools/custom_tool.py new file mode 100644 index 0000000000..b125293033 --- /dev/null +++ b/src/crewai/cli/templates/pipeline/tools/custom_tool.py @@ -0,0 +1,12 @@ +from crewai_tools import BaseTool + + +class MyCustomTool(BaseTool): + name: str = "Name of my tool" + description: str = ( + "Clear description for what this tool is useful for, your agent will need this information to use it." + ) + + def _run(self, argument: str) -> str: + # Implementation goes here + return "this is an example of a tool output, ignore it and move along." 
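Condensing the two examples described in the template's docstring into one sketch. The crew classes and the `my_project` module path are placeholders for whatever `crewai create pipeline` generated; the `Pipeline` API (list stages for parallel branches, async `kickoff`, `result.raw`) is exactly what the diff above defines:

```python
import asyncio

from crewai import Pipeline

# Placeholder imports: the real module path depends on the generated project.
from my_project.crews.research_crew.research_crew import ResearchCrew
from my_project.crews.write_x_crew.write_x_crew import WriteXCrew
from my_project.crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew


async def demo():
    research = ResearchCrew().crew()
    write_x = WriteXCrew().crew()
    write_linkedin = WriteLinkedInCrew().crew()

    # Example 1: strictly sequential, research output feeds the X writer.
    sequential = Pipeline(stages=[research, write_x])

    # Example 2: a parallel stage fans the research output out to both writers.
    fan_out = Pipeline(stages=[research, [write_x, write_linkedin]])

    results = await fan_out.kickoff([{"topic": "AI wearables"}])
    for result in results:
        print(result.raw)


asyncio.run(demo())
```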
diff --git a/src/crewai/cli/templates/pipeline_router/.gitignore b/src/crewai/cli/templates/pipeline_router/.gitignore new file mode 100644 index 0000000000..d50a09fc91 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/.gitignore @@ -0,0 +1,2 @@ +.env +__pycache__/ diff --git a/src/crewai/cli/templates/pipeline_router/README.md b/src/crewai/cli/templates/pipeline_router/README.md new file mode 100644 index 0000000000..60dc617e9d --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/README.md @@ -0,0 +1,57 @@ +# {{crew_name}} Crew + +Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities. + +## Installation + +Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience. + +First, if you haven't already, install Poetry: + +```bash +pip install poetry +``` + +Next, navigate to your project directory and install the dependencies: + +1. First lock the dependencies and then install them: +```bash +poetry lock +``` +```bash +poetry install +``` +### Customizing + +**Add your `OPENAI_API_KEY` into the `.env` file** + +- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents +- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks +- Modify `src/{{folder_name}}/crew.py` to add your own logic, tools and specific args +- Modify `src/{{folder_name}}/main.py` to add custom inputs for your agents and tasks + +## Running the Project + +To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project: + +```bash +poetry run {{folder_name}} +``` + +This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration. + +This example, unmodified, will create a `report.md` file with the output of research on LLMs in the root folder. + +## Understanding Your Crew + +The {{name}} Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew. + +## Support + +For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI: +- Visit our [documentation](https://docs.crewai.com) +- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai) +- [Join our Discord](https://discord.com/invite/X4JWnZnxPb) +- [Chat with our docs](https://chatg.pt/DWjSBZn) + +Let's create wonders together with the power and simplicity of crewAI. 
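The `--router` flag scaffolds the files that follow. The `Router` implementation itself (`crewai/routers/router.py`) is not part of this excerpt; judging by how `main.py` and `pipeline.py` use it, `route(input)` returns the chosen pipeline plus the route name. A first-match-wins sketch of that dispatch, offered as one plausible reading rather than the actual implementation:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Tuple


@dataclass
class SketchRoute:
    condition: Callable[[Dict[str, Any]], bool]
    pipeline: Any  # a crewAI Pipeline in practice


def route(routes: Dict[str, SketchRoute], default: Any, data: Dict[str, Any]) -> Tuple[Any, str]:
    # Return the first route whose condition accepts the input, else the default.
    for name, candidate in routes.items():
        if candidate.condition(data):
            return candidate.pipeline, name
    return default, "default"
```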
diff --git a/src/crewai/cli/templates/pipeline_router/__init__.py b/src/crewai/cli/templates/pipeline_router/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/crewai/cli/templates/pipeline_router/config/agents.yaml b/src/crewai/cli/templates/pipeline_router/config/agents.yaml new file mode 100644 index 0000000000..72ed6939e4 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/config/agents.yaml @@ -0,0 +1,19 @@ +researcher: + role: > + {topic} Senior Data Researcher + goal: > + Uncover cutting-edge developments in {topic} + backstory: > + You're a seasoned researcher with a knack for uncovering the latest + developments in {topic}. Known for your ability to find the most relevant + information and present it in a clear and concise manner. + +reporting_analyst: + role: > + {topic} Reporting Analyst + goal: > + Create detailed reports based on {topic} data analysis and research findings + backstory: > + You're a meticulous analyst with a keen eye for detail. You're known for + your ability to turn complex data into clear and concise reports, making + it easy for others to understand and act on the information you provide. \ No newline at end of file diff --git a/src/crewai/cli/templates/pipeline_router/config/tasks.yaml b/src/crewai/cli/templates/pipeline_router/config/tasks.yaml new file mode 100644 index 0000000000..f30820855e --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/config/tasks.yaml @@ -0,0 +1,17 @@ +research_task: + description: > + Conduct thorough research about {topic} + Make sure you find any interesting and relevant information given + the current year is 2024. + expected_output: > + A list with 10 bullet points of the most relevant information about {topic} + agent: researcher + +reporting_task: + description: > + Review the context you got and expand each topic into a full section for a report. + Make sure the report is detailed and contains any and all relevant information. + expected_output: > + A fully fledged report with the main topics, each with a full section of information. 
+ Formatted as markdown without '```' + agent: reporting_analyst diff --git a/src/crewai/cli/templates/pipeline_router/crews/classifier_crew/classifier_crew.py b/src/crewai/cli/templates/pipeline_router/crews/classifier_crew/classifier_crew.py new file mode 100644 index 0000000000..af6be20ab7 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/crews/classifier_crew/classifier_crew.py @@ -0,0 +1,40 @@ +from crewai import Agent, Crew, Process, Task +from crewai.project import CrewBase, agent, crew, task +from pydantic import BaseModel + +# Uncomment the following line to use an example of a custom tool +# from demo_pipeline.tools.custom_tool import MyCustomTool + +# Check our tools documentation for more information on how to use them +# from crewai_tools import SerperDevTool + +class UrgencyScore(BaseModel): + urgency_score: int + +@CrewBase +class ClassifierCrew: + """Email Classifier Crew""" + + agents_config = "config/agents.yaml" + tasks_config = "config/tasks.yaml" + + @agent + def classifier(self) -> Agent: + return Agent(config=self.agents_config["classifier"], verbose=True) + + @task + def classify_email(self) -> Task: + return Task( + config=self.tasks_config["classify_email"], + output_pydantic=UrgencyScore, + ) + + @crew + def crew(self) -> Crew: + """Creates the Email Classifier Crew""" + return Crew( + agents=self.agents, # Automatically created by the @agent decorator + tasks=self.tasks, # Automatically created by the @task decorator + process=Process.sequential, + verbose=True, + ) diff --git a/src/crewai/cli/templates/pipeline_router/crews/classifier_crew/config/agents.yaml b/src/crewai/cli/templates/pipeline_router/crews/classifier_crew/config/agents.yaml new file mode 100644 index 0000000000..45506d0388 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/crews/classifier_crew/config/agents.yaml @@ -0,0 +1,7 @@ +classifier: + role: > + Email Classifier + goal: > + Classify the email: {email} as urgent or normal from a score of 1 to 10, where 1 is not urgent and 10 is urgent. Return the urgency score only. + backstory: > + You are a highly efficient and experienced email classifier, trained to quickly assess and classify emails. Your ability to remain calm under pressure and provide concise, actionable responses has made you an invaluable asset in managing normal situations and maintaining smooth operations. diff --git a/src/crewai/cli/templates/pipeline_router/crews/classifier_crew/config/tasks.yaml b/src/crewai/cli/templates/pipeline_router/crews/classifier_crew/config/tasks.yaml new file mode 100644 index 0000000000..cd843fd1ac --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/crews/classifier_crew/config/tasks.yaml @@ -0,0 +1,7 @@ +classify_email: + description: > + Classify the email: {email} + as urgent or normal. + expected_output: > + Classify the email from a scale of 1 to 10, where 1 is not urgent and 10 is urgent. Return the urgency score only. + agent: classifier diff --git a/src/crewai/cli/templates/pipeline_router/crews/normal_crew/config/agents.yaml b/src/crewai/cli/templates/pipeline_router/crews/normal_crew/config/agents.yaml new file mode 100644 index 0000000000..c847ce8f55 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/crews/normal_crew/config/agents.yaml @@ -0,0 +1,7 @@ +normal_handler: + role: > + Normal Email Processor + goal: > + Process normal emails and create an email to respond to the sender. 
+ backstory: > + You are a highly efficient and experienced normal email handler, trained to quickly assess and respond to normal communications. Your ability to remain calm under pressure and provide concise, actionable responses has made you an invaluable asset in managing normal situations and maintaining smooth operations. diff --git a/src/crewai/cli/templates/pipeline_router/crews/normal_crew/config/tasks.yaml b/src/crewai/cli/templates/pipeline_router/crews/normal_crew/config/tasks.yaml new file mode 100644 index 0000000000..341303e902 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/crews/normal_crew/config/tasks.yaml @@ -0,0 +1,6 @@ +normal_task: + description: > + Process and respond to normal email quickly. + expected_output: > + An email response to the normal email. + agent: normal_handler diff --git a/src/crewai/cli/templates/pipeline_router/crews/normal_crew/normal_crew.py b/src/crewai/cli/templates/pipeline_router/crews/normal_crew/normal_crew.py new file mode 100644 index 0000000000..c240acfd15 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/crews/normal_crew/normal_crew.py @@ -0,0 +1,36 @@ +from crewai import Agent, Crew, Process, Task +from crewai.project import CrewBase, agent, crew, task + +# Uncomment the following line to use an example of a custom tool +# from demo_pipeline.tools.custom_tool import MyCustomTool + +# Check our tools documentation for more information on how to use them +# from crewai_tools import SerperDevTool + + +@CrewBase +class NormalCrew: + """Normal Email Crew""" + + agents_config = "config/agents.yaml" + tasks_config = "config/tasks.yaml" + + @agent + def normal_handler(self) -> Agent: + return Agent(config=self.agents_config["normal_handler"], verbose=True) + + @task + def normal_task(self) -> Task: + return Task( + config=self.tasks_config["normal_task"], + ) + + @crew + def crew(self) -> Crew: + """Creates the Normal Email Crew""" + return Crew( + agents=self.agents, # Automatically created by the @agent decorator + tasks=self.tasks, # Automatically created by the @task decorator + process=Process.sequential, + verbose=True, + ) diff --git a/src/crewai/cli/templates/pipeline_router/crews/urgent_crew/config/agents.yaml b/src/crewai/cli/templates/pipeline_router/crews/urgent_crew/config/agents.yaml new file mode 100644 index 0000000000..52804a9c12 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/crews/urgent_crew/config/agents.yaml @@ -0,0 +1,7 @@ +urgent_handler: + role: > + Urgent Email Processor + goal: > + Process urgent emails and create an email to respond to the sender. + backstory: > + You are a highly efficient and experienced urgent email handler, trained to quickly assess and respond to time-sensitive communications. Your ability to remain calm under pressure and provide concise, actionable responses has made you an invaluable asset in managing critical situations and maintaining smooth operations. diff --git a/src/crewai/cli/templates/pipeline_router/crews/urgent_crew/config/tasks.yaml b/src/crewai/cli/templates/pipeline_router/crews/urgent_crew/config/tasks.yaml new file mode 100644 index 0000000000..dc2ee1c2a5 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/crews/urgent_crew/config/tasks.yaml @@ -0,0 +1,6 @@ +urgent_task: + description: > + Process and respond to urgent email quickly. + expected_output: > + An email response to the urgent email. 
+ agent: urgent_handler diff --git a/src/crewai/cli/templates/pipeline_router/crews/urgent_crew/urgent_crew.py b/src/crewai/cli/templates/pipeline_router/crews/urgent_crew/urgent_crew.py new file mode 100644 index 0000000000..54c804c798 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/crews/urgent_crew/urgent_crew.py @@ -0,0 +1,36 @@ +from crewai import Agent, Crew, Process, Task +from crewai.project import CrewBase, agent, crew, task + +# Uncomment the following line to use an example of a custom tool +# from demo_pipeline.tools.custom_tool import MyCustomTool + +# Check our tools documentation for more information on how to use them +# from crewai_tools import SerperDevTool + + +@CrewBase +class UrgentCrew: + """Urgent Email Crew""" + + agents_config = "config/agents.yaml" + tasks_config = "config/tasks.yaml" + + @agent + def urgent_handler(self) -> Agent: + return Agent(config=self.agents_config["urgent_handler"], verbose=True) + + @task + def urgent_task(self) -> Task: + return Task( + config=self.tasks_config["urgent_task"], + ) + + @crew + def crew(self) -> Crew: + """Creates the Urgent Email Crew""" + return Crew( + agents=self.agents, # Automatically created by the @agent decorator + tasks=self.tasks, # Automatically created by the @task decorator + process=Process.sequential, + verbose=True, + ) diff --git a/src/crewai/cli/templates/pipeline_router/main.py b/src/crewai/cli/templates/pipeline_router/main.py new file mode 100644 index 0000000000..5d1cdc3704 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/main.py @@ -0,0 +1,75 @@ +#!/usr/bin/env python +import asyncio +from crewai.routers.router import Route +from crewai.routers.router import Router + +from {{folder_name}}.pipelines.pipeline_classifier import EmailClassifierPipeline +from {{folder_name}}.pipelines.pipeline_normal import NormalPipeline +from {{folder_name}}.pipelines.pipeline_urgent import UrgentPipeline + +async def run(): + """ + Run the pipeline. + """ + inputs = [ + { + "email": """ + Subject: URGENT: Marketing Campaign Launch - Immediate Action Required + Dear Team, + I'm reaching out regarding our upcoming marketing campaign that requires your immediate attention and swift action. We're facing a critical deadline, and our success hinges on our ability to mobilize quickly. + Key points: + + Campaign launch: 48 hours from now + Target audience: 250,000 potential customers + Expected ROI: 35% increase in Q3 sales + + What we need from you NOW: + + Final approval on creative assets (due in 3 hours) + Confirmation of media placements (due by end of day) + Last-minute budget allocation for paid social media push + + Our competitors are poised to launch similar campaigns, and we must act fast to maintain our market advantage. Delays could result in significant lost opportunities and potential revenue. + Please prioritize this campaign above all other tasks. I'll be available for the next 24 hours to address any concerns or roadblocks. + Let's make this happen! + [Your Name] + Marketing Director + P.S. I'll be scheduling an emergency team meeting in 1 hour to discuss our action plan. Attendance is mandatory. 
+ """ + } + ] + + pipeline_classifier = EmailClassifierPipeline().create_pipeline() + pipeline_urgent = UrgentPipeline().create_pipeline() + pipeline_normal = NormalPipeline().create_pipeline() + + router = Router( + routes={ + "high_urgency": Route( + condition=lambda x: x.get("urgency_score", 0) > 7, + pipeline=pipeline_urgent + ), + "low_urgency": Route( + condition=lambda x: x.get("urgency_score", 0) <= 7, + pipeline=pipeline_normal + ) + }, + default=pipeline_normal + ) + + pipeline = pipeline_classifier >> router + + results = await pipeline.kickoff(inputs) + + # Process and print results + for result in results: + print(f"Raw output: {result.raw}") + if result.json_dict: + print(f"JSON output: {result.json_dict}") + print("\n") + +def main(): + asyncio.run(run()) + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/src/crewai/cli/templates/pipeline_router/pipelines/__init__.py b/src/crewai/cli/templates/pipeline_router/pipelines/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/crewai/cli/templates/pipeline_router/pipelines/pipeline_classifier.py b/src/crewai/cli/templates/pipeline_router/pipelines/pipeline_classifier.py new file mode 100644 index 0000000000..5047d1602c --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/pipelines/pipeline_classifier.py @@ -0,0 +1,24 @@ +from crewai import Pipeline +from crewai.project import PipelineBase +from ..crews.classifier_crew.classifier_crew import ClassifierCrew + + +@PipelineBase +class EmailClassifierPipeline: + def __init__(self): + # Initialize crews + self.classifier_crew = ClassifierCrew().crew() + + def create_pipeline(self): + return Pipeline( + stages=[ + self.classifier_crew + ] + ) + + async def kickoff(self, inputs): + pipeline = self.create_pipeline() + results = await pipeline.kickoff(inputs) + return results + + diff --git a/src/crewai/cli/templates/pipeline_router/pipelines/pipeline_normal.py b/src/crewai/cli/templates/pipeline_router/pipelines/pipeline_normal.py new file mode 100644 index 0000000000..936af41764 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/pipelines/pipeline_normal.py @@ -0,0 +1,24 @@ +from crewai import Pipeline +from crewai.project import PipelineBase +from ..crews.normal_crew.normal_crew import NormalCrew + + +@PipelineBase +class NormalPipeline: + def __init__(self): + # Initialize crews + self.normal_crew = NormalCrew().crew() + + def create_pipeline(self): + return Pipeline( + stages=[ + self.normal_crew + ] + ) + + async def kickoff(self, inputs): + pipeline = self.create_pipeline() + results = await pipeline.kickoff(inputs) + return results + + diff --git a/src/crewai/cli/templates/pipeline_router/pipelines/pipeline_urgent.py b/src/crewai/cli/templates/pipeline_router/pipelines/pipeline_urgent.py new file mode 100644 index 0000000000..07297bf757 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/pipelines/pipeline_urgent.py @@ -0,0 +1,23 @@ +from crewai import Pipeline +from crewai.project import PipelineBase +from ..crews.urgent_crew.urgent_crew import UrgentCrew + +@PipelineBase +class UrgentPipeline: + def __init__(self): + # Initialize crews + self.urgent_crew = UrgentCrew().crew() + + def create_pipeline(self): + return Pipeline( + stages=[ + self.urgent_crew + ] + ) + + async def kickoff(self, inputs): + pipeline = self.create_pipeline() + results = await pipeline.kickoff(inputs) + return results + + diff --git a/src/crewai/cli/templates/pipeline_router/pyproject.toml 
diff --git a/src/crewai/cli/templates/pipeline_router/pyproject.toml b/src/crewai/cli/templates/pipeline_router/pyproject.toml new file mode 100644 index 0000000000..024f1ac4af --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/pyproject.toml @@ -0,0 +1,19 @@ +[tool.poetry] +name = "{{folder_name}}" +version = "0.1.0" +description = "{{name}} using crewAI" +authors = ["Your Name "] + +[tool.poetry.dependencies] +python = ">=3.10,<=3.13" +crewai = { extras = ["tools"], version = "^0.46.0" } + +[tool.poetry.scripts] +{{folder_name}} = "{{folder_name}}.main:main" +train = "{{folder_name}}.main:train" +replay = "{{folder_name}}.main:replay" +test = "{{folder_name}}.main:test" + +[build-system] +requires = ["poetry-core"] +build-backend = "poetry.core.masonry.api" diff --git a/src/crewai/cli/templates/pipeline_router/tools/__init__.py b/src/crewai/cli/templates/pipeline_router/tools/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/crewai/cli/templates/pipeline_router/tools/custom_tool.py b/src/crewai/cli/templates/pipeline_router/tools/custom_tool.py new file mode 100644 index 0000000000..b125293033 --- /dev/null +++ b/src/crewai/cli/templates/pipeline_router/tools/custom_tool.py @@ -0,0 +1,12 @@ +from crewai_tools import BaseTool + + +class MyCustomTool(BaseTool): + name: str = "Name of my tool" + description: str = ( + "Clear description for what this tool is useful for, your agent will need this information to use it." + ) + + def _run(self, argument: str) -> str: + # Implementation goes here + return "this is an example of a tool output, ignore it and move along." diff --git a/src/crewai/cli/utils.py b/src/crewai/cli/utils.py new file mode 100644 index 0000000000..2cb181fc48 --- /dev/null +++ b/src/crewai/cli/utils.py @@ -0,0 +1,18 @@ +import click + + +def copy_template(src, dst, name, class_name, folder_name): + """Copy a file from src to dst.""" + with open(src, "r") as file: + content = file.read() + + # Interpolate the content + content = content.replace("{{name}}", name) + content = content.replace("{{crew_name}}", class_name) + content = content.replace("{{folder_name}}", folder_name) + + # Write the interpolated content to the new file + with open(dst, "w") as file: + file.write(content) + + click.secho(f" - Created {dst}", fg="green") diff --git a/src/crewai/crew.py b/src/crewai/crew.py index 2c84e3c4ba..d7998ecff2 100644 --- a/src/crewai/crew.py +++ b/src/crewai/crew.py @@ -3,7 +3,7 @@ import uuid from concurrent.futures import Future from hashlib import md5 -from typing import Any, Dict, List, Optional, Tuple, Union +from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union from langchain_core.callbacks import BaseCallbackHandler from pydantic import ( @@ -32,11 +32,9 @@ from crewai.tasks.task_output import TaskOutput from crewai.telemetry import Telemetry from crewai.tools.agent_tools import AgentTools +from crewai.types.usage_metrics import UsageMetrics from crewai.utilities import I18N, FileHandler, Logger, RPMController -from crewai.utilities.constants import ( - TRAINED_AGENTS_DATA_FILE, - TRAINING_DATA_FILE, -) +from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE from crewai.utilities.evaluators.crew_evaluator_handler import CrewEvaluator from crewai.utilities.evaluators.task_evaluator import TaskEvaluator from crewai.utilities.formatter import ( @@ -52,6 +50,9 @@ except ImportError: agentops = None +if TYPE_CHECKING: + from crewai.pipeline.pipeline import Pipeline + class Crew(BaseModel): """ @@ -97,6 +98,7 @@ class Crew(BaseModel): 
default_factory=TaskOutputStorageHandler ) + name: Optional[str] = Field(default=None) cache: bool = Field(default=True) model_config = ConfigDict(arbitrary_types_allowed=True) tasks: List[Task] = Field(default_factory=list) @@ -111,7 +113,7 @@ class Crew(BaseModel): default={"provider": "openai"}, description="Configuration for the embedder to be used for the crew.", ) - usage_metrics: Optional[dict] = Field( + usage_metrics: Optional[UsageMetrics] = Field( default=None, description="Metrics for the LLM usage during all tasks execution.", ) @@ -147,8 +149,8 @@ class Crew(BaseModel): default=None, description="Path to the prompt json file to be used for the crew.", ) - output_log_file: Optional[Union[bool, str]] = Field( - default=False, + output_log_file: Optional[str] = Field( + default=None, description="output_log_file", ) planning: Optional[bool] = Field( @@ -453,7 +455,7 @@ def kickoff( if self.planning: self._handle_crew_planning() - metrics = [] + metrics: List[UsageMetrics] = [] if self.process == Process.sequential: result = self._run_sequential_process() @@ -463,11 +465,12 @@ def kickoff( raise NotImplementedError( f"The process '{self.process}' is not implemented yet." ) + metrics += [agent._token_process.get_summary() for agent in self.agents] - self.usage_metrics = { - key: sum([m[key] for m in metrics if m is not None]) for key in metrics[0] - } + self.usage_metrics = UsageMetrics() + for metric in metrics: + self.usage_metrics.add_usage_metrics(metric) return result @@ -476,12 +479,7 @@ def kickoff_for_each(self, inputs: List[Dict[str, Any]]) -> List[CrewOutput]: results: List[CrewOutput] = [] # Initialize the parent crew's usage metrics - total_usage_metrics = { - "total_tokens": 0, - "prompt_tokens": 0, - "completion_tokens": 0, - "successful_requests": 0, - } + total_usage_metrics = UsageMetrics() for input_data in inputs: crew = self.copy() @@ -489,8 +487,7 @@ def kickoff_for_each(self, inputs: List[Dict[str, Any]]) -> List[CrewOutput]: output = crew.kickoff(inputs=input_data) if crew.usage_metrics: - for key in total_usage_metrics: - total_usage_metrics[key] += crew.usage_metrics.get(key, 0) + total_usage_metrics.add_usage_metrics(crew.usage_metrics) results.append(output) @@ -519,29 +516,10 @@ async def run_crew(crew, input_data): results = await asyncio.gather(*tasks) - total_usage_metrics = { - "total_tokens": 0, - "prompt_tokens": 0, - "completion_tokens": 0, - "successful_requests": 0, - } - for crew in crew_copies: - if crew.usage_metrics: - for key in total_usage_metrics: - total_usage_metrics[key] += crew.usage_metrics.get(key, 0) - - self.usage_metrics = total_usage_metrics - - total_usage_metrics = { - "total_tokens": 0, - "prompt_tokens": 0, - "completion_tokens": 0, - "successful_requests": 0, - } + total_usage_metrics = UsageMetrics() for crew in crew_copies: if crew.usage_metrics: - for key in total_usage_metrics: - total_usage_metrics[key] += crew.usage_metrics.get(key, 0) + total_usage_metrics.add_usage_metrics(crew.usage_metrics) self.usage_metrics = total_usage_metrics self._task_output_handler.reset() @@ -932,25 +910,18 @@ def _finish_execution(self, final_string_output: str) -> None: ) self._telemetry.end_crew(self, final_string_output) - def calculate_usage_metrics(self) -> Dict[str, int]: + def calculate_usage_metrics(self) -> UsageMetrics: """Calculates and returns the usage metrics.""" - total_usage_metrics = { - "total_tokens": 0, - "prompt_tokens": 0, - "completion_tokens": 0, - "successful_requests": 0, - } + total_usage_metrics = 
UsageMetrics() for agent in self.agents: if hasattr(agent, "_token_process"): token_sum = agent._token_process.get_summary() - for key in total_usage_metrics: - total_usage_metrics[key] += token_sum.get(key, 0) + total_usage_metrics.add_usage_metrics(token_sum) if self.manager_agent and hasattr(self.manager_agent, "_token_process"): token_sum = self.manager_agent._token_process.get_summary() - for key in total_usage_metrics: - total_usage_metrics[key] += token_sum.get(key, 0) + total_usage_metrics.add_usage_metrics(token_sum) return total_usage_metrics @@ -969,5 +940,17 @@ def test( evaluator.print_crew_evaluation_result() + def __rshift__(self, other: "Crew") -> "Pipeline": + """ + Implements the >> operator to combine two Crews into a Pipeline. + """ + from crewai.pipeline.pipeline import Pipeline + + if not isinstance(other, Crew): + raise TypeError( + f"Unsupported operand type for >>: '{type(self).__name__}' and '{type(other).__name__}'" + ) + return Pipeline(stages=[self, other]) + def __repr__(self): return f"Crew(id={self.id}, process={self.process}, number_of_agents={len(self.agents)}, number_of_tasks={len(self.tasks)})" diff --git a/src/crewai/crews/crew_output.py b/src/crewai/crews/crew_output.py index e630c1f3a6..64d1f9caf0 100644 --- a/src/crewai/crews/crew_output.py +++ b/src/crewai/crews/crew_output.py @@ -5,6 +5,7 @@ from crewai.tasks.output_format import OutputFormat from crewai.tasks.task_output import TaskOutput +from crewai.types.usage_metrics import UsageMetrics class CrewOutput(BaseModel): @@ -20,9 +21,7 @@ class CrewOutput(BaseModel): tasks_output: list[TaskOutput] = Field( description="Output of each task", default=[] ) - token_usage: Dict[str, Any] = Field( - description="Processed token summary", default={} - ) + token_usage: UsageMetrics = Field(description="Processed token summary", default_factory=UsageMetrics) @property def json(self) -> Optional[str]: diff --git a/src/crewai/memory/storage/rag_storage.py b/src/crewai/memory/storage/rag_storage.py index e53f096e9e..5270e9c026 100644 --- a/src/crewai/memory/storage/rag_storage.py +++ b/src/crewai/memory/storage/rag_storage.py @@ -5,13 +5,13 @@ import shutil from typing import Any, Dict, List, Optional +from crewai.memory.storage.interface import Storage +from crewai.utilities.paths import db_storage_path from embedchain import App from embedchain.llm.base import BaseLlm +from embedchain.models.data_type import DataType from embedchain.vectordb.chroma import InvalidDimensionException -from crewai.memory.storage.interface import Storage -from crewai.utilities.paths import db_storage_path - @contextlib.contextmanager def suppress_logging( @@ -101,8 +101,7 @@ def search( # type: ignore # BUG?: Signature of "search" incompatible with supe return [r for r in results if r["metadata"]["score"] >= score_threshold] def _generate_embedding(self, text: str, metadata: Dict[str, Any]) -> Any: - with suppress_logging(): - self.app.add(text, data_type="text", metadata=metadata) + self.app.add(text, data_type=DataType.TEXT, metadata=metadata) def reset(self) -> None: try: diff --git a/src/crewai/pipeline/__init__.py b/src/crewai/pipeline/__init__.py new file mode 100644 index 0000000000..573154b256 --- /dev/null +++ b/src/crewai/pipeline/__init__.py @@ -0,0 +1,3 @@ +from crewai.pipeline.pipeline import Pipeline +from crewai.pipeline.pipeline_kickoff_result import PipelineKickoffResult +from crewai.pipeline.pipeline_output import PipelineOutput
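The `__rshift__` overload above is what makes the `crew1 >> crew2` syntax from the docs work: chaining two crews yields `Pipeline(stages=[crew1, crew2])`, and anything that is not a `Crew` raises a `TypeError`. A minimal sketch (the agent/task wiring is elided; `crew_a` and `crew_b` stand for fully configured crews):

```python
from crewai import Crew, Pipeline


def compose(crew_a: Crew, crew_b: Crew) -> Pipeline:
    # Per the __rshift__ overload above, this is equivalent to
    # Pipeline(stages=[crew_a, crew_b]).
    return crew_a >> crew_b

# Chaining anything that is not a Crew (e.g. crew_a >> "oops") raises TypeError.
```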
diff --git a/src/crewai/pipeline/pipeline.py b/src/crewai/pipeline/pipeline.py new file mode 100644 index 0000000000..e655290926 --- /dev/null +++ b/src/crewai/pipeline/pipeline.py @@ -0,0 +1,405 @@ +import asyncio +import copy +from typing import Any, Dict, List, Tuple, Union + +from pydantic import BaseModel, Field, model_validator + +from crewai.crew import Crew +from crewai.crews.crew_output import CrewOutput +from crewai.pipeline.pipeline_kickoff_result import PipelineKickoffResult +from crewai.routers.router import Router +from crewai.types.usage_metrics import UsageMetrics + +Trace = Union[Union[str, Dict[str, Any]], List[Union[str, Dict[str, Any]]]] +PipelineStage = Union[Crew, List[Crew], Router] + +""" +Developer Notes: + +This module defines a Pipeline class that represents a sequence of operations (stages) +to process inputs. Each stage can be either sequential or parallel, and the pipeline +can process multiple kickoffs concurrently. + +Core Loop Explanation: +1. The `kickoff` method processes multiple kickoffs in parallel, each going through + all pipeline stages. +2. The `process_single_kickoff` method handles the processing of a single kickoff through + all stages, updating metrics and input data along the way. +3. The `_process_stage` method determines whether a stage is sequential or parallel + and processes it accordingly. +4. The `_process_single_crew` and `_process_parallel_crews` methods handle the + execution of single and parallel crew stages. +5. The `_update_metrics_and_input` method updates usage metrics and the current input + with the outputs from a stage. +6. The `_build_pipeline_kickoff_results` method constructs the final results of the + pipeline kickoff, including traces and outputs. + +Handling Traces and Crew Outputs: +- During the processing of stages, we handle the results (traces and crew outputs) + for all stages except the last one differently from the final stage. +- For intermediate stages, the primary focus is on passing the input data between stages. + This involves merging the output dictionaries from all crews in a stage into a single + dictionary and passing it to the next stage. This merged dictionary allows for smooth + data flow between stages. +- For the final stage, in addition to passing the input data, we also need to prepare + the final outputs and traces to be returned as the overall result of the pipeline kickoff. + In this case, we do not merge the results, as each result needs to be included + separately in its own pipeline kickoff result. + +Pipeline Terminology: +- Pipeline: The overall structure that defines a sequence of operations. +- Stage: A distinct part of the pipeline, which can be either sequential or parallel. +- Kickoff: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline. +- Branch: Parallel executions within a stage (e.g., concurrent crew operations). +- Trace: The journey of an individual input through the entire pipeline. + +Example pipeline structure: +crew1 >> crew2 >> crew3 + +This represents a pipeline with three sequential stages: +1. crew1 is the first stage, which processes the input and passes its output to crew2. +2. crew2 is the second stage, which takes the output from crew1 as its input, processes it, and passes its output to crew3. +3. crew3 is the final stage, which takes the output from crew2 as its input and produces the final output of the pipeline. + +Each input creates its own kickoff, flowing through all stages of the pipeline. 
+ +Another example pipeline structure: +crew1 >> [crew2, crew3] >> crew4 + +This represents a pipeline with three stages: +1. A sequential stage (crew1) +2. A parallel stage with two branches (crew2 and crew3 executing concurrently) +3. Another sequential stage (crew4) + +Each input creates its own kickoff, flowing through all stages of the pipeline. +Multiple kickoffs can be processed concurrently, each following the defined pipeline structure. +""" + + +class Pipeline(BaseModel): + stages: List[PipelineStage] = Field( + ..., description="List of crews, lists of crews, or routers representing stages to be executed in sequence" + ) + + @model_validator(mode="before") + @classmethod + def validate_stages(cls, values): + stages = values.get("stages", []) + + def check_nesting_and_type(item, depth=0): + if depth > 1: + raise ValueError("Double nesting is not allowed in pipeline stages") + if isinstance(item, list): + for sub_item in item: + check_nesting_and_type(sub_item, depth + 1) + elif not isinstance(item, (Crew, Router)): + raise ValueError( + f"Expected Crew instance, Router instance, or list of Crews, got {type(item)}" + ) + + for stage in stages: + check_nesting_and_type(stage) + return values + + async def kickoff( + self, inputs: List[Dict[str, Any]] + ) -> List[PipelineKickoffResult]: + """ + Processes multiple kickoffs in parallel, each going through all pipeline stages. + + Args: + inputs (List[Dict[str, Any]]): List of inputs, one per kickoff. + + Returns: + List[PipelineKickoffResult]: List of results from each kickoff. + """ + pipeline_results: List[PipelineKickoffResult] = [] + + # Process all kickoffs in parallel + all_run_results = await asyncio.gather( + *(self.process_single_kickoff(input_data) for input_data in inputs) + ) + + # Flatten the list of lists into a single list of results + pipeline_results.extend( + result for run_result in all_run_results for result in run_result + ) + + return pipeline_results + + async def process_single_kickoff( + self, kickoff_input: Dict[str, Any] + ) -> List[PipelineKickoffResult]: + """ + Processes a single kickoff through all pipeline stages. + + Args: + kickoff_input (Dict[str, Any]): The input for the kickoff. + + Returns: + List[PipelineKickoffResult]: The results of processing the kickoff.
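+ + Example (illustrative): for stages [crew_a, router, crew_b], a router that selects a pipeline with stages [crew_x] expands the stage list for this kickoff to [crew_a, router, crew_x, crew_b].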
+ """ + initial_input = copy.deepcopy(kickoff_input) + current_input = copy.deepcopy(kickoff_input) + stages = self._copy_stages() + pipeline_usage_metrics: Dict[str, UsageMetrics] = {} + all_stage_outputs: List[List[CrewOutput]] = [] + traces: List[List[Union[str, Dict[str, Any]]]] = [[initial_input]] + + stage_index = 0 + while stage_index < len(stages): + stage = stages[stage_index] + stage_input = copy.deepcopy(current_input) + + if isinstance(stage, Router): + next_pipeline, route_taken = stage.route(stage_input) + stages = ( + stages[: stage_index + 1] + + list(next_pipeline.stages) + + stages[stage_index + 1 :] + ) + traces.append([{"route_taken": route_taken}]) + stage_index += 1 + continue + + stage_outputs, stage_trace = await self._process_stage(stage, stage_input) + + self._update_metrics_and_input( + pipeline_usage_metrics, current_input, stage, stage_outputs + ) + traces.append(stage_trace) + all_stage_outputs.append(stage_outputs) + stage_index += 1 + + return self._build_pipeline_kickoff_results( + all_stage_outputs, traces, pipeline_usage_metrics + ) + + async def _process_stage( + self, stage: PipelineStage, current_input: Dict[str, Any] + ) -> Tuple[List[CrewOutput], List[Union[str, Dict[str, Any]]]]: + """ + Processes a single stage of the pipeline, which can be either sequential or parallel. + + Args: + stage (Union[Crew, List[Crew]]): The stage to process. + current_input (Dict[str, Any]): The input for the stage. + + Returns: + Tuple[List[CrewOutput], List[Union[str, Dict[str, Any]]]]: The outputs and trace of the stage. + """ + if isinstance(stage, Crew): + return await self._process_single_crew(stage, current_input) + elif isinstance(stage, list) and all(isinstance(crew, Crew) for crew in stage): + return await self._process_parallel_crews(stage, current_input) + else: + raise ValueError(f"Unsupported stage type: {type(stage)}") + + async def _process_single_crew( + self, crew: Crew, current_input: Dict[str, Any] + ) -> Tuple[List[CrewOutput], List[Union[str, Dict[str, Any]]]]: + """ + Processes a single crew. + + Args: + crew (Crew): The crew to process. + current_input (Dict[str, Any]): The input for the crew. + + Returns: + Tuple[List[CrewOutput], List[Union[str, Dict[str, Any]]]]: The output and trace of the crew. + """ + output = await crew.kickoff_async(inputs=current_input) + return [output], [crew.name or str(crew.id)] + + async def _process_parallel_crews( + self, crews: List[Crew], current_input: Dict[str, Any] + ) -> Tuple[List[CrewOutput], List[Union[str, Dict[str, Any]]]]: + """ + Processes multiple crews in parallel. + + Args: + crews (List[Crew]): The list of crews to process in parallel. + current_input (Dict[str, Any]): The input for the crews. + + Returns: + Tuple[List[CrewOutput], List[Union[str, Dict[str, Any]]]]: The outputs and traces of the crews. + """ + parallel_outputs = await asyncio.gather( + *[crew.kickoff_async(inputs=current_input) for crew in crews] + ) + return parallel_outputs, [crew.name or str(crew.id) for crew in crews] + + def _update_metrics_and_input( + self, + usage_metrics: Dict[str, UsageMetrics], + current_input: Dict[str, Any], + stage: PipelineStage, + outputs: List[CrewOutput], + ) -> None: + """ + Updates metrics and current input with the outputs of a stage. + + Args: + usage_metrics (Dict[str, Any]): The usage metrics to update. + current_input (Dict[str, Any]): The current input to update. + stage (Union[Crew, List[Crew]]): The stage that was processed. + outputs (List[CrewOutput]): The outputs of the stage. 
+ """ + if isinstance(stage, Crew): + usage_metrics[stage.name or str(stage.id)] = outputs[0].token_usage + current_input.update(outputs[0].to_dict()) + elif isinstance(stage, list) and all(isinstance(crew, Crew) for crew in stage): + for crew, output in zip(stage, outputs): + usage_metrics[crew.name or str(crew.id)] = output.token_usage + current_input.update(output.to_dict()) + else: + raise ValueError(f"Unsupported stage type: {type(stage)}") + + def _build_pipeline_kickoff_results( + self, + all_stage_outputs: List[List[CrewOutput]], + traces: List[List[Union[str, Dict[str, Any]]]], + token_usage: Dict[str, UsageMetrics], + ) -> List[PipelineKickoffResult]: + """ + Builds the results of a pipeline run. + + Args: + all_stage_outputs (List[List[CrewOutput]]): All stage outputs. + traces (List[List[Union[str, Dict[str, Any]]]]): All traces. + token_usage (Dict[str, Any]): Token usage metrics. + + Returns: + List[PipelineKickoffResult]: The results of the pipeline run. + """ + formatted_traces = self._format_traces(traces) + formatted_crew_outputs = self._format_crew_outputs(all_stage_outputs) + + return [ + PipelineKickoffResult( + token_usage=token_usage, + trace=formatted_trace, + raw=crews_outputs[-1].raw, + pydantic=crews_outputs[-1].pydantic, + json_dict=crews_outputs[-1].json_dict, + crews_outputs=crews_outputs, + ) + for crews_outputs, formatted_trace in zip( + formatted_crew_outputs, formatted_traces + ) + ] + + def _format_traces( + self, traces: List[List[Union[str, Dict[str, Any]]]] + ) -> List[List[Trace]]: + """ + Formats the traces of a pipeline run. + + Args: + traces (List[List[Union[str, Dict[str, Any]]]]): The traces to format. + + Returns: + List[List[Trace]]: The formatted traces. + """ + formatted_traces: List[Trace] = self._format_single_trace(traces[:-1]) + return self._format_multiple_traces(formatted_traces, traces[-1]) + + def _format_single_trace( + self, traces: List[List[Union[str, Dict[str, Any]]]] + ) -> List[Trace]: + """ + Formats single traces. + + Args: + traces (List[List[Union[str, Dict[str, Any]]]]): The traces to format. + + Returns: + List[Trace]: The formatted single traces. + """ + formatted_traces: List[Trace] = [] + for trace in traces: + formatted_traces.append(trace[0] if len(trace) == 1 else trace) + return formatted_traces + + def _format_multiple_traces( + self, + formatted_traces: List[Trace], + final_trace: List[Union[str, Dict[str, Any]]], + ) -> List[List[Trace]]: + """ + Formats multiple traces. + + Args: + formatted_traces (List[Trace]): The formatted single traces. + final_trace (List[Union[str, Dict[str, Any]]]): The final trace to format. + + Returns: + List[List[Trace]]: The formatted multiple traces. + """ + traces_to_return: List[List[Trace]] = [] + if len(final_trace) == 1: + formatted_traces.append(final_trace[0]) + traces_to_return.append(formatted_traces) + else: + for trace in final_trace: + copied_traces = formatted_traces.copy() + copied_traces.append(trace) + traces_to_return.append(copied_traces) + return traces_to_return + + def _format_crew_outputs( + self, all_stage_outputs: List[List[CrewOutput]] + ) -> List[List[CrewOutput]]: + """ + Formats the outputs of all stages into a list of crew outputs. + + Args: + all_stage_outputs (List[List[CrewOutput]]): All stage outputs. + + Returns: + List[List[CrewOutput]]: Formatted crew outputs. 
+ """ + crew_outputs: List[CrewOutput] = [ + output + for stage_outputs in all_stage_outputs[:-1] + for output in stage_outputs + ] + return [crew_outputs + [output] for output in all_stage_outputs[-1]] + + def _copy_stages(self): + """Create a deep copy of the Pipeline's stages.""" + new_stages = [] + for stage in self.stages: + if isinstance(stage, list): + new_stages.append( + [ + crew.copy() if hasattr(crew, "copy") else copy.deepcopy(crew) + for crew in stage + ] + ) + elif hasattr(stage, "copy"): + new_stages.append(stage.copy()) + else: + new_stages.append(copy.deepcopy(stage)) + + return new_stages + + def __rshift__(self, other: PipelineStage) -> "Pipeline": + """ + Implements the >> operator to add another Stage (Crew or List[Crew]) to an existing Pipeline. + + Args: + other (Any): The stage to add. + + Returns: + Pipeline: A new pipeline with the added stage. + """ + if isinstance(other, (Crew, Router)) or ( + isinstance(other, list) and all(isinstance(item, Crew) for item in other) + ): + return type(self)(stages=self.stages + [other]) + else: + raise TypeError( + f"Unsupported operand type for >>: '{type(self).__name__}' and '{type(other).__name__}'" + ) diff --git a/src/crewai/pipeline/pipeline_kickoff_result.py b/src/crewai/pipeline/pipeline_kickoff_result.py new file mode 100644 index 0000000000..7bde238cd0 --- /dev/null +++ b/src/crewai/pipeline/pipeline_kickoff_result.py @@ -0,0 +1,61 @@ +import json +import uuid +from typing import Any, Dict, List, Optional, Union + +from pydantic import UUID4, BaseModel, Field + +from crewai.crews.crew_output import CrewOutput +from crewai.types.usage_metrics import UsageMetrics + + +class PipelineKickoffResult(BaseModel): + """Class that represents the result of a pipeline run.""" + + id: UUID4 = Field( + default_factory=uuid.uuid4, + frozen=True, + description="Unique identifier for the object, not set by user.", + ) + raw: str = Field(description="Raw output of the pipeline run", default="") + pydantic: Any = Field( + description="Pydantic output of the pipeline run", default=None + ) + json_dict: Union[Dict[str, Any], None] = Field( + description="JSON dict output of the pipeline run", default={} + ) + + token_usage: Dict[str, UsageMetrics] = Field( + description="Token usage for each crew in the run" + ) + trace: List[Any] = Field( + description="Trace of the journey of inputs through the run" + ) + crews_outputs: List[CrewOutput] = Field( + description="Output from each crew in the run", + default=[], + ) + + @property + def json(self) -> Optional[str]: + if self.crews_outputs[-1].tasks_output[-1].output_format != "json": + raise ValueError( + "No JSON output found in the final task of the final crew. Please make sure to set the output_json property in the final task in your crew." 
+ ) + + return json.dumps(self.json_dict) + + def to_dict(self) -> Dict[str, Any]: + """Convert the json_dict and pydantic outputs to a dictionary.""" + output_dict = {} + if self.json_dict: + output_dict.update(self.json_dict) + elif self.pydantic: + output_dict.update(self.pydantic.model_dump()) + return output_dict + + def __str__(self): + if self.pydantic: + return str(self.pydantic) + if self.json_dict: + return str(self.json_dict) + return self.raw diff --git a/src/crewai/pipeline/pipeline_output.py b/src/crewai/pipeline/pipeline_output.py new file mode 100644 index 0000000000..d9875b64af --- /dev/null +++ b/src/crewai/pipeline/pipeline_output.py @@ -0,0 +1,20 @@ +import uuid +from typing import List + +from pydantic import UUID4, BaseModel, Field + +from crewai.pipeline.pipeline_kickoff_result import PipelineKickoffResult + + +class PipelineOutput(BaseModel): + id: UUID4 = Field( + default_factory=uuid.uuid4, + frozen=True, + description="Unique identifier for the object, not set by user.", + ) + run_results: List[PipelineKickoffResult] = Field( + description="List of results for each run through the pipeline", default=[] + ) + + def add_run_result(self, result: PipelineKickoffResult): + self.run_results.append(result) diff --git a/src/crewai/project/__init__.py b/src/crewai/project/__init__.py index 0bae08a940..34759f465b 100644 --- a/src/crewai/project/__init__.py +++ b/src/crewai/project/__init__.py @@ -1,15 +1,17 @@ from .annotations import ( agent, + cache_handler, + callback, crew, - task, + llm, output_json, output_pydantic, + pipeline, + task, tool, - callback, - llm, - cache_handler, ) from .crew_base import CrewBase +from .pipeline_base import PipelineBase __all__ = [ "agent", @@ -20,6 +22,8 @@ "tool", "callback", "CrewBase", + "PipelineBase", "llm", "cache_handler", + "pipeline", ] diff --git a/src/crewai/project/annotations.py b/src/crewai/project/annotations.py index f6dba56a33..6b2c64fb39 100644 --- a/src/crewai/project/annotations.py +++ b/src/crewai/project/annotations.py @@ -1,14 +1,4 @@ -def memoize(func): - cache = {} - - def memoized_func(*args, **kwargs): - key = (args, tuple(kwargs.items())) - if key not in cache: - cache[key] = func(*args, **kwargs) - return cache[key] - - memoized_func.__dict__.update(func.__dict__) - return memoized_func +from crewai.project.utils import memoize def task(func): @@ -61,6 +51,21 @@ def cache_handler(func): return memoize(func) +def stage(func): + func.is_stage = True + return memoize(func) + + +def router(func): + func.is_router = True + return memoize(func) + + +def pipeline(func): + func.is_pipeline = True + return memoize(func) + + +def crew(func): + def wrapper(self, *args, **kwargs): + instantiated_tasks = [] diff --git a/src/crewai/project/crew_base.py b/src/crewai/project/crew_base.py index 2f33c06afc..460d4381c1 100644 --- a/src/crewai/project/crew_base.py +++ b/src/crewai/project/crew_base.py @@ -24,6 +24,7 @@ class WrappedClass(cls): original_agents_config_path = getattr( cls, "agents_config", "config/agents.yaml" ) + original_tasks_config_path = getattr(cls, "tasks_config", "config/tasks.yaml") def __init__(self, *args, **kwargs): @@ -37,9 +38,11 @@ def __init__(self, *args, **kwargs): self.agents_config = self.load_yaml( os.path.join(self.base_directory, self.original_agents_config_path) ) + self.tasks_config = self.load_yaml( os.path.join(self.base_directory, self.original_tasks_config_path) ) + self.map_all_agent_variables() self.map_all_task_variables() diff --git a/src/crewai/project/pipeline_base.py 
b/src/crewai/project/pipeline_base.py new file mode 100644 index 0000000000..fd109be3b5 --- /dev/null +++ b/src/crewai/project/pipeline_base.py @@ -0,0 +1,58 @@ +from typing import Callable, Dict + +from pydantic import ConfigDict + +from crewai.crew import Crew +from crewai.pipeline.pipeline import Pipeline +from crewai.routers.router import Router + + +# TODO: Could potentially remove. Need to check with @joao and @gui if this is needed for CrewAI+ +def PipelineBase(cls): + class WrappedClass(cls): + model_config = ConfigDict(arbitrary_types_allowed=True) + is_pipeline_class: bool = True + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self.stages = [] + self._map_pipeline_components() + + def _get_all_functions(self): + return { + name: getattr(self, name) + for name in dir(self) + if callable(getattr(self, name)) + } + + def _filter_functions( + self, functions: Dict[str, Callable], attribute: str + ) -> Dict[str, Callable]: + return { + name: func + for name, func in functions.items() + if hasattr(func, attribute) + } + + def _map_pipeline_components(self): + all_functions = self._get_all_functions() + crew_functions = self._filter_functions(all_functions, "is_crew") + router_functions = self._filter_functions(all_functions, "is_router") + + for stage_attr in dir(self): + stage = getattr(self, stage_attr) + if isinstance(stage, (Crew, Router)): + self.stages.append(stage) + elif callable(stage) and hasattr(stage, "is_crew"): + self.stages.append(crew_functions[stage_attr]()) + elif callable(stage) and hasattr(stage, "is_router"): + self.stages.append(router_functions[stage_attr]()) + elif isinstance(stage, list) and all( + isinstance(item, Crew) for item in stage + ): + self.stages.append(stage) + + def build_pipeline(self) -> Pipeline: + return Pipeline(stages=self.stages) + + return WrappedClass diff --git a/src/crewai/project/utils.py b/src/crewai/project/utils.py new file mode 100644 index 0000000000..be3f757d91 --- /dev/null +++ b/src/crewai/project/utils.py @@ -0,0 +1,11 @@ +def memoize(func): + cache = {} + + def memoized_func(*args, **kwargs): + key = (args, tuple(kwargs.items())) + if key not in cache: + cache[key] = func(*args, **kwargs) + return cache[key] + + memoized_func.__dict__.update(func.__dict__) + return memoized_func diff --git a/src/crewai/routers/__init__.py b/src/crewai/routers/__init__.py new file mode 100644 index 0000000000..b21d76bd27 --- /dev/null +++ b/src/crewai/routers/__init__.py @@ -0,0 +1 @@ +from crewai.routers.router import Router diff --git a/src/crewai/routers/router.py b/src/crewai/routers/router.py new file mode 100644 index 0000000000..76549565be --- /dev/null +++ b/src/crewai/routers/router.py @@ -0,0 +1,90 @@ +from copy import deepcopy +from typing import Any, Callable, Dict, Generic, Tuple, TypeVar + +from pydantic import BaseModel, Field, PrivateAttr + +T = TypeVar("T", bound=Dict[str, Any]) +U = TypeVar("U") + + +class Route(Generic[T, U]): + condition: Callable[[T], bool] + pipeline: U + + def __init__(self, condition: Callable[[T], bool], pipeline: U): + self.condition = condition + self.pipeline = pipeline + + +class Router(BaseModel, Generic[T, U]): + routes: Dict[str, Route[T, U]] = Field( + default_factory=dict, + description="Dictionary of route names to Route objects, each pairing a condition with a pipeline", + ) + default: U = Field(..., description="Default pipeline if no conditions are met") + _route_types: Dict[str, type] = PrivateAttr(default_factory=dict) + + model_config = {"arbitrary_types_allowed": True}
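+ + # Usage sketch (illustrative; assumes pipeline_a and pipeline_b are + # Pipeline instances, which provide the copy() method Router requires): + # router = Router( + # routes={"urgent": Route(condition=lambda d: d.get("urgent", False), pipeline=pipeline_a)}, + # default=pipeline_b, + # ) + # next_pipeline, route_taken = router.route({"urgent": True}) + + def 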
__init__(self, routes: Dict[str, Route[T, U]], default: U, **data): + super().__init__(routes=routes, default=default, **data) + self._check_copyable(default) + for name, route in routes.items(): + self._check_copyable(route.pipeline) + self._route_types[name] = type(route.pipeline) + + @staticmethod + def _check_copyable(obj): + if not hasattr(obj, "copy") or not callable(getattr(obj, "copy")): + raise ValueError(f"Object of type {type(obj)} must have a 'copy' method") + + def add_route( + self, + name: str, + condition: Callable[[T], bool], + pipeline: U, + ) -> "Router[T, U]": + """ + Add a named route with its condition and corresponding pipeline to the router. + + Args: + name: A unique name for this route + condition: A function that takes a dictionary input and returns a boolean + pipeline: The Pipeline to execute if the condition is met + + Returns: + The Router instance for method chaining + """ + self._check_copyable(pipeline) + self.routes[name] = Route(condition=condition, pipeline=pipeline) + self._route_types[name] = type(pipeline) + return self + + def route(self, input_data: T) -> Tuple[U, str]: + """ + Evaluate the input against the conditions and return the appropriate pipeline. + + Args: + input_data: The input dictionary to be evaluated + + Returns: + A tuple containing the next Pipeline to be executed and the name of the route taken + """ + for name, route in self.routes.items(): + if route.condition(input_data): + return route.pipeline, name + + return self.default, "default" + + def copy(self) -> "Router[T, U]": + """Create a deep copy of the Router.""" + new_routes = { + name: Route( + condition=deepcopy(route.condition), + pipeline=route.pipeline.copy(), # type: ignore + ) + for name, route in self.routes.items() + } + new_default = self.default.copy() # type: ignore + + return Router(routes=new_routes, default=new_default) diff --git a/src/crewai/task.py b/src/crewai/task.py index 3e693a4982..8efaee5fc7 100644 --- a/src/crewai/task.py +++ b/src/crewai/task.py @@ -47,6 +47,7 @@ class Config: tools_errors: int = 0 delegations: int = 0 i18n: I18N = I18N() + name: Optional[str] = Field(default=None) prompt_context: Optional[str] = None description: str = Field(description="Description of the actual task.") expected_output: str = Field( diff --git a/src/crewai/types/__init__.py b/src/crewai/types/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/src/crewai/types/usage_metrics.py b/src/crewai/types/usage_metrics.py new file mode 100644 index 0000000000..a5cee6a0f8 --- /dev/null +++ b/src/crewai/types/usage_metrics.py @@ -0,0 +1,36 @@ +from pydantic import BaseModel, Field + + +class UsageMetrics(BaseModel): + """ + Model to track usage metrics for the crew's execution. + + Attributes: + total_tokens: Total number of tokens used. + prompt_tokens: Number of tokens used in prompts. + completion_tokens: Number of tokens used in completions. + successful_requests: Number of successful requests made. + """ + + total_tokens: int = Field(default=0, description="Total number of tokens used.") + prompt_tokens: int = Field( + default=0, description="Number of tokens used in prompts." + ) + completion_tokens: int = Field( + default=0, description="Number of tokens used in completions." + ) + successful_requests: int = Field( + default=0, description="Number of successful requests made." + ) + + def add_usage_metrics(self, usage_metrics: "UsageMetrics"): + """ + Add the usage metrics from another UsageMetrics object. 
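+ Fields are summed pairwise; for example (illustrative), adding a metrics object with prompt_tokens=5 into one with prompt_tokens=10 leaves prompt_tokens=15.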
+ + Args: + usage_metrics (UsageMetrics): The usage metrics to add. + """ + self.total_tokens += usage_metrics.total_tokens + self.prompt_tokens += usage_metrics.prompt_tokens + self.completion_tokens += usage_metrics.completion_tokens + self.successful_requests += usage_metrics.successful_requests diff --git a/tests/cassettes/test_crew_async_kickoff.yaml b/tests/cassettes/test_crew_async_kickoff.yaml index 1502253fa1..129d5c4e03 100644 --- a/tests/cassettes/test_crew_async_kickoff.yaml +++ b/tests/cassettes/test_crew_async_kickoff.yaml @@ -1,17 +1,18 @@ interactions: - request: - body: '{"messages": [{"content": "You are dog Researcher. You have a lot of experience - with dog.\nYour personal goal is: Express hot takes on dog.To give my best complete + body: '{"messages": [{"content": "You are cat Researcher. You have a lot of experience + with cat.\nYour personal goal is: Express hot takes on cat.To give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent - Task: Give me an analysis around dog.\n\nThis is the expect criteria for your - final answer: 1 bullet point about dog that''s under 15 words. \n you MUST return + Task: Give me an analysis around cat.\n\nThis is the expect criteria for your + final answer: 1 bullet point about cat that''s under 15 words. \n you MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, - your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", - "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}' + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o-mini", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' headers: accept: - application/json @@ -20,13 +21,13 @@ interactions: connection: - keep-alive content-length: - - '951' + - '975' content-type: - application/json host: - api.openai.com user-agent: - - OpenAI/Python 1.34.0 + - OpenAI/Python 1.37.0 x-stainless-arch: - arm64 x-stainless-async: @@ -36,109 +37,494 @@ interactions: x-stainless-os: - MacOS x-stainless-package-version: - - 1.34.0 + - 1.37.0 x-stainless-runtime: - CPython x-stainless-runtime-version: - - 3.12.3 + - 3.11.5 method: POST uri: https://api.openai.com/v1/chat/completions response: body: - string: 'data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]} + string: 'data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" \n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + Cats"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + are"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + 
independent"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + yet"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + affectionate"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + companions"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + offering"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + joy"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + and"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + comfort"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpniiKsDwdvCCf3F2PWtEjR3VsS","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b0925026c6fcb7d-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 16:32:51 GMT + Server: + - 
cloudflare + Set-Cookie: + - __cf_bm=fHT8rr7EEB8P5onAHBmhpPjAuZcosz_7GhUZvfTp04Y-1723221171-1.0.1.1-imzkvdhuU8l86moq2Uxvb3SAkLdoXkR3_CbPT3tc4KdSUSX_TMfIFQDJ77GgIgH9jY49FJ6evReGbzQr5pGMAQ; + path=/; expires=Fri, 09-Aug-24 17:02:51 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=nXi1gEz0Wj3dY0NS18uoD8wu1jBaO_F4H0pP1Pxn.Ps-1723221171794-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '118' + openai-version: + - '2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '30000' + x-ratelimit-limit-tokens: + - '150000000' + x-ratelimit-remaining-requests: + - '29999' + x-ratelimit-remaining-tokens: + - '149999783' + x-ratelimit-reset-requests: + - 2ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - req_90c40884caf804069aa9e55413fb550e + status: + code: 200 + message: OK +- request: + body: '{"messages": [{"content": "You are apple Researcher. You have a lot of + experience with apple.\nYour personal goal is: Express hot takes on apple.To + give my best complete final answer to the task use the exact following format:\n\nThought: + I now can give a great answer\nFinal Answer: my best complete final answer to + the task.\nYour final answer must be the great and the most complete as possible, + it must be outcome described.\n\nI MUST use these formats, my job depends on + it!\nCurrent Task: Give me an analysis around apple.\n\nThis is the expect criteria + for your final answer: 1 bullet point about apple that''s under 15 words. \n + you MUST return the actual complete content as the final answer, not a summary.\n\nBegin! 
+ This is VERY important to you, use the tools available and give your best Final + Answer, your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o-mini", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '985' + content-type: + - application/json + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" \n"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + Apple"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + consistently"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + innov"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":"ates"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + setting"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + trends"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + in"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + technology"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + 
and"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + design"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpncpaFkR8g7qsCI2jyczDlFOnI","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b0925025b5f7c04-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 16:32:51 GMT + Server: + - cloudflare + Set-Cookie: + - __cf_bm=hHWiVPFGC4hMeIUI2oy3Kn9RDuj_jW4aRPVMT__zBPI-1723221171-1.0.1.1-_LrgvD7jWokLsDNBRhUl3OH7sKm_MmXGMC.epF_Ul2pX0bY2KNUBksNkIPZuPr2_ROgDXphRq8oRr3YkpSynuQ; + path=/; expires=Fri, 09-Aug-24 17:02:51 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=dxQtUwoJzPi_hj3Tq5W69qSd2uWM3SH.1wdqUJyTGEs-1723221171886-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '100' + openai-version: + - '2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '30000' + x-ratelimit-limit-tokens: + - '150000000' + x-ratelimit-remaining-requests: + - '29999' + x-ratelimit-remaining-tokens: + - '149999780' + x-ratelimit-reset-requests: + - 2ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - req_d7b5bd2cb7fcb695a09b8fe33aaa4425 + status: + code: 200 + message: OK +- request: + body: '{"messages": [{"content": "You are dog Researcher. You have a lot of experience + with dog.\nYour personal goal is: Express hot takes on dog.To give my best complete + final answer to the task use the exact following format:\n\nThought: I now can + give a great answer\nFinal Answer: my best complete final answer to the task.\nYour + final answer must be the great and the most complete as possible, it must be + outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent + Task: Give me an analysis around dog.\n\nThis is the expect criteria for your + final answer: 1 bullet point about dog that''s under 15 words. \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! 
This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o-mini", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '975' + content-type: + - application/json + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - I"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" now"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" give"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: 
{"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" a"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" great"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" answer"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" \n"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" Answer"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: 
{"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" Dogs"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" are"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - incredibly"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + loyal"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - loyal"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + companions"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - and"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + that"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - provide"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + significantly"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - unmatched"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + enhance"},"logprobs":null,"finish_reason":null}]} - data: 
{"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - companionship"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + human"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - to"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + well"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - humans"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"-being"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + and"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + happiness"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnkUMq10919R5FfHnji9597inv","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} data: [DONE] @@ -149,47 +535,49 @@ interactions: CF-Cache-Status: - DYNAMIC CF-RAY: - - 89d0fa4e7abf53db-ATL + - 8b0925026dcb311f-LAX Connection: - keep-alive Content-Type: - text/event-stream; charset=utf-8 Date: - - Tue, 02 Jul 2024 19:17:45 GMT + - Fri, 09 Aug 2024 16:32:51 GMT Server: - cloudflare Set-Cookie: - - 
__cf_bm=6Xl2nvdsXT4uSfQ3C1ZK.LWKGYekVs5ErrLDZOdI.50-1719947865-1.0.1.1-6RQoTCznxe7H868MoxghRegIZaElbG_bN_jbs94hmnsnuR1P9bptoj8o2DbOSvj48ubewyvy8L16mOZHlMLw_A; - path=/; expires=Tue, 02-Jul-24 19:47:45 GMT; domain=.api.openai.com; HttpOnly; + - __cf_bm=XL_lOuQeHTNeez6APgBjp6Li_xNxRm15T.iR_FCGd4s-1723221171-1.0.1.1-YmSu37yT.YJRthvPOM3WH9sB177XQUUCgwD9FjshwcXRxBX2QHm08re_qgpuWPqKXEvmp8_il3bDszO7db.c.Q; + path=/; expires=Fri, 09-Aug-24 17:02:51 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None - - _cfuvid=kPTMOkGHQp0ytgVUrm3jFNiB9I.DDI2ONPRTr6IMTeo-1719947865623-0.0.1.1-604800000; + - _cfuvid=nlB9yPZOKdXyXyrbJTYkcC1rY3U8aegR0nD6wmMd7Ww-1723221171929-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None Transfer-Encoding: - chunked + X-Content-Type-Options: + - nosniff alt-svc: - h3=":443"; ma=86400 openai-organization: - crewai-iuxna1 openai-processing-ms: - - '102' + - '120' openai-version: - '2020-10-01' strict-transport-security: - - max-age=31536000; includeSubDomains + - max-age=15552000; includeSubDomains; preload x-ratelimit-limit-requests: - - '10000' + - '30000' x-ratelimit-limit-tokens: - - '16000000' + - '150000000' x-ratelimit-remaining-requests: - - '9997' + - '29998' x-ratelimit-remaining-tokens: - - '15999783' + - '149999782' x-ratelimit-reset-requests: - - 14ms + - 3ms x-ratelimit-reset-tokens: - 0s x-request-id: - - req_2c5219e228ce79f0131c497230904013 + - req_848656e5567773c3f6680cf49291c929 status: code: 200 message: OK @@ -204,8 +592,9 @@ interactions: for your final answer: 1 bullet point about apple that''s under 15 words. \n you MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final - Answer, your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", - "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}' + Answer, your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o-mini", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' headers: accept: - application/json @@ -214,13 +603,13 @@ interactions: connection: - keep-alive content-length: - - '961' + - '985' content-type: - application/json host: - api.openai.com user-agent: - - OpenAI/Python 1.34.0 + - OpenAI/Python 1.37.0 x-stainless-arch: - arm64 x-stainless-async: @@ -230,119 +619,306 @@ interactions: x-stainless-os: - MacOS x-stainless-package-version: - - 1.34.0 + - 1.37.0 x-stainless-runtime: - CPython x-stainless-runtime-version: - - 3.12.3 + - 3.11.5 method: POST uri: https://api.openai.com/v1/chat/completions response: body: - string: 'data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]} + string: 'data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]} + 
data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" now"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" give"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" a"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" great"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" answer"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]} + data: 
{"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" \n"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" Answer"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" Apple"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" - revolution"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + consistently"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + leads"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + in"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + innovation"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]} + + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + setting"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"izes"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + trends"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" + + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + in"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" technology"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" - with"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + and"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" - sleek"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":" + design"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" - designs"},"logprobs":null,"finish_reason":null}]} + data: 
{"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpn3L1Tn4WQ63OmzVNJi5Mu9w2v","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_507c9469a1","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" - seamless"},"logprobs":null,"finish_reason":null}]} + data: [DONE] - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" - integration"},"logprobs":null,"finish_reason":null}]} + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b0925026cc62eae-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 16:32:51 GMT + Server: + - cloudflare + Set-Cookie: + - __cf_bm=OmCSrIckdJaIwZg44plOuyxamF9neWCRso5EeL8kUvk-1723221171-1.0.1.1-y5fiEvxr_Fh_7XMl3OkoMlm2iyX2zkuNwlMR8diFIMTCJxaXBVs0JcIHzQVWmNpfbDE48YODdR_VJ8.Mt36JNg; + path=/; expires=Fri, 09-Aug-24 17:02:51 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=4B5FQw4VwLS5LMi2AL6QH6AxjSYbq1XQkW0hlBnEx98-1723221171938-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '125' + openai-version: + - '2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '30000' + x-ratelimit-limit-tokens: + - '150000000' + x-ratelimit-remaining-requests: + - '29999' + x-ratelimit-remaining-tokens: + - '149999780' + x-ratelimit-reset-requests: + - 2ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - req_7883863a16524fe7d94dcfe289b92306 + status: + code: 200 + message: OK +- request: + body: '{"messages": [{"content": "You are dog Researcher. You have a lot of experience + with dog.\nYour personal goal is: Express hot takes on dog.To give my best complete + final answer to the task use the exact following format:\n\nThought: I now can + give a great answer\nFinal Answer: my best complete final answer to the task.\nYour + final answer must be the great and the most complete as possible, it must be + outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent + Task: Give me an analysis around dog.\n\nThis is the expect criteria for your + final answer: 1 bullet point about dog that''s under 15 words. \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! 
This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o-mini", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '975' + content-type: + - application/json + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" \n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + Dogs"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + are"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + our"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + loyal"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + companions"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + bringing"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + joy"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" and"},"logprobs":null,"finish_reason":null}]} - 
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" - innovative"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + unconditional"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" - user"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + love"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" - experiences"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + daily"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + data: {"id":"chatcmpl-9uMpnj5NaPLieGetrADhtjoB0QBPd","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} data: [DONE] @@ -353,47 +929,49 @@ interactions: CF-Cache-Status: - DYNAMIC CF-RAY: - - 89d0fa4e7ca907e6-ATL + - 8b0925026d8a1018-LAX Connection: - keep-alive Content-Type: - text/event-stream; charset=utf-8 Date: - - Tue, 02 Jul 2024 19:17:45 GMT + - Fri, 09 Aug 2024 16:32:51 GMT Server: - cloudflare Set-Cookie: - - __cf_bm=wf2ozMjr46sG0EhuZjpiDNagwTxC05ct3Hn7Y9Rs5AI-1719947865-1.0.1.1-uckxTTr7Yfe6sv4ZznqqrGTEz9E3_Cpp7OAWBIEeNz1Smdjwijw8YV5oYPe_6W4DrEtwVzRDxaqIHlWP55O0QA; - path=/; expires=Tue, 02-Jul-24 19:47:45 GMT; domain=.api.openai.com; HttpOnly; + - __cf_bm=ka9TSwoQ9j54cBsy8JifkjhPqCrh4x3UoRmuxnA3Iag-1723221171-1.0.1.1-tEXs42lPCG5_VVABTQT1Cr1X4I4FeFBWjaPydTr_A.rl10SgqaP9DikAKxBzFcEJjL_alG8ExjWguER1tpNzQw; + path=/; expires=Fri, 09-Aug-24 17:02:51 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None - - 
_cfuvid=F9pWw4TeoPa8puOm5RN9Gp2oY0lRoN53ChZ1qFYx1S8-1719947865726-0.0.1.1-604800000; + - _cfuvid=awLkbnttI0sYW_7T7Rh67jpmJYmSAvqYd29wrNnp1M4-1723221171983-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None Transfer-Encoding: - chunked + X-Content-Type-Options: + - nosniff alt-svc: - h3=":443"; ma=86400 openai-organization: - crewai-iuxna1 openai-processing-ms: - - '168' + - '80' openai-version: - '2020-10-01' strict-transport-security: - - max-age=31536000; includeSubDomains + - max-age=15552000; includeSubDomains; preload x-ratelimit-limit-requests: - - '10000' + - '30000' x-ratelimit-limit-tokens: - - '16000000' + - '150000000' x-ratelimit-remaining-requests: - - '9998' + - '29999' x-ratelimit-remaining-tokens: - - '15999780' + - '149999783' x-ratelimit-reset-requests: - - 10ms + - 2ms x-ratelimit-reset-tokens: - 0s x-request-id: - - req_e6dfeda5935eae030bcc2da526234635 + - req_94bc790b98bc3a10a7eb6465f9b48191 status: code: 200 message: OK @@ -408,8 +986,9 @@ interactions: final answer: 1 bullet point about cat that''s under 15 words. \n you MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, - your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", - "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}' + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o-mini", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' headers: accept: - application/json @@ -418,13 +997,13 @@ interactions: connection: - keep-alive content-length: - - '951' + - '975' content-type: - application/json host: - api.openai.com user-agent: - - OpenAI/Python 1.34.0 + - OpenAI/Python 1.37.0 x-stainless-arch: - arm64 x-stainless-async: @@ -434,106 +1013,105 @@ interactions: x-stainless-os: - MacOS x-stainless-package-version: - - 1.34.0 + - 1.37.0 x-stainless-runtime: - CPython x-stainless-runtime-version: - - 3.12.3 + - 3.11.5 method: POST uri: https://api.openai.com/v1/chat/completions response: body: - string: 'data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]} - - - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + string: 'data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + data: 
{"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - I"},"logprobs":null,"finish_reason":null}]} - - - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" now"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" give"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" a"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" great"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" answer"},"logprobs":null,"finish_reason":null}]} - data: 
{"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" \n"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" Answer"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" Cats"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" are"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - master"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + 
independent"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"ful"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + creatures"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - hunters"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + that"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - and"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + require"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - brilliant"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + minimal"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" - problem"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + training"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + and"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"-sol"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":" + 
social"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"vers"},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"ization"},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]} - data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + data: {"id":"chatcmpl-9uMpnaX1NoG0lPFhDnxjg1nj47cbN","object":"chat.completion.chunk","created":1723221171,"model":"gpt-4o-mini-2024-07-18","system_fingerprint":"fp_48196bc67a","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} data: [DONE] @@ -544,47 +1122,49 @@ interactions: CF-Cache-Status: - DYNAMIC CF-RAY: - - 89d0fa4e7ae912d7-ATL + - 8b092502598c7cf8-LAX Connection: - keep-alive Content-Type: - text/event-stream; charset=utf-8 Date: - - Tue, 02 Jul 2024 19:17:45 GMT + - Fri, 09 Aug 2024 16:32:52 GMT Server: - cloudflare Set-Cookie: - - __cf_bm=y7JNZ8WEp.q5pMXLi79ajfcI.F6MfE0GeYLw34Apkf0-1719947865-1.0.1.1-QKklGeYuOnsQROgqMs42XwqKNvW.mPrmcbtaxMnUg3eSgI7TRnRq4qPuSan0ynDt4Hd9NMuls2FR.Caa1MVr9Q; - path=/; expires=Tue, 02-Jul-24 19:47:45 GMT; domain=.api.openai.com; HttpOnly; + - __cf_bm=Pil4OfqpNZSXhwNQH0Od.JNHSLhcuGGnqvHCk2Hn9Xw-1723221172-1.0.1.1-YR2vfyWH1tqU657yG_8jco7809xPDaxws_mdh7qyyLwamlSA1gxihpOtA6jFsKlk_Q8I_Jx1E78IJ5BJNwBhHA; + path=/; expires=Fri, 09-Aug-24 17:02:52 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None - - _cfuvid=FVQoSgcvVyiB_o43X6y5MGYgzGojmsQqS.nPObW3JYU-1719947865679-0.0.1.1-604800000; + - _cfuvid=797z9sSFf6OuWq9RjuprvyKnyCmayK_AtpteVHpWIV4-1723221172108-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None Transfer-Encoding: - chunked + X-Content-Type-Options: + - nosniff alt-svc: - h3=":443"; ma=86400 openai-organization: - crewai-iuxna1 openai-processing-ms: - - '132' + - '334' openai-version: - '2020-10-01' strict-transport-security: - - max-age=31536000; includeSubDomains + - max-age=15552000; includeSubDomains; preload x-ratelimit-limit-requests: - - '10000' + - '30000' x-ratelimit-limit-tokens: - - '16000000' + - '150000000' x-ratelimit-remaining-requests: - - '9999' + - '29999' x-ratelimit-remaining-tokens: - - '15999783' + - '149999782' x-ratelimit-reset-requests: - - 6ms + - 2ms x-ratelimit-reset-tokens: - 0s x-request-id: - - req_a06bde4044d3ee75edf08f333139679c + - req_bd93956c6bb620930e4c3092eaddc43d status: code: 200 message: OK diff --git a/tests/crew_test.py b/tests/crew_test.py index b9e9a1ca6b..0c49d51f2e 100644 --- a/tests/crew_test.py +++ 
b/tests/crew_test.py @@ -18,6 +18,7 @@ from crewai.tasks.conditional_task import ConditionalTask from crewai.tasks.output_format import OutputFormat from crewai.tasks.task_output import TaskOutput +from crewai.types.usage_metrics import UsageMetrics from crewai.utilities import Logger, RPMController from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler @@ -565,14 +566,10 @@ def test_crew_kickoff_usage_metrics(): assert len(results) == len(inputs) for result in results: # Assert that all required keys are in usage_metrics and their values are not None - for key in [ - "total_tokens", - "prompt_tokens", - "completion_tokens", - "successful_requests", - ]: - assert key in result.token_usage - assert result.token_usage[key] > 0 + assert result.token_usage.total_tokens > 0 + assert result.token_usage.prompt_tokens > 0 + assert result.token_usage.completion_tokens > 0 + assert result.token_usage.successful_requests > 0 def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set(): @@ -711,7 +708,7 @@ async def test_crew_async_kickoff(): ] agent = Agent( - role="{topic} Researcher", + role="mock agent", goal="Express hot takes on {topic}.", backstory="You have a lot of experience with {topic}.", ) @@ -723,19 +720,30 @@ async def test_crew_async_kickoff(): ) crew = Crew(agents=[agent], tasks=[task]) - results = await crew.kickoff_for_each_async(inputs=inputs) + mock_task_output = ( + CrewOutput( + raw="Test output from Crew 1", + tasks_output=[], + token_usage=UsageMetrics( + total_tokens=100, + prompt_tokens=10, + completion_tokens=90, + successful_requests=1, + ), + json_dict={"output": "crew1"}, + pydantic=None, + ), + ) + with patch.object(Crew, "kickoff_async", return_value=mock_task_output): + results = await crew.kickoff_for_each_async(inputs=inputs) - assert len(results) == len(inputs) - for result in results: - # Assert that all required keys are in usage_metrics and their values are not None - for key in [ - "total_tokens", - "prompt_tokens", - "completion_tokens", - "successful_requests", - ]: - assert key in result.token_usage - assert result.token_usage[key] > 0 + assert len(results) == len(inputs) + for result in results: + # Assert that the usage metrics are populated with positive values + assert result[0].token_usage.total_tokens > 0 # type: ignore + assert result[0].token_usage.prompt_tokens > 0 # type: ignore + assert result[0].token_usage.completion_tokens > 0 # type: ignore + assert result[0].token_usage.successful_requests > 0 # type: ignore @@ -1283,12 +1291,12 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process(): print(crew.usage_metrics) - assert crew.usage_metrics == { - "total_tokens": 219, - "prompt_tokens": 201, - "completion_tokens": 18, - "successful_requests": 1, - } + assert crew.usage_metrics == UsageMetrics( + total_tokens=219, + prompt_tokens=201, + completion_tokens=18, + successful_requests=1, + ) @pytest.mark.vcr(filter_headers=["authorization"]) diff --git a/tests/pipeline/__init__.py b/tests/pipeline/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/tests/pipeline/cassettes/test_pipeline_process_streams_single_input_pydantic_output.yaml b/tests/pipeline/cassettes/test_pipeline_process_streams_single_input_pydantic_output.yaml new file mode 100644 index 0000000000..e40d7f058e --- /dev/null +++ b/tests/pipeline/cassettes/test_pipeline_process_streams_single_input_pydantic_output.yaml @@ -0,0 +1,163 @@ +interactions: +- 
request: + body: '{"messages": [{"content": "You are Mock Role. Mock Backstory\nYour personal + goal is: Mock GoalTo give my best complete final answer to the task use the + exact following format:\n\nThought: I now can give a great answer\nFinal Answer: + my best complete final answer to the task.\nYour final answer must be the great + and the most complete as possible, it must be outcome described.\n\nI MUST use + these formats, my job depends on it!\nCurrent Task: Return: Test output\n\nThis + is the expect criteria for your final answer: Test output \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '877' + content-type: + - application/json + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + I"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + Test"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + output"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uKW9xFsa6Ib8K87cY8De4agMoe8A","object":"chat.completion.chunk","created":1723212265,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b084b8fcd7f2aad-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 14:04:25 GMT + Server: + - cloudflare + Set-Cookie: + - __cf_bm=oxCbFZSGsVbRMNF7Ropntit_j6RkFrrYepSheJNSdR4-1723212265-1.0.1.1-BWjFhFaF3HTur21PTi4rF4nxQaOgeq_Mf9WOgMy7pqvFZZf9B.Ke_es1wk5qMDpnPzWYXyqOx2.LDx.6xwXGHg; + path=/; expires=Fri, 09-Aug-24 14:34:25 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=bQz8n1LEJhVzFyHMDG.NgJqwk0MJYiPxkg2WKd9hIik-1723212265235-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '79' + openai-version: + - 
'2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '10000' + x-ratelimit-limit-tokens: + - '30000000' + x-ratelimit-remaining-requests: + - '9999' + x-ratelimit-remaining-tokens: + - '29999806' + x-ratelimit-reset-requests: + - 6ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - req_556426f83d0ac63c40f2c70967c53378 + status: + code: 200 + message: OK +version: 1 diff --git a/tests/pipeline/cassettes/test_pipeline_with_multiple_routers.yaml b/tests/pipeline/cassettes/test_pipeline_with_multiple_routers.yaml new file mode 100644 index 0000000000..6e5a5f5c08 --- /dev/null +++ b/tests/pipeline/cassettes/test_pipeline_with_multiple_routers.yaml @@ -0,0 +1,479 @@ +interactions: +- request: + body: '{"messages": [{"content": "You are Test Role. Test Backstory\nYour personal + goal is: Test GoalTo give my best complete final answer to the task use the + exact following format:\n\nThought: I now can give a great answer\nFinal Answer: + my best complete final answer to the task.\nYour final answer must be the great + and the most complete as possible, it must be outcome described.\n\nI MUST use + these formats, my job depends on it!\nCurrent Task: Return: Test output\n\nThis + is the expect criteria for your final answer: Test output \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '877' + content-type: + - application/json + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + I"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Test"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + output"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwTfhUMqHkuIXfpNrPbBofqX25q","object":"chat.completion.chunk","created":1723217741,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 
8b08d1456dfb08f2-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 15:35:41 GMT + Server: + - cloudflare + Set-Cookie: + - __cf_bm=VXBO9XCMNYUdWJMLzmhZVwYp2qBnn27YLCV.iO5jdSQ-1723217741-1.0.1.1-WncSUA42bMTHs6l3gv46MmHUnPqrizRLmh23xZWc6q8udphMGhUhkXwHxRrSf4xCMCXXXvO6JC826sh6mNnehA; + path=/; expires=Fri, 09-Aug-24 16:05:41 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=bd8N16RsuXKEtaAf_S1XEK.0LZqbsndFJNK2Lvr7nYs-1723217741960-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '87' + openai-version: + - '2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '10000' + x-ratelimit-limit-tokens: + - '30000000' + x-ratelimit-remaining-requests: + - '9999' + x-ratelimit-remaining-tokens: + - '29999806' + x-ratelimit-reset-requests: + - 6ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - req_74f5a4e0520f918cfd2c8d8d7f50cb7b + status: + code: 200 + message: OK +- request: + body: '{"messages": [{"content": "You are Test Role. Test Backstory\nYour personal + goal is: Test GoalTo give my best complete final answer to the task use the + exact following format:\n\nThought: I now can give a great answer\nFinal Answer: + my best complete final answer to the task.\nYour final answer must be the great + and the most complete as possible, it must be outcome described.\n\nI MUST use + these formats, my job depends on it!\nCurrent Task: Return: Test output\n\nThis + is the expect criteria for your final answer: Test output \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! 
This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '877' + content-type: + - application/json + cookie: + - __cf_bm=VXBO9XCMNYUdWJMLzmhZVwYp2qBnn27YLCV.iO5jdSQ-1723217741-1.0.1.1-WncSUA42bMTHs6l3gv46MmHUnPqrizRLmh23xZWc6q8udphMGhUhkXwHxRrSf4xCMCXXXvO6JC826sh6mNnehA; + _cfuvid=bd8N16RsuXKEtaAf_S1XEK.0LZqbsndFJNK2Lvr7nYs-1723217741960-0.0.1.1-604800000 + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + I"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + Test"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{"content":" + output"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUeI2Ax7m14tF5viOieTyMyyoT","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_c9aa9c0491","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b08d1496c1308f2-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 15:35:42 GMT + Server: + - cloudflare + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '123' + openai-version: + - '2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '10000' + x-ratelimit-limit-tokens: + - '30000000' + x-ratelimit-remaining-requests: + - '9999' + x-ratelimit-remaining-tokens: + - '29999806' + x-ratelimit-reset-requests: + - 6ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - req_7602c65d59ddaaae4a6cb92952636b53 + status: + code: 200 + message: OK +- request: + body: '{"messages": [{"content": "You are Test Role. 
Test Backstory\nYour personal + goal is: Test GoalTo give my best complete final answer to the task use the + exact following format:\n\nThought: I now can give a great answer\nFinal Answer: + my best complete final answer to the task.\nYour final answer must be the great + and the most complete as possible, it must be outcome described.\n\nI MUST use + these formats, my job depends on it!\nCurrent Task: Return: Test output\n\nThis + is the expect criteria for your final answer: Test output \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '877' + content-type: + - application/json + cookie: + - __cf_bm=VXBO9XCMNYUdWJMLzmhZVwYp2qBnn27YLCV.iO5jdSQ-1723217741-1.0.1.1-WncSUA42bMTHs6l3gv46MmHUnPqrizRLmh23xZWc6q8udphMGhUhkXwHxRrSf4xCMCXXXvO6JC826sh6mNnehA; + _cfuvid=bd8N16RsuXKEtaAf_S1XEK.0LZqbsndFJNK2Lvr7nYs-1723217741960-0.0.1.1-604800000 + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + I"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + 
give"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Test"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + output"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLwUgFxqI5ZynZsF4IjfMfLOOVVL","object":"chat.completion.chunk","created":1723217742,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b08d14c38b708f2-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 15:35:42 GMT + Server: + - cloudflare + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '98' + openai-version: + - '2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '10000' + x-ratelimit-limit-tokens: + - '30000000' + x-ratelimit-remaining-requests: + - '9999' + x-ratelimit-remaining-tokens: + - '29999806' + x-ratelimit-reset-requests: + - 6ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - 
req_2312eeb6632e09efcd03bd2e212d242d + status: + code: 200 + message: OK +version: 1 diff --git a/tests/pipeline/cassettes/test_pipeline_with_router.yaml b/tests/pipeline/cassettes/test_pipeline_with_router.yaml new file mode 100644 index 0000000000..0c5033f494 --- /dev/null +++ b/tests/pipeline/cassettes/test_pipeline_with_router.yaml @@ -0,0 +1,163 @@ +interactions: +- request: + body: '{"messages": [{"content": "You are Test Role. Test Backstory\nYour personal + goal is: Test GoalTo give my best complete final answer to the task use the + exact following format:\n\nThought: I now can give a great answer\nFinal Answer: + my best complete final answer to the task.\nYour final answer must be the great + and the most complete as possible, it must be outcome described.\n\nI MUST use + these formats, my job depends on it!\nCurrent Task: Return: Test output\n\nThis + is the expect criteria for your final answer: Test output \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '877' + content-type: + - application/json + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + I"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Test"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + output"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLh5NS9xxPoraNNjd3gSUi2Fufab","object":"chat.completion.chunk","created":1723216787,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b08b9fbab0d7e76-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 15:19:48 GMT + Server: + - cloudflare + Set-Cookie: + - __cf_bm=Ok33IBBKZR0e3d8fZD5DZf5ZOL.Qx6NWCCcNC8ws7eA-1723216788-1.0.1.1-oYl6wzOQJ7YhoLjLgqAQFoC2DfM8wGFMe3dUTAKd0r3odwGsPS4y6QQKYfat0RxpOyErx_iDD25ZBdN0NtXOMA; + path=/; expires=Fri, 09-Aug-24 15:49:48 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - 
_cfuvid=OO1MACceu3Jjuq5WdFsXX6LcFGp8AsdvZtVlVaIFspg-1723216788116-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '127' + openai-version: + - '2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '10000' + x-ratelimit-limit-tokens: + - '30000000' + x-ratelimit-remaining-requests: + - '9999' + x-ratelimit-remaining-tokens: + - '29999806' + x-ratelimit-reset-requests: + - 6ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - req_c427ee9954f36d4e5efebf821bc4981e + status: + code: 200 + message: OK +version: 1 diff --git a/tests/pipeline/cassettes/test_router_with_empty_input.yaml b/tests/pipeline/cassettes/test_router_with_empty_input.yaml new file mode 100644 index 0000000000..1091c4ba8c --- /dev/null +++ b/tests/pipeline/cassettes/test_router_with_empty_input.yaml @@ -0,0 +1,163 @@ +interactions: +- request: + body: '{"messages": [{"content": "You are Test Role. Test Backstory\nYour personal + goal is: Test GoalTo give my best complete final answer to the task use the + exact following format:\n\nThought: I now can give a great answer\nFinal Answer: + my best complete final answer to the task.\nYour final answer must be the great + and the most complete as possible, it must be outcome described.\n\nI MUST use + these formats, my job depends on it!\nCurrent Task: Return: Test output\n\nThis + is the expect criteria for your final answer: Test output \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '877' + content-type: + - application/json + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + I"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Test"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + output"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uM4CheR93UVORWzMVtwfViezobOD","object":"chat.completion.chunk","created":1723218220,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b08dcf819760fb7-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 15:43:41 GMT + Server: + - cloudflare + Set-Cookie: + - __cf_bm=saQ_eM9u1PmBfQRG1jTfrvAKarBMyii_YCKH3FykFK4-1723218221-1.0.1.1-vRk7EtdUPVTK.HPVpyGPsu7x2eDxYzwbGwSiVxoOKaaRlunWxPTMzNLRoX6qPOevFnUamxpkPJewlKmn8UhVAA; + path=/; expires=Fri, 09-Aug-24 16:13:41 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=uqsYHaOZpg_WhpwNfvP.nPWqew9tn.d2OFbP0zPu.AQ-1723218221175-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '126' + openai-version: + - '2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '10000' + x-ratelimit-limit-tokens: + - '30000000' + x-ratelimit-remaining-requests: + - '9999' + x-ratelimit-remaining-tokens: + - '29999806' + x-ratelimit-reset-requests: + - 6ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - req_27740c683ef0ce74d4ebdc7db3394b80 + status: + code: 200 + message: OK +version: 1 diff --git a/tests/pipeline/cassettes/test_router_with_multiple_inputs.yaml b/tests/pipeline/cassettes/test_router_with_multiple_inputs.yaml new file mode 100644 index 0000000000..5761be4118 --- /dev/null +++ b/tests/pipeline/cassettes/test_router_with_multiple_inputs.yaml @@ -0,0 +1,485 @@ +interactions: +- request: + body: '{"messages": [{"content": "You are Test Role. Test Backstory\nYour personal + goal is: Test GoalTo give my best complete final answer to the task use the + exact following format:\n\nThought: I now can give a great answer\nFinal Answer: + my best complete final answer to the task.\nYour final answer must be the great + and the most complete as possible, it must be outcome described.\n\nI MUST use + these formats, my job depends on it!\nCurrent Task: Return: Test output\n\nThis + is the expect criteria for your final answer: Test output \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! 
This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '877' + content-type: + - application/json + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + I"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} 
+ + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Test"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + output"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG38aPebYIjXywvamVsV7cdhoA","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b08cc6588637cd4-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 15:32:22 GMT + Server: + - cloudflare + Set-Cookie: + - __cf_bm=M4up.JtFpyEgsdfTC6p_PCuqpf1awZrIU7O4maUmvcc-1723217542-1.0.1.1-pHndpcggWD247VW0oQyVdkNn9_78MKa58Br4436XxAO_2OB0snYblbzsZMPvIaLfEXJe51XqwSKsNDW.DW9OWA; + path=/; expires=Fri, 09-Aug-24 16:02:22 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=tCayjNZ3JhkjI0BUvmaFdzdvxjApNr5qo93qUN7v.FY-1723217542310-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '75' + openai-version: + - '2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '10000' + x-ratelimit-limit-tokens: + - '30000000' + x-ratelimit-remaining-requests: + - '9999' + x-ratelimit-remaining-tokens: + - '29999806' + x-ratelimit-reset-requests: + - 6ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - req_2dbbb252efe10800521717ee0f07709a + status: + code: 200 + message: OK +- request: + body: '{"messages": [{"content": "You are Test Role. 
Test Backstory\nYour personal + goal is: Test GoalTo give my best complete final answer to the task use the + exact following format:\n\nThought: I now can give a great answer\nFinal Answer: + my best complete final answer to the task.\nYour final answer must be the great + and the most complete as possible, it must be outcome described.\n\nI MUST use + these formats, my job depends on it!\nCurrent Task: Return: Test output\n\nThis + is the expect criteria for your final answer: Test output \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '877' + content-type: + - application/json + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + I"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: 
{"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Test"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + output"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtGVW8T1OV0Tld9mR1Yl981v5RI","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b08cc658eae14f2-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 15:32:22 GMT + Server: + - cloudflare + Set-Cookie: + - __cf_bm=Creh.MQMh5D4uV8gMWrmEP3U_xYEycFVgWVxxn0kHlE-1723217542-1.0.1.1-J_gmaciM9fSi6r8l4UAzTm.I2IKLzR17wmi09h8_yI6k3LD_nHjgALB6hys0HApKUntVCQVmZig2J5bBVCboBQ; + path=/; expires=Fri, 09-Aug-24 16:02:22 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - _cfuvid=zSXGyUBaieLtzC9vciAfldh0f8XOr_yG7xEkAiy72z0-1723217542314-0.0.1.1-604800000; + path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None + Transfer-Encoding: + - chunked + X-Content-Type-Options: + - nosniff + alt-svc: + - h3=":443"; ma=86400 + openai-organization: + - crewai-iuxna1 + openai-processing-ms: + - '96' + openai-version: + - 
'2020-10-01' + strict-transport-security: + - max-age=15552000; includeSubDomains; preload + x-ratelimit-limit-requests: + - '10000' + x-ratelimit-limit-tokens: + - '30000000' + x-ratelimit-remaining-requests: + - '9998' + x-ratelimit-remaining-tokens: + - '29999806' + x-ratelimit-reset-requests: + - 6ms + x-ratelimit-reset-tokens: + - 0s + x-request-id: + - req_0ede16e7bce60442ed70741e03da5ec5 + status: + code: 200 + message: OK +- request: + body: '{"messages": [{"content": "You are Test Role. Test Backstory\nYour personal + goal is: Test GoalTo give my best complete final answer to the task use the + exact following format:\n\nThought: I now can give a great answer\nFinal Answer: + my best complete final answer to the task.\nYour final answer must be the great + and the most complete as possible, it must be outcome described.\n\nI MUST use + these formats, my job depends on it!\nCurrent Task: Return: Test output\n\nThis + is the expect criteria for your final answer: Test output \n you MUST return + the actual complete content as the final answer, not a summary.\n\nBegin! This + is VERY important to you, use the tools available and give your best Final Answer, + your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", + "logprobs": false, "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": + 0.7}' + headers: + accept: + - application/json + accept-encoding: + - gzip, deflate, br + connection: + - keep-alive + content-length: + - '877' + content-type: + - application/json + host: + - api.openai.com + user-agent: + - OpenAI/Python 1.37.0 + x-stainless-arch: + - arm64 + x-stainless-async: + - 'false' + x-stainless-lang: + - python + x-stainless-os: + - MacOS + x-stainless-package-version: + - 1.37.0 + x-stainless-runtime: + - CPython + x-stainless-runtime-version: + - 3.11.5 + method: POST + uri: https://api.openai.com/v1/chat/completions + response: + body: + string: 'data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + I"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + now"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + 
can"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + give"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + a"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + great"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Answer"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + Test"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{"content":" + output"},"logprobs":null,"finish_reason":null}]} + + + data: {"id":"chatcmpl-9uLtG17RAkTmMGMwaaU6ljV8N8PzN","object":"chat.completion.chunk","created":1723217542,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3aa7262c27","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} + + + data: [DONE] + + + ' + headers: + CF-Cache-Status: + - DYNAMIC + CF-RAY: + - 8b08cc658e2829c9-LAX + Connection: + - keep-alive + Content-Type: + - text/event-stream; charset=utf-8 + Date: + - Fri, 09 Aug 2024 15:32:22 GMT + Server: + - cloudflare + Set-Cookie: + - __cf_bm=juF2fptTmZAWU14Lw6_CYL4oseJBYYRglgYx8coLilQ-1723217542-1.0.1.1-NRcTnXs5yz0cX7hllCCcuNuqACPtm1aX_VAub6GReR.eBkJKV.HcA3cFiavaBK5XCrE4z04jOev3UVboQpyQkw; + path=/; expires=Fri, 09-Aug-24 16:02:22 GMT; domain=.api.openai.com; HttpOnly; + Secure; SameSite=None + - 
_cfuvid=Mgsswdn3w9OMcS9r3KLgbo6Enbk9IQfmmtjOv5LmCgs-1723217542628-0.0.1.1-604800000;
+ path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
+ Transfer-Encoding:
+ - chunked
+ X-Content-Type-Options:
+ - nosniff
+ alt-svc:
+ - h3=":443"; ma=86400
+ openai-organization:
+ - crewai-iuxna1
+ openai-processing-ms:
+ - '116'
+ openai-version:
+ - '2020-10-01'
+ strict-transport-security:
+ - max-age=15552000; includeSubDomains; preload
+ x-ratelimit-limit-requests:
+ - '10000'
+ x-ratelimit-limit-tokens:
+ - '30000000'
+ x-ratelimit-remaining-requests:
+ - '9999'
+ x-ratelimit-remaining-tokens:
+ - '29999806'
+ x-ratelimit-reset-requests:
+ - 6ms
+ x-ratelimit-reset-tokens:
+ - 0s
+ x-request-id:
+ - req_d048537865c87387356c8da4971df278
+ status:
+ code: 200
+ message: OK
+version: 1 diff --git a/tests/pipeline/test_pipeline.py b/tests/pipeline/test_pipeline.py new file mode 100644 index 0000000000..39f28ab868 --- /dev/null +++ b/tests/pipeline/test_pipeline.py @@ -0,0 +1,948 @@ +import json
+from unittest.mock import AsyncMock, MagicMock, patch
+
+import pytest
+from crewai.agent import Agent
+from crewai.crew import Crew
+from crewai.crews.crew_output import CrewOutput
+from crewai.pipeline.pipeline import Pipeline
+from crewai.pipeline.pipeline_kickoff_result import PipelineKickoffResult
+from crewai.process import Process
+from crewai.routers.router import Route, Router
+from crewai.task import Task
+from crewai.tasks.task_output import TaskOutput
+from crewai.types.usage_metrics import UsageMetrics
+from pydantic import BaseModel, ValidationError
+
+DEFAULT_TOKEN_USAGE = UsageMetrics(
+ total_tokens=100, prompt_tokens=50, completion_tokens=50, successful_requests=3
+)
+
+
+@pytest.fixture
+def mock_crew_factory():
+ def _create_mock_crew(name: str, output_json_dict=None, pydantic_output=None):
+ MockCrewClass = type("MockCrew", (MagicMock, Crew), {})
+
+ class MockCrew(MockCrewClass):
+ def __deepcopy__(self, memo):
+ result = MockCrewClass()
+ result.kickoff_async = self.kickoff_async
+ result.name = self.name
+ return result
+
+ crew = MockCrew()
+ crew.name = name
+ task_output = TaskOutput(
+ description="Test task", raw="Task output", agent="Test Agent"
+ )
+ crew_output = CrewOutput(
+ raw="Test output",
+ tasks_output=[task_output],
+ token_usage=DEFAULT_TOKEN_USAGE,
+ json_dict=output_json_dict if output_json_dict else None,
+ pydantic=pydantic_output,
+ )
+
+ async def async_kickoff(inputs=None):
+ return crew_output
+
+ # Add more attributes that Procedure might be expecting
+ crew.verbose = False
+ crew.output_log_file = None
+ crew.max_rpm = None
+ crew.memory = False
+ crew.process = Process.sequential
+ crew.config = None
+ crew.cache = True
+
+ # Create a valid Agent instance
+ mock_agent = Agent(
+ name="Mock Agent",
+ role="Mock Role",
+ goal="Mock Goal",
+ backstory="Mock Backstory",
+ allow_delegation=False,
+ verbose=False,
+ )
+
+ # Create a valid Task instance
+ mock_task = Task(
+ description="Return: Test output",
+ expected_output="Test output",
+ agent=mock_agent,
+ async_execution=False,
+ context=None,
+ )
+
+ crew.agents = [mock_agent]
+ crew.tasks = [mock_task]
+
+ crew.kickoff_async = AsyncMock(side_effect=async_kickoff)
+
+ return crew
+
+ return _create_mock_crew
+
+
+@pytest.fixture
+def mock_router_factory(mock_crew_factory):
+ def _create_mock_router():
+ crew1 = mock_crew_factory(name="Crew 1", output_json_dict={"output": "crew1"})
+ crew2 = mock_crew_factory(name="Crew 2", output_json_dict={"output": "crew2"})
+ crew3 =
mock_crew_factory(name="Crew 3", output_json_dict={"output": "crew3"}) + + MockRouterClass = type("MockRouter", (MagicMock, Router), {}) + + class MockRouter(MockRouterClass): + def __deepcopy__(self, memo): + result = MockRouterClass() + result.route = self.route + return result + + mock_router = MockRouter() + mock_router.route = MagicMock( + side_effect=lambda x: ( + ( + Pipeline(stages=[crew1]) + if x.get("score", 0) > 80 + else ( + Pipeline(stages=[crew2]) + if x.get("score", 0) > 50 + else Pipeline(stages=[crew3]) + ) + ), + ( + "route1" + if x.get("score", 0) > 80 + else "route2" + if x.get("score", 0) > 50 + else "default" + ), + ) + ) + + return mock_router + + return _create_mock_router + + +def test_pipeline_initialization(mock_crew_factory): + """ + Test that a Pipeline is correctly initialized with the given stages. + """ + crew1 = mock_crew_factory(name="Crew 1") + crew2 = mock_crew_factory(name="Crew 2") + + pipeline = Pipeline(stages=[crew1, crew2]) + assert len(pipeline.stages) == 2 + assert pipeline.stages[0] == crew1 + assert pipeline.stages[1] == crew2 + + +@pytest.mark.asyncio +async def test_pipeline_with_empty_input(mock_crew_factory): + """ + Ensure the pipeline handles an empty input list correctly. + """ + crew = mock_crew_factory(name="Test Crew") + pipeline = Pipeline(stages=[crew]) + + input_data = [] + pipeline_results = await pipeline.kickoff(input_data) + + assert ( + len(pipeline_results) == 0 + ), "Pipeline should return empty results for empty input" + + +agent = Agent( + role="Test Role", + goal="Test Goal", + backstory="Test Backstory", + allow_delegation=False, + verbose=False, +) +task = Task( + description="Return: Test output", + expected_output="Test output", + agent=agent, + async_execution=False, + context=None, +) + + +@pytest.mark.asyncio +async def test_pipeline_process_streams_single_input(): + """ + Test that Pipeline.process_streams() correctly processes a single input + and returns the expected CrewOutput. + """ + crew_name = "Test Crew" + mock_crew = Crew( + agents=[agent], + tasks=[task], + process=Process.sequential, + ) + mock_crew.name = crew_name + pipeline = Pipeline(stages=[mock_crew]) + input_data = [{"key": "value"}] + with patch.object(Crew, "kickoff_async") as mock_kickoff: + task_output = TaskOutput( + description="Test task", raw="Task output", agent="Test Agent" + ) + mock_kickoff.return_value = CrewOutput( + raw="Test output", + tasks_output=[task_output], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict=None, + pydantic=None, + ) + pipeline_results = await pipeline.kickoff(input_data) + mock_crew.kickoff_async.assert_called_once_with(inputs={"key": "value"}) + + for pipeline_result in pipeline_results: + assert isinstance(pipeline_result, PipelineKickoffResult) + assert pipeline_result.raw == "Test output" + assert len(pipeline_result.crews_outputs) == 1 + assert pipeline_result.token_usage == {crew_name: DEFAULT_TOKEN_USAGE} + assert pipeline_result.trace == [input_data[0], "Test Crew"] + + +@pytest.mark.asyncio +async def test_pipeline_result_ordering(): + """ + Ensure that results are returned in the same order as the inputs, especially with parallel processing. 
+ """ + crew1 = Crew( + name="Crew 1", + agents=[agent], + tasks=[task], + ) + crew2 = Crew( + name="Crew 2", + agents=[agent], + tasks=[task], + ) + crew3 = Crew( + name="Crew 3", + agents=[agent], + tasks=[task], + ) + + pipeline = Pipeline( + stages=[crew1, [crew2, crew3]] + ) # Parallel stage to test ordering + input_data = [{"id": 1}, {"id": 2}, {"id": 3}] + + def create_crew_output(crew_name): + return CrewOutput( + raw=f"Test output from {crew_name}", + tasks_output=[ + TaskOutput( + description="Test task", + raw=f"Task output from {crew_name}", + agent="Test Agent", + ) + ], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": crew_name.lower().replace(" ", "")}, + pydantic=None, + ) + + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.side_effect = [ + create_crew_output("Crew 1"), + create_crew_output("Crew 2"), + create_crew_output("Crew 3"), + ] * 3 + pipeline_results = await pipeline.kickoff(input_data) + mock_kickoff.call_count = 3 + + assert ( + len(pipeline_results) == 6 + ), "Should have 2 results for each input due to the parallel final stage" + + # Group results by their original input id + grouped_results = {} + for result in pipeline_results: + input_id = result.trace[0]["id"] + if input_id not in grouped_results: + grouped_results[input_id] = [] + grouped_results[input_id].append(result) + + # Check that we have the correct number of groups and results per group + assert len(grouped_results) == 3, "Should have results for each of the 3 inputs" + for input_id, results in grouped_results.items(): + assert ( + len(results) == 2 + ), f"Each input should have 2 results, but input {input_id} has {len(results)}" + + # Check the ordering and content of the results + for input_id in range(1, 4): + group = grouped_results[input_id] + assert group[0].trace == [ + {"id": input_id}, + "Crew 1", + "Crew 2", + ], f"Unexpected trace for first result of input {input_id}" + assert group[1].trace == [ + {"id": input_id}, + "Crew 1", + "Crew 3", + ], f"Unexpected trace for second result of input {input_id}" + + +class TestPydanticOutput(BaseModel): + key: str + value: int + + +@pytest.mark.asyncio +@pytest.mark.vcr(filter_headers=["authorization"]) +async def test_pipeline_process_streams_single_input_pydantic_output(): + crew_name = "Test Crew" + task = Task( + description="Return: Key:value", + expected_output="Key:Value", + agent=agent, + async_execution=False, + context=None, + output_pydantic=TestPydanticOutput, + ) + mock_crew = Crew( + name=crew_name, + agents=[agent], + tasks=[task], + ) + + pipeline = Pipeline(stages=[mock_crew]) + input_data = [{"key": "value"}] + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_crew_output = CrewOutput( + raw="Test output", + tasks_output=[ + TaskOutput( + description="Return: Key:value", raw="Key:Value", agent="Test Agent" + ) + ], + token_usage=UsageMetrics( + total_tokens=171, + prompt_tokens=154, + completion_tokens=17, + successful_requests=1, + ), + pydantic=TestPydanticOutput(key="test", value=42), + ) + mock_kickoff.return_value = mock_crew_output + pipeline_results = await pipeline.kickoff(input_data) + + assert len(pipeline_results) == 1 + pipeline_result = pipeline_results[0] + + assert isinstance(pipeline_result, PipelineKickoffResult) + assert pipeline_result.raw == "Test output" + assert len(pipeline_result.crews_outputs) == 1 + assert pipeline_result.token_usage == { + crew_name: UsageMetrics( + total_tokens=171, + prompt_tokens=154, + completion_tokens=17, + successful_requests=1, + 
) + } + + assert pipeline_result.trace == [input_data[0], "Test Crew"] + assert isinstance(pipeline_result.pydantic, TestPydanticOutput) + assert pipeline_result.pydantic.key == "test" + assert pipeline_result.pydantic.value == 42 + assert pipeline_result.json_dict is None + + +@pytest.mark.asyncio +async def test_pipeline_preserves_original_input(mock_crew_factory): + crew_name = "Test Crew" + mock_crew = mock_crew_factory( + name=crew_name, + output_json_dict={"new_key": "new_value"}, + ) + pipeline = Pipeline(stages=[mock_crew]) + + # Create a deep copy of the input data to ensure we're not comparing references + original_input_data = [{"key": "value", "nested": {"a": 1}}] + input_data = json.loads(json.dumps(original_input_data)) + + await pipeline.kickoff(input_data) + + # Assert that the original input hasn't been modified + assert ( + input_data == original_input_data + ), "The original input data should not be modified" + + # Ensure that even nested structures haven't been modified + assert ( + input_data[0]["nested"] == original_input_data[0]["nested"] + ), "Nested structures should not be modified" + + # Verify that adding new keys to the crew output doesn't affect the original input + assert ( + "new_key" not in input_data[0] + ), "New keys from crew output should not be added to the original input" + + +@pytest.mark.asyncio +async def test_pipeline_process_streams_multiple_inputs(): + """ + Test that Pipeline.process_streams() correctly processes multiple inputs + and returns the expected CrewOutputs. + """ + mock_crew = Crew(name="Test Crew", tasks=[task], agents=[agent]) + pipeline = Pipeline(stages=[mock_crew]) + input_data = [{"key1": "value1"}, {"key2": "value2"}] + + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.return_value = CrewOutput( + raw="Test output", + tasks_output=[ + TaskOutput( + description="Test task", raw="Task output", agent="Test Agent" + ) + ], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict=None, + pydantic=None, + ) + pipeline_results = await pipeline.kickoff(input_data) + assert mock_kickoff.call_count == 2 + assert len(pipeline_results) == 2 + + for pipeline_result in pipeline_results: + assert all( + isinstance(crew_output, CrewOutput) + for crew_output in pipeline_result.crews_outputs + ) + + +@pytest.mark.asyncio +async def test_pipeline_with_parallel_stages(): + """ + Test that Pipeline correctly handles parallel stages. 
+ """ + crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent]) + crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent]) + crew3 = Crew(name="Crew 3", tasks=[task], agents=[agent]) + + pipeline = Pipeline(stages=[crew1, [crew2, crew3]]) + input_data = [{"initial": "data"}] + + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.return_value = CrewOutput( + raw="Test output", + tasks_output=[ + TaskOutput( + description="Test task", raw="Task output", agent="Test Agent" + ) + ], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict=None, + pydantic=None, + ) + pipeline_result = await pipeline.kickoff(input_data) + mock_kickoff.assert_called_with(inputs={"initial": "data"}) + + assert len(pipeline_result) == 2 + pipeline_result_1, pipeline_result_2 = pipeline_result + + pipeline_result_1.trace = [ + "Crew 1", + "Crew 2", + ] + pipeline_result_2.trace = [ + "Crew 1", + "Crew 3", + ] + + expected_token_usage = { + "Crew 1": DEFAULT_TOKEN_USAGE, + "Crew 2": DEFAULT_TOKEN_USAGE, + "Crew 3": DEFAULT_TOKEN_USAGE, + } + + assert pipeline_result_1.token_usage == expected_token_usage + assert pipeline_result_2.token_usage == expected_token_usage + + +@pytest.mark.asyncio +async def test_pipeline_with_parallel_stages_end_in_single_stage(mock_crew_factory): + """ + Test that Pipeline correctly handles parallel stages. + """ + crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent]) + crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent]) + crew3 = Crew(name="Crew 3", tasks=[task], agents=[agent]) + crew4 = Crew(name="Crew 4", tasks=[task], agents=[agent]) + + pipeline = Pipeline(stages=[crew1, [crew2, crew3], crew4]) + input_data = [{"initial": "data"}] + + pipeline_result = await pipeline.kickoff(input_data) + + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.return_value = CrewOutput( + raw="Test output", + tasks_output=[ + TaskOutput( + description="Test task", raw="Task output", agent="Test Agent" + ) + ], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict=None, + pydantic=None, + ) + pipeline_result = await pipeline.kickoff(input_data) + + mock_kickoff.assert_called_with(inputs={"initial": "data"}) + + assert len(pipeline_result) == 1 + pipeline_result_1 = pipeline_result[0] + + pipeline_result_1.trace = [ + input_data[0], + "Crew 1", + ["Crew 2", "Crew 3"], + "Crew 4", + ] + + expected_token_usage = { + "Crew 1": DEFAULT_TOKEN_USAGE, + "Crew 2": DEFAULT_TOKEN_USAGE, + "Crew 3": DEFAULT_TOKEN_USAGE, + "Crew 4": DEFAULT_TOKEN_USAGE, + } + + assert pipeline_result_1.token_usage == expected_token_usage + + +def test_pipeline_rshift_operator(mock_crew_factory): + """ + Test that the >> operator correctly creates a Pipeline from Crews and lists of Crews. + """ + crew1 = mock_crew_factory(name="Crew 1") + crew2 = mock_crew_factory(name="Crew 2") + crew3 = mock_crew_factory(name="Crew 3") + + # Test single crew addition + pipeline = Pipeline(stages=[]) >> crew1 + assert len(pipeline.stages) == 1 + assert pipeline.stages[0] == crew1 + + # Test adding a list of crews + pipeline = Pipeline(stages=[crew1]) + pipeline = pipeline >> [crew2, crew3] + assert len(pipeline.stages) == 2 + assert pipeline.stages[1] == [crew2, crew3] + + # Test error case: trying to shift with non-Crew object + with pytest.raises(TypeError): + pipeline >> "not a crew" + + +@pytest.mark.asyncio +@pytest.mark.vcr(filter_headers=["authorization"]) +async def test_pipeline_parallel_crews_to_parallel_crews(): + """ + Test that feeding parallel crews to parallel crews works correctly. 
+ """ + crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent]) + crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent]) + crew3 = Crew(name="Crew 3", tasks=[task], agents=[agent]) + crew4 = Crew(name="Crew 4", tasks=[task], agents=[agent]) + # output_json_dict={"output1": "crew1"} + pipeline = Pipeline(stages=[[crew1, crew2], [crew3, crew4]]) + + input_data = [{"input": "test"}] + + def create_crew_output(crew_name): + return CrewOutput( + raw=f"Test output from {crew_name}", + tasks_output=[ + TaskOutput( + description="Test task", + raw=f"Task output from {crew_name}", + agent="Test Agent", + ) + ], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": crew_name.lower().replace(" ", "")}, + pydantic=None, + ) + + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.side_effect = [ + create_crew_output(crew_name) + for crew_name in ["Crew 1", "Crew 2", "Crew 3", "Crew 4"] + ] + pipeline_results = await pipeline.kickoff(input_data) + + assert len(pipeline_results) == 2, "Should have 2 results for final parallel stage" + + pipeline_result_1, pipeline_result_2 = pipeline_results + + # Check the outputs + assert pipeline_result_1.json_dict == {"output": "crew3"} + assert pipeline_result_2.json_dict == {"output": "crew4"} + + # Check the traces + expected_traces = [ + [{"input": "test"}, ["Crew 1", "Crew 2"], "Crew 3"], + [{"input": "test"}, ["Crew 1", "Crew 2"], "Crew 4"], + ] + + for result, expected_trace in zip(pipeline_results, expected_traces): + assert result.trace == expected_trace, f"Unexpected trace: {result.trace}" + + +def test_pipeline_double_nesting_not_allowed(mock_crew_factory): + """ + Test that double nesting in pipeline stages is not allowed. + """ + crew1 = mock_crew_factory(name="Crew 1") + crew2 = mock_crew_factory(name="Crew 2") + crew3 = mock_crew_factory(name="Crew 3") + crew4 = mock_crew_factory(name="Crew 4") + + with pytest.raises(ValidationError) as exc_info: + Pipeline(stages=[crew1, [[crew2, crew3], crew4]]) + + error_msg = str(exc_info.value) + + assert ( + "Double nesting is not allowed in pipeline stages" in error_msg + ), f"Unexpected error message: {error_msg}" + + +def test_pipeline_invalid_crew(mock_crew_factory): + """ + Test that non-Crew objects are not allowed in pipeline stages. + """ + crew1 = mock_crew_factory(name="Crew 1") + not_a_crew = "This is not a crew" + + with pytest.raises(ValidationError) as exc_info: + Pipeline(stages=[crew1, not_a_crew]) + + error_msg = str(exc_info.value) + print(f"Full error message: {error_msg}") # For debugging + assert ( + "Expected Crew instance, Router instance, or list of Crews, got " + in error_msg + ), f"Unexpected error message: {error_msg}" + + +""" +TODO: Figure out what is the proper output for a pipeline with multiple stages + +Options: +- Should the final output only include the last stage's output? +- Should the final output include the accumulation of previous stages' outputs? 
+""" + + +@pytest.mark.asyncio +async def test_pipeline_data_accumulation(): + crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent]) + crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent]) + + pipeline = Pipeline(stages=[crew1, crew2]) + input_data = [{"initial": "data"}] + results = await pipeline.kickoff(input_data) + + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.side_effect = [ + CrewOutput( + raw="Test output from Crew 1", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"key1": "value1"}, + pydantic=None, + ), + CrewOutput( + raw="Test output from Crew 2", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"key2": "value2"}, + pydantic=None, + ), + ] + + results = await pipeline.kickoff(input_data) + + # Check the final output + assert len(results) == 1 + final_result = results[0] + assert final_result.json_dict == {"key2": "value2"} + + # Check that the trace includes all stages + assert final_result.trace == [{"initial": "data"}, "Crew 1", "Crew 2"] + + # Check that crews_outputs contain the correct information + assert len(final_result.crews_outputs) == 2 + assert final_result.crews_outputs[0].json_dict == {"key1": "value1"} + assert final_result.crews_outputs[1].json_dict == {"key2": "value2"} + + +@pytest.mark.asyncio +@pytest.mark.vcr(filter_headers=["authorization"]) +async def test_pipeline_with_router(): + crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent]) + crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent]) + crew3 = Crew(name="Crew 3", tasks=[task], agents=[agent]) + routes = { + "route1": Route( + condition=lambda x: x.get("score", 0) > 80, + pipeline=Pipeline(stages=[crew1]), + ), + "route2": Route( + condition=lambda x: 50 < x.get("score", 0) <= 80, + pipeline=Pipeline(stages=[crew2]), + ), + } + router = Router( + routes=routes, + default=Pipeline(stages=[crew3]), + ) + # Test high score route + pipeline = Pipeline(stages=[router]) + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.return_value = CrewOutput( + raw="Test output from Crew 1", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": "crew1"}, + pydantic=None, + ) + result_high = await pipeline.kickoff([{"score": 90}]) + + assert len(result_high) == 1 + assert result_high[0].json_dict is not None + assert result_high[0].json_dict["output"] == "crew1" + assert result_high[0].trace == [ + {"score": 90}, + {"route_taken": "route1"}, + "Crew 1", + ] + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.return_value = CrewOutput( + raw="Test output from Crew 2", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": "crew2"}, + pydantic=None, + ) + # Test medium score route + pipeline = Pipeline(stages=[router]) + result_medium = await pipeline.kickoff([{"score": 60}]) + assert len(result_medium) == 1 + assert result_medium[0].json_dict is not None + assert result_medium[0].json_dict["output"] == "crew2" + assert result_medium[0].trace == [ + {"score": 60}, + {"route_taken": "route2"}, + "Crew 2", + ] + + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.return_value = CrewOutput( + raw="Test output from Crew 3", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": "crew3"}, + pydantic=None, + ) + # Test low score route + pipeline = Pipeline(stages=[router]) + result_low = await pipeline.kickoff([{"score": 30}]) + assert len(result_low) == 1 + assert result_low[0].json_dict is not None + assert 
result_low[0].json_dict["output"] == "crew3" + assert result_low[0].trace == [ + {"score": 30}, + {"route_taken": "default"}, + "Crew 3", + ] + + +@pytest.mark.asyncio +@pytest.mark.vcr(filter_headers=["authorization"]) +async def test_router_with_multiple_inputs(): + crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent]) + crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent]) + crew3 = Crew(name="Crew 3", tasks=[task], agents=[agent]) + router = Router( + routes={ + "route1": Route( + condition=lambda x: x.get("score", 0) > 80, + pipeline=Pipeline(stages=[crew1]), + ), + "route2": Route( + condition=lambda x: 50 < x.get("score", 0) <= 80, + pipeline=Pipeline(stages=[crew2]), + ), + }, + default=Pipeline(stages=[crew3]), + ) + pipeline = Pipeline(stages=[router]) + + inputs = [{"score": 90}, {"score": 60}, {"score": 30}] + + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.side_effect = [ + CrewOutput( + raw="Test output from Crew 1", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": "crew1"}, + pydantic=None, + ), + CrewOutput( + raw="Test output from Crew 2", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": "crew2"}, + pydantic=None, + ), + CrewOutput( + raw="Test output from Crew 3", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": "crew3"}, + pydantic=None, + ), + ] + results = await pipeline.kickoff(inputs) + + assert len(results) == 3 + assert results[0].json_dict is not None + assert results[0].json_dict["output"] == "crew1" + assert results[1].json_dict is not None + assert results[1].json_dict["output"] == "crew2" + assert results[2].json_dict is not None + assert results[2].json_dict["output"] == "crew3" + + assert results[0].trace[1]["route_taken"] == "route1" + assert results[1].trace[1]["route_taken"] == "route2" + assert results[2].trace[1]["route_taken"] == "default" + + +@pytest.mark.asyncio +@pytest.mark.vcr(filter_headers=["authorization"]) +async def test_pipeline_with_multiple_routers(): + crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent]) + crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent]) + router1 = Router( + routes={ + "route1": Route( + condition=lambda x: x.get("score", 0) > 80, + pipeline=Pipeline(stages=[crew1]), + ), + }, + default=Pipeline(stages=[crew2]), + ) + router2 = Router( + routes={ + "route2": Route( + condition=lambda x: 50 < x.get("score", 0) <= 80, + pipeline=Pipeline(stages=[crew2]), + ), + }, + default=Pipeline(stages=[crew2]), + ) + final_crew = Crew(name="Final Crew", tasks=[task], agents=[agent]) + + pipeline = Pipeline(stages=[router1, router2, final_crew]) + + with patch.object(Crew, "kickoff_async") as mock_kickoff: + mock_kickoff.side_effect = [ + CrewOutput( + raw="Test output from Crew 1", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": "crew1"}, + pydantic=None, + ), + CrewOutput( + raw="Test output from Crew 2", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": "crew2"}, + pydantic=None, + ), + CrewOutput( + raw="Test output from Final Crew", + tasks_output=[], + token_usage=DEFAULT_TOKEN_USAGE, + json_dict={"output": "final"}, + pydantic=None, + ), + ] + result = await pipeline.kickoff([{"score": 75}]) + + assert len(result) == 1 + assert result[0].json_dict is not None + assert result[0].json_dict["output"] == "final" + assert ( + len(result[0].trace) == 6 + ) # Input, Router1, Crew2, Router2, Crew2, Final Crew + assert 
result[0].trace[1]["route_taken"] == "default" + assert result[0].trace[3]["route_taken"] == "route2" + + +@pytest.mark.asyncio +async def test_router_default_route(mock_crew_factory): + default_crew = mock_crew_factory( + name="Default Crew", output_json_dict={"output": "default"} + ) + router = Router( + routes={ + "route1": Route( + condition=lambda x: False, + pipeline=Pipeline(stages=[mock_crew_factory(name="Never Used")]), + ), + }, + default=Pipeline(stages=[default_crew]), + ) + + pipeline = Pipeline(stages=[router]) + result = await pipeline.kickoff([{"score": 100}]) + + assert len(result) == 1 + assert result[0].json_dict is not None + assert result[0].json_dict["output"] == "default" + assert result[0].trace[1]["route_taken"] == "default" + + +@pytest.mark.asyncio +@pytest.mark.vcr(filter_headers=["authorization"]) +async def test_router_with_empty_input(): + crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent]) + crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent]) + crew3 = Crew(name="Crew 3", tasks=[task], agents=[agent]) + router = Router( + routes={ + "route1": Route( + condition=lambda x: x.get("score", 0) > 80, + pipeline=Pipeline(stages=[crew1]), + ), + "route2": Route( + condition=lambda x: 50 < x.get("score", 0) <= 80, + pipeline=Pipeline(stages=[crew2]), + ), + }, + default=Pipeline(stages=[crew3]), + ) + pipeline = Pipeline(stages=[router]) + + result = await pipeline.kickoff([{}]) + + assert len(result) == 1 + assert result[0].trace[1]["route_taken"] == "default"
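+
+
+def test_pipeline_rshift_chaining_matches_docs_example():
+    """
+    Illustrative sketch (not from the recorded cassettes): chains the >>
+    operator to build the documented example shape
+    ``crew1 >> [crew2, crew3] >> crew4``. This assumes Pipeline.__rshift__
+    returns a new Pipeline for both a single Crew and a list of Crews, which
+    is the behavior exercised in test_pipeline_rshift_operator above. Only
+    the stage layout is asserted; kickoff is never called, so no LLM traffic
+    occurs.
+    """
+    crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent])
+    crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent])
+    crew3 = Crew(name="Crew 3", tasks=[task], agents=[agent])
+    crew4 = Crew(name="Crew 4", tasks=[task], agents=[agent])
+
+    # Start the chain from a Pipeline, since only Pipeline >> Crew and
+    # Pipeline >> [Crew, ...] compositions are exercised by this suite.
+    pipeline = Pipeline(stages=[crew1]) >> [crew2, crew3] >> crew4
+
+    assert len(pipeline.stages) == 3
+    assert pipeline.stages[0] == crew1
+    assert pipeline.stages[1] == [crew2, crew3]
+    assert pipeline.stages[2] == crew4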