From 85e596b688d1e49ffc34a573ace27360ece12067 Mon Sep 17 00:00:00 2001
From: Al-Iqram Elahee
Date: Sun, 1 Oct 2023 17:08:09 -0600
Subject: [PATCH 1/6] Fixed formatting issue in the README

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 77c51f96585..2ae9da88c6e 100644
--- a/README.md
+++ b/README.md
@@ -72,9 +72,9 @@ user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "co
 user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.") # This initiates an automated chat between the two agents to solve the task
 ```
-Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
-Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
-Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
+* Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
+* Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
+* Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
 This example can be run with
 ```python

From 671d580bea812931b52a816778f7a56b6f441996 Mon Sep 17 00:00:00 2001
From: Al-Iqram Elahee
Date: Sun, 1 Oct 2023 17:19:21 -0600
Subject: [PATCH 2/6] Fixed the formatting issue in the README

---
 README.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 2ae9da88c6e..660efff6745 100644
--- a/README.md
+++ b/README.md
@@ -72,9 +72,10 @@ user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "co
 user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.") # This initiates an automated chat between the two agents to solve the task
 ```
-* Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
-* Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
-* Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
+- **Multi-agent conversations:** AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
+- **Customization:** AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
+- **Human participation:** AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
+
 This example can be run with
 ```python

From 5dbcf3a387c95230c798da98bffb2a49f87914b8 Mon Sep 17 00:00:00 2001
From: Al-Iqram Elahee
Date: Mon, 2 Oct 2023 02:16:49 -0600
Subject: [PATCH 3/6] Updated formatting as per review comments

---
 README.md | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 660efff6745..b306f6f82a6 100644
--- a/README.md
+++ b/README.md
@@ -72,9 +72,14 @@ user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "co
 user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.") # This initiates an automated chat between the two agents to solve the task
 ```
-- **Multi-agent conversations:** AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
-- **Customization:** AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
-- **Human participation:** AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
+### Multi-agent conversations
+AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
+
+### Customization
+AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
+
+### Human participation
+AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
 This example can be run with

From dce1d832cab693728fdff2fe3a5e02d1029f6f4c Mon Sep 17 00:00:00 2001
From: Al-Iqram Elahee
Date: Mon, 2 Oct 2023 13:05:00 -0600
Subject: [PATCH 4/6] Refactor README.md to highlight use cases and features

---
 README.md | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index baed444097e..23bf6e924d3 100644
--- a/README.md
+++ b/README.md
@@ -58,7 +58,12 @@ Find more options in [Installation](https://microsoft.github.io/autogen/docs/Ins
 For LLM inference configurations, check the [FAQ](https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints).
 ## Quickstart
+## Features of AutoGen
+- **Multi-agent conversations**: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
+- **Customization**: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
+- **Human participation**: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
+## Multi-Agent Conversation Framework
 * Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools, and humans.
 By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
 For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py),
 ```python
@@ -72,15 +77,6 @@ user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "co
 user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.") # This initiates an automated chat between the two agents to solve the task
 ```
-### Multi-agent conversations
-AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
-
-### Customization
-AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
-
-### Human participation
-AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
-
 This example can be run with
 ```python
@@ -91,7 +87,7 @@ The figure below shows an example conversation flow with AutoGen.
 ![Agent Chat Example](https://github.com/microsoft/autogen/blob/main/website/static/img/chat_example.png)
 Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/AutoGen-AgentChat) for this feature.
-
+## Enhanced LLM Inferences
 * Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` adding powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics and budgets.
 ```python
 # perform tuning

From fcd3cbfc7dccac50869d81b1c4fad205da86eee0 Mon Sep 17 00:00:00 2001
From: Al-Iqram Elahee
Date: Mon, 2 Oct 2023 16:06:02 -0600
Subject: [PATCH 5/6] Updated README as per feedback

---
 README.md | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 23bf6e924d3..fe25c6d162d 100644
--- a/README.md
+++ b/README.md
@@ -58,14 +58,17 @@ Find more options in [Installation](https://microsoft.github.io/autogen/docs/Ins
 For LLM inference configurations, check the [FAQ](https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints).
 ## Quickstart
-## Features of AutoGen
+
+## Multi-Agent Conversation Framework
+
+Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools, and humans.
+By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
+
+Features of this use case include:
 - **Multi-agent conversations**: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
 - **Customization**: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
-- **Human participation**: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
-## Multi-Agent Conversation Framework
-* Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools, and humans.
-By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py),
+- **Human participation**: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed. For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py),
 ```python
 from autogen import AssistantAgent, UserProxyAgent, config_list_from_json # Load LLM inference endpoints from an env variable or a file

From 848b42091bdf95bf289133c05524494e0cc1f96a Mon Sep 17 00:00:00 2001
From: Al-Iqram Elahee
Date: Mon, 2 Oct 2023 18:06:07 -0600
Subject: [PATCH 6/6] Updated README as per feedback

---
 README.md | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index fe25c6d162d..98563a329d8 100644
--- a/README.md
+++ b/README.md
@@ -68,7 +68,9 @@ Features of this use case include:
 - **Multi-agent conversations**: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
 - **Customization**: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
-- **Human participation**: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed. For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py),
+- **Human participation**: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
+
+For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py),
 ```python
 from autogen import AssistantAgent, UserProxyAgent, config_list_from_json # Load LLM inference endpoints from an env variable or a file
@@ -91,7 +93,8 @@ The figure below shows an example conversation flow with AutoGen.
 Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/AutoGen-AgentChat) for this feature.
 ## Enhanced LLM Inferences
-* Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` adding powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics and budgets.
+
+Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` adding powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics and budgets.
 ```python
 # perform tuning
 config, analysis = autogen.Completion.tune(
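A mailbox-format series like the one above is typically replayed with `git am`, which turns each `From <sha> Mon Sep 17 00:00:00 2001` section back into a commit, preserving the author, date, and subject from the patch headers. The sketch below is a self-contained demo, assuming only that `git` is installed; the repository layout, identity, and commit message are hypothetical stand-ins, not taken from this series:

```shell
set -e
tmp=$(mktemp -d)

# Repo "a": create a commit that edits README.md, then export it as a
# mailbox-format patch (the same layout as the series above).
git init -q "$tmp/a"
cd "$tmp/a"
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "init"
printf -- "- **Multi-agent conversations:** agents talk to each other\n" > README.md
git add README.md
git -c user.name=dev -c user.email=dev@example.com commit -q -m "Fixed formatting issue in the README"
git format-patch -1 -o "$tmp/patches" HEAD >/dev/null

# Repo "b": replay the exported series with git am; the commit is
# reconstructed from the From:/Date:/Subject: headers of each patch.
git init -q "$tmp/b"
cd "$tmp/b"
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "init"
git -c user.name=dev -c user.email=dev@example.com am -q "$tmp/patches"/*.patch
git log -1 --format=%s   # shows the subject restored from the patch header
```

If a patch in the middle of a series no longer applies (as can happen when the base moves, like the `index` line change between patches 3 and 4 above), `git am --3way` falls back to a three-way merge, and `git am --abort` restores the original branch.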