Commit d529672

Merge branch 'aw-dotenv' of https://github.com/AaronWard/autogen into aw-dotenv

Ward authored and Ward committed Oct 4, 2023
2 parents 1a8e9c8 + b543f1e commit d529672

Showing 12 changed files with 109 additions and 40 deletions.
18 changes: 9 additions & 9 deletions README.md
@@ -15,7 +15,7 @@ This project is a spinoff from [FLAML](https://github.com/microsoft/FLAML).

:fire: autogen has graduated from [FLAML](https://github.com/microsoft/FLAML) into a new project.

<!-- :fire: Heads-up: We're preparing to migrate [autogen](https://microsoft.github.io/FLAML/docs/Use-Cases/Autogen) into a dedicated github repository. Alongside this move, we'll also launch a dedicated Discord server and a website for comprehensive documentation.
<!-- :fire: Heads-up: We're preparing to migrate [autogen](https://microsoft.github.io/FLAML/docs/Use-Cases/Autogen) into a dedicated Github repository. Alongside this move, we'll also launch a dedicated Discord server and a website for comprehensive documentation.
:fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).
@@ -26,17 +26,17 @@ This project is a spinoff from [FLAML](https://github.com/microsoft/FLAML).

## What is AutoGen

AutoGen is a framework that enables development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.
AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.

![AutoGen Overview](https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png)

* AutoGen enables building next-gen LLM applications based on **multi-agent conversations** with minimal effort. It simplifies the orchestration, automation and optimization of a complex LLM workflow. It maximizes the performance of LLM models and overcomes their weaknesses.
* AutoGen enables building next-gen LLM applications based on **multi-agent conversations** with minimal effort. It simplifies the orchestration, automation, and optimization of a complex LLM workflow. It maximizes the performance of LLM models and overcomes their weaknesses.
* It supports **diverse conversation patterns** for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy,
the number of agents, and agent conversation topology.
* It provides a collection of working systems with different complexities. These systems span a **wide range of applications** from various domains and complexities. This demonstrates how AutoGen can easily support diverse conversation patterns.
* AutoGen provides a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` as an **enhanced inference API**. It allows easy performance tuning, utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.

AutoGen is powered by collaborative [research studies](https://microsoft.github.io/autogen/docs/Research) from Microsoft, Penn State University, and University of Washington.
AutoGen is powered by collaborative [research studies](https://microsoft.github.io/autogen/docs/Research) from Microsoft, Penn State University, and the University of Washington.

## Installation

@@ -61,7 +61,7 @@ For LLM inference configurations, check the [FAQ](https://microsoft.github.io/au

## Multi-Agent Conversation Framework

Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools, and humans.
Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

Features of this use case include:
@@ -94,7 +94,7 @@ The figure below shows an example conversation flow with AutoGen.
Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/AutoGen-AgentChat) for this feature.
## Enhanced LLM Inferences

Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` adding powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics and budgets.
Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` adding powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
```python
# perform tuning
config, analysis = autogen.Completion.tune(
@@ -114,7 +114,7 @@ Please find more [code examples](https://microsoft.github.io/autogen/docs/Exampl

## Documentation

You can find a detailed documentation about AutoGen [here](https://microsoft.github.io/autogen/).
You can find detailed documentation about AutoGen [here](https://microsoft.github.io/autogen/).

In addition, you can find:

@@ -147,15 +147,15 @@ in this repository under the [Creative Commons Attribution 4.0 International Pub
see the [LICENSE](LICENSE) file, and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT), see the
[LICENSE-CODE](LICENSE-CODE) file.

Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation
Microsoft, Windows, Microsoft Azure, and/or other Microsoft products and services referenced in the documentation
may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries.
The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks.
Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.

Privacy information can be found at https://privacy.microsoft.com/en-us/

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents,
or trademarks, whether by implication, estoppel or otherwise.
or trademarks, whether by implication, estoppel, or otherwise.


## Citation
16 changes: 14 additions & 2 deletions autogen/agentchat/conversable_agent.py
@@ -2,6 +2,7 @@
from collections import defaultdict
import copy
import json
import logging
from typing import Any, Callable, Dict, List, Optional, Tuple, Type, Union
from autogen import oai
from .agent import Agent
@@ -21,6 +22,9 @@ def colored(x, *args, **kwargs):
return x


logger = logging.getLogger(__name__)


class ConversableAgent(Agent):
"""(In preview) A class for generic conversable agents which can be configured as assistant or user proxy.
@@ -757,7 +761,11 @@ def generate_reply(
Returns:
str or dict or None: reply. None if no reply is generated.
"""
assert messages is not None or sender is not None, "Either messages or sender must be provided."
if all((messages is None, sender is None)):
error_msg = f"Either {messages=} or {sender=} must be provided."
logger.error(error_msg)
raise AssertionError(error_msg)

if messages is None:
messages = self._oai_messages[sender]

@@ -804,7 +812,11 @@ async def a_generate_reply(
Returns:
str or dict or None: reply. None if no reply is generated.
"""
assert messages is not None or sender is not None, "Either messages or sender must be provided."
if all((messages is None, sender is None)):
error_msg = f"Either {messages=} or {sender=} must be provided."
logger.error(error_msg)
raise AssertionError(error_msg)

if messages is None:
messages = self._oai_messages[sender]

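The guard added in `generate_reply` and `a_generate_reply` above can be sketched in isolation. The function name below is illustrative, not AutoGen's API; only the control flow mirrors the diff:

```python
import logging

logger = logging.getLogger(__name__)


def generate_reply_stub(messages=None, sender=None):
    """Illustrative stub mirroring the guard added in the diff above."""
    if all((messages is None, sender is None)):
        # The f-string '=' specifier (Python 3.8+) renders both the
        # variable name and its value, e.g. "messages=None".
        error_msg = f"Either {messages=} or {sender=} must be provided."
        logger.error(error_msg)
        raise AssertionError(error_msg)
    return messages if messages is not None else f"reply to {sender}"
```

Unlike the bare `assert` it replaces, this guard still fires when Python runs with `-O` (which strips assertions), and the failure is logged before the exception propagates.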
10 changes: 8 additions & 2 deletions autogen/code_utils.py
@@ -26,6 +26,8 @@
WIN32 = sys.platform == "win32"
PATH_SEPARATOR = WIN32 and "\\" or "/"

logger = logging.getLogger(__name__)


def infer_lang(code):
"""infer the language for the code.
@@ -250,7 +252,11 @@ def execute_code(
str: The error message if the code fails to execute; the stdout otherwise.
image: The docker image name after container run when docker is used.
"""
assert code is not None or filename is not None, "Either code or filename must be provided."
if all((code is None, filename is None)):
error_msg = f"Either {code=} or {filename=} must be provided."
logger.error(error_msg)
raise AssertionError(error_msg)

timeout = timeout or DEFAULT_TIMEOUT
original_filename = filename
if WIN32 and lang in ["sh", "shell"]:
@@ -276,7 +282,7 @@
f".\\{filename}" if WIN32 else filename,
]
if WIN32:
logging.warning("SIGALRM is not supported on Windows. No timeout will be enforced.")
logger.warning("SIGALRM is not supported on Windows. No timeout will be enforced.")
result = subprocess.run(
cmd,
cwd=work_dir,
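The switch above from a root-level `logging.warning(...)` call to a module logger is worth noting; a minimal sketch (the logger name is illustrative):

```python
import logging

# A named module logger lets applications filter or silence this module
# specifically, which a root-level logging.warning(...) call does not allow.
logger = logging.getLogger("autogen.code_utils_demo")  # illustrative name


def warn_no_timeout():
    logger.warning("SIGALRM is not supported on Windows. No timeout will be enforced.")
```

Callers who find the warning noisy can then run `logging.getLogger("autogen.code_utils_demo").setLevel(logging.ERROR)` without affecting any other logger.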
14 changes: 9 additions & 5 deletions autogen/math_utils.py
@@ -35,8 +35,9 @@ def remove_boxed(string: str) -> Optional[str]:
"""
left = "\\boxed{"
try:
assert string[: len(left)] == left
assert string[-1] == "}"
if not all((string[: len(left)] == left, string[-1] == "}")):
raise AssertionError

return string[len(left) : -1]
except Exception:
return None
@@ -94,7 +95,8 @@ def _fix_fracs(string: str) -> str:
new_str += substr
else:
try:
assert len(substr) >= 2
if not len(substr) >= 2:
raise AssertionError
except Exception:
return string
a = substr[0]
@@ -129,7 +131,8 @@ def _fix_a_slash_b(string: str) -> str:
try:
a = int(a_str)
b = int(b_str)
assert string == "{}/{}".format(a, b)
if not string == "{}/{}".format(a, b):
raise AssertionError
new_string = "\\frac{" + str(a) + "}{" + str(b) + "}"
return new_string
except Exception:
@@ -143,7 +146,8 @@ def _remove_right_units(string: str) -> str:
"""
if "\\text{ " in string:
splits = string.split("\\text{ ")
assert len(splits) == 2
if not len(splits) == 2:
raise AssertionError
return splits[0]
else:
return string
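The `remove_boxed` change above keeps the try/except-as-sentinel style while dropping bare asserts; a self-contained sketch of the same flow (the `_demo` name is illustrative):

```python
from typing import Optional


def remove_boxed_demo(string: str) -> Optional[str]:
    """Strip a LaTeX \\boxed{...} wrapper; return None if it is absent.

    Mirrors the diff above: the former asserts become explicit checks
    that raise AssertionError, and the except clause converts any
    failure (including IndexError on empty input) to the None sentinel.
    """
    left = "\\boxed{"
    try:
        if not all((string[: len(left)] == left, string[-1] == "}")):
            raise AssertionError
        return string[len(left): -1]
    except Exception:
        return None
```

Note that because both tuple elements are evaluated before `all()` runs, `string[-1]` on an empty string raises `IndexError`, which the broad `except` also maps to `None`.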
29 changes: 20 additions & 9 deletions autogen/oai/completion.py
@@ -582,23 +582,31 @@ def eval_func(responses, **data):
cls._prompts = space.get("prompt")
if cls._prompts is None:
cls._messages = space.get("messages")
assert isinstance(cls._messages, list) and isinstance(
cls._messages[0], (dict, list)
), "messages must be a list of dicts or a list of lists."
if not all((isinstance(cls._messages, list), isinstance(cls._messages[0], (dict, list)))):
error_msg = "messages must be a list of dicts or a list of lists."
logger.error(error_msg)
raise AssertionError(error_msg)
if isinstance(cls._messages[0], dict):
cls._messages = [cls._messages]
space["messages"] = tune.choice(list(range(len(cls._messages))))
else:
assert space.get("messages") is None, "messages and prompt cannot be provided at the same time."
assert isinstance(cls._prompts, (str, list)), "prompt must be a string or a list of strings."
if space.get("messages") is not None:
error_msg = "messages and prompt cannot be provided at the same time."
logger.error(error_msg)
raise AssertionError(error_msg)
if not isinstance(cls._prompts, (str, list)):
error_msg = "prompt must be a string or a list of strings."
logger.error(error_msg)
raise AssertionError(error_msg)
if isinstance(cls._prompts, str):
cls._prompts = [cls._prompts]
space["prompt"] = tune.choice(list(range(len(cls._prompts))))
cls._stops = space.get("stop")
if cls._stops:
assert isinstance(
cls._stops, (str, list)
), "stop must be a string, a list of strings, or a list of lists of strings."
if not isinstance(cls._stops, (str, list)):
error_msg = "stop must be a string, a list of strings, or a list of lists of strings."
logger.error(error_msg)
raise AssertionError(error_msg)
if not (isinstance(cls._stops, list) and isinstance(cls._stops[0], list)):
cls._stops = [cls._stops]
space["stop"] = tune.choice(list(range(len(cls._stops))))
@@ -969,7 +977,10 @@ def eval_func(responses, **data):
elif isinstance(agg_method, dict):
for key in metric_keys:
metric_agg_method = agg_method[key]
assert callable(metric_agg_method), "please provide a callable for each metric"
if not callable(metric_agg_method):
error_msg = "please provide a callable for each metric"
logger.error(error_msg)
raise AssertionError(error_msg)
result_agg[key] = metric_agg_method([r[key] for r in result_list])
else:
raise ValueError(
4 changes: 3 additions & 1 deletion autogen/retrieve_utils.py
@@ -29,6 +29,7 @@
"yml",
"pdf",
]
VALID_CHUNK_MODES = frozenset({"one_line", "multi_lines"})


def num_tokens_from_text(
@@ -96,7 +97,8 @@ def split_text_to_chunks(
overlap: int = 10,
):
"""Split a long text into chunks of max_tokens."""
assert chunk_mode in {"one_line", "multi_lines"}
if chunk_mode not in VALID_CHUNK_MODES:
raise AssertionError
if chunk_mode == "one_line":
must_break_at_empty_line = False
chunks = []
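The `VALID_CHUNK_MODES` constant introduced above hoists an inline set literal to module level; a sketch of the validation pattern (the helper name and error message are embellishments for illustration — the diff raises a bare `AssertionError`):

```python
VALID_CHUNK_MODES = frozenset({"one_line", "multi_lines"})


def check_chunk_mode(chunk_mode: str) -> str:
    # A frozenset is immutable, so the allowed values cannot be mutated
    # at runtime, and membership tests stay O(1) as with a plain set.
    if chunk_mode not in VALID_CHUNK_MODES:
        raise AssertionError(f"invalid {chunk_mode=}; expected one of {sorted(VALID_CHUNK_MODES)}")
    return chunk_mode
```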
9 changes: 6 additions & 3 deletions setup.py
@@ -39,15 +39,18 @@
install_requires=install_requires,
extras_require={
"test": [
"pytest>=6.1.1",
"chromadb",
"coverage>=5.3",
"pre-commit",
"datasets",
"ipykernel",
"nbconvert",
"nbformat",
"ipykernel",
"pre-commit",
"pydantic==1.10.9",
"pytest-asyncio",
"pytest>=6.1.1",
"sympy",
"tiktoken",
"wolframalpha",
],
"blendsearch": ["flaml[blendsearch]"],
22 changes: 22 additions & 0 deletions test/agentchat/test_conversable_agent.py
@@ -2,6 +2,17 @@
from autogen.agentchat import ConversableAgent


@pytest.fixture
def conversable_agent():
return ConversableAgent(
"conversable_agent_0",
max_consecutive_auto_reply=10,
code_execution_config=False,
llm_config=False,
human_input_mode="NEVER",
)


def test_trigger():
agent = ConversableAgent("a0", max_consecutive_auto_reply=0, llm_config=False, human_input_mode="NEVER")
agent1 = ConversableAgent("a1", max_consecutive_auto_reply=0, human_input_mode="NEVER")
@@ -217,6 +228,17 @@ def add_num(num_to_be_added):
), "generate_reply not working when messages is None"


def test_generate_reply_raises_on_messages_and_sender_none(conversable_agent):
with pytest.raises(AssertionError):
conversable_agent.generate_reply(messages=None, sender=None)


@pytest.mark.asyncio
async def test_a_generate_reply_raises_on_messages_and_sender_none(conversable_agent):
with pytest.raises(AssertionError):
await conversable_agent.a_generate_reply(messages=None, sender=None)


if __name__ == "__main__":
test_trigger()
# test_context()
5 changes: 5 additions & 0 deletions test/test_code.py
@@ -264,6 +264,11 @@ def test_execute_code(use_docker=None):
assert isinstance(image, str) or docker is None or os.path.exists("/.dockerenv") or use_docker is False


def test_execute_code_raises_when_code_and_filename_are_both_none():
with pytest.raises(AssertionError):
execute_code(code=None, filename=None)


@pytest.mark.skipif(
sys.platform in ["darwin"],
reason="do not run on MacOS",
4 changes: 4 additions & 0 deletions test/test_retrieve_utils.py
@@ -48,6 +48,10 @@ def test_split_text_to_chunks(self):
chunks = split_text_to_chunks(long_text, max_tokens=1000)
assert all(num_tokens_from_text(chunk) <= 1000 for chunk in chunks)

def test_split_text_to_chunks_raises_on_invalid_chunk_mode(self):
with pytest.raises(AssertionError):
split_text_to_chunks("A" * 10000, chunk_mode="bogus_chunk_mode")

def test_extract_text_from_pdf(self):
pdf_file_path = os.path.join(test_dir, "example.pdf")
assert "".join(expected_text.split()) == "".join(extract_text_from_pdf(pdf_file_path).strip().split())
2 changes: 1 addition & 1 deletion website/docs/Getting-Started.md
@@ -75,4 +75,4 @@ response = autogen.Completion.create(context=test_instance, **config)

If you like our project, please give it a [star](https://github.com/microsoft/autogen/stargazers) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/Contribute).

<!-- <iframe src="https://ghbtns.com/github-btn.html?user=microsoft&amp;repo=autogen&amp;type=star&amp;count=true&amp;size=large" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> -->
<iframe src="https://ghbtns.com/github-btn.html?user=microsoft&amp;repo=autogen&amp;type=star&amp;count=true&amp;size=large" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
16 changes: 8 additions & 8 deletions website/yarn.lock
@@ -5475,10 +5475,10 @@ multicast-dns@^7.2.5:
dns-packet "^5.2.2"
thunky "^1.0.2"

nanoid@^3.3.4:
version "3.3.4"
resolved "https://registry.npmmirror.com/nanoid/-/nanoid-3.3.4.tgz#730b67e3cd09e2deacf03c027c81c9d9dbc5e8ab"
integrity sha512-MqBkQh/OHTS2egovRtLk45wEyNXwF+cokD+1YPf9u5VfJiRdAiRwB2froX5Co9Rh20xs4siNPm8naNotSD6RBw==
nanoid@^3.3.6:
version "3.3.6"
resolved "https://registry.yarnpkg.com/nanoid/-/nanoid-3.3.6.tgz#443380c856d6e9f9824267d960b4236ad583ea4c"
integrity sha512-BGcqMMJuToF7i1rt+2PWSNVnWIkGCU78jBG3RxO/bZlnZPK2Cmi2QaffxGO/2RvWi9sL+FAiRiXMgsyxQ1DIDA==

negotiator@0.6.3:
version "0.6.3"
@@ -6166,11 +6166,11 @@ postcss-zindex@^5.1.0:
integrity sha512-fgFMf0OtVSBR1va1JNHYgMxYk73yhn/qb4uQDq1DLGYolz8gHCyr/sesEuGUaYs58E3ZJRcpoGuPVoB7Meiq9A==

postcss@^8.2.15, postcss@^8.3.11, postcss@^8.3.5, postcss@^8.3.7:
version "8.4.18"
resolved "https://registry.npmmirror.com/postcss/-/postcss-8.4.18.tgz#6d50046ea7d3d66a85e0e782074e7203bc7fbca2"
integrity sha512-Wi8mWhncLJm11GATDaQKobXSNEYGUHeQLiQqDFG1qQ5UTDPTEvKw0Xt5NsTpktGTwLps3ByrWsBrG0rB8YQ9oA==
version "8.4.31"
resolved "https://registry.yarnpkg.com/postcss/-/postcss-8.4.31.tgz#92b451050a9f914da6755af352bdc0192508656d"
integrity sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==
dependencies:
nanoid "^3.3.4"
nanoid "^3.3.6"
picocolors "^1.0.0"
source-map-js "^1.0.2"

