
Dev/v0.2 #393

Merged: 43 commits merged into main from dev/v0.2 on Nov 4, 2023.
Viewing changes from 29 of the 43 commits.

Commits
d6761d1  api_base -> base_url (#383)  (sonichi, Oct 23, 2023)
dfd5695  InvalidRequestError -> BadRequestError (#389)  (sonichi, Oct 23, 2023)
c41be9c  remove api_key_path; close #388  (sonichi, Oct 23, 2023)
c8f8cbc  Merge branch 'main' into dev/v0.2  (sonichi, Oct 24, 2023)
2f97b8b  close #402 (#403)  (sonichi, Oct 24, 2023)
1df493b  openai client (#419)  (sonichi, Oct 25, 2023)
23a107a  Merge branch 'main' into dev/v0.2  (sonichi, Oct 25, 2023)
d77b1c9  _client -> client  (sonichi, Oct 25, 2023)
6a8eaf3  _client -> client  (sonichi, Oct 25, 2023)
c3f58f3  extra kwargs  (sonichi, Oct 25, 2023)
75a6f7d  Completion -> client (#426)  (sonichi, Oct 26, 2023)
9b25c91  annotations  (sonichi, Oct 26, 2023)
8c1626c  import  (sonichi, Oct 26, 2023)
8d42528  reduce test  (sonichi, Oct 26, 2023)
b8302a7  skip test  (sonichi, Oct 26, 2023)
b09e6bb  skip test  (sonichi, Oct 26, 2023)
f29fbc5  skip test  (sonichi, Oct 26, 2023)
4318e0e  debug test  (sonichi, Oct 26, 2023)
153f182  rename test  (sonichi, Oct 26, 2023)
645d60e  update workflow  (sonichi, Oct 26, 2023)
62eabc8  update workflow  (sonichi, Oct 26, 2023)
f895633  env  (sonichi, Oct 26, 2023)
a72c89d  py version  (sonichi, Oct 26, 2023)
9073eb7  doc improvement  (sonichi, Oct 26, 2023)
b0ad39b  docstr update  (sonichi, Oct 26, 2023)
33de6a3  openai<1  (sonichi, Oct 26, 2023)
b2e7f9c  Merge branch 'main' into dev/v0.2  (sonichi, Oct 28, 2023)
3b567c9  add tiktoken to dependency  (sonichi, Oct 28, 2023)
3cb3930  filter_func  (sonichi, Oct 28, 2023)
cc9f0e0  async test  (sonichi, Oct 29, 2023)
62b0393  Merge branch 'main' into dev/v0.2  (sonichi, Oct 29, 2023)
82f3712  dependency  (sonichi, Oct 29, 2023)
420cbf0  migration guide (#477)  (sonichi, Oct 30, 2023)
fa6fbec  Merge branch 'main' into dev/v0.2  (sonichi, Oct 30, 2023)
ed1b77b  Merge branch 'main' into dev/v0.2  (sonichi, Oct 30, 2023)
1edddb8  deal with azure gpt-3.5  (sonichi, Oct 31, 2023)
57134a3  Merge branch 'main' into dev/v0.2  (sonichi, Nov 1, 2023)
4555eb3  add back test_eval_math_responses  (sonichi, Nov 1, 2023)
3bf520a  timeout  (sonichi, Nov 1, 2023)
8bb6e82  Add back tests for RetrieveChat (#480)  (thinkall, Nov 4, 2023)
37f8b14  Merge branch 'main' into dev/v0.2  (sonichi, Nov 4, 2023)
a786cb6  retrieve chat is tested  (sonichi, Nov 4, 2023)
1f1459b  bump version to 0.2.0b1  (sonichi, Nov 4, 2023)
.github/workflows/build.yml (5 changes: 3 additions & 2 deletions)
@@ -40,7 +40,7 @@ jobs:
python -m pip install --upgrade pip wheel
pip install -e .
python -c "import autogen"
pip install -e.[mathchat,retrievechat,test] datasets pytest
pip install -e. pytest
pip uninstall -y openai
- name: Test with pytest
if: matrix.python-version != '3.10'
@@ -49,7 +49,8 @@
- name: Coverage
if: matrix.python-version == '3.10'
run: |
pip install coverage
pip install -e.[mathchat,test]
pip uninstall -y openai
coverage run -a -m pytest test
coverage xml
- name: Upload coverage to Codecov
.github/workflows/openai.yml (29 changes: 5 additions & 24 deletions)
@@ -4,16 +4,13 @@
name: OpenAI

on:
pull_request_target:
pull_request:
branches: ['main']
paths:
- 'autogen/**'
- 'test/**'
- 'notebook/agentchat_auto_feedback_from_code_execution.ipynb'
- 'notebook/agentchat_function_call.ipynb'
- 'notebook/agentchat_MathChat.ipynb'
- 'notebook/oai_completion.ipynb'
- 'notebook/oai_chatgpt_gpt4.ipynb'
- '.github/workflows/openai.yml'

jobs:
@@ -23,7 +20,7 @@ jobs:
os: [ubuntu-latest]
python-version: ["3.9", "3.10", "3.11"]
runs-on: ${{ matrix.os }}
environment: openai
environment: openai1
steps:
# checkout to pr branch
- name: Checkout
@@ -38,28 +35,13 @@
run: |
docker --version
python -m pip install --upgrade pip wheel
pip install -e.[blendsearch]
pip install -e.
python -c "import autogen"
pip install coverage pytest-asyncio datasets
pip install coverage pytest-asyncio
- name: Install packages for test when needed
if: matrix.python-version == '3.9'
run: |
pip install docker
- name: Install packages for MathChat when needed
if: matrix.python-version != '3.11'
run: |
pip install -e .[mathchat]
- name: Install packages for RetrieveChat when needed
if: matrix.python-version == '3.9'
run: |
pip install -e .[retrievechat]
- name: Install packages for Teachable when needed
run: |
pip install -e .[teachable]
- name: Install packages for RetrieveChat with QDrant when needed
if: matrix.python-version == '3.11'
run: |
pip install -e .[retrievechat] qdrant_client[fastembed]
- name: Coverage
if: matrix.python-version == '3.9'
env:
@@ -80,8 +62,7 @@
OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
run: |
pip install nbconvert nbformat ipykernel
coverage run -a -m pytest test/agentchat/test_qdrant_retrievechat.py
coverage run -a -m pytest test/test_with_openai.py
coverage run -a -m pytest test/agentchat/test_function_call_groupchat.py
coverage run -a -m pytest test/test_notebook.py
coverage xml
cat "$(pwd)/test/executed_openai_notebook_output.txt"
OAI_CONFIG_LIST_sample (4 changes: 2 additions & 2 deletions)
@@ -7,14 +7,14 @@
{
"model": "gpt-4",
"api_key": "<your Azure OpenAI API key here>",
"api_base": "<your Azure OpenAI API base here>",
"base_url": "<your Azure OpenAI API base here>",
"api_type": "azure",
"api_version": "2023-07-01-preview"
},
{
"model": "gpt-3.5-turbo",
"api_key": "<your Azure OpenAI API key here>",
"api_base": "<your Azure OpenAI API base here>",
"base_url": "<your Azure OpenAI API base here>",
"api_type": "azure",
"api_version": "2023-07-01-preview"
}
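The entries above are consumed through AutoGen's config-loading helpers. A minimal sketch, assuming the sample is saved as a file named OAI_CONFIG_LIST in the working directory:

```python
import autogen

# Load the model configs; each Azure entry now carries "base_url"
# (renamed from the pre-0.2 "api_base").
config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")
for config in config_list:
    print(config["model"], config.get("base_url", "<default endpoint>"))
```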
README.md (2 changes: 1 addition & 1 deletion)
@@ -113,7 +113,7 @@ Please find more [code examples](https://microsoft.github.io/autogen/docs/Exampl
Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers enhanced LLM inference with powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.

```python
# perform tuning
# perform tuning for openai<1
config, analysis = autogen.Completion.tune(
data=tune_data,
metric="success",
    # ... (rest of the call not shown in the diff)
)
```
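The snippet above stays on the legacy `autogen.Completion` path for openai<1. In v0.2, inference routes through the new `OpenAIWrapper` client instead; a hedged sketch of that path, reusing the `config_list` loaded earlier (the prompt is illustrative):

```python
import autogen

# v0.2-style inference: OpenAIWrapper replaces the Completion/ChatCompletion
# classes when running against openai>=1.
client = autogen.OpenAIWrapper(config_list=config_list)
response = client.create(messages=[{"role": "user", "content": "What is 2+2?"}])
print(client.extract_text_or_function_call(response)[0])
```

`extract_text_or_function_call` is the same helper the refactored `generate_oai_reply` uses further down in this PR.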
autogen/agentchat/assistant_agent.py (2 changes: 1 addition & 1 deletion)
@@ -43,7 +43,7 @@ def __init__(
system_message (str): system message for the ChatCompletion inference.
Please override this attribute if you want to reprogram the agent.
llm_config (dict): llm inference configuration.
Please refer to [Completion.create](/docs/reference/oai/completion#create)
Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
for available options.
is_termination_msg (function): a function that takes a message in the form of a dictionary
and returns a boolean value indicating if this received message is a termination message.
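The docstring now points at `OpenAIWrapper.create` for the options accepted in `llm_config`. A short sketch of wiring a config into the agent, assuming the `config_list` from the earlier example:

```python
import autogen

# llm_config is forwarded to the agent's internal OpenAIWrapper client.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
```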
autogen/agentchat/contrib/teachable_agent.py (15 changes: 7 additions & 8 deletions)
@@ -18,7 +18,7 @@ def colored(x, *args, **kwargs):


class TeachableAgent(ConversableAgent):
"""Teachable Agent, a subclass of ConversableAgent using a vector database to remember user teachings.
"""(Experimental) Teachable Agent, a subclass of ConversableAgent using a vector database to remember user teachings.
In this class, the term 'user' refers to any caller (human or not) sending messages to this agent.
Not yet tested in the group-chat setting."""

@@ -40,7 +40,7 @@ def __init__(
system_message (str): system message for the ChatCompletion inference.
human_input_mode (str): This agent should NEVER prompt the human for input.
llm_config (dict or False): llm inference configuration.
Please refer to [Completion.create](/docs/reference/oai/completion#create)
Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
for available options.
To disable llm-based auto reply, set to False.
analyzer_llm_config (dict or False): llm inference configuration passed to TextAnalyzerAgent.
@@ -125,11 +125,8 @@ def _generate_teachable_assistant_reply(
messages = messages.copy()
messages[-1]["content"] = new_user_text

# Generate a response.
msgs = self._oai_system_message + messages
response = oai.ChatCompletion.create(messages=msgs, **self.llm_config)
response_text = oai.ChatCompletion.extract_text_or_function_call(response)[0]
return True, response_text
# Generate a response by reusing existing generate_oai_reply
return self.generate_oai_reply(messages, sender, config)

def learn_from_user_feedback(self):
"""Reviews the user comments from the last chat, and decides what teachings to store as memos."""
@@ -265,12 +262,14 @@ def analyze(self, text_to_analyze, analysis_instructions):
self.send(recipient=self.analyzer, message=analysis_instructions, request_reply=True) # Request the reply.
return self.last_message(self.analyzer)["content"]
else:
# TODO: This is not an encouraged usage pattern. It breaks the conversation-centric design.
# consider using the arg "silent"
# Use the analyzer's method directly, to leave analyzer message out of the printed chat.
return self.analyzer.analyze_text(text_to_analyze, analysis_instructions)


class MemoStore:
"""
"""(Experimental)
Provides memory storage and retrieval for a TeachableAgent, using a vector database.
Each DB entry (called a memo) is a pair of strings: an input text and an output text.
The input text might be a question, or a task to perform.
autogen/agentchat/contrib/text_analyzer_agent.py (10 changes: 3 additions & 7 deletions)
@@ -10,7 +10,7 @@


class TextAnalyzerAgent(ConversableAgent):
"""Text Analysis agent, a subclass of ConversableAgent designed to analyze text as instructed."""
"""(Experimental) Text Analysis agent, a subclass of ConversableAgent designed to analyze text as instructed."""

def __init__(
self,
@@ -26,7 +26,7 @@ def __init__(
system_message (str): system message for the ChatCompletion inference.
human_input_mode (str): This agent should NEVER prompt the human for input.
llm_config (dict or False): llm inference configuration.
Please refer to [Completion.create](/docs/reference/oai/completion#create)
Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
for available options.
To disable llm-based auto reply, set to False.
teach_config (dict or None): Additional parameters used by TeachableAgent.
@@ -74,9 +74,5 @@ def analyze_text(self, text_to_analyze, analysis_instructions):
msg_text = "\n".join(
[analysis_instructions, text_to_analyze, analysis_instructions]
) # Repeat the instructions.
messages = self._oai_system_message + [{"role": "user", "content": msg_text}]

# Generate and return the analysis string.
response = oai.ChatCompletion.create(context=None, messages=messages, **self.llm_config)
output_text = oai.ChatCompletion.extract_text_or_function_call(response)[0]
return output_text
return self.generate_oai_reply([{"role": "user", "content": msg_text}], None, None)[1]
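Both contrib agents now delegate to the inherited `generate_oai_reply` instead of calling `oai.ChatCompletion` directly. That method returns a `(final, reply)` tuple, which is why the call above indexes `[1]`. A hedged sketch of the contract, using the `assistant` created earlier (direct calls like this are for illustration; normal use goes through the chat loop):

```python
# generate_oai_reply returns (final, reply): `final` says whether this reply
# function produced a terminal result; `reply` holds the generated text.
final, reply = assistant.generate_oai_reply(
    messages=[{"role": "user", "content": "Say hello."}],
)
print(final, reply)
```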
autogen/agentchat/conversable_agent.py (38 changes: 23 additions & 15 deletions)
@@ -4,7 +4,7 @@
import json
import logging
from typing import Any, Callable, Dict, List, Optional, Tuple, Type, Union
from autogen import oai
from autogen import OpenAIWrapper
from .agent import Agent
from autogen.code_utils import (
DEFAULT_MODEL,
@@ -93,7 +93,7 @@ def __init__(
- timeout (Optional, int): The maximum execution time in seconds.
- last_n_messages (Experimental, Optional, int): The number of messages to look back for code execution. Default to 1.
llm_config (dict or False): llm inference configuration.
Please refer to [Completion.create](/docs/reference/oai/completion#create)
Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
for available options.
To disable llm-based auto reply, set to False.
default_auto_reply (str or dict or None): default auto reply when no code execution or llm-based reply is generated.
@@ -107,10 +107,12 @@
)
if llm_config is False:
self.llm_config = False
self.client = None
else:
self.llm_config = self.DEFAULT_CONFIG.copy()
if isinstance(llm_config, dict):
self.llm_config.update(llm_config)
self.client = OpenAIWrapper(**self.llm_config)

self._code_execution_config = {} if code_execution_config is None else code_execution_config
self.human_input_mode = human_input_mode
@@ -254,8 +256,10 @@ def _message_to_dict(message: Union[Dict, str]):
"""
if isinstance(message, str):
return {"content": message}
else:
elif isinstance(message, dict):
return message
else:
return dict(message)

def _append_oai_message(self, message: Union[Dict, str], role, conversation_id: Agent) -> bool:
"""Append a message to the ChatCompletion conversation.
@@ -285,6 +289,7 @@ def _append_oai_message(self, message: Union[Dict, str], role, conversation_id:
oai_message["role"] = "function" if message.get("role") == "function" else role
if "function_call" in oai_message:
oai_message["role"] = "assistant" # only messages with role 'assistant' can have a function call.
oai_message["function_call"] = dict(oai_message["function_call"])
self._oai_messages[conversation_id].append(oai_message)
return True

@@ -306,7 +311,7 @@ def send(
- role (str): the role of the message, any role that is not "function"
will be modified to "assistant".
- context (dict): the context of the message, which will be passed to
[Completion.create](../oai/Completion#create).
[OpenAIWrapper.create](../oai/client#create).
For example, one agent can send a message A as:
```python
{
    ...
}
```
@@ -355,7 +360,7 @@ async def a_send(
- role (str): the role of the message, any role that is not "function"
will be modified to "assistant".
- context (dict): the context of the message, which will be passed to
[Completion.create](../oai/Completion#create).
[OpenAIWrapper.create](../oai/client#create).
For example, one agent can send a message A as:
```python
{
    ...
}
```
@@ -398,18 +403,21 @@ def _print_received_message(self, message: Union[Dict, str], sender: Agent):
content = message.get("content")
if content is not None:
if "context" in message:
content = oai.ChatCompletion.instantiate(
content = OpenAIWrapper.instantiate(
content,
message["context"],
self.llm_config and self.llm_config.get("allow_format_str_template", False),
)
print(content, flush=True)
if "function_call" in message:
func_print = f"***** Suggested function Call: {message['function_call'].get('name', '(No function name found)')} *****"
function_call = dict(message["function_call"])
func_print = (
f"***** Suggested function Call: {function_call.get('name', '(No function name found)')} *****"
)
print(colored(func_print, "green"), flush=True)
print(
"Arguments: \n",
message["function_call"].get("arguments", "(No arguments found)"),
function_call.get("arguments", "(No arguments found)"),
flush=True,
sep="",
)
@@ -447,7 +455,7 @@ def receive(
This field is only needed to distinguish between "function" or "assistant"/"user".
4. "name": In most cases, this field is not needed. When the role is "function", this field is needed to indicate the function name.
5. "context" (dict): the context of the message, which will be passed to
[Completion.create](../oai/Completion#create).
[OpenAIWrapper.create](../oai/client#create).
sender: sender of an Agent instance.
request_reply (bool or None): whether a reply is requested from the sender.
If None, the value is determined by `self.reply_at_receive[sender]`.
@@ -483,7 +491,7 @@ async def a_receive(
This field is only needed to distinguish between "function" or "assistant"/"user".
4. "name": In most cases, this field is not needed. When the role is "function", this field is needed to indicate the function name.
5. "context" (dict): the context of the message, which will be passed to
[Completion.create](../oai/Completion#create).
[OpenAIWrapper.create](../oai/client#create).
sender: sender of an Agent instance.
request_reply (bool or None): whether a reply is requested from the sender.
If None, the value is determined by `self.reply_at_receive[sender]`.
@@ -596,17 +604,17 @@ def generate_oai_reply(
config: Optional[Any] = None,
) -> Tuple[bool, Union[str, Dict, None]]:
"""Generate a reply using autogen.oai."""
llm_config = self.llm_config if config is None else config
if llm_config is False:
client = self.client if config is None else config
if client is None:
return False, None
if messages is None:
messages = self._oai_messages[sender]

# TODO: #1143 handle token limit exceeded error
response = oai.ChatCompletion.create(
context=messages[-1].pop("context", None), messages=self._oai_system_message + messages, **llm_config
response = client.create(
context=messages[-1].pop("context", None), messages=self._oai_system_message + messages
)
return True, oai.ChatCompletion.extract_text_or_function_call(response)[0]
return True, client.extract_text_or_function_call(response)[0]

def generate_code_execution_reply(
self,
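Two migration details stand out in this file: each agent now owns an `OpenAIWrapper` in `self.client`, and templating moves from `oai.ChatCompletion.instantiate` to the `OpenAIWrapper.instantiate` classmethod used in `_print_received_message`. A hedged sketch of that templating behavior:

```python
from autogen import OpenAIWrapper

# With allow_format_str_template enabled, a plain-string content is treated
# as a format template and filled in from the message's "context" dict.
content = OpenAIWrapper.instantiate(
    "Today is {day}.",  # message content
    {"day": "Monday"},  # message["context"]
    True,               # llm_config.get("allow_format_str_template", False)
)
print(content)  # -> "Today is Monday."
```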
autogen/agentchat/groupchat.py (2 changes: 1 addition & 1 deletion)
@@ -10,7 +10,7 @@

@dataclass
class GroupChat:
"""A group chat class that contains the following data fields:
"""(In preview) A group chat class that contains the following data fields:
- agents: a list of participating agents.
- messages: a list of messages in the group chat.
- max_round: the maximum number of rounds.
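The docstring now flags `GroupChat` as in-preview and lists its core fields. A hedged sketch of assembling one from those fields; both agents are assumed to be configured like the earlier `AssistantAgent` example:

```python
import autogen

writer = autogen.AssistantAgent(name="writer", llm_config={"config_list": config_list})
critic = autogen.AssistantAgent(name="critic", llm_config={"config_list": config_list})

# agents, messages, and max_round are the fields listed in the docstring.
groupchat = autogen.GroupChat(agents=[writer, critic], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config={"config_list": config_list})
```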
autogen/agentchat/user_proxy_agent.py (2 changes: 1 addition & 1 deletion)
@@ -63,7 +63,7 @@ def __init__(
- last_n_messages (Experimental, Optional, int): The number of messages to look back for code execution. Default to 1.
default_auto_reply (str or dict or None): the default auto reply message when no code execution or llm based reply is generated.
llm_config (dict or False): llm inference configuration.
Please refer to [Completion.create](/docs/reference/oai/completion#create)
Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
for available options.
Default to false, which disables llm-based auto reply.
system_message (str): system message for ChatCompletion inference.
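`UserProxyAgent` keeps `llm_config=False` by default, so it executes code and relays input rather than calling an LLM. A hedged end-to-end sketch tying the pieces together (the task message and work_dir are illustrative):

```python
import autogen

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # never prompt a human for input
    code_execution_config={"work_dir": "coding"},
)
# Start a two-agent chat with the assistant defined earlier.
user_proxy.initiate_chat(assistant, message="Plot a sine wave and save it to sine.png.")
```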