Graph group chat #753

Closed
wants to merge 30 commits into from
Changes from all commits
30 commits
a6d5f91
Refactored GroupChat to prepare for GraphGroupChat
joshkyh Nov 23, 2023
a26dde7
Added two unit test to test the refactoring
joshkyh Nov 23, 2023
c7ab5dc
All tests passed
joshkyh Nov 23, 2023
e3da160
pre-commit pass
joshkyh Nov 23, 2023
faf442f
improved _check_graph_validity
joshkyh Nov 24, 2023
aab6be9
More tests
joshkyh Nov 24, 2023
1d2dbc4
Documentation addition
joshkyh Nov 24, 2023
2efb3f0
pre-commit formats
joshkyh Nov 24, 2023
a34a9e2
Added GraphGroupChat Test
joshkyh Nov 24, 2023
a356c73
Comment out dev config_list
joshkyh Nov 24, 2023
6f2947d
precommit passed
joshkyh Nov 24, 2023
f264f41
Removed assert config list
joshkyh Nov 24, 2023
afb9c95
Relaxed versions
joshkyh Nov 24, 2023
c652834
Shift test casts to openai workload
joshkyh Nov 24, 2023
681bd59
Catch two errors
joshkyh Nov 24, 2023
e488e6e
try import GraphGroupChat
joshkyh Nov 24, 2023
51e9839
Update .github/workflows/contrib-openai.yml
joshkyh Nov 25, 2023
d805d2f
Update .github/workflows/contrib-openai.yml
joshkyh Nov 25, 2023
3d26f49
Resolve conflict
joshkyh Nov 27, 2023
a8e2f1e
Manually insert test for teachable agent
joshkyh Nov 27, 2023
0ab9d8b
Merge branch 'main' into GraphGroupChat
joshkyh Nov 27, 2023
81995ec
Merge branch 'main' into GraphGroupChat
joshkyh Nov 27, 2023
d1009d1
Changes from review
joshkyh Nov 27, 2023
f517aa9
Merge branch 'main' into GraphGroupChat
joshkyh Nov 28, 2023
327a359
Merge branch 'main' into GraphGroupChat
joshkyh Nov 30, 2023
744c3c1
Bring back original registered auto-reply
joshkyh Dec 3, 2023
a648684
Regroup Registered Auto Reply and Clarify GraphGroupChat
joshkyh Dec 3, 2023
3d1daf8
Update website/docs/Use-Cases/agent_chat.md
joshkyh Dec 3, 2023
ecb4dd6
Update website/docs/Use-Cases/agent_chat.md
joshkyh Dec 3, 2023
810faa5
Corrected GroupGroupChat link
joshkyh Dec 3, 2023
38 changes: 38 additions & 0 deletions .github/workflows/contrib-openai.yml
@@ -138,6 +138,44 @@ jobs:
with:
file: ./coverage.xml
flags: unittests
GraphGroupChat:
strategy:
matrix:
os: [ubuntu-latest]
python-version: ["3.8"]
runs-on: ${{ matrix.os }}
environment: openai1
steps:
# checkout to pr branch
- name: Checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install packages and dependencies
run: |
docker --version
python -m pip install --upgrade pip wheel
pip install -e .
python -c "import autogen"
pip install coverage
- name: Coverage
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
run: |
coverage run -a -m pytest test/agentchat/contrib/test_graphgroupchat.py
coverage xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
with:
file: ./coverage.xml
flags: unittests
TeachableAgent:
strategy:
matrix:
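Reviewer note: the new CI job above runs test/agentchat/contrib/test_graphgroupchat.py, which is not rendered in this view. A minimal sketch of a test in that spirit is shown below; the agent names, graph shape, and assertions are illustrative assumptions, not the PR's actual test code.

import networkx as nx
import autogen
from autogen.agentchat.contrib.graphgroupchat import GraphGroupChat


def test_select_speaker_follows_graph_edges():
    # LLM-free agents so the check runs without an OpenAI key.
    alice, bob, charlie = (
        autogen.ConversableAgent(name=name, llm_config=False, human_input_mode="NEVER")
        for name in ("Alice", "Bob", "Charlie")
    )

    # Alice opens the chat and may hand over to Bob or Charlie; Bob may only hand over to Charlie.
    graph = nx.DiGraph()
    graph.add_node("Alice", first_round_speaker=True)
    graph.add_nodes_from(["Bob", "Charlie"])
    graph.add_edges_from([("Alice", "Bob"), ("Alice", "Charlie"), ("Bob", "Charlie")])

    group_chat = GraphGroupChat(agents=[alice, bob, charlie], messages=[], graph=graph)

    # With no previous speaker, only nodes flagged first_round_speaker are eligible.
    assert group_chat.select_speaker(None, alice).name == "Alice"
    # After Bob speaks, Charlie is the only successor in the graph.
    assert group_chat.select_speaker(bob, alice).name == "Charlie"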
163 changes: 163 additions & 0 deletions autogen/agentchat/contrib/graphgroupchat.py
@@ -0,0 +1,163 @@
import logging

try:
import networkx as nx
import matplotlib.pyplot as plt
except ImportError as e:
logging.fatal("Failed to import networkx or matplotlib. Try running 'pip install autogen[graphs]'")
raise e

import autogen
from autogen.agentchat.assistant_agent import AssistantAgent
from autogen.agentchat.groupchat import GroupChat, Agent, ConversableAgent

import random
from typing import List, Dict


class GraphGroupChat(GroupChat):
"""(In preview) A group chat class that contains the following data fields:
- agents: a list of participating agents.
- messages: a list of messages in the group chat.
- graph: a networkx DiGraph whose edges define which agents are eligible to speak next.
- max_round: the maximum number of rounds.
- admin_name: the name of the admin agent if there is one. Default is "Admin".
KeyboardInterrupt will make the admin agent take over.
- func_call_filter: whether to enforce the function call filter. Default is True.
When set to True and a message is a function call suggestion,
the next speaker will be chosen from an agent whose `function_map` contains
the corresponding function name.
- allow_repeat_speaker: whether to allow the same speaker to speak consecutively. Default is True.
"""

def __init__(
self,
agents: List[Agent],
messages: List[Dict],
graph: nx.DiGraph,
max_round: int = 10,
admin_name: str = "Admin",
func_call_filter: bool = True,
allow_repeat_speaker: bool = True,
):
# Inherit from GroupChat, and initialize with the given parameters (except graph)
super().__init__(
agents=agents,
messages=messages,
max_round=max_round,
admin_name=admin_name,
func_call_filter=func_call_filter,
speaker_selection_method="graph",
allow_repeat_speaker=allow_repeat_speaker,
)

self.previous_speaker = None # Keep track of the previous speaker
self.graph = graph # The graph depicting who are the next speakers available

# Check that the graph is a DiGraph
if not isinstance(self.graph, nx.DiGraph):
raise ValueError("The graph must be a networkx DiGraph.")

def _check_graph_validity(self):
"""
Check for the following
1. The graph has at least one node
2. The graph has at least one edge
3. The graph has at least one node with 'first_round_speaker' set to True
4. If self.allow_repeat_speaker is False, then the graph has no self-loops
5. Warning if there are isolated agent nodes
6. Warning if there are any agents in self.agents not in graph
"""

# Check 1. The graph has at least one node
if len(self.graph.nodes) == 0:
raise ValueError("The graph has no nodes.")

# Check 2. The graph has at least one edge
if len(self.graph.edges) == 0:
raise ValueError("The graph has no edges.")

# Check 3. The graph has at least one node with 'first_round_speaker' set to True
first_round_speakers = [
agent
for agent in self.agents
if agent.name in self.graph.nodes and self.graph.nodes[agent.name].get("first_round_speaker", False)
]
if not first_round_speakers:
raise ValueError("The graph has no nodes with 'first_round_speaker' set to True.")

# Check 4. If self.allow_repeat_speaker is False, then the graph has no self-loops
if not self.allow_repeat_speaker and any(
[self.graph.has_edge(agent.name, agent.name) for agent in self.agents]
):
raise ValueError("The graph has self-loops, but self.allow_repeat_speaker is False.")

# Check 5. Warning if there are isolated agent nodes
if any([self.graph.degree(agent.name) == 0 for agent in self.agents]):
# Name the isolated agents
isolated_agents = [agent.name for agent in self.agents if self.graph.degree(agent.name) == 0]
logging.warning(f"The graph has isolated agents: {isolated_agents}")

# Check 6. Warning if there are any agents in self.agents not in graph
if any([agent.name not in self.graph.nodes for agent in self.agents]):
# Name the agents not in the graph
agents_not_in_graph = [agent.name for agent in self.agents if agent.name not in self.graph.nodes]
logging.warning(f"The graph has agents not in self.agents: {agents_not_in_graph}")

# Run the graph validity check
self._check_graph_validity()

# All methods are from the GroupChat class, except for select_speaker
def select_speaker(self, last_speaker: Agent, selector: ConversableAgent) -> Agent:
self.previous_speaker = last_speaker

# Check if last message suggests a next speaker
last_message = self.messages[-1] if self.messages else None
suggested_next = None

if last_message:
if "NEXT:" in last_message["content"]:
suggested_next = last_message["content"].split("NEXT:")[-1].strip()
# Strip any full stops and commas from the suggested name
suggested_next = suggested_next.replace(".", "").replace(",", "")

# Selecting first round speaker
if self.previous_speaker is None and self.graph is not None:
eligible_speakers = [
agent for agent in self.agents if self.graph.nodes[agent.name].get("first_round_speaker", False)
]

# Selecting successors of the previous speaker
elif self.previous_speaker is not None and self.graph is not None:
eligible_speaker_names = [target for target in self.graph.successors(self.previous_speaker.name)]
eligible_speakers = [agent for agent in self.agents if agent.name in eligible_speaker_names]

else:
eligible_speakers = self.agents

# Three attempts at getting the next_speaker
# 1. Use suggested_next if it matches the name of an eligible speaker
# 2. Use the LLM to pick a speaker, given that there is some context in self.messages
# 3. Random choice (catch-all)
next_speaker = None

if eligible_speakers:
# 1. Use suggested_next if it matches the name of an eligible speaker
if suggested_next in [speaker.name for speaker in eligible_speakers]:
next_speaker = self.agent_by_name(suggested_next)

else:
if len(self.messages) > 1:
# 2. Use the LLM to pick a speaker, given that there is some context in self.messages
next_speaker, self.agents, last_speaker, selector = self.auto_select_speaker(
self.agents, last_speaker, selector
)

if next_speaker is None:
# 3. Random (catch-all)
next_speaker = random.choice(eligible_speakers)

return next_speaker
else:
# Cannot return next_speaker with no eligible speakers
raise ValueError("No eligible speakers found based on the graph constraints.")
77 changes: 48 additions & 29 deletions autogen/agentchat/groupchat.py
@@ -28,6 +28,7 @@ class GroupChat:
- "manual": the next speaker is selected manually by user input.
- "random": the next speaker is selected randomly.
- "round_robin": the next speaker is selected in a round robin fashion, i.e., iterating in the same order as provided in `agents`.
- "graph": the next speaker is selected based on a graph. The select_speaker method is overridden by GraphGroupChat.
- allow_repeat_speaker: whether to allow the same speaker to speak consecutively. Default is True.
"""

@@ -39,7 +40,7 @@ class GroupChat:
speaker_selection_method: str = "auto"
allow_repeat_speaker: bool = True

_VALID_SPEAKER_SELECTION_METHODS = ["auto", "manual", "random", "round_robin"]
_VALID_SPEAKER_SELECTION_METHODS = ["auto", "manual", "random", "round_robin", "graph"]

@property
def agent_names(self) -> List[str]:
@@ -99,7 +100,41 @@ def manual_select_speaker(self, agents: List[Agent]) -> Agent:
print(f"Invalid input. Please enter a number between 1 and {_n_agents}.")
return None

def select_speaker(self, last_speaker: Agent, selector: ConversableAgent):
def auto_select_speaker(
self, agents: List[Agent], last_speaker: Agent, selector: ConversableAgent
) -> (Agent, List[Agent], Agent, ConversableAgent):
# Encapsulates the auto speaker selection logic as a class method so that it can be reused through inheritance in GraphGroupChat
# Returns the selected agent together with agents, last_speaker, and selector so the caller's state from select_speaker is preserved
selector.update_system_message(self.select_speaker_msg(agents))
final, name = selector.generate_oai_reply(
self.messages
+ [
{
"role": "system",
"content": f"Read the above conversation. Then select the next role from {[agent.name for agent in agents]} to play. Only return the role.",
}
]
)
if not final:
# the LLM client is None, thus no reply is generated. Use round robin instead.
return self.next_agent(last_speaker, agents), agents, last_speaker, selector

# If exactly one agent is mentioned, use it. Otherwise, leave the OAI response unmodified
mentions = self._mentioned_agents(name, agents)
if len(mentions) == 1:
name = next(iter(mentions))
else:
logger.warning(
f"GroupChat select_speaker failed to resolve the next speaker's name. This is because the speaker selection OAI call returned:\n{name}"
)

# Return the result
try:
return self.agent_by_name(name), agents, last_speaker, selector
except ValueError:
return self.next_agent(last_speaker, agents), agents, last_speaker, selector

def select_speaker(self, last_speaker: Agent, selector: ConversableAgent) -> Agent:
"""Select the next speaker."""
if self.speaker_selection_method.lower() not in self._VALID_SPEAKER_SELECTION_METHODS:
raise ValueError(
@@ -109,6 +144,7 @@ def select_speaker(self, last_speaker: Agent, selector: ConversableAgent):

agents = self.agents
n_agents = len(agents)

# Warn if GroupChat is underpopulated
if n_agents < 2:
raise ValueError(
@@ -148,40 +184,23 @@ def select_speaker(self, last_speaker: Agent, selector: ConversableAgent):
selected_agent = self.manual_select_speaker(agents)
if selected_agent:
return selected_agent

elif self.speaker_selection_method.lower() == "round_robin":
return self.next_agent(last_speaker, agents)

elif self.speaker_selection_method.lower() == "random":
return random.choice(agents)

# auto speaker selection
selector.update_system_message(self.select_speaker_msg(agents))
final, name = selector.generate_oai_reply(
self.messages
+ [
{
"role": "system",
"content": f"Read the above conversation. Then select the next role from {[agent.name for agent in agents]} to play. Only return the role.",
}
]
)
if not final:
# the LLM client is None, thus no reply is generated. Use round robin instead.
return self.next_agent(last_speaker, agents)

# If exactly one agent is mentioned, use it. Otherwise, leave the OAI response unmodified
mentions = self._mentioned_agents(name, agents)
if len(mentions) == 1:
name = next(iter(mentions))
else:
logger.warning(
f"GroupChat select_speaker failed to resolve the next speaker's name. This is because the speaker selection OAI call returned:\n{name}"
elif self.speaker_selection_method.lower() == "graph":
# This should not trigger because GraphGroupChat select_speaker overrides GroupChat select_speaker
raise ValueError(
f"GroupChat speaker_selection_method is set to '{self.speaker_selection_method}'. "
f"GraphGroupChat select_speaker overrides GroupChat select_speaker. "
)

# Return the result
try:
return self.agent_by_name(name)
except ValueError:
return self.next_agent(last_speaker, agents)
# Since the method is neither manual, round_robin, nor random, it must be auto
auto_selected_speaker, agents, last_speaker, selector = self.auto_select_speaker(agents, last_speaker, selector)
return auto_selected_speaker

def _participant_roles(self, agents: List[Agent] = None) -> str:
# Default to all agents registered
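The point of factoring auto_select_speaker out of select_speaker is reuse via inheritance. Below is a sketch of a hypothetical subclass (not part of this PR) that narrows the candidate list before delegating to the shared LLM-based selection; the class name and filtering rule are made up for illustration.

from typing import List

from autogen.agentchat.groupchat import Agent, ConversableAgent, GroupChat


class FilteredGroupChat(GroupChat):
    """Toy subclass: after the first round, only agents whose names start with 'A' are eligible."""

    def select_speaker(self, last_speaker: Agent, selector: ConversableAgent) -> Agent:
        if len(self.messages) <= 1:
            candidates: List[Agent] = self.agents
        else:
            candidates = [agent for agent in self.agents if agent.name.startswith("A")] or self.agents
        # Reuse the refactored LLM-based selection instead of duplicating the OAI call.
        next_speaker, _, _, _ = self.auto_select_speaker(candidates, last_speaker, selector)
        return next_speaker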
446 changes: 192 additions & 254 deletions notebook/agentchat_graph_modelling_language_using_select_speaker.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion setup.py
@@ -52,7 +52,7 @@
"retrievechat": ["chromadb", "sentence_transformers", "pypdf", "ipython"],
"teachable": ["chromadb"],
"lmm": ["replicate", "pillow"],
"graphs": ["networkx~=3.2.1", "matplotlib~=3.8.1"],
"graphs": ["networkx", "matplotlib"],
},
classifiers=[
"Programming Language :: Python :: 3",