Commit

CompressibleAgent: require `model` field in llm_config (microsoft#1903)

* update

* update

* update

* update

* Update autogen/agentchat/contrib/compressible_agent.py

---------

Co-authored-by: Qingyun Wu <qingyun0327@gmail.com>
yiranwu0 and qingyun-wu committed Mar 7, 2024
1 parent 457a5a1 commit 76bc505
Showing 2 changed files with 10 additions and 2 deletions.
3 changes: 3 additions & 0 deletions autogen/agentchat/contrib/compressible_agent.py
@@ -73,6 +73,7 @@ def __init__(
 system_message (str): system message for the ChatCompletion inference.
     Please override this attribute if you want to reprogram the agent.
 llm_config (dict): llm inference configuration.
+    Note: you must set `model` in llm_config. It will be used to compute the token count.
     Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
     for available options.
 is_termination_msg (function): a function that takes a message in the form of a dictionary
@@ -121,6 +122,8 @@ def __init__(
             self.llm_compress_config = False
             self.compress_client = None
         else:
+            if "model" not in llm_config:
+                raise ValueError("llm_config must contain the 'model' field.")
             self.llm_compress_config = self.llm_config.copy()
             # remove functions
             if "functions" in self.llm_compress_config:
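
In practice, the new check means a CompressibleAgent can no longer be constructed without a model name. A minimal sketch of the behavior after this commit (the agent name and the OAI_CONFIG_LIST path are illustrative; CompressibleAgent and config_list_from_json are the autogen APIs used in the notebook below):

    import autogen
    from autogen.agentchat.contrib.compressible_agent import CompressibleAgent

    config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

    # Without a "model" key, construction now fails fast:
    try:
        CompressibleAgent(name="assistant", llm_config={"config_list": config_list})
    except ValueError as err:
        print(err)  # llm_config must contain the 'model' field.

    # With "model" set, the agent knows which tokenizer to use for counting:
    assistant = CompressibleAgent(
        name="assistant",
        llm_config={"model": "gpt-4-1106-preview", "config_list": config_list},
    )
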
9 changes: 7 additions & 2 deletions notebook/agentchat_compression.ipynb
@@ -92,7 +92,7 @@
 "config_list = autogen.config_list_from_json(\n",
 "    \"OAI_CONFIG_LIST\",\n",
 "    filter_dict={\n",
-"        \"model\": [\"gpt-4\", \"gpt-4-0314\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n",
+"        \"model\": [\"gpt-4-1106-preview\"],\n",
 "    },\n",
 ")"
@@ -139,8 +139,10 @@
 "## Example 1\n",
 "This example is from [agentchat_MathChat.ipynb](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_MathChat.ipynb). Compression with code execution.\n",
 "\n",
+"You must set the `model` field in `llm_config`, as it will be used to calculate the token usage.\n",
+"\n",
 "Note: we set `trigger_count=600` and `leave_last_n=2`. In this example, we set a low trigger_count to demonstrate the compression feature.\n",
-"The token count after compression is still bigger than the trigger count, mainly because the trigger count is low and the first and last 2 messages are not compressed. Thus, compression is performed at each turn. In practice, you want to adjust trigger_count to a bigger number and set `leave_last_n` properly to avoid compression at each turn."
+"The token count after compression is still bigger than the trigger count, mainly because the trigger count is low and the first and last 2 messages are not compressed. Thus, compression is performed at each turn. In practice, you want to adjust trigger_count to a bigger number and set `leave_last_n` properly to avoid compression at each turn.\n"
 ]
},
{
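
The interplay between trigger_count and leave_last_n described above is easier to see assembled. A hedged sketch of the Example 1 setup using the values quoted in the cell (it assumes the CompressibleAgent import and config_list from the sketch above; the compress_config keys follow this notebook):

    assistant = CompressibleAgent(
        name="assistant",
        llm_config={
            "model": "gpt-4-1106-preview",  # required: used to compute token counts
            "config_list": config_list,
        },
        compress_config={
            "mode": "COMPRESS",
            "trigger_count": 600,  # deliberately low, so compression fires every turn here
            "leave_last_n": 2,     # the last 2 messages are never compressed
        },
    )

Because the first message and the last 2 are left untouched, a low trigger count is exceeded again immediately after compression, which is why the cell recommends a larger trigger_count in practice.
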
@@ -548,6 +550,7 @@
 "        \"timeout\": 600,\n",
 "        \"cache_seed\": 42,\n",
 "        \"config_list\": config_list,\n",
+"        \"model\": \"gpt-4-1106-preview\",  # you must set the model field in llm_config, as it will be used to calculate the token usage.\n",
 "    },\n",
 "    compress_config={\n",
 "        \"mode\": \"COMPRESS\",\n",
@@ -785,6 +788,7 @@
 ],
 "source": [
 "llm_config = {\n",
+"    \"model\": \"gpt-4-1106-preview\",\n",
 "    \"functions\": [\n",
 "        {\n",
 "            \"name\": \"python\",\n",
@@ -1249,6 +1253,7 @@
 "        \"timeout\": 600,\n",
 "        \"cache_seed\": 43,\n",
 "        \"config_list\": config_list,\n",
+"        \"model\": \"gpt-4-1106-preview\",\n",
 "    },\n",
 "    compress_config={\n",
 "        \"mode\": \"CUSTOMIZED\",\n",
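
Why the model name is needed at all: token counting is model-specific, because each model maps text to tokens with its own encoding. A small illustration with tiktoken (an assumption for this sketch; the agent's internal counting may differ in detail):

    import tiktoken

    # The model name resolves to a concrete tokenizer; without it, the agent
    # cannot tell how many tokens the conversation history occupies.
    encoding = tiktoken.encoding_for_model("gpt-4")
    print(len(encoding.encode("Hello, how can I help you today?")))  # token count under this model's encoding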
