Update autogen.md to include Azure config example + patch for `pyautogen>=0.2.0` (#555)

* Update autogen.md

* in groupchat example add an azure elif

* fixed missing azure mappings + corrected the gpt-4-turbo one

* Updated MemGPT AutoGen agent to take credentials and store them in the config (allows users to use memgpt+autogen without running memgpt configure), also patched api_base kwarg for autogen >=v0.2

* add note about 0.2 testing

* added overview to autogen integration page

* default examples to openai, sync config header between the two main examples, change speaker mode to round-robin in 2-way chat to suppress warning

* sync config header on last example (not used in docs)

* refactor to make sure we use existing config when writing out extra credentials

* fixed bug in local LLM where we need to comment out api_type (for pyautogen>=0.2.0)
cpacker authored Dec 4, 2023
1 parent be08a74 commit be03282
Showing 6 changed files with 325 additions and 51 deletions.
105 changes: 96 additions & 9 deletions docs/autogen.md
@@ -4,11 +4,61 @@

You can also check the [GitHub discussion page](https://github.com/cpacker/MemGPT/discussions/65), but the Discord server is the official support channel and is monitored more actively.

!!! warning "Tested with `pyautogen` v0.2.0"

    The MemGPT+AutoGen integration was last tested using AutoGen version v0.2.0.

    If you are having issues, please first try installing that specific version of AutoGen: `pip install pyautogen==0.2.0`

## Overview

MemGPT includes an AutoGen agent class ([MemGPTAgent](https://github.com/cpacker/MemGPT/blob/main/memgpt/autogen/memgpt_agent.py)) that mimics the interface of AutoGen's [ConversableAgent](https://microsoft.github.io/autogen/docs/reference/agentchat/conversable_agent#conversableagent-objects), allowing you to plug MemGPT into the AutoGen framework.

To create a MemGPT AutoGen agent for use in an AutoGen script, you can use the `create_memgpt_autogen_agent_from_config` constructor:
```python
from memgpt.autogen.memgpt_agent import create_memgpt_autogen_agent_from_config

# create a config for the MemGPT AutoGen agent
config_list_memgpt = [
    {
        "model": "gpt-4",
        "context_window": 8192,
        "preset": "memgpt_chat",
        # OpenAI specific
        "model_endpoint_type": "openai",
        "openai_key": YOUR_OPENAI_KEY,
    },
]
llm_config_memgpt = {"config_list": config_list_memgpt, "seed": 42}

# there are some additional options for how you want the interface to look (more info below)
interface_kwargs = {
    "debug": False,
    "show_inner_thoughts": True,
    "show_function_outputs": False,
}

# then pass the config to the constructor
memgpt_autogen_agent = create_memgpt_autogen_agent_from_config(
    "MemGPT_agent",
    llm_config=llm_config_memgpt,
    system_message="Your desired MemGPT persona",
    interface_kwargs=interface_kwargs,
    default_auto_reply="...",
)
```

Now this `memgpt_autogen_agent` can be used in standard AutoGen scripts:
```python
import autogen

# ... assuming we have some other AutoGen agents other_agent_1 and other_agent_2
groupchat = autogen.GroupChat(agents=[memgpt_autogen_agent, other_agent_1, other_agent_2], messages=[], max_round=12)
```
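One of the changes in this commit switches the two-way chat example to round-robin speaker selection to suppress AutoGen's speaker-selection warning (AutoGen's `GroupChat` supports this via its `speaker_selection_method` setting). As a standalone sketch of what round-robin ordering means (the agent names are illustrative, and this loop only models the speaking order, not the actual chat):

```python
from itertools import cycle

# Agents in the group chat, in a fixed order (names are illustrative)
agents = ["MemGPT_agent", "other_agent_1", "other_agent_2"]
max_round = 12

# With round-robin selection, each round the next agent in the cycle speaks
speakers = cycle(agents)
order = [next(speakers) for _ in range(max_round)]

print(order[:4])  # ['MemGPT_agent', 'other_agent_1', 'other_agent_2', 'MemGPT_agent']
```

With `"auto"` selection the group chat manager's LLM picks the next speaker instead, which is where the warning about underspecified speaker selection can come from in a simple 2-way chat.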

[examples/agent_groupchat.py](https://github.com/cpacker/MemGPT/blob/main/memgpt/autogen/examples/agent_groupchat.py) contains an example of a groupchat where one of the agents is powered by MemGPT. If you are using OpenAI, you can also run the example using the [notebook](https://github.com/cpacker/MemGPT/blob/main/memgpt/autogen/examples/memgpt_coder_autogen.ipynb).

In the next section, we'll go through the example in depth to demonstrate how to set up MemGPT and AutoGen to run with a local LLM backend.

## Example: connecting AutoGen + MemGPT to non-OpenAI LLMs

@@ -58,8 +108,9 @@

In order to run this example on a local LLM, go to lines 46-66 in [examples/agent_groupchat.py](https://github.com/cpacker/MemGPT/blob/main/memgpt/autogen/examples/agent_groupchat.py) and fill in the config files with your local LLM's deployment details.

`config_list` is used by non-MemGPT AutoGen agents, which expect an OpenAI-compatible API. `config_list_memgpt` is used by MemGPT AutoGen agents, and requires additional settings specific to MemGPT (such as the `model_wrapper` and `context_window`). Depending on which LLM backend you want to use, you'll have to set up your `config_list` and `config_list_memgpt` differently:

#### web UI example
For example, if you are using web UI, it will look something like this:
```python
# Non-MemGPT agents will still use local LLMs, but they will use the ChatCompletions endpoint
config_list = [
    {
        "model": "NULL",  # not needed
        "api_base": "http://127.0.0.1:5001/v1",  # notice port 5001 for web UI
        "api_key": "NULL",  # not needed
        "api_type": "open_ai",
    },
]

# MemGPT-powered agents require additional MemGPT-specific settings
config_list_memgpt = [
    {
        "preset": DEFAULT_PRESET,
        "model": None,
        "model_wrapper": "airoboros-l2-70b-2.1",  # airoboros is the default wrapper and should work for most models
        "model_endpoint_type": "webui",
        "model_endpoint": "http://127.0.0.1:5000",  # notice port 5000 for web UI's raw API
        "context_window": 8192,  # the context window of your model
    },
]
```

#### LM Studio example
If you are using LM Studio, then you'll need to change the `api_base` in `config_list`, and `model_endpoint_type` + `model_endpoint` in `config_list_memgpt`:
```python
# Non-MemGPT agents will still use local LLMs, but they will use the ChatCompletions endpoint
config_list = [
    {
        "model": "NULL",  # not needed
        "api_base": "http://localhost:1234/v1",  # LM Studio's ChatCompletions-compatible endpoint
        "api_key": "NULL",  # not needed
        "api_type": "open_ai",
    },
]

# MemGPT-powered agents require additional MemGPT-specific settings
config_list_memgpt = [
    {
        "preset": DEFAULT_PRESET,
        "model": None,
        "model_wrapper": "airoboros-l2-70b-2.1",  # airoboros is the default wrapper and should work for most models
        "model_endpoint_type": "lmstudio",
        "model_endpoint": "http://localhost:1234",  # the IP address of your LLM backend
        "context_window": 8192,  # the context window of your model
    },
]
```

#### OpenAI example
If you are using the OpenAI API (e.g. using `gpt-4-turbo` via your own OpenAI API account), then the `config_list` for the AutoGen agent and `config_list_memgpt` for the MemGPT AutoGen agent will look different (a lot simpler):
```python
# This config is for autogen agents that are not powered by MemGPT
config_list = [
    {
        "model": "gpt-4",
        "api_key": os.getenv("OPENAI_API_KEY"),
    }
]

# This config is for autogen agents that are powered by MemGPT
config_list_memgpt = [
    {
        "preset": DEFAULT_PRESET,  # from memgpt.presets.presets import DEFAULT_PRESET
        "model": "gpt-4",
        "model_wrapper": None,
        "model_endpoint_type": None,
        "model_endpoint": None,
        "context_window": 8192,  # gpt-4 context window
    },
]
```
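Since the right `context_window` depends on the model you pick, it can help to look it up from a table rather than hardcode it; MemGPT ships such a table as `memgpt.constants.LLM_MAX_TOKENS` (used in the example scripts). A minimal self-contained sketch (the `MODEL_CONTEXT_WINDOWS` table and `make_openai_memgpt_config` helper are illustrative, not part of MemGPT's API):

```python
# Illustrative table mapping model names to context windows
# (MemGPT ships a fuller version as memgpt.constants.LLM_MAX_TOKENS)
MODEL_CONTEXT_WINDOWS = {
    "gpt-4": 8192,
    "gpt-4-1106-preview": 128000,  # gpt-4-turbo
    "gpt-3.5-turbo-16k": 16384,
}

def make_openai_memgpt_config(model: str, openai_key: str) -> dict:
    """Build one config_list_memgpt entry for the OpenAI backend (illustrative helper)."""
    return {
        "model": model,
        "preset": "memgpt_chat",
        "model_wrapper": None,
        "model_endpoint_type": "openai",
        "model_endpoint": "https://api.openai.com/v1",
        "openai_key": openai_key,
        "context_window": MODEL_CONTEXT_WINDOWS[model],  # looked up, not hardcoded
    }

config_list_memgpt = [make_openai_memgpt_config("gpt-4", "sk-...")]
print(config_list_memgpt[0]["context_window"])  # 8192
```

Looking the value up keeps the config consistent if you later swap `gpt-4` for `gpt-4-1106-preview`, whose window is 128000 rather than 8192.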

#### Azure OpenAI example
Azure OpenAI API setup will be similar to the OpenAI API setup, but requires a few additional config variables. First, make sure that you've set all the related Azure variables referenced in [our MemGPT Azure setup page](https://memgpt.readthedocs.io/en/latest/endpoints) (`AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_VERSION`, `AZURE_OPENAI_ENDPOINT`, etc.). If you have all the variables set correctly, you should be able to create configs by pulling from the environment variables:
```python
# This config is for autogen agents that are not powered by MemGPT
config_list = [
    {
        "model": "gpt-4",  # make sure you choose a model that you have access to deploy on your Azure account
        "api_type": "azure",
        "api_key": os.getenv("AZURE_OPENAI_API_KEY"),
        "api_version": os.getenv("AZURE_OPENAI_VERSION"),
        "api_base": os.getenv("AZURE_OPENAI_ENDPOINT"),
    }
]

# This config is for autogen agents that are powered by MemGPT
config_list_memgpt = [
    {
        "preset": DEFAULT_PRESET,
        "model": "gpt-4",  # make sure you choose a model that you have access to deploy on your Azure account
        "model_wrapper": None,
        "context_window": 8192,  # gpt-4 context window
        # required setup for Azure
        "model_endpoint_type": "azure",
        "model_endpoint": os.getenv("AZURE_OPENAI_ENDPOINT"),
        "azure_key": os.getenv("AZURE_OPENAI_API_KEY"),
        "azure_endpoint": os.getenv("AZURE_OPENAI_ENDPOINT"),
        "azure_version": os.getenv("AZURE_OPENAI_VERSION"),
        # if you are using Azure for embeddings too, include the following line:
        "embedding_endpoint_type": "azure",
    },
]
```
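Because a missing Azure variable typically surfaces later as a confusing API error, it can be worth validating the environment before building the configs. A minimal sketch (the `missing_azure_vars` helper is illustrative, not part of MemGPT):

```python
import os

# The Azure setup page asks for these variables; check them before building configs
REQUIRED_AZURE_VARS = ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_VERSION", "AZURE_OPENAI_ENDPOINT"]

def missing_azure_vars(env=None):
    """Return the names of required Azure variables that are unset or empty."""
    if env is None:
        env = os.environ
    return [name for name in REQUIRED_AZURE_VARS if not env.get(name)]

# Example with only the key set:
missing = missing_azure_vars({"AZURE_OPENAI_API_KEY": "abc"})
print(missing)  # ['AZURE_OPENAI_VERSION', 'AZURE_OPENAI_ENDPOINT']
```

In a script you would call `missing_azure_vars()` against the real environment and `assert` that the result is empty, which is effectively what the updated `agent_autoreply.py` example does with its `assert` on the three variables.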
@@ -220,7 +308,6 @@

User_proxy (to chat_manager):

### Part 4: Attaching documents to MemGPT AutoGen agents


[examples/agent_docs.py](https://github.com/cpacker/MemGPT/blob/main/memgpt/autogen/examples/agent_docs.py) contains an example of a groupchat where the MemGPT autogen agent has access to documents.

First, follow the instructions in [Example - chat with your data - Creating an external data source](../example_data/#creating-an-external-data-source):
87 changes: 71 additions & 16 deletions memgpt/autogen/examples/agent_autoreply.py
@@ -13,42 +13,94 @@
import autogen
from memgpt.autogen.memgpt_agent import create_memgpt_autogen_agent_from_config
from memgpt.presets.presets import DEFAULT_PRESET
from memgpt.constants import LLM_MAX_TOKENS

LLM_BACKEND = "openai"
# LLM_BACKEND = "azure"
# LLM_BACKEND = "local"

if LLM_BACKEND == "openai":
    # For demo purposes let's use gpt-4
    model = "gpt-4"

    openai_api_key = os.getenv("OPENAI_API_KEY")
    assert openai_api_key, "You must set OPENAI_API_KEY to run this example"

    # This config is for AutoGen agents that are not powered by MemGPT
    config_list = [
        {
            "model": model,
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    ]

    # This config is for AutoGen agents that are powered by MemGPT
    config_list_memgpt = [
        {
            "model": model,
            "context_window": LLM_MAX_TOKENS[model],
            "preset": DEFAULT_PRESET,
            "model_wrapper": None,
            # OpenAI specific
            "model_endpoint_type": "openai",
            "model_endpoint": "https://api.openai.com/v1",
            "openai_key": openai_api_key,
        },
    ]

elif LLM_BACKEND == "azure":
    # Make sure that you have access to this deployment/model on your Azure account!
    # If you don't have access to the model, the code will fail
    model = "gpt-4"

    azure_openai_api_key = os.getenv("AZURE_OPENAI_KEY")
    azure_openai_version = os.getenv("AZURE_OPENAI_VERSION")
    azure_openai_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
    assert (
        azure_openai_api_key is not None and azure_openai_version is not None and azure_openai_endpoint is not None
    ), "Set all the required Azure OpenAI variables (see: https://memgpt.readthedocs.io/en/latest/endpoints/#azure)"

    # This config is for AutoGen agents that are not powered by MemGPT
    config_list = [
        {
            "model": model,
            "api_type": "azure",
            "api_key": azure_openai_api_key,
            "api_version": azure_openai_version,
            # NOTE: on versions of pyautogen < 0.2.0, use "api_base"
            # "api_base": azure_openai_endpoint,
            "base_url": azure_openai_endpoint,
        }
    ]

    # This config is for AutoGen agents that are powered by MemGPT
    config_list_memgpt = [
        {
            "model": model,
            "context_window": LLM_MAX_TOKENS[model],
            "preset": DEFAULT_PRESET,
            "model_wrapper": None,
            # Azure specific
            "model_endpoint_type": "azure",
            "azure_key": azure_openai_api_key,
            "azure_endpoint": azure_openai_endpoint,
            "azure_version": azure_openai_version,
        },
    ]

elif LLM_BACKEND == "local":
    # Example using LM Studio on a local machine
    # You will have to change the parameters based on your setup

    # Non-MemGPT agents will still use local LLMs, but they will use the ChatCompletions endpoint
    config_list = [
        {
            "model": "NULL",  # not needed
            # NOTE: on versions of pyautogen < 0.2.0 use "api_base", and also uncomment "api_type"
            # "api_base": "http://localhost:1234/v1",
            # "api_type": "open_ai",
            "base_url": "http://localhost:1234/v1",  # ex. "http://127.0.0.1:5001/v1" if you are using web UI, "http://localhost:1234/v1/" if you are using LM Studio
            "api_key": "NULL",  # not needed
        },
    ]

@@ -57,13 +109,16 @@
    config_list_memgpt = [
        {
            "preset": DEFAULT_PRESET,
            "model": None,  # only required for Ollama, see: https://memgpt.readthedocs.io/en/latest/ollama/
            "model_wrapper": "airoboros-l2-70b-2.1",  # airoboros is the default wrapper and should work for most models
            "model_endpoint_type": "lmstudio",  # can use webui, ollama, llamacpp, etc.
            "model_endpoint": "http://localhost:1234",  # the IP address of your LLM backend
            "context_window": 8192,  # the context window of your model (for Mistral 7B-based models, it's likely 8192)
        },
    ]

else:
    raise ValueError(LLM_BACKEND)


# If USE_MEMGPT is False, then this example will be the same as the official AutoGen repo
# (https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat.ipynb)