OAI_CONFIG_LIST details in documentation #3

Closed
BeibinLi opened this issue Sep 18, 2023 · 12 comments

Comments

@BeibinLi
Collaborator

It would be helpful to add details about OAI_CONFIG_LIST in the documentation so that users can quickly get started with the OAI functions.

@sonichi sonichi transferred this issue from microsoft/FLAML Sep 20, 2023
@sonichi
Contributor

sonichi commented Sep 20, 2023

Could you read https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#runtime-error and see if the issue is addressed?

@rustyorb

Is there a working example of this file, in JSON format? I can't get this file to work or parse correctly when I create it.

@AaronWard
Collaborator

AaronWard commented Sep 27, 2023

I was a bit confused about how this works. It would be nice to just have something like load_dotenv() to handle the keys. Anyway, going off the examples in the /notebooks directory, I made a file called OAI_CONFIG_LIST (with no file extension):

[
    {
        "model": "gpt-4",
        "api_key": "***"
    },
    {
        "model": "gpt-3.5-turbo",
        "api_key": "***"
    }
]

In my notebook:

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    file_location=".",
    filter_dict={
        "model": {
            "gpt-4",
            "gpt4",
            "gpt-4-32k",
            "gpt-4-32k-0314",
            "gpt-4-32k-v0314",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
            "gpt",
        }
    }
)

config_list

When printing config_list:

[{'model': 'gpt-4',
  'api_key': '***'},
 {'model': 'gpt-3.5-turbo',
  'api_key': '***'}]

And then I pass the config_list to the initiate_chat function.

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(assistant, 
    message="Plot a chart of META and TESLA stock price change YTD.", 
    config_list=config_list
)

This worked for me.


Update

If you'd rather use load_dotenv(), this worked for me:

import json
import os
from pathlib import Path

import autogen
from dotenv import load_dotenv

# Load OPENAI_API_KEY from the .env file
load_dotenv(Path('../../.env'))

env_var = [
    {
        'model': 'gpt-4',
        'api_key': os.environ['OPENAI_API_KEY']
    },
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.environ['OPENAI_API_KEY']
    }
]

# config_list_from_json expects a JSON string (or an env var / file name), not a list
env_var = json.dumps(env_var)

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    env_or_file=env_var,
    filter_dict={
        "model": {
            "gpt-4",
            "gpt4",
            "gpt-4-32k",
            "gpt-4-32k-0314",
            "gpt-4-32k-v0314",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
            "gpt",
        }
    }
)

@rustyorb

This is tremendously helpful, thank you.

@sonichi
Contributor

sonichi commented Sep 27, 2023

Thanks. You can also have a single env var that contains the entire JSON and load it directly:

import autogen
from pathlib import Path
from dotenv import load_dotenv

load_dotenv(Path('../../.env'))
config_list = autogen.config_list_from_json("YOUR_ENV_VAR_NAME_FOR_JSON")
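For example, assuming a (hypothetical) env var named OAI_CONFIG_LIST, the .env entry would hold the entire JSON on one line:

OAI_CONFIG_LIST='[{"model": "gpt-4", "api_key": "***"}, {"model": "gpt-3.5-turbo", "api_key": "***"}]'

and the call becomes autogen.config_list_from_json("OAI_CONFIG_LIST").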

@EdFries

EdFries commented Sep 27, 2023

I'm trying to solve the same problem using FastChat with a local LLM as in the documentation, but it fails because I don't provide an OpenAI key:

from autogen import AssistantAgent, UserProxyAgent, oai
config_list = [
    {
        "model": "chatglm2-6b",
        "api_base": "http://localhost:8000/v1",
        "api_type": "open_ai",
        "api_key": "NULL", # just a placeholder
    }
]

response = oai.Completion.create(config_list=config_list, prompt="Hi")
print(response) # works fine

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stock price change YTD.", config_list=config_list)
# fails with the error: openai.error.AuthenticationError: No API key provided.

@sonichi
Contributor

sonichi commented Sep 27, 2023

Please add llm_config={"config_list": config_list} in the constructor of AssistantAgent
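A minimal sketch of that fix, reusing the config_list defined above:

from autogen import AssistantAgent, UserProxyAgent

# Pass the config through llm_config in the constructor instead of initiate_chat
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stock price change YTD.")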

@EdFries

EdFries commented Sep 27, 2023

Thanks, that worked!

@AaronWard
Collaborator

Update: I found that my previous example was throwing an error because the JSON was being passed as a raw string rather than a file or env var name. Here is a working example of setting up your config list using dotenv. This will let you dynamically create the JSON file required by autogen.config_list_from_json() when you're using a .env file:

import json
import os
import tempfile

import autogen
from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv())

env_var = [
    {
        'model': 'gpt-4',
        'api_key': os.getenv('OPENAI_API_KEY')
    },
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.getenv('OPENAI_API_KEY')
    }
]

# Write the JSON structure to a temporary file and pass its path to config_list_from_json
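# Caveat: on Windows, re-opening a NamedTemporaryFile by name while it is still
# open may fail; delete=False with manual cleanup is a more portable choice.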
with tempfile.NamedTemporaryFile(mode='w+', delete=True) as temp:
    env_var = json.dumps(env_var)
    temp.write(env_var)
    temp.flush()

    # Setting configurations for autogen
    config_list = autogen.config_list_from_json(
        env_or_file=temp.name,
        filter_dict={
            "model": {
                "gpt-4",
                "gpt-3.5-turbo",
            }
        }
    )

assert len(config_list) > 0 
print("models to use: ", [config_list[i]["model"] for i in range(len(config_list))])

models to use: ['gpt-4', 'gpt-3.5-turbo']

@cjy8s

cjy8s commented Sep 26, 2024

import json
import os
import tempfile

import autogen
from dotenv import load_dotenv

load_dotenv(dotenv_path='../../API_KEYs.env')

OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')

OAI_CONFIG_LIST = [{
        'model': 'gpt-4o-mini',
        'api_key': OPENAI_API_KEY
    }]

with tempfile.NamedTemporaryFile(mode='w+', delete=True) as temp:
    env_var = json.dumps(OAI_CONFIG_LIST)
    temp.write(env_var)
    temp.flush()

    # Setting configurations for autogen
    config_list = autogen.config_list_from_json(
        env_or_file=temp.name,
        filter_dict={
            "model": {
                "gpt-4o-mini",
            }
        }
    )

gpt4_config = {"seed": 42, 
               "config_list": config_list, 
               "temperature": 0}

Gives me this warning now:

[autogen.oai.client: 09-26 14:19:56] {184} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.

@AaronWard
Collaborator

@cjy8s You can just use the config_list_from_dotenv function instead of writing a temporary file.
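For reference, a minimal sketch of that approach (reusing the .env path and model name from the snippet above; model_api_key_map maps each model to the env var holding its key):

import autogen

config_list = autogen.config_list_from_dotenv(
    dotenv_file_path='../../API_KEYs.env',
    model_api_key_map={'gpt-4o-mini': 'OPENAI_API_KEY'},
    filter_dict={'model': {'gpt-4o-mini'}},
)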
