OAI_CONFIG_LIST details in documentation #3
Could you read https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#runtime-error and see if the issue is addressed? |
Is there a working example of this file, in JSON format? I can't get this file to work or parse right when I create it. |
I was a bit confused about how this works. It would be nice to just have something like:

```json
[
    {
        "model": "gpt-4",
        "api_key": "***"
    },
    {
        "model": "gpt-3.5-turbo",
        "api_key": "***"
    }
]
```

In my notebook:

```python
import autogen

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    file_location=".",
    filter_dict={
        "model": {
            "gpt-4",
            "gpt4",
            "gpt-4-32k",
            "gpt-4-32k-0314",
            "gpt-4-32k-v0314",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
            "gpt",
        }
    }
)
config_list
```

When printed, `config_list` shows:

```python
[{'model': 'gpt-4', 'api_key': '***'},
 {'model': 'gpt-3.5-turbo', 'api_key': '***'}]
```

And then I pass it along:

```python
from autogen import AssistantAgent, UserProxyAgent

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(assistant,
    message="Plot a chart of META and TESLA stock price change YTD.",
    config_list=config_list
)
```

This worked for me.

Update: If you'd rather build the list in code and load your key from a `.env` file:

```python
import json
import os
from pathlib import Path

import autogen
from dotenv import find_dotenv, load_dotenv

load_dotenv(Path('../../.env'))
env_var = [
    {
        'model': 'gpt-4',
        'api_key': os.environ['OPENAI_API_KEY']
    },
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.environ['OPENAI_API_KEY']
    }
]
# needed to convert to str
env_var = json.dumps(env_var)
# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    env_or_file=env_var,
    filter_dict={
        "model": {
            "gpt-4",
            "gpt4",
            "gpt-4-32k",
            "gpt-4-32k-0314",
            "gpt-4-32k-v0314",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
            "gpt",
        }
    }
)
``` |
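For what it's worth, my understanding (an assumption about the library's behavior, not something stated in the docs) is that `env_or_file` is first looked up as an environment variable name and only then treated as a file path, which is why passing the raw JSON string itself can fail. A stdlib-only sketch of that resolution order, with the hypothetical names `resolve_config_sketch` and `MY_CONFIG_LIST`:

```python
import json
import os

def resolve_config_sketch(env_or_file):
    """Hypothetical sketch of how env_or_file might be resolved:
    first as an environment variable name, then as a file path."""
    value = os.environ.get(env_or_file)
    if value is not None:
        return json.loads(value)  # env var holds the JSON text
    with open(env_or_file) as f:  # otherwise: treat the string as a path
        return json.load(f)

# Env-var case: the name resolves, so the JSON stored in it is parsed.
os.environ["MY_CONFIG_LIST"] = json.dumps([{"model": "gpt-4", "api_key": "***"}])
configs = resolve_config_sketch("MY_CONFIG_LIST")

# Raw-JSON-string case: no env var by that name, so the string is treated
# as a (nonexistent) file path and the call fails.
try:
    resolve_config_sketch('[{"model": "gpt-4"}]')
    raised = False
except OSError:
    raised = True
```
 |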
This is tremendously helpful, thank you. |
Thanks. You can also have a single env var which contains the entire JSON and load it directly:

```python
load_dotenv(Path('../../.env'))
config_list = autogen.config_list_from_json(YOUR_ENV_VAR_NAME_FOR_JSON)
``` |
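To make that pattern concrete without depending on `python-dotenv` or `autogen`, here is a stdlib-only sketch: a tiny stand-in for `load_dotenv` reads a `.env` file whose single variable holds the whole config list as JSON. The names `load_dotenv_sketch` and `MY_OAI_CONFIG_LIST` are hypothetical, and real `.env` parsing (quoting, interpolation) is more involved than this:

```python
import json
import os
import tempfile

def load_dotenv_sketch(path):
    """Tiny stand-in for python-dotenv's load_dotenv: read KEY=VALUE lines."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip()

# A .env file whose single variable holds the entire config list as JSON.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write('MY_OAI_CONFIG_LIST=[{"model": "gpt-4", "api_key": "***"}]\n')
    env_path = f.name

load_dotenv_sketch(env_path)
config_list = json.loads(os.environ["MY_OAI_CONFIG_LIST"])
os.unlink(env_path)
```
 |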
I'm trying to solve the same problem using FastChat with a local LLM as in the documentation, but it fails because I don't provide an OpenAI key:
|
Please add |
Thanks, that worked! |
Update: I found that my previous example was throwing an error because the JSON was being parsed as a string. Here is a working example of setting up your config list using a temporary file:

```python
import json
import os
import tempfile

import autogen
from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv())
env_var = [
    {
        'model': 'gpt-4',
        'api_key': os.getenv('OPENAI_API_KEY')
    },
    {
        'model': 'gpt-3.5-turbo',
        'api_key': os.getenv('OPENAI_API_KEY')
    }
]

# Write the JSON structure to a temporary file and pass it to config_list_from_json
with tempfile.NamedTemporaryFile(mode='w+', delete=True) as temp:
    env_var = json.dumps(env_var)
    temp.write(env_var)
    temp.flush()
    # Setting configurations for autogen
    config_list = autogen.config_list_from_json(
        env_or_file=temp.name,
        filter_dict={
            "model": {
                "gpt-4",
                "gpt-3.5-turbo",
            }
        }
    )

assert len(config_list) > 0
print("models to use: ", [config_list[i]["model"] for i in range(len(config_list))])
```
|
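As an aside, my reading of how `filter_dict` behaves (sketched here in plain Python with the hypothetical name `filter_configs_sketch`, not the library's actual implementation) is that each key names a config field and each value is the set of allowed values; a config survives only if every named field is in its allowed set:

```python
def filter_configs_sketch(config_list, filter_dict):
    """Hypothetical re-implementation of filter_dict matching:
    keep a config if, for every key, its value is in the allowed set."""
    return [
        cfg for cfg in config_list
        if all(cfg.get(key) in allowed for key, allowed in filter_dict.items())
    ]

configs = [
    {"model": "gpt-4", "api_key": "***"},
    {"model": "gpt-3.5-turbo", "api_key": "***"},
    {"model": "text-davinci-003", "api_key": "***"},
]
kept = filter_configs_sketch(configs, {"model": {"gpt-4", "gpt-3.5-turbo"}})
```
 |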
Gives me this warning now:
|
@cjy8s You can just use the |
It would be helpful to add details about `OAI_CONFIG_LIST` in the documentation so that users can quickly start with the OAI functions.