
An expensive bug GPT4. no matter what I do #125

Closed
twobob opened this issue Oct 6, 2023 · 31 comments

Comments

@twobob

twobob commented Oct 6, 2023

I set this up on a new machine today.
I realised that in 2 games I had suddenly spent 7 dollars instantly. Pretty steep, considering it was supposed to be using 3.5-turbo-16k.

Checked over all the settings. Then parsed the entire source code for ANY reference to default models. Anything. Changed every single reference. No matter what I set, it is using the GPT-4 API endpoint.

I have tried: the usual inline declarations; using the built-in preferred template; just jamming it in as the hardcoded value in the method that checks it. So... unless anyone has any other ideas, I'm all out.

7 bucks is no big shakes, but I run systems in parallel and no doubt others batch too. Wouldn't want anyone to get an Amazon-AWS-level surprise with this unreported. So. Enjoy. Reported.

EDIT: It's a Linux box, but I did install it under Windows on there as well and test it (that test is probably the top 50c of the green bar). It was identical, so "not OS", and "yes, I also tried that."

@twobob
Author

twobob commented Oct 6, 2023

[image: OpenAI usage chart]
Green: 2 games. The blue bit: about 10 others...

@afourney
Member

afourney commented Oct 6, 2023

Thanks for reporting and sorry about the spend. It sounds like you did due diligence on debugging, so I’m not sure how it was charged to GPT-4. We’ve not seen it call the wrong models before, so I am very curious to try and replicate this.

@victordibia
Collaborator

To help reproduce, it would be great to see:

  • Your code
  • Emphasis on how you specify your OAI_CONFIG_LIST (the data structure used to define which LLM is used).

@LittleLittleCloud
Collaborator

Maybe here? Looks like the default model is set to GPT-4

DEFAULT_MODEL = "gpt-4"

@afourney
Member

afourney commented Oct 6, 2023

@LittleLittleCloud No I don't think that's it. That default is only accessed from the improve_code, improve_function, and implement calls -- and they aren't part of the main control path.

I am eyeing this:

# Ensure default models are always considered
default_models = ["gpt-4", "gpt-3.5-turbo"]

for model in default_models:
    # Only assign default API key if the model is not present in the map.
    # If model is present but set to invalid/empty, do not overwrite.
    if model not in model_api_key_map:
        model_api_key_map[model] = "OPENAI_API_KEY"

@twobob How are you loading the config list? Are you using .env?
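A minimal, self-contained sketch of what that fallback does (the function wrapper is illustrative; only the loop mirrors the snippet above): even a config that never mentions gpt-4 ends up with a gpt-4 entry pointing at OPENAI_API_KEY.

```python
def apply_default_models(model_api_key_map):
    """Ensure gpt-4 and gpt-3.5-turbo always appear in the key map."""
    default_models = ["gpt-4", "gpt-3.5-turbo"]
    for model in default_models:
        # Only assign the default env-var name if the model is absent;
        # an existing entry (even an empty one) is left untouched.
        if model not in model_api_key_map:
            model_api_key_map[model] = "OPENAI_API_KEY"
    return model_api_key_map

# A map that only lists 3.5 models still ends up with a gpt-4 entry:
user_map = apply_default_models({"gpt-3.5-turbo-16k": "OPENAI_API_KEY"})
print("gpt-4" in user_map)  # True
```

Note that this only seeds the key map, which is why it alone would not force requests onto the GPT-4 endpoint.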

@twobob
Author

twobob commented Oct 7, 2023

[
{
"model": "gpt-3.5-turbo-16k-0613",
"api_key": "ssdsdsdsdsdxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA"
},
{
"model": "gpt-3.5-turbo-16k",
"api_key": "ssdsdsdsdsdxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA"
},
{
"model": "gpt-3.5-turbo-0613",
"api_key": "ssdsdsdsdsdxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA"
},
{
"model": "gpt-3.5-turbo",
"api_key": "ssdsdsdsdsdxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA"
},
{
"model": "gpt-4", <--- the single remaining reference, and it's for Azure, so ¯\_(ツ)_/¯
"api_key": "",
"api_base": "",
"api_type": "azure",
"api_version": "2023-07-01-preview"
},
{
"model": "gpt-3.5-turbo",
"api_key": "",
"api_base": "",
"api_type": "azure",
"api_version": "2023-07-01-preview"
}
]
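For illustration, the selection that a filter_dict performs over a list like the one above can be sketched in plain Python. filter_config below is a hypothetical helper, not autogen's API; autogen's real entry point is config_list_from_json.

```python
def filter_config(config_list, filter_dict):
    """Keep only configs whose fields match one of the allowed values."""
    return [
        cfg for cfg in config_list
        if all(cfg.get(key) in allowed for key, allowed in filter_dict.items())
    ]

configs = [
    {"model": "gpt-3.5-turbo-16k", "api_key": "sk-..."},
    {"model": "gpt-4", "api_key": "", "api_type": "azure"},
]

# Filtering on the model name keeps only the 3.5 entry:
gpt3_only = filter_config(configs, {"model": ["gpt-3.5-turbo-16k"]})
print([c["model"] for c in gpt3_only])  # ['gpt-3.5-turbo-16k']
```

The important property for this bug: if the filter matches nothing (or the source file is never found), the result is an empty list rather than an error.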

@twobob
Author

twobob commented Oct 7, 2023

Thanks for all the feedback. Yes, there is zero reference to the string gpt-4 in my code now, so
DEFAULT_MODEL = "gpt-4"
would have already been changed. I literally gutted it.

@twobob
Author

twobob commented Oct 7, 2023

@LittleLittleCloud No I don't think that's it. That default is only accessed from the improve_code, improve_function, and implement calls -- and they aren't part of the main control path.

I am eyeing this:

# Ensure default models are always considered
default_models = ["gpt-4", "gpt-3.5-turbo"]

for model in default_models:
    # Only assign default API key if the model is not present in the map.
    # If model is present but set to invalid/empty, do not overwrite.
    if model not in model_api_key_map:
        model_api_key_map[model] = "OPENAI_API_KEY"

@twobob How are you loading the config list? Are you using .env?

me@mebuntu:~/repo/autogen$ echo $OPENAI_API_KEY
ssdsdsdsdsdxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA

My env is indeed set up, as well as the config file.

@twobob
Author

twobob commented Oct 7, 2023

To help reproduce, it would be great to see:

* Your code

* Emphasis on how you specify your OAI_CONFIG_LIST (the data structure used to define which LLM is used).

Hi, which code? It's the default code, gutted to not contain any references to gpt-4, pretty much.

@sonichi
Contributor

sonichi commented Oct 7, 2023

Could you show the code that contains llm_config? If it contains "config_list" in it, could you also show the code which sets config_list?

@twobob
Author

twobob commented Oct 7, 2023

I forgot to mention, but I DID do this first on a Windows box and was using bash as my interpreter, because "inline commands working for Windows would be cool to do for everyone", I thought. That was when I noticed the bug.

Thought it was /maaaaybe/ the funky bash hack that was causing the error.
Hence it being a vanilla Ubuntu (like, fresh for this job) and a very unhacked autogen, other than the gutting.

Like, this is the dream support box. Literally nothing has happened on it other than this. I have since installed ChatDev on the same box, but meh, these tests were before that and nothing has changed in terms of the autogen anyway.

So there. The full sordid details. Honestly not sure where to go from there.

It's not the account, since ChatDev can use 3.5 just fine from the Ubuntu install (why I installed ChatDev after).

@twobob
Author

twobob commented Oct 7, 2023

Could you show the code that contains llm_config? If it contains "config_list" in it, could you also show the code which sets config_list?

Hi. It is the defaults, straight from the readme or from a notebook. Does it write logs anywhere? I've mounted the Windows drive from Ubuntu, so I can access anything...

@twobob
Author

twobob commented Oct 7, 2023

I ran agentchat_groupchat_research.ipynb from the notebook folder in Jupyter Notebook.

In the beginning I was running this:

import autogen

config_list_gpt4 = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": ["gpt-4-32k", "gpt-4-32k-0314", "gpt-4-32k-v0314"],
    },
)

config_list_gpt3 = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": ["gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-0613"],
    },
)

and only the Engineer was actually given GPT-4 access:

gpt4_config = {
    "seed": 42,  # change the seed for different trials
    "temperature": 0,
    "config_list": config_list_gpt4,
    "request_timeout": 120,
}
gpt3_config = {
    "seed": 42,  # change the seed for different trials
    "temperature": 0,
    "config_list": config_list_gpt3,
    "request_timeout": 90,
}
user_proxy = autogen.UserProxyAgent(
    name="Admin",
    system_message="A human admin. Interact with the planner to discuss the plan. Plan execution needs to be approved by this admin.",
    code_execution_config=False,
)
engineer = autogen.AssistantAgent(
    name="Engineer",
    llm_config=gpt3_config,
but as shown there, by the end all actors had been reduced to gpt3_config.

@twobob
Author

twobob commented Oct 7, 2023

@LittleLittleCloud No I don't think that's it. That default is only accessed from the improve_code, improve_function, and implement calls -- and they aren't part of the main control path.

I am eyeing this:

# Ensure default models are always considered
default_models = ["gpt-4", "gpt-3.5-turbo"]

for model in default_models:
    # Only assign default API key if the model is not present in the map.
    # If model is present but set to invalid/empty, do not overwrite.
    if model not in model_api_key_map:
        model_api_key_map[model] = "OPENAI_API_KEY"

@twobob How are you loading the config list? Are you using .env?

Hmmm. You know, I am certain I did NOT remove that one (I errantly excluded it from the replace)...
Hmm.. Time for a new test then.

@twobob
Author

twobob commented Oct 7, 2023

So, I reran it just to make sure it wasn't anything really stupid, BEFORE I made the above change.

[image: OpenAI usage chart]
Blue bar is me making 30 games yesterday with ChatDev; green bar is just setting up the requirements (basically 4 rounds of chat) in autogen on "supposed 3.5" before I stopped it again and waited for the OpenAI usage to catch up.

So, I wasn't going mad, it really was doing that.

I made the above change, added only 3.5 models to the list, and ran it again.

Still no change.

I attached my actual notebook.
agentchat_groupchat_research_That_WAS_ACTUTALLY_USED.zip

@twobob
Author

twobob commented Oct 7, 2023

Maybe I should remove the API key from the env? Any other ideas?

It is fair to say I am very sad this does not work, hahahah.

@twobob
Author

twobob commented Oct 7, 2023

@LittleLittleCloud
[image]

@twobob
Author

twobob commented Oct 7, 2023

To help reproduce, it would be great to see:

* Your code

* Emphasis on how you specify your OAI_CONFIG_LIST (the data structure used to define which LLM is used).

attached the zip

@twobob
Author

twobob commented Oct 7, 2023

In the notebook I attached, I had actually increased the Engineer to GPT-4 again. It hardly mattered; I was already timing out on the GPT-4 tokens/min limit before it popped.

@sonichi
Contributor

sonichi commented Oct 7, 2023

The code looks good to me. Please make sure:

  1. OAI_CONFIG_LIST is either an environment variable, or a file under the same path as the notebook. Please print the config_list_gpt3 dict to make sure it contains the gpt-3.5 models.
  2. make sure config_list_gpt3 is contained in all the llm_config.
  3. set max_consecutive_auto_reply to 0 for the UserProxyAgent to make a single-turn conversation, then check your bill.

@twobob
Author

twobob commented Oct 7, 2023

  1. It is a file in the root of the path. I didn't realise I could do it as an env... Did I miss a doc? Maybe. I shall insert the print.
  2. Will do. In terms of the llm_configs, as you can see in the code, I believe that was the case.
  3. Ah okay. If it's all the same, I will probably run up like 50c or so, just so I can FOR SURE see it. (I've set aside like $15 of leftover credit to get this fixed..)

@twobob
Author

twobob commented Oct 7, 2023

I'll do it after midnight. THEN I will for sure see it. It will be the only cost.

@twobob
Author

twobob commented Oct 7, 2023

[image: notebook output showing an empty config list]

Ooooookay. It seems like maybe we have a suspect.

config_list_from_json is not actually populating... hmm.. I DID NOT see that coming. Good call.

So... I guess it's time to make that env... Better find those docs, I guess.

EDIT: https://microsoft.github.io/autogen/docs/FAQ/#set-your-api-endpoints I see

@sonichi
Contributor

sonichi commented Oct 7, 2023

The file is supposed to be located at:
[image]

@twobob
Author

twobob commented Oct 7, 2023

I don't run this on Colab, sir.
In addition, I have set up the env version in .bashrc, then started a new terminal and ran jupyter notebook FROM the new terminal. I also made a copy of the OAI_CONFIG_LIST and placed it in the notebooks folder, in addition to the root of the drive.

@sonichi
Contributor

sonichi commented Oct 7, 2023

OK, then it should be located next to the .ipynb file. Otherwise you need to specify file_location: https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints
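A sketch of why the file's location matters, assuming a loader that quietly returns an empty list when the file is missing (load_config below is illustrative, not autogen's implementation):

```python
import json
import os
import tempfile

def load_config(name="OAI_CONFIG_LIST", file_location="."):
    """Return the parsed config list, or [] if the file is not found."""
    path = os.path.join(file_location, name)
    if not os.path.exists(path):
        return []  # the silent empty list behind this issue
    with open(path) as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as workdir:
    # Nothing there yet: the loader quietly returns an empty list.
    print(load_config(file_location=workdir))  # []

    # Drop the file next to the "notebook" and it is found.
    with open(os.path.join(workdir, "OAI_CONFIG_LIST"), "w") as f:
        json.dump([{"model": "gpt-3.5-turbo-16k", "api_key": "sk-..."}], f)
    print(load_config(file_location=workdir))
```

Running the notebook one directory below where the file sits reproduces the first branch: an empty list, no error.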

@twobob
Author

twobob commented Oct 7, 2023

Yup ^^^ This ^^^ I was NOT in the notebook folder. It was one level below. I'll retest now that this is all a bit more "like it should be".

Now when I print the config, the keys are present.

Running the test. There are $0 of GPT-3.5 costs for the day and I noted the GPT-4 usage. Here's hoping!

@twobob
Author

twobob commented Oct 7, 2023

[image: OpenAI usage charts]

That sliver of blue IS A WELCOME SIGHT!

Might I suggest a tweak to the notebooks? Something that actually PRINTS, or even CHECKS, the content of that array, and just refuses to continue otherwise with a warning?
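That suggested guard could look something like this (require_models is a hypothetical helper, not part of autogen):

```python
def require_models(config_list, allowed_models):
    """Fail fast if the config list is empty or holds unexpected models."""
    if not config_list:
        raise ValueError(
            "config_list is empty: OAI_CONFIG_LIST was not found, "
            "or the filter matched no entries"
        )
    unexpected = [c["model"] for c in config_list if c["model"] not in allowed_models]
    if unexpected:
        raise ValueError(f"unexpected models in config_list: {unexpected}")
    return config_list

# An empty list (the silent failure in this issue) now fails loudly:
try:
    require_models([], ["gpt-3.5-turbo-16k"])
except ValueError as e:
    print(e)
```

Calling this right after config_list_from_json, before constructing any agents, would have surfaced the problem before the first billable request.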

Thanks so much again

Engineer (to chat_manager):

As an AI language model, I don't have the capability to actively participate in the development process. However, I can provide guidance and suggestions based on the information provided. If you have any specific questions or need assistance with any particular step of the development process, please let me know and I'll be happy to help!

How unfortunate... Ah well

@twobob twobob closed this as completed Oct 7, 2023
@twobob
Author

twobob commented Oct 7, 2023

Engineer (to chat_manager):

As an AI language model, I don't have the capability to actively participate in the development process. However, I can provide guidance and suggestions based on the information provided. If you have any specific questions or need assistance with any particular step of the development process, please let me know and I'll be happy to help!

How unfortunate... Ah well

Yup, they spent $1.70 and never wrote a single line of code. They just kept planning the project over and over, despite interventions guiding them not to, several times...

23:00
Local time: 8 Oct 2023, 00:00
gpt-3.5-turbo-16k-0613, 38 requests
169,170 prompt + 2,509 completion = 171,679 tokens
23:05
Local time: 8 Oct 2023, 00:05
gpt-3.5-turbo-16k-0613, 23 requests
159,774 prompt + 2,554 completion = 162,328 tokens
[image]

I guess I'll wait until this allows local LLMs.

@sonichi
Contributor

sonichi commented Oct 9, 2023

Or try some non-coding tasks, or try including a code example in the initial message.

@afourney
Member

afourney commented Oct 9, 2023

Ok, here is my understanding of what's happening:

If the OAI_CONFIG_LIST is not found, then the config that gets returned is an empty list. Then

self.llm_config = self.DEFAULT_CONFIG.copy()

populates the config with a model="gpt-4" default and an empty config_list.

Then

if config_list:

defaults to the classic OpenAI chat-completion behavior, because the empty config_list evaluates to false. The default is to read from the "model" attribute.

Finally, OpenAI's openai library reads the missing key from the environment variable.
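That chain can be reproduced in a few lines. DEFAULT_CONFIG and resolve_model are illustrative stand-ins for the class attributes and branching described above, not the actual autogen code.

```python
DEFAULT_CONFIG = {"model": "gpt-4"}  # stand-in for the class-level default

def resolve_model(llm_config):
    """Pick the model a request would actually use."""
    config_list = llm_config.get("config_list", [])
    if config_list:  # an empty list is falsy, so this branch is skipped
        return config_list[0]["model"]
    return llm_config["model"]  # classic completion path: the gpt-4 default

cfg = DEFAULT_CONFIG.copy()
cfg["config_list"] = []  # what a missing OAI_CONFIG_LIST produced
print(resolve_model(cfg))  # gpt-4, despite the user's 3.5-only intent
```

With the key then silently picked up from OPENAI_API_KEY in the environment, the requests succeed and bill against GPT-4, which matches the usage charts in this thread exactly.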
