Describe the bug

This is an edge case, but it exposes an incorrect assumption, and I'm unsure where it should be fixed: either in update_tool_signature (update_function_signature has the same problem), OR in the custom client setup process.
In conversable_agent.py, if you call register_function (or the related register decorators), they end up calling update_tool_signature, which runs this line last: self.client = OpenAIWrapper(**self.llm_config)

Normally that's fine, but if you've already set up a custom model and run register_model_client(), this wipes out the class setup; at runtime the agent then complains that the class isn't being used and asks you to fix that.
Registering a custom client is a two-step process. Step one adds a pointer to the config, like so:

{"model_client_cls": "CustomModelClient"}

This creates a 'placeholder' in the agent's client, not an actual class in use (yet). It doesn't get applied until you run:

assistant.register_model_client(model_client_cls=CustomModelClient)

which replaces the placeholder with the actual class.
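This two-step flow can be illustrated with a minimal, self-contained Python sketch. Everything here (ToyWrapper, the toy CustomModelClient) is a hypothetical stand-in for illustration, not AutoGen's actual implementation:

```python
# Minimal sketch of the two-step "placeholder, then register" pattern.
# All names here are hypothetical, not AutoGen's actual code.

class CustomModelClient:
    """Stand-in for a user-defined model client class."""

class ToyWrapper:
    def __init__(self, **config):
        # Step 1: built from config alone, the wrapper only knows the class
        # *name*, so it stores a string placeholder, not a usable client.
        self.client = config.get("model_client_cls")

    def register_model_client(self, model_client_cls):
        # Step 2: swap the string placeholder for the real class.
        if self.client == model_client_cls.__name__:
            self.client = model_client_cls

wrapper = ToyWrapper(model_client_cls="CustomModelClient")
print(wrapper.client)  # still the placeholder string "CustomModelClient"

wrapper.register_model_client(CustomModelClient)
print(wrapper.client is CustomModelClient)  # True: real class now active
```

Until the second step runs, the wrapper holds only the class name, which is why the agent can later complain that the class in the config "isn't registered".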
The bug: if you later run something like

assistant.register_for_llm(name="calculator", description="A calculator tool that accepts a nested expression as input")(calculator)

or

assistant.register_for_execution(name="calculator")(calculator)

the CustomModelClient class is removed from the agent. (Rerunning register_model_client afterwards restores it, which shows that's all that's needed.)

Why? Because of the assumption above: rebuilding the client via OpenAIWrapper(**self.llm_config) resets everything back to step one, since only the string placeholder is restored.
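The reset is easy to demonstrate with a self-contained toy model (hypothetical names, not AutoGen's code): rebuilding the wrapper from the stored llm_config restores the string placeholder and discards the class that was registered on the previous wrapper instance:

```python
# Self-contained sketch of the reset. Names are hypothetical stand-ins.

class CustomModelClient:
    """Stand-in for a user-defined model client class."""

class ToyWrapper:
    def __init__(self, **config):
        # Built from config alone, only the class *name* is known.
        self.client = config.get("model_client_cls")

class ToyAgent:
    def __init__(self, llm_config):
        self.llm_config = llm_config
        self.client = ToyWrapper(**llm_config)

    def register_model_client(self, model_client_cls):
        # Activates the real class on the *current* wrapper instance.
        self.client.client = model_client_cls

    def update_tool_signature(self, tool):
        self.llm_config.setdefault("tools", []).append(tool)
        # The problematic last line: a brand-new wrapper is built from
        # llm_config, which restores the string placeholder and discards
        # the class registered on the previous wrapper.
        self.client = ToyWrapper(**self.llm_config)

agent = ToyAgent({"model_client_cls": "CustomModelClient"})
agent.register_model_client(CustomModelClient)
print(agent.client.client is CustomModelClient)  # True: class is active

agent.update_tool_signature({"name": "calculator"})
print(agent.client.client)  # "CustomModelClient": back to the placeholder
```

Nothing in the new wrapper remembers that a real class was ever registered, because that registration lived on the old wrapper object, not in llm_config.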
Workaround: always call register_model_client() LAST, and if you update or change the tool config, you may need to re-register.
Fix: either make the reload of the llm_config preserve a registered custom client class, OR change the custom-client registration process to work solely from the llm_config, without the need for a second function call.
Steps to reproduce

(This is not a local setup issue, nor is it related to the LLM model; the custom client class is what matters here.)

1. Set up the two parts: the llm_config in the agent, then register the custom model with register_model_client. That works.
2. Add a tool/function and register it with the agent.
3. You'll get the error that the custom class is in the config but not registered.
4. Reorder the calls, so that registering the tool/function happens first and registering the custom class happens second: no error.
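The reordering in step 4 can be sketched with a toy model (hypothetical names, not AutoGen's code): when the tool is added first and the custom class is registered last, the active class survives, because nothing rebuilds the wrapper afterwards:

```python
# Toy sketch (hypothetical names) of the reordered call sequence:
# tool registration first, custom-client registration last.

class CustomModelClient:
    """Stand-in for a user-defined model client class."""

class ToyWrapper:
    def __init__(self, **config):
        self.client = config.get("model_client_cls")  # string placeholder

class ToyAgent:
    def __init__(self, llm_config):
        self.llm_config = llm_config
        self.client = ToyWrapper(**llm_config)

    def register_model_client(self, model_client_cls):
        self.client.client = model_client_cls  # activate the real class

    def update_tool_signature(self, tool):
        self.llm_config.setdefault("tools", []).append(tool)
        self.client = ToyWrapper(**self.llm_config)  # wrapper rebuilt

agent = ToyAgent({"model_client_cls": "CustomModelClient"})
agent.update_tool_signature({"name": "calculator"})  # tool first
agent.register_model_client(CustomModelClient)       # custom class LAST
print(agent.client.client is CustomModelClient)  # True: no error this time
```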
Model Used
Any. Tested with multiple custom client classes (Instructor and Ollama raw, neither using the standard OpenAI client). It's not the model, it's the client, despite register_model_client being the name of the function affected.
Expected Behavior
It should respect the existing client config, including the registered custom model class.
Screenshots and logs
No response
Additional Information
No response