I have searched the issues and found no similar feature request.
Description
There is currently no AiClient that is compatible with OpenAI, can access other Large Language Model (LLM) Open APIs, and can also talk to locally hosted models.
What I want in the Java version already exists in the Python version, where `llm_config` can work with other LLM Open APIs.
Use case
```java
var joe = AssistantAgent.builder()
        .client(OpenAiClient.builder()
                .openaiApiBase(URL)
                .openaiApiKey(API_KEY)
                .build().init())
        .name("joe")
        .systemMessage("Your name is Joe and you are a part of a duo of comedians.")
        .humanInputMode(NEVER)
        .build();
```
```python
local_llm_config = {
    "config_list": [
        {
            "model": "NotRequired",  # Loaded with LiteLLM command
            "api_key": "NotRequired",  # Not needed
            "base_url": "http://0.0.0.0:4000",  # Your LiteLLM URL
        }
    ],
    "cache_seed": None,  # Turns off caching, useful for testing different models
}

joe = ConversableAgent(
    "joe",
    system_message="Your name is Joe and you are a part of a duo of comedians.",
    llm_config=local_llm_config,
    human_input_mode="NEVER",  # Never ask for human input.
)
```
When using autogen4j as a dependency, you can see this in openai-client:0.2.2: `OpenAiClient.class` expects either `OPENAI_PROXY` and `OPENAI_API_KEY`, or `OPENAI_ORGANIZATION`, to be set to get it rolling:
```java
this.openaiProxy = this.getOrEnvOrDefault(this.openaiProxy, "OPENAI_PROXY");
OkHttpClient.Builder httpClientBuilder = (new OkHttpClient.Builder())
        .connectTimeout(this.requestTimeout, TimeUnit.SECONDS)
        .readTimeout(this.requestTimeout, TimeUnit.SECONDS)
        .writeTimeout(this.requestTimeout, TimeUnit.SECONDS)
        .callTimeout(this.requestTimeout, TimeUnit.SECONDS);
```
To support a local LLM served from a URL, a generic client needs to be set up, similar to the one Python offers. As a feature, it would then be nice to have a flag that switches between the OpenAI client and a local-LLM client. I can help add this feature so both kinds of models are supported and the library becomes more extensible.
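A provider-agnostic client configuration could look roughly like the minimal sketch below. To be clear, `LlmClientConfig`, `Provider`, and the factory methods are hypothetical names for illustration, not existing autogen4j API, and the default OpenAI base URL is an assumption:

```java
// Hypothetical sketch of a config object that can target either the public
// OpenAI API or a local OpenAI-compatible server (e.g. a LiteLLM proxy).
// None of these names exist in autogen4j today.
public class LlmClientConfig {
    public enum Provider { OPENAI, LOCAL }

    private final Provider provider;
    private final String baseUrl;
    private final String apiKey;

    private LlmClientConfig(Provider provider, String baseUrl, String apiKey) {
        this.provider = provider;
        this.baseUrl = baseUrl;
        this.apiKey = apiKey;
    }

    // Default to the public OpenAI endpoint when no base URL is given.
    public static LlmClientConfig openAi(String apiKey) {
        return new LlmClientConfig(Provider.OPENAI, "https://api.openai.com/v1", apiKey);
    }

    // Local OpenAI-compatible servers typically accept any non-empty key,
    // mirroring the "NotRequired" placeholders in the Python config above.
    public static LlmClientConfig local(String baseUrl) {
        return new LlmClientConfig(Provider.LOCAL, baseUrl, "NotRequired");
    }

    public Provider provider() { return provider; }
    public String baseUrl() { return baseUrl; }
    public String apiKey() { return apiKey; }
}
```

The existing builder could then pick the HTTP client and endpoint off this one object instead of reading only `OPENAI_*` environment variables, which is where the switch-by-flag behavior would live.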
https://github.com/feuyeux/hello-autogen/blob/main/hello-autogen-java/src/test/java/org/feuyeux/ai/autogen/HelloAutogenTests.java
https://github.com/feuyeux/hello-autogen/blob/main/hello-autogen-python/hello_autogen.py