
Another Client compatible with OpenAI #17

Open
feuyeux opened this issue May 29, 2024 · 1 comment

Comments


feuyeux commented May 29, 2024

Search before asking

  • I had searched in the issues and found no similar feature requirement.

Description

There is no AiClient that is compatible with OpenAI, can call other Large Language Model (LLM) Open APIs, and can also access locally hosted models.

What I want in the Java version already exists in the Python version, where llm_config can be pointed at other LLM Open APIs.

Use case

```java
var joe = AssistantAgent.builder()
        .client(OpenAiClient.builder()
                .openaiApiBase(URL)
                .openaiApiKey(API_KEY)
                .build().init())
        .name("joe")
        .systemMessage("Your name is Joe and you are a part of a duo of comedians.")
        .humanInputMode(NEVER)
        .build();
```

https://github.com/feuyeux/hello-autogen/blob/main/hello-autogen-java/src/test/java/org/feuyeux/ai/autogen/HelloAutogenTests.java

```python
local_llm_config = {
    "config_list": [
        {
            "model": "NotRequired",  # Loaded with LiteLLM command
            "api_key": "NotRequired",  # Not needed
            "base_url": "http://0.0.0.0:4000"  # Your LiteLLM URL
        }
    ],
    "cache_seed": None  # Turns off caching, useful for testing different models
}

joe = ConversableAgent(
    "joe",
    system_message="Your name is Joe and you are a part of a duo of comedians.",
    llm_config=local_llm_config,
    human_input_mode="NEVER",  # Never ask for human input.
)
```

https://github.com/feuyeux/hello-autogen/blob/main/hello-autogen-python/hello_autogen.py
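The Python llm_config above is just a plain holder for an OpenAI-compatible endpoint. The same idea could be expressed on the Java side as a small config class — a sketch only, where `LlmConfig` and its fields are illustrative names, not part of autogen4j:

```java
// Hypothetical Java mirror of the Python llm_config above — LlmConfig is an
// illustrative name, not an autogen4j class.
final class LlmConfig {
    final String model;
    final String apiKey;
    final String baseUrl;

    LlmConfig(String model, String apiKey, String baseUrl) {
        this.model = model;
        this.apiKey = apiKey;
        this.baseUrl = baseUrl;
    }

    // Matches the Python example: a LiteLLM proxy needs no real model name or key,
    // only the base URL of the OpenAI-compatible server.
    static LlmConfig localLiteLlm() {
        return new LlmConfig("NotRequired", "NotRequired", "http://0.0.0.0:4000");
    }

    public static void main(String[] args) {
        LlmConfig cfg = LlmConfig.localLiteLlm();
        System.out.println(cfg.baseUrl);
    }
}
```

Any backend that speaks the OpenAI wire protocol (LiteLLM proxy, a local server, or the hosted API) could then satisfy the same config shape.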

@katemamba

When you pull autogen4j in as a dependency, you will see this in openai-client:0.2.2: OpenAiClient.class requires either OPENAI_PROXY or OPENAI_API_KEY (plus OPENAI_ORGANIZATION) to get it rolling:
```java
this.openaiProxy = this.getOrEnvOrDefault(this.openaiProxy, "OPENAI_PROXY");

OkHttpClient.Builder httpClientBuilder = (new OkHttpClient.Builder())
        .connectTimeout(this.requestTimeout, TimeUnit.SECONDS)
        .readTimeout(this.requestTimeout, TimeUnit.SECONDS)
        .writeTimeout(this.requestTimeout, TimeUnit.SECONDS)
        .callTimeout(this.requestTimeout, TimeUnit.SECONDS);

httpClientBuilder.addInterceptor((chain) -> {
    this.openaiApiKey = this.getOrEnvOrDefault(this.openaiApiKey, "OPENAI_API_KEY");
    this.openaiOrganization = this.getOrEnvOrDefault(this.openaiOrganization, "OPENAI_ORGANIZATION", "");

    Request.Builder requestBuilder = chain.request().newBuilder();
    requestBuilder.header("Content-Type", "application/json");
    if (this.isAzureApiType()) {
        requestBuilder.header("api-key", this.openaiApiKey);
    } else {
        requestBuilder.header("Authorization", "Bearer " + this.openaiApiKey);
        requestBuilder.header("OpenAI-Organization", this.openaiOrganization);
    }
    // … (interceptor continues in the library source)
```
To make this work with a local LLM served from a URL, another generic client would need to be set up, similar to the one the Python version offers. As a feature, it would be nice to have a flag that switches between the OpenAI client and a local-LLM client. I can help add this feature so that both kinds of models are supported and the library becomes more extensible.
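The flag-based switch described above could be sketched as a small factory over a common interface. All names here (`LlmClient`, `LocalLlmClient`, `LlmClientFactory`) are hypothetical; a real implementation would delegate the OpenAI branch to the existing OpenAiClient rather than hard-coding a URL:

```java
// Sketch of the proposed switch between an OpenAI-backed client and a
// local-LLM client, selected by a single flag. Illustrative names only.
interface LlmClient {
    String baseUrl();
}

final class OpenAiBackedClient implements LlmClient {
    @Override public String baseUrl() { return "https://api.openai.com/v1"; }
}

final class LocalLlmClient implements LlmClient {
    private final String url;
    LocalLlmClient(String url) { this.url = url; }
    @Override public String baseUrl() { return url; }
}

final class LlmClientFactory {
    // useLocal could come from a builder flag or an environment variable.
    static LlmClient create(boolean useLocal, String localUrl) {
        return useLocal ? new LocalLlmClient(localUrl) : new OpenAiBackedClient();
    }

    public static void main(String[] args) {
        LlmClient client = create(true, "http://0.0.0.0:4000");
        System.out.println(client.baseUrl());
    }
}
```

Because agents would depend only on the `LlmClient` interface, switching between a hosted model and a local one becomes a configuration change rather than a code change.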
