
Support multiple deployment_ids for azure #300

Open · wants to merge 1 commit into base: main

Conversation

henrikbjorn

This enables support for calling multiple Azure deployments when using the library. It follows the same convention as the official Python openai package.

So client.embeddings(deployment_id: 'my-gpt-deployment', ...rest_of_args)

Since an embedding model would be one deployment and the actual chat model would be another, this is required to use both.
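As a rough sketch of the mechanism (the `path_for` helper below is hypothetical and not part of ruby-openai): a per-call deployment_id is simply spliced into the Azure request path, and omitting it leaves the path as it was before.

```ruby
# Hypothetical illustration of the path convention this PR follows.
# path_for is NOT a real ruby-openai method; it only shows how a
# per-call deployment_id maps onto the Azure request path.
def path_for(endpoint, deployment_id: nil)
  deployment_id ? "/deployments/#{deployment_id}/#{endpoint}" : "/#{endpoint}"
end

path_for("embeddings", deployment_id: "my-embedding-deployment")
# => "/deployments/my-embedding-deployment/embeddings"
path_for("chat/completions", deployment_id: "my-gpt-deployment")
# => "/deployments/my-gpt-deployment/chat/completions"
path_for("embeddings")
# => "/embeddings"
```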

All Submissions:

  • Have you followed the guidelines in our Contributing document?
  • Have you checked to ensure there aren't other open Pull Requests for the same update/change?
  • Have you added an explanation of what your changes do and why you'd like us to include them?

@alexrudall
Owner

Thanks for this. Looks like a breaking change for Azure users?

@henrikbjorn
Author

> Thanks for this. Looks like a breaking change for Azure users?

Not really: they can keep using the same base URI as before and simply omit the deployment_id parameter from their method calls.
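For reference, a sketch of that unchanged configuration, based on the gem's documented Azure setup (the resource name and api_version here are placeholder values):

```ruby
require "openai"

# Backwards-compatible Azure setup: uri_base points directly at one
# deployment, and no deployment_id is passed on individual calls.
# "example-resource" and the api_version are placeholders.
OpenAI.configure do |config|
  config.access_token = ENV.fetch("AZURE_OPENAI_API_KEY")
  config.uri_base = "https://example-resource.openai.azure.com/openai/deployments/my-gpt-deployment"
  config.api_type = :azure
  config.api_version = "2023-05-15"
end
```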

@henrikbjorn
Author

Sorry to ping you @alexrudall

Not sure what you want to do with the RuboCop warning.

@alexrudall
Owner

@henrikbjorn - this should now be solved in v5 of ruby-openai as you can create multiple Client objects each with different configuration (so you can set different URI as needed). Let me know if this doesn't solve your problem, and thank you for your contribution!

@alexrudall alexrudall closed this Aug 14, 2023
@henrikbjorn
Author

> @henrikbjorn - this should now be solved in v5 of ruby-openai as you can create multiple Client objects each with different configuration (so you can set different URI as needed). Let me know if this doesn't solve your problem, and thank you for your contribution!

That doesn't solve the problem; it adds complexity that isn't needed, since I would have to have an embedding client, a GPT-4 client, etc.

That does not make sense to me.

@henrikbjorn
Author

Especially since that use case is still supported with this code: just set the base URI to the deployment directly and omit the deployment_id from the API calls.

@alexrudall
Owner

Hmm OK let me take another look

@alexrudall alexrudall reopened this Aug 15, 2023
@henrikbjorn
Author

The path in json_post is only changed if deployment_id is present; otherwise the code path is as before.

@themire

themire commented Aug 30, 2023

This is what I'm doing to support multiple models in Azure which is working fine.

module OpenAi # note the "OpenAi" casing, distinct from the gem's OpenAI module
  class Client
    # Build a client whose uri_base points at a single Azure deployment.
    def self.client(deployment_id)
      OpenAI::Client.new(
        uri_base: "https://***.openai.azure.com/openai/deployments/#{deployment_id}"
      )
    end

    def self.default
      client("gpt-35-turbo")
    end

    def self.gpt35_16k
      client("gpt-35-16k")
    end

    def self.gpt4
      client("gpt-4-8k")
    end

    def self.embedding
      client("text-embedding-ada-002")
    end
  end
end



OpenAi::Client.gpt35_16k.chat(parameters: ...)

@henrikbjorn
Author

Either works, but I think it is a good idea to follow the same API as the Python SDK, so the SDKs work the same.

@bmulholland

One minor annoyance with the current setup is that, even if creating multiple OpenAI clients, we have to duplicate and construct the uri each time. In that sense, the configuration for Azure isn't actually "URI base," but "full URI." Ideally, we'd configure the true "base" of the URI once (e.g. from an env var), and then set the deployment per client initialization. As a compromise, could we instead split out an optional deployment_id constructor arg that gets appended to the uri_base?
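A minimal sketch of that compromise (the `AzureUri` helper and its names are hypothetical, not part of ruby-openai): the true base is configured once, and the deployment segment is appended per client construction.

```ruby
# Hypothetical helper sketching the proposed compromise: configure the
# true URI base once and append an optional deployment_id per client.
module AzureUri
  BASE = "https://example-resource.openai.azure.com/openai" # placeholder base

  def self.for(deployment_id = nil)
    deployment_id ? "#{BASE}/deployments/#{deployment_id}" : BASE
  end
end

AzureUri.for("gpt-4-8k")
# => "https://example-resource.openai.azure.com/openai/deployments/gpt-4-8k"
```

Each client would then receive `uri_base: AzureUri.for(...)` instead of a hand-built string.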

@henrikbjorn
Author

> One minor annoyance with the current setup is that, even if creating multiple OpenAI clients, we have to duplicate and construct the uri each time. In that sense, the configuration for Azure isn't actually "URI base," but "full URI." Ideally, we'd configure the true "base" of the URI once (e.g. from an env var), and then set the deployment per client initialization. As a compromise, could we instead split out an optional deployment_id constructor arg that gets appended to the uri_base?

Which is what this PR does.
