
Error on using completions? #546

Open
denisj opened this issue Nov 12, 2024 · 3 comments

Comments


denisj commented Nov 12, 2024

Describe the bug
I don't know if it's a bug, but I'm trying to do sentiment analysis using OpenAI GPT-4. I was trying to do so with the completions API, following the example from the documentation.

response = client.completions(
  parameters: {
    model: "gpt-4o",
    prompt: "Once upon a time",
    max_tokens: 5
  }
)

But I get this error:

OpenAI HTTP Error (spotted in ruby-openai 7.3.1): {"error"=>{"message"=>"This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?", "type"=>"invalid_request_error", "param"=>"model", "code"=>nil}}

The same happens with GPT-3.5:

response = client.chat.completions(
  parameters: {
    model: "gpt-3.5-turbo-0125",
    prompt: "Once upon a time",
    max_tokens: 5
  }
)

OpenAI HTTP Error (spotted in ruby-openai 7.3.1): {"error"=>{"message"=>"you must provide a model parameter", "type"=>"invalid_request_error", "param"=>nil, "code"=>nil}}

To Reproduce
Steps to reproduce the behavior:

  1. Open a Ruby console
  2. Use the code above

Expected behavior
Just receive the response from ChatGPT.

Desktop (please complete the following information):

  • OS: Ubuntu 20.04.6 LTS

Additional context

  • Ruby: 3.3.6
  • Rails: 7.2.2
  • Ruby-OpenAI: 7.3.1

eltoob commented Nov 22, 2024

max_tokens is weirdly low, did you try increasing it?


denisj commented Nov 26, 2024

Still facing the same issue.

Could it be a problem with the key being used, i.e. one that doesn't have the proper credentials?


posiczko commented Dec 11, 2024

The completions models are now considered legacy. The /v1/completions endpoint is still present; however, you must use a compatible model to access it (see the sketch after the list below):

  • gpt-3.5-turbo-instruct
  • babbage-002
  • davinci-002
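
For reference, a sketch of both call styles with ruby-openai (based on the gem's README; the ENV variable name and model choices here are illustrative, adjust to your setup):

require "openai"

client = OpenAI::Client.new(access_token: ENV["OPENAI_ACCESS_TOKEN"])

# Legacy /v1/completions endpoint: only completion-compatible models work here
response = client.completions(
  parameters: {
    model: "gpt-3.5-turbo-instruct",
    prompt: "Once upon a time",
    max_tokens: 5
  }
)
puts response.dig("choices", 0, "text")

# Chat models such as gpt-4o go through /v1/chat/completions via client.chat,
# which takes a messages array instead of a prompt
response = client.chat(
  parameters: {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Once upon a time" }],
    max_tokens: 5
  }
)
puts response.dig("choices", 0, "message", "content")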
