Hi, I was just testing Azure OpenAI with the model "gpt35-instruct", a GPT-3.5 Instruct deployment I just created. But after setting up the model, a simple test query fails with this error:
/home/yb/.local/lib/python3.10/site-packages/lmql/runtime/bopenai/batched_openai.py:691: OpenAIAPIWarning: OpenAI: ("Setting 'echo' and 'logprobs' at the same time is not supported for this model. (after receiving 0 chunks. Current chunk time: 9.894371032714844e-05 Average chunk time: 0.0)", 'Stream duration:', 0.47087764739990234) "<class 'lmql.runtime.bopenai.openai_api.OpenAIStreamError'>"
But when I run the same query against an OpenAI GPT-3.5 Instruct model, it works fine. Is the problem that the Azure API endpoint differs from the OpenAI version, in that Azure has disabled using "logprobs" and "echo" at the same time?
To configure the Azure model, I am using the following code:
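Roughly, the configuration looks like this (this is a sketch only: the resource URL, key, and deployment name are placeholders, and the exact keyword arguments may differ depending on the LMQL version):

```python
import lmql

# Sketch of an Azure OpenAI model configuration in LMQL.
# All values below are placeholders, not real credentials.
m = lmql.model(
    "openai/gpt-3.5-turbo-instruct",   # underlying OpenAI model family
    api_type="azure",                  # route requests to an Azure endpoint
    api_base="https://<resource>.openai.azure.com/",  # placeholder resource URL
    api_key="<azure-api-key>",         # placeholder key
    deployment="gpt35-instruct",       # the Azure deployment name used above
)
```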
I just checked the OpenAI endpoint documentation, where echo is set to false by default. Is it possible that the Azure endpoint implementation defaults it to true, so the query fails? Or am I using the wrong configuration for the Azure model? Thank you in advance!
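To check whether the restriction comes from the Azure endpoint itself rather than from LMQL, the same parameter combination can be sent directly with the OpenAI SDK's Azure client (endpoint, deployment name, key, and API version here are placeholders, and this assumes the v1 `openai` Python package):

```python
from openai import AzureOpenAI

# Placeholder credentials; replace with your own Azure OpenAI resource details.
client = AzureOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_key="<azure-api-key>",
    api_version="2023-05-15",  # placeholder API version
)

# Reproduce the parameter combination LMQL sends: echo together with logprobs.
# If Azure rejects this with the same error, the restriction is server-side.
resp = client.completions.create(
    model="gpt35-instruct",  # the Azure deployment name
    prompt="Say hello.",
    max_tokens=5,
    echo=True,
    logprobs=1,
)
print(resp.choices[0].text)
```

If this call fails with the same "Setting 'echo' and 'logprobs' at the same time is not supported" message, the behavior is an Azure-side limitation rather than a configuration mistake.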