Fix request timeout on OpenAI calls #67
Merged
This PR fixes two issues:

- Request timeouts on OpenAI calls: fixed by installing the `HttpTimeout` ktor plugin and setting a default timeout of 30 seconds for all OpenAI requests.
- `max_tokens`: adjusted,
due to now having larger prompts. BUT I think this is something we should calculate by ourselves, by using a Tokenizer to get the number of tokens of our prompt and subtracting it from the maximum context length of the selected model. That way we can also raise an error before sending the request to OpenAI. In that sense, for now I have just added the `contextLength` of each `LLMModel`.

cc: @xebia-functional/team-ai
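For reference, the timeout change can be sketched roughly like this; a minimal sketch of installing ktor's `HttpTimeout` plugin on a client, not necessarily the exact configuration in this PR:

```kotlin
import io.ktor.client.*
import io.ktor.client.plugins.*

// Minimal sketch: install ktor's HttpTimeout plugin so every request
// made through this client fails after 30 seconds by default.
val client = HttpClient {
    install(HttpTimeout) {
        requestTimeoutMillis = 30_000
    }
}
```

Individual calls can still override this per-request via `timeout { ... }` in the request builder if some endpoint needs more time.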
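The token-budget idea described above could look something like the following. This is a hypothetical sketch only: the function name, parameters, and error handling are assumptions for illustration, not code added by this PR.

```kotlin
// Hypothetical sketch of the proposed calculation: count the prompt's
// tokens with a tokenizer, subtract that from the selected model's
// context length, and fail fast before sending the request to OpenAI
// if no room is left for the completion.
fun remainingTokens(
    promptTokenCount: Int, // e.g. obtained from a tokenizer for the model's encoding
    contextLength: Int     // maximum context length of the selected model
): Int {
    val remaining = contextLength - promptTokenCount
    require(remaining > 0) {
        "Prompt ($promptTokenCount tokens) exceeds the model's context length ($contextLength)"
    }
    return remaining // usable as an upper bound for max_tokens
}
```

Raising the error locally gives a clearer message than waiting for OpenAI to reject the request server-side.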