Hello,

I am getting the following error when I send multiple requests in parallel to the inference endpoint:
```
ERROR: 503
{
  "code": 503,
  "type": "ServiceUnavailableException",
  "message": "Model \"restorer\" has no worker to serve inference request. Please use scale workers API to add workers. If this is a sequence inference, please check if it is closed, or expired; or exceeds maxSequenceJobQueueSize"
}
```
I have two separate processes that can access the inference API.
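For context, each process sends its requests roughly like this (a minimal sketch, not my exact code; the endpoint path and default port 8080 assume a TorchServe-style server, and the payload/file name are placeholders):

```python
import concurrent.futures

import requests

# Placeholder endpoint: TorchServe-style inference API on its default port 8080.
# The model name "restorer" comes from the error message; the input file is illustrative.
URL = "http://localhost:8080/predictions/restorer"

def infer(payload: bytes) -> requests.Response:
    # POST one inference request; a 503 here reproduces the error above.
    return requests.post(URL, data=payload, timeout=30)

with open("sample_input.jpg", "rb") as f:
    data = f.read()

# Two concurrent requests, mimicking the two separate processes.
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(infer, data) for _ in range(2)]
    for fut in concurrent.futures.as_completed(futures):
        resp = fut.result()
        print(resp.status_code, resp.text[:200])
```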
Any ideas?
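For reference, the "scale workers API" mentioned in the error message appears to be the TorchServe-style management API. If that's what applies here, scaling the model to two workers would look roughly like this sketch (the management port 8081 is the TorchServe default and an assumption on my part):

```python
import requests

# Assumption: TorchServe-style management API on its default port 8081.
# PUT /models/{model_name}?min_worker=N scales the model's worker pool.
resp = requests.put(
    "http://localhost:8081/models/restorer",
    params={"min_worker": 2, "synchronous": "true"},
    timeout=60,
)
print(resp.status_code, resp.text)
```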