Exception in batch inference with SDK #839
Please post your environment info by running MMDeploy's environment check script. Also, can you share the code where you use the SDK Python API?
Thanks for your timely reply.

```
2022-07-29 12:19:05,439 - mmdeploy - INFO - Environmental information
2022-07-29 12:19:05,710 - mmdeploy - INFO - TorchVision: 0.13.0
2022-07-29 12:19:05,710 - mmdeploy - INFO - Backend information
2022-07-29 12:19:06,261 - mmdeploy - INFO - Codebase information
```
And the code:
Hi,

Just ignore the warning about bulk implementations; it has nothing to do with batch inference. Batch inference in the SDK is experimental and must be turned on explicitly in the configuration file. In the `Net` task entry of the pipeline config, set `is_batched` to `true`:

```json
{
    "name": "yolox",
    "type": "Task",
    "module": "Net",
    "is_batched": true,  // <--
    "input": ["prep_output"],
    "output": ["infer_output"],
    "input_map": {"img": "input"}
}
```

Also be aware that, after preprocessing, images must be of the same size to form a batch.
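For reference, here is a minimal sketch of flipping that flag programmatically rather than by hand. It assumes the exported SDK model directory stores the pipeline configuration as `pipeline.json` with a `{"pipeline": {"tasks": [...]}}` layout; the exact file name and nesting may differ between MMDeploy versions, so check your exported model directory first.

```python
import json
from pathlib import Path

# Hypothetical path to an exported SDK model directory.
cfg_path = Path('mmdeploy_models/faster-rcnn') / 'pipeline.json'
cfg = json.loads(cfg_path.read_text())

# Enable batching on the inference task ("module": "Net"),
# assuming the usual {"pipeline": {"tasks": [...]}} layout.
for task in cfg.get('pipeline', {}).get('tasks', []):
    if task.get('module') == 'Net':
        task['is_batched'] = True

cfg_path.write_text(json.dumps(cfg, indent=2))
```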
Thanks, I can verify that this works.
Thank you for the great work.

I exported a Faster R-CNN TensorRT model with your tools, and it works fine with the inference API from the Model Converter. However, when I use the MMDeploy SDK (Python API) for inference, I get tons of `[warning] [bulk.h:39] fallback Bulk implementation` messages, and the model fails to do batch inference: it infers images one by one no matter how long the input image list is.
Please tell me if you need any information.
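For context, this is roughly the kind of call involved, sketched under two assumptions: that the `mmdeploy_python` `Detector` binding takes `(model_path, device_name, device_id)` and that it exposes a `batch()` method accepting a list of same-sized images. Paths and sizes below are illustrative, not taken from the thread.

```python
import cv2
from mmdeploy_python import Detector  # MMDeploy SDK Python binding

# Hypothetical exported SDK model directory.
detector = Detector('mmdeploy_models/faster-rcnn', 'cuda', 0)

# Images must end up the same size after preprocessing to form a batch;
# resizing inputs to one resolution beforehand is one way to ensure that.
imgs = [cv2.resize(cv2.imread(p), (1333, 800))
        for p in ['img1.jpg', 'img2.jpg']]

# With is_batched enabled in the pipeline config, this should run as a
# single batched forward pass instead of one inference per image.
results = detector.batch(imgs)
for bboxes, labels, _ in results:
    print(bboxes.shape, labels.shape)
```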