Drastic performance degradation when running multiple times on the same image #2749
jacoblambert asked this question in Q&A
Hi all,
I've encountered an issue where, after loading the model and performing inference once, performance degrades significantly on subsequent runs. For example, when I load the model -> run inference on an image -> run inference on the same image again, these are the results I get:
I don't want to call this a bug; I suspect this is a case where the .eval() flag is not being set and something in the model is changing between runs, but I don't have access to the torch Module here, so I can't set it myself. Has anyone encountered this issue, and is there something wrong with my usage of the mmdeploy API?
Here's a code snippet for how I'm loading the model and running inference, followed by my environment details.
FYI, I'm using the mmdeploy_runtime "Detector" rather than "build_task_processor" because of this bug.
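A minimal sketch of that load-then-infer-twice workflow with the mmdeploy_runtime Detector API; the model directory, image file, device, and timing loop below are illustrative assumptions, not the exact code from the report:

```python
import time

import cv2
from mmdeploy_runtime import Detector

# Load the converted SDK model once (directory produced by mmdeploy's deploy.py).
# The path and device are hypothetical placeholders.
detector = Detector(model_path='./mmdeploy_model/faster-rcnn',
                    device_name='cuda',
                    device_id=0)

img = cv2.imread('demo.jpg')  # hypothetical test image

# Run inference twice on the same image and time each call, to compare the
# first pass against the second where the degradation is reported.
for i in range(2):
    start = time.perf_counter()
    bboxes, labels, masks = detector(img)
    elapsed = time.perf_counter() - start
    print(f'pass {i + 1}: {elapsed:.3f}s, {len(bboxes)} detections')
```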
Here is my env check: