Inference on a V100 slower than a GTX 2080 Ti for image size 128×128?
Ubuntu 18.04
CUDA 11.1
cuDNN 7
To rule out CPU limitations, I separated preprocessing and the host-to-device upload from the model inference itself and timed only the inference step. Even so, inference takes 40 ms on the GTX 2080 Ti but 79 ms on the V100.
Neither system is constrained by CPU memory.
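For reference, a minimal sketch of how I understand the split measurement, assuming PyTorch (the framework is not stated in the post) and a placeholder model and input. On small 128×128 workloads, skipping warm-up iterations or synchronization can let cuDNN autotuning and launch overhead dominate the measured time, which would distort a comparison between the two GPUs.

```python
import torch

# Hypothetical stand-ins; the original post does not share code.
model = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1).cuda().eval()
cpu_batch = torch.randn(1, 3, 128, 128)  # preprocessed on the CPU

with torch.no_grad():
    # Host-to-device upload, kept separate from the inference timing.
    gpu_batch = cpu_batch.cuda()

    # Warm-up: the first calls pay for kernel loading / cuDNN autotuning.
    for _ in range(10):
        model(gpu_batch)
    torch.cuda.synchronize()

    # CUDA events measure elapsed GPU time for the inference step only.
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(100):
        model(gpu_batch)
    end.record()
    torch.cuda.synchronize()
    print(f"inference: {start.elapsed_time(end) / 100:.2f} ms per call")
```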