
TRT inference consumes a large amount of memory #1914

Open
dengshuhai-cmd opened this issue Nov 19, 2024 · 0 comments

Comments

@dengshuhai-cmd

I am comparing a self-trained YOLOv8n model (2 classes) with the official YOLOv8n model (80 classes), both taking 640×640 images as input. When running TensorRT inference on a Jetson (8 GB), the 80-class model prints an "init cuDNN" log and consumes a large amount of memory, roughly twice that of the 2-class model; the 2-class model never prints "init cuDNN" during inference. When I run both models on a Linux PC, they consume similar amounts of memory and "init cuDNN" never appears. What could explain this?
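For context, TensorRT only loads cuDNN at runtime if the engine was built with cuDNN-backed tactics enabled, which is controlled at build time through the builder config's tactic sources. Below is a minimal sketch, assuming the TensorRT 8.x Python API, of building an engine with cuDNN excluded from the tactic sources; the enum names are real TensorRT identifiers, but whether this removes the extra memory on Jetson is an assumption to verify, and the engine must be rebuilt for the setting to take effect.

```python
import tensorrt as trt  # assumes TensorRT 8.x Python bindings

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Allow cuBLAS/cuBLASLt tactics but exclude cuDNN. An engine built this way
# should never trigger cuDNN initialization (and its workspace allocations)
# at inference time.
sources = (1 << int(trt.TacticSource.CUBLAS)) | (1 << int(trt.TacticSource.CUBLAS_LT))
config.set_tactic_sources(sources)
# ... build the network and serialize the engine with this config as usual.
```

Comparing the verbose build logs of the two engines (the 2-class and 80-class models) should also show which tactic sources each one actually selected.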
