
Inquiry about TensorRT Deployment for Ultra Fast Lane Detection V2 #189

Open
yspark98 opened this issue Aug 5, 2024 · 0 comments
yspark98 commented Aug 5, 2024

Hello, my name is Yoonsoo Park, and I am a master's student at Inha University in South Korea. I am using Ultra Fast Lane Detection V2 for lane detection, and my goal is to process a video file named "example.mp4" on a Jetson Orin Nano. Because of memory constraints on the Nano, I convert the .pth checkpoint to .onnx and build the TensorRT engine from the .onnx on a desktop, then transfer the resulting engine file to the Orin Nano and run the deploy/trt_infer.py script there.

Both the desktop and the Orin Nano run CUDA 12.0 and TensorRT 8.6.1, but when I execute deploy/trt_infer.py on the Orin Nano, I get a segmentation fault (core dumped). If there is any advice on how to run UFLDv2 successfully on the NVIDIA Jetson Orin Nano, such as which CUDA and TensorRT versions to match, I would greatly appreciate your guidance.
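In case it is relevant: my understanding is that serialized TensorRT engines are tied to the GPU architecture and TensorRT build they were created on, so an engine built on a desktop GPU may fail to deserialize on the Orin Nano even when CUDA and TensorRT versions match. A minimal check I could run on the Nano before trt_infer.py would be something like the sketch below (the engine file name is a placeholder; this obviously only runs on a machine with TensorRT installed):

```python
import tensorrt as trt

# Verbose logging so any deserialization error is printed before a crash.
logger = trt.Logger(trt.Logger.VERBOSE)
runtime = trt.Runtime(logger)

with open("model.engine", "rb") as f:  # placeholder path for the transferred engine
    engine = runtime.deserialize_cuda_engine(f.read())

if engine is None:
    print("Deserialization failed: the engine is likely incompatible "
          "with this GPU architecture or TensorRT build.")
```

If deserialization returns None here, that would suggest the engine itself is the problem, in which case rebuilding the engine from the .onnx directly on the Orin Nano may be necessary.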

Thank you.
