Nvidia Jetson Nano gstreamer + TensorRT support #2548
Conversation
Hi folks, sorry, I cannot continue contributing to this PR. I have to leave my home where I left the Jetson Nano device. Not sure when I can go back. I'll be very happy if someone can continue the work.
Thank you for all your work on this and most importantly be safe!
Be safe and Godspeed! I'm not really a programmer, but I have decades of professional IT experience and a spare Jetson Nano 4GB we can try stuff with. I'm somewhat busy, but if someone can walk me through what needs to be done to get a working image for the Jetson Nano, I'd be happy to contribute.
I need to redo my setup anyway, so I'll pull the old Nano 4GB out and see how she does. Question though: I have a USB Coral, and my current setup is a VMware box with Ubuntu Server 20 LTS running Frigate 0.11.1 in docker-compose, with the USB controller passed through to the VM and the Coral attached to it. Then another VM runs Home Assistant with Frigate Proxy, Double Take, and CompreFace doing the facial recognition and triggering automations off of it, although nowadays I usually just use the Frigate automation blueprint for notifications. I'm currently running 4 streams at 7 fps with an inference speed of about 30-50, plus the sub-processing of the images by Double Take and CompreFace.

I'd like to utilize the Nano as much as possible, but I also have a 40-core PowerEdge server backing the rest. Since I have both the Nano and the Coral along with a beefy server, what do you think the best setup would be? This is primarily for home use and monitoring, and I'd like it to be able to run facial detection. I'm thinking of using the Nano with the Coral just for Frigate: have the object detection run through the Coral and utilize the GPU for encoding the incoming streams and sending them back out as RTMP for the other services. What do you guys think?
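(For anyone wanting to replicate the Coral-passthrough half of that setup, a minimal docker-compose sketch is below. The image tag, paths, and shm size are illustrative assumptions, not details taken from this thread.)

```yaml
# Sketch: Frigate 0.11.x with a USB Coral passed through to the container.
# Adjust image tag, volumes, and ports to your own environment.
version: "3.9"
services:
  frigate:
    image: blakeblackshear/frigate:0.11.1   # assumed tag for the 0.11.1 release
    restart: unless-stopped
    shm_size: "256mb"                       # raw frame cache lives in shared memory
    devices:
      - /dev/bus/usb:/dev/bus/usb           # expose the USB Coral to the container
    volumes:
      - ./config.yml:/config/config.yml:ro
      - ./media:/media/frigate
    ports:
      - "5000:5000"   # web UI / API
      - "1935:1935"   # RTMP restream (0.11.x only)
```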
Any way we can reopen this and I'll assist? I'm no programmer, but I've worked in IT for over 20 years. I'm pretty good at figuring things out if anyone can point me in a good direction or advise on where to go to start learning programming. I figured Python would be the best way to start, as I can understand its relation to Linux and even Windows systems.
@LordNex this PR has loads of conflicts. 0.12 already has TensorRT support built in, so Jetson support will likely be added at some point in the future.
@NickM-27 Yeah, I saw that, and I didn't know if the conflicts needed to be fixed for it to make it into 0.12. I've noticed all the commits for 0.12 going through, so I'm hopeful that it will be out soon. My main question still applies as to what the best setup would be. I'm also a Frigate+ beta user, so I've been trying to help where I can. I'd be interested in what you and everyone else think would be the best setup. Thanks again for your hard work 👍
To be clear, Nano support won't make it into 0.12, which is feature complete now that it's in beta. It should be coming in the future, though, like I said.
Makes sense to me, although with 0.12 it'll be RTSP and not RTMP.
So the base support is already built in, just in beta and not official? Is that what you mean? I remember some of us discussing it before, and someone said it's doable now, but with a lot of setup, which I don't mind.
Is there a reason for the move off of RTMP? I thought that was the primary solution for consuming the stream from several different sources at the same time. Currently I have my Reolink, Amcrest, and Wyze cams all set up to send RTSP to Frigate and let it restream them out as RTMP. But maybe I'm missing something.
As far as that development went with the Jetson Nano: 1) instead of ffmpeg, gstreamer was needed on the Nano, and that seemed to be a big thing; 2) another point with the Jetson Nano was, as far as I remember, that it has different TensorRT features than the big NVIDIA GPUs.
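(For context on point 1: hardware decode on the Nano goes through NVIDIA's L4T gstreamer elements rather than stock ffmpeg. A rough sketch of the kind of pipeline involved follows; the element names are from NVIDIA's L4T plugins, and the exact pipeline this PR's fork built may differ.)

```sh
# Sketch: decode an RTSP h264 stream with the Nano's hardware decoder.
# nvv4l2decoder outputs into NVMM memory; nvvidconv copies frames back
# to system memory so an application (e.g. a detector) can read them.
gst-launch-1.0 rtspsrc location="rtsp://camera.local/stream" \
  ! rtph264depay ! h264parse \
  ! nvv4l2decoder \
  ! nvvidconv ! "video/x-raw,format=I420" \
  ! fakesink   # a real integration would use appsink to hand frames to the app
```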
All this information is in the release notes for the beta: https://github.com/blakeblackshear/frigate/releases/tag/v0.12.0-beta1
No. The TensorRT support in version 0.12 will only support NVIDIA GPUs on x86 platforms. The Nano or any other Jetson device will not be supported.
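(For reference, enabling the x86 TensorRT detector in 0.12 looks roughly like the sketch below, based on the beta release notes and docs; the model path and dimensions are placeholder assumptions.)

```yaml
# Sketch: Frigate 0.12 TensorRT detector on an x86 NVIDIA GPU.
detectors:
  tensorrt:
    type: tensorrt
    device: 0                     # GPU index

model:
  path: /trt-models/yolov7-tiny-416.trt   # a pre-generated TensorRT engine
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416
  height: 416
```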
RTMP only supports h264 and crashes and burns as soon as the bitrate gets too high. It was probably responsible for 50% of the issues reported by users and had to be disabled. The same functionality will now be offered using the RTSP protocol, which supports h265 too and is much more stable.
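(Concretely, the 0.12 replacement means defining each camera source once in go2rtc and pointing both Frigate and any downstream consumers at the RTSP restream. A sketch of that shape, with placeholder names and URLs:)

```yaml
# Sketch: 0.12-style restream. The camera is ingested once by go2rtc and
# re-served over RTSP; Frigate and external services both consume that.
go2rtc:
  streams:
    front_door:
      - rtsp://user:pass@10.0.0.10:554/h264   # original camera feed

cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/front_door   # the local restream
          roles:
            - detect
            - record
```

External consumers (Double Take, a VLC client, etc.) would then read rtsp://<frigate-host>:8554/front_door instead of the old RTMP URL.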
This is in the release notes for 0.12. Jetson platforms will not be supported in 0.12.
0.12 does not incorporate gstreamer. 0.12 only works on x86 platforms. Adding support for Jetson will require a lot of additional changes. |
That's what I was remembering from my early experiments. But it's great now to see NVIDIA GPUs being made usable for decoding and AI (old days). I hope to see more than only one AI model running on a capable NVIDIA GPU.
@blakeblackshear So Frigate won't run on the Jetson at all then. Damn, I was hoping to leverage it for the encoding and decoding and the Coral for detections. Guess I'll have to look into possibly moving CompreFace and maybe Double Take over to it, and just leave Frigate running in the Ubuntu Server VM with the Coral passed through. Currently I'm giving it 8 cores on 2 sockets, so it has 16 thread cores to run, and only 16 GB of RAM, as I've never seen it climb above 4 GB of usage. The 4 x 7 fps streams are recorded 24/7 to a NAS on the same VM host, the processors rarely go above 50%, and inference speed is around 39. I just hate putting everything on that server.

I originally bought the Nano for DeepStack, but since the code owner of that is basically finished, there's no more progression there. That, and I found CompreFace to be way more reliable and accurate. What would you use the Nano for in this type of setup? Is there anything I can utilize it with in this project? Or should I just turn it into an overly powerful RPi mobile desktop (wonders if it runs games lol)?

Again Blake, thanks for all your hard work. It's much appreciated. I wish I knew more to help, but I'm an infrastructure guy. Give me some servers and networking equipment and I can build some amazing setups. Just ain't much of a coder yet ✌️
@cap9qd I have a fork of Yury's code which I modified to get working. It's not clean, but I have it running on my Jetson Nano.
@cptnalf Thanks, I'll have to dig my Nano back out and see how it does. Are you still using a Coral on it, or is it using the CUDA cores? I have both the Nano and a USB Coral, so I'm trying to find the best combination. So far that's meant running Frigate on an industrial server with the Coral attached. At this point I'm thinking of just putting a cheap PCIe Nvidia card in the server to help with transcoding. My idea with the Nano was to use the Coral as the detector platform and the GPU for the transcoding. I'm also running CompreFace for facial recognition, which, if I remember correctly, doesn't have a build for the Nano yet. Anyway, I'll definitely check it out as soon as I dig the damn SD card out.
I tried your fork originally and got the same errors. I checked out the gstreamer tag; is that correct? Let me force-pull your fork, prune my environment, and try again. I was stuck on this issue with it pulling frigate-wheels-l4t from docker.io, so let me reset and try again. I think (know) my problems have been my learning pains with Docker. I like the idea of ffmpeg; I am no GST expert or even novice, so it took me like a year to get stable pipelines, and if I can ditch that I won't shed a tear. I played with it some for the Jetson inference tutorials, but that's about it.
@LordNex I'm not using a Coral device on my Jetson Nano (4GB) (with the custom build I have stitched together from Yury's branch). ffmpeg with nvmpi is probably not the best decode/re-encode setup: the interface is different from the standard GPU API, and it's not as well supported (or really supported at all?). @cap9qd That is the correct branch. Now that I look at things, converters/yolov4/build.sh isn't called out anywhere.
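(For anyone unfamiliar with the nvmpi route mentioned above: it relies on community patches to ffmpeg for the Jetson's multimedia API, e.g. the jetson-ffmpeg project, which add codecs such as h264_nvmpi. A sketch of what decode looks like under that assumption; these codecs are not in stock ffmpeg.)

```sh
# Sketch: hardware h264 decode via an nvmpi-patched ffmpeg on a Jetson,
# emitting raw frames on stdout the way Frigate consumes them.
ffmpeg -c:v h264_nvmpi -i "rtsp://camera.local/stream" \
       -f rawvideo -pix_fmt yuv420p pipe:1
```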
Thanks for the help! The l4t_wheels image builds against Python 3.8 and was successful, but I am still getting an incompatible wheel error on the wheels copied from frigate_wheels when building l4t_frigate. I didn't have much time to investigate, and probably won't until this weekend, but I am determined to figure it out and help with the 0.11 and 0.12 work.
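(One generic way to chase an "incompatible wheel" error is to compare the tags baked into the wheel filename against what pip on the target accepts; this is standard pip tooling, nothing Frigate-specific.)

```sh
# Wheel filenames encode python/ABI/platform tags, e.g.:
#   numpy-1.23.0-cp38-cp38-linux_aarch64.whl   (hypothetical example)
ls /wheels/*.whl

# List the tags this interpreter's pip will accept (pip >= 19.2):
pip debug --verbose
```

If the wheel's platform tag (say, linux_aarch64) or python tag (cp38) isn't in that list, the build stage and the install stage are targeting different Python versions or architectures.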
@cptnalf Hmm, that really makes me wonder how I'll want to even use the Nano now. Maybe I can see if someone can get CompreFace, or something better than DeepStack, to run on there and use it for facial rec. Then just leave Frigate as a VM on my PowerEdge with the Coral. I might try to dig up an old video card that will fit in that small 1U chassis and let it do the transcoding, since it has about 40 cores and 256 GB of RAM. I'm also shunting all of my footage to an OpenMediaVault NAS VM over NFS. Works like a dream until something knocks the drive offline; then the local hard drive fills up, crashes everything, and takes me forever to clean it back up and stand it upright again. But hopefully 0.12 will fix some of that.
GREAT NEWS: Python bindings for DeepStream!! https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/releases/tag/v1.1.6
The rebased version of #2440. The gst-discoverer-1.0 hang on discovering a stream should not hang up the app anymore. I updated the Docker image on DockerHub; you can test this branch with:
```sh
docker pull yury1sannikov/frigate.l4t:latest
```
More details: #2440 (comment)
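(If anyone wants to try that image on a Nano, a run command along these lines is probably needed. The nvidia container runtime flag is the standard way to give L4T containers GPU access; the remaining flags are assumptions about this image, not documented specifics.)

```sh
# Sketch: run the L4T Frigate image with GPU access on a Jetson.
docker run -d --name frigate \
  --runtime nvidia \
  --shm-size=128m \
  -v /path/to/config.yml:/config/config.yml:ro \
  -p 5000:5000 \
  yury1sannikov/frigate.l4t:latest
```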