Nvidia Jetson gstreamer support (WIP) #2440
Conversation
I don't have any issues with this direction. I looked at gstreamer previously, but ffmpeg is much more widely used and I am much more familiar with it. It seems totally reasonable to run it in parallel when needed. I also think TensorRT is worth including. I tried a while back, but there were python version incompatibilities that I couldn't resolve. I just haven't had time to come back around to trying again. These changes would likely be for 0.11.0. I am also planning to try and switch to jellyfin-ffmpeg for that release.
Is this the right place to collect ideas? You could guess how this could look...
hi @blakeblackshear
I thought I saw that TensorRT was compatible with python 3.8 a while back. I don't know much about the L4T containers.
Take a look at watsor to see if it gives you any ideas: https://github.com/asmirnou/watsor/tree/master/docker
From a first look, it seems it's based on Python 3.6:
promising performance results for more than 8 cams...
I have pretty decent results running 7 cameras @ 10 fps with software inference and gstreamer with hardware support. I also get ~25 fps at 1080P inference speed with the yolov4-tiny-416 model using TensorRT. My goal now is to make a Frigate version that uses gstreamer + TensorRT inference and stress test this setup on my hardware. I have all the pieces together, though I have a hard time compiling some of the dependencies during the Docker build phase. I have to either add …
As I have no coding skills and can only read some lines, I'm following your commits and the code refactoring for the Nano (and, with foreign-detectors, it looks like there are some other improvements).
This looks like a good start. I don't have any major issues with the direction here. It's probably best if we add gstreamer to all variations of the docker builds. It may be beneficial on other platforms too, and I don't want some config options to only work in some image variants. Also, you may want to look at Ambianic since it also uses gstreamer. There may be some useful insights there.
Yup, that makes perfect sense. This work is still in the feasibility-study phase, I think. I have to figure out how to make gstreamer re-stream RTSP, since it can't be backed by ffmpeg.
What if you use ffmpeg to relay the RTMP stream and gstreamer just for decoding? Can you mix and match ffmpeg and gstreamer? I'm planning to replace ffmpeg for the RTMP relay in the future anyway.
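Conceptually, the split could look like this in a camera config. The `gstreamer:` block is a hypothetical sketch of the WIP schema, while the ffmpeg `rtmp` role is how vanilla Frigate already does the relay:

```yaml
cameras:
  back:
    gstreamer:
      # Hypothetical keys: GStreamer (NVDEC) produces decoded frames for detection.
      input: rtsp://user:pass@192.168.1.10:554/stream
      decoder: nvv4l2decoder
    ffmpeg:
      # Standard Frigate: ffmpeg keeps relaying the same stream to RTMP.
      inputs:
        - path: rtsp://user:pass@192.168.1.10:554/stream
          roles:
            - rtmp
```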
Didn't do much for the last couple of days other than running yolov4-tiny-416 side by side on a Jetson Nano with TensorRT and gstreamer (7 cameras) and on an Intel NUC with vanilla Frigate using CPU inference (2 cameras). The configuration file for both setups is almost the same, except the NUC has 2 cameras and CPU inference. I feel 7 cams is sort of okay, but I would not run more on such a tiny device as the Nano, though I observe no issues except one: I got a bus error overnight, which was, hopefully, solved by tweaking the shm-size. What I'm really happy about is that yolov4, it seems, has better sensitivity in low-light/night-mode conditions. Not sure if the inference speed makes any difference, though yolov4-tiny was able to detect a person ~25 meters away during rain at night; I observed no such events registered by the NUC, fed from the same camera. I'm excited to finalize this PR, and I might need some guidance from @blakeblackshear to complete that. Here is the plan which I have in mind:
It seems I won't be able to get away without the Nvidia runtime enabled for the Jetson family builds. Other projects seem to have the same issue.
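For reference, enabling the runtime in docker-compose is a one-liner (standard compose syntax; the image tag here is a hypothetical local build):

```yaml
version: "2.3"        # 'runtime:' needs compose file format 2.3+ (or a recent docker compose)
services:
  frigate:
    image: frigate-jetson:latest   # hypothetical tag for an L4T-based build
    runtime: nvidia                # exposes the GPU/NVDEC inside the container
    shm_size: "256mb"              # the shm-size tweak mentioned above
```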
I share that impression. Interesting to see what yolov5 does; it's a somewhat new architecture, based on PyTorch.
This PR will need to be rebased on the release-0.10.0 branch.

What inference speeds are you seeing on the yolo-4 model with the Nano?
The release-0.10.0 branch makes sense to me. Do you see any existing issues or upcoming work that may interfere with what I'm doing? I might hold off if that's the case. If the stats are correct, I'm getting ~37ms inference time for the yolov4-tiny-416 model (roughly 27 inferences per second). I haven't looked at yolov5 in detail; I just noticed it's not the yolov4 successor, which is not necessarily bad. Also, I did not see any yolov5-tiny models. Looks like the full model is pretty heavy for the Nano (although okay for more powerful devices). I ran …
Jetson Nano is the board, and yolov5 v6 "n" looks like the new yolo-tiny. Found some ncnn (?) builds of yolov5 explicitly for the Nano: the yolo(4)-tiny models may now be called yolo(5-v6)-nano. On running yolov5 on the edgeTPU, some inspiration here:
Oh, awesome! Sorry, I'm pretty new to all of this stuff.
There aren't any in-progress changes that will conflict. The only thing that could conflict for the 0.11.0 release will be changes to ffmpeg and the use of RTMP. This will go out with 0.11.0 if ready. I just wish I could get a Jetson Nano for testing.
Preferably the 4GB RAM version (over the smaller 2GB); out of stock at the moment (chip crisis...).
I wish I could code, to help find the better road for yolo on the Jetson Nano. 😢

Yolo (v3/4/5) on Coral?
The yolo models could run on the Coral, I guess, v3:

Yolo5 format options:

Yolo (v5/v6?) as TensorRT?
But maybe a better way is to get them running "natively" as TensorRT models (like Nvidia does itself)?

Yolo (v3/4/5 and others) as TensorRTx?
https://github.com/wang-xinyu/tensorrtx

What's needed?
Don't know what's needed to convert, and to make Frigate handle the results correctly...

Thought: as this is WIP for the Jetson Nano SBC (or Jetson TX?), which has powerful AI onboard, maybe the Jetson SBC platform should not use the Coral (or at least not depend on it) and instead use its own AI capabilities? That would make a single-hardware solution.
Rebased to 0.10 (#2548) and apparently screwed up this PR.
hi @blakeblackshear,
First of all, thank you for this awesome project!
I also struggled with hardware video decode on the Jetson Nano. As far as I understand, ffmpeg is sort of a PITA on the Tegra platform, so I made an attempt to set up GStreamer with Frigate.
This work is very far from done; it is mostly intended as a proof of concept and a request for comments/suggestions.
Here is what I have now:
`frigate.yml` can have a `gstreamer:` section instead of `ffmpeg:`. A configuration that specifies no GStreamer pipeline falls back to the GStreamer `videotestsrc` video source. The result looks like this:

![videotestsrc fallback](https://camo.githubusercontent.com/4f6c753b33f5d0914b79bb2929b00b16c2b0f4c375bdbd9a13eaf00b0d864df5/68747470733a2f2f692e6962622e636f2f4c5a397748394d2f696d6167652e706e67)

With knowledge of the supported decoders, one can come up with the following configuration:
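A rough sketch of what that could look like; every `gstreamer:` key here is an assumption for this WIP, not a finalized schema:

```yaml
cameras:
  garden:
    gstreamer:
      # Hypothetical keys: the WIP schema may differ.
      # nvv4l2decoder decodes H.264 in hardware (NVDEC) on the Jetson;
      # nvvidconv copies the frames out of NVMM memory as raw I420.
      input: rtsp://user:pass@192.168.1.10:554/stream
      decoder: nvv4l2decoder
    detect:
      width: 1280
      height: 720
      fps: 10
```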
I'm using a CPU detector, so the usage might look pretty high, though NVDEC is active.

In order to have full control over GStreamer, one can come up with a manual pipeline, which might look something like this:
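For illustration, a hedged sketch of such a manual pipeline; the `pipeline` key and its list shape are assumptions about the WIP schema, while the elements themselves are standard Jetson GStreamer ones:

```yaml
cameras:
  garden:
    gstreamer:
      # Hypothetical 'pipeline' key: one GStreamer element per list entry,
      # joined with '!' when the pipeline is assembled.
      pipeline:
        - rtspsrc location=rtsp://user:pass@192.168.1.10:554/stream latency=0
        - rtph264depay
        - h264parse
        - nvv4l2decoder enable-max-performance=true  # hardware H.264 decode (NVDEC)
        - nvvidconv                                  # NVMM -> system memory
        - video/x-raw,format=I420
        - fdsink                                     # raw frames to stdout for Frigate
```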
This setup is successfully up and running on my Jetson Nano and controls the garden lights. Though I see no issues with GStreamer as a back end, I did not have time to make ffmpeg work. As a result, the live stream is broken. There is a list of things that makes this PR far from ideal:
As for future improvements, I have a couple of questions:
Thanks