
Fast Network Camera mode supports H264 RTSP stream #1764

Merged (8 commits) on May 10, 2019

Conversation

@jasaw (Collaborator) commented Jan 3, 2019

This is my first attempt at implementing the #1259 feature; please review and give feedback.

This PR adds a live video and audio RTSP stream to the fast network camera backend for Raspberry Pi 1, 2, and 3, served at the URL rtsp://<rpi-IP-address>/h264. The video is in H264 format, with optional audio in AAC format. It relies on GStreamer to read the H264 video feed from a V4L2 video interface, which works with the RPi Cam via the bcm2835-v4l2 driver. USB webcams with H264 output are supported as well. For audio, GStreamer reads from a specified ALSA interface; any microphone that appears as an ALSA device should work.

Note: RTSP configuration support on the web interface is not covered by this PR, which means RTSP mode can only be enabled by editing configuration files by hand.

If you are interested in testing this out, head to my pre-built images: https://github.com/jasaw/motioneyeos/releases/tag/rtsp-fnc-dev20190321
I suggest trying this on a sacrificial microSD card so you don't need to rebuild your system if it doesn't work.
Be warned that I have only tested the Raspberry Pi 2 version in a controlled environment. Use at your own risk.

Architecture

GStreamer reads the video and audio, parsing the H264 video stream and encoding the audio to AAC. The video and audio feeds (timestamped to help playback synchronization) are handed over to the GStreamer RTSP server (called test-launch), which can serve multiple clients simultaneously.

To minimize changes to the web front end while still showing the video feed as before, another GStreamer instance is set up to convert RTSP to MJPEG. The MJPEG output is fed into StreamEye, which serves the MJPEG stream to motionEye.

RPi-Cam / Webcam (Video) ---+    GStreamer    +--- Any RTSP Client
                            +---   RTSP    ---+
USB / Webcam Mic (Audio) ---+     Server      +--- GStreamer RTSP Client --- StreamEye --- motionEye
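
For illustration, the serving side amounts to handing test-launch a pipeline of roughly this shape (the element names, caps, and device paths below are illustrative assumptions, not the exact pipeline used by the scripts):

  test-launch "( v4l2src device=/dev/video0 \
      ! video/x-h264,width=1280,height=720,framerate=25/1 \
      ! h264parse ! rtph264pay name=pay0 pt=96 \
      alsasrc device=hw:1,0 ! audioconvert \
      ! voaacenc bitrate=16000 ! aacparse \
      ! rtpmp4gpay name=pay1 pt=97 )"

test-launch mounts the payloaders named pay0, pay1, ... as a single RTSP media, which is how the video and audio end up in one stream.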

Instructions

As the web front end has not been updated to support RTSP configuration, it is recommended to follow the steps below (a combined example configuration is shown after the list).

  1. Enable Fast Network Camera from the web interface first.
  2. Configure camera resolution, frame rate, etc. from the web interface.
  3. To enable RTSP mode, add PROTO="rtsp" to /data/etc/streameye.conf.
  4. To specify which video device to use, add VIDEO_DEV="/dev/video0" to /data/etc/streameye.conf. If VIDEO_DEV is not specified, it defaults to /dev/video0.
  5. To enable audio, add AUDIO_DEV="hw:1,0" to /data/etc/streameye.conf. See the Audio section below for more information.
  6. To specify the video bit rate, add bitrate 4000000 to /data/etc/raspimjpeg.conf.
  7. To specify the audio bit rate and number of audio channels, add audio_bitrate 16000 and audio_channels 2 to /data/etc/raspimjpeg.conf.
  8. Reboot.
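
Putting steps 3 to 7 together, the two configuration files would gain lines like these (the values are just examples):

  # /data/etc/streameye.conf
  PROTO="rtsp"
  VIDEO_DEV="/dev/video0"
  AUDIO_DEV="hw:1,0"

  # /data/etc/raspimjpeg.conf
  bitrate 4000000
  audio_bitrate 16000
  audio_channels 2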

Video

The V4L2 interface allows cameras to be configured for different light conditions, but the parameters can differ considerably between cameras. Currently, the configuration specified in /data/etc/raspimjpeg.conf is applied to the RPi Cam only. Other cameras have to be configured manually, for example via the /data/etc/userinit.sh script.

The commands below list all the supported configuration items for a given camera.

v4l2-ctl -d /dev/video0 --all
v4l2-ctl -d /dev/video0 --log-status

The V4L2 camera controls are documented here: https://hverkuil.home.xs4all.nl/spec/uapi/v4l/extended-controls.html
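
For example, to apply a setting manually from /data/etc/userinit.sh (the control name and value here are hypothetical; use whatever the commands above report for your camera):

  v4l2-ctl -d /dev/video0 --set-ctrl=brightness=128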

Audio

An ALSA audio device is addressed in the format hw:<card>,<device>. To find your device mapping, run arecord -l.

Example output for hw:1,0 mapping:

**** List of CAPTURE Hardware Devices ****
card 1: C920 [HD Pro Webcam C920], device 0: USB Audio [USB Audio]
  Subdevices: 0/1
  Subdevice #0: subdevice #0

The ALSA device may need to be configured, which can be done with alsamixer -c 1 (for card number 1).

To configure ALSA from a script like /data/etc/userinit.sh, the commands below may be useful (a combined sketch follows the list):

  • Print out card 1 microphone status: amixer -c 1 sget 'Mic'
  • Enable card 1 microphone: amixer -c 1 sset 'Mic' cap
  • Set card 1 microphone gain by percentage: amixer -c 1 sset 'Mic' 50%
  • Alternatively, set card 1 microphone gain by dB: amixer -c 1 sset 'Mic' 50dB
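
Combined into a minimal /data/etc/userinit.sh sketch (assuming card 1 and a control named 'Mic'; adjust both to match your hardware):

  #!/bin/sh
  # enable capture on the card 1 microphone and set its gain
  amixer -c 1 sset 'Mic' cap
  amixer -c 1 sset 'Mic' 75%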

If you have microphone problems, you may want to make a test recording: arecord -D hw:1,0 -f DAT -r 16000 /tmp/my_record.wav

Monitoring

When running with an RPi Cam, H264 video encoding and decoding are done on the GPU, which requires a lot of memory. Make sure you have enough memory allocated to the GPU; the amount required depends on the video resolution.

To get GPU memory status, run: vcgencmd get_mem malloc ; vcgencmd get_mem reloc
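
GPU memory is normally allocated via the gpu_mem setting in the Raspberry Pi's config.txt on the boot partition (the value below is only an example; pick one that suits your resolution):

  gpu_mem=128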

CPU load and system memory usage are worth monitoring, as H264 video and AAC audio encoding can be quite taxing on both CPU and memory. Make sure the CPU load stays under 70% during intended use, and that there is enough free system memory to avoid triggering the Out-of-Memory (OOM) killer. Lower the video resolution if you need to reduce CPU / memory usage.
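
The usual tools are enough for keeping an eye on both, e.g.:

  top       # per-process CPU load
  free -m   # system memory usage, in MB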

Known Issues

I do not yet know the cause of the issues below.

  1. Some RTSP clients, if they are the first client to connect to the GStreamer RTSP server, cause various issues for subsequent clients, e.g. inability to connect, loss of audio, or laggy video. To work around this, the streameye.sh script uses iptables to temporarily block the RTSP port, making sure our RTSP-to-MJPEG client is always the first to connect (see the sketch after this list).

  2. The services must be started in the right order for the RTSP stream to work: test-launch first, then the RTSP-to-MJPEG converter and StreamEye, and finally motionEye.

  3. Stream latency accumulates over time.
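
For issue 1, the sketch below illustrates the idea behind the workaround (a simplified illustration assuming RTSP on TCP port 554; the actual streameye.sh logic may differ):

  # block external RTSP clients while the local converter connects
  iptables -I INPUT -p tcp --dport 554 -j REJECT
  # ... start the local RTSP-to-MJPEG client here ...
  # then re-open the port for everyone else
  iptables -D INPUT -p tcp --dport 554 -j REJECT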

@jasaw requested a review from @ccrisan on January 3, 2019 03:56
@pacman8521 commented:

I have configured my Raspberry Pi 3 with motionEye and am ready to begin setting up RTSP. Use me as a 3rd-party tester... Tony

@jasaw (Collaborator, Author) commented Jan 3, 2019

@pacman8521 If you are interested in testing this, try out my pre-built images here: https://github.com/jasaw/motioneyeos/releases/tag/rtsp-fnc-dev20190321
I suggest you try this on a sacrificial microSD card so you don't need to rebuild your system if it doesn't work.

@popoviciri (Contributor) commented:

Just got this hooked up on an RPi Zero W. I have it set at 800x600, 25 fps, and it works great!! The server is an RPi 3 Model B running motionEye 0.39.3 & motion 4.2.2.
The camera has a CPU load of ~50% and the server is recording at 25 fps, with CPU almost at 100% (understandable). I do plan to run motionEye on an Intel Core i7 machine, so recording should be a breeze. I have 6 cameras ready to be set up and I'm thinking of doing it with this beta.
Is there a chance to get this reviewed and into motionEyeOS? Anything you'd like me to do? I can provide logs or other info, just let me know.
cheers! radu

@jasaw (Collaborator, Author) commented Mar 26, 2019

I recently bought this $2 USB microphone to test the audio stream, and it seems to work quite well, though it is mono only.
[photo: c099 USB microphone]

@jasaw (Collaborator, Author) commented Apr 12, 2019

@ccrisan Have you had a chance to look into this PR yet? Any thoughts? I've been running it for weeks, and it seems to work fine from my limited testing.

@ccrisan (Collaborator) commented Apr 14, 2019

@jasaw I am really sorry, but I haven't been able to find enough time to try this PR of yours. Let's get the new (pre)release done first, making sure that all users have the new partition in place and are happily running Motion 4.x with motionEye 0.40.

Then this is one of the pending features that will make its way into motionEyeOS.

@ccrisan (Collaborator) commented May 10, 2019

@jasaw this work of yours is amazing. I took the time to test it, and it just works, and it works very well. Let's have it merged for the next release.

@MACscr commented Jun 29, 2020

Am I reading this right that this has now been merged, and audio should now be supported for such RTSP streams? I have both an H264 and an AAC RTSP stream from my cameras.

@jasaw (Collaborator, Author) commented Jun 29, 2020

@MACscr Yes, we now have experimental support for a video & audio RTSP stream in Fast Network Camera mode. The web interface does not expose audio controls at the moment because no one has tested audio except me.
Please note that audio is only supported for Raspberry Pi cameras running in Fast Network Camera mode (provided you have an audio device that is supported by Raspbian), i.e. the Raspberry Pi camera generates the RTSP stream.

The work in this PR has been superseded by the PR below, but the audio instructions in this PR still apply.
#2126

@MACscr commented Jun 29, 2020

@MACscr Yes, we now have experimental support for a video & audio RTSP stream in Fast Network Camera mode. The web interface does not expose audio controls at the moment because no one has tested audio except me.
Please note that audio is only supported for Raspberry Pi cameras running in Fast Network Camera mode (provided you have an audio device that is supported by Raspbian), i.e. the Raspberry Pi camera generates the RTSP stream.

Oh, so it wouldn't work with a third-party camera (Yi Home 1080p with custom firmware) that provides the two RTSP streams (H264 and AAC)? Why would the source device matter?

@jasaw (Collaborator, Author) commented Jun 29, 2020

@MACscr This PR is about making the Raspberry Pi capable of generating an RTSP stream that carries both video and audio.

Since you already have cameras that generate a video & audio stream, I believe what you are looking for is audio support on the receiving end, which requires audio support to be added to the motion software. I haven't had any luck convincing the motion developers to support audio. :-( Lots of people are asking for audio support too; this is one of the requests:
Motion-Project/motion#770

This is motion's FAQ entry regarding audio:
https://github.com/Motion-Project/motion/wiki/FAQ#q--does--will-motion-support-audio-recording

@kwinz commented Sep 26, 2020

So RTSP streams are in the May 2019 dev branch.
Has this since propagated to the latest release (from a year ago, September 2019)?

@starbasessd commented Sep 26, 2020 via email

@kwinz commented Sep 26, 2020

Thanks. So just to clarify: I can't use the release version, I have to use dev20200907?
What network source string are you opening in VLC? rtsp://IPADDRESS:8081/h264 ?

My /data/etc/streameye.conf is

PROTO="rtsp"
PORT="8081"
RTSP_PORT="554"
VIDEO_DEV="/dev/video0"

@starbasessd commented Sep 26, 2020 via email

@kwinz commented Sep 26, 2020

Thanks again. Right now I can't get a connection in VLC with the URI scheme you posted, even though I think I have all the configuration in place according to #1764 (comment).
I am testing without audio for now.

VIDEO_DEV="/dev/video0" is the USB device that I have used so far for the old MJPEG stream; I verified that with v4l2-ctl -d /dev/video0 --all. I want to move to H264 because the MJPEG stream takes too much storage space at approximately 10 Mbit/s.

I am not getting any web interface output, and I see lots of errors like:

[    3.742995] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup
[    4.711802] usb 1-1.4: new high-speed USB device number 4 using dwc_otg
[    4.843692] usb 1-1.4: New USB device found, idVendor=534d, idProduct=0021, bcdDevice= 1.21
[    4.843708] usb 1-1.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[    4.843717] usb 1-1.4: Product: AV TO USB2.0
[    4.843726] usb 1-1.4: Manufacturer: MACROSIL
[    4.867356] uvcvideo: Found UVC 1.00 device AV TO USB2.0 (534d:0021)
[    4.868220] uvcvideo: UVC non compliance - GET_DEF(PROBE) not supported. Enabling workaround.
[    4.869151] uvcvideo 1-1.4:1.0: Entity type for entity Processing 2 was not initialized!
[    4.869168] uvcvideo 1-1.4:1.0: Entity type for entity Camera 1 was not initialized!
[    4.869790] usbcore: registered new interface driver uvcvideo
[    4.869797] USB Video Class driver (1.1.1)
[    4.903302] usbcore: registered new interface driver snd-usb-audio
[1:ml1:FrontDoor] [ERR] [VID] v4l2_capture: VIDIOC_QBUF: No such device
[1:ml1:FrontDoor] [ERR] [ALL] mlp_capture: Video device fatal error - Closing video device
[1:ml1:FrontDoor] [WRN] [ALL] mlp_retry: Retrying until successful connection with camera
curl: (56) Recv failure: Connection reset by peer
2020-09-26 17:42:11: [motioneye]  WARNING: 400 GET /picture/1/current (192.168.129.145) 5.31ms
2020-09-26 17:42:11: [motioneye]    ERROR: HTTP 400: Bad Request (unknown operation)

But I guess that's normal, since RTSP mode is expected to break the MJPEG stream, the web interface, and motion capture to the SMB share, if I understand correctly.

I will try the newest dev release. Right now I am using the release from https://github.com/ccrisan/motioneyeos/wiki/Supported-Devices, which links to the 2019 version.

@starbasessd commented Sep 26, 2020 via email

@jasaw (Collaborator, Author) commented Sep 27, 2020

@kwinz The RTSP FNC implementation here has been replaced by a more efficient method that does all the processing and encoding in the GPU, which means even an RPi Zero W can do 1080p at 30 fps, though only the RPi CSI camera is supported: #2126

This should be in the latest release.

To enable RTSP mode on your FNC, go to its web interface. Under the "Video Streaming" section, change "Streaming Protocol" to RTSP. In RTSP mode, you get two video streams from your FNC: one RTSP (high resolution, can be used for pass-through recording) and one MJPEG (meant to be low resolution and low frame rate, for motion detection and the live stream on the web interface). All these parameters are configurable.

On the motionEye server that receives the RTSP stream, you need to add an RTSP camera with rtsp://IPADDRESS:554/h264, or simply rtsp://IPADDRESS/h264.

@starbasessd commented Sep 27, 2020 via email

@starbasessd commented Sep 27, 2020 via email

@jasaw (Collaborator, Author) commented Sep 28, 2020

@ccrisan can we please give @starbasessd permission to update the wiki here? He has done a fantastic job of supporting motionEyeOS.

@starbasessd commented Sep 28, 2020 via email

@ccrisan (Collaborator) commented Sep 30, 2020

Done! @starbasessd you've been invited to be a collaborator on this project. Thank you for your help!

@shorted-neuron commented:

@jasaw thanks for your great work on enabling H264 for the RasPi CSI camera in #1764! I was quite pleased to find it.

I do wonder if, in some configurations, it would be possible to have this method enabled while still feeding frames to "motion". In my case, I have a bunch of Raspberry Pi 2s. They use motion to detect motion locally and record it to the SD card. These are my "just in case" fallback for when something bad happens at my shop while the main system, ZoneMinder, is down. That's the part where I could get a much higher frame rate on the noisy WiFi network if I could ship it as H264. My Pi 2 setups only run at about 30% CPU with motion and streaming. When I turn on FNC and H264 RTSP, they run at 3% CPU, so it seems like they'd be able to take the "chain" that you made and still run motion after the fact with CPU to spare.

I'm not a very good developer, but I'll try to hack around and maybe submit a pull request if I get it to work. Pointers would be handy if you have the time, or if you want to partner on it.

Thanks!

@jasaw (Collaborator, Author) commented Apr 29, 2021

@shorted-neuron You can certainly split the H264 stream and feed one copy into motion, but be careful if you use OMX to re-encode the video stream on the motion side, because the RPi GPU may not be fast enough to handle one additional encode.

Let me give you some background on this RTSP FNC implementation:
The GStreamer approach in this PR has been replaced with v4l2multi_stream_mmal, a program I wrote that sets up the entire video pipeline in the GPU; it produces an encoded H264 video stream and an MJPEG stream. This is the only way to achieve a high frame rate with minimal CPU load.

The video pipeline looks like this:

Pi Camera Preview Port ---> ISP (resizer) ---> MJPEG Encoder ---> File (stdout)
Pi Camera Video Port ---> H264 Encoder ---> V4L2 Loopback Device

The lower-resolution MJPEG stream is used by streameye to provide the live stream in the web browser, and optionally by motion for motion detection, because decoding JPEG is less CPU intensive than decoding H264.

The V4L2 loopback device is read by the v4l2rtspserver RTSP server, which provides the RTSP stream over IP.

v4l2multi_stream_mmal can be found here: https://github.com/jasaw/v4l2tools/tree/mmal-multi-stream

To add a second H264 stream, you'll need to add a splitter and sink the second H264 stream to another V4L2 loopback device, like so:

                                                    +---> V4L2 Loopback Device (RTSP server)
Pi Camera Video Port ---> H264 Encoder --- Splitter +
                                                    +---> V4L2 Loopback Device (motion)

You'll need to use the v4l2loopback driver to create the second V4L2 loopback device. I'm not sure how, so you might need to investigate this first.
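
As a possible starting point (untested here; the module parameters are those documented by the v4l2loopback project), the driver can create multiple devices at load time:

  modprobe v4l2loopback devices=2 video_nr=10,11
  # creates /dev/video10 and /dev/video11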

Look at the streameye.sh file for more info on how the various programs are hooked together.
https://github.com/ccrisan/motioneyeos/blob/dev/board/raspberrypi/overlay/usr/bin/streameye.sh
