
Software writing

Q-engineering edited this page Sep 4, 2022 · 5 revisions


The software of the YoloCam is written in pure C++. All sources are available, so you can modify the application to your needs if you like. Only libYoloCamRpi.so, with its detection of the 80 COCO labels, is write-protected. In other words, you cannot change the software to detect other objects. In the near future, an additional extension will make this possible.

Tips before starting.

  • Before writing any software, disable the Alive function by setting Alive = 0 in the /mnt/WRdisk/settings.txt file.
    Otherwise, your Raspberry Pi will automatically reboot every hour because Alive no longer finds a running YoloCam.
  • Remove the overlay if it is set. With the overlay function active, all your hard work is lost after a reboot.
  • Expand the swap memory to 2 GB. The application needs a lot of memory to compile.
  • Copy your final result to the folder /usr/local/bin. During boot, the YoloCam (and Alive) apps are started from this location.
  • When done, restore the initial configuration by reversing the steps above.
sudo nano /etc/dphys-swapfile        # set CONF_SWAPSIZE=2048
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start


Installed software

  • OpenCV 4.5.2. The lite version.
  • ncnn. A deep learning framework. See our website.
  • Code::Blocks. A C++ IDE and compiler shell.

Synchronization.

GPIO version.

The GPIO version works with a single process. Every algorithm is executed in sequence in one large while-loop. Thanks to the LCCV camera interface, there are no time delays: the most recent frame is always the one sent to the deep learning model.

Email version.

The email version works with multiple processes. They must be tuned carefully, or else minutes of delay in the frame sequence will result.
The camera frame rate is defined in /etc/rc.local. It is set to 15 FPS, or 66.6 mSec per frame. Your main loop must process the frames at the same or a higher speed to prevent a buffer overflow. Since the HLS stream buffers minutes of video, the overflow happens only after a while; meanwhile, your recognition software lags further and further behind.
It is not possible to process 15 FPS with the Yolo deep learning model; a few frames per second is the maximum. This forces you to skip frames.
The number of frames to skip is the inference time of the Yolo model divided by 66.6 mSec. You find the routine in main.cpp at line 300.

        Inference = std::chrono::duration_cast<std::chrono::milliseconds>(Tstop - Tstart).count();

        //meanwhile there are Skip images stored in the HLS buffer
        //assuming FFmpeg runs at 15 FPS, each frame is 1/15 = 66.66 mSec
        if(Slow > 0.0){
            //give a margin of 50% extra Skip to be sure when things get very hot (long waiting times)
            //(1.5*Inference)/66.66 = Inference/44.44
            Skip = ceil(Inference/44.44);
        }
        else{
            //when the temperature is low, you can use the 'normal' 66.66 mSec interval
            Skip = ceil(Inference/66.66);
        }
        //flush the buffer
        for(i=0; i<Skip; i++) cap.read(frame);

The best way to check your synchronization is by looking at the CPU usage monitor. An occasional dip in the graph tells you the while-loop has to wait for the next frame, which means you are in sync with the stream. Usually, there is a 5-second delay in the camera images.
A permanent reading of 80% or more tells you every frame request is granted immediately; in other words, there is always a frame waiting. Chances are your time delay is increasing, and the frame buffers will eventually overflow. Most likely, your lag behind the camera images grows steadily.


To be clear, this only applies to the email version. The GPIO version runs continuously at 80% or more.
