
Motion binary_sensor not working if Shinobi primary engine is not Pixel Array #61

energywave opened this issue Dec 12, 2023 · 8 comments


@energywave

I have several cameras in Shinobi. Some of them use the integrated Pixel Array motion detection as the primary engine. Others use "TensorflowCoral Connected" (the provided Coral Tensorflow plugin) and no Pixel Array at all.
The integration correctly updates the motion binary_sensor for each camera that uses Pixel Array motion detection, according to the motion status found in Shinobi.
For cameras that use the Coral Tensorflow plugin, however, the integration doesn't change the motion binary_sensor at all.

The result is that the motion sensor is unusable on those cameras and unreliable overall, since its behavior depends on the configuration.

I expect the motion binary_sensor to be evaluated for whatever kind of motion detection you've configured in Shinobi.
In short: if Shinobi triggers on motion (and records a watch-only camera), then the binary_sensor in Home Assistant must be triggered.

@elad-bar
Owner

When using another engine, Shinobi sends an object-detection event, not a motion event, so the integration reports it as object detection.
If you would like to change that behavior, you should ask the developer of Shinobi to change it, but it wouldn't be right: object detection looks for an object in a single image, while Pixel Array looks for differences between 3 images (frames), which is actual motion detection.
Hope it makes sense.
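
To make the distinction concrete, here is a minimal sketch (illustrative only, not Shinobi's actual code; frames are assumed to be grayscale numpy arrays, and the detection model API is hypothetical):

import numpy as np

def pixel_array_motion(prev_frame, curr_frame, pixel_threshold=25, area_threshold=0.01):
    # Motion detection: compare two frames and report motion when enough pixels changed.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed_ratio = (diff > pixel_threshold).mean()
    return changed_ratio > area_threshold

def detect_objects(frame, model):
    # Object detection: look at a single frame and ask "is an object here?".
    # It says nothing about whether anything moved since the previous frame.
    return model.detect(frame)  # hypothetical model API, e.g. [{"tag": "person", ...}]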

@energywave
Author

energywave commented Dec 12, 2023

In the meantime I was reading the previous issue (my fault: I didn't search closed issues before opening mine...)
I didn't get exactly what data arrives from Shinobi through the API, but I get your point.
My point of view, however, is that there should be something an automation can trigger on at the same moment Shinobi records a video, whatever configuration is set on the cam.
Not knowing what's going on behind the scenes, though, I really don't know if that's possible or not.
A possible "workaround" that would also be a nice feature could be a sensor (not binary) that assumes the state of whatever object was found. For example "none", then "person" or "car", then "none" again, something like this.
That could be a good improvement, and people could easily use it to trigger their automations based on object detection rather than "pure pixel motion".
Maybe the same could be done for face detection (but I never tried that; I'm using a Coral USB, no GPU).
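
As a rough sketch of what I mean (a hypothetical entity, not the integration's actual code, assuming the detection event carries the detected object's tag):

from homeassistant.components.sensor import SensorEntity

class ShinobiDetectedObjectSensor(SensorEntity):
    # Hypothetical sensor whose state is the last detected object tag.
    _attr_name = "Detected object"

    def __init__(self):
        self._attr_native_value = "none"

    def handle_detector_event(self, event_data: dict):
        # "details.matrices[].tag" is the field carried by Shinobi's detection payload.
        matrices = event_data.get("details", {}).get("matrices", [])
        self._attr_native_value = matrices[0]["tag"] if matrices else "none"
        self.schedule_update_ha_state()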

If I have time I'll try to enable debug on the integration, go in front of a pure pixel-motion camera and then an object-detection cam, and see what changes in the JSON object received, to further discuss this matter.

Lastly, take into account that a user can even offload motion detection entirely to the camera, by disabling Pixel Array and plugins and making the camera push movement information to Shinobi via ONVIF events, mail or FTP.
That should also get caught by the integration...

Anyway, we can discuss this better once I'm more informed about what exactly happens.
In the meantime... thank you for the wonderful integration!!!

@elad-bar
Owner

There is support for object detection, almost at the bottom of the readme:
Any Shinobi Video NVR event of type detector_trigger will also be sent as an HA event with the same payload.

The event in HA will be called shinobi/object, shinobi/face, etc.
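
For example, a custom component could subscribe to those events like this (a sketch; only the event names above come from the readme, the rest is illustrative):

from homeassistant.core import Event, HomeAssistant

def setup_object_listener(hass: HomeAssistant) -> None:
    # React to object-detection events forwarded by the integration.
    def handle_object(event: Event) -> None:
        details = event.data.get("details", {})
        matrices = details.get("matrices", [])
        tag = matrices[0].get("tag") if matrices else None
        hass.bus.fire("shinobi_object_seen", {"tag": tag})  # hypothetical follow-up event

    hass.bus.async_listen("shinobi/object", handle_object)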

Regarding recording events:
If the camera changes its state from streaming to recording during that time, you can get it from the per-camera status sensor.
If not, we'll need to check how to listen to that event.

@energywave
Author

Yes, I've fully read the docs and I've seen that you're passing events through; that's a good way to implement custom or non-standard logic.
So you're saying that the camera entity changes its state from streaming to recording while it records after being triggered by motion? Did I understand correctly? If so, that could be a good workaround, but...

I wanted to express my point of view, for the good of this wonderful integration, hoping it makes sense to you too.

  • I assume that people who use Shinobi do it mainly for security reasons, not for detecting when a fridge or chair is in view (and those who do are really a minority).
  • I also assume that whoever uses object detection does it to have a reliable way of knowing when to record (or to mark an activity on the recording), removing the false positives typical of pixel analysis (when the light changes, for example, or when wind moves plants and light objects, or a car passes by at night, lighting up a wall).
    It's a trend in security cameras as well to use AI to identify real motion with no false positives (I use it myself on my latest Reolink cam).
  • But when you have a cam that doesn't have AI onboard, or you want more sophisticated logic, you can use Shinobi with plugins; in my case I use a Coral USB to offload that kind of work from the CPU, and it's working quite well.
  • Then we have the region editor, which we can use to select where a detected object should count as a reason to trigger the cam recording, and
  • We have event filtering to define what kind of object should trigger the camera recording. (I'm using it to let "person" objects trigger the recording and exclude everything else; again, I'm not interested in detecting fridges or other objects...)
  • The final result is that the cam records when, and only when, a REAL motion you're interested in is happening.

So, in my point of view, the motion binary_sensor should be triggered by "pixel" motion as well as by object detection. It's just a different way to define "a motion".

I've analyzed, via debug logging, what events arrive from the websocket when a Pixel Array movement happens (formatted here for easy reading):

{
    "details": {
        "confidence": 46,
        "imgHeight": "480",
        "imgWidth": "640",
        "matrices": [
            {
                "confidence": 26.5844078063965,
                "height": 478,
                "tag": "Cameretta",
                "width": 520,
                "x": 56,
                "y": 0
            }
        ],
        "name": "multipleRegions",
        "plug": "built-in",
        "reason": "motion"
    },
    "doObjectDetection": false,
    "f": "detector_trigger",
    "id": "sfjqK8WeoX80",
    "ke": "kHnmx2fyqO"
}

And here is what comes in when an object is detected:

{
    "details": {
        "imgHeight": 480,
        "imgWidth": 640,
        "matrices": [
            {
                "confidence": 0.79,
                "height": 207,
                "id": 0,
                "isZombie": false,
                "tag": "person",
                "width": 119,
                "x": 325,
                "y": 271
            }
        ],
        "name": "Tensorflow",
        "plug": "TensorflowCoral",
        "reason": "object",
        "time": 26
    },
    "doObjectDetection": false,
    "f": "detector_trigger",
    "id": "FCKmxXlYiR9022",
    "ke": "kHnmx2fyqO"
}

They have a lot in common.
When such an event arrives, Shinobi has triggered the recording.
I've verified that this event arrives only as configured in Shinobi:
I've set it to trigger only when the object moves (Check for Motion First = Yes in the Object Detection section), the object must be in a region (Require Object to be in Region = Yes in the Object Detection section), and I've filtered out all objects but "person".
In the Home Assistant log, with debug enabled, I only see events that satisfy those configured rules; no other type of detection shows up.

So, ultimately, I think it's a good idea to use every kind of movement or object detection to drive the integration's motion binary_sensor.
It's the most intuitive interpretation, and I think it's what every user expects; at least I was expecting it to work that way. Object detection = a smarter way to detect the motion you're really interested in.
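
In code terms, the change I'm imagining is roughly this (an illustrative sketch, not the integration's actual handler; only the payload fields come from the two events above):

MOTION_REASONS = {"motion", "object"}  # treat object detection as motion too

def handle_detector_trigger(payload: dict, set_motion) -> None:
    # Both payloads above share f == "detector_trigger"; only details.reason differs.
    if payload.get("f") != "detector_trigger":
        return
    if payload.get("details", {}).get("reason") in MOTION_REASONS:
        set_motion(payload["id"], True)  # "id" is the monitor id from the payload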

What do you think? Can you agree with my point of view?

@elad-bar
Owner

Sorry for the long time it took to respond.
Based on what you're writing, I think we need to suggest that the Shinobi developer extend motion detection for the object detection / face detection case. It would be easy to copy the behaviour of Pixel Array by following these rules to trigger a motion event:

Frame 1                                        Frame 2
Not detected                                   Detected
Detected                                       Not detected
Detected, tag "cat"                            Detected, tag "dog"
Detected at x=1, y=1, height=100, width=100    Detected at x=2, y=1, height=100, width=100
Detected at x=1, y=1, height=100, width=100    Detected at x=1, y=1, height=100, width=200

wdyt?

@elad-bar
Owner

elad-bar commented Jan 1, 2024

Thought about it more: I can create a customAutoLoad script in Shinobi that listens to Face / Object detection events,
hashes the result of the prediction (tag, location and size), and stores it per camera in memory.
Once a new event comes in for the same camera, it checks whether the hash is different;
if it is, it triggers a motion detection event.
I will also add a configuration option for how many hashes to store per camera, so if you, for instance, would like to trigger the motion event only after 3 changes, you will be able to set it like that, while I might want it only after 4 or 2...
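
The core of it, sketched in Python for readability (a real customAutoLoad module would be JavaScript running inside Shinobi):

import hashlib
from collections import deque

HASHES_PER_CAMERA = 3  # configurable: how many recent hashes to keep per camera

_history: dict[str, deque] = {}  # monitor id -> recent detection hashes

def detection_hash(matrix: dict) -> str:
    # Hash the prediction's tag, location and size.
    key = f"{matrix['tag']}|{matrix['x']}|{matrix['y']}|{matrix['width']}|{matrix['height']}"
    return hashlib.sha1(key.encode()).hexdigest()

def should_trigger_motion(monitor_id: str, matrix: dict) -> bool:
    # Trigger a motion event only when the detection differs from the stored ones.
    recent = _history.setdefault(monitor_id, deque(maxlen=HASHES_PER_CAMERA))
    h = detection_hash(matrix)
    changed = h not in recent
    recent.append(h)
    return changed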

I would like to raise it first in the Shinobi Discord channel. Before I do, will it address your need?

@energywave
Author

Hey @elad-bar, thank you so much for your time! Sorry for being late myself too (New Year's Eve was in between ;)
I didn't understand your grid in the previous comment, and I don't even understand why you think a script in Shinobi is needed.
First of all, I just want to clarify that I'm not interested in satisfying my own need (I can use the events without problems). I was just thinking about streamlining the experience so that users get what they may expect (like I was expecting), just that. We're discussing the good future of this (beautiful!) integration :)

I'm sure I'm missing something, but...
Looking at the two JSON objects I captured, one for a Pixel Array movement (the first) and one for an object detection (the second), they're nearly identical. The real difference is that the first contains "reason": "motion" while the second contains "reason": "object". You're triggering the motion binary_sensor for the first one (reason: motion) but not for the second one (reason: object).
Isn't it as easy as not filtering on the reason field and triggering motion whatever reason you get? What am I missing? Why are you thinking of hashing the content on the Shinobi side? To avoid firing movement when the same detection repeats? In my experience, every time I saw a JSON object like that I expected motion to trigger, because there was a human moving in front of the camera, and I didn't see multiple events firing for the same object.
Maybe there could be a timed filter (something like 10 seconds, maybe configurable in the integration options dialog...): when a detection arrives (whatever the reason), you set the motion binary_sensor to on and start a timer with the configured duration. If another detection arrives, you restart the timer. When the timer times out, you turn the binary_sensor off.
Could that work in your opinion?
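
A minimal sketch of that timer logic (plain asyncio for illustration; the integration would presumably use HA's own scheduling helpers):

import asyncio

OFF_DELAY = 10  # seconds; ideally configurable in the integration options

class MotionDebouncer:
    # Turn motion on at every detection; off after OFF_DELAY with no new detections.

    def __init__(self, set_state):
        self._set_state = set_state  # callback into the motion binary_sensor
        self._task: asyncio.Task | None = None

    def on_detection(self) -> None:
        self._set_state(True)
        if self._task is not None:
            self._task.cancel()  # a new detection restarts the timer
        self._task = asyncio.create_task(self._turn_off_later())  # needs a running loop

    async def _turn_off_later(self) -> None:
        await asyncio.sleep(OFF_DELAY)
        self._set_state(False)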
I'll try to see the code but I'm a .net programmer, I have a very bery basic python knowledge... I have to study for true, one day or another...

@elad-bar
Owner

elad-bar commented Jan 5, 2024

I don't think you are trying to satisfy only your own need; I think it's a super valid need.
I even tried to do by myself what you are trying to achieve, but the fact is that Shinobi doesn't support a trivial piece of functionality: it allows you to define object detection as motion detection, but it sends a different event.
Thinking about it more, it makes sense: object / face detection sends one frame for detection and looks for an object in it. If one is found, great, but that is not motion; motion is when you compare 2 or more frames and there is a difference between them.

Regarding the matrix of use cases above: it's about identifying that there was motion based on the previous image processing vs. the current one, using only the JSON of the event. That avoids additional image processing, which means less impact on performance (instead of running 2 engines / models on the image, Pixel Array and object detection).

Although your need is super valid, the approach of treating object detection as motion doesn't sound right to me, for 2 reasons:

  • It would make the functionality HA-only, while the need is for anyone who works with Shinobi to get an indication of motion when the main engine is not Pixel Array.
  • HA gets the indication only when something was identified, while Shinobi runs the processing for every frame, meaning that running the "script" from within Shinobi will be more accurate.

Eventually, as I see it, you will get the same result in terms of HA, but it will serve many more Shinobi users, and maybe later I will manage to convince the Shinobi developer to add it as default behavior, without an external script, when working with an external motion detector.

I started from .NET many years ago; since I got into the IoT world, any programming language is welcome...
