[SlidingFrameGenerator] Sequence_time not working #32

Open
TomSeestern opened this issue Jul 24, 2020 · 6 comments
Labels: bug (Something isn't working)

@TomSeestern

System Information

  • your operating system: Ubuntu 18.04
  • your python version (python --version): Python 3.6.9
  • keras video generator version (pip freeze | grep keras-video-generators): keras-video-generators==1.0.14

Describe the bug
sequence_time always defaults to the full video length.
See the Colab notebook for example code:
Colab Notebook

@metal3d (Owner) commented Jul 24, 2020

Hello,

Sorry, but I don't see the problem in your example notebook: there are 5 frames in the sequence as expected, and the batch size is 5 as you defined it.

Can you explain where I'm missing the problem?

@TomSeestern (Author)

Hey there!

Thanks for the fast reply!
Maybe I misunderstood how the generator works. I expected the first batch to contain only images from the first sequence_time window.

So in my example I expected the first batch to contain 5 images from the range frame 0 to frame 15 (sequence_time=0.5 at 30 fps).

Instead I got a batch with 5 frames spanning frame 0 to frame 110, like the VideoFrameGenerator without a sliding window does(?).

Is that not how it is supposed to work? :)
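To make the expectation concrete, here is a minimal sketch of the kind of setup being discussed (the class list, glob pattern, and target shape are placeholder assumptions; parameter names follow the keras-video-generators README and may need adjusting to your setup):

```python
# Minimal sketch of the setup discussed above; paths, classes and shapes
# are placeholder assumptions, not values from the notebook.
from keras_video import SlidingFrameGenerator

gen = SlidingFrameGenerator(
    sequence_time=0.5,                        # each sequence should cover 0.5 s of video
    classes=['some_class'],                   # hypothetical class list
    glob_pattern='videos/{classname}/*.mp4',  # hypothetical path layout
    nb_frames=5,
    batch_size=5,
    target_shape=(224, 224),
)

# For a 30 fps clip, the expectation described above is that the first sequence
# draws its 5 frames from roughly frames 0-15 (0.5 s * 30 fps) and then slides,
# rather than spreading the 5 frames across the whole clip.
```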

@metal3d (Owner) commented Jul 24, 2020

OK, the example with the basketball players seems to be different from the last one... I see the same sequences there as you mention. And no, it's not expected to produce this; you should get a sliding window as you say.

That's weird: none of the tests I did had that problem, so I will run some tests and check what happens.

Thanks a lot for the issue report ;)

@metal3d metal3d self-assigned this Jul 24, 2020
@metal3d metal3d added the bug Something isn't working label Jul 24, 2020
@Fab16BSB (Contributor)

OK, that post confirms why I always got overfitting when I tried to train with the sliding generator to improve performance: the generator always takes the same images to build its sequences. If I have time I will try to look at the code.

@TomSeestern if you need to try the video generator: I used it to train on x images extracted from videos, and then I predict continuous videos sequence by sequence with successive images. The results are not the best, but it works fine.

@Fab16BSB (Contributor) commented Dec 18, 2021

I am not a Python OOP expert. I looked at the code and I don't understand why, but the number of frames is correct, with or without shuffle, on line 92: "frames": np.arange(i, i + stop_at)[::step][: self.nbframe].
But the problem seems to start at line 177 (using cache or not): in my case, all images of my batch seem to be the same with the sliding generator. I tried adding print((frames[0] == frames[1]).all()) before the return on line 192 and I got True as an answer. I tried commenting out the transformation on line 183: same result.

So I suppose the problem lives in the _get_frames method (generator.py file), because the calculation of step on line 403 doesn't use the sequence_time defined by the user. But I'm not sure!
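To make the suspected cause concrete, here is a small back-of-the-envelope sketch; the fps and clip length are hypothetical values chosen to roughly match the numbers reported in this thread, not values read from the library:

```python
# Hypothetical numbers to illustrate the suspected bug, not values from the library.
fps = 30
nb_frames = 5
sequence_time = 0.5

# Expected behaviour: the window spans only sequence_time seconds.
window = int(sequence_time * fps)               # 15 frames
step_expected = max(1, window // nb_frames)     # every 3rd frame -> frames 0..15

# Reported behaviour: the step appears to be computed over the whole clip.
total_frames = 140                              # hypothetical clip length
step_observed = max(1, total_frames // nb_frames)  # every 28th frame -> spans roughly frames 0..112
```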

@Fab16BSB (Contributor)

I propose a (not optimized) solution; I think it is correct, but I'm not sure.
I added an optional parameter to _get_frames to pass the defined sequence_time and to compute the step from it when it is not None and the sliding generator is chosen.
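Something along these lines is what that description suggests; this is a hedged, standalone sketch with a hypothetical function name and signature, not the actual patch or the library's real generator.py code:

```python
import cv2
import numpy as np

# Sketch only: a standalone illustration of the idea described above.
def get_frames(video_path, nbframe, shape, sequence_time=None):
    cap = cv2.VideoCapture(video_path)
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back if fps metadata is missing

    if sequence_time is not None:
        # Sliding-generator case: only span sequence_time seconds of the clip.
        frame_span = min(total_frames, int(sequence_time * fps))
    else:
        # Original behaviour: spread the frames over the whole clip.
        frame_span = total_frames

    step = max(1, frame_span // nbframe)

    frames = []
    for i in range(0, frame_span, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, shape))
    cap.release()
    return np.array(frames[:nbframe])
```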
