
Retraining LiteFlowNet #38

Open
jordigc2 opened this issue Aug 19, 2022 · 0 comments
Hello,

I have been working for a while on making DF-VO run on my own dataset, and I realized that the optical-flow results might improve if LiteFlowNet were retrained on data collected from the specific scenario I will be using the model in. To do this, I based my training code on the online-tuning section of your shared code; essentially, I am using the same loss functions.
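For reference, this is roughly the photometric term I am using, based on my reading of the online-tuning code (a sketch in PyTorch; the function names are mine and the exact constants may differ from your implementation):

```python
# Sketch of the photometric loss used during retraining.
# Function names are my own; constants follow the common
# Monodepth2-style SSIM + L1 weighting.
import torch
import torch.nn.functional as F

def ssim(x, y):
    """Simplified SSIM over 3x3 windows."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_x = F.avg_pool2d(x, 3, 1, padding=1)
    mu_y = F.avg_pool2d(y, 3, 1, padding=1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, padding=1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, padding=1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, padding=1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_loss(img0, img1_warped, alpha=0.85):
    """Weighted SSIM + L1 between the source image and the image
    reconstructed by warping with the predicted flow."""
    l1 = (img0 - img1_warped).abs().mean(1, keepdim=True)
    return (alpha * ssim(img0, img1_warped).mean(1, keepdim=True)
            + (1 - alpha) * l1)
```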

The input data are two consecutive images that, with a probability of 0.5, are augmented by jittering brightness, contrast, saturation, and a few other image parameters, so the model generalizes to different lighting conditions, similar to what is done in the Monodepth2 training (see the sketch below).
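Here is a minimal sketch of that augmentation, assuming the same jitter parameters must be applied to both frames so the photometric loss stays meaningful (`augment_pair` and the jitter ranges are my own):

```python
# Same random color jitter applied to BOTH frames of a pair,
# triggered with probability p, as in the Monodepth2 data loader.
import random
import torchvision.transforms.functional as TF

def augment_pair(img0, img1, p=0.5):
    """img0/img1: PIL images of two consecutive frames."""
    if random.random() < p:
        b = random.uniform(0.8, 1.2)   # brightness factor
        c = random.uniform(0.8, 1.2)   # contrast factor
        s = random.uniform(0.8, 1.2)   # saturation factor
        h = random.uniform(-0.1, 0.1)  # hue shift
        for fn, v in ((TF.adjust_brightness, b),
                      (TF.adjust_contrast, c),
                      (TF.adjust_saturation, s),
                      (TF.adjust_hue, h)):
            # identical parameters for both frames
            img0, img1 = fn(img0, v), fn(img1, v)
    return img0, img1
```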

The reason I would like to retrain LiteFlowNet on data recorded in my scenario is that the optical flow between similar images is not consistent or smooth, mostly when walls are quite close to the camera, as you can see in the two attached images with closer walls. Even running inference on the same image pair in reversed order, to obtain the forward and backward flow, produces slightly inconsistent outputs (I check this roughly as sketched below).
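For reference, this is roughly how I measure the forward/backward disagreement (a sketch; `backward_warp` is my own `grid_sample`-based helper, not taken from your code):

```python
# Forward/backward consistency: warp the backward flow to the
# forward frame and check that F_fwd(p) + F_bwd(p + F_fwd(p)) ~ 0.
import torch
import torch.nn.functional as F

def backward_warp(x, flow):
    """Sample x at pixel locations displaced by flow (B,2,H,W)."""
    b, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w),
                            indexing="ij")
    grid = torch.stack((xs, ys), 0).float().to(flow.device)  # (2,H,W)
    coords = grid.unsqueeze(0) + flow
    # normalize coordinates to [-1, 1] for grid_sample
    coords_x = 2 * coords[:, 0] / (w - 1) - 1
    coords_y = 2 * coords[:, 1] / (h - 1) - 1
    grid_n = torch.stack((coords_x, coords_y), dim=-1)  # (B,H,W,2)
    return F.grid_sample(x, grid_n, align_corners=True)

def fb_inconsistency(flow_fwd, flow_bwd):
    """Per-pixel inconsistency map; zero for perfectly consistent flow."""
    flow_bwd_warped = backward_warp(flow_bwd, flow_fwd)
    return (flow_fwd + flow_bwd_warped).norm(dim=1)  # (B,H,W)
```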

I also attach an image of how it performs when the walls are farther away; as you can see, the output flow is smoother and more consistent than when the walls are close.

The point of this issue is that I am now running my own training of LiteFlowNet; everything I described above was obtained with the pretrained UnLiteFlowNet model you provide. After some training iterations I can see that the loss decreases and that the re-projected images from Img0 to Img1 and vice versa improve, but the optical flow the model produces is just random noise that happens to fit the loss functions, as you can see in the attached image.
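For completeness, the attached flow images are colored with the standard OpenCV HSV encoding (hue = direction, value = magnitude), roughly like this (my own visualization helper, not from your code):

```python
# Standard HSV flow visualization, as in the OpenCV optical-flow tutorial.
import cv2
import numpy as np

def flow_to_color(flow):
    """flow: (H, W, 2) float32 array of per-pixel displacements."""
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2  # OpenCV hue range is 0-179
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```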

After testing several hyperparameters and reviewing the code, I ended up thinking that the way you retrained the model on the KITTI dataset with this approach may have included something that is missing from the DF-VO online-tuning. Do you have any suggestions as to what could cause this random output that still lowers the loss and makes all the re-projections fit the source images?
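My current, unconfirmed guess is that some regularization is missing, for example an edge-aware smoothness term like the one used in Monodepth2 and common in unsupervised flow training; a sketch of what I mean (this is my hypothesis, not something I found in your training code):

```python
# Hypothetical edge-aware smoothness regularizer: penalize flow
# gradients, but down-weight the penalty across strong image edges.
import torch

def edge_aware_smoothness(flow, img):
    """flow: (B,2,H,W); img: (B,3,H,W) in [0,1]."""
    flow_dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs()
    flow_dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs()
    img_dx = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean(1, keepdim=True)
    img_dy = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean(1, keepdim=True)
    # exp(-|dI|): smoothness is enforced less where the image has edges
    return ((flow_dx * torch.exp(-img_dx)).mean()
            + (flow_dy * torch.exp(-img_dy)).mean())
```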

I hope I have explained myself clearly enough, and that you can share some tips or advice from the training you did on the KITTI dataset.

Feel free to ask if any part of this issue needs more clarification.

Thank you again for taking the time to reply to issues, and for sharing this amazing work.

Jordi

IMAGES

FLOW with CLOSE WALLS-1 (flow-close-walls2)
FLOW with CLOSE WALLS-2 (flow-close-walls-1)
FLOW with FARTHER WALLS (flow-farther-walls)
FLOW after RETRAINING LiteFlowNet (flow-retrained-model)
