About CARLA-ImitationLearning Backward operation #1

Closed
xubo92 opened this issue Apr 9, 2018 · 4 comments

xubo92 commented Apr 9, 2018

Hello @mvpcom,
Thank you for your great code~
I am currently working on training the "CARLA-ImitationLearning" model and have learned a lot from your repository.
Now I am stuck on the loss backward operation with branches. I noticed that the paper author @felipecode told you to "Run all branches, but just back propagate on one (use a mask for that)". Did you figure out what that means?
Here is my understanding: there are five branches (including the speed branch), and each sample comes with a different high-level command, which means its output belongs to a different branch. So how is it possible to "just back propagate on one branch"?
What is your idea? I look forward to further discussion with you. Thanks a lot~
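
For concreteness, here is a rough sketch of the branched head as described above (hypothetical PyTorch code, not the repository's actual implementation; BranchedHead, feat_dim and the other names are placeholders):

    # Hypothetical sketch of the branched output head: a shared feature vector
    # feeds four control branches plus a speed-prediction branch, and the
    # high-level command decides which control branch a sample belongs to.
    import torch
    import torch.nn as nn

    class BranchedHead(nn.Module):
        def __init__(self, feat_dim=512, n_commands=4, n_actions=3):
            super().__init__()
            # one control branch per high-level command (follow, left, right, straight)
            self.control_branches = nn.ModuleList(
                [nn.Linear(feat_dim, n_actions) for _ in range(n_commands)]
            )
            # extra branch that predicts the current speed from the same features
            self.speed_branch = nn.Linear(feat_dim, 1)

        def forward(self, features):
            # run every branch; the loss (not the forward pass) decides which
            # control branch each sample actually trains
            controls = torch.stack([b(features) for b in self.control_branches], dim=1)
            speed = self.speed_branch(features)
            return controls, speed  # controls: (B, n_commands, n_actions)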

mvpcom commented Apr 11, 2018

@lvlvlvlvlv As far as I understand it, I recommend defining one loss function and a mask variable. According to the control input, the mask specifies which branch should be trained. If you assign the right value to the mask, you can explicitly multiply it into the primary loss function. So the new loss is the sum, over all branches, of each branch's MSE multiplied by its mask value. Please let me know if you have more questions. Thanks for your contribution.
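
To illustrate the idea, here is a minimal sketch of such a masked loss (assuming PyTorch and the hypothetical BranchedHead above; masked_branch_loss and all argument names are placeholders, not code from the repository):

    # Minimal sketch of a masked multi-branch loss: all branches contribute a
    # term, but non-selected branches are multiplied by zero.
    import torch
    import torch.nn.functional as F

    def masked_branch_loss(controls, speed_pred, target_actions, target_speed, command):
        # controls: (B, n_commands, n_actions); command: (B,) integer command indices
        n_commands = controls.shape[1]
        # one-hot mask: 1 for the branch matching each sample's command, 0 elsewhere
        mask = F.one_hot(command, n_commands).float().unsqueeze(-1)  # (B, n_commands, 1)
        # per-branch squared error, zeroed out for the non-selected branches
        per_branch_se = (controls - target_actions.unsqueeze(1)) ** 2
        control_loss = (per_branch_se * mask).sum() / controls.shape[0]
        speed_loss = F.mse_loss(speed_pred.squeeze(-1), target_speed)
        return control_loss + speed_loss

Because the non-selected branches are multiplied by zero, they receive zero gradient when loss.backward() is called, so only the branch matching each sample's command is effectively trained even though all branches run in the forward pass.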

xubo92 commented Apr 15, 2018

@mvpcom Thanks very much~
Your words inspired me a lot.
I now have the model running for training, but I found that the training loss doesn't decrease significantly.
Here is part of my log:

Train ::: Epoch 0, batch 460, loss: 0.739
Train ::: Epoch 0, batch 470, loss: 0.749
Train ::: Epoch 0, batch 480, loss: 0.735
Train ::: Epoch 0, batch 490, loss: 0.768
Train ::: Epoch 0, batch 500, loss: 0.756

It seems that the loss always stays around 0.7.
Have you ever trained the model successfully? Do you have any experience with that? What were your results?
I look forward to further discussion with you~
Thanks for your help and work!

mvpcom commented Apr 23, 2018

@lvlvlvlvlv I can confirm that in my experiment the loss decreased, and I have no idea why yours stays fixed. However, as you may know, full convergence is not yet possible because of some minor remaining problems; you can find all of them in the TODO. One of the main concerns is balancing the dataset. You can easily balance the dataset using the same technique here, although I believe there are smarter ways to do the same work. Again, sorry for the late response. I will do my best to be as prompt as possible in the further discussion.
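
As one example, a common way to balance a driving dataset is to bin samples by steering angle and oversample the rare bins (a sketch only, assuming PyTorch; this is not necessarily the exact technique linked above, and balanced_sampler is a placeholder name):

    # Sketch: weight each sample by the inverse frequency of its steering-angle
    # bin, so rare (sharp-turn) samples are drawn more often during training.
    import numpy as np
    import torch
    from torch.utils.data import WeightedRandomSampler

    def balanced_sampler(steering_angles, n_bins=21):
        steering_angles = np.asarray(steering_angles)
        bins = np.linspace(steering_angles.min(), steering_angles.max(), n_bins + 1)
        bin_idx = np.clip(np.digitize(steering_angles, bins) - 1, 0, n_bins - 1)
        counts = np.bincount(bin_idx, minlength=n_bins).astype(np.float64)
        weights = 1.0 / counts[bin_idx]
        return WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                     num_samples=len(steering_angles),
                                     replacement=True)

The returned sampler can then be passed to a DataLoader through its sampler argument.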

mvpcom commented Apr 23, 2018

@lvlvlvlvlv The output of my experiment shows something like this:

Train::: Epoch: 0, Step: 580, TotalSteps: 581, Loss: 0.0489383 (Follow Lane)
Train::: Epoch: 0, Step: 580, TotalSteps: 581, Loss: 0.0419313 (Go Left)
Train::: Epoch: 0, Step: 580, TotalSteps: 581, Loss: 0.0437215 (Go Right)
Train::: Epoch: 0, Step: 580, TotalSteps: 581, Loss: 0.0526295 (Go Straight)
Train::: Epoch: 0, Step: 580, TotalSteps: 581, Loss: 165.592 (Speed Prediction Branch)

Did you change anything? If so, please share the code so we can find the problem. You can open a pull request for that, or share it however you like.
