Add parity test for simple RNN #1351
Conversation
Hello @mpariente! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2020-04-03 13:16:24 UTC
@mpariente so haha... where does that leave us? sounds like these parity tests show lightning is working as expected? is it maybe truncated backprop? maybe we can add it to this if it isn't already?
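(for reference, a minimal sketch of turning on truncated backprop, assuming the 0.7-era `truncated_bptt_steps` Trainer argument; the chunk size here is illustrative:)

```python
from pytorch_lightning import Trainer

# Split each time-series batch into chunks of 2 time steps and
# backpropagate through each chunk instead of the full sequence.
trainer = Trainer(truncated_bptt_steps=2)
```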
I was hoping to see at least a small difference, but no. Anyway, this parity test is just the tip of the iceberg; we don't test any real Lightning features yet. But it's a good start. Has anything major changed in callback behavior or in schedulers between 0.6.0 and 0.7.1?
Nothing big I can think of. We did automatically add Adam in configure_optimizers if you don't define it. We had someone misspell that hook and they trained with the wrong learning rate. Verify your learning rate? We removed that fallback for 0.7.2. Maybe rerun using the version from master? Check the release notes: https://github.com/PyTorchLightning/pytorch-lightning/releases/tag/0.7.0
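A hedged sketch of that misspelling pitfall (the class and optimizer here are made up for illustration; the auto-Adam fallback it trips over was removed for 0.7.2, as noted above):

```python
import torch
import pytorch_lightning as pl

class MisspelledModel(pl.LightningModule):
    # Typo: "configure_optimizer" (singular) never overrides the real
    # hook, so the auto-added Adam with its default learning rate was
    # used instead of the SGD with lr=0.5 defined here.
    def configure_optimizer(self):  # should be configure_optimizers
        return torch.optim.SGD(self.parameters(), lr=0.5)
```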
@mpariente yeah, those curves hint at a learning rate mismatch or a learning rate scheduler mismatch. maybe you guys changed the scheduler? or maybe we do something different for the scheduler? i think there was something about .step vs .epoch for the scheduler. something like this:
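(a sketch, assuming the scheduler dict API with an `interval` key that later Lightning versions support; the exact 0.6-vs-0.7 stepping behavior is the open question here:)

```python
import torch

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
    # "interval" controls whether Lightning calls scheduler.step()
    # after every optimizer step or once per epoch -- a mismatch here
    # between versions would produce exactly this kind of curve drift.
    return [optimizer], [{"scheduler": scheduler, "interval": "epoch"}]
```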
LGTM 🚀
we forgot to add it to the changelog... added in a9f15df
* Add parity test for simple RNN
* Update test_rnn_parity.py

Co-authored-by: William Falcon <waf2107@columbia.edu>
This is a simple test for a shifted and summed random dataset.
The test passes on CPU (I added the GPU restriction before pushing).
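For reference, a sketch of the kind of dataset the test builds (names and shapes here are illustrative, not the exact contents of `test_rnn_parity.py`):

```python
import torch
from torch.utils.data import Dataset

class ShiftSumDataset(Dataset):
    """Random sequences; the target is the input shifted by one time
    step and summed over the feature dimension."""
    def __init__(self, dataset_len=300, sequence_len=100, num_features=10):
        self.x = torch.randn(dataset_len, sequence_len, num_features)
        pad = torch.zeros(dataset_len, 1, num_features)
        shifted = torch.cat([pad, self.x[:, :-1, :]], dim=1)
        self.y = shifted.sum(dim=2, keepdim=True)

    def __len__(self):
        return self.x.size(0)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]
```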
What does this PR do?
Compares basic Lightning training to vanilla PyTorch training for an RNN.
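The comparison pattern, sketched (the reference loop below is illustrative; the real test lives in `test_rnn_parity.py` and seeds both runs identically before asserting the loss curves match):

```python
import torch
from torch.utils.data import DataLoader

def vanilla_loop(model, dataset, num_epochs=10, lr=1e-3):
    """Plain PyTorch training loop used as the reference run."""
    loader = DataLoader(dataset, batch_size=32, shuffle=False)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    losses = []
    for _ in range(num_epochs):
        for x, y in loader:
            loss = torch.nn.functional.mse_loss(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            losses.append(loss.item())
    return losses

# The test then reseeds, trains the same module with pl.Trainer, and
# asserts the two loss sequences agree to within a small tolerance.
```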
Note
Even for the architecture that shows a performance difference between PL 0.6.0 and PL 0.7.1 (as discussed in #1136), the test passes for both versions.
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃