
Fix: access model.trainer.val_dataloader instead of model.val_dataloader #758

Merged (10 commits) on Nov 29, 2021

Conversation

@danielgafni commented Nov 16, 2021

Description

Fixes #757.


Checklist

  • Linked issues (if existing)
  • Amended changelog for large changes (and added myself there as contributor)
  • Added/modified tests
  • Used pre-commit hooks when committing to ensure that code is compliant with hooks. Install hooks with pre-commit install;
    to run hooks independently of a commit, execute pre-commit run --all-files

Make sure to have fun coding!

@danielgafni (Author) commented Nov 16, 2021

Hmm, it looks like accessing dataloaders from configure_optimizers can't be done until this pytorch-lightning issue is resolved.

We can either change this logic or wait for pytorch-lightning to fix this.
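
For context, here is a minimal sketch of the failing pattern (illustrative names only, not pytorch-forecasting's actual base_model.py code). Under Lightning ~1.5, configure_optimizers runs before the trainer attaches the dataloaders, so code that tries to read them at that point can find nothing there:

```python
import torch
import pytorch_lightning as pl


class ExampleModel(pl.LightningModule):  # hypothetical model for illustration
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        # configure_optimizers is called before the trainer has attached
        # the dataloaders, so this can still be None at this point:
        train_loader = self.trainer.train_dataloader
        if train_loader is not None:
            steps_per_epoch = len(train_loader)  # e.g. to size an LR schedule
        return optimizer
```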

Daniil Gafni and others added 2 commits November 20, 2021 09:57
Bumps [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning) from 1.4.9 to 1.5.2.
- [Release notes](https://github.com/PyTorchLightning/pytorch-lightning/releases)
- [Changelog](https://github.com/PyTorchLightning/pytorch-lightning/blob/1.5.2/CHANGELOG.md)
- [Commits](Lightning-AI/pytorch-lightning@1.4.9...1.5.2)

---
updated-dependencies:
- dependency-name: pytorch-lightning
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
@jdb78 (Collaborator) commented Nov 20, 2021

Looks like the team is actively working on it.

@jdb78 (Collaborator) commented Nov 27, 2021

We should be able to fix it by removing the trainer dependency and making the LR scheduler optional, i.e. strict=False.
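
A minimal sketch of what that could look like (illustrative, assuming a ReduceLROnPlateau scheduler; not the exact base_model.py change). Returning the scheduler config with "strict": False tells Lightning to warn rather than raise when the monitored metric is unavailable, which drops the hard dependency on the trainer's dataloaders:

```python
import torch
import pytorch_lightning as pl


class ExampleModel(pl.LightningModule):  # hypothetical model for illustration
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=4)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {
                "scheduler": scheduler,
                "monitor": "val_loss",
                # strict=False: don't require "val_loss" to be present,
                # making the scheduler effectively optional
                "strict": False,
            },
        }
```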

@codecov-commenter commented Nov 27, 2021

Codecov Report

Merging #758 (4bc9770) into master (910c6d8) will increase coverage by 0.24%.
The diff coverage is 60.00%.


@@            Coverage Diff             @@
##           master     #758      +/-   ##
==========================================
+ Coverage   89.76%   90.01%   +0.24%     
==========================================
  Files          24       24              
  Lines        3734     3734              
==========================================
+ Hits         3352     3361       +9     
+ Misses        382      373       -9     
Flag     Coverage Δ
cpu      90.01% <60.00%> (+0.24%) ⬆️
pytest   90.01% <60.00%> (+0.24%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                             Coverage Δ
pytorch_forecasting/models/base_model.py   90.07% <60.00%> (+0.28%) ⬆️
pytorch_forecasting/metrics.py             93.45% <0.00%> (+0.23%) ⬆️
pytorch_forecasting/utils.py               81.94% <0.00%> (+1.38%) ⬆️
pytorch_forecasting/models/nn/rnn.py       92.40% <0.00%> (+5.06%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 910c6d8...4bc9770.


Successfully merging this pull request may close these issues.

Access dataloaders with model.trainer.{stage}_dataloader, not model.{stage}_dataloader