multiple updates #153
base: master
Conversation
- additional args to freeze to be able to individually freeze conv params, norm params, and norm statistics
- add argument to allow normalization on skip convs
…rrectly with resume; make LRAnnealingHook work with multiple param_groups;
…which correctly implements weight decay in Adam updates
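The commits above mention making LRAnnealingHook work with multiple param_groups and correcting weight decay in Adam updates (presumably in the spirit of torch.optim.AdamW's decoupled decay). As a point of reference only (not code from this PR; the model and group split are made up for illustration), a minimal torch.optim sketch of what multiple param_groups look like and why a hook that only reads `param_groups[0]` is not enough:

```python
import torch

# Hypothetical two-part model; the split into groups is illustrative only.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),   # treat as "backbone"
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),    # treat as "head"
)

# torch.optim optimizers accept a list of param_group dicts, each with its
# own hyperparameters (here: different learning rates).
optimizer = torch.optim.AdamW(
    [
        {'params': model[0].parameters(), 'lr': 1e-4},
        {'params': model[2].parameters(), 'lr': 1e-3},
    ],
    weight_decay=1e-2,  # AdamW applies decoupled weight decay
)

# A learning-rate annealing hook therefore has to update every group,
# not just optimizer.param_groups[0].
for group in optimizer.param_groups:
    group['lr'] *= 0.5
```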
Codecov Report

Attention: ❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

@@            Coverage Diff             @@
##           master     #153      +/-   ##
==========================================
- Coverage   76.92%   76.82%   -0.11%
==========================================
  Files          46       46
  Lines        3745     3776      +31
==========================================
+ Hits         2881     2901      +20
- Misses        864      875      +11

☔ View full report in Codecov by Sentry.
return self.get_optimizer(trainer).param_groups[0]['lr']
opt = self.get_optimizer(trainer)
lrs = [param_group['lr'] for param_group in opt.param_groups]
if len(set(lrs)) == 1:
Why return a scalar when all lrs are the same?
Why not simply if len(lrs) == 1:?
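For context on this thread, a hypothetical version of the getter under discussion (the function name is assumed, not taken from the repo). It shows the difference the review question is about: `len(set(lrs)) == 1` collapses to a scalar whenever all groups happen to share one lr, whereas `len(lrs) == 1` would only cover the single-group case.

```python
def _get_lr(optimizer):
    """Hypothetical sketch of the lr getter being reviewed."""
    lrs = [group['lr'] for group in optimizer.param_groups]
    if len(set(lrs)) == 1:
        # All groups share the same lr (this also covers a single group),
        # so a scalar is returned, matching the previous behaviour.
        return lrs[0]
    # Otherwise report one lr per param_group.
    return lrs
```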
    self.parameters, grad_clips
)
if isinstance(self.parameters[0], dict) and 'params' in self.parameters[0]:
    params = itertools.chain(*[param_group['params'] for param_group in self.parameters])
Is that a bug in PyTorch, that the optimizers support more input formats than clip_grad_norm_ does?
Could you add a comment here explaining which special case is being handled?
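For reference, torch.nn.utils.clip_grad_norm_ accepts an iterable of Tensors, while torch.optim optimizers also accept a list of param_group dicts, so the dict form has to be flattened before clipping. A standalone sketch of that special case (function name and signature are assumptions, not this repo's API):

```python
import itertools
import torch

def clip_grads(parameters, max_norm):
    """Sketch: `parameters` is either a flat list of Tensors or a list of
    param_group dicts (the format torch.optim optimizers accept).
    clip_grad_norm_ only understands Tensors, so the dict form is
    flattened into a single iterable of Tensors first."""
    if isinstance(parameters[0], dict) and 'params' in parameters[0]:
        parameters = list(itertools.chain.from_iterable(
            group['params'] for group in parameters
        ))
    return torch.nn.utils.clip_grad_norm_(parameters, max_norm)
```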
if force_dense:
    array = array.to_dense()
else:
    raise NotImplementedError
Why do you raise a NotImplementedError? Will sparse tensors not be converted to scipy? Or are there ambiguities?
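For what it's worth, a 2-D sparse COO tensor can be converted to scipy.sparse without densifying; a sketch of one possible fallback (the function name and the restriction to 2-D are assumptions, and this is not the PR's code):

```python
import torch
from scipy.sparse import coo_matrix

def sparse_to_scipy(array):
    """Sketch: convert a 2-D torch sparse COO tensor to scipy.sparse.

    Higher-dimensional sparse tensors have no scipy equivalent, which may
    be the ambiguity behind the NotImplementedError above.
    """
    assert array.is_sparse and array.dim() == 2
    array = array.coalesce()              # merge duplicate indices
    idx = array.indices().cpu().numpy()   # shape (2, nnz)
    vals = array.values().cpu().numpy()   # shape (nnz,)
    return coo_matrix((vals, (idx[0], idx[1])), shape=tuple(array.shape))
```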