Iteration over `params(m)` in explicit mode gives no gradient (#2091)
Comments
How much do we care about this working? I'm not sure I have the wherewithal to figure it out while not regressing Flux.
We (or really GitHub) provide an escape hatch: you can still open a blank issue via the "Don’t see your issue here? Open a blank issue." link at the bottom of the new issue page.
IDK, it's the officially documented way to add a regularisation term: http://fluxml.ai/Flux.jl/stable/models/regularisation/. Ideally that would move to something like FluxML/Optimisers.jl#57. But if the goal is to provide a soft transition to explicit mode, so you can adopt the new things gradually during 0.13 rather than all at once when moving to 0.14, then I think the old way ought to keep working a bit longer?
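For reference, the documented pattern is roughly the first function below, and one explicit-style workaround is to avoid `params` by touching the model's fields directly. This is only my sketch (the helper names are hypothetical, and the Optimisers.jl#57 proposal is about doing weight decay in the optimiser instead), assuming a Flux 0.13-era `Chain`/`Dense` model:

```julia
using Flux

m = Chain(Dense(2 => 3, relu), Dense(3 => 1))

# The documented implicit style, which #2054 broke inside explicit gradients:
penalty_implicit(model) = sum(p -> sum(abs2, p), Flux.params(model))

# A hand-written explicit alternative: ordinary field access, so Zygote
# differentiates it like any other struct code. Hypothetical helper name.
penalty_explicit(model) =
    sum(abs2, model[1].weight) + sum(abs2, model[1].bias) +
    sum(abs2, model[2].weight) + sum(abs2, model[2].bias)

loss(model, x, y) = Flux.Losses.mse(model(x), y) + 0.01f0 * penalty_explicit(model)
```

Writing the penalty by hand obviously doesn't scale to arbitrary models, which is why something structural (Functors-based, or optimiser-side decay) seems like the real fix.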
Is there a smart rule we could write which avoids #2054? I think the main considerations would be how map …
This is surely possible, someone just has to write one more Functors thing which builds this gradient tree, as the …
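If it helps, here is a sketch of the shape that "one more Functors thing" might take. Everything here is hypothetical: the name `gradient_tree` is made up, and it assumes per-array gradients have already been collected into an `IdDict` keyed by the parameter arrays, the way implicit mode accumulates them:

```julia
using Functors

# Given a model and an IdDict mapping each parameter array to its
# gradient, rebuild a nested NamedTuple mirroring the model's structure,
# i.e. the shape Zygote uses for explicit gradients. fmapstructure walks
# the same tree as fmap but returns plain NamedTuples/Tuples rather than
# reconstructing the original structs; non-array leaves map to nothing.
gradient_tree(model, grads::IdDict) =
    fmapstructure(x -> get(grads, x, nothing), model)
```

Whether wiring something like this into a differentiation rule for `params` iteration is enough is exactly the open question, of course.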
The use of `params` within explicit-mode gradients has been broken by #2054. Sadly there were no tests of this.

(This form really is very annoying. Suggestions are fine, but required textboxes seem to be solving a problem we didn't have. Also, it seems to break links to issues, like #2054 above. Edited to delete the boilerplate.)