minor tweaks to allow for torch 1.5 compatibility #558
Conversation
I have no idea why this is failing in QR. All tests pass on both HDFML and my local machine. Anyone have any ideas?
Codecov Report
```
@@           Coverage Diff           @@
##           master     #558   +/-   ##
=======================================
  Coverage   96.39%   96.39%
=======================================
  Files          75       75
  Lines       14849    14849
=======================================
  Hits        14313    14313
  Misses        536      536
```
Continue to review full report at Codecov.
- whats the frequency travis? travis updates
- …-analytics/heat into torch1.5-compatability
Great, thanks, the sanitize_memory part looks good to me. Maybe, while you're at it, add a line to the docs stating that the input and output are torch tensors; I forgot to at the time, and it can indeed get confusing.
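A minimal sketch of the kind of docstring note being requested, assuming a signature of the form `sanitize_memory_layout(x, order)`; the wording and parameter names are illustrative, not the actual heat docstring.

```python
import torch


def sanitize_memory_layout(x: torch.Tensor, order: str = "C") -> torch.Tensor:
    """
    Return the process-local data with the requested memory layout.

    Note
    ----
    Both the input ``x`` and the return value are ``torch.Tensor`` objects
    (the local chunk of a DNDarray), not ``DNDarray`` objects.
    """
    ...
```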
Description
Changes made to allow the use of the most recent PyTorch release (1.5.0); see the proposed changes below for more detail.
Issue/s resolved: None
Changes proposed:
- memory.sanitize_memory_layout now assumes that the data is distributed in the default way; it also makes no changes if the tensor passed to the function has no elements (see the sketch below).
- DNDarray.redistribute uses where instead of nonzero (the usage of nonzero there was deprecated), as sketched below.

Type of change
- Compatibility with dependency update
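A hedged sketch of the two points above, not the actual heat implementation: the function name, signature, and body below are illustrative assumptions, while `torch.nonzero` and `torch.where` are the real PyTorch calls involved in the deprecation.

```python
import torch

x = torch.tensor([0, 3, 0, 7])

# Deprecated pattern in torch 1.5: nonzero() without as_tuple emits a
# deprecation warning and returns an (n, 1) index matrix.
# idx = torch.nonzero(x)

# Replacement pattern: torch.where with a single condition argument returns a
# tuple of index tensors, so the first element gives the same indices.
idx = torch.where(x != 0)[0]  # tensor([1, 3])


def _layout_sketch(t: torch.Tensor, order: str = "C") -> torch.Tensor:
    # Empty-tensor guard, as described above: nothing to rearrange.
    if t.numel() == 0:
        return t
    if order == "F":
        # Column-major-like strides: permute, make contiguous, permute back.
        dims = tuple(reversed(range(t.ndim)))
        return t.permute(dims).contiguous().permute(dims)
    return t.contiguous()
```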
Due Diligence
Does this change modify the behaviour of other functions? If so, which?
Yes, anything which uses