Bug/999 Fix `keepdim` in `any`/`all` #1000
Conversation
Codecov Report
@@           Coverage Diff           @@
##             main    #1000   +/-  ##
=======================================
  Coverage   91.67%   91.68%
=======================================
  Files          65       65
  Lines        9974     9978     +4
=======================================
+ Hits         9144     9148     +4
  Misses        830      830
Thanks for noticing and addressing this @neosunhan !
Thanks a lot @neosunhan! Can you expand the tests to cover combinations of axis / split axis and to check the output shape?
Example:
ones_2d_split = ht.ones((2, 2), split=0)
self.assertEqual(ones_2d_split.all(axis=0, keepdim=True).shape, ....)
self.assertEqual(ones_2d_split.all(axis=1, keepdim=True).shape, ....)
ones_2d_split = ht.ones((2, 2), split=1)
# see above
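As a reference point for the shapes being asked about, here is a NumPy stand-in for the suggested test (an illustrative sketch only; it assumes that heat's `split` affects the memory distribution but not the logical output shape, so the expected shapes match NumPy's):

```python
import numpy as np

# NumPy analogue of the suggested heat test above.
ones_2d = np.ones((2, 2))

# Reducing over axis 0 with keepdims keeps a singleton first dimension.
assert np.all(ones_2d, axis=0, keepdims=True).shape == (1, 2)
# Reducing over axis 1 keeps a singleton second dimension.
assert np.all(ones_2d, axis=1, keepdims=True).shape == (2, 1)
```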
I'm going to merge this, well done @neosunhan !
Next try... 👍
Next next +1
final next...
* Fix `all`
* Fix `any`
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Add distributed tests
* Expanded tests for combination of axis/split axis

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Claudia Comito <39374113+ClaudiaComito@users.noreply.github.com>
Co-authored-by: mtar <m.tarnawa@fz-juelich.de>
Description

`keepdim=True` now works in `any`/`all` when `axis=None`.

Issue/s resolved: #999

Changes proposed: when `axis=None`, set `axis` to a tuple of all dimensions. This is equivalent to performing a reduction over all dimensions.

Type of change
Due Diligence
Does this change modify the behaviour of other functions? If so, which?
no
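The proposed change can be sketched in a few lines. This is an illustrative NumPy stand-in, not the actual heat implementation (the real fix lives in heat's reduction machinery, and the helper name `all_keepdim` is hypothetical):

```python
import numpy as np

def all_keepdim(x, axis=None, keepdim=False):
    # Sketch of the proposed change: when axis is None, replace it with
    # a tuple of all dimensions, so the reduction still covers the whole
    # array while keepdim can be honoured per-dimension.
    if axis is None:
        axis = tuple(range(x.ndim))
    return np.all(x, axis=axis, keepdims=keepdim)

a = np.ones((2, 3))
print(all_keepdim(a, keepdim=True).shape)  # (1, 1): all dims kept as singletons
```

With `keepdim=False` the result collapses to a scalar, matching the previous behaviour; the tuple-of-axes form only matters when the singleton dimensions must be preserved.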
skip ci