
Features cumsum & cumprod #524

Merged: 15 commits from enhancement/243-cum_operations merged into master on Apr 7, 2020
Conversation

@mtar (Collaborator) commented on Apr 3, 2020

Description

Implementation of the cumsum and cumprod functions. NumPy's axis=None is not supported. A short usage sketch follows the list of proposed changes below.

Issue/s resolved: #243

Changes proposed:

  • add cumsum
  • add cumprod
  • add cumproduct as an alias for cumprod
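
A quick usage sketch of the new calls (illustrative values; the API assumed here is the one visible in the diff below: ht.cumsum(a, axis), ht.cumprod(a, axis) and the cumproduct alias). Run with e.g. mpirun -n 2 python example.py:

import heat as ht

a = ht.full((2, 4, 2), 2, dtype=ht.float32, split=1)  # distributed along axis 1

csum = ht.cumsum(a, 1)          # running sum along the split axis
cprod = ht.cumprod(a, 1)        # running product along the split axis
calias = ht.cumproduct(a, 1)    # alias for cumprod

print(ht.equal(cprod, calias))  # True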

Type of change

  • New feature (non-breaking change which adds functionality)
  • Documentation update

Due Diligence

  • All split configurations tested
  • Multiple dtypes tested in relevant functions
  • Documentation updated (if needed)
  • Updated changelog.md under the title "Pending Additions"

Does this change modify the behaviour of other functions? If so, which?

yes / no

heat/core/arithmetics.py (review thread resolved, outdated)
heat/core/arithmetics.py (review thread resolved, outdated)
heat/core/tests/test_arithmetics.py (review thread resolved)

cprod = ht.cumprod(a, 0)
self.assertTrue(ht.equal(cprod, result))

a = ht.full((2, 4, 2), 2, dtype=ht.float32, split=1, device=ht_device)

Member: please test up to 3 dims


recv = torch.ones(cprod.shape[:axis] + cprod.shape[axis + 1 :], dtype=send.dtype)

a.comm.Exscan(send, recv, MPI.PROD)

Member: @krajsek does this work with the upcoming AD stuff?

Member: Not yet.

heat/core/arithmetics.py (review thread resolved, outdated)

cumprod : DNDarray
A new array holding the result is returned unless `out` is
specified, in which case a reference to out is returned.
"""

Member: Add Raises section for exceptions that are thrown from within this function


mtar (Collaborator, Author): Shall the raises section only be in __cum_op() or also here?
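
For illustration only, a Raises section in the docstring style of the snippet above might look like the sketch below; the concrete exception types are assumptions rather than something taken from the diff:

    Raises
    ------
    NotImplementedError
        If `axis` is None (the PR does not support NumPy's axis=None behaviour).
    ValueError
        If `axis` is out of bounds for the dimensions of `a`.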

heat/core/arithmetics.py (review thread resolved, outdated)

`axis` is not None or `a` is a 1-d array.

"""
if axis is None:

Member: sanitize_axis
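
A minimal sketch of what that suggestion could look like, assuming heat's stride_tricks.sanitize_axis(shape, axis) helper (the surrounding function name is hypothetical):

from heat.core import stride_tricks

def _validated_axis(a, axis):
    # hypothetical helper: reject axis=None (unsupported per the PR description),
    # then let sanitize_axis normalize negative axes and raise on invalid ones
    if axis is None:
        raise NotImplementedError("axis=None is not supported")
    return stride_tricks.sanitize_axis(a.shape, axis)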

heat/core/arithmetics.py (review thread resolved, outdated)
heat/core/arithmetics.py (review thread resolved)

recv = torch.zeros(csum.shape[:axis] + csum.shape[axis + 1 :], dtype=send.dtype)

a.comm.Exscan(send, recv, MPI.SUM)

Member: More or less identical code as above. Could you please introduce a __cum_op() call analogous to __reduce_op() in operations.py and make use of it?


mtar (Collaborator, Author): function added
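
For readers following the thread: the duplicated pattern is a process-local cumulative operation followed by an MPI exclusive scan (Exscan) of each rank's last slice, which is then folded into the local result. A minimal, self-contained sketch of that idea using mpi4py and NumPy (not heat's actual __cum_op; names and the DNDarray handling are simplified):

# run with e.g. `mpirun -n 4 python cum_sketch.py`
import numpy as np
from mpi4py import MPI

def distributed_cumsum(local_chunk, comm):
    # 1) cumulative sum of the block this rank holds (array split along axis 0)
    local = np.cumsum(local_chunk, axis=0)
    # 2) exclusive scan of each rank's running total (its last row)
    send = np.ascontiguousarray(local[-1])
    recv = np.zeros_like(send)
    comm.Exscan(send, recv, op=MPI.SUM)
    # 3) every rank after the first adds the totals accumulated on preceding ranks
    if comm.rank > 0:
        local += recv
    return local

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    chunk = np.full((2, 3), comm.rank + 1.0)  # each rank holds two rows
    print(comm.rank, distributed_cumsum(chunk, comm))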

if csum.numel() > 0:
    indices = -1
    Ni, Nk = csum.shape[:axis], csum.shape[axis + 1 :]
    for ii in np.ndindex(Ni):

Member: Can we somehow manage to do this without the Python loops, via views? Performance with them is likely to be slow.


mtar (Collaborator, Author): I replaced it with torch.index_select().
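
As an illustration of that change (not the exact code from the diff), torch.index_select can pull the last slice along an arbitrary axis in a single call instead of looping over np.ndindex:

import torch

t = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)
axis = 1
last = torch.tensor([t.shape[axis] - 1])                 # index of the final slice along `axis`
send = torch.index_select(t, axis, last).squeeze(axis)   # shape (2, 4), no Python loop
print(send.shape)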

@codecov (codecov bot) commented on Apr 7, 2020

Codecov Report

Merging #524 into master will decrease coverage by 0.02%.
The diff coverage is 92.30%.

@@            Coverage Diff             @@
##           master     #524      +/-   ##
==========================================
- Coverage   96.38%   96.35%   -0.03%     
==========================================
  Files          75       75              
  Lines       14481    14585     +104     
==========================================
+ Hits        13958    14054      +96     
- Misses        523      531       +8     
Impacted Files                        Coverage            Δ
heat/core/operations.py               92.77% <85.71%>     (-1.31%) ⬇️
heat/core/tests/test_arithmetics.py   97.88% <94.28%>     (-0.57%) ⬇️
heat/core/arithmetics.py              99.19% <100.00%>    (+0.04%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7a6e29f...11e8b4a.

@Markus-Goetz merged commit d92bff9 into master on Apr 7, 2020.
@Markus-Goetz deleted the enhancement/243-cum_operations branch on Apr 7, 2020 at 18:11.
Development

Successfully merging this pull request may close issue #243 (Provide cum* operation).

3 participants