
Bug/464 unique torch tensor #465

Merged 7 commits into master on Jan 24, 2020

Conversation

@ClaudiaComito (Contributor) commented Jan 23, 2020

Description

The output of ht.unique(x) used to be a torch tensor (or a tuple of torch tensors) when x.split is None; it is now a heat tensor, or a tuple of heat tensors, in all cases.
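The fix described above can be sketched as follows. This is a hedged illustration only: a stand-in HeatLike wrapper plays the role of heat's DNDarray, and numpy's np.unique stands in for torch.unique (both return a bare array, or a tuple when return_inverse=True); the real fix wraps the torch output via heat's factories.array.

```python
import numpy as np

class HeatLike:
    """Stand-in for heat's DNDarray (hypothetical, for illustration only)."""
    def __init__(self, data, split=None, device=None):
        self.data = np.asarray(data)
        self.split = split
        self.device = device

def unique_wrapped(x, return_inverse=False):
    # np.unique mirrors torch.unique here: it returns a bare array,
    # or a tuple of arrays when return_inverse=True.
    out = np.unique(x, return_inverse=return_inverse)
    # The fix: wrap every output so callers never see a raw backend
    # tensor, regardless of whether the result is a tuple.
    if isinstance(out, tuple):
        return tuple(HeatLike(o, split=None, device="cpu") for o in out)
    return HeatLike(out, split=None, device="cpu")
```

Before the fix, the non-split code path returned `out` unwrapped, so local runs leaked the backend tensor type to callers.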

Issue/s resolved: #464

Changes proposed:

  • see above
  • tests updated

Type of change


  • Bug fix (non-breaking change which fixes an issue)

Due Diligence

  • All split configurations tested
  • Multiple dtypes tested in relevant functions
  • Documentation updated (if needed)
  • Updated changelog.md under the title "Pending Additions"

Does this change modify the behaviour of other functions? If so, which?

no

@ClaudiaComito added the bug (Something isn't working) label on Jan 23, 2020
codecov bot commented Jan 23, 2020

Codecov Report

Merging #465 into master will increase coverage by <.01%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master     #465      +/-   ##
==========================================
+ Coverage   96.68%   96.69%   +<.01%     
==========================================
  Files          57       57              
  Lines       12134    12147      +13     
==========================================
+ Hits        11732    11745      +13     
  Misses        402      402
Impacted Files Coverage Δ
heat/core/tests/test_manipulations.py 99.26% <100%> (ø) ⬆️
heat/core/manipulations.py 99.2% <100%> (ø) ⬆️


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 075c138...5ee0c04.

@mtar (Collaborator) left a comment:

One question about the number of arguments.

a._DNDarray__array, sorted=sorted, return_inverse=return_inverse, dim=axis
)
if isinstance(torch_output, tuple):
heat_output = tuple(factories.array(i, dtype=a.dtype) for i in torch_output)
Collaborator: different number of arguments?

Member: also the array calls should have a device parameter

Member: and split

if isinstance(torch_output, tuple):
heat_output = tuple(factories.array(i, dtype=a.dtype) for i in torch_output)
else:
heat_output = factories.array(torch_output, dtype=a.dtype, split=None, device=a.device)
Collaborator: different number of arguments?

@coquelin77 (Member) left a comment:

Everything looks good after the changes are made.

a._DNDarray__array, sorted=sorted, return_inverse=return_inverse, dim=axis
)
if isinstance(torch_output, tuple):
heat_output = tuple(factories.array(i, dtype=a.dtype) for i in torch_output)
Member: also the array calls should have a device parameter

a._DNDarray__array, sorted=sorted, return_inverse=return_inverse, dim=axis
)
if isinstance(torch_output, tuple):
heat_output = tuple(factories.array(i, dtype=a.dtype) for i in torch_output)
Member: and split
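Putting the review suggestions together, both branches should forward dtype, split, and device consistently, so the tuple case no longer drops metadata relative to the single-tensor case. A minimal sketch, using a stand-in array factory and numpy arrays in place of heat's factories.array and torch tensors:

```python
import numpy as np

def array(obj, dtype=None, split=None, device=None):
    """Stand-in for heat's factories.array (hypothetical, illustration only)."""
    return {"data": np.asarray(obj, dtype=dtype), "split": split, "device": device}

def wrap(torch_output, a_dtype, a_device):
    # Both branches pass the same keyword arguments, so device and
    # split metadata is preserved whether or not the backend returned
    # a tuple (e.g. when return_inverse=True).
    if isinstance(torch_output, tuple):
        return tuple(
            array(i, dtype=a_dtype, split=None, device=a_device) for i in torch_output
        )
    return array(torch_output, dtype=a_dtype, split=None, device=a_device)
```

The earlier diff passed only dtype in the tuple branch; the review asks for the same argument list in both branches.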

@ClaudiaComito ClaudiaComito merged commit 5c3801a into master Jan 24, 2020
@ClaudiaComito ClaudiaComito deleted the bug/464-unique-torch-tensor branch January 24, 2020 04:20
@ClaudiaComito (Contributor, Author):

Thanks a lot @mtar and @coquelin77, you're correct. I made the changes you requested; merging and closing now.

Labels
bug Something isn't working

Successfully merging this pull request may close these issues:

ht.unique() returns torch.Tensor if run locally

3 participants