Features/125 modf #402

Merged: 50 commits, Dec 9, 2019
Changes from 15 commits

Commits (50)
ec6c508
BROKEN. First implementation of modf
lenablind Sep 13, 2019
f70fb61
Merge remote-tracking branch 'origin/master' into features/125-modf
lenablind Sep 30, 2019
b3d1c71
BROKEN. Implementation of modf (and round)
lenablind Sep 30, 2019
71b8832
Modf and round modified.
lenablind Sep 30, 2019
828cb50
Implemented test_round().
ClaudiaComito Oct 7, 2019
9a8d6f3
Implementation of round in dndArray
lenablind Oct 10, 2019
4ae4be5
Functions in alphabetical order
lenablind Oct 10, 2019
2e57691
Test function modf
lenablind Oct 10, 2019
b045297
Added modf to dndArray
lenablind Oct 10, 2019
3d23b9d
Added option of user-defined output buffer for ht.modf().
ClaudiaComito Oct 16, 2019
b99d531
Merge branch 'master' into features/125-modf
ClaudiaComito Oct 16, 2019
11aa1f1
modf with out
lenablind Oct 16, 2019
bbe33a6
Expanded test_modf for out
lenablind Oct 16, 2019
e123b3c
Adaptation to pre-commit
lenablind Oct 24, 2019
0ce6890
Merge branch 'master' into features/125-modf
lenablind Oct 24, 2019
f25a448
Merge branch 'master' into features/125-modf
coquelin77 Nov 8, 2019
5d088b9
Implementation of requested changes
lenablind Nov 11, 2019
db796cc
Code reformatted via black
lenablind Nov 14, 2019
dac92bd
Merge branch 'master' into features/125-modf
lenablind Nov 14, 2019
efcefbd
Tests with split tensors
lenablind Nov 20, 2019
46b001c
Tests correction
lenablind Nov 20, 2019
4c9e548
Merge branch 'master' into features/125-modf
lenablind Nov 20, 2019
f456d7b
Integration of ht.equal for tests
lenablind Nov 20, 2019
2921012
Merge branch 'master' into features/125-modf
lenablind Nov 20, 2019
e4ed000
Merge branch 'master' into features/125-modf
coquelin77 Nov 21, 2019
1ecdeba
Debugging attempts.
ClaudiaComito Nov 25, 2019
6a82e9c
Fixing debugging attempts.
ClaudiaComito Nov 25, 2019
342e661
test_modf(), test_round(): defining test tensors so that they are alw…
ClaudiaComito Nov 25, 2019
0f3d6e4
More debugging attempts.
ClaudiaComito Nov 25, 2019
7ad1fb3
Removed print/debugging statements
ClaudiaComito Nov 25, 2019
e5aa64e
Merge branch 'master' into features/125-modf
ClaudiaComito Nov 26, 2019
387d1f4
Debugging attempts.
ClaudiaComito Dec 3, 2019
0af3803
Debugging
ClaudiaComito Dec 3, 2019
9ef2614
Debugging. Removed test_modf() and test_round()
ClaudiaComito Dec 3, 2019
865b868
In assert_array_equal(), Allreduce running on self._comm, not on sel…
ClaudiaComito Dec 3, 2019
d0613dc
Debugging. Replacing failing array comparison with BasicTest.assert_a…
ClaudiaComito Dec 3, 2019
bdc5be2
rest_round(), replacing all assertTrue(ht.equal(...)) with BasicTest.…
ClaudiaComito Dec 3, 2019
30464c3
Debugging. test_round(), remiving distributed tests
ClaudiaComito Dec 3, 2019
c5810a6
Debugging. Adding back distributed test_round one bit at a time.
ClaudiaComito Dec 3, 2019
3ecd850
Small changes after pre-commit failed.
ClaudiaComito Dec 3, 2019
c383c78
Debugging. Adding back distributed tests for test_round(). Replaced h…
ClaudiaComito Dec 3, 2019
7f5e34b
Replaced ht.arange with ht.array for non-distribution case
lenablind Dec 4, 2019
567c2e4
Changed inheritance hierarchy of test_rounding to BasicTest
lenablind Dec 4, 2019
05b319c
Integration of assert_array_equal within test_round
lenablind Dec 4, 2019
58ac9db
Replacing ht.arange with ht.array(npArray)
ClaudiaComito Dec 6, 2019
e85e5c3
Merge branch 'master' into features/125-modf
coquelin77 Dec 6, 2019
9356e81
extending coverage
Dec 6, 2019
8bd1ae7
corrected indent in docs for modf
Dec 6, 2019
cfa1f74
corrected indent in docs for modf, minor formatting
Dec 6, 2019
2b52855
self replacing x/a in modf and round
Dec 6, 2019
53 changes: 53 additions & 0 deletions heat/core/dndarray.py
@@ -1926,6 +1926,32 @@ def __mod__(self, other):
"""
return arithmetics.mod(self, other)

def modf(a, out=None):
"""
Return the fractional and integral parts of an array, element-wise.
The fractional and integral parts are negative if the given number is negative.

Parameters
----------
a : ht.DNDarray
Input array
out : tuple(ht.DNDarray, ht.DNDarray), optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to.
If not provided or None, freshly-allocated arrays are returned.

Returns
-------
tuple(ht.DNDarray: fractionalParts, ht.DNDarray: integralParts)

fractionalParts : ht.DNDarray
Fractional part of a. This is a scalar if a is a scalar.

integralParts : ht.DNDarray
Integral part of a. This is a scalar if a is a scalar.
"""

return rounding.modf(a, out)

def __mul__(self, other):
"""
Element-wise multiplication (not matrix multiplication) with values from second operand (scalar or tensor)
@@ -2313,6 +2339,33 @@ def __rmod__(self, other):
"""
return arithmetics.mod(other, self)

def round(x, decimals=0, out=None, dtype=None):
"""
Calculate the rounded value element-wise.

Parameters
----------
x : ht.DNDarray
The values for which to compute the rounded value.
decimals : int, optional
Number of decimal places to round to (default: 0).
If decimals is negative, it specifies the number of positions to the left of the decimal point.
out : ht.DNDarray, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to.
If not provided or None, a freshly-allocated array is returned.
dtype : ht.type, optional
Determines the data type of the output array. The values are cast to this type with potential loss of
precision.

Returns
-------
rounded_values : ht.DNDarray
A tensor containing the rounded value of each element in x.
"""

return rounding.round(x, decimals, out, dtype)

def __rpow__(self, other):
"""
Element-wise exponential function of second operand (not-heat-typed) with values from first operand (tensor).
106 changes: 105 additions & 1 deletion heat/core/rounding.py
@@ -1,10 +1,11 @@
import torch
import heat as ht
Member comment: cyclic import

from . import operations
from . import dndarray
from . import types

__all__ = ["abs", "absolute", "ceil", "clip", "fabs", "floor", "trunc"]
__all__ = ["abs", "absolute", "ceil", "clip", "fabs", "floor", "modf", "round", "trunc"]


def abs(x, out=None, dtype=None):
@@ -178,6 +179,109 @@ def floor(x, out=None):
return operations.__local_op(torch.floor, x, out)


def modf(x, out=None):
"""
Return the fractional and integral parts of a tensor, element-wise.
The fractional and integral parts are negative if the given number is negative.

Parameters
----------
x : ht.DNDarray
Input tensor
out : tuple(ht.DNDarray, ht.DNDarray), optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to.
If not provided or None, a freshly-allocated tensor is returned.

Returns
-------
tuple(ht.DNDarray: fractionalParts, ht.DNDarray: integralParts)

fractionalParts : ht.DNDarray
Fractional part of x. This is a scalar if x is a scalar.

integralParts : ht.DNDarray
Integral part of x. This is a scalar if x is a scalar.

Examples
--------
>>> ht.modf(ht.arange(-2.0, 2.0, 0.4))
(tensor([ 0.0000, -0.6000, -0.2000, -0.8000, -0.4000, 0.0000, 0.4000, 0.8000, 0.2000, 0.6000]),
tensor([-2., -1., -1., -0., -0., 0., 0., 0., 1., 1.]))
"""

integralParts = ht.trunc(x)
Member comment: don't need ht.trunc, should just be trunc(x)

in the future please avoid using ht.function() as it causes a cyclic import. instead use the form: file.function()

fractionalParts = x - integralParts

if out is not None:
if not isinstance(out, tuple):
raise TypeError(
"expected out to be None or a tuple of ht.DNDarray, but was {}".format(type(out))
)
if len(out) != 2:
raise ValueError(
"expected out to be a tuple of length 2, but was of length {}".format(len(out))
)
if (not isinstance(out[0], ht.DNDarray)) or (not isinstance(out[1], ht.DNDarray)):
raise TypeError(
"expected out to be None or a tuple of ht.DNDarray, but was ({}, {})".format(
type(out[0]), type(out[1])
)
)
out[0]._DNDarray__array = fractionalParts._DNDarray__array
out[1]._DNDarray__array = integralParts._DNDarray__array
return out

return (fractionalParts, integralParts)
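The trunc-and-subtract decomposition used by modf can be sketched stand-alone with NumPy; `modf_sketch` below is a hypothetical illustration of the same logic, not part of the heat API:

```python
import numpy as np

def modf_sketch(x, out=None):
    """Split x into fractional and integral parts (hypothetical NumPy sketch)."""
    integral = np.trunc(x)       # integral part, keeps the sign of x
    fractional = x - integral    # remainder, also keeps the sign of x
    if out is not None:
        # write into caller-provided buffers, mirroring the out-tuple handling
        out[0][...] = fractional
        out[1][...] = integral
        return out
    return fractional, integral

frac, integ = modf_sketch(np.arange(-2.0, 2.0, 0.4))
```

As with the heat version, the two parts recombine to the input: `frac + integ` equals the original array elementwise.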


def round(x, decimals=0, out=None, dtype=None):
"""
Calculate the rounded value element-wise.

Parameters
----------
x : ht.DNDarray
The values for which to compute the rounded value.
decimals: int, optional
Number of decimal places to round to (default: 0).
If decimals is negative, it specifies the number of positions to the left of the decimal point.
out : ht.DNDarray, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to.
If not provided or None, a freshly-allocated array is returned.
dtype : ht.type, optional
Determines the data type of the output array. The values are cast to this type with potential loss of
precision.


Returns
-------
rounded_values : ht.DNDarray
A tensor containing the rounded value of each element in x.

Examples
--------
>>> ht.round(ht.arange(-2.0, 2.0, 0.4))
tensor([-2., -2., -1., -1., -0., 0., 0., 1., 1., 2.])

"""
if dtype is not None and not issubclass(dtype, types.generic):
raise TypeError("dtype must be a heat data type")

if decimals != 0:
# scale out of place so the caller's input tensor is not modified
x = x * (10 ** decimals)

rounded_values = operations.__local_op(torch.round, x, out)

if decimals != 0:
rounded_values /= 10 ** decimals

if dtype is not None:
rounded_values._DNDarray__array = rounded_values._DNDarray__array.type(dtype.torch_type())
rounded_values._DNDarray__dtype = dtype

return rounded_values
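The decimals handling above scales by a power of ten, rounds to the nearest integer, and scales back. A minimal scalar sketch of the same trick (`round_decimals` is a hypothetical helper using Python's built-in `round`, which, like `torch.round`, rounds halves to even):

```python
def round_decimals(value, decimals=0):
    # Shift the decimal point, round to the nearest integer, shift back.
    # Negative decimals round to positions left of the decimal point.
    factor = 10.0 ** decimals
    return round(value * factor) / factor

round_decimals(3.14159, 2)   # -> 3.14
round_decimals(1234.0, -2)   # -> 1200.0
```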


def trunc(x, out=None):
"""
Return the trunc of the input, element-wise.
64 changes: 64 additions & 0 deletions heat/core/tests/test_rounding.py
@@ -162,6 +162,70 @@ def test_floor(self):
with self.assertRaises(TypeError):
ht.floor(object())

def test_modf(self):
start, end, step = -5.0, 5.0, 1.4
comparison = np.modf(np.arange(start, end, step, np.float32))

# modf of float32
float32_tensor = ht.arange(start, end, step, dtype=ht.float32)
float32_modf = float32_tensor.modf()
self.assertIsInstance(float32_modf[0], ht.DNDarray)
self.assertIsInstance(float32_modf[1], ht.DNDarray)
self.assertEqual(float32_modf[0].dtype, ht.float32)
self.assertEqual(float32_modf[1].dtype, ht.float32)
self.assertTrue((float32_modf[0]._DNDarray__array == torch.tensor(comparison[0])).all())
self.assertTrue((float32_modf[1]._DNDarray__array == torch.tensor(comparison[1])).all())

# modf of float64
comparison = np.modf(np.arange(start, end, step, np.float64))

float64_tensor = ht.arange(start, end, step, dtype=ht.float64)
float64_modf = float64_tensor.modf()
self.assertIsInstance(float64_modf[0], ht.DNDarray)
self.assertIsInstance(float64_modf[1], ht.DNDarray)
self.assertEqual(float64_modf[0].dtype, ht.float64)
self.assertEqual(float64_modf[1].dtype, ht.float64)
self.assertTrue((float64_modf[0]._DNDarray__array == torch.tensor(comparison[0])).all())
Member comment: while technically correct, a neater way to do this might be to use ht.all(tensor == ht.array(comparison)) or torch.all as you use in the test_round function

self.assertTrue((float64_modf[1]._DNDarray__array == torch.tensor(comparison[1])).all())

# check exceptions
with self.assertRaises(TypeError):
ht.modf([0, 1, 2, 3])
with self.assertRaises(TypeError):
ht.modf(object())
with self.assertRaises(TypeError):
ht.modf(float32_tensor, 1)
with self.assertRaises(ValueError):
ht.modf(float32_tensor, (float32_tensor, float32_tensor, float64_tensor))
with self.assertRaises(TypeError):
ht.modf(float32_tensor, (float32_tensor, 2))
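Whole-array comparisons in these tests are best done with a single vectorized equality reduced with all(), as the review suggests; a generator expression passed to assertTrue is always truthy and never actually compares anything. A minimal NumPy sketch of the pattern (`arrays_equal` is a hypothetical helper):

```python
import numpy as np

def arrays_equal(result, expected, rtol=1e-6):
    # Compare whole arrays at once: elementwise closeness reduced to one bool.
    return np.allclose(np.asarray(result), np.asarray(expected), rtol=rtol)
```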

def test_round(self):
start, end, step = -5.0, 5.0, 1.4
comparison = torch.arange(start, end, step, dtype=torch.float64).round()

# round of float32
float32_tensor = ht.arange(start, end, step, dtype=ht.float32)
float32_round = float32_tensor.round()
self.assertIsInstance(float32_round, ht.DNDarray)
self.assertEqual(float32_round.dtype, ht.float32)
self.assertTrue((float32_round._DNDarray__array == comparison.float()).all())

# round of float64
float64_tensor = ht.arange(start, end, step, dtype=ht.float64)
float64_round = float64_tensor.round()
self.assertIsInstance(float64_round, ht.DNDarray)
self.assertEqual(float64_round.dtype, ht.float64)
self.assertTrue((float64_round._DNDarray__array == comparison).all())

# check exceptions
with self.assertRaises(TypeError):
ht.round([0, 1, 2, 3])
with self.assertRaises(TypeError):
ht.round(object())

def test_trunc(self):
base_array = np.random.randn(20)
