
Revert "Merge branch 'master' into release/1.1.x"
This reverts commit 99f80f0, reversing
changes made to 2343d00.
bhagemeier committed Sep 10, 2021
1 parent 99f80f0 commit 8702bde
Showing 11 changed files with 143 additions and 1,826 deletions.
3 changes: 0 additions & 3 deletions CHANGELOG.md
@@ -1,4 +1,3 @@

# v1.1.1
- [#864](https://github.com/helmholtz-analytics/heat/pull/864) Dependencies: constrain `torchvision` version range to match supported `pytorch` version range.

@@ -211,8 +210,6 @@ Example on 2 processes:
- [#664](https://github.com/helmholtz-analytics/heat/pull/664) New feature / enhancement: distributed `random.random_sample`, `random.random`, `random.sample`, `random.ranf`, `random.random_integer`
- [#666](https://github.com/helmholtz-analytics/heat/pull/666) New feature: distributed prepend/append for `diff()`.
- [#667](https://github.com/helmholtz-analytics/heat/pull/667) Enhancement `reshape`: rename axis parameter
- [#678](https://github.com/helmholtz-analytics/heat/pull/678) New feature: distributed `tile`
- [#670](https://github.com/helmholtz-analytics/heat/pull/670) New Feature: `bincount()`
- [#674](https://github.com/helmholtz-analytics/heat/pull/674) New feature: `repeat`
- [#670](https://github.com/helmholtz-analytics/heat/pull/670) New Feature: distributed `bincount()`
- [#672](https://github.com/helmholtz-analytics/heat/pull/672) Bug / Enhancement: Remove `MPIRequest.wait()`, rewrite calls with capital letters. lower case `wait()` now falls back to the `mpi4py` function
2 changes: 1 addition & 1 deletion README.md
@@ -63,7 +63,7 @@ Requirements
------------

Heat requires Python 3.7 or newer.
-Heat is based on [PyTorch](https://pytorch.org/). Specifically, we are exploiting
+Heat is based on [PyTorch](https://pytorch.org/). Specifially, we are exploiting
PyTorch's support for GPUs *and* MPI parallelism. For MPI support we utilize
[mpi4py](https://mpi4py.readthedocs.io). Both packages can be installed via pip
or automatically using the setup.py.
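The two installation routes mentioned above can be sketched as shell commands (assuming a working MPI implementation such as OpenMPI is already on the system; `torch` and `mpi4py` are the standard PyPI package names):

```shell
# install the two dependencies directly via pip
pip install torch mpi4py

# or, from a checkout of the repository, let setup.py pull them in automatically
pip install .
```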
7 changes: 2 additions & 5 deletions heat/core/_operations.py
@@ -419,10 +419,7 @@ def __reduce_op(
 else:
     output_shape = x.gshape
     for dim in axis:
-        if not (
-            partial.shape.numel() == 0 and partial_op.__name__ in ("local_max", "local_min")
-        ):  # no neutral element for max/min
-            partial = partial_op(partial, dim=dim, keepdim=True)
+        partial = partial_op(partial, dim=dim, keepdim=True)
         output_shape = output_shape[:dim] + (1,) + output_shape[dim + 1 :]
     if not keepdim and not len(partial.shape) == 1:
         gshape_losedim = tuple(x.gshape[dim] for dim in range(len(x.gshape)) if dim not in axis)
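The guard deleted in this hunk existed because, unlike `sum` (neutral element 0) or `prod` (neutral element 1), `max` and `min` have no neutral element: reducing an empty chunk cannot fall back to a meaningful default. A minimal pure-Python illustration of the same issue (not Heat's actual code, which operates on `torch` tensors across MPI ranks):

```python
# sum over an empty sequence is well defined: its neutral element is 0
print(sum([]))  # -> 0

# max/min have no neutral element, so an empty sequence raises instead
try:
    max([])
except ValueError:
    print("max over an empty sequence has no defined result")
```

This is why a distributed max/min reduction needs special handling for processes that hold a zero-size local chunk.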
@@ -442,7 +439,7 @@ def __reduce_op(
 balanced = True
 if x.comm.is_distributed():
     x.comm.Allreduce(MPI.IN_PLACE, partial, reduction_op)
-elif axis is not None and not keepdim:
+elif axis is not None:
     down_dims = len(tuple(dim for dim in axis if dim < x.split))
     split -= down_dims
     balanced = x.balanced
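The `elif` branch above recomputes the split dimension after a reduction removes axes: every reduced axis that sits below `x.split` shifts the split index one position to the left. A standalone sketch of that index arithmetic (the function name is hypothetical, not part of Heat's API):

```python
def adjusted_split(split, axis):
    """Shift a split index left by the number of reduced axes below it.

    When axes are removed from a shape (keepdim=False), each removed axis
    below the split dimension moves the split one position down.
    """
    down_dims = len(tuple(dim for dim in axis if dim < split))
    return split - down_dims

# reducing axes 0 and 2 of a tensor split along axis 3:
# both reduced axes lie below 3, so the split moves from 3 to 1
print(adjusted_split(3, (0, 2)))  # -> 1
```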
315 changes: 99 additions & 216 deletions heat/core/dndarray.py

Large diffs are not rendered by default.

