
Fix distributed.Client.compute applied to DataArray #3173

Merged: 2 commits from crusaderky:distributed_this_array into pydata:master on Aug 1, 2019

Conversation

@crusaderky (Contributor) commented Aug 1, 2019

@crusaderky (Contributor, Author) commented Aug 1, 2019

Ready for review and merge. The CI failures are caused by #3174.

        return False

    def __hash__(self) -> int:
        return hash((ReprObject, self._value))
Collaborator commented:
I don't think this will be stable across processes - I just tried this in two different processes:

In [5]: hash(ReprObject)
Out[5]: 8774331846827

(new process)

In [2]: hash(ReprObject)
Out[2]: 8795055479397

Does that matter here? If so, could we change it to hash(('ReprObject', self._value))?

Collaborator commented:

Ah, the hash is never stable across processes; I just tried the same for 'hello'.

Does dask rely on the hash, on equality, or on something else?
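A minimal way to see this behaviour (a sketch, not from the thread): str hashes have been salted per process since Python 3.3, so the same literal hashes differently in two fresh interpreters.

import subprocess
import sys

# Spawn two fresh interpreters; the printed hashes will almost surely
# differ because str hashing is salted per process (PYTHONHASHSEED).
cmd = [sys.executable, "-c", "print(hash('hello'))"]
print(subprocess.check_output(cmd).decode().strip())
print(subprocess.check_output(cmd).decode().strip())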

Collaborator commented:

I think it's here: --> 708 variable = ds._variables.pop(_THIS_ARRAY) from #3171.

So it relies on both, then - and this looks good: the hash doesn't need to be consistent across processes.
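To make that concrete, here is a minimal illustration with a hypothetical Key class (not xarray code): a dict pop like ds._variables.pop(_THIS_ARRAY) only needs __hash__ and __eq__ to agree within the current process.

import pickle

class Key:
    def __init__(self, value):
        self._value = value

    def __hash__(self):
        return hash((type(self), self._value))

    def __eq__(self, other):
        return type(other) is type(self) and self._value == other._value

d = {Key("x"): 1}
# A distinct but equal instance - e.g. one that survived a pickle round
# trip - still finds the entry, because hash and eq are value-based.
clone = pickle.loads(pickle.dumps(Key("x")))
assert d.pop(clone) == 1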

@crusaderky (Contributor, Author) commented:

xarray (not dask, actually) relies on the fact that a deep copy of a key object that has gone through a round trip over the network produces the same hash as the original. The fact that the hash changes every time you restart the interpreter (by design; it's an anti-DoS measure) is irrelevant, because the __hash__ method is always invoked locally. It would, on the other hand, be a grave mistake to cache the hash and then allow the cache to be serialised.

@crusaderky (Contributor, Author) commented Aug 1, 2019:

To elaborate:

BAD - the hash defaults to the object's id, which will break as soon as the object is shallow-copied:

class C:
    pass

GOOD:

class C:
    def __hash__(self):
        return 123  # skip: actual calculation

BAD - it works as long as you do everything locally, but it will break as soon as you pickle the object and unpickle it in a different interpreter. Worth noting that a typical unit test, c2 = pickle.loads(pickle.dumps(c)), will NOT spot the issue, as the pickling and unpickling happen in the same interpreter:

class C:
    _hash_cache = None

    def __hash__(self):
        if self._hash_cache is None:
            self._hash_cache = 123  # skip: actual, CPU-intensive, calculation
        return self._hash_cache

GOOD AGAIN:

class C:
    _hash_cache = None

    def __hash__(self):
        if self._hash_cache is None:
            self._hash_cache = 123  # skip: actual, CPU-intensive, calculation
        return self._hash_cache

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("_hash_cache", None)
        return state
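A unit test that would actually catch the cached-hash variant above has to move the cached value across interpreters. A minimal sketch, assuming PYTHONHASHSEED is not pinned in the environment; hash(("C", "value")) stands in for the expensive calculation:

import pickle
import subprocess
import sys

# A child process computes the value a __hash__ cache would store...
child = (
    "import pickle, sys; "
    "sys.stdout.buffer.write(pickle.dumps(hash(('C', 'value'))))"
)
cached = pickle.loads(subprocess.check_output([sys.executable, "-c", child]))

# ...and in this interpreter the same calculation gives a different
# answer, so an object arriving with `cached` baked in would never be
# found in a dict keyed on a freshly hashed equivalent.
assert cached != hash(("C", "value"))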

Collaborator commented:

Thanks for the explanation! Out of interest, why is this the case:

> BAD - it works as long as you do everything locally, but it will break as soon as you pickle the object and unpickle it in a different interpreter.

...assuming...

> The fact that the hash changes every time you restart the interpreter is irrelevant because the __hash__ method is always invoked locally.

Member commented:

I'm a little surprised that these default values end up in pickles, but I guess it can happen.

Collaborator commented:

I'm doubtful the default values would - an attribute on the class doesn't appear in the instance dict (and hence in the pickle) until it's assigned to.
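A quick check of that point - a sketch reusing the cached-hash class from above. The class-level default never enters the pickle; only the instance-level value assigned inside __hash__ does.

import pickle

class C:
    _hash_cache = None  # class attribute: lives on C, not on instances

    def __hash__(self):
        if self._hash_cache is None:
            self._hash_cache = 123  # instance attribute from here on
        return self._hash_cache

c = C()
print(c.__dict__)   # {} - the default is not instance state
hash(c)
print(c.__dict__)   # {'_hash_cache': 123} - now it would be pickled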

@max-sixty merged commit 7b76f16 into pydata:master on Aug 1, 2019.
@crusaderky deleted the distributed_this_array branch on Aug 1, 2019 at 22:48.
dcherian added a commit to yohai/xarray that referenced this pull request Aug 3, 2019
* master: (68 commits)
  enable sphinx.ext.napoleon (pydata#3180)
  remove type annotations from autodoc method signatures (pydata#3179)
  Fix regression: IndexVariable.copy(deep=True) casts dtype=U to object (pydata#3095)
  Fix distributed.Client.compute applied to DataArray (pydata#3173)
  More annotations in Dataset (pydata#3112)
  Hotfix for case of combining identical non-monotonic coords (pydata#3151)
  changed url for rasterio network test (pydata#3162)
  to_zarr(append_dim='dim0') doesn't need mode='a' (pydata#3123)
  BUG: fix+test groupby on empty DataArray raises StopIteration (pydata#3156)
  Temporarily remove pynio from py36 CI build (pydata#3157)
  missing 'about' field (pydata#3146)
  Fix h5py version printing (pydata#3145)
  Remove the matplotlib=3.0 constraint from py36.yml (pydata#3143)
  disable codecov comments (pydata#3140)
  Merge broadcast_like docstrings, analyze implementation problem (pydata#3130)
  Update whats-new for pydata#3125 and pydata#2334 (pydata#3135)
  Fix tests on big-endian systems (pydata#3125)
  XFAIL tests failing on ARM (pydata#2334)
  Add broadcast_like. (pydata#3086)
  Better docs and errors about expand_dims() view (pydata#3114)
  ...
dcherian added a commit to dcherian/xarray that referenced this pull request Aug 6, 2019
* master:
  enable sphinx.ext.napoleon (pydata#3180)
  remove type annotations from autodoc method signatures (pydata#3179)
  Fix regression: IndexVariable.copy(deep=True) casts dtype=U to object (pydata#3095)
  Fix distributed.Client.compute applied to DataArray (pydata#3173)
Successfully merging this pull request may close these issues.

distributed.Client.compute fails on DataArray
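For reference, a minimal reproducer of the linked issue might look like the sketch below (illustrative: the dims name and the sync flag are assumptions, and before this fix the call failed).

import xarray as xr
from distributed import Client

client = Client()  # local cluster
da = xr.DataArray([1, 2, 3], dims="x").chunk()
# distributed.Client.compute on a chunked DataArray - the operation
# this PR fixes.
print(client.compute(da, sync=True))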