Avoid non-blocking GPU->CPU copies. (#11288)
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
5 people authored Jan 3, 2022
1 parent 95c7e5f commit cf32127
Showing 2 changed files with 10 additions and 1 deletion.
3 changes: 3 additions & 0 deletions CHANGELOG.md
```diff
@@ -385,6 +385,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed data fetcher selection ([#11294](https://github.com/PyTorchLightning/pytorch-lightning/pull/11294))
 
 
+- Fixed a race condition that could result in incorrect (zero) values being observed in prediction writer callbacks ([#11288](https://github.com/PyTorchLightning/pytorch-lightning/pull/11288))
+
+
 ## [1.5.7] - 2021-12-21
 
 ### Fixed
```
8 changes: 7 additions & 1 deletion pytorch_lightning/utilities/apply_func.py
```diff
@@ -35,6 +35,9 @@
 Batch = type(None)
 
 
+_CPU_DEVICES = ("cpu", torch.device("cpu"))
+
+
 def to_dtype_tensor(
     value: Union[int, float, List[Union[int, float]]], dtype: torch.dtype, device: Union[str, torch.device]
 ) -> torch.Tensor:
@@ -274,7 +277,10 @@ def batch_to(data: Any) -> Any:
                 setattr(device_data, field, device_field)
             return device_data
 
-        kwargs = dict(non_blocking=True) if isinstance(data, torch.Tensor) else {}
+        kwargs = {}
+        # Don't issue non-blocking transfers to CPU
+        if isinstance(data, torch.Tensor) and device not in _CPU_DEVICES:
+            kwargs["non_blocking"] = True
         data_output = data.to(device, **kwargs)
         if data_output is not None:
             return data_output
```
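The guard added in `apply_func.py` can be sketched in isolation. This is a minimal, torch-free illustration of the decision the patch makes (the helper name `transfer_kwargs` is hypothetical; the real code checks the target against both the string `"cpu"` and `torch.device("cpu")`, and passes the result to `Tensor.to`):

```python
# Hypothetical sketch of the patched kwargs selection in batch_to().
# Rationale: non_blocking=True is only safe for host-to-device copies from
# pinned memory; a non-blocking device-to-host (GPU -> CPU) copy can return
# before the data has actually landed, so a reader on the CPU side may
# observe stale or zeroed values -- the race fixed by this commit.
_CPU_DEVICES = ("cpu",)  # the real code also includes torch.device("cpu")


def transfer_kwargs(device: str, is_tensor: bool = True) -> dict:
    """Return the extra kwargs to pass to `data.to(device, **kwargs)`."""
    kwargs = {}
    # Don't issue non-blocking transfers to CPU
    if is_tensor and device not in _CPU_DEVICES:
        kwargs["non_blocking"] = True
    return kwargs
```

For example, moving a tensor to `"cuda:0"` still uses the fast non-blocking path, while moving it back to `"cpu"` falls through to a synchronous copy that is guaranteed complete when `to()` returns.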
