Merge pytorch-device branch into master #695
Merged
Commits on May 24, 2022
Remove use of `torch.set_default_tensor_type` (#674)

This PR removes use of `torch.set_default_tensor_type`. There are several reasons to move away from this function:

- Upstream will deprecate and remove it: pytorch/pytorch#53124.
- The mechanism cannot be used for devices other than CPU/CUDA, such as Metal Performance Shaders.
- It offers little flexibility in allocating Torch models on different devices.

This PR makes `PyTorchWrapper`/`PyTorchShim` flexible in the devices they can use. Both classes add a `device` argument to their constructors that takes a `torch.device` instance. The shim ensures that the model is on the given device. The wrapper ensures that input tensors are on the correct device, by calling `xp2torch` with the new `device` keyword argument.

Even though this approach offers more flexibility, we still want to default to the `cpu` device when `NumpyOps` is used and to `cuda:N` when `CupyOps` is used. To that end, this PR also adds a new function, `get_torch_default_device`, that returns the correct device for the currently active Ops. `PyTorchWrapper`/`PyTorchShim`/`xp2torch` fall back on this function when `None` is given as the device, mimicking the behavior from before this PR.

Further changes:

- Add some typing fixes.
- Remove a spurious cupy import.
- Use `torch.cuda.current_device()` to get the current PyTorch CUDA device.
- Do not use `torch_set_default_tensor_type` in `set_active_gpu`.
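The fallback behavior described above can be sketched as follows. This is a minimal sketch, not Thinc's actual implementation: the `Ops` classes are stubbed out and devices are modeled as plain strings rather than `torch.device` instances, so the sketch does not require PyTorch.

```python
# Hypothetical sketch of the Ops -> default-device dispatch described
# in the commit message. Names mirror the PR; bodies are stand-ins.

class NumpyOps:
    """Stub for Thinc's CPU ops."""

class CupyOps:
    """Stub for Thinc's CUDA ops; device_id is the selected CUDA device."""
    device_id = 0

def get_torch_default_device(ops):
    """Map the active Ops to a default device (string stand-in)."""
    if isinstance(ops, CupyOps):
        return f"cuda:{ops.device_id}"
    return "cpu"

def xp2torch(tensor, device=None, ops=None):
    """If no device is given, fall back on the Ops-derived default.

    Real code would call ``tensor.to(device)``; here we just return
    the (tensor, device) pair to show the fallback logic.
    """
    if device is None:
        device = get_torch_default_device(ops)
    return (tensor, device)
```

When `device=None`, placement is derived from the active Ops, mimicking the pre-PR behavior; an explicit `device` always wins.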
Commit: ab56938
Commits on May 27, 2022
Commit: e586695
Merge pull request #683 from danieldk/merge-master/features/pytorch-device-20220527

Merge `master` into `features/pytorch-device`
Commit: 73cdf1b
Commits on Jun 2, 2022
Commit: d3ef46a
Auto-format code with black (#682)
Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>
Commit: 05b6f4a
Commit: 65c6556
Commit: b8054fd
Commits on Jun 10, 2022
Add support for PyTorch Metal Performance Shaders (#685)

Nightly PyTorch versions add support for Metal Performance Shaders (MPS). Metal is a low-level graphics API for Apple platforms that also supports compute kernels (shaders). MPS is a framework of highly optimized compute and graphics kernels, including kernels for neural networks. MPS is supported both on Apple Silicon, such as the M1 family of SoCs, and on a range of AMD GPUs used in Macs.

Since devices are handled in Thinc through a specific `Ops` implementation (e.g. `CupyOps` == CUDA GPUs), this change introduces the `MPSOps` class. This class is a subclass of `NumpyOps`, or of `AppleOps` when available. `MPSOps` does not override any methods, but is used to signal to relevant code paths (e.g. `xp2torch`) that Torch tensors should be placed on the MPS device. The mapping in the previously introduced `get_torch_default_device` function is updated accordingly:

- `NumpyOps` -> `cpu`
- `CupyOps` -> `cuda:N`, where N is the selected CUDA device
- `MPSOps` -> `mps`

This ensures placement of Torch tensors on the `mps` device when `MPSOps` is active.

Finally, the following booleans have been added to or changed in `compat`:

- `has_torch_mps_gpu` (new): PyTorch has MPS support and an MPS-capable GPU is available.
- `has_torch_cuda_gpu` (new): PyTorch has CUDA support and a CUDA-capable GPU is available.
- `has_torch_gpu` (changed): set to `has_torch_cuda_gpu`. Changing it to `has_torch_cuda_gpu or has_torch_mps_gpu` would be potentially breaking, and since the latter is not needed for this PR, that decision is deferred.
- `has_gpu` (new): a GPU is available (CUDA or MPS); used in `util`.

Further changes:

- Add a `test_slow_gpu` explosion-bot command.
- Auto-format code with black (#682).
- Test the PyTorch wrapper with all xp ops.
- Azure: pin protobuf to fix Tensorflow.
- Extend typing_extensions to <4.2.0 (#689).
- Fix a type-checking error.
- Only back off to `NumpyOps` on import error, so that other issues while importing `thinc_apple_ops` are not hidden.
- Remove the unneeded `has_torch_mps` bool.

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Co-authored-by: shademe <shadeMe@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
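The capability flags described above can be sketched as follows. This is a hedged sketch, not Thinc's actual `compat` module: the `torch` import is guarded so the flags simply come out `False` when PyTorch is not installed, and the MPS backend is probed via `getattr` because it only exists in sufficiently new PyTorch builds.

```python
# Sketch of the compat booleans from the commit above (assumed layout,
# not the real module). Uses only real PyTorch APIs:
# torch.cuda.device_count() and torch.backends.mps.is_available().
try:
    import torch
    has_torch = True
except ImportError:
    torch = None
    has_torch = False

# CUDA support compiled in and at least one CUDA device present.
has_torch_cuda_gpu = has_torch and torch.cuda.device_count() > 0

# MPS backend present (newer PyTorch only) and an MPS-capable GPU available.
_mps = getattr(torch.backends, "mps", None) if has_torch else None
has_torch_mps_gpu = _mps is not None and _mps.is_available()

# Kept equal to the CUDA-only flag, per the decision in the commit above.
has_torch_gpu = has_torch_cuda_gpu

# Any GPU at all, CUDA or MPS.
has_gpu = has_torch_cuda_gpu or has_torch_mps_gpu
```

Keeping `has_torch_gpu` equal to `has_torch_cuda_gpu` avoids the potentially breaking semantic change, while the new `has_gpu` flag captures the broader "any GPU" question.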
Commit: 5beeaf2
Commit: 0c0a121