Cannot compile PyTorch/XLA master. #6564
After investigating a bit, I see that the last XLA pin update (#6530) is the culprit. Replacing the updated XLA pin b166243711f71b0a55daa1eda36b1dc745886784 with the former pin c08cfb0377e4e33a21bde65950f986a21c8a8199 makes the error go away.
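For anyone reproducing the workaround, here is a minimal sketch. Where the pin lives is my assumption (shown here as `setup.py`/`WORKSPACE`), so adjust the paths to wherever your checkout records the XLA commit hash:

```
# Hypothetical workaround: revert the XLA pin to the pre-#6530 commit and rebuild.
# The files holding the pin are an assumption -- adjust for your checkout.
NEW_PIN=b166243711f71b0a55daa1eda36b1dc745886784   # pin introduced by #6530
OLD_PIN=c08cfb0377e4e33a21bde65950f986a21c8a8199   # former pin

cd pytorch/xla
grep -rl "$NEW_PIN" setup.py WORKSPACE 2>/dev/null \
  | xargs sed -i "s/$NEW_PIN/$OLD_PIN/"   # swap the hash wherever it appears
python setup.py develop                    # rebuild PyTorch/XLA
```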
Bump the pinned XLA version to fix GPU builds with CUDA 11. Note that there are only 13 commits between the new pin and the previous one:

```
$ git log --oneline b1662437^..419a3d73
419a3d736 [xla] Do not include absl headers into xla/types.h
1a4ec9190 [xla:gpu] Add initialization guard to make sure we have exactly one NCCL clique initialization in progress
1365d31a8 [xla] Fix test compilation for environments without cuda
86e231a58 [xla:gpu] Add support for legacy API custom calls in AddressComputationFusionRewriter
82e775381 Fix broken build for convert_memory_placement_to_internal_annotations_test
db973b7fb Integrate LLVM at llvm/llvm-project@bc66e0cf9feb
09c7c0818 Fix gcd simplification of div.
04af47afd PR #9400: Move Gt(Max) optimization after all other HandleCompare optimizations
06c8c19d8 Fix pad indexing map with interior padding.
a27177d76 [XLA:GPU] Implement GpuPriorityFusion::Run instead of calling InstructionFusion::Run.
8a5491aa8 Don't require the argument of ReducePrecision to be a tensor.
50b3b8c40 [XLA] Add a way for an HLO runner to run instructions in isolation.
e020e2e9b [XLA:GPU] Add coalescing heuristic.
b16624371 Add support for unpinned_host for host memory offloading. XLA does not currently differentiate between pinned and unpinned.
```

Fixes #6564.
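As an aside on the range syntax: the `^` suffix makes `git log` include the old pin itself, which is why the log above prints fourteen lines for thirteen new commits. A quick way to double-check the count, run inside a clone of openxla/xla:

```
# Inside a clone of https://github.com/openxla/xla
git log --oneline b1662437^..419a3d73    # 14 lines: 13 new commits + the old pin
git rev-list --count b1662437..419a3d73  # prints 13: commits strictly after the old pin
```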
Hi @ysiraichi, does this mean we have to revert the latest pin update? Is it resolved at HEAD? We are moving to HEAD soon.
🐛 Bug
I'm trying to compile the PyTorch/XLA master branch (see the command below), but I'm getting the following error:
Environment
cc @miladm @lezcano