Bump the pinned XLA version to fix GPU builds with CUDA 11. Note that there are only 13 commits between the new pin and the previous one:

```
$ git log --oneline b1662437^..419a3d73
419a3d736 [xla] Do not include absl headers into xla/types.h
1a4ec9190 [xla:gpu] Add initialization guard to make sure we have exactly one NCCL clique initialization in progress
1365d31a8 [xla] Fix test compilation for environments without cuda
86e231a58 [xla:gpu] Add support for legacy API custom calls in AddressComputationFusionRewriter
82e775381 Fix broken build for convert_memory_placement_to_internal_annotations_test
db973b7fb Integrate LLVM at llvm/llvm-project@bc66e0cf9feb
09c7c0818 Fix gcd simplification of div.
04af47afd PR #9400: Move Gt(Max) optimization after all other HandleCompare optimizations
06c8c19d8 Fix pad indexing map with interior padding.
a27177d76 [XLA:GPU] Implement GpuPriorityFusion::Run instead of calling InstructionFusion::Run.
8a5491aa8 Don't require the argument of ReducePrecision to be a tensor.
50b3b8c40 [XLA] Add a way for an HLO runner to run instructions in isolation.
e020e2e9b [XLA:GPU] Add coalescing heuristic.
b16624371 Add support for unpinned_host for host memory offloading. XLA does not currently differentiate between pinned and unpinned.
```

Fixes #6530.
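A note on the range syntax: `b1662437^..419a3d73` starts at the *parent* of the old pin, so the log includes the old pin commit itself and shows one line more than the count of new commits. A minimal sketch of this behavior on a throwaway repository (the repository, commit messages, and the use of `HEAD~3` as a stand-in for the old pin are all hypothetical; only the `OLD^..NEW` semantics are the point):

```shell
# Build a tiny disposable repo: root, old_pin, then three newer commits.
repo=$(mktemp -d)
cd "$repo"
git init -q
for msg in root old_pin change1 change2 change3; do
  git -c user.name=t -c user.email=t@example.com \
      commit -q --allow-empty -m "$msg"
done

# HEAD~3 plays the role of the old pin (b1662437 in the message above).
# OLD^..NEW includes OLD itself, so it prints one more line than OLD..NEW.
with_pin=$(git log --oneline "HEAD~3^..HEAD" | wc -l)  # old pin + 3 new commits
between=$(git log --oneline "HEAD~3..HEAD" | wc -l)    # commits strictly after the pin
echo "with_pin=$with_pin between=$between"
```

This is why the fenced log shows 14 lines while the message counts 13 commits between the pins: the first listed hash after the range start is the previous pin itself.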