OpenXLA pin update #6530
Conversation
Thanks @yeounoh!
We can drop it, based on our verification.
Not sure if we should check ResNet-18 speed? I remember it was verified in a previous pin update; here is the script from that update.
Let's do a quick run of ResNet to make sure we don't have any visible regression.
With default batch size 128
With batch size 256
No visible regression on ResNet.
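The "no visible regression" judgment above can be made mechanical. A minimal sketch of such a check — the function name, tolerance, and throughput numbers are illustrative, not taken from this PR:

```python
def is_regression(baseline_rate, candidate_rate, tolerance=0.05):
    """Flag a visible regression: candidate throughput (e.g. ResNet
    examples/sec) more than `tolerance` below the baseline."""
    return candidate_rate < baseline_rate * (1.0 - tolerance)

# Made-up examples/sec for the old and new pin at each batch size tested.
old_pin = {128: 540.0, 256: 610.0}
new_pin = {128: 545.0, 256: 605.0}
for bs in old_pin:
    assert not is_regression(old_pin[bs], new_pin[bs]), f"regression at batch size {bs}"
```

A fixed tolerance keeps run-to-run noise from triggering false alarms while still catching a real slowdown.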
Force-pushed from b65f2de to f059eb9
Bump the pinned XLA version to fix GPU builds with CUDA11. Note that there are only 13 commits between the new pin and the previous one:

```
$ git log --oneline b1662437^..419a3d73
419a3d736 [xla] Do not include absl headers into xla/types.h
1a4ec9190 [xla:gpu] Add initialization guard to make sure we have exactly one NCCL clique initialization in progress
1365d31a8 [xla] Fix test compilation for environments without cuda
86e231a58 [xla:gpu] Add support for legacy API custom calls in AddressComputationFusionRewriter
82e775381 Fix broken build for convert_memory_placement_to_internal_annotations_test
db973b7fb Integrate LLVM at llvm/llvm-project@bc66e0cf9feb
09c7c0818 Fix gcd simplification of div.
04af47afd PR #9400: Move Gt(Max) optimization after all other HandleCompare optimizations
06c8c19d8 Fix pad indexing map with interior padding.
a27177d76 [XLA:GPU] Implement GpuPriorityFusion::Run instead of calling InstructionFusion::Run.
8a5491aa8 Don't require the argument of ReducePrecision to be a tensor.
50b3b8c40 [XLA] Add a way for an HLO runner to run instructions in isolation.
e020e2e9b [XLA:GPU] Add coalescing heuristic.
b16624371 Add support for unpinned_host for host memory offloading. XLA does not currently differentiate between pinned and unpinned.
```

Fixes #6530.
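A note on reading that log: the range `OLD^..NEW` includes the old pin commit itself, which is why 14 lines are listed for "13 commits between" the pins. A throwaway repo (standing in for an actual openxla/xla checkout) demonstrates the `^..` semantics:

```shell
# Build a toy history: a base commit, an "old pin" commit,
# then three commits on top standing in for the new work.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=x@y -c user.name=x commit -q --allow-empty -m base
git -c user.email=x@y -c user.name=x commit -q --allow-empty -m "old pin"
OLD=$(git rev-parse HEAD)
git -c user.email=x@y -c user.name=x commit -q --allow-empty -m one
git -c user.email=x@y -c user.name=x commit -q --allow-empty -m two
git -c user.email=x@y -c user.name=x commit -q --allow-empty -m "new pin"
NEW=$(git rev-parse HEAD)

# OLD^..NEW lists the old pin plus everything after it (4 lines here);
# OLD..NEW lists only the commits strictly between the pins (3 lines).
git log --oneline "$OLD^..$NEW" | wc -l
git log --oneline "$OLD..$NEW" | wc -l
```

So a pin-update PR description produced with `OLD^..NEW` will always show one more line than the number of new commits it pulls in.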
This moves the pin to

```
strip_prefix = "xla-b166243711f71b0a55daa1eda36b1dc745886784",
```

and the libtpu build to

```
_libtpu_version = '0.1.dev20240213'
```
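For context, these two values live in different files of the repo: the XLA pin in the Bazel `WORKSPACE` and the libtpu version in `setup.py`. A sketch of the surrounding lines — the archive URL pattern and field order here are assumptions, not copied from this PR's diff:

```python
# WORKSPACE (Bazel/Starlark) -- the OpenXLA pin:
http_archive(
    name = "xla",
    strip_prefix = "xla-b166243711f71b0a55daa1eda36b1dc745886784",
    urls = [
        "https://github.com/openxla/xla/archive/b166243711f71b0a55daa1eda36b1dc745886784.tar.gz",
    ],
)

# setup.py -- the matching libtpu nightly:
_libtpu_version = '0.1.dev20240213'
```

The two must be bumped together so the TPU runtime matches the compiler commit being pinned.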
Locally tested, and