Issues: openxla/xla
Unexpected slow-down with @jit on simple functions that only use element-wise operations and jnp.roll() on CPUs (#18478, opened Oct 18, 2024 by pmocz)
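As a hedged illustration of the kind of function that issue title describes (not the reporter's actual reproducer), a jit-compiled element-wise stencil built from jnp.roll() might look like this; the function name and stencil are hypothetical:

```python
import jax
import jax.numpy as jnp

# Hypothetical example of the pattern named in the issue title:
# an element-wise computation combined with jnp.roll, under @jit.
@jax.jit
def centered_diff(f):
    # First-order centered difference using circular shifts.
    return (jnp.roll(f, -1) - jnp.roll(f, 1)) / 2.0

x = jnp.arange(8.0)
print(centered_diff(x))
```

On CPU backends, whether such a roll-plus-arithmetic kernel fuses well after jit compilation is exactly the kind of question the issue raises.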
Unexpected speedup from wrapping function call in trivial jax.lax.cond statement (#18440, opened Oct 17, 2024 by cgiovanetti)
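A "trivial jax.lax.cond" wrap, as the title phrases it, presumably means a conditional whose predicate is constant and whose branches are identical. A hedged sketch of that pattern (the function body here is invented for illustration):

```python
import jax
import jax.numpy as jnp

def body(x):
    # Arbitrary placeholder computation.
    return jnp.sin(x) * jnp.cos(x)

@jax.jit
def direct(x):
    return body(x)

@jax.jit
def wrapped(x):
    # Trivial cond: always-true predicate, same function on both branches.
    return jax.lax.cond(True, body, body, x)

print(direct(1.0), wrapped(1.0))
```

Both functions are mathematically identical; the issue reports that the cond-wrapped variant unexpectedly runs faster.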
Deserialization of executables fails on non-zero ranks when deserializing single-device executable (#18286, opened Oct 14, 2024 by jaro-sevcik)
PR#18052 caused runtime crashes for all the MaxText training with multi-gpus [labels: bug, NVIDIA-GPU] (#18214, opened Oct 11, 2024 by gpupuck)
error: call of overloaded 'TileAssignment(&lt;brace-enclosed initializer list&gt;)' is ambiguous with gcc 10 (#18140, opened Oct 10, 2024 by elistevens)
Inadequate memory consumption when using HSDP without gradient accumulation (#18090, opened Oct 9, 2024 by qGentry)
Compilation fails on Mac M1 (Sonoma 14.7): "error: no matching function for call to 'min'" (#17820, opened Oct 1, 2024 by domkirke)
Memory leak in //third_party/tensorflow/compiler/xla:debug_options_parsers_test [labels: bug, CPU] (#17808, opened Oct 1, 2024 by penpornk)
AVX512 quantization (cast from float to uint8) returns wrong results (#17800, opened Oct 1, 2024 by Flamefire)
[XLA:CPU] Support limiting LLVM codegen in Aarch64 and other new x86 instructions [labels: CPU, enhancement] (#17758, opened Sep 30, 2024 by penpornk)
ElementalIrEmitterExecutionTest.ConvertFloatsToF8E4FN failed -0.0586 vs -0.0625 (#17324, opened Sep 18, 2024 by apivovarov)
ElementalIrEmitterExecutionTest.IotaF8E4M3FN - Invalid LLVM IR (#17323, opened Sep 18, 2024 by apivovarov)
Transposing to different layout permutations results in different numerics (#17276, opened Sep 17, 2024 by elfiegg)
Eagerly create common nccl communicator(s) during init [labels: enhancement, NVIDIA-GPU] (#17108, opened Sep 12, 2024 by skye)
XLA flags: No speed ups on GPUs and segmentation fault (#17103, opened Sep 12, 2024 by AakashKumarNain)