OpenXLA Support #908

Open

vyeevani opened this issue Jan 12, 2024 · 3 comments

Comments

@vyeevani

There are some proof-of-concept XLA bindings for Rust: https://github.com/LaurentMazare/xla-rs. It would be really cool to integrate those into dfdx to allow for TPUs and other accelerators in addition to CUDA.
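For a flavor of the API, this is roughly the example from the xla-rs README (quoted from memory, so treat the exact names and signatures as assumptions): build the graph, compile once, then execute on a PjRt client.

```rust
// Roughly the xla-rs README example (from memory; names/signatures are
// assumptions, not verified against the current crate).
fn main() -> Result<(), xla::Error> {
    // A CPU client; TPU/GPU clients are constructed the same way.
    let client = xla::PjRtClient::cpu()?;

    // Build the computation graph ahead of time.
    let builder = xla::XlaBuilder::new("example");
    let cst42 = builder.constant_r0(42f32)?;
    let cst43 = builder.constant_r1c(43f32, 2)?;
    let sum = (cst42 + cst43)?; // broadcasted add, still just graph building
    let computation = sum.build()?;

    // Compile once, execute on the device, fetch the result to the host.
    let exe = client.compile(&computation)?;
    let buffers = exe.execute::<xla::Literal>(&[])?;
    let result = buffers[0][0].to_literal_sync()?;
    println!("{:?}", result.to_vec::<f32>()?); // assumed accessor
    Ok(())
}
```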

@emchristiansen

Agreed.

I'd actually go a step further and say you could dramatically simplify your architecture, while improving runtime speed, by using XLA as the only backend.
(You probably know that XLA was invented for exactly this purpose, to unite the backends of TensorFlow.)

I've used XLA (via JAX) quite a bit, and in my experience the code it generates is at least as fast as the code from other frameworks.
Of course this makes sense, because XLA typically has static access to the full graph including tensor shapes, so it has much more information at its disposal than e.g. PyTorch in eager mode.
For example, it can statically allocate tensors and run the full program in a fixed amount of memory.
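As a toy illustration of that point (this is not XLA's actual algorithm, just the flavor of planning that static shapes enable): with every tensor shape known ahead of time, you can compute a program's exact peak memory before running anything, instead of allocating dynamically the way an eager framework must.

```rust
// Toy static memory planner: every op's output shape and last reader are
// known at "compile time", so peak residency is a pure function of the graph.
struct Op {
    name: &'static str,
    out_elems: usize, // number of f32 elements this op produces
    last_use: usize,  // index of the last op that reads this output
}

fn peak_bytes(ops: &[Op]) -> usize {
    let bytes = |o: &Op| o.out_elems * std::mem::size_of::<f32>();
    let (mut live, mut peak) = (0usize, 0usize);
    for (i, op) in ops.iter().enumerate() {
        live += bytes(op);
        peak = peak.max(live);
        // Free every buffer whose last reader has now run.
        for o in ops[..=i].iter().filter(|o| o.last_use == i) {
            live -= bytes(o);
        }
        println!("after {:>4}: {:>5} bytes live", op.name, live);
    }
    peak
}

fn main() {
    // y = relu(x); z = y + x, with all shapes statically known as f32[1024].
    let ops = [
        Op { name: "x",    out_elems: 1024, last_use: 2 },
        Op { name: "relu", out_elems: 1024, last_use: 2 },
        Op { name: "add",  out_elems: 1024, last_use: 2 },
    ];
    println!("peak residency: {} bytes", peak_bytes(&ops));
}
```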

There is one blocking issue with this approach, though: XLA compilation is stupidly slow, and the framework doesn't provide good options for persisting compiled graphs.
That said, based on chats I've had with a JAX dev, there's a good chance these problems are solvable (they just haven't been solved because they haven't mattered much to Google).
In any case, this question could probably be resolved via a few chats with the XLA folks.
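In the meantime, even without upstream fixes, an in-process cache keyed on some canonical dump of the graph would at least avoid recompiling identical graphs within a run (the key format below is made up); durable on-disk persistence of the compiled artifact is the part that's actually missing.

```rust
// Sketch of an in-process compile cache. The String key stands in for
// whatever canonical graph dump the real bindings expose (e.g. HLO text).
use std::collections::HashMap;

struct CompileCache<E> {
    cache: HashMap<String, E>,
}

impl<E: Clone> CompileCache<E> {
    fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    /// Return the cached executable for `graph_key`, compiling on a miss.
    fn get_or_compile(&mut self, graph_key: &str, compile: impl FnOnce() -> E) -> E {
        if let Some(exe) = self.cache.get(graph_key) {
            return exe.clone();
        }
        let exe = compile();
        self.cache.insert(graph_key.to_string(), exe.clone());
        exe
    }
}

fn main() {
    let mut cache = CompileCache::new();
    let slow_compile = || {
        println!("compiling (slow)...");
        "compiled executable".to_string() // stand-in for a real executable
    };
    // The first call pays the compilation cost...
    let _exe = cache.get_or_compile("add(f32[1024],f32[1024])", slow_compile);
    // ...an identical graph in the same process is then a cache hit.
    let _exe = cache.get_or_compile("add(f32[1024],f32[1024])", slow_compile);
}
```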

What do you think?

@gmmyung commented Feb 2, 2024

What are your thoughts on using IREE? The IREE runtime supports a diverse range of backends such as ROCm, CUDA, WebGPU, and Metal. Additionally, it would be beneficial if models could be AOT-compiled at Rust compile time, though I'm uncertain whether the current architecture of dfdx supports this.
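For the AOT idea, a build.rs along these lines is what I have in mind. iree-compile is IREE's real CLI, but take the exact flag and the model path below as from-memory/hypothetical:

```rust
// build.rs: AOT-compile a model to IREE bytecode at Rust build time.
use std::process::Command;

fn main() {
    let out_dir = std::env::var("OUT_DIR").unwrap();
    let status = Command::new("iree-compile")
        .arg("--iree-hal-target-backends=llvm-cpu") // or cuda, vulkan-spirv, ...
        .arg("model.mlir") // hypothetical: the model exported as MLIR
        .arg("-o")
        .arg(format!("{out_dir}/model.vmfb"))
        .status()
        .expect("failed to run iree-compile; is IREE on PATH?");
    assert!(status.success(), "iree-compile reported an error");
    println!("cargo:rerun-if-changed=model.mlir");
}
```

The produced bytecode could then be embedded into the binary with `include_bytes!(concat!(env!("OUT_DIR"), "/model.vmfb"))` and loaded through the IREE runtime bindings at startup.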

@emchristiansen

I've never used IREE but that also seems like a good option.
AOT compilation would be great, too.

Perhaps the critique should be: "don't manually support various backends; instead, choose an IR and target that."
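To make that concrete, the difference is roughly this shape (names are illustrative, not dfdx's actual API): the framework only builds IR, and each *compiler*, not each device, gets one backend implementation.

```rust
// The framework's only job: build IR with known ops and shapes.
enum Ir {
    Constant(f32),
    Add(Box<Ir>, Box<Ir>),
    Mul(Box<Ir>, Box<Ir>),
}

// One trait, implemented once per compiler (XLA, IREE, ...), which then
// owns device support, instead of one hand-written backend per device.
trait IrBackend {
    type Executable;
    fn compile(&self, program: &Ir) -> Self::Executable;
}

// A trivial interpreter standing in for an XLA/IREE-style compiler.
struct Interpreter;

impl IrBackend for Interpreter {
    type Executable = f32; // "compilation" here just evaluates the graph
    fn compile(&self, program: &Ir) -> f32 {
        match program {
            Ir::Constant(v) => *v,
            Ir::Add(a, b) => self.compile(a) + self.compile(b),
            Ir::Mul(a, b) => self.compile(a) * self.compile(b),
        }
    }
}

fn main() {
    // (1.0 + 2.0) * 3.0, built as IR and handed to a backend.
    let program = Ir::Mul(
        Box::new(Ir::Add(Box::new(Ir::Constant(1.0)), Box::new(Ir::Constant(2.0)))),
        Box::new(Ir::Constant(3.0)),
    );
    println!("{}", Interpreter.compile(&program)); // 9
}
```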
