[DISCUSS] Tensor Expr AutoDiff for TIR #46
Comments
Does our new TIR allow users to modify index variables freely?
@yzhliu can you elaborate on what you mean?
For example, can users use
and does it support an index like
I don't think we could diff w.r.t. indirect indices; that is a restriction. If a variable is bound once and its inputs are pure, I think it might be possible to diff w.r.t. that variable.
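(Editorial sketch, not from the thread: the following te.compute snippet, with made-up tensor names, illustrates the direct vs. indirect indexing distinction above.)

```python
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
idx = te.placeholder((n,), dtype="int32", name="idx")

# Direct indexing: the read position is the loop variable itself, so the
# adjoint of A can be derived by inverting the index expression.
B = te.compute((n,), lambda i: A[i] * A[i], name="B")

# Indirect (gather) indexing: the read position depends on the data in idx.
# The gradient of C w.r.t. A would need scatter-style accumulation, and a
# gradient w.r.t. idx itself is not well defined -- the restriction noted above.
C = te.compute((n,), lambda i: A[idx[i]], name="C")
```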
I had a discussion with @yzhliu about this. A high-level summary is that we can do it, as long as all blocks are complete and we ignore the loops.
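(As a rough, editorial illustration of the "complete blocks" condition, here is a small TIR function in modern TVMScript syntax; the buffer names and sizes are made up. The block "B" is complete in the sense that it is the sole producer of B and its read and write regions do not overlap, so a gradient pass could in principle differentiate the block body like a tensor expression while ignoring the surrounding loop.)

```python
from tvm.script import tir as T

@T.prim_func
def elemwise_square(A: T.Buffer((128,), "float32"),
                    B: T.Buffer((128,), "float32")):
    for i in range(128):
        # A "complete" block: sole producer of B, pure reads of A,
        # and no overlap between its read and write regions.
        with T.block("B"):
            vi = T.axis.spatial(128, i)
            B[vi] = A[vi] * A[vi]
```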
This is food for thought as possible future work; it is not necessarily actionable right now.
https://discuss.tvm.ai/t/rfc-bring-in-tensor-expression-autodiff/5987
The RFC above discusses how we can introduce tensor-expression-level AD for te.compute. It would be interesting to think about how we can generalize this to the TIR level. In particular, if we place restrictions, such as requiring that all blocks are complete, would we be able to run autodiff directly on TIR written in hybrid script?
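For context, here is a minimal sketch of what tensor-expression-level AD looks like on a te.compute stage, assuming the te.gradient API the linked RFC proposes (the elementwise workload is made up for illustration):

```python
from tvm import te

# Elementwise product: C[i] = A[i] * B[i]
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] * B[i], name="C")

# Reverse-mode AD at the tensor-expression level: differentiate C with
# respect to each listed input, producing ordinary te tensors.
dA, dB = te.gradient(C, [A, B])

# The adjoint stages can then be scheduled and built like any te.compute.
s = te.create_schedule([dA.op, dB.op])
```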
It would be useful to discuss and align on possible designs now, so we can be prepared for such a change, if it is possible.
cc @yzhliu @Hzfengsy @spectrometerHBH