Add docs for eager mode #7700
Conversation
docs/eager.md (Outdated)
```python
optimizer.step()
return loss

self.compiled_step_fn = torch_xla.experimental.compile(self.step_fn)
```
re: naming, I don't think `compiled_step_fn` makes sense here. It makes it look like this function is imperatively compiled when you call `compile`, when what's really happening is that you will trace and JIT-compile the function when it's called later. `jit` and `trace` already mean something else in this ecosystem, though...
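(A minimal sketch of the semantics described above, assuming the `torch_xla.experimental.compile` API from this PR; the toy model, optimizer, and shapes are illustrative, not taken from the docs under review.)

```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm

def step_fn(model, inputs, optimizer):
    optimizer.zero_grad()
    loss = model(inputs).sum()
    loss.backward()
    optimizer.step()
    return loss

# Nothing is compiled at this point: the call only returns a wrapper.
step_fn = torch_xla.experimental.compile(step_fn)

device = xm.xla_device()
model = torch.nn.Linear(4, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
inputs = torch.randn(8, 4, device=device)

# Tracing and JIT compilation happen here, when the wrapper is invoked.
loss = step_fn(model, inputs, optimizer)
```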
Well, that's the trade-off I made when trying to align the API with upstream. I think it creates more confusion to call it `jitted_step_fn`. I will just rename it to `step_fn`.
Do you want to just use `compile` as a decorator here if you're following the upstream convention?
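(A sketch of what that suggestion would look like, assuming `torch_xla.experimental.compile` can be applied as a decorator the way upstream `torch.compile` can; whether it supports this form is exactly what the thread goes on to discuss, and the function body here is hypothetical.)

```python
import torch_xla

@torch_xla.experimental.compile
def step_fn(model, inputs, optimizer):
    ...
```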
Seems like upstream supports both, so I can support both too: https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html
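(For reference, the linked tutorial shows that upstream `torch.compile` accepts both styles:)

```python
import torch

def fn(x):
    return torch.sin(x) + torch.cos(x)

# Style 1: wrap an existing function.
opt_fn = torch.compile(fn)

# Style 2: apply as a decorator.
@torch.compile
def opt_fn2(x):
    return torch.sin(x) + torch.cos(x)

x = torch.randn(8)
assert torch.allclose(opt_fn(x), opt_fn2(x))
```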