Use dispatcher API #83
base: master
Conversation
Hi @b-koopman! Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention. You currently have a record in our system, but the CLA is no longer valid and will need to be resubmitted.

Process
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
@@ -33,14 +34,16 @@ def get_grads(variables):

 def check_forward(variables, with_cuda, verbose):
     baseline_values = python.lltm_baseline.LLTMFunction.apply(*variables)
-    cpp_values = cpp.lltm.LLTMFunction.apply(*variables)
+    cpp_variables = [v.cpu() for v in variables]
+    cpp_values = torch.ops.myops.lltm(*cpp_variables)
It looks like it's something you already call from LLTMFunction in cpp/lltm.py, so why not keep the LLTMFunction object here as it was before and place the backend-dependent code in every module (cpp and cuda)?
Since this has been migrated into an operator that is registered via the dispatcher API, the Python implementation of LLTMFunction no longer exists (see cpp/lltm.cpp:115).

If I understand the intent of the code correctly, this use of LLTMFunction was a way to directly call the C++ forward implementation from Python, but now that we are using the dispatcher API, we can call the forward op directly. (I think ideally the baseline solution would be updated to match this pattern too.)
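For concreteness, a minimal sketch of what "calling the op directly" looks like from Python. The `myops::lltm` name and `torch.utils.cpp_extension.LIB_EXT` come from this PR's diff; the library path, shapes, and argument order here are assumptions based on the original LLTM example and are only illustrative:

```python
import torch
import torch.utils.cpp_extension

# Loading the compiled extension runs its op registration, which makes the
# op visible under torch.ops. The "cpp/lltm" path is illustrative.
torch.ops.load_library("cpp/lltm" + torch.utils.cpp_extension.LIB_EXT)

batch_size, input_features, state_size = 4, 8, 16
X = torch.randn(batch_size, input_features)
W = torch.randn(3 * state_size, input_features + state_size)
b = torch.randn(1, 3 * state_size)
h = torch.randn(batch_size, state_size)
C = torch.randn(batch_size, state_size)

# Call the registered op through the dispatcher instead of going through
# an LLTMFunction.apply(...) wrapper defined on the Python side.
outputs = torch.ops.myops.lltm(X, W, b, h, C)
```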
The C++ LLTMFunction impl is primarily used for re-dispatching from the autograd op, as per https://pytorch.org/tutorials/advanced/dispatcher.html#adding-autograd-support
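On the Python side, the payoff is that the registered op is differentiable without any Python autograd.Function, because the Autograd-key kernel re-dispatches into the backend kernels. A rough sketch of checking that, assuming the library has been loaded as above and that the op's outputs are all differentiable (shapes are illustrative):

```python
import torch

kwargs = {"dtype": torch.float64, "requires_grad": True}
batch_size, input_features, state_size = 3, 4, 5
X = torch.randn(batch_size, input_features, **kwargs)
W = torch.randn(3 * state_size, input_features + state_size, **kwargs)
b = torch.randn(1, 3 * state_size, **kwargs)
h = torch.randn(batch_size, state_size, **kwargs)
C = torch.randn(batch_size, state_size, **kwargs)

# gradcheck drives the backward pass that the C++ autograd kernel sets up;
# that kernel in turn re-dispatches to the backend forward/backward kernels.
torch.autograd.gradcheck(torch.ops.myops.lltm, (X, W, b, h, C))
```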
@@ -20,6 +25,7 @@
 parser.add_argument('-d', '--double', action='store_true')
 options = parser.parse_args()

+LIB_EXT = torch.utils.cpp_extension.LIB_EXT
Looks like all the changes you made in benchmark.py are not used.
The op is called from the LLTM module class's forward(); see {cpp|cuda}/lltm.py.
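Roughly, that wrapper looks like the sketch below. This is modeled on the original tutorial's LLTM module and is an assumption, not the actual contents of {cpp|cuda}/lltm.py; parameter initialization and return values may differ:

```python
import math
import torch
import torch.nn as nn

class LLTM(nn.Module):
    """Thin nn.Module wrapper whose forward() calls the dispatcher op."""

    def __init__(self, input_features, state_size):
        super().__init__()
        self.input_features = input_features
        self.state_size = state_size
        # Same parameter layout as the original LLTM example.
        self.weights = nn.Parameter(
            torch.empty(3 * state_size, input_features + state_size))
        self.bias = nn.Parameter(torch.empty(1, 3 * state_size))
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1.0 / math.sqrt(self.state_size)
        for weight in self.parameters():
            weight.data.uniform_(-stdv, +stdv)

    def forward(self, input, state):
        old_h, old_cell = state
        # The dispatcher routes to the CPU or CUDA kernel based on the
        # inputs, so the same Python module works for both backends.
        return torch.ops.myops.lltm(
            input, self.weights, self.bias, old_h, old_cell)
```

Since the autograd behavior lives with the registered op itself, the module does not need its own custom autograd.Function.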
Hey, thanks for adding me as a reviewer. I'm waiting for the tutorial part to better understand the dispatcher API, and I'll finish my review ASAP. Overall it looks good to me, even though the part where you load the cpp module with …
The Custom C++ and CUDA Extensions tutorial is being updated to use the dispatcher API. This PR updates the code here to support those tutorial changes, as part of pytorch/tutorials#1495.