
Use dispatcher API #83

Open
wants to merge 3 commits into master

Conversation

b-koopman

The Custom C++ and CUDA Extensions tutorial is updated to use the dispatcher API.

Updating this code to support the tutorial changes made as part of:
pytorch/tutorials#1495

@facebook-github-bot

Hi @b-koopman!

Thank you for your pull request.

We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but the CLA is no longer valid, and will need to be resubmitted.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

grad_check.py (outdated)
@@ -33,14 +34,16 @@ def get_grads(variables):

 def check_forward(variables, with_cuda, verbose):
     baseline_values = python.lltm_baseline.LLTMFunction.apply(*variables)
-    cpp_values = cpp.lltm.LLTMFunction.apply(*variables)
+    cpp_variables = [v.cpu() for v in variables]
+    cpp_values = torch.ops.myops.lltm(*cpp_variables)
Contributor

It looks like this is something you already call from LLTMFunction in cpp/lltm.py, so why not keep the LLTMFunction object here as it was before and place the backend-dependent code in every module (cpp and cuda)?

Author

Since this has been migrated into an operator that is registered via the dispatcher API, the Python implementation of LLTMFunction no longer exists (see cpp/lltm.cpp:115).

If I understand the intent of the code correctly, this use of LLTMFunction was a way to call the C++ forward implementation directly from Python; now that we are using the dispatcher API, we can call the forward op directly. (I think ideally the baseline solution would be updated to match this pattern too.)
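
For context, a minimal sketch of how the registered op is exercised from Python under this pattern. The myops::lltm name and LIB_EXT come from the diffs in this PR; the library path and tensor shapes are assumptions based on the tutorial's LLTM example (the PR itself resolves the library via pkg_resources).

import torch
from torch.utils.cpp_extension import LIB_EXT

# Illustrative path: loading the built extension registers the operator
# declared in cpp/lltm.cpp under torch.ops.myops.lltm.
torch.ops.load_library("cpp/lltm_cpp" + LIB_EXT)

# Shapes assumed from the tutorial's LLTM example.
batch_size, features, state_size = 16, 32, 128
X = torch.randn(batch_size, features)
W = torch.randn(3 * state_size, features + state_size)
b = torch.randn(3 * state_size)
h = torch.randn(batch_size, state_size)
C = torch.randn(batch_size, state_size)

# Call the forward op directly through the dispatcher; no Python-side
# autograd.Function wrapper is needed anymore.
outputs = torch.ops.myops.lltm(X, W, b, h, C)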

Author

The C++ LLTMFunction implementation is primarily used for re-dispatching from the autograd op, as per: https://pytorch.org/tutorials/advanced/dispatcher.html#adding-autograd-support
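
As a Python-side illustration of what that re-dispatch provides (a hedged sketch; the Autograd kernel itself is registered in cpp/lltm.cpp as described in the linked tutorial section), gradients flow through the dispatcher op just as they did through the old autograd.Function, which is essentially what grad_check.py exercises. Shapes here are placeholders.

import torch
from torch.autograd import gradcheck

# Assumes the extension library has already been loaded via
# torch.ops.load_library, so torch.ops.myops.lltm is available.
kwargs = {"dtype": torch.float64, "requires_grad": True}
X = torch.randn(3, 17, **kwargs)
W = torch.randn(3 * 5, 17 + 5, **kwargs)
b = torch.randn(3 * 5, **kwargs)
h = torch.randn(3, 5, **kwargs)
C = torch.randn(3, 5, **kwargs)

# Because the C++ LLTMFunction is registered as the Autograd kernel and
# re-dispatches to the backend (CPU/CUDA) kernels, numerical gradient
# checking works directly against the dispatcher op.
if gradcheck(torch.ops.myops.lltm, (X, W, b, h, C)):
    print("Ok")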

benchmark.py

@@ -20,6 +25,7 @@
 parser.add_argument('-d', '--double', action='store_true')
 options = parser.parse_args()
 
+LIB_EXT = torch.utils.cpp_extension.LIB_EXT
Contributor

It looks like none of the changes you made in benchmark.py are actually used.

Author

The op is called from the LLTM module class's forward(); see {cpp|cuda}/lltm.py.
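
For reference, a rough sketch of that pattern in {cpp|cuda}/lltm.py. Parameter names and shapes follow the tutorial's LLTM example, and the library lookup is simplified here (the PR resolves it with pkg_resources).

import math
import torch
from torch.utils.cpp_extension import LIB_EXT

# Simplified lookup; the actual lltm.py locates the built library first.
torch.ops.load_library("lltm_cpp" + LIB_EXT)


class LLTM(torch.nn.Module):
    def __init__(self, input_features, state_size):
        super().__init__()
        self.input_features = input_features
        self.state_size = state_size
        # Parameter shapes follow the tutorial's LLTM example.
        self.weights = torch.nn.Parameter(
            torch.empty(3 * state_size, input_features + state_size))
        self.bias = torch.nn.Parameter(torch.empty(3 * state_size))
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1.0 / math.sqrt(self.state_size)
        for weight in self.parameters():
            weight.data.uniform_(-stdv, +stdv)

    def forward(self, input, state):
        # The dispatcher op replaces the old LLTMFunction.apply(...) call.
        return torch.ops.myops.lltm(input, self.weights, self.bias, *state)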

b-koopman requested a review from ClementPinard on June 12, 2023 at 15:04
@ClementPinard
Contributor

Hey, thanks for putting me down as a reviewer. I'm waiting for the tutorial part to better understand the dispatcher API, and I'll finish my review ASAP.

Overall, this looks good to me, even though the part where you load the cpp module with pkg_resources could be removed from check.py and grad_check.py and only be present in the lltm.py files, for better clarity and less code duplication.
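
A rough sketch of the layout being suggested here, with the library loading done once in lltm.py and the check scripts only importing that module (module and resource names are assumptions):

# cpp/lltm.py -- the only place that locates and loads the built library
import pkg_resources
import torch
from torch.utils.cpp_extension import LIB_EXT

_lib_path = pkg_resources.resource_filename(__name__, "lltm_cpp" + LIB_EXT)
torch.ops.load_library(_lib_path)

# check.py / grad_check.py -- no pkg_resources logic, just import the module;
# the import registers torch.ops.myops.lltm as a side effect:
# import cpp.lltm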
