
Per file C++ Operator registration #3135

Merged: 12 commits into pytorch:master on Dec 8, 2020

Conversation


@datumbox datumbox commented Dec 7, 2020

Implements the POC discussed in #3051:

  • Moves the registration of operators directly into their implementation files.
  • Makes the CPU/CUDA kernel implementations private and removes the _cpu/_cuda naming convention.
  • Leaves only the forward/backward methods in the header files.

@datumbox datumbox changed the title [WIP] Per file C++ Operator registration Per file C++ Operator registration Dec 7, 2020
@datumbox datumbox requested a review from fmassa December 7, 2020 19:39
@datumbox datumbox force-pushed the refactoring/move_op_registration branch from 5a71bf5 to 46a4c66 Compare December 7, 2020 22:10
codecov bot commented Dec 8, 2020

Codecov Report

Merging #3135 (297ebbd) into master (6cb4fc2) will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master    #3135   +/-   ##
=======================================
  Coverage   72.75%   72.75%           
=======================================
  Files          99       99           
  Lines        8979     8979           
  Branches     1431     1431           
=======================================
  Hits         6533     6533           
  Misses       1999     1999           
  Partials      447      447           



@fmassa fmassa left a comment


The PR looks great, thanks a lot @datumbox !

I've made a few minor comments which are non-blocking, let me know your thoughts.

Comment on lines 4 to +5
#include <torchvision/nms.h>
#include <torchvision/roi_align.h>
Member

Are those individual imports needed or can we just include torchvision/vision.h?

Contributor Author

Including nms.h is necessary because vision.h no longer includes all the individual header files. I'm not sure that roi_align.h is necessary, though; I left it because we had it before. Do you want me to try removing it?

Member

Oh yeah. We should fix this linker issue soon so that we can get rid of those hacks.

@@ -77,6 +79,10 @@ at::Tensor deform_conv2d_autocast(
use_mask)
.to(input.scalar_type());
}

TORCH_LIBRARY_IMPL(torchvision, Autocast, m) {
m.impl("deform_conv2d", deform_conv2d_autocast);
Member

nit: I didn't know that TORCH_LIBRARY_IMPL can actually appear before TORCH_LIBRARY_FRAGMENT. This is just a comment; no need to change the code.

Also, I wonder if for the future we should create new folders for autograd and autocast, like we have for cpu and cuda?

Contributor Author

Yes indeed. The order looks a bit strange.

The reason I put it like that was to minimize the number of #if blocks AND to avoid moving large blocks of code around and losing the git history. If we don't mind losing the latter, we can easily refactor it. :)

Member

Let's leave it like this for now. If we decide to create autocast and autograd folders, then things will be nicely present there (and no ifdefs in the code)


// C++ Backward
VISION_API
std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>
_deform_conv2d_backward(
Member

nit: do we want to expose _deform_conv2d_backward in the headers? I would assume those are private and thus shouldn't be exposed, except if somewhere else in the codebase we need them?

Contributor Author

I chose to leave it public because we also expose it in Python via the dispatcher. I believe it was previously exposed to users because it lived in a header file, so I opted to keep it and avoid BC-breaking changes. Happy to remove it if we think it's not needed.

Member

The fact that we have the *_backward functions exposed in Python is an artifact of how some things currently work in the dispatcher IIUC. We don't expose deform_conv2d_backward under torchvision.ops, so I would be ok removing them from those headers.

Contributor Author

Done in #3143

@datumbox datumbox merged commit 3c33f36 into pytorch:master Dec 8, 2020
@datumbox datumbox deleted the refactoring/move_op_registration branch December 8, 2020 19:06
facebook-github-bot pushed a commit that referenced this pull request Dec 10, 2020
Summary:
* Moving deform_conv2d op registration.

* Moving nms op registration.

* Moving new_empty_tensor op registration.

* Moving ps_roi_align op registration.

* Moving ps_roi_pool op registration.

* Moving roi_align op registration.

* Moving roi_pool op registration.

* Restoring headers for forward/backward and fixing styles.

* Restoring the test hack on windows.

* Stricter header inclusion.

Reviewed By: fmassa

Differential Revision: D25460675

fbshipit-source-id: 8b7c65320d41c9c0f43815d156957c1e40aef106
@datumbox datumbox mentioned this pull request Jan 5, 2021