Per file C++ Operator registration #3135
Conversation
Branch updated from 5a71bf5 to 46a4c66 (merge conflict resolved in torchvision/csrc/new_empty_tensor_op.cpp).
Codecov Report

@@           Coverage Diff           @@
##           master    #3135   +/-   ##
=======================================
  Coverage   72.75%   72.75%
=======================================
  Files          99       99
  Lines        8979     8979
  Branches     1431     1431
=======================================
  Hits         6533     6533
  Misses       1999     1999
  Partials      447      447
=======================================

Continue to review the full report at Codecov.
The PR looks great, thanks a lot @datumbox !
I've made a few minor, non-blocking comments; let me know your thoughts.
#include <torchvision/nms.h>
#include <torchvision/roi_align.h>
Are those individual includes needed, or can we just include torchvision/vision.h?
Including nms.h is necessary because vision.h no longer includes all the individual header files. I'm not sure roi_align.h is necessary, though; I left it because we had it before. Do you want me to try removing it?
Oh yeah. We should fix this linker issue soon so that we can get rid of those hacks.
@@ -77,6 +79,10 @@ at::Tensor deform_conv2d_autocast(
      use_mask)
      .to(input.scalar_type());
}

TORCH_LIBRARY_IMPL(torchvision, Autocast, m) {
  m.impl("deform_conv2d", deform_conv2d_autocast);
nit: I didn't know that TORCH_LIBRARY_IMPL can actually be present before TORCH_LIBRARY_FRAGMENT. This is just a comment, no need to change the code.

Also, I wonder if for the future we should create new folders for autograd and autocast, like we have for cpu and cuda?
Yes indeed, the order looks a bit strange. The reason I put it like that was to minimize the number of #if blocks and to avoid moving large blocks of code around and losing the git history. If we don't mind about the latter, we can easily refactor it. :)
Let's leave it like this for now. If we decide to create autocast and autograd folders, then things will be nicely organized there (and no ifdefs in the code).
// C++ Backward
VISION_API
std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>
_deform_conv2d_backward(
nit: do we want to expose _deform_conv2d_backward in the headers? I would assume those are private and thus shouldn't be exposed, unless we need them somewhere else in the codebase?
I opted to leave it public because we also expose it to Python via the dispatcher. I believe in the past it was exposed to users because it was in a header file, so I kept it to avoid BC-breaking changes. Happy to remove it if we think it's not needed.
The fact that we have the *_backward functions exposed in Python is an artifact of how some things currently work in the dispatcher, IIUC. We don't expose deform_conv2d_backward under torchvision.ops, so I would be OK removing them from those headers.
Done in #3143.
Summary:
* Moving deform_conv2d op registration.
* Moving nms op registration.
* Moving new_empty_tensor op registration.
* Moving ps_roi_align op registration.
* Moving ps_roi_pool op registration.
* Moving roi_align op registration.
* Moving roi_pool op registration.
* Restoring headers for forward/backward and fixing styles.
* Restoring the test hack on windows.
* Stricter header inclusion.

Reviewed By: fmassa
Differential Revision: D25460675
fbshipit-source-id: 8b7c65320d41c9c0f43815d156957c1e40aef106
Implements the POC discussed in #3051.