feat: support many unary dynamo converters #2246
Conversation
@zewenli98 We need a layer of indirection between aten and impl, which is why functions like sign had dedicated implementations.
For each op we need:
- an aten registration for the actual converter in aten_converters.py
- a dedicated impl function
So for aten::cos we need:
- aten_ops_cos (conversion.aten_ops_converters)
- impl.cos (conversion.impl.unary.ops)
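The two pieces can be sketched in plain Python, with math.cos standing in for the TensorRT unary layer the real impl function would build (the function names follow the comment above; the signatures and bodies are illustrative only):

```python
import math

# Dedicated impl function (the real one lives in conversion.impl.unary.ops);
# here math.cos stands in for adding a TensorRT unary layer to the network.
def impl_cos(network, x):
    return math.cos(x)

# aten registration layer (the real one lives in conversion.aten_ops_converters):
# it only unpacks the node's arguments and delegates to the impl function.
def aten_ops_cos(network, args):
    return impl_cos(network, args[0])
```

The point of the split is that aten_ops_cos stays a thin argument-unpacking shim, while impl_cos is the reusable building block other converters can call directly.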
@@ -405,7 +405,7 @@ def aten_ops_to_copy_dtype(
 )


-@dynamo_tensorrt_converter(torch.ops.aten.clone.default)
+@dynamo_tensorrt_converter(torch.ops.aten.clone.default)  # type: ignore[misc]
Before this is committed, check to see if there's a way to fix this mypy error. @gs-olive can you look at the decorator as well?
I recently noticed that mypy does not accurately show errors for these decorators. I will look into this further.
The decorator is already as strongly typed as it can be, though, as shown here:
TensorRT/py/torch_tensorrt/dynamo/conversion/converter_registry.py
Lines 64 to 69 in 585b2b6
def dynamo_tensorrt_converter(
    key: Target,
    enabled: bool = True,
    capability_validator: Optional[Callable[[Node], bool]] = None,
    priority: ConverterPriority = ConverterPriority.STANDARD,
) -> Callable[[Any], Union[TRTTensor, Sequence[TRTTensor]]]:
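For context, a registration decorator shaped like this can be sketched self-contained in plain Python (Target, TRTTensor, and the registry dict below are illustrative stand-ins, not the real torch/TensorRT types). Note that a declared return type like Callable[[Any], ...] erases the wrapped converter's real signature, which is one plausible source of the mypy friction discussed above:

```python
from typing import Any, Callable, Dict

# Illustrative stand-ins; in the real code Target and TRTTensor
# come from torch.fx and TensorRT respectively.
Target = str
TRTTensor = float

CONVERTERS: Dict[Target, Callable[..., Any]] = {}

def dynamo_tensorrt_converter(
    key: Target,
    enabled: bool = True,
) -> Callable[[Any], Any]:
    # Registers the decorated converter under `key` and returns it unchanged.
    def register(converter: Any) -> Any:
        if enabled:
            CONVERTERS[key] = converter
        return converter
    return register

@dynamo_tensorrt_converter("aten::clone")
def aten_ops_clone(x: TRTTensor) -> TRTTensor:
    return x
```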
I thought there's almost the same code for these converters supported by TensorRT, so I created
Dedicated implementations are better because they use a standardized, simpler interface for all ops. So if I want to make a new converter for, say, LogSumExp, instead of my code looking like this:

layer = impl.unary_op(in, ..., trt.IUnaryOperation.LOG)
layer = impl.reduce_op(layer.output, ..., trt.IReduceOperation.SUM)
return impl.unary_op(layer.output, ..., trt.IUnaryOperation.EXP)

people's code would look closer to this:

x = impl.log(in, ...)
x = impl.sum(x, ...)
return impl.exp(x, ...)

So a converter writer does not need in-depth knowledge about TensorRT or its APIs; they can write code more similar to PyTorch instead.
Oh, I see, that makes sense. Actually, at first I implemented dedicated implementations, but after seeing
So there can be a core function which is centralized. So we would do something like: aten_ops_cos -> impl.cos -> impl.unary.unary_op(..., trt.IUnaryOperation.COS)
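That centralized chain can be sketched end to end in plain Python, with an enum and math functions standing in for trt.IUnaryOperation and the TensorRT layer it selects (only the call chain mirrors the comment; everything else is illustrative):

```python
import enum
import math

# Stand-in for trt.IUnaryOperation.
class UnaryOperation(enum.Enum):
    COS = "cos"
    SIN = "sin"

_UNARY_FUNCS = {
    UnaryOperation.COS: math.cos,
    UnaryOperation.SIN: math.sin,
}

# Centralized core (impl.unary.unary_op in the comment above):
# one function handles every unary op, parameterized by the enum.
def unary_op(x, op):
    return _UNARY_FUNCS[op](x)

# Dedicated per-op wrapper (impl.cos) over the centralized core.
def impl_cos(x):
    return unary_op(x, UnaryOperation.COS)

# aten registration layer (aten_ops_cos) delegates to the impl wrapper.
def aten_ops_cos(args):
    return impl_cos(args[0])
```

Adding a new unary op then only requires a one-line entry in the table plus a thin impl wrapper, which is the deduplication the thread is converging on.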
OK, I'll change it to something like
LGTM @zewenli98, rebase and it should be good to merge.
Do we still need
It's fine for now; we can resolve it later.
fix bugs; change to dedicated implementations; move input validation into ops
(force-pushed from 1a16b09 to 36d221a)
rebased, but I don't have access to merge.
Description
Implemented all 22 unary dynamo converters, including EXP, LOG, SQRT, RECIP, ABS, SIN, COS, TAN, SINH, COSH, ASIN, ACOS, ATAN, ASINH, ACOSH, ATANH, CEIL, FLOOR, NOT, SIGN, ROUND, ISINF.

Fixes #2199
Type of change
Checklist: