Converting Multi-input model #251
Comments
I got the same problem.
Any solution?
I met the same issue.
I have the same problem, although in my case there is actually only one input. The solution I used was to set input_names in torch2trt.torch2trt to the same name that self.engine reports for the input.
The following works for my case. First (but I think all of you know this), move the Python script that calls torch2trt.torch2trt into the /torch2trt folder. Then check what name self.engine uses around Line 364; to do this, import pdb at the top of torch2trt.py for debugging.
Then check the name by using self.engine.get_binding_name().
In my case, the binding name of self.engine is "__img". Once you know the name ("__img"), go back to your torch2trt.torch2trt call and change "input_names" to "__img".
Then you should be able to run it, as sketched below.
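A minimal sketch of that workaround (model and x are placeholder names for your module and example input; "__img" is just the name my engine reports, and on TensorRT 7 get_binding_name takes the binding index, as noted in a later comment):

```python
from torch2trt import torch2trt

# Convert once, then inspect what the engine actually calls its bindings.
model_trt = torch2trt(model, [x])
engine = model_trt.engine
for i in range(engine.num_bindings):
    print(i, engine.get_binding_name(i), engine.binding_is_input(i))

# Re-run the conversion with input_names set to the binding name found above
# (in my case "__img").
model_trt = torch2trt(model, [x], input_names=["__img"])
```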
@GabbySuwichaya TypeError: get_binding_name(): incompatible function arguments. The following argument types are supported: Invoked with: <tensorrt.tensorrt.ICudaEngine object at 0x7eed91a6f0>
Hi @ahangchen, I am not sure about the Jetson Xavier NX.
@GabbySuwichaya I found the API changed in TensorRT 7. We should use engine.get_binding_name(index), where index means the i-th binding. But I find that the number of input binding names dropped from 4 to 2: in the network the number is 4, while in the engine it is 2, so I cannot convert my multi-input network to a TRT model. Currently I can only merge the inputs into one single input as a workaround (roughly as sketched below). I think torch2trt didn't test any examples with multiple inputs, which results in such an inconvenient situation.
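Roughly what that inspection and single-input workaround might look like (a sketch only; x, mask, and single_input_model are placeholder names, and concatenation is just one way the inputs could be merged, assuming compatible shapes):

```python
import torch
from torch2trt import torch2trt

# TensorRT 7: get_binding_name takes the binding index.
engine = model_trt.engine
names = [engine.get_binding_name(i) for i in range(engine.num_bindings)]
input_names = [n for i, n in enumerate(names) if engine.binding_is_input(i)]
print(input_names)  # fewer input bindings here than the network declares

# Workaround: concatenate the tensors into one input before conversion and
# split them back apart inside the wrapped model's forward().
merged = torch.cat([x, mask], dim=1)
model_trt = torch2trt(single_input_model, [merged])
```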
@ahangchen, I am sorry to hear that. Have you tried something else, like ONNX to TRT? Check it out here: https://github.com/onnx/onnx-tensorrt
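If you go the ONNX route, a rough sketch (the input/output names and opset are placeholders; onnx2trt is the converter shipped with that repository):

```python
import torch

# Export the multi-input PyTorch model to ONNX with explicit input names,
# then build a TensorRT engine from the .onnx file.
torch.onnx.export(
    model,
    (x, mask),                  # tuple of example inputs
    "model.onnx",
    input_names=["x", "mask"],
    output_names=["y"],
    opset_version=11,
)
# Then, for example: onnx2trt model.onnx -o model.trt
```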
I'm trying to convert a multi-input model to TensorRT. I can convert the model successfully, but I get the following error during inference:
I believe the error is at torch2trt/torch2trt/torch2trt.py, Line 330 in e22844a.
This line returns -1 for both inputs. I checked it with pdb (a rough sketch of that check follows the snippets below).
I convert the model using:
model_trt = torch2trt.torch2trt(model, [x, masks[0]])
And I run inference as follows:
y_trt = model_trt(x, masks[i])
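For reference, roughly the check I mean, assuming the failing line is the engine.get_binding_index lookup in TRTModule.forward:

```python
# Compare the names torch2trt looks up against the names the engine actually
# has; get_binding_index returns -1 when a name is not found.
print(model_trt.input_names)
engine = model_trt.engine
print([engine.get_binding_name(i) for i in range(engine.num_bindings)])
print([engine.get_binding_index(n) for n in model_trt.input_names])  # -1, -1 here
```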
This is my forward pass:
Thank you!