
Plugin InstanceNormalization failure of TensorRT 8.6 when running InstanceNorm on GPU V100 #3165

Open
DataXujing opened this issue Jul 27, 2023 · 3 comments

DataXujing commented Jul 27, 2023

Is this a bug? I use the InstanceNormalization plugin, and when I build the engine with trtexec, it fails with the following error:

[07/27/2023-06:18:54] [I] [TRT] No importer registered for op: InstanceNormalization_TRT. Attempting to import as plugin.
[07/27/2023-06:18:54] [I] [TRT] Searching for plugin: InstanceNormalization_TRT, plugin_version: 1, plugin_namespace:
[07/27/2023-06:18:54] [V] [TRT] Local registry did not find InstanceNormalization_TRT creator. Will try parent registry if enabled.
[07/27/2023-06:18:54] [V] [TRT] Global registry found InstanceNormalization_TRT creator.
[07/27/2023-06:18:54] [W] [TRT] builtin_op_importers.cpp:5221: Attribute scales not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[07/27/2023-06:18:54] [F] [TRT] Validation failed: scale.count == bias.count
plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cu:96

[07/27/2023-06:18:54] [E] [TRT] std::exception
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:771: While parsing node number 3415 [InstanceNormalization_TRT -> "InstanceNormV-27"]:
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:772: --- Begin node ---
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:773: input: "/unet/input_blocks.1/input_blocks.1.0/in_layers/in_layers.0/Reshape_output_0"
output: "InstanceNormV-27"
name: "InstanceNormN-27"
op_type: "InstanceNormalization_TRT"
attribute {
  name: "epsilon"
  f: 1e-05
  type: FLOAT
}
attribute {
  name: "scale"
  floats: 1
  # ... repeated for 32 "floats: 1" entries in total ...
  type: FLOATS
}
attribute {
  name: "bias"
  floats: 0
  # ... repeated for 32 "floats: 0" entries in total ...
  type: FLOATS
}
attribute {
  name: "relu"
  i: 0
  type: INT
}
attribute {
  name: "alpha"
  f: 0
  type: FLOAT
}
attribute {
  name: "plugin_version"
  s: "1"
  type: STRING
}

[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:774: --- End node ---
[07/27/2023-06:18:54] [E] [TRT] ModelImporter.cpp:777: ERROR: builtin_op_importers.cpp:5412 In function importFallbackPluginImporter:
[8] Assertion failed: plugin && "Could not create plugin"
[07/27/2023-06:18:54] [E] Failed to parse onnx file
[07/27/2023-06:18:54] [I] Finished parsing network model. Parse time: 7.78809
[07/27/2023-06:18:54] [E] Parsing model failed
[07/27/2023-06:18:54] [E] Failed to create engine from model or file.
[07/27/2023-06:18:54] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8601] # trtexec --onnx=./combine_0.onnx --saveEngine=combine_1.plan --verbose --workspace=3000 --fp16

I give five attributes to this plugin: epsilon, scale, bias, relu, and alpha, following the README: https://github.com/NVIDIA/TensorRT/blob/release/8.6/plugin/instanceNormalizationPlugin/README.md#parameters

Therefore, I think there is a problem with the README parameters. Should the scale parameter be replaced with scales?
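
A possible workaround, given the "Attribute scales not found in plugin node" warning in the log above, is to rename the scale attribute to scales on each InstanceNormalization_TRT node so it matches the field name the plugin creator registers. A minimal sketch with the onnx Python package, assuming the combine_0.onnx path from the trtexec command; whether this passes the creator's validation is an assumption, not something confirmed in this thread:

import onnx

# Sketch of a possible workaround, not a confirmed fix: rename the "scale"
# attribute to "scales" so it matches the field name the plugin creator
# appears to register (see the discussion of line 621 later in this thread).
model = onnx.load("./combine_0.onnx")  # path taken from the trtexec command above
for node in model.graph.node:
    if node.op_type == "InstanceNormalization_TRT":
        for attr in node.attribute:
            if attr.name == "scale":
                attr.name = "scales"
onnx.save(model, "./combine_0_patched.onnx")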

zerollzeng (Collaborator) commented:

@samurdhikaru ^ ^

zerollzeng added the triaged (Issue has been triaged by maintainers) label Jul 29, 2023
ttyio (Collaborator) commented Aug 22, 2023

@DataXujing I see this in the attached log; could you check your model? Thanks!

Validation failed: scale.count == bias.count
plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cu:96
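
A minimal sketch of that check with the onnx Python package (assuming the combine_0.onnx path from the trtexec command above): for each InstanceNormalization_TRT node, it verifies that the scale and bias attributes carry the same number of floats, which is exactly what the failed assertion validates.

import onnx

# Check, per InstanceNormalization_TRT node, that the "scale" and "bias"
# attributes have the same element count (the scale.count == bias.count
# assertion from the plugin).
model = onnx.load("./combine_0.onnx")
for node in model.graph.node:
    if node.op_type == "InstanceNormalization_TRT":
        attrs = {a.name: a for a in node.attribute}
        n_scale = len(attrs["scale"].floats) if "scale" in attrs else 0
        n_bias = len(attrs["bias"].floats) if "bias" in attrs else 0
        if n_scale != n_bias:
            print(f"{node.name}: scale has {n_scale} floats, bias has {n_bias}")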

DataXujing (Author) commented Aug 23, 2023

> @DataXujing I see this in the attached log; could you check your model? Thanks!
>
> Validation failed: scale.count == bias.count
> plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cu:96

I have reconfirmed that my model is correct, and I believe there is a bug in this line of plugin code:
https://github.com/NVIDIA/TensorRT/blob/35477bdb94eab72862ffbdf66d4419e408bef45f/plugin/instanceNormalizationPlugin/instanceNormalizationPlugin.cu#L621C1-L621C98

In line 621:

 mPluginAttributes.emplace_back(PluginField("scales", nullptr, PluginFieldType::kFLOAT32, 1));

Should the attribute "scales" be written as "scale"?
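
One way to confirm which field name the registered creator actually expects is to query the plugin registry through the TensorRT Python API. A small diagnostic sketch, assuming the TensorRT 8.6 Python bindings are installed:

import tensorrt as trt

# List the plugin fields the registered InstanceNormalization_TRT creator
# advertises. If this prints "scales" rather than "scale", the creator
# disagrees with both the README and the node attributes in this issue.
logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")  # register the built-in TensorRT plugins
registry = trt.get_plugin_registry()
creator = registry.get_plugin_creator("InstanceNormalization_TRT", "1")
for field in creator.field_names:
    print(field.name)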
