Inference not proper when deploying with this pipeline #18
This is likely a parameter tuning mismatch between the defaults for NMS and DBScan in DeepStream/TAO versus […]
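The NMS half of that mismatch is easy to illustrate: the same raw detections survive or get suppressed depending on the IoU and confidence thresholds each runtime defaults to. Below is a minimal, pure-Python sketch of greedy NMS; the box format and threshold values are illustrative only, not the actual DeepStream, TAO, or Isaac ROS defaults.

```python
def iou(a, b):
    # a, b are axis-aligned boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(dets, iou_thresh=0.5, conf_thresh=0.3):
    # dets: list of (box, score); greedily keep the highest-scoring
    # boxes and drop any later box that overlaps a kept one too much
    dets = [d for d in dets if d[1] >= conf_thresh]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept
```

With two boxes that overlap at IoU ≈ 0.68, an `iou_thresh` of 0.5 suppresses the weaker one while 0.7 keeps both, so two pipelines running the same model can report different detection counts purely from clustering defaults.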
Sure, I am sharing the input image. I am not sure how you would reproduce the inference at your end. As for the launch graph, I understand you mean the launch (params) file which contains the hyper-parameters. Detectnet_node file (yaml): […]
Hi Hemal and Ashwin, Mayank has already provided details on the DetectNet v2 issue. Can you please let us know if you were able to check/reproduce it on your side? Thanks.
We believe we've traced this to an issue with keeping aspect ratio in the DNN Image Encoder: it is likely squashing the images being fed in for inference, causing some of the odd performance issues here. We'll push out fixes as soon as we can.
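The distortion described above comes from resizing without preserving aspect ratio. A letterbox resize avoids it by scaling uniformly and padding the remainder; here is a sketch of the arithmetic (the 1920×1080 source and 960×544 network input sizes are illustrative, not values taken from this pipeline):

```python
def letterbox_dims(src_w, src_h, dst_w, dst_h):
    # Uniform scale so the image fits inside the network input
    # while preserving aspect ratio; the leftover area is padded.
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst_w - new_w) // 2
    pad_y = (dst_h - new_h) // 2
    return new_w, new_h, pad_x, pad_y

# A naive resize of 1920x1080 to 960x544 would scale width by 0.500
# but height by ~0.504, slightly squashing every object; letterboxing
# instead scales both axes by 0.500 and pads 2 px top and bottom.
```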
Thanks for your patience @Windson9 @gsingh81 @HyunCello - we've just merged a PR that includes a fix to the Isaac ROS DNN Image Encoder. With the new fix, the encoder should properly preserve the aspect ratio of input images during the resize operation. We suspect that this distortion may have been the root cause of the performance degradation you observed. Could you please update your checkouts of […]
Hi @jaiveersinghNV, thanks for the response and quick update. Unfortunately, the issue still persists on my end: I pulled the latest changes and rebuilt the package, but there is no improvement in inference.
Sorry that this didn't fix it for you - there may be a deeper problem here. We'll look into the Detectnet Decoder component of this pipeline and see if we can identify any discrepancies there. |
Hello,
I have trained a DetectNetV2 model on a custom dataset using the TAO toolkit. However, when I deploy the model with the Isaac ROS Gems pipeline, the inference output does not reflect the metrics and results I see when using the TAO pipeline.
Later, I deployed the same model with DeepStream and the inference matched the metrics I observed while training the model.
I tried matching the hyper-parameters in the params.yaml file to the DeepStream config file I had used, but there was no significant improvement in the inference output. I am attaching the output images for i) TAO inference, ii) Isaac ROS Gems inference, and iii) DeepStream inference for reference.
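When matching hyper-parameters across two config files by hand, it is easy to miss a single differing key. A small diff over flattened parameter dicts makes the mismatches explicit; the parameter names and values below are purely hypothetical placeholders, not the actual keys of the DeepStream or Isaac ROS configs.

```python
def diff_params(ref, other):
    # Report every key whose value differs (or is missing) between
    # two flat parameter dicts, as key -> (ref_value, other_value).
    keys = set(ref) | set(other)
    return {k: (ref.get(k), other.get(k)) for k in keys
            if ref.get(k) != other.get(k)}

# Hypothetical example: values loaded from the two config files.
deepstream_params = {"confidence_threshold": 0.2, "dbscan_eps": 0.7,
                     "min_boxes": 3}
isaac_ros_params = {"confidence_threshold": 0.6, "dbscan_eps": 0.7,
                    "min_boxes": 1}
```

Running `diff_params(deepstream_params, isaac_ros_params)` would flag the `confidence_threshold` and `min_boxes` mismatches while staying silent on the key that already agrees.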
The model was trained on 6,000 images for 500 epochs, giving an average precision (mAP) of ~95%. The config file used for training is the same one provided with DetectNetV2 for TAO training; no changes were made.
Isaac ROS Gems:
TAO inference:
DeepStream:
Thanks,
Mayank