
tensorrt c++ And mask decoder can be fp16 without too big error? #9

Open
Mediumcore opened this issue Oct 9, 2023 · 2 comments


Mediumcore commented Oct 9, 2023

Hello, thank you for sharing this great work. Do you have plans for a TensorRT C++ version? That would be awesome too, thank you very much.
I see the mask decoder runs in FP32; would FP16 in TensorRT produce a large accuracy loss or not? Best regards.
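For context on the size of FP16 rounding error: IEEE 754 half precision has a 10-bit mantissa, so each stored value carries only about three decimal digits, and the per-value relative rounding error is bounded by 2⁻¹¹ ≈ 4.9e-4. A stdlib-only sketch of the round-trip error (independent of TensorRT, using `struct`'s half-precision `'e'` format):

```python
import struct

def to_fp16(x: float) -> float:
    # Round-trip a Python float through IEEE 754 half precision
    # (struct's 'e' format, available since Python 3.6).
    return struct.unpack("e", struct.pack("e", x))[0]

# FP16 has a 10-bit mantissa, so the relative rounding error of any
# value in representable range is at most 2**-11 (~4.9e-4).
for x in (0.1, 3.14159, 1000.5):
    y = to_fp16(x)
    print(f"{x} -> {y} (rel err {abs(x - y) / x:.2e})")
```

Whether such per-value error matters for the mask decoder depends on how it accumulates through the network, which is why an end-to-end accuracy check is still needed.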

@Mediumcore Mediumcore changed the title tensorrt c++ tensorrt c++ And mask decoder can be fp16 without too big error? Oct 9, 2023
jaybdub (Contributor) commented Oct 13, 2023

Hi @Mediumcore ,

Thanks for reaching out!

We don't have a plan currently to add C++ support. But I'll update this thread if this changes.

I don't recall the exact reason for using FP32 with the mask decoder, but I believe it was accuracy; I'm not 100% sure. To check, you could verify the accuracy difference by running the evaluation instructions.
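For anyone wanting to experiment, here is a rough sketch of a mixed-precision build with the TensorRT Python API: FP16 enabled globally, with layers matching a name filter pinned to FP32. The ONNX path and the "decoder" keyword are assumptions for illustration, not this repo's actual export or layer naming; treat it as a starting point, not the project's build recipe.

```python
def select_fp32_layers(layer_names, keywords=("decoder",)):
    """Pick layer names to pin to FP32. The 'decoder' keyword is an
    assumption about how the exported layer names are spelled."""
    return [n for n in layer_names if any(k in n for k in keywords)]

def build_mixed_precision_engine(onnx_path):
    # Imported here so the helper above stays usable without TensorRT.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)
    # Make TensorRT honour the per-layer precisions set below.
    config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)

    names = [network.get_layer(i).name for i in range(network.num_layers)]
    pinned = set(select_fp32_layers(names))
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        if layer.name in pinned:
            layer.precision = trt.float32  # keep this layer in FP32

    return builder.build_serialized_network(network, config)
```

Running the repo's evaluation against engines built with and without the FP32 pinning would show how much accuracy the FP16 decoder actually costs.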

John

@xiangw369

Thank you for the great work you have done; we look forward to your C++ efforts.

