
Questions about experiment #6

Open
LiuQiir opened this issue Jun 7, 2023 · 2 comments

LiuQiir commented Jun 7, 2023

Thank you for your work; I hope to learn from it. When testing with your pretrained models, I get MIE(R) values that differ significantly from the numbers reported in the paper. This happens on all datasets except KITTI. Could you please explain why?
(Screenshot of the test results attached.)
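In case it helps anyone reproduce the comparison, below is a minimal sketch of how MIE(R) is often computed in point cloud registration papers, assuming MIE(R) denotes the mean isotropic rotation error arccos((trace(R_gt^T R_est) - 1) / 2) in degrees. The function names and the exact convention here are assumptions; the definition used by this repository's evaluation code may differ.

```python
import numpy as np

def isotropic_rotation_error_deg(R_gt, R_est):
    """Isotropic rotation error (degrees) between two 3x3 rotation matrices."""
    R_rel = R_gt.T @ R_est  # relative rotation between ground truth and estimate
    # Clamp to [-1, 1] to guard against floating-point round-off before arccos.
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def mean_isotropic_rotation_error(R_gt_batch, R_est_batch):
    """MIE(R): average isotropic rotation error over a set of pose pairs."""
    return float(np.mean([isotropic_rotation_error_deg(Rg, Re)
                          for Rg, Re in zip(R_gt_batch, R_est_batch)]))
```

If the values produced this way still disagree with the paper, the difference may come from how the evaluation pairs or error convention are chosen rather than from the model itself.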

supersyq (Owner) commented

I am sorry for my delayed reply. We have tested the pre-trained models on all datasets, and the results are similar to those listed in our paper; there is no significant discrepancy in MIE(R). Here are the results (7scenes -> ICL -> same -> noise -> unseen):

(Result screenshots attached, starting with 7scenes.)

LiuQiir commented Jan 12, 2024

> I am sorry for my delayed reply. We have tested the pre-trained models on all datasets, and the results are similar to those listed in our paper; there is no significant discrepancy in MIE(R). Here are the results (7scenes -> ICL -> same -> noise -> unseen):

Thank you for your reply; I apologize for only seeing it now. I still have a question: as your figures show, for datasets other than KITTI the MIE(R) metric is much larger than expected. I hope you can help me understand why.
