Thank you for your work; I hope to learn from it. When testing with your pretrained model, I see a significant discrepancy between the MIE(R) values and those reported in the paper. This occurs on every dataset except KITTI. Could you please explain the reason for this?
Sorry for the delayed reply. We have re-tested the pre-trained models on all datasets, and the results are similar to those listed in our paper; there is no significant discrepancy in MIE(R). Here are the results (7scenes -> ICL -> same -> noise -> unseen):
Thank you for your reply; I apologize for only seeing it now. I still have a question: as your figures show, for datasets other than KITTI the MIE(R) metric is much larger than expected. I hope you can help me resolve this!
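For reference, MIE(R) is commonly reported as the mean isotropic rotation error, i.e. the geodesic angle between the estimated and ground-truth rotations averaged over the test set. Below is a minimal NumPy sketch of that common definition; it is a generic illustration, not this repository's evaluation code, and the function name is hypothetical:

```python
import numpy as np

def isotropic_rotation_error_deg(R_est, R_gt):
    """Geodesic angle (degrees) between two 3x3 rotation matrices."""
    cos_theta = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    cos_theta = np.clip(cos_theta, -1.0, 1.0)  # guard against numerical drift
    return np.degrees(np.arccos(cos_theta))

# MIE(R) is then the mean of this per-pair error over the test set.
# A single identity pair gives 0 degrees:
print(isotropic_rotation_error_deg(np.eye(3), np.eye(3)))
```

If an evaluation script computes this angle in radians or uses a different averaging (e.g. median instead of mean), the reported numbers can differ noticeably, so it may be worth checking that the same metric definition is used on both sides.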