
About the positional encoding #41

Closed · Yang-L1 opened this issue Jul 31, 2021 · 2 comments

@Yang-L1 commented Jul 31, 2021

Hi, thanks for sharing this great work.
I wonder if there is an implementation mistake in the positional encoding part, in line 21 of position_encoding.py:

div_term = torch.exp(torch.arange(0, d_model//2, 2).float() * (-math.log(10000.0) / d_model//2))

Should we change (-math.log(10000.0) / d_model//2) to (-math.log(10000.0) / (d_model//2))? Since / and // have equal precedence and evaluate left to right, this factor ends up as -1.0, which means a very high temperature.
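To make the precedence issue concrete, here is a minimal runnable sketch; d_model = 256 is a placeholder value for illustration, not necessarily the value in the repo's config:

```python
import math
import torch

d_model = 256  # placeholder for illustration; the real value comes from the config

# Buggy: '/' and '//' have equal precedence and associate left to right,
# so this evaluates as (-math.log(10000.0) / d_model) // 2, which floors to -1.0
buggy_scale = -math.log(10000.0) / d_model // 2
print(buggy_scale)  # -1.0

# Fixed: parenthesize so the divisor is (d_model // 2)
fixed_scale = -math.log(10000.0) / (d_model // 2)
print(fixed_scale)  # ~ -0.0720

div_term = torch.exp(torch.arange(0, d_model // 2, 2).float() * fixed_scale)
```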

@zehongs (Member) commented Aug 11, 2021

Hi, sorry for the late reply, and thanks for pointing out this implementation mistake! 👍 We'll fix it this week!

Yang-L1 closed this as completed Aug 11, 2021
angshine added a commit that referenced this issue Aug 16, 2021
@angshine (Member) commented

We have trained a new model on ScanNet with this bug fixed, and it leads to a minor performance increase as shown below:

| model   | auc@5 | auc@10 | auc@20 |
|---------|-------|--------|--------|
| paper   | 22.06 | 40.8   | 57.62  |
| bug-fix | 22.5  | 41.23  | 57.77  |

I would conclude that this bug does not meaningfully affect the results of the experiments in our paper. Thanks again for pointing this out!
