
Unstable accuracy on test set #6

Closed
simoons95 opened this issue Nov 16, 2022 · 2 comments

simoons95 commented Nov 16, 2022

Hello,

I also have a second question (see #5 for the first one).
When I run your code on ba_2motifs, I obtain the following graph:
[image: test-set accuracy curve on ba_2motifs, fluctuating sharply across epochs]
Fortunately, the validation set has exactly the same behaviour as the test set, so when validation accuracy is high, test accuracy is high too, which explains the good reported results.
Is it expected that training is this unstable? What could I do to avoid it?

siqim (Member) commented Nov 16, 2022

Thanks for reporting this issue! After fixing issue #5 via PR #7, I just tested the code, and it should now be a lot more stable.


siqim commented Nov 20, 2022

With #7/#8, learning edge attention for undirected graphs should be much more stable, so I'm closing this issue now. Thanks again for pointing it out, and feel free to let us know if you have any further questions!
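For readers landing here: the comment above does not spell out the fix, but one common way to make learned edge attention well behaved on undirected graphs is to tie the two directed copies of each undirected edge, e.g. by averaging their attention logits so both directions receive the same score. The sketch below is illustrative only; the function and variable names (`symmetrize_edge_logits`, `edge_index`, `logits`) are assumptions, not identifiers from this repository or its PRs.

```python
def symmetrize_edge_logits(edge_index, logits):
    """Average each directed edge (u, v) with its reverse (v, u),
    so both directions of an undirected edge share one attention score.

    edge_index: list of (u, v) tuples, one per directed edge
    logits: list of floats, one attention logit per directed edge
    """
    logit_of = {edge: logit for edge, logit in zip(edge_index, logits)}
    symmetric = []
    for (u, v), logit in zip(edge_index, logits):
        # Fall back to the edge's own logit if its reverse is absent.
        reverse = logit_of.get((v, u), logit)
        symmetric.append(0.5 * (logit + reverse))
    return symmetric


# Toy example: a path graph 0-1-2 stored as two directed edges per
# undirected edge. After symmetrization, (0, 1) and (1, 0) agree,
# as do (1, 2) and (2, 1).
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
scores = [0.9, 0.1, 0.6, 0.2]
print(symmetrize_edge_logits(edges, scores))
```

Without this tying, the two directions of the same undirected edge can receive very different scores, which is one plausible source of the run-to-run instability reported above.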

@siqim siqim closed this as completed Nov 20, 2022