
Question about model details in the paper #4

Open
minhquoc0712 opened this issue Jan 3, 2024 · 0 comments
Hi,

Thank you for this interesting research. I have two questions about the model implementation described in the paper.

  1. The graph encoder has two modules: a GNN and a Transformer, each taking the computational graph as input. How do you obtain a single graph-level embedding from each module's node embeddings before concatenating the two? Do you average them?
  2. The paper mentions two losses: the contrastive (CL) loss and the regression loss. Are the graph encoder and the prediction head trained jointly? If so, how do you combine the two losses? Or do you first train the graph encoder and then freeze it?
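To make the two questions concrete, here is a minimal sketch of the setups I am asking about. All shapes, variable names, and the loss weight below are my own assumptions for illustration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 5 nodes, 8-dim embeddings per module (my assumption).
num_nodes, gnn_dim, tf_dim = 5, 8, 8
gnn_node_emb = rng.random((num_nodes, gnn_dim))  # per-node output of the GNN module
tf_node_emb = rng.random((num_nodes, tf_dim))    # per-node output of the Transformer module

# Question 1: mean-pool each module's node embeddings into one vector,
# then concatenate the two pooled vectors into a single graph embedding.
graph_emb = np.concatenate([gnn_node_emb.mean(axis=0),
                            tf_node_emb.mean(axis=0)])  # shape: (gnn_dim + tf_dim,)

# Question 2: if trained jointly, the two losses might be combined with a
# weighting term, e.g. total = CL loss + lambda * regression loss.
cl_loss, reg_loss, lam = 0.7, 0.3, 1.0  # placeholder values
total_loss = cl_loss + lam * reg_loss
```

Is this roughly what the paper does, or is the graph encoder pre-trained with the CL loss and then frozen before the prediction head is fit?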

Thank you.
