
model #18

Open
congcongzhang1996 opened this issue Nov 11, 2020 · 1 comment

Comments

@congcongzhang1996

Is the model in the code the same as the one described in the paper? I have some doubts about models.py.

@abhinab303

Have you figured out the code? In the paper, type-level attention is computed first and node-level attention afterwards, but in the code the order appears to be reversed. Also, the node-level attention has no concatenation operation like the one in the paper. And even after applying softmax there are extra steps, and I don't understand how they relate to the equations in the paper:

attention = F.softmax(attention, dim=1)
attention = torch.mul(attention, adj.sum(1).repeat(M, 1).t())
attention = torch.add(attention * self.gamma, adj.to_dense() * (1 - self.gamma))
h_prime = torch.matmul(attention, g)
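
For reference, a minimal, self-contained sketch of what those four lines appear to compute, using a dense stand-in for the sparse adj tensor; the shapes, example values, and the interpretation in the comments are assumptions on my part, not taken from the repository:

import torch
import torch.nn.functional as F

# Assumed setup: M nodes with D-dimensional transformed features g,
# an M x M raw attention score matrix, and a mixing coefficient gamma
# (standing in for self.gamma in the snippet above).
M, D = 5, 8
gamma = 0.5
attention = torch.randn(M, M)              # raw attention scores
adj = (torch.rand(M, M) < 0.5).float()     # dense stand-in for the sparse adjacency
g = torch.randn(M, D)                      # transformed node features

# 1) Row-wise softmax over neighbors, as in a standard GAT layer.
attention = F.softmax(attention, dim=1)
# 2) adj.sum(1) is each node's degree; repeat(M, 1).t() broadcasts it so that
#    row i of the normalized attention is rescaled by node i's degree.
attention = torch.mul(attention, adj.sum(1).repeat(M, 1).t())
# 3) Interpolate between the learned attention and the raw adjacency with
#    weight gamma; this is one of the extra steps that is hard to map onto
#    the paper's equations.
attention = torch.add(attention * gamma, adj * (1 - gamma))
# 4) Aggregate neighbor features with the resulting weights.
h_prime = torch.matmul(attention, g)
print(h_prime.shape)                       # torch.Size([5, 8])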
