Have you understood the code now? In the paper, type-level attention is calculated first, followed by node-level attention, but the code seems to do the opposite. Also, the node-level attention has no concatenation operation like in the paper. And even after applying softmax there are extra steps; I don't understand how they correspond to the equations in the paper.
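For concreteness, here is a minimal sketch of how I read the paper's dual-level attention for a single target node (type-level weights computed first, then node-level scores over concatenated hidden states). All names here are mine, not from models.py, and the exact details are my assumptions from the equations:

```python
import torch
import torch.nn.functional as F

def dual_level_attention(h_v, h_neighbors, neighbor_types, type_emb, mu, nu):
    """Sketch of the paper's dual-level attention for one target node v.

    h_v:            (d,)      hidden state of the target node
    h_neighbors:    (N, d)    hidden states of its neighbors
    neighbor_types: (N,) long type index of each neighbor
    type_emb:       (T, d)    one aggregated embedding per node type
    mu:             (T, 2d)   type-level attention vectors, one per type
    nu:             (2d,)     node-level attention vector
    """
    # 1) Type-level attention: score each type embedding against h_v
    #    using the concatenation [h_v ; h_tau], then softmax over types.
    T = type_emb.size(0)
    cat_type = torch.cat([h_v.expand(T, -1), type_emb], dim=1)    # (T, 2d)
    a = F.leaky_relu((mu * cat_type).sum(dim=1))                  # (T,)
    alpha = F.softmax(a, dim=0)                                   # type weights

    # 2) Node-level attention: concatenate [h_v ; h_v'], scale the raw
    #    score by the type-level weight of the neighbor's type, then
    #    softmax over neighbors (this is where the code seems to differ).
    N = h_neighbors.size(0)
    cat_node = torch.cat([h_v.expand(N, -1), h_neighbors], dim=1)  # (N, 2d)
    b = F.leaky_relu(alpha[neighbor_types] * (nu * cat_node).sum(dim=1))
    beta = F.softmax(b, dim=0)                                     # node weights

    # 3) Aggregate neighbors with the node-level weights.
    return (beta.unsqueeze(1) * h_neighbors).sum(dim=0)
```

If this reading is right, the code's ordering (node-level before type-level) and its missing concatenation would be real deviations, not just refactoring.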
Is the model in the code the same as the description in the paper? I have some doubts about models.py.