The usage file and another paper I found mention the possibility of using this architecture for node classification. I'm wondering how the graph signal classification example differs from node classification with multi-feature nodes, where V = (number of nodes, node features) and A = (number of nodes, number of nodes). Specifically:
For node classification, I'd assume the final layer needs to output (batch size = 1, number of classes, number of nodes) to get a prediction per node. Getting a prediction per class is simple: just set the last fully connected layer to the number of classes. But how can we get a prediction for each node when node pooling decreases the number of nodes, so the final layer only sees V / (pool_size_1 * pool_size_2) nodes?
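To make the shape issue concrete, here is a minimal sketch of the arithmetic I mean (the sizes and pool factors are hypothetical numbers I picked, not from the notebook):

```python
import numpy as np

# Hypothetical sizes, only to illustrate the shape arithmetic in the question.
M = 1024          # number of nodes
F_in = 8          # features per node
C = 5             # number of classes
pool_size_1 = 4   # coarsening factor after the first gconv layer
pool_size_2 = 2   # coarsening factor after the second gconv layer

V = np.zeros((1, M, F_in))   # (batch size = 1, nodes, node features)
A = np.zeros((M, M))         # adjacency matrix

# After two gconv + pooling stages the node dimension has shrunk:
M_pooled = M // (pool_size_1 * pool_size_2)   # 1024 // 8 = 128 nodes left

# A final fully connected layer of size C gives one prediction per graph,
# i.e. shape (1, C) -- but for node classification I would want something
# like (1, C, M), one class score per *original* node, which no longer
# matches the 128 pooled nodes.
```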
The notebook example expands the input since features = 1, so what is passed to gconv is (N, M, F=1), i.e. graph signals in a batch, number of nodes, and features = 1. For the node classification case, would the rank still be 2, since the data only has the number of nodes (acting as the batch) and the features, and should it still be expanded? It doesn't seem like there should be any M dimension when not dealing with graph signals.
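And here is what I mean by the rank/expansion question, again with made-up sizes:

```python
import numpy as np

# Graph-signal classification, as in the notebook: N signals over M nodes,
# each node carrying a single feature, so the input is expanded to rank 3.
N, M = 32, 1024
signals = np.zeros((N, M))                  # rank 2: (signals in batch, nodes)
signals = np.expand_dims(signals, axis=2)   # rank 3: (N, M, F=1)

# Node classification on a single graph: the nodes themselves play the role
# of the batch, so the natural input is already rank 2 and there is no
# separate M axis of graph signals.
F_in = 8
V = np.zeros((M, F_in))   # (number of nodes, node features)
A = np.zeros((M, M))      # (number of nodes, number of nodes)
# Should this V still be expanded before being fed to gconv, or is rank 2 fine?
```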
Thank you for the great article!