Hypergraph checks #249
Conversation
Nice! There seem to be GitHub merge conflicts though: could you resolve them, so that the tests run -- before we review?
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

    @@            Coverage Diff             @@
    ##             main     #249      +/-   ##
    ==========================================
    - Coverage   97.49%   97.45%   -0.05%
    ==========================================
      Files          58       57       -1
      Lines        2155     2121      -34
    ==========================================
    - Hits         2101     2067      -34
      Misses         54       54

☔ View full report in Codecov by Sentry.
Perfect! Thank you for this great contribution, and thanks especially for respecting the naming conventions of our variables. Just minor comments: you can address them, and then we can merge.
topomodelx/nn/hypergraph/allset.py (Outdated)

    return torch.sigmoid(self.linear(pooled_x))[0]
    x_0, x_1 = layer(x_0, incidence_1)
    return (x_0, x_1)
No need for (, ): in Python, return x_0, x_1 already returns the same tuple.
    mlp_num_layers : int, default: 2
        Number of layers in the MLP.
    mlp_activation : torch.nn.Module, default: None
        Activation function for the MLP.
    mlp_dropout : float, default: 0.0
        Dropout probability for the MLP.
Missing parameter type: e.g. dropout : float.
Also note the space before the colon, per numpy's docstring conventions, which we use here.
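For reference, a minimal numpy-style (numpydoc) parameters block looks like the sketch below; the parameter names are illustrative, not taken from this PR:

    Parameters
    ----------
    dropout : float, default: 0.0
        Dropout probability applied within the MLP.
    activation : torch.nn.Module, default: None
        Activation function; note the space on both sides of the colon.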
topomodelx/nn/hypergraph/allset.py (Outdated)

    mlp_dropout : float, default: 0.0
        Dropout probability for the MLP.
    mlp_activation:
Ditto.
    return x
    return (x_0, x_1)
Ditto
        Number of AllSet layers in the network.
    heads : int, default: 4
        Number of attention heads.
    dropout : float, default=0.2
    dropout : float
Ditto
topomodelx/nn/hypergraph/unigin.py (Outdated)

    self.input_drop = torch.nn.Dropout(input_drop)
    self.layer_drop = torch.nn.Dropout(layer_drop)

    # Define initial linear layer
Ditto
topomodelx/nn/hypergraph/unigin.py (Outdated)

    x_0 = self.layer_drop(x_0)
    x_0 = torch.nn.functional.relu(x_0)

    return (x_0, x_1)
Ditto
    # Update node features using GIN update equation
    return self.nn((1 + self.eps) * x_0 + m_1_0)
    x_0 = self.linear((1 + self.eps) * x_0 + m_1_0)
    return (x_0, x_1)
Ditto
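For context, the snippet above applies the GIN update of Xu et al.; as a sketch in the notation of this diff, with \phi denoting the learnable map (self.nn before, self.linear after):

    x_0' = \phi\big((1 + \epsilon)\, x_0 + m_{1 \to 0}\big)

where m_{1 \to 0} is the message aggregated from hyperedges onto nodes and \epsilon is the (possibly learnable) GIN parameter.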
topomodelx/nn/hypergraph/unisage.py (Outdated)

    x_0 = self.layer_drop(x_0)
    x_0 = torch.nn.functional.relu(x_0)

    return (x_0, x_1)
Ditto
    m_1_0 = self.edge2vertex(x_1, incidence_1)
    x_0 = x_0 + m_1_0

    return (x_0, x_1)
Ditto
Awesome, thanks, and thanks for adapting to the new CI!
Overall:
I have reviewed the hypergraph implementations and made the modifications necessary to ensure consistent patterns across models. Additionally, I have updated the tutorials and test files to align with these changes.
Specifically:
Previously, each model produced the "desired" output directly: if a model was used for a graph classification task, the pooling operation was executed inside the model, so the model's output was the pooled signal.
Now, the models output the last hidden representations; for hypergraph models, this includes both node and hyperedge representations.
We have introduced a separate, lightweight pooling/readout class responsible for producing the desired output for each particular downstream task. This new pooling/readout class is currently introduced in the tutorials. A sketch of the new pattern is shown below.
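A minimal sketch of the new pattern, using hypothetical class names (TinyHypergraphModel, PoolingReadout) since the actual models and the readout class live in topomodelx and the tutorials:

    import torch

    class TinyHypergraphModel(torch.nn.Module):
        """Stand-in for a hypergraph model under the new convention:
        forward returns the last hidden representations of nodes (x_0)
        and hyperedges (x_1), with no pooling inside the model."""

        def __init__(self, in_channels, hidden_channels):
            super().__init__()
            self.vertex2edge = torch.nn.Linear(in_channels, hidden_channels)
            self.edge2vertex = torch.nn.Linear(hidden_channels, hidden_channels)

        def forward(self, x_0, incidence_1):
            # incidence_1: (num_nodes, num_edges) node-hyperedge incidence matrix
            x_1 = torch.relu(self.vertex2edge(incidence_1.T @ x_0))  # hyperedge features
            x_0 = torch.relu(self.edge2vertex(incidence_1 @ x_1))    # node features
            return x_0, x_1

    class PoolingReadout(torch.nn.Module):
        """Separate, lightweight readout head: pools node features and
        maps them to task-specific logits (here, graph classification)."""

        def __init__(self, hidden_channels, num_classes):
            super().__init__()
            self.linear = torch.nn.Linear(hidden_channels, num_classes)

        def forward(self, x_0):
            pooled_x = x_0.mean(dim=0)  # mean-pool nodes into one graph vector
            return self.linear(pooled_x)

    x_0 = torch.randn(5, 16)                           # 5 nodes, 16 features
    incidence_1 = torch.randint(0, 2, (5, 3)).float()  # 3 hyperedges
    model = TinyHypergraphModel(16, 32)
    readout = PoolingReadout(32, num_classes=2)

    x_0, x_1 = model(x_0, incidence_1)  # model returns hidden representations
    logits = readout(x_0)               # pooling/readout is now a separate step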