
Hypergraph checks #249

Merged
merged 42 commits into from
Nov 9, 2023

Conversation

levtelyatnikov (Collaborator):

Overall:
I have reviewed the hypergraph implementation and made necessary modifications to ensure consistent patterns. Additionally, I have updated tutorials and test files to align with these changes.

Specifically:
Previously, every model produced the 'desired' hidden representation directly: if a model had to perform a graph classification task, the pooling operation was executed inside the model, so the model output was the pooled signal.

y_pred = model(x_0_init, incidence)

Now, the models output the last hidden representations instead. For hypergraph models, the output includes both node and hyperedge representations.

x_0, x_1 = model(x_0_init, incidence)

We have introduced a separate, lightweight pooling/readout class that is responsible for producing the desired output for a particular downstream task. This class is currently demonstrated in the tutorials.

pooling = readout(task_level="graph")
x_0, x_1 = model(x_0_init, incidence)
y_pred = pooling(x_0, x_1)
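The new pattern can be sketched end-to-end. This is a minimal illustration, not the TopoModelX implementation: `Readout` and `model` here are hypothetical stand-ins, and NumPy replaces torch so the example stays self-contained; a graph-level readout simply mean-pools the node features.

```python
import numpy as np

class Readout:
    """Hypothetical readout: pools hidden states for a downstream task."""
    def __init__(self, task_level="graph"):
        self.task_level = task_level

    def __call__(self, x_0, x_1):
        if self.task_level == "graph":
            # Graph-level task: pool node features into a single vector.
            return x_0.mean(axis=0)
        # Node-level task: return per-node features unchanged.
        return x_0

def model(x_0_init, incidence):
    """Stand-in for a model returning the last hidden representations
    of nodes (x_0) and hyperedges (x_1)."""
    x_1 = incidence.T @ x_0_init   # hyperedge features from incident nodes
    x_0 = incidence @ x_1          # node features from incident hyperedges
    return x_0, x_1

x_0_init = np.ones((4, 2))  # 4 nodes, 2 features each
incidence = np.array([[1., 0.], [1., 0.], [0., 1.], [1., 1.]])

pooling = Readout(task_level="graph")
x_0, x_1 = model(x_0_init, incidence)
y_pred = pooling(x_0, x_1)  # one pooled feature vector for the whole graph
```

A node-level task would instead return `x_0` unchanged, which is why the readout, not the model, now decides the output shape.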

levtelyatnikov linked an issue on Nov 6, 2023 that may be closed by this pull request.

ninamiolane (Collaborator) left a comment:

Nice! There seem to be merge conflicts though: could you resolve them so that the tests run before we review?


codecov bot commented Nov 7, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (e160cb8) 97.49% compared to head (62513dc) 97.45%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #249      +/-   ##
==========================================
- Coverage   97.49%   97.45%   -0.05%     
==========================================
  Files          58       57       -1     
  Lines        2155     2121      -34     
==========================================
- Hits         2101     2067      -34     
  Misses         54       54              
Files Coverage Δ
topomodelx/nn/hypergraph/allset.py 100.00% <100.00%> (ø)
topomodelx/nn/hypergraph/allset_layer.py 88.88% <100.00%> (ø)
topomodelx/nn/hypergraph/allset_transformer.py 100.00% <100.00%> (ø)
...pomodelx/nn/hypergraph/allset_transformer_layer.py 94.33% <100.00%> (ø)
topomodelx/nn/hypergraph/dhgcn.py 100.00% <100.00%> (ø)
topomodelx/nn/hypergraph/dhgcn_layer.py 95.38% <100.00%> (ø)
topomodelx/nn/hypergraph/hmpnn.py 100.00% <100.00%> (ø)
topomodelx/nn/hypergraph/hmpnn_layer.py 100.00% <100.00%> (ø)
topomodelx/nn/hypergraph/hnhn.py 100.00% <100.00%> (ø)
topomodelx/nn/hypergraph/hnhn_layer.py 96.82% <100.00%> (-0.10%) ⬇️
... and 12 more

☔ View full report in Codecov by Sentry.

ninamiolane (Collaborator) left a comment:

Perfect! Thank you for this great contribution, and thanks especially for respecting the naming conventions of our variables. Just minor comments: address them and then we can merge.

return torch.sigmoid(self.linear(pooled_x))[0]
x_0, x_1 = layer(x_0, incidence_1)

return (x_0, x_1)
Collaborator: No need for the parentheses in (, ); return x_0, x_1 already returns a tuple.

mlp_num_layers : int, default: 2
Number of layers in the MLP.
mlp_activation : torch.nn.Module, default: None
Activation function for the MLP.
mlp_dropout : float, default: 0.0
Dropout probability for the MLP.
Collaborator: Missing parameter type, e.g. dropout : float, and note the space before the colon, per numpy's docstring conventions, which we use here.

mlp_dropout : float, default: 0.0
Dropout probability for the MLP.
mlp_activation:
Collaborator: Ditto.
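For reference, the numpydoc convention the comments above request can be sketched as follows. The function and its parameters are illustrative, not the actual layer signature; note the space before each colon in the Parameters section.

```python
def forward_example(x, mlp_dropout=0.0, mlp_activation=None):
    """Illustrative function with numpydoc-style parameter entries.

    Parameters
    ----------
    x : array-like
        Input features.
    mlp_dropout : float, default: 0.0
        Dropout probability for the MLP.
    mlp_activation : callable, optional
        Activation function for the MLP.
    """
    return x
```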


return x
return (x_0, x_1)
Collaborator: Ditto.

Number of AllSet layers in the network.
heads : int, default: 4
Number of attention heads.
dropout : float, default=0.2
dropout : float
Collaborator: Ditto.

self.input_drop = torch.nn.Dropout(input_drop)
self.layer_drop = torch.nn.Dropout(layer_drop)

# Define initial linear layer
Collaborator: Ditto.

x_0 = self.layer_drop(x_0)
x_0 = torch.nn.functional.relu(x_0)

return (x_0, x_1)
Collaborator: Ditto.

# Update node features using GIN update equation
return self.nn((1 + self.eps) * x_0 + m_1_0)
x_0 = self.linear((1 + self.eps) * x_0 + m_1_0)
return (x_0, x_1)
Collaborator: Ditto.
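The quoted line above applies the GIN update rule, combining a node's own features scaled by (1 + eps) with its aggregated messages before a learned map. A minimal NumPy sketch, with an identity matrix standing in for the learned self.linear:

```python
import numpy as np

def gin_update(x_0, m_1_0, eps=0.0, weight=None):
    """GIN-style vertex update: (1 + eps) * self features plus
    aggregated messages, followed by a linear map."""
    if weight is None:
        # Identity stands in for the learned linear layer.
        weight = np.eye(x_0.shape[1])
    return ((1 + eps) * x_0 + m_1_0) @ weight

x_0 = np.array([[1.0, 2.0]])     # one node, two features
m_1_0 = np.array([[0.5, 0.5]])   # aggregated hyperedge-to-node messages
out = gin_update(x_0, m_1_0, eps=0.1)
# 1.1 * [1, 2] + [0.5, 0.5] = [1.6, 2.7]
```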

x_0 = self.layer_drop(x_0)
x_0 = torch.nn.functional.relu(x_0)

return (x_0, x_1)
Collaborator: Ditto.

m_1_0 = self.edge2vertex(x_1, incidence_1)
x_0 = x_0 + m_1_0

return (x_0, x_1)
Collaborator: Ditto.

ninamiolane (Collaborator) left a comment:

Awesome, thanks, and thanks for adapting to the new CI!

Development

Successfully merging this pull request may close these issues.

Review nn/hypergraph models
2 participants