From 1a6d330fcfe4a50e5e760166e72e425fa9e78b0a Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Sun, 3 Nov 2024 22:43:30 +0000 Subject: [PATCH] build based on e48637e --- .../dev/.documenter-siteinfo.json | 2 +- GraphNeuralNetworks/dev/api/basic/index.html | 6 +-- GraphNeuralNetworks/dev/api/conv/index.html | 40 +++++++++---------- .../dev/api/heteroconv/index.html | 2 +- GraphNeuralNetworks/dev/api/pool/index.html | 6 +-- .../dev/api/samplers/index.html | 2 +- .../dev/api/temporalconv/index.html | 12 +++--- GraphNeuralNetworks/dev/datasets/index.html | 2 +- GraphNeuralNetworks/dev/dev/index.html | 2 +- GraphNeuralNetworks/dev/gsoc/index.html | 2 +- GraphNeuralNetworks/dev/home/index.html | 2 +- GraphNeuralNetworks/dev/index.html | 2 +- GraphNeuralNetworks/dev/models/index.html | 2 +- 13 files changed, 41 insertions(+), 41 deletions(-) diff --git a/GraphNeuralNetworks/dev/.documenter-siteinfo.json b/GraphNeuralNetworks/dev/.documenter-siteinfo.json index 1729ae5ac..20ece2eba 100644 --- a/GraphNeuralNetworks/dev/.documenter-siteinfo.json +++ b/GraphNeuralNetworks/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-11-03T22:16:24","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-11-03T22:43:22","documenter_version":"1.7.0"}} \ No newline at end of file diff --git a/GraphNeuralNetworks/dev/api/basic/index.html b/GraphNeuralNetworks/dev/api/basic/index.html index 95e4bba05..4a723f55a 100644 --- a/GraphNeuralNetworks/dev/api/basic/index.html +++ b/GraphNeuralNetworks/dev/api/basic/index.html @@ -9,7 +9,7 @@ julia> dotdec(g, rand(2, 5)) 1×6 Matrix{Float64}: - 0.345098 0.458305 0.106353 0.345098 0.458305 0.106353source
GraphNeuralNetworks.GNNChainType
GNNChain(layers...)
+ 0.345098  0.458305  0.106353  0.345098  0.458305  0.106353
source
GraphNeuralNetworks.GNNChainType
GNNChain(layers...)
 GNNChain(name = layer, ...)

Collects multiple layers / functions to be called in sequence on a given input graph and input node features.

It allows composing layers in a sequential fashion, as Flux.Chain does, propagating the output of each layer to the next one. In addition, GNNChain handles the input graph as well, providing it as a first argument only to layers subtyping the GNNLayer abstract type.

GNNChain supports indexing and slicing, m[2] or m[1:end-1], and if names are given, m[:name] == m[1] etc.

Examples

julia> using Flux, GraphNeuralNetworks
 
 julia> m = GNNChain(GCNConv(2=>5), 
@@ -41,7 +41,7 @@
  2.90053  2.90053  2.90053  2.90053  2.90053  2.90053
 
 julia> m2[:enc](g, x) == m(g, x)
-true
source
GraphNeuralNetworks.GNNLayerType
abstract type GNNLayer end

An abstract type from which graph neural network layers are derived.

See also GNNChain.

source
GraphNeuralNetworks.WithGraphType
WithGraph(model, g::GNNGraph; traingraph=false)

A type wrapping the model and tying it to the graph g. In the forward pass, it can only take feature arrays as inputs, returning model(g, x...; kws...).

If traingraph=false, the graph's parameters won't be part of the trainable parameters in the gradient updates.

Examples

g = GNNGraph([1,2,3], [2,3,1])
+true
source
GraphNeuralNetworks.GNNLayerType
abstract type GNNLayer end

An abstract type from which graph neural network layers are derived.

See also GNNChain.
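As an illustration only (not part of the original docstring), a minimal custom layer can subtype GNNLayer so that GNNChain passes it the graph as first argument; the layer name below is hypothetical:

using Flux, GraphNeuralNetworks

struct NodeCountScale <: GNNLayer end   # toy layer with no trainable parameters

# Because it subtypes GNNLayer, GNNChain calls it as l(g, x); here the graph is
# only used to read the number of nodes.
(l::NodeCountScale)(g::GNNGraph, x::AbstractMatrix) = x ./ g.num_nodes

g = rand_graph(4, 6)
x = rand(Float32, 2, g.num_nodes)
m = GNNChain(NodeCountScale(), Dense(2 => 3))   # plain Flux layers receive only the features
y = m(g, x)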

source
GraphNeuralNetworks.WithGraphType
WithGraph(model, g::GNNGraph; traingraph=false)

A type wrapping the model and tying it to the graph g. In the forward pass, it can only take feature arrays as inputs, returning model(g, x...; kws...).

If traingraph=false, the graph's parameters won't be part of the trainable parameters in the gradient updates.

Examples

g = GNNGraph([1,2,3], [2,3,1])
 x = rand(Float32, 2, 3)
 model = SAGEConv(2 => 3)
 wg = WithGraph(model, g)
@@ -51,4 +51,4 @@
 g2 = GNNGraph([1,1,2,3], [2,4,1,1])
 x2 = rand(Float32, 2, 4)
 # WithGraph will ignore the internal graph if fed with a new one. 
-@assert wg(g2, x2) == model(g2, x2)
source
+@assert wg(g2, x2) == model(g2, x2)source diff --git a/GraphNeuralNetworks/dev/api/conv/index.html b/GraphNeuralNetworks/dev/api/conv/index.html index d2690a1e8..2e3322b0b 100644 --- a/GraphNeuralNetworks/dev/api/conv/index.html +++ b/GraphNeuralNetworks/dev/api/conv/index.html @@ -10,7 +10,7 @@ l = AGNNConv(init_beta=2.0f0) # forward pass -y = l(g, x) source
GraphNeuralNetworks.CGConvType
CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)
+y = l(g, x)   
source
GraphNeuralNetworks.CGConvType
CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)
 CGConv(in => out, ...)

The crystal graph convolutional layer from the paper Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \sum_{j\in N(i)}\sigma(W_f \mathbf{z}_{ij} + \mathbf{b}_f)\, act(W_s \mathbf{z}_{ij} + \mathbf{b}_s)\]

where $\mathbf{z}_{ij}$ is the node and edge features concatenation $[\mathbf{x}_i; \mathbf{x}_j; \mathbf{e}_{j\to i}]$ and $\sigma$ is the sigmoid function. The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features.

If ein is not given, it is assumed that no edge features are passed as input in the forward pass.

  • out: The dimension of output node features.
  • act: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.
  • residual: Add a residual connection.

Examples

g = rand_graph(5, 6)
 x = rand(Float32, 2, g.num_nodes)
 e = rand(Float32, 3, g.num_edges)
@@ -20,7 +20,7 @@
 
 # No edge features
 l = CGConv(2 => 4, tanh)
-y = l(g, x)    # size: (4, num_nodes)
source
GraphNeuralNetworks.ChebConvType
ChebConv(in => out, k; bias=true, init=glorot_uniform)

Chebyshev spectral graph convolutional layer from the paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

Implements

\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[\begin{aligned} +y = l(g, x) # size: (4, num_nodes)

source
GraphNeuralNetworks.ChebConvType
ChebConv(in => out, k; bias=true, init=glorot_uniform)

Chebyshev spectral graph convolutional layer from the paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

Implements

\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[\begin{aligned} Z^{(0)} &= X \\ Z^{(1)} &= \hat{L} X \\ Z^{(k)} &= 2 \hat{L} Z^{(k-1)} - Z^{(k-2)} @@ -34,7 +34,7 @@ l = ChebConv(3 => 5, 5) # forward pass -y = l(g, x) # size: 5 × num_nodes

source
GraphNeuralNetworks.DConvType
DConv(ch::Pair{Int, Int}, k::Int; init = glorot_uniform, bias = true)

Diffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.

Arguments

  • ch: Pair of input and output dimensions.
  • k: Number of diffusion steps.
  • init: Weights' initializer. Default glorot_uniform.
  • bias: Add learnable bias. Default true.

Examples

julia> g = GNNGraph(rand(10, 10), ndata = rand(Float32, 2, 10));
+y = l(g, x)       # size:  5 × num_nodes
source
GraphNeuralNetworks.DConvType
DConv(ch::Pair{Int, Int}, k::Int; init = glorot_uniform, bias = true)

Diffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.

Arguments

  • ch: Pair of input and output dimensions.
  • k: Number of diffusion steps.
  • init: Weights' initializer. Default glorot_uniform.
  • bias: Add learnable bias. Default true.

Examples

julia> g = GNNGraph(rand(10, 10), ndata = rand(Float32, 2, 10));
 
 julia> dconv = DConv(2 => 4, 4)
 DConv(2 => 4, 4)
@@ -42,7 +42,7 @@
 julia> y = dconv(g, g.ndata.x);
 
 julia> size(y)
-(4, 10)
source
GraphNeuralNetworks.EGNNConvType
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
+(4, 10)
source
GraphNeuralNetworks.EGNNConvType
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
 EGNNConv(in => out; hidden_size=2in, residual=false)

Equivariant Graph Convolutional Layer from the paper E(n) Equivariant Graph Neural Networks.

The layer performs the following operation:

\[\begin{aligned} \mathbf{m}_{j\to i} &=\phi_e(\mathbf{h}_i, \mathbf{h}_j, \lVert\mathbf{x}_i-\mathbf{x}_j\rVert^2, \mathbf{e}_{j\to i}),\\ \mathbf{x}_i' &= \mathbf{x}_i + C_i\sum_{j\in\mathcal{N}(i)}(\mathbf{x}_i-\mathbf{x}_j)\phi_x(\mathbf{m}_{j\to i}),\\ @@ -52,7 +52,7 @@ h = randn(Float32, 5, g.num_nodes) x = randn(Float32, 3, g.num_nodes) egnn = EGNNConv(5 => 6, 10) -hnew, xnew = egnn(g, h, x)

source
GraphNeuralNetworks.EdgeConvType
EdgeConv(nn; aggr=max)

Edge convolutional layer from the paper Dynamic Graph CNN for Learning on Point Clouds.

Performs the operation

\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]

where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • nn: A (possibly learnable) function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Examples:

# create data
+hnew, xnew = egnn(g, h, x)
source
GraphNeuralNetworks.EdgeConvType
EdgeConv(nn; aggr=max)

Edge convolutional layer from the paper Dynamic Graph CNN for Learning on Point Clouds.

Performs the operation

\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]

where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • nn: A (possibly learnable) function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -63,7 +63,7 @@
 l = EdgeConv(Dense(2 * in_channel, out_channel), aggr = +)
 
 # forward pass
-y = l(g, x)
source
GraphNeuralNetworks.GATConvType
GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
+y = l(g, x)
source
GraphNeuralNetworks.GATConvType
GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
 GATConv((in, ein) => out, ...)

Graph attentional layer from the paper Graph Attention Networks.

Implements the operation

\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W \mathbf{x}_j\]

where the attention coefficients $\alpha_{ij}$ are given by

\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_j]))\]

with $z_i$ a normalization factor.

If ein > 0 is given, edge features of dimension ein will be expected in the forward pass, and the attention coefficients will be calculated as

\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))\]
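A hedged usage sketch for this edge-feature variant (mirroring the GATv2Conv example further below; the add_self_loops = false setting and the output-size comment are assumptions, not taken from the original docstring):

using Flux, GraphNeuralNetworks
in_channel, ein, out_channel = 3, 2, 5
g = rand_graph(4, 6)
x = rand(Float32, in_channel, g.num_nodes)
e = rand(Float32, ein, g.num_edges)
l = GATConv((in_channel, ein) => out_channel, add_self_loops = false, heads = 2)
y = l(g, x, e)    # expected size: (out_channel * heads, g.num_nodes) with concat = true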

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
  • out: The dimension of output node features.
  • σ: Activation function. Default identity.
  • bias: Learn the additive bias if true. Default true.
  • heads: Number of attention heads. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • negative_slope: The parameter of LeakyReLU. Default 0.2.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default true.
  • dropout: Dropout probability on the normalized attention coefficient. Default 0.0.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
@@ -76,7 +76,7 @@
 l = GATConv(in_channel => out_channel, add_self_loops = false, bias = false; heads=2, concat=true)
 
 # forward pass
-y = l(g, x)       
source
GraphNeuralNetworks.GATv2ConvType
GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
+y = l(g, x)       
source
GraphNeuralNetworks.GATv2ConvType
GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
 GATv2Conv((in, ein) => out, ...)

GATv2 attentional layer from the paper How Attentive are Graph Attention Networks?.

Implements the operation

\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W_1 \mathbf{x}_j\]

where the attention coefficients $\alpha_{ij}$ are given by

\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_2 \mathbf{x}_i + W_1 \mathbf{x}_j))\]

with $z_i$ a normalization factor.

If ein > 0 is given, edge features of dimension ein will be expected in the forward pass, and the attention coefficients will be calculated as

\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_3 \mathbf{e}_{j\to i} + W_2 \mathbf{x}_i + W_1 \mathbf{x}_j)).\]

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
  • out: The dimension of output node features.
  • σ: Activation function. Default identity.
  • bias: Learn the additive bias if true. Default true.
  • heads: Number of attention heads. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • negative_slope: The parameter of LeakyReLU. Default 0.2.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default true.
  • dropout: Dropout probability on the normalized attention coefficient. Default 0.0.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
@@ -93,7 +93,7 @@
 e = randn(Float32, ein, length(s))
 
 # forward pass
-y = l(g, x, e)    
source
GraphNeuralNetworks.GCNConvType
GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])

Graph convolutional layer from the paper Semi-supervised Classification with Graph Convolutional Networks.

Performs the operation

\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]

where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.

If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as

\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]

The input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Forward

(::GCNConv)(g::GNNGraph, x, edge_weight = nothing; norm_fn = d -> 1 ./ sqrt.(d), conv_weight = nothing) -> AbstractMatrix

Takes as input a graph g, a node feature matrix x of size [in, num_nodes], and optionally an edge weight vector. Returns a node feature matrix of size [out, num_nodes].

The norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as an argument. By default, it computes $\frac{1}{\sqrt{d}}$, i.e., the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix instead of the weights stored in the model.

Examples

# create data
+y = l(g, x, e)    
source
GraphNeuralNetworks.GCNConvType
GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])

Graph convolutional layer from the paper Semi-supervised Classification with Graph Convolutional Networks.

Performs the operation

\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]

where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.

If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as

\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]

The input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Forward

(::GCNConv)(g::GNNGraph, x, edge_weight = nothing; norm_fn = d -> 1 ./ sqrt.(d), conv_weight = nothing) -> AbstractMatrix

Takes as input a graph g, a node feature matrix x of size [in, num_nodes], and optionally an edge weight vector. Returns a node feature matrix of size [out, num_nodes].

The norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as an argument. By default, it computes $\frac{1}{\sqrt{d}}$, i.e., the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix instead of the weights stored in the model.
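For instance (a hedged sketch of these keyword arguments, not taken from the original docstring; the alternative normalization is just an example choice):

using Flux, GraphNeuralNetworks
g = rand_graph(5, 6)
x = rand(Float32, 3, g.num_nodes)
l = GCNConv(3 => 5)
y1 = l(g, x; norm_fn = d -> 1 ./ d)    # random-walk style normalization instead of 1/sqrt(d)
W = rand(Float32, 5, 3)                # external weight matrix of size [out, in]
y2 = l(g, x; conv_weight = W)          # convolve with W instead of the stored weights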

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s, t)
@@ -113,7 +113,7 @@
 # Edge weights can also be embedded in the graph.
 g = GNNGraph(s, t, w)
 l = GCNConv(3 => 5, use_edge_weight=true) 
-y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.GINConvType
GINConv(f, ϵ; aggr=+)

Graph Isomorphism convolutional layer from the paper How Powerful are Graph Neural Networks?.

Implements the graph convolution

\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]

where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • f: A (possibly learnable) function acting on node features.
  • ϵ: Weighting factor.

Examples:

# create data
+y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.GINConvType
GINConv(f, ϵ; aggr=+)

Graph Isomorphism convolutional layer from the paper How Powerful are Graph Neural Networks?.

Implements the graph convolution

\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]

where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • f: A (possibly learnable) function acting on node features.
  • ϵ: Weighting factor.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -127,7 +127,7 @@
 l = GINConv(nn, 0.01f0, aggr = mean)
 
 # forward pass
-y = l(g, x)  
source
GraphNeuralNetworks.GMMConvType
GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)

Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. It performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]

where $w^a_{k}(e^a)$ for feature a and kernel k is given by

\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]

$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.

The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: Number of input node features.
  • ein: Number of input edge features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • K: Number of kernels. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • residual: Residual connection. Default false.

Examples

# create data
+y = l(g, x)  
source
GraphNeuralNetworks.GMMConvType
GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)

Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. It performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]

where $w^a_{k}(e^a)$ for feature a and kernel k is given by

\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]

$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.

The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: Number of input node features.
  • ein: Number of input edge features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • K: Number of kernels. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • residual: Residual connection. Default false.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s,t)
@@ -139,7 +139,7 @@
 l = GMMConv((nin, ein) => out, K=K)
 
 # forward pass
-l(g, x, e)
source
GraphNeuralNetworks.GatedGraphConvType
GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)

Gated graph convolution layer from the paper Gated Graph Sequence Neural Networks.

Implements the recursion

\[\begin{aligned} +l(g, x, e)

source
GraphNeuralNetworks.GatedGraphConvType
GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)

Gated graph convolution layer from the paper Gated Graph Sequence Neural Networks.

Implements the recursion

\[\begin{aligned} \mathbf{h}^{(0)}_i &= [\mathbf{x}_i; \mathbf{0}] \\ \mathbf{h}^{(l)}_i &= GRU(\mathbf{h}^{(l-1)}_i, \square_{j \in N(i)} W \mathbf{h}^{(l-1)}_j) \end{aligned}\]

where $\mathbf{h}^{(l)}_i$ denotes the hidden state at the $l$-th GRU step. The dimension of the input $\mathbf{x}_i$ needs to be less than or equal to out.

Arguments

  • out: The dimension of output features.
  • num_layers: The number of recursion steps.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • init: Weight initialization function.

Examples:

# create data
@@ -153,7 +153,7 @@
 l = GatedGraphConv(out_channel, num_layers)
 
 # forward pass
-y = l(g, x)   
source
GraphNeuralNetworks.GraphConvType
GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)

Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.

Performs:

\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples

# create data
+y = l(g, x)   
source
GraphNeuralNetworks.GraphConvType
GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)

Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.

Performs:

\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -165,7 +165,7 @@
 l = GraphConv(in_channel => out_channel, relu, bias = false, aggr = mean)
 
 # forward pass
-y = l(g, x)       
source
GraphNeuralNetworks.MEGNetConvType
MEGNetConv(ϕe, ϕv; aggr=mean)
+y = l(g, x)       
source
GraphNeuralNetworks.MEGNetConvType
MEGNetConv(ϕe, ϕv; aggr=mean)
 MEGNetConv(in => out; aggr=mean)

Convolution layer from the paper Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals. In the forward pass, it takes node features x and edge features e as inputs and returns updated features x' and e' according to

\[\begin{aligned} \mathbf{e}_{i\to j}' = \phi_e([\mathbf{x}_i;\, \mathbf{x}_j;\, \mathbf{e}_{i\to j}]),\\ \mathbf{x}_{i}' = \phi_v([\mathbf{x}_i;\, \square_{j\in \mathcal{N}(i)}\,\mathbf{e}_{j\to i}']). @@ -173,7 +173,7 @@ x = randn(Float32, 3, 10) e = randn(Float32, 3, 30) m = MEGNetConv(3 => 3) -x′, e′ = m(g, x, e)

source
GraphNeuralNetworks.NNConvType
NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)

The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.

Performs the operation

\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]

where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.

Arguments

  • in: The dimension of input node features.
  • out: The dimension of output node features.
  • f: A (possibly learnable) function acting on edge features.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • σ: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

n_in = 3
+x′, e′ = m(g, x, e)
source
GraphNeuralNetworks.NNConvType
NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)

The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.

Performs the operation

\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]

where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.
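A hedged sketch of a compatible f (dimension names are illustrative; here f returns the flattened (out*in, num_edges) form mentioned above):

using Flux, GraphNeuralNetworks
nin, nout, ein = 3, 5, 10
g = rand_graph(4, 6)
x = rand(Float32, nin, g.num_nodes)
e = rand(Float32, ein, g.num_edges)
f = Dense(ein => nout * nin)          # maps each edge feature vector to an (nout*nin)-vector
l = NNConv(nin => nout, f, relu, aggr = +)
y = l(g, x, e)                        # expected size: (nout, g.num_nodes)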

Arguments

  • in: The dimension of input node features.
  • out: The dimension of output node features.
  • f: A (possibly learnable) function acting on edge features.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • σ: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

n_in = 3
 n_in_edge = 10
 n_out = 5
 
@@ -192,7 +192,7 @@
 e = randn(Float32, n_in_edge, g.num_edges)
 
 # forward pass
-y = l(g, x, e)  
source
GraphNeuralNetworks.ResGatedGraphConvType
ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)

The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.

The layer's forward pass is given by

\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]

where the edge gates $\eta_{ij}$ are given by

\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • act: Activation function.
  • init: Weight matrices' initializing function.
  • bias: Learn an additive bias if true.

Examples:

# create data
+y = l(g, x, e)  
source
GraphNeuralNetworks.ResGatedGraphConvType
ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)

The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.

The layer's forward pass is given by

\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]

where the edge gates $\eta_{ij}$ are given by

\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • act: Activation function.
  • init: Weight matrices' initializing function.
  • bias: Learn an additive bias if true.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -203,7 +203,7 @@
 l = ResGatedGraphConv(in_channel => out_channel, tanh, bias = true)
 
 # forward pass
-y = l(g, x)  
source
GraphNeuralNetworks.SAGEConvType
SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)

GraphSAGE convolution layer from the paper Inductive Representation Learning on Large Graphs.

Performs:

\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

# create data
+y = l(g, x)  
source
GraphNeuralNetworks.SAGEConvType
SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)

GraphSAGE convolution layer from the paper Inductive Representation Learning on Large Graphs.

Performs:

\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -214,7 +214,7 @@
 l = SAGEConv(in_channel => out_channel, tanh, bias = false, aggr = +)
 
 # forward pass
-y = l(g, x)   
source
GraphNeuralNetworks.SGConvType
SGConv(int => out, k=1; [bias, init, add_self_loops, use_edge_weight])

SGC layer from the paper Simplifying Graph Convolutional Networks. Performs the operation

\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]

where $\tilde{A}$ is $A + I$.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k : Number of hops k. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.

Examples

# create data
+y = l(g, x)   
source
GraphNeuralNetworks.SGConvType
SGConv(int => out, k=1; [bias, init, add_self_loops, use_edge_weight])

SGC layer from the paper Simplifying Graph Convolutional Networks. Performs the operation

\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]

where $\tilde{A}$ is $A + I$.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k : Number of hops k. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s, t)
@@ -233,7 +233,7 @@
 # Edge weights can also be embedded in the graph.
 g = GNNGraph(s, t, w)
 l = SGConv(3 => 5, add_self_loops = true, use_edge_weight=true) 
-y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.TAGConvType
TAGConv(in => out, k=3; bias=true, init=glorot_uniform, add_self_loops=true, use_edge_weight=false)

TAGConv layer from the paper Topology Adaptive Graph Convolutional Networks. This layer extends the idea of graph convolutions by applying filters that adapt to the topology of the data. It performs the operation:

\[H^{K} = {\sum}_{k=0}^K (D^{-1/2} A D^{-1/2})^{k} X {\Theta}_{k}\]

where A is the adjacency matrix of the graph, D is the degree matrix, X is the input feature matrix, and ${\Theta}_{k}$ is a unique weight matrix for each hop k.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Maximum number of hops to consider. Default is 3.
  • bias: Whether to include a learnable bias term. Default is true.
  • init: Initialization function for the weights. Default is glorot_uniform.
  • add_self_loops: Whether to add self-loops to the adjacency matrix. Default is true.
  • use_edge_weight: If true, edge weights are considered in the computation (if available). Default is false.

Examples

# Example graph data
+y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.TAGConvType
TAGConv(in => out, k=3; bias=true, init=glorot_uniform, add_self_loops=true, use_edge_weight=false)

TAGConv layer from the paper Topology Adaptive Graph Convolutional Networks. This layer extends the idea of graph convolutions by applying filters that adapt to the topology of the data. It performs the operation:

\[H^{K} = {\sum}_{k=0}^K (D^{-1/2} A D^{-1/2})^{k} X {\Theta}_{k}\]

where A is the adjacency matrix of the graph, D is the degree matrix, X is the input feature matrix, and ${\Theta}_{k}$ is a unique weight matrix for each hop k.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Maximum number of hops to consider. Default is 3.
  • bias: Whether to include a learnable bias term. Default is true.
  • init: Initialization function for the weights. Default is glorot_uniform.
  • add_self_loops: Whether to add self-loops to the adjacency matrix. Default is true.
  • use_edge_weight: If true, edge weights are considered in the computation (if available). Default is false.

Examples

# Example graph data
 s = [1, 1, 2, 3]
 t = [2, 3, 1, 1]
 g = GNNGraph(s, t)  # Create a graph
@@ -243,7 +243,7 @@
 l = TAGConv(3 => 5, k=3; add_self_loops=true)
 
 # Apply the TAGConv layer
-y = l(g, x)  # Output size: 5 × num_nodes
source
GraphNeuralNetworks.TransformerConvType
TransformerConv((in, ein) => out; [heads, concat, init, add_self_loops, bias_qkv,
+y = l(g, x)  # Output size: 5 × num_nodes
source
GraphNeuralNetworks.TransformerConvType
TransformerConv((in, ein) => out; [heads, concat, init, add_self_loops, bias_qkv,
     bias_root, root_weight, gating, skip_connection, batch_norm, ff_channels]))

The transformer-like multi-head attention convolutional operator from the paper Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification, which also considers edge features. It further contains options to be configured as the transformer-like convolutional operator from the paper Attention, Learn to Solve Routing Problems!, including a successive feed-forward network as well as skip layers and batch normalization.

The layer's basic forward pass is given by

\[x_i' = W_1x_i + \sum_{j\in N(i)} \alpha_{ij} (W_2 x_j + W_6e_{ij})\]

where the attention scores are

\[\alpha_{ij} = \mathrm{softmax}\left(\frac{(W_3x_i)^T(W_4x_j+ W_6e_{ij})}{\sqrt{d}}\right).\]

Optionally, the aggregated value can be combined with the transformed root node features through a gating mechanism:

\[x'_i = \beta_i W_1 x_i + (1 - \beta_i) \underbrace{\left(\sum_{j \in \mathcal{N}(i)} \alpha_{i,j} W_2 x_j \right)}_{=m_i}\]

with

\[\beta_i = \textrm{sigmoid}(W_5^{\top} [ W_1 x_i, m_i, W_1 x_i - m_i ]).\]


Arguments

  • in: Dimension of input features, which also corresponds to the dimension of the output features.
  • ein: Dimension of the edge features; if 0, no edge features will be used.
  • out: Dimension of the output.
  • heads: Number of heads in output. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • init: Weight matrices' initializing function. Default glorot_uniform.
  • add_self_loops: Add self loops to the input graph. Default false.
  • bias_qkv: If set, bias is used in the key, query and value transformations for nodes. Default true.
  • bias_root: If set, the layer will also learn an additive bias for the root when root weight is used. Default true.
  • root_weight: If set, the layer will add the transformed root node features to the output. Default true.
  • gating: If set, will combine aggregation and transformed root node features by a gating mechanism. Default false.
  • skip_connection: If set, a skip connection will be made from the input and added to the output. Default false.
  • batch_norm: If set, a batch normalization will be applied to the output. Default false.
  • ff_channels: If positive, a feed-forward NN is appended, with the first layer having the given number of hidden nodes; this NN also gets a skip connection and batch normalization if the respective parameters are set. Default: 0. (See the sketch below.)
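A hedged sketch of enabling the optional components listed above (the particular keyword combination and sizes are assumptions, not taken from the original docstring):

using Flux, GraphNeuralNetworks
in_channel, ein = 3, 2
g = rand_graph(4, 6)
x = rand(Float32, in_channel, g.num_nodes)
e = rand(Float32, ein, g.num_edges)
l = TransformerConv((in_channel, ein) => in_channel; heads = 1,
                    ff_channels = 16, skip_connection = true, batch_norm = true)
y = l(g, x, e)    # expected size: (in_channel, g.num_nodes)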

Examples

N, in_channel, out_channel = 4, 3, 5
@@ -252,4 +252,4 @@
 l = TransformerConv((in_channel, ein) => in_channel; heads, gating = true, bias_qkv = true)
 x = rand(Float32, in_channel, N)
 e = rand(Float32, ein, g.num_edges)
-l(g, x, e)
source
+l(g, x, e)source diff --git a/GraphNeuralNetworks/dev/api/heteroconv/index.html b/GraphNeuralNetworks/dev/api/heteroconv/index.html index f0ac6b4c8..7eccbbd66 100644 --- a/GraphNeuralNetworks/dev/api/heteroconv/index.html +++ b/GraphNeuralNetworks/dev/api/heteroconv/index.html @@ -13,4 +13,4 @@ julia> y = layer(g, x); # output is a named tuple julia> size(y.A) == (32, 10) && size(y.B) == (32, 15) -truesource +truesource diff --git a/GraphNeuralNetworks/dev/api/pool/index.html b/GraphNeuralNetworks/dev/api/pool/index.html index fc46df86c..2556e047f 100644 --- a/GraphNeuralNetworks/dev/api/pool/index.html +++ b/GraphNeuralNetworks/dev/api/pool/index.html @@ -13,7 +13,7 @@ u = pool(g, g.ndata.x) -@assert size(u) == (chout, g.num_graphs)source
GraphNeuralNetworks.GlobalPoolType
GlobalPool(aggr)

Global pooling layer for graph neural networks. Takes a graph and node features as inputs and performs the operation

\[\mathbf{u}_V = \square_{i \in V} \mathbf{x}_i\]

where $V$ is the set of nodes of the input graph and the type of aggregation represented by $\square$ is selected by the aggr argument. Commonly used aggregations are mean, max, and +.

See also reduce_nodes.

Examples

using Flux, GraphNeuralNetworks, Graphs
+@assert size(u) == (chout, g.num_graphs)
source
GraphNeuralNetworks.GlobalPoolType
GlobalPool(aggr)

Global pooling layer for graph neural networks. Takes a graph and node features as inputs and performs the operation

\[\mathbf{u}_V = \square_{i \in V} \mathbf{x}_i\]

where $V$ is the set of nodes of the input graph and the type of aggregation represented by $\square$ is selected by the aggr argument. Commonly used aggregations are mean, max, and +.

See also reduce_nodes.

Examples

using Flux, GraphNeuralNetworks, Graphs
 
 pool = GlobalPool(mean)
 
@@ -24,7 +24,7 @@
 
 g = Flux.batch([GNNGraph(erdos_renyi(10, 4)) for _ in 1:5])
 X = rand(32, 50)
-pool(g, X) # => 32x5 matrix
source
GraphNeuralNetworks.Set2SetType
Set2Set(n_in, n_iters, n_layers = 1)

Set2Set layer from the paper Order Matters: Sequence to sequence for sets.

For each graph in the batch, the layer computes an output vector of size 2*n_in by iterating the following steps n_iters times:

\[\mathbf{q} = \mathrm{LSTM}(\mathbf{q}_{t-1}^*) +pool(g, X) # => 32x5 matrix

source
GraphNeuralNetworks.Set2SetType
Set2Set(n_in, n_iters, n_layers = 1)

Set2Set layer from the paper Order Matters: Sequence to sequence for sets.

For each graph in the batch, the layer computes an output vector of size 2*n_in by iterating the following steps n_iters times:

\[\mathbf{q} = \mathrm{LSTM}(\mathbf{q}_{t-1}^*) \alpha_{i} = \frac{\exp(\mathbf{q}^T \mathbf{x}_i)}{\sum_{j=1}^N \exp(\mathbf{q}^T \mathbf{x}_j)} \mathbf{r} = \sum_{i=1}^N \alpha_{i} \mathbf{x}_i -\mathbf{q}^*_t = [\mathbf{q}; \mathbf{r}]\]

where N is the number of nodes in the graph, LSTM is a Long-Short-Term-Memory network with n_layers layers, input size 2*n_in and output size n_in.

Given a batch of graphs g and node features x, the layer returns a matrix of size (2*n_in, n_graphs). ```

source
GraphNeuralNetworks.TopKPoolType
TopKPool(adj, k, in_channel)

Top-k pooling layer.

Arguments

  • adj: Adjacency matrix of a graph.
  • k: Top-k nodes are selected to pool together.
  • in_channel: The dimension of input channel.
source
+\mathbf{q}^*_t = [\mathbf{q}; \mathbf{r}]\]

where N is the number of nodes in the graph and LSTM is a Long Short-Term Memory network with n_layers layers, input size 2*n_in, and output size n_in.

Given a batch of graphs g and node features x, the layer returns a matrix of size (2*n_in, n_graphs).
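A hedged usage sketch (not part of the original docstring; graph sizes are arbitrary):

using Flux, GraphNeuralNetworks
n_in = 3
g = Flux.batch([rand_graph(10, 20) for _ in 1:5])
x = rand(Float32, n_in, g.num_nodes)
s2s = Set2Set(n_in, 3)    # 3 iterations, single-layer LSTM
y = s2s(g, x)             # expected size: (2 * n_in, g.num_graphs)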

source
GraphNeuralNetworks.TopKPoolType
TopKPool(adj, k, in_channel)

Top-k pooling layer.

Arguments

  • adj: Adjacency matrix of a graph.
  • k: Top-k nodes are selected to pool together.
  • in_channel: The dimension of input channel.
source
diff --git a/GraphNeuralNetworks/dev/api/samplers/index.html b/GraphNeuralNetworks/dev/api/samplers/index.html index 8a9289cba..f1bcdd1f5 100644 --- a/GraphNeuralNetworks/dev/api/samplers/index.html +++ b/GraphNeuralNetworks/dev/api/samplers/index.html @@ -4,4 +4,4 @@ julia> batch_counter = 0 julia> for mini_batch_gnn in loader batch_counter += 1 - println("Batch ", batch_counter, ": Nodes in mini-batch graph: ", nv(mini_batch_gnn))source + println("Batch ", batch_counter, ": Nodes in mini-batch graph: ", nv(mini_batch_gnn))source diff --git a/GraphNeuralNetworks/dev/api/temporalconv/index.html b/GraphNeuralNetworks/dev/api/temporalconv/index.html index 593cdb47c..54137bee9 100644 --- a/GraphNeuralNetworks/dev/api/temporalconv/index.html +++ b/GraphNeuralNetworks/dev/api/temporalconv/index.html @@ -14,7 +14,7 @@ julia> y = a3tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20)); julia> size(y) -(6, 5)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source
GraphNeuralNetworks.EvolveGCNOType
EvolveGCNO(ch; bias = true, init = glorot_uniform, init_state = Flux.zeros32)

Evolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.

Performs a Graph Convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial state of the hidden state of the LSTM layer. Default zeros32.

Examples

julia> tg = TemporalSnapshotsGNNGraph([rand_graph(10,20; ndata = rand(4,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(4,10))])
+(6, 5)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source
GraphNeuralNetworks.EvolveGCNOType
EvolveGCNO(ch; bias = true, init = glorot_uniform, init_state = Flux.zeros32)

Evolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.

Performs a Graph Convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial state of the hidden state of the LSTM layer. Default zeros32.

Examples

julia> tg = TemporalSnapshotsGNNGraph([rand_graph(10,20; ndata = rand(4,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(4,10))])
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10]
   num_edges: [20, 14, 22]
@@ -27,7 +27,7 @@
 (3,)
 
 julia> size(ev(tg, tg.ndata.x)[1])
-(5, 10)
source
GraphNeuralNetworks.DCGRUMethod
DCGRU(in => out, k, n; [bias, init, init_state])

Diffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.

Performs a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Diffusion step.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial state of the hidden state of the GRU cell. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
+(5, 10)
source
GraphNeuralNetworks.DCGRUMethod
DCGRU(in => out, k, n; [bias, init, init_state])

Diffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.

Performs a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Diffusion step.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial state of the hidden state of the GRU cell. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
 
 julia> dcgru = DCGRU(2 => 5, 2, g1.num_nodes);
 
@@ -41,7 +41,7 @@
 julia> z = dcgru(g2, x2);
 
 julia> size(z)
-(5, 5, 30)
source
GraphNeuralNetworks.GConvGRUMethod
GConvGRU(in => out, k, n; [bias, init, init_state])

Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial state of the hidden state of the GRU layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
+(5, 5, 30)
source
GraphNeuralNetworks.GConvGRUMethod
GConvGRU(in => out, k, n; [bias, init, init_state])

Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial state of the hidden state of the GRU layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
 
 julia> ggru = GConvGRU(2 => 5, 2, g1.num_nodes);
 
@@ -55,7 +55,7 @@
 julia> z = ggru(g2, x2);
 
 julia> size(z)
-(5, 5, 30)
source
GraphNeuralNetworks.GConvLSTMMethod
GConvLSTM(in => out, k, n; [bias, init, init_state])

Graph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial state of the hidden state of the LSTM layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
+(5, 5, 30)
source
GraphNeuralNetworks.GConvLSTMMethod
GConvLSTM(in => out, k, n; [bias, init, init_state])

Graph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial state of the hidden state of the LSTM layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
 
 julia> gclstm = GConvLSTM(2 => 5, 2, g1.num_nodes);
 
@@ -69,7 +69,7 @@
 julia> z = gclstm(g2, x2);
 
 julia> size(z)
-(5, 5, 30)
source
GraphNeuralNetworks.TGCNMethod
TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])

Temporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.

Performs a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial state of the hidden state of the GRU layer. Default zeros32.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Examples

julia> tgcn = TGCN(2 => 6)
+(5, 5, 30)
source
GraphNeuralNetworks.TGCNMethod
TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])

Temporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.

Performs a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial state of the hidden state of the GRU layer. Default zeros32.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Examples

julia> tgcn = TGCN(2 => 6)
 Recur(
   TGCNCell(
     GCNConv(2 => 6, σ),                 # 18 parameters
@@ -91,4 +91,4 @@
 julia> Flux.reset!(tgcn);
 
 julia> tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20)) |> size # batch size of 20
-(6, 5, 20)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source
+(6, 5, 20)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source diff --git a/GraphNeuralNetworks/dev/datasets/index.html b/GraphNeuralNetworks/dev/datasets/index.html index b05b84b29..0afa1e8bf 100644 --- a/GraphNeuralNetworks/dev/datasets/index.html +++ b/GraphNeuralNetworks/dev/datasets/index.html @@ -1,2 +1,2 @@ -Datasets · GraphNeuralNetworks.jl
+Datasets · GraphNeuralNetworks.jl
diff --git a/GraphNeuralNetworks/dev/dev/index.html b/GraphNeuralNetworks/dev/dev/index.html index 1427d4ad8..28306283f 100644 --- a/GraphNeuralNetworks/dev/dev/index.html +++ b/GraphNeuralNetworks/dev/dev/index.html @@ -24,4 +24,4 @@ julia> @load "perf_pr_20210803_mymachine.jld2" julia> compare(dfpr, dfmaster)

Caching tutorials

Tutorials in GraphNeuralNetworks.jl are written in Pluto and rendered using DemoCards.jl and PlutoStaticHTML.jl. Rendering a Pluto notebook is time and resource-consuming, especially in a CI environment. So we use the caching functionality provided by PlutoStaticHTML.jl to reduce CI time.

If you are contributing a new tutorial or making changes to an existing notebook, generate the docs locally before committing/pushing. For caching to work, the cache environment (your local one) and the documenter CI should use the same Julia version (e.g. "v1.9.1"; the patch number must match as well). So use the documenter CI Julia version when generating docs locally.

julia --version # check julia version before generating docs
-julia --project=docs docs/make.jl

Note: Use juliaup for easy switching of Julia versions.

During the doc generation process, DemoCards.jl stores the cached notebooks in docs/pluto_output. So include any changes made in this folder in your git commit. Remember that every file in this folder is machine-generated and should not be edited manually.

git add docs/pluto_output # add generated cache

Check the documenter CI logs to ensure that it used the local cache:

+julia --project=docs docs/make.jl

Note: Use juliaup for easy switching of Julia versions.

During the doc generation process, DemoCards.jl stores the cached notebooks in docs/pluto_output. So include any changes made in this folder in your git commit. Remember that every file in this folder is machine-generated and should not be edited manually.

git add docs/pluto_output # add generated cache

Check the documenter CI logs to ensure that it used the local cache:

diff --git a/GraphNeuralNetworks/dev/gsoc/index.html b/GraphNeuralNetworks/dev/gsoc/index.html index 29cfc0ec3..951467f14 100644 --- a/GraphNeuralNetworks/dev/gsoc/index.html +++ b/GraphNeuralNetworks/dev/gsoc/index.html @@ -1,2 +1,2 @@ -Google Summer of Code · GraphNeuralNetworks.jl
+Google Summer of Code · GraphNeuralNetworks.jl
diff --git a/GraphNeuralNetworks/dev/home/index.html b/GraphNeuralNetworks/dev/home/index.html index 532821d74..88a2bdb8f 100644 --- a/GraphNeuralNetworks/dev/home/index.html +++ b/GraphNeuralNetworks/dev/home/index.html @@ -37,4 +37,4 @@ end @info (; epoch, train_loss=loss(model, train_loader), test_loss=loss(model, test_loader)) -end +end diff --git a/GraphNeuralNetworks/dev/index.html b/GraphNeuralNetworks/dev/index.html index 620075046..5d643c545 100644 --- a/GraphNeuralNetworks/dev/index.html +++ b/GraphNeuralNetworks/dev/index.html @@ -1,2 +1,2 @@ -Home · GraphNeuralNetworks.jl

GraphNeuralNetworks Monorepo

This is the documentation page for GraphNeuralNetworks.jl, a graph neural network library written in Julia and based on the deep learning framework Flux.jl. GraphNeuralNetworks.jl is largely inspired by PyTorch Geometric, Deep Graph Library, and GeometricFlux.jl.

  • GraphNeuralNetworks.jl: Package that contains stateful graph convolutional layers based on the machine learning framework Flux.jl. This is the frontend package for Flux users. It depends on the GNNlib.jl, GNNGraphs.jl, and Flux.jl packages.

  • Implements common graph convolutional layers.

  • Supports computations on batched graphs.

  • Easy to define custom layers.

  • CUDA support.

  • Integration with Graphs.jl.

  • Examples of node, edge, and graph level machine learning tasks.

Usage examples on real datasets can be found in the examples folder.

+Home · GraphNeuralNetworks.jl

GraphNeuralNetworks Monorepo

This is the documentation page for GraphNeuralNetworks.jl, a graph neural network library written in Julia and based on the deep learning framework Flux.jl. GraphNeuralNetworks.jl is largely inspired by PyTorch Geometric, Deep Graph Library, and GeometricFlux.jl.

  • GraphNeuralNetworks.jl: Package that contains stateful graph convolutional layers based on the machine learning framework Flux.jl. This is the frontend package for Flux users. It depends on the GNNlib.jl, GNNGraphs.jl, and Flux.jl packages.

  • Implements common graph convolutional layers.

  • Supports computations on batched graphs.

  • Easy to define custom layers.

  • CUDA support.

  • Integration with Graphs.jl.

  • Examples of node, edge, and graph level machine learning tasks.

Usage examples on real datasets can be found in the examples folder.

diff --git a/GraphNeuralNetworks/dev/models/index.html b/GraphNeuralNetworks/dev/models/index.html index c07f4ec73..d3b620e92 100644 --- a/GraphNeuralNetworks/dev/models/index.html +++ b/GraphNeuralNetworks/dev/models/index.html @@ -66,4 +66,4 @@ X = randn(Float32, din, 10) # Pass only X as input, the model already contains the graph. -y = model(X)

An example of WithGraph usage is given in the graph neural ODE script in the examples folder.

+y = model(X)

An example of WithGraph usage is given in the graph neural ODE script in the examples folder.