From 5f776a017ecf9d0dcba6cb5b2593f53e4708f6a2 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Wed, 6 Mar 2024 09:56:58 +0000 Subject: [PATCH] build based on 76711ba --- dev/api/basic/index.html | 6 ++-- dev/api/conv/index.html | 36 +++++++++---------- dev/api/gnngraph/index.html | 32 ++++++++--------- dev/api/heterograph/index.html | 4 +-- dev/api/messagepassing/index.html | 4 +-- dev/api/pool/index.html | 6 ++-- dev/api/temporalgraph/index.html | 10 +++--- dev/api/utils/index.html | 6 ++-- dev/datasets/index.html | 2 +- dev/dev/index.html | 2 +- dev/gnngraph/index.html | 2 +- dev/gsoc/index.html | 2 +- dev/heterograph/index.html | 2 +- dev/index.html | 2 +- dev/messagepassing/index.html | 2 +- dev/models/index.html | 2 +- dev/search/index.html | 2 +- dev/temporalgraph/index.html | 2 +- dev/tutorials/index.html | 2 +- .../gnn_intro_pluto/index.html | 2 +- .../graph_classification_pluto/index.html | 2 +- .../node_classification_pluto/index.html | 2 +- .../traffic_prediction/index.html | 2 +- 23 files changed, 67 insertions(+), 67 deletions(-) diff --git a/dev/api/basic/index.html b/dev/api/basic/index.html index 135ac152e..db52bc071 100644 --- a/dev/api/basic/index.html +++ b/dev/api/basic/index.html @@ -9,7 +9,7 @@ julia> dotdec(g, rand(2, 5)) 1×6 Matrix{Float64}: - 0.345098 0.458305 0.106353 0.345098 0.458305 0.106353source
GraphNeuralNetworks.GNNChainType
GNNChain(layers...)
+ 0.345098  0.458305  0.106353  0.345098  0.458305  0.106353
source
GraphNeuralNetworks.GNNChainType
GNNChain(layers...)
 GNNChain(name = layer, ...)

Collects multiple layers / functions to be called in sequence on a given input graph and input node features.

It allows composing layers in a sequential fashion, as Flux.Chain does, propagating the output of each layer to the next one. In addition, GNNChain handles the input graph as well, providing it as the first argument only to layers subtyping the GNNLayer abstract type.

GNNChain supports indexing and slicing, m[2] or m[1:end-1], and if names are given, m[:name] == m[1] etc.

Examples

julia> using Flux, GraphNeuralNetworks
 
 julia> m = GNNChain(GCNConv(2=>5), 
@@ -41,7 +41,7 @@
  2.90053  2.90053  2.90053  2.90053  2.90053  2.90053
 
 julia> m2[:enc](g, x) == m(g, x)
-true
source
GraphNeuralNetworks.GNNLayerType
abstract type GNNLayer end

An abstract type from which graph neural network layers are derived.

See also GNNChain.

source
GraphNeuralNetworks.WithGraphType
WithGraph(model, g::GNNGraph; traingraph=false)

A type wrapping the model and tying it to the graph g. In the forward pass, it can only take feature arrays as inputs, returning model(g, x...; kws...).

If traingraph=false, the graph's parameters won't be part of the trainable parameters in the gradient updates.

Examples

g = GNNGraph([1,2,3], [2,3,1])
+true
source
GraphNeuralNetworks.GNNLayerType
abstract type GNNLayer end

An abstract type from which graph neural network layers are derived.

See also GNNChain.

source
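As an illustration (not part of the original docstring), here is a minimal sketch of a user-defined layer subtyping GNNLayer. The NeighborMean name and its propagation rule are hypothetical; only GNNLayer, GNNChain and adjacency_matrix are assumed from the library, and Flux.@functor is assumed available in the installed Flux version.

using Flux, GraphNeuralNetworks

# Hypothetical custom layer: averages in-neighbor features, then applies a Dense.
# Subtyping GNNLayer is what makes GNNChain pass the graph as the first argument.
struct NeighborMean <: GNNLayer
    dense::Dense
end
Flux.@functor NeighborMean   # make the wrapped Dense trainable
NeighborMean(ch::Pair{Int, Int}) = NeighborMean(Dense(ch))

function (l::NeighborMean)(g::GNNGraph, x::AbstractMatrix)
    A = adjacency_matrix(g, Float32)          # num_nodes × num_nodes
    din = max.(vec(sum(A, dims = 1)), 1f0)    # in-degrees, guarding against isolated nodes
    xmean = (x * A) ./ reshape(din, 1, :)     # column j = mean of the features of j's in-neighbors
    return l.dense(xmean)
end

g = rand_graph(5, 10)
x = rand(Float32, 3, g.num_nodes)
model = GNNChain(NeighborMean(3 => 8), x -> relu.(x), NeighborMean(8 => 2))
y = model(g, x)    # size: (2, num_nodes); the plain function in the chain only receives x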
GraphNeuralNetworks.WithGraphType
WithGraph(model, g::GNNGraph; traingraph=false)

A type wrapping the model and tying it to the graph g. In the forward pass, it can only take feature arrays as inputs, returning model(g, x...; kws...).

If traingraph=false, the graph's parameters won't be part of the trainable parameters in the gradient updates.

Examples

g = GNNGraph([1,2,3], [2,3,1])
 x = rand(Float32, 2, 3)
 model = SAGEConv(2 => 3)
 wg = WithGraph(model, g)
@@ -51,4 +51,4 @@
 g2 = GNNGraph([1,1,2,3], [2,4,1,1])
 x2 = rand(Float32, 2, 4)
 # WithGraph will ignore the internal graph if fed with a new one. 
-@assert wg(g2, x2) == model(g2, x2)
source
+@assert wg(g2, x2) == model(g2, x2)source diff --git a/dev/api/conv/index.html b/dev/api/conv/index.html index 969cb68b4..b3ccb6705 100644 --- a/dev/api/conv/index.html +++ b/dev/api/conv/index.html @@ -10,7 +10,7 @@ l = AGNNConv(init_beta=2.0f0) # forward pass -y = l(g, x) source
GraphNeuralNetworks.CGConvType
CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)
+y = l(g, x)   
source
GraphNeuralNetworks.CGConvType
CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)
 CGConv(in => out, ...)

The crystal graph convolutional layer from the paper Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \sum_{j\in N(i)}\sigma(W_f \mathbf{z}_{ij} + \mathbf{b}_f)\, act(W_s \mathbf{z}_{ij} + \mathbf{b}_s)\]

where $\mathbf{z}_{ij}$ is the node and edge features concatenation $[\mathbf{x}_i; \mathbf{x}_j; \mathbf{e}_{j\to i}]$ and $\sigma$ is the sigmoid function. The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features.

If ein is not given, assumes that no edge features are passed as input in the forward pass.

  • out: The dimension of output node features.
  • act: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.
  • residual: Add a residual connection.

Examples

g = rand_graph(5, 6)
 x = rand(Float32, 2, g.num_nodes)
 e = rand(Float32, 3, g.num_edges)
@@ -20,7 +20,7 @@
 
 # No edge features
 l = CGConv(2 => 4, tanh)
-y = l(g, x)    # size: (4, num_nodes)
source
GraphNeuralNetworks.ChebConvType
ChebConv(in => out, k; bias=true, init=glorot_uniform)

Chebyshev spectral graph convolutional layer from paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

Implements

\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[\begin{aligned} +y = l(g, x) # size: (4, num_nodes)

source
GraphNeuralNetworks.ChebConvType
ChebConv(in => out, k; bias=true, init=glorot_uniform)

Chebyshev spectral graph convolutional layer from paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

Implements

\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[\begin{aligned} Z^{(0)} &= X \\ Z^{(1)} &= \hat{L} X \\ Z^{(k)} &= 2 \hat{L} Z^{(k-1)} - Z^{(k-2)} @@ -34,7 +34,7 @@ l = ChebConv(3 => 5, 5) # forward pass -y = l(g, x) # size: 5 × num_nodes

source
GraphNeuralNetworks.EGNNConvType
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
+y = l(g, x)       # size:  5 × num_nodes
source
GraphNeuralNetworks.EGNNConvType
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
 EGNNConv(in => out; hidden_size=2in, residual=false)

Equivariant Graph Convolutional Layer from E(n) Equivariant Graph Neural Networks.

The layer performs the following operation:

\[\begin{aligned} \mathbf{m}_{j\to i} &=\phi_e(\mathbf{h}_i, \mathbf{h}_j, \lVert\mathbf{x}_i-\mathbf{x}_j\rVert^2, \mathbf{e}_{j\to i}),\\ \mathbf{x}_i' &= \mathbf{x}_i + C_i\sum_{j\in\mathcal{N}(i)}(\mathbf{x}_i-\mathbf{x}_j)\phi_x(\mathbf{m}_{j\to i}),\\ @@ -44,7 +44,7 @@ h = randn(Float32, 5, g.num_nodes) x = randn(Float32, 3, g.num_nodes) egnn = EGNNConv(5 => 6, 10) -hnew, xnew = egnn(g, h, x)

source
GraphNeuralNetworks.EdgeConvType
EdgeConv(nn; aggr=max)

Edge convolutional layer from paper Dynamic Graph CNN for Learning on Point Clouds.

Performs the operation

\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]

where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • nn: A (possibly learnable) function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Examples:

# create data
+hnew, xnew = egnn(g, h, x)
source
GraphNeuralNetworks.EdgeConvType
EdgeConv(nn; aggr=max)

Edge convolutional layer from paper Dynamic Graph CNN for Learning on Point Clouds.

Performs the operation

\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]

where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • nn: A (possibly learnable) function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -55,7 +55,7 @@
 l = EdgeConv(Dense(2 * in_channel, out_channel), aggr = +)
 
 # forward pass
-y = l(g, x)
source
GraphNeuralNetworks.GATConvType
GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
+y = l(g, x)
source
GraphNeuralNetworks.GATConvType
GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
 GATConv((in, ein) => out, ...)

Graph attentional layer from the paper Graph Attention Networks.

Implements the operation

\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W \mathbf{x}_j\]

where the attention coefficients $\alpha_{ij}$ are given by

\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_j]))\]

with $z_i$ a normalization factor.

In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as

\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))\]

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
  • out: The dimension of output node features.
  • σ: Activation function. Default identity.
  • bias: Learn the additive bias if true. Default true.
  • heads: Number of attention heads. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • negative_slope: The negative slope parameter of the LeakyReLU. Default 0.2.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default true.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
@@ -68,7 +68,7 @@
 l = GATConv(in_channel => out_channel, add_self_loops = false, bias = false; heads=2, concat=true)
 
 # forward pass
-y = l(g, x)       
source
GraphNeuralNetworks.GATv2ConvType
GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
+y = l(g, x)       
source
GraphNeuralNetworks.GATv2ConvType
GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
 GATv2Conv((in, ein) => out, ...)

GATv2 attentional layer from the paper How Attentive are Graph Attention Networks?.

Implements the operation

\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W_1 \mathbf{x}_j\]

where the attention coefficients $\alpha_{ij}$ are given by

\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_2 \mathbf{x}_i + W_1 \mathbf{x}_j))\]

with $z_i$ a normalization factor.

In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as

\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_3 \mathbf{e}_{j\to i} + W_2 \mathbf{x}_i + W_1 \mathbf{x}_j)).\]

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
  • out: The dimension of output node features.
  • σ: Activation function. Default identity.
  • bias: Learn the additive bias if true. Default true.
  • heads: Number of attention heads. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • negative_slope: The negative slope parameter of the LeakyReLU. Default 0.2.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default true.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
@@ -85,7 +85,7 @@
 e = randn(Float32, ein, length(s))
 
 # forward pass
-y = l(g, x, e)    
source
GraphNeuralNetworks.GCNConvType
GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])

Graph convolutional layer from paper Semi-supervised Classification with Graph Convolutional Networks.

Performs the operation

\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]

where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.

If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as

\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]

The input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Forward

(::GCNConv)(g::GNNGraph, x::AbstractMatrix, edge_weight = nothing, norm_fn::Function = d -> 1 ./ sqrt.(d)) -> AbstractMatrix

Takes as input a graph g, a node feature matrix x of size [in, num_nodes], and optionally an edge weight vector. Returns a node feature matrix of size [out, num_nodes].

The norm_fn parameter allows custom normalization of the graph convolution operation by passing a function as argument. By default, it computes $\frac{1}{\sqrt{d}}$, i.e. the inverse square root of the degree d of each node in the graph.

Examples

# create data
+y = l(g, x, e)    
source
GraphNeuralNetworks.GCNConvType
GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])

Graph convolutional layer from paper Semi-supervised Classification with Graph Convolutional Networks.

Performs the operation

\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]

where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.

If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as

\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]

The input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Forward

(::GCNConv)(g::GNNGraph, x::AbstractMatrix, edge_weight = nothing, norm_fn::Function = d -> 1 ./ sqrt.(d)) -> AbstractMatrix

Takes as input a graph g, a node feature matrix x of size [in, num_nodes], and optionally an edge weight vector. Returns a node feature matrix of size [out, num_nodes].

The norm_fn parameter allows custom normalization of the graph convolution operation by passing a function as argument. By default, it computes $\frac{1}{\sqrt{d}}$, i.e. the inverse square root of the degree d of each node in the graph.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s, t)
@@ -105,7 +105,7 @@
 # Edge weights can also be embedded in the graph.
 g = GNNGraph(s, t, w)
 l = GCNConv(3 => 5, use_edge_weight=true) 
-y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.GINConvType
GINConv(f, ϵ; aggr=+)

Graph Isomorphism convolutional layer from paper How Powerful are Graph Neural Networks?.

Implements the graph convolution

\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]

where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • f: A (possibly learnable) function acting on node features.
  • ϵ: Weighting factor.

Examples:

# create data
+y = l(g, x) # same as l(g, x, w) 
source
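To illustrate the norm_fn argument described above, here is a small hedged sketch following the forward signature shown in the docstring; the mean-style normalization chosen here is only an example, not a library default.

g = rand_graph(6, 12)
x = rand(Float32, 3, g.num_nodes)
l = GCNConv(3 => 5)

# Pass `nothing` for the edge weights and a custom normalization function
# that divides by the degree instead of its square root.
y = l(g, x, nothing, d -> 1 ./ max.(d, 1))   # size: (5, num_nodes)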
GraphNeuralNetworks.GINConvType
GINConv(f, ϵ; aggr=+)

Graph Isomorphism convolutional layer from paper How Powerful are Graph Neural Networks?.

Implements the graph convolution

\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]

where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • f: A (possibly learnable) function acting on node features.
  • ϵ: Weighting factor.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -119,7 +119,7 @@
 l = GINConv(nn, 0.01f0, aggr = mean)
 
 # forward pass
-y = l(g, x)  
source
GraphNeuralNetworks.GMMConvType
GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)

Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]

where $w^a_{k}(e^a)$ for feature a and kernel k is given by

\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]

$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.

The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: Number of input node features.
  • ein: Number of input edge features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • K: Number of kernels. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • residual: Residual connection. Default false.

Examples

# create data
+y = l(g, x)  
source
GraphNeuralNetworks.GMMConvType
GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)

Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]

where $w^a_{k}(e^a)$ for feature a and kernel k is given by

\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]

$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.

The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: Number of input node features.
  • ein: Number of input edge features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • K: Number of kernels. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • residual: Residual connection. Default false.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s,t)
@@ -131,7 +131,7 @@
 l = GMMConv((nin, ein) => out, K=K)
 
 # forward pass
-l(g, x, e)
source
GraphNeuralNetworks.GatedGraphConvType
GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)

Gated graph convolution layer from Gated Graph Sequence Neural Networks.

Implements the recursion

\[\begin{aligned} +l(g, x, e)

source
GraphNeuralNetworks.GatedGraphConvType
GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)

Gated graph convolution layer from Gated Graph Sequence Neural Networks.

Implements the recursion

\[\begin{aligned} \mathbf{h}^{(0)}_i &= [\mathbf{x}_i; \mathbf{0}] \\ \mathbf{h}^{(l)}_i &= GRU(\mathbf{h}^{(l-1)}_i, \square_{j \in N(i)} W \mathbf{h}^{(l-1)}_j) \end{aligned}\]

where $\mathbf{h}^{(l)}_i$ denotes the hidden state of node $i$ after the $l$-th GRU recursion step. The dimension of the input $\mathbf{x}_i$ needs to be less than or equal to out.

Arguments

  • out: The dimension of output features.
  • num_layers: The number of recursion steps (applications of the gated recurrent unit).
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • init: Weight initialization function.

Examples:

# create data
@@ -145,7 +145,7 @@
 l = GatedGraphConv(out_channel, num_layers)
 
 # forward pass
-y = l(g, x)   
source
GraphNeuralNetworks.GraphConvType
GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)

Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.

Performs:

\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples

# create data
+y = l(g, x)   
source
GraphNeuralNetworks.GraphConvType
GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)

Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.

Performs:

\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -157,7 +157,7 @@
 l = GraphConv(in_channel => out_channel, relu, bias = false, aggr = mean)
 
 # forward pass
-y = l(g, x)       
source
GraphNeuralNetworks.MEGNetConvType
MEGNetConv(ϕe, ϕv; aggr=mean)
+y = l(g, x)       
source
GraphNeuralNetworks.MEGNetConvType
MEGNetConv(ϕe, ϕv; aggr=mean)
 MEGNetConv(in => out; aggr=mean)

Convolution from Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals paper. In the forward pass, takes as inputs node features x and edge features e and returns updated features x' and e' according to

\[\begin{aligned} \mathbf{e}_{i\to j}' = \phi_e([\mathbf{x}_i;\, \mathbf{x}_j;\, \mathbf{e}_{i\to j}]),\\ \mathbf{x}_{i}' = \phi_v([\mathbf{x}_i;\, \square_{j\in \mathcal{N}(i)}\,\mathbf{e}_{j\to i}']). @@ -165,7 +165,7 @@ x = randn(Float32, 3, 10) e = randn(Float32, 3, 30) m = MEGNetConv(3 => 3) -x′, e′ = m(g, x, e)

source
GraphNeuralNetworks.NNConvType
NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)

The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.

Performs the operation

\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]

where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f should return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • f: A (possibly learnable) function acting on edge features.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • σ: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

# create data
+x′, e′ = m(g, x, e)
source
GraphNeuralNetworks.NNConvType
NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)

The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.

Performs the operation

\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]

where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f should return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • f: A (possibly learnable) function acting on edge features.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • σ: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -180,7 +180,7 @@
 l = NNConv(in_channel => out_channel, nn, tanh, bias = true, aggr = +)
 
 # forward pass
-y = l(g, x)   
source
GraphNeuralNetworks.ResGatedGraphConvType
ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)

The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.

The layer's forward pass is given by

\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]

where the edge gates $\eta_{ij}$ are given by

\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • act: Activation function.
  • init: Weight matrices' initializing function.
  • bias: Learn an additive bias if true.

Examples:

# create data
+y = l(g, x)   
source
GraphNeuralNetworks.ResGatedGraphConvType
ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)

The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.

The layer's forward pass is given by

\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]

where the edge gates $\eta_{ij}$ are given by

\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • act: Activation function.
  • init: Weight matrices' initializing function.
  • bias: Learn an additive bias if true.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -191,7 +191,7 @@
 l = ResGatedGraphConv(in_channel => out_channel, tanh, bias = true)
 
 # forward pass
-y = l(g, x)  
source
GraphNeuralNetworks.SAGEConvType
SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)

GraphSAGE convolution layer from paper Inductive Representation Learning on Large Graphs.

Performs:

\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

# create data
+y = l(g, x)  
source
GraphNeuralNetworks.SAGEConvType
SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)

GraphSAGE convolution layer from paper Inductive Representation Learning on Large Graphs.

Performs:

\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -202,7 +202,7 @@
 l = SAGEConv(in_channel => out_channel, tanh, bias = false, aggr = +)
 
 # forward pass
-y = l(g, x)   
source
GraphNeuralNetworks.SGConvType
SGConv(in => out, k=1; [bias, init, add_self_loops, use_edge_weight])

SGC layer from the paper Simplifying Graph Convolutional Networks. Performs the operation

\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]

where $\tilde{A}$ is $A + I$.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k : Number of hops k. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.

Examples

# create data
+y = l(g, x)   
source
GraphNeuralNetworks.SGConvType
SGConv(in => out, k=1; [bias, init, add_self_loops, use_edge_weight])

SGC layer from the paper Simplifying Graph Convolutional Networks. Performs the operation

\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]

where $\tilde{A}$ is $A + I$.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k : Number of hops k. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s, t)
@@ -221,7 +221,7 @@
 # Edge weights can also be embedded in the graph.
 g = GNNGraph(s, t, w)
 l = SGConv(3 => 5, add_self_loops = true, use_edge_weight=true) 
-y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.TransformerConvType
TransformerConv((in, ein) => out; [heads, concat, init, add_self_loops, bias_qkv,
+y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.TransformerConvType
TransformerConv((in, ein) => out; [heads, concat, init, add_self_loops, bias_qkv,
     bias_root, root_weight, gating, skip_connection, batch_norm, ff_channels]))

The transformer-like multi-head attention convolutional operator from the Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification paper, which also considers edge features. It further contains options to be configured as the transformer-like convolutional operator from the Attention, Learn to Solve Routing Problems! paper, including a successive feed-forward network as well as skip layers and batch normalization.

The layer's basic forward pass is given by

\[x_i' = W_1x_i + \sum_{j\in N(i)} \alpha_{ij} (W_2 x_j + W_6e_{ij})\]

where the attention scores are

\[\alpha_{ij} = \mathrm{softmax}\left(\frac{(W_3x_i)^T(W_4x_j+ W_6e_{ij})}{\sqrt{d}}\right).\]

Optionally, a combination of the aggregated messages with the transformed root node features through a gating mechanism via

\[x'_i = \beta_i W_1 x_i + (1 - \beta_i) \underbrace{\left(\sum_{j \in \mathcal{N}(i)} -\alpha_{i,j} W_2 x_j \right)}_{=m_i}\]

with

\[\beta_i = \textrm{sigmoid}(W_5^{\top} [ W_1 x_i, m_i, W_1 x_i - m_i ]).\]

can be performed.

Arguments

  • in: Dimension of input features, which also corresponds to the dimension of the output features.
  • ein: Dimension of the edge features; if 0, no edge features will be used.
  • out: Dimension of the output.
  • heads: Number of heads in output. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • init: Weight matrices' initializing function. Default glorot_uniform.
  • add_self_loops: Add self loops to the input graph. Default false.
  • bias_qkv: If set, bias is used in the key, query and value transformations for nodes. Default true.
  • bias_root: If set, the layer will also learn an additive bias for the root when root weight is used. Default true.
  • root_weight: If set, the layer will add the transformed root node features to the output. Default true.
  • gating: If set, will combine aggregation and transformed root node features by a gating mechanism. Default false.
  • skip_connection: If set, a skip connection will be made from the input and added to the output. Default false.
  • batch_norm: If set, a batch normalization will be applied to the output. Default false.
  • ff_channels: If positive, a feed-forward NN is appended, whose first layer has the given number of hidden nodes; this NN also gets a skip connection and batch normalization if the respective parameters are set. Default: 0.
source
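As a usage sketch (not taken from the original page), the following assumes the constructor and forward pass shown above, and that the default concat=true stacks the attention heads in the output:

g = rand_graph(6, 12)
x = rand(Float32, 4, g.num_nodes)
e = rand(Float32, 2, g.num_edges)

l = TransformerConv((4, 2) => 8, heads = 2, gating = true)
y = l(g, x, e)   # size: (8 * 2, num_nodes) with the default concat = true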
+\alpha_{i,j} W_2 x_j \right)}_{=m_i}\]

with

\[\beta_i = \textrm{sigmoid}(W_5^{\top} [ W_1 x_i, m_i, W_1 x_i - m_i ]).\]

can be performed.

Arguments

source diff --git a/dev/api/gnngraph/index.html b/dev/api/gnngraph/index.html index 88e8c69f1..cd4fca603 100644 --- a/dev/api/gnngraph/index.html +++ b/dev/api/gnngraph/index.html @@ -36,7 +36,7 @@ # Collect edges' source and target nodes. # Both source and target are vectors of length num_edges -source, target = edge_index(g)source

DataStore

GraphNeuralNetworks.GNNGraphs.DataStoreType
DataStore([n, data])
+source, target = edge_index(g)
source

DataStore

GraphNeuralNetworks.GNNGraphs.DataStoreType
DataStore([n, data])
 DataStore([n,] k1 = x1, k2 = x2, ...)

A container for feature arrays. The optional argument n enforces that numobs(x) == n for each array contained in the datastore.

At construction time, the data can be provided as any iterables of pairs of symbols and arrays or as keyword arguments:

julia> ds = DataStore(3, x = rand(2, 3), y = rand(3))
 DataStore(3) with 2 elements:
   y = 3-element Vector{Float64}
@@ -78,8 +78,8 @@
 julia> ds2.a
 2-element Vector{Float64}:
  1.0
- 1.0
source

Query

GraphNeuralNetworks.GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
-adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

Calling the adjacency list a, if dir=:out then a[i] will contain the neighbors of node i through outgoing edges. If dir=:in, it will contain neighbors from incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
GraphNeuralNetworks.GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g.

s, t = edge_index(g)
source
GraphNeuralNetworks.GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GraphNeuralNetworks.GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GraphNeuralNetworks.GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
GraphNeuralNetworks.GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
GraphNeuralNetworks.GNNGraphs.has_multi_edgesMethod
has_multi_edges(g::GNNGraph)

Return true if g has any multiple edges.

source
GraphNeuralNetworks.GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
GraphNeuralNetworks.GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$ where $A$ is the adjacency matrix of the graph 'g'.

source
GraphNeuralNetworks.GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GraphNeuralNetworks.GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GraphNeuralNetworks.GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

User may specify the eltype T of the returned matrix.

If weighted=true, A will contain the edge weights if any; otherwise the elements of A will be either 0 or 1.

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)

Return a vector containing the degrees of the nodes in the GNNHeteroGraph g, considering only edges of the given edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :out.
source
Graphs.has_self_loopsMethod
has_self_loops(g::GNNGraph)

Return true if g has any self loops.

source
Graphs.outneighborsFunction
outneighbors(g, v)

Return a list of all neighbors connected to vertex v by an outgoing edge.

Implementation Notes

Returns a reference to the current graph's internal structures, not a copy. Do not modify result. If the graph is modified, the behavior is undefined: the array behind this reference may be modified too, but this is not guaranteed.

Examples

julia> using Graphs
+ 1.0
source

Query

GraphNeuralNetworks.GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
+adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

Calling the adjacency list a, if dir=:out then a[i] will contain the neighbors of node i through outgoing edges. If dir=:in, it will contain neighbors from incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
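A small usage sketch (node ids and values are hypothetical, added for illustration):

g = GNNGraph([1, 2, 3], [2, 3, 1])     # edges 1→2, 2→3, 3→1
a_out = adjacency_list(g)              # a_out[1] == [2]  (out-neighbors)
a_in  = adjacency_list(g, dir = :in)   # a_in[1]  == [3]  (in-neighbors)
a_sub = adjacency_list(g, [1, 2])      # neighborhoods of nodes 1 and 2 only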
GraphNeuralNetworks.GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g.

s, t = edge_index(g)
source
GraphNeuralNetworks.GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GraphNeuralNetworks.GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GraphNeuralNetworks.GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
GraphNeuralNetworks.GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
GraphNeuralNetworks.GNNGraphs.has_multi_edgesMethod
has_multi_edges(g::GNNGraph)

Return true if g has any multiple edges.

source
GraphNeuralNetworks.GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
GraphNeuralNetworks.GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$ where $A$ is the adjacency matrix of the graph 'g'.

source
GraphNeuralNetworks.GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GraphNeuralNetworks.GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GraphNeuralNetworks.GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
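For illustration, a combined sketch of the spectral utilities documented above, assuming a small random graph; only the signatures shown in these docstrings are used:

g = rand_graph(8, 20)

L    = normalized_laplacian(g, Float32)   # 8×8 normalized Laplacian
lmax = laplacian_lambda_max(g)            # largest eigenvalue of L
Ls   = scaled_laplacian(g, Float32)       # 2/lmax * L - I
A2   = khop_adj(g, 2)                     # A^2, the squared adjacency matrix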
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

User may specify the eltype T of the returned matrix.

If weighted=true, A will contain the edge weights if any; otherwise the elements of A will be either 0 or 1.

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
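A short illustration of the dir and edge_weight options on a hypothetical weighted graph:

g = GNNGraph([1, 1, 2], [2, 3, 3], [0.5, 1.5, 2.0])   # 3 weighted edges

degree(g)                                   # weighted out-degrees: [2.0, 2.0, 0.0]
degree(g, Int, edge_weight = false)         # unweighted out-degrees: [2, 1, 0]
degree(g, dir = :in, edge_weight = false)   # in-degrees: [0, 1, 2]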
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)

Return a vector containing the degrees of the nodes in the GNNHeteroGraph g, considering only edges of the given edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :out.
source
Graphs.has_self_loopsMethod
has_self_loops(g::GNNGraph)

Return true if g has any self loops.

source
Graphs.outneighborsFunction
outneighbors(g, v)

Return a list of all neighbors connected to vertex v by an outgoing edge.

Implementation Notes

Returns a reference to the current graph's internal structures, not a copy. Do not modify result. If the graph is modified, the behavior is undefined: the array behind this reference may be modified too, but this is not guaranteed.

Examples

julia> using Graphs
 
 julia> g = SimpleDiGraph([0 1 0 0 0; 0 0 1 0 0; 1 0 0 1 0; 0 0 0 0 1; 0 0 0 1 0]);
 
@@ -106,12 +106,12 @@
 julia> add_edges(g, ([2, 3], [4, 1], [10.0, 20.0]))
 GNNGraph:
   num_nodes: 4
-  num_edges: 7

julia> g = GNNGraph()
GNNGraph:
  num_nodes: 0
  num_edges: 0

julia> add_edges(g, [1,2], [2,3])
GNNGraph:
  num_nodes: 3
  num_edges: 2

source
GraphNeuralNetworks.GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
+  num_edges: 7

julia> g = GNNGraph()
GNNGraph:
  num_nodes: 0
  num_edges: 0

julia> add_edges(g, [1,2], [2,3])
GNNGraph:
  num_nodes: 3
  num_edges: 2

source
GraphNeuralNetworks.GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
 add_edges(g::GNNHeteroGraph, edge_t => (s, t); [edata, num_nodes])
-add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
GraphNeuralNetworks.GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GraphNeuralNetworks.GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g but also adding edges connecting the nodes to themselves.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
GraphNeuralNetworks.GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
-add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but also add self-loops of the specified type, edge_t. Otherwise, it returns g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, self-loops are added to each node for every edge type in the graph whose source and destination node types are the same. This iterates over all edge types present in the graph, applying the self-loop addition to each applicable edge type.

source
GraphNeuralNetworks.GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
GraphNeuralNetworks.GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
+add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
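A hedged sketch of adding a new relation to a heterograph; the node and relation names (:user, :rates, :movie, :follows) are made up for illustration:

g = GNNHeteroGraph((:user, :rates, :movie) => ([1, 2], [2, 3]))

# Add edges of a new type; the (:user, :follows, :user) relation is created on the fly.
g = add_edges(g, (:user, :follows, :user) => ([1], [2]))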
GraphNeuralNetworks.GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GraphNeuralNetworks.GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g but also adding edges connecting the nodes to themselves.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
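For illustration (not from the original page), adding and then removing self-loops on a small graph:

g  = GNNGraph([1, 2], [2, 3])      # 3 nodes, 2 edges
g2 = add_self_loops(g)             # adds one self-loop per node: 2 + 3 = 5 edges
g3 = remove_self_loops(g2)         # back to the original 2 edges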
GraphNeuralNetworks.GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
+add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but also add self-loops of the specified type, edge_t. Otherwise, it returns g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, self-loops are added to each node for every edge type in the graph whose source and destination node types are the same. This iterates over all edge types present in the graph, applying the self-loop addition to each applicable edge type.

source
GraphNeuralNetworks.GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
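A usage sketch for extracting component graphs from a batch (graph sizes are arbitrary, chosen only for illustration):

gall = Flux.batch([rand_graph(4, 6), rand_graph(5, 8)])

g2 = getgraph(gall, 2)                        # the second component graph
g2, nodemap = getgraph(gall, 2, nmap = true)  # nodemap[i] is node i's id in gall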
GraphNeuralNetworks.GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
                 num_neg_edges = g.num_edges, 
-                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the original graph.

See also is_bidirected.

source
GraphNeuralNetworks.GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true, the split makes sure that an edge and its reverse go into the same partition. This option is supported only for bidirected graphs with no self-loops and multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
GraphNeuralNetworks.GNNGraphs.random_walk_peMethod
random_walk_pe(g, walk_length)

Return the random walk positional encoding from the paper Graph Neural Networks with Learnable Structural and Positional Representations of the given graph g and the length of the walk walk_length as a matrix of size (walk_length, g.num_nodes).

source
GraphNeuralNetworks.GNNGraphs.remove_multi_edgesMethod
remove_multi_edges(g::GNNGraph; aggr=+)

Remove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, which can take the values +, min, max or mean.

See also remove_self_loops, has_multi_edges, and to_bidirected.

source
GraphNeuralNetworks.GNNGraphs.remove_self_loopsMethod
remove_self_loops(g::GNNGraph)

Return a graph constructed from g where self-loops (edges from a node to itself) are removed.

See also add_self_loops and remove_multi_edges.

source
GraphNeuralNetworks.GNNGraphs.set_edge_weightMethod
set_edge_weight(g::GNNGraph, w::AbstractVector)

Set w as edge weights in the returned graph.

source
GraphNeuralNetworks.GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
+                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the original graph.

See also is_bidirected.

source
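A sketch of how negative_sample is typically combined with edge_index for link prediction (values are illustrative, not from the page):

g = rand_graph(10, 30)

gneg = negative_sample(g, num_neg_edges = 15)   # 15 random non-edges, returned as a graph
sneg, tneg = edge_index(gneg)                   # candidate negative source/target pairs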
GraphNeuralNetworks.GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true, the split makes sure that an edge and its reverse go into the same partition. This option is supported only for bidirected graphs with no self-loops and multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
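Illustrative usage for a link prediction split (the 0.9 fraction is an arbitrary choice):

g = rand_graph(10, 30)

gtrain, gtest = rand_edge_split(g, 0.9)   # about 90% of the edges in gtrain, the rest in gtest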
GraphNeuralNetworks.GNNGraphs.random_walk_peMethod
random_walk_pe(g, walk_length)

Return the random walk positional encoding from the paper Graph Neural Networks with Learnable Structural and Positional Representations of the given graph g and the length of the walk walk_length as a matrix of size (walk_length, g.num_nodes).

source
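For illustration, the positional encodings can be concatenated to the node features, a common pattern shown here as a hedged sketch:

g  = rand_graph(6, 10)
pe = random_walk_pe(g, 3)                       # size: (3, g.num_nodes)
x  = vcat(rand(Float32, 4, g.num_nodes), pe)    # append the encodings as extra node features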
GraphNeuralNetworks.GNNGraphs.remove_multi_edgesMethod
remove_multi_edges(g::GNNGraph; aggr=+)

Remove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, which can take the values +, min, max or mean.

See also remove_self_loops, has_multi_edges, and to_bidirected.

source
GraphNeuralNetworks.GNNGraphs.remove_self_loopsMethod
remove_self_loops(g::GNNGraph)

Return a graph constructed from g where self-loops (edges from a node to itself) are removed.

See also add_self_loops and remove_multi_edges.

source
GraphNeuralNetworks.GNNGraphs.set_edge_weightMethod
set_edge_weight(g::GNNGraph, w::AbstractVector)

Set w as edge weights in the returned graph.

source
GraphNeuralNetworks.GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
 julia> w = [1.0, 2.0, 3.0, 4.0, 5.0];
 
@@ -152,7 +152,7 @@
  20.0
  35.0
  35.0
- 50.0
source
GraphNeuralNetworks.GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each multiple edge between two nodes in g, keeps only one edge in a single direction.

source
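For illustration (edge list made up for this sketch):

using GraphNeuralNetworks

s, t = [1, 2, 3, 4], [2, 1, 4, 3]        # two bidirected pairs: 1 <-> 2 and 3 <-> 4
g = to_unidirected(GNNGraph(s, t))
g.num_edges                              # expected 2: one direction kept per pair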
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also Flux.unbatch.

Examples

julia> g1 = rand_graph(4, 6, ndata=ones(8, 4))
+ 50.0
source
GraphNeuralNetworks.GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each multiple edge between two nodes in g, keeps only one edge in a single direction.

source
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also Flux.unbatch.

Examples

julia> g1 = rand_graph(4, 6, ndata=ones(8, 4))
 GNNGraph:
     num_nodes = 4
     num_edges = 6
@@ -183,7 +183,7 @@
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
- 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the Flux.batch operation, returns an array of the individual graphs batched together in g.

See also Flux.batch and getgraph.

Examples

julia> gbatched = Flux.batch([rand_graph(5, 6), rand_graph(10, 8), rand_graph(4,2)])
+ 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the Flux.batch operation, returns an array of the individual graphs batched together in g.

See also Flux.batch and getgraph.

Examples

julia> gbatched = Flux.batch([rand_graph(5, 6), rand_graph(10, 8), rand_graph(4,2)])
 GNNGraph:
     num_nodes = 19
     num_edges = 16
@@ -201,7 +201,7 @@
 
  GNNGraph:
     num_nodes = 4
-    num_edges = 2
source
SparseArrays.blockdiagMethod
blockdiag(xs::GNNGraph...)

Equivalent to Flux.batch.

source

Generate

GraphNeuralNetworks.GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
+    num_edges = 2
source
SparseArrays.blockdiagMethod
blockdiag(xs::GNNGraph...)

Equivalent to Flux.batch.

source

Generate

GraphNeuralNetworks.GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
           k::Int; 
           graph_indicator = nothing,
           self_loops = false, 
@@ -222,7 +222,7 @@
     num_nodes = 10
     num_edges = 30
     num_graphs = 2
-
source
GraphNeuralNetworks.GNNGraphs.radius_graphMethod
radius_graph(points::AbstractMatrix, 
+
source
GraphNeuralNetworks.GNNGraphs.radius_graphMethod
radius_graph(points::AbstractMatrix, 
              r::AbstractFloat; 
              graph_indicator = nothing,
              self_loops = false, 
@@ -243,9 +243,9 @@
     num_nodes = 10
     num_edges = 20
     num_graphs = 2
-

References

Section B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models

source
GraphNeuralNetworks.GNNGraphs.rand_bipartite_heterographFunction
rand_bipartite_heterograph(n1, n2, m; [bidirected, seed, node_t, edge_t, kws...])
+

References

Section B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models

source
GraphNeuralNetworks.GNNGraphs.rand_bipartite_heterographFunction
rand_bipartite_heterograph(n1, n2, m; [bidirected, seed, node_t, edge_t, kws...])
 rand_bipartite_heterograph((n1, n2), m; ...)
-rand_bipartite_heterograph((n1, n2), (m1, m2); ...)

Construct a GNNHeteroGraph with the number of nodes and edges specified by n1, n2 and m1, m2, respectively.

See rand_heterograph for a more general version.

Keyword arguments

  • bidirected: whether to generate a bidirected graph. Default is true.
  • seed: random seed. Default is -1 (no seed).
  • node_t: node types. If bipartite=true, this should be a tuple of two node types, otherwise it should be a single node type.
  • edge_t: edge types. If bipartite=true, this should be a tuple of two edge types, otherwise it should be a single edge type.
source
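A brief usage sketch (node and edge counts are arbitrary; the default node types :A and :B and edge type (:A, :to, :B) are assumed):

using GraphNeuralNetworks

g = rand_bipartite_heterograph((10, 15), 20)
g.num_nodes          # Dict(:A => 10, :B => 15)
g.num_edges          # 20 edges of type (:A, :to, :B), plus the reverse type since bidirected=true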
GraphNeuralNetworks.GNNGraphs.rand_graphMethod
rand_graph(n, m; bidirected=true, seed=-1, edge_weight = nothing, kws...)

Generate a random (Erdős-Rényi) GNNGraph with n nodes and m edges.

If bidirected=true, the reverse of each edge will also be present. If bidirected=false, m unrelated edges are generated instead. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Use a seed > 0 for reproducibility.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
+rand_bipartite_heterograph((n1, n2), (m1, m2); ...)

Construct a GNNHeteroGraph with the number of nodes and edges specified by n1, n2 and m1, m2, respectively.

See rand_heterograph for a more general version.

Keyword arguments

  • bidirected: whether to generate a bidirected graph. Default is true.
  • seed: random seed. Default is -1 (no seed).
  • node_t: node types. If bipartite=true, this should be a tuple of two node types, otherwise it should be a single node type.
  • edge_t: edge types. If bipartite=true, this should be a tuple of two edge types, otherwise it should be a single edge type.
source
GraphNeuralNetworks.GNNGraphs.rand_graphMethod
rand_graph(n, m; bidirected=true, seed=-1, edge_weight = nothing, kws...)

Generate a random (Erdős-Rényi) GNNGraph with n nodes and m edges.

If bidirected=true, the reverse of each edge will also be present. If bidirected=false, m unrelated edges are generated instead. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Use a seed > 0 for reproducibility.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
 GNNGraph:
     num_nodes = 5
     num_edges = 4
@@ -264,11 +264,11 @@
 # Each edge has a reverse
 julia> edge_index(g)
 ([1, 3, 3, 4], [3, 4, 1, 3])
-
source
GraphNeuralNetworks.GNNGraphs.rand_heterographFunction
rand_heterograph(n, m; seed=-1, bidirected=false, kws...)

Construct a GNNHeteroGraph with the number of nodes and edges specified by n and m, respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.

Use a seed > 0 for reproducibility.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
+
source
GraphNeuralNetworks.GNNGraphs.rand_heterographFunction
rand_heterograph(n, m; seed=-1, bidirected=false, kws...)

Construct a GNNHeteroGraph with the number of nodes and edges specified by n and m, respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.

Use a seed > 0 for reproducibility.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
                             (:user, :rate, :movie) => 30)
 GNNHeteroGraph:
   num_nodes: (:user => 10, :movie => 20)         
-  num_edges: ((:user, :rate, :movie) => 30,)
source

Operators

Base.intersectFunction
intersect(g, h)

Return a graph with edges that are only in both graph g and graph h.

Implementation Notes

This function may produce a graph with 0-degree vertices. Preserves the eltype of the input graph.

Examples

julia> using Graphs
+  num_edges: ((:user, :rate, :movie) => 30,)
source

Operators

Base.intersectFunction
intersect(g, h)

Return a graph with edges that are only in both graph g and graph h.

Implementation Notes

This function may produce a graph with 0-degree vertices. Preserves the eltype of the input graph.

Examples

julia> using Graphs
 
 julia> g1 = SimpleDiGraph([0 1 0 0 0; 0 0 1 0 0; 1 0 0 1 0; 0 0 0 0 1; 0 0 0 1 0]);
 
@@ -316,4 +316,4 @@
     num_nodes = 20
     num_edges = 10
     edata:
-        EID => (10,)
source
+ EID => (10,)source diff --git a/dev/api/heterograph/index.html b/dev/api/heterograph/index.html index bd274dcff..399678d97 100644 --- a/dev/api/heterograph/index.html +++ b/dev/api/heterograph/index.html @@ -40,7 +40,7 @@ julia> hg.ndata[:A].x 2×10 Matrix{Float64}: 0.825882 0.0797502 0.245813 0.142281 0.231253 0.685025 0.821457 0.888838 0.571347 0.53165 - 0.631286 0.316292 0.705325 0.239211 0.533007 0.249233 0.473736 0.595475 0.0623298 0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GraphNeuralNetworks.GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
GraphNeuralNetworks.GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GraphNeuralNetworks.GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source

Heterogeneous Graph Convolutions

Heterogeneous graph convolutions are implemented in the type HeteroGraphConv. HeteroGraphConv relies on standard graph convolutional layers to perform message passing on the different relations. See the table at this page for the supported layers.

GraphNeuralNetworks.HeteroGraphConvType
HeteroGraphConv(itr; aggr = +)
+    0.631286  0.316292   0.705325  0.239211  0.533007  0.249233  0.473736  0.595475  0.0623298  0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GraphNeuralNetworks.GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
GraphNeuralNetworks.GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GraphNeuralNetworks.GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source

Heterogeneous Graph Convolutions

Heterogeneous graph convolutions are implemented in the type HeteroGraphConv. HeteroGraphConv relies on standard graph convolutional layers to perform message passing on the different relations. See the table at this page for the supported layers.

GraphNeuralNetworks.HeteroGraphConvType
HeteroGraphConv(itr; aggr = +)
 HeteroGraphConv(pairs...; aggr = +)

A convolutional layer for heterogeneous graphs.

The itr argument is an iterator of pairs of the form edge_t => layer, where edge_t is a 3-tuple of the form (src_node_type, edge_type, dst_node_type), and layer is a convolutional layer for homogeneous graphs.

Each convolution is applied to the corresponding relation. Since a node type can be involved in multiple relations, the outputs of the individual convolutions have to be aggregated using the aggr function. The default is to sum the outputs.

Forward Arguments

  • g::GNNHeteroGraph: The input graph.
  • x::Union{NamedTuple,Dict}: The input node features. The keys are node types and the values are node feature tensors.

Examples

julia> g = rand_bipartite_heterograph((10, 15), 20)
 GNNHeteroGraph:
   num_nodes: Dict(:A => 10, :B => 15)
@@ -54,4 +54,4 @@
 julia> y = layer(g, x); # output is a named tuple
 
 julia> size(y.A) == (32, 10) && size(y.B) == (32, 15)
-true
source
+truesource diff --git a/dev/api/messagepassing/index.html b/dev/api/messagepassing/index.html index a332ddc52..f080a82b9 100644 --- a/dev/api/messagepassing/index.html +++ b/dev/api/messagepassing/index.html @@ -1,6 +1,6 @@ Message Passing · GraphNeuralNetworks.jl

Message Passing

Index

Interface

GraphNeuralNetworks.apply_edgesFunction
apply_edges(fmsg, g, [layer]; [xi, xj, e])
-apply_edges(fmsg, g, [layer,] xi, xj, e=nothing)

Returns the message from node j to node i applying the message function fmsg on the edges in graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).

The function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.

If a GNNLayer layer is also provided, it will be passed to fmsg as its first argument.

Arguments

  • g: An AbstractGNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but now to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension's size is the size of a batch of edges. The output of fmsg has to be an array (or a named tuple of arrays) with the same batch size. If layer is also passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).
  • layer: A GNNLayer. If provided it will be passed to fmsg as a first argument.

See also propagate and aggregate_neighbors.

source
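A small sketch (the message function and feature sizes are invented for illustration): compute one message per edge from the target (xi) and source (xj) features.

using GraphNeuralNetworks

g = rand_graph(5, 8)
x = rand(Float32, 3, g.num_nodes)

m = apply_edges((xi, xj, e) -> xi .+ xj, g, xi = x, xj = x)
size(m)              # (3, g.num_edges): one column per edge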
GraphNeuralNetworks.aggregate_neighborsFunction
aggregate_neighbors(g, aggr, m)

Given a graph g, edge features m, and an aggregation operator aggr (e.g. +, min, max, mean), returns the new node features

\[\mathbf{x}_i = \square_{j \in \mathcal{N}(i)} \mathbf{m}_{j\to i}\]

Neighborhood aggregation is the second step of propagate, where it comes after apply_edges.

source
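Continuing the sketch above (same g and x; names assumed, not from the docstring), the two-step decomposition of propagate could look like:

m    = apply_edges(copy_xj, g, xj = x)    # messages: each edge carries its source features
xnew = aggregate_neighbors(g, +, m)       # summed over each node's incoming edges
size(xnew)                                # (3, g.num_nodes)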
GraphNeuralNetworks.propagateFunction
propagate(fmsg, g, aggr, [layer]; [xi, xj, e])
+apply_edges(fmsg, g, [layer,] xi, xj, e=nothing)

Returns the message from node j to node i applying the message function fmsg on the edges in graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).

The function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.

If a GNNLayer layer is also provided, it will be passed to fmsg as its first argument.

Arguments

  • g: An AbstractGNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but now to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension's size is the size of a batch of edges. The output of fmsg has to be an array (or a named tuple of arrays) with the same batch size. If layer is also passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).
  • layer: A GNNLayer. If provided it will be passed to fmsg as a first argument.

See also propagate and aggregate_neighbors.

source
GraphNeuralNetworks.aggregate_neighborsFunction
aggregate_neighbors(g, aggr, m)

Given a graph g, edge features m, and an aggregation operator aggr (e.g. +, min, max, mean), returns the new node features

\[\mathbf{x}_i = \square_{j \in \mathcal{N}(i)} \mathbf{m}_{j\to i}\]

Neighborhood aggregation is the second step of propagate, where it comes after apply_edges.

source
GraphNeuralNetworks.propagateFunction
propagate(fmsg, g, aggr, [layer]; [xi, xj, e])
 propagate(fmsg, g, aggr, [layer,] xi, xj, e=nothing)

Performs message passing on graph g. Takes care of materializing the node features on each edge, applying the message function fmsg, and returning an aggregated message $\bar{\mathbf{m}}$ (depending on the return value of fmsg, an array or a named tuple of arrays with last dimension's size g.num_nodes).

If a GNNLayer layer is also provided, it will be passed to fmsg as its first argument.

It can be decomposed in two steps:

m = apply_edges(fmsg, g, xi, xj, e)
 m̄ = aggregate_neighbors(g, aggr, m)

GNN layers typically call propagate in their forward pass, providing a closure as the fmsg argument.

Arguments

  • g: A GNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A generic function that will be passed over to apply_edges. Has to take as inputs the edge-materialized xi, xj, and e (arrays or named tuples of arrays whose last dimension's size is the size of a batch of edges). Its output has to be an array or a named tuple of arrays with the same batch size. If layer is also passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).
  • layer: A GNNLayer. If provided it will be passed to fmsg as a first argument.
  • aggr: Neighborhood aggregation operator. Use +, mean, max, or min.

Examples

using GraphNeuralNetworks, Flux
 
@@ -26,4 +26,4 @@
 end
 
 l = GNNConv(10 => 20)
-l(g, x)

See also apply_edges and aggregate_neighbors.

source

Built-in message functions

GraphNeuralNetworks.e_mul_xjFunction
e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj

Reshape e into a shape broadcast-compatible with xj (by prepending singleton dimensions), then perform broadcasted multiplication.

source
+l(g, x)

See also apply_edges and aggregate_neighbors.

source

Built-in message functions

GraphNeuralNetworks.copy_xiFunction
copy_xi(xi, xj, e) = xi
source
GraphNeuralNetworks.copy_xjFunction
copy_xj(xi, xj, e) = xj
source
GraphNeuralNetworks.xi_dot_xjFunction
xi_dot_xj(xi, xj, e) = sum(xi .* xj, dims=1)
source
GraphNeuralNetworks.xi_sub_xjFunction
xi_sub_xj(xi, xj, e) = xi .- xj
source
GraphNeuralNetworks.xj_sub_xiFunction
xj_sub_xi(xi, xj, e) = xj .- xi
source
GraphNeuralNetworks.e_mul_xjFunction
e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj

Reshape e into a shape broadcast-compatible with xj (by prepending singleton dimensions), then perform broadcasted multiplication.

source
GraphNeuralNetworks.w_mul_xjFunction
w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj

Similar to e_mul_xj but specialized on scalar edge features (weights).

source
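As a usage sketch (not from the docs; sizes arbitrary), the built-ins plug directly into propagate, e.g. a plain and a weighted neighborhood sum:

using GraphNeuralNetworks

g = rand_graph(6, 10)
x = rand(Float32, 4, g.num_nodes)
w = rand(Float32, g.num_edges)                        # one scalar weight per edge

xsum  = propagate(copy_xj, g, +, xj = x)              # sum of neighbor features
xwsum = propagate(w_mul_xj, g, +, xj = x, e = w)      # weighted sum of neighbor features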
diff --git a/dev/api/pool/index.html b/dev/api/pool/index.html index 1d596252f..be15e6fdd 100644 --- a/dev/api/pool/index.html +++ b/dev/api/pool/index.html @@ -13,7 +13,7 @@ u = pool(g, g.ndata.x) -@assert size(u) == (chout, g.num_graphs)source
GraphNeuralNetworks.GlobalPoolType
GlobalPool(aggr)

Global pooling layer for graph neural networks. Takes a graph and node features as inputs and performs the operation

\[\mathbf{u}_V = \square_{i \in V} \mathbf{x}_i\]

where $V$ is the set of nodes of the input graph and the type of aggregation represented by $\square$ is selected by the aggr argument. Commonly used aggregations are mean, max, and +.

See also reduce_nodes.

Examples

using Flux, GraphNeuralNetworks, Graphs
+@assert size(u) == (chout, g.num_graphs)
source
GraphNeuralNetworks.GlobalPoolType
GlobalPool(aggr)

Global pooling layer for graph neural networks. Takes a graph and node features as inputs and performs the operation

\[\mathbf{u}_V = \square_{i \in V} \mathbf{x}_i\]

where $V$ is the set of nodes of the input graph and the type of aggregation represented by $\square$ is selected by the aggr argument. Commonly used aggregations are mean, max, and +.

See also reduce_nodes.

Examples

using Flux, GraphNeuralNetworks, Graphs
 
 pool = GlobalPool(mean)
 
@@ -24,7 +24,7 @@
 
 g = Flux.batch([GNNGraph(erdos_renyi(10, 4)) for _ in 1:5])
 X = rand(32, 50)
-pool(g, X) # => 32x5 matrix
source
GraphNeuralNetworks.Set2SetType
Set2Set(n_in, n_iters, n_layers = 1)

Set2Set layer from the paper Order Matters: Sequence to sequence for sets.

For each graph in the batch, the layer computes an output vector of size 2*n_in by iterating the following steps n_iters times:

\[\mathbf{q} = \mathrm{LSTM}(\mathbf{q}_{t-1}^*) +pool(g, X) # => 32x5 matrix

source
GraphNeuralNetworks.Set2SetType
Set2Set(n_in, n_iters, n_layers = 1)

Set2Set layer from the paper Order Matters: Sequence to sequence for sets.

For each graph in the batch, the layer computes an output vector of size 2*n_in by iterating the following steps n_iters times:

\[\mathbf{q} = \mathrm{LSTM}(\mathbf{q}_{t-1}^*) \alpha_{i} = \frac{\exp(\mathbf{q}^T \mathbf{x}_i)}{\sum_{j=1}^N \exp(\mathbf{q}^T \mathbf{x}_j)} \mathbf{r} = \sum_{i=1}^N \alpha_{i} \mathbf{x}_i -\mathbf{q}^*_t = [\mathbf{q}; \mathbf{r}]\]

where N is the number of nodes in the graph and LSTM is a long short-term memory network with n_layers layers, input size 2*n_in, and output size n_in.

Given a batch of graphs g and node features x, the layer returns a matrix of size (2*n_in, n_graphs).

source
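A minimal forward-pass sketch (dimensions chosen arbitrarily):

using Flux, GraphNeuralNetworks

n_in = 8
pool = Set2Set(n_in, 3)                        # 3 LSTM iterations

g = Flux.batch([rand_graph(5, 10), rand_graph(7, 12)])
x = rand(Float32, n_in, g.num_nodes)

u = pool(g, x)
size(u)                                        # (2 * n_in, g.num_graphs) == (16, 2)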
GraphNeuralNetworks.TopKPoolType
TopKPool(adj, k, in_channel)

Top-k pooling layer.

Arguments

  • adj: Adjacency matrix of a graph.
  • k: Top-k nodes are selected to pool together.
  • in_channel: The dimension of input channel.
source
+\mathbf{q}^*_t = [\mathbf{q}; \mathbf{r}]\]

where N is the number of nodes in the graph and LSTM is a long short-term memory network with n_layers layers, input size 2*n_in, and output size n_in.

Given a batch of graphs g and node features x, the layer returns a matrix of size (2*n_in, n_graphs).

source
GraphNeuralNetworks.TopKPoolType
TopKPool(adj, k, in_channel)

Top-k pooling layer.

Arguments

  • adj: Adjacency matrix of a graph.
  • k: Top-k nodes are selected to pool together.
  • in_channel: The dimension of input channel.
source
diff --git a/dev/api/temporalgraph/index.html b/dev/api/temporalgraph/index.html index e51536e57..9fb6f1352 100644 --- a/dev/api/temporalgraph/index.html +++ b/dev/api/temporalgraph/index.html @@ -17,7 +17,7 @@ num_edges: [20, 20, 20, 20, 20] num_snapshots: 5 tgdata: - x = 4-element Vector{Float64}source
GraphNeuralNetworks.GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GraphNeuralNetworks
+        x = 4-element Vector{Float64}
source
GraphNeuralNetworks.GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GraphNeuralNetworks
 
 julia> snapshots = [rand_graph(10, 20) for i in 1:5];
 
@@ -31,7 +31,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10, 10]
   num_edges: [20, 20, 16, 20, 20, 20]
-  num_snapshots: 6
source
GraphNeuralNetworks.GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GraphNeuralNetworks
+  num_snapshots: 6
source
GraphNeuralNetworks.GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GraphNeuralNetworks
 
 julia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];
 
@@ -45,7 +45,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10]
   num_edges: [20, 22]
-  num_snapshots: 2
source

TemporalSnapshotsGNNGraph random generators

GraphNeuralNetworks.GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
+  num_snapshots: 2
source

TemporalSnapshotsGNNGraph random generators

GraphNeuralNetworks.GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
                            number_snapshots::Int,
                            speed::AbstractFloat,
                            r::AbstractFloat;
@@ -57,7 +57,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [90, 90, 90, 90, 90]
-  num_snapshots: 5
source
GraphNeuralNetworks.GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
+  num_snapshots: 5
source
GraphNeuralNetworks.GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
                                number_snapshots::Int;
                                α::Real,
                                R::Real,
@@ -70,4 +70,4 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [44, 46, 48, 42, 38]
-  num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source
+ num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source diff --git a/dev/api/utils/index.html b/dev/api/utils/index.html index bfae0978b..70b47fda1 100644 --- a/dev/api/utils/index.html +++ b/dev/api/utils/index.html @@ -1,6 +1,6 @@ -Utils · GraphNeuralNetworks.jl

Utility Functions

Index

Docs

Graph-wise operations

GraphNeuralNetworks.reduce_nodesFunction
reduce_nodes(aggr, g, x)

For a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

See also: reduce_edges.

source
reduce_nodes(aggr, indicator::AbstractVector, x)

Return the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.

See also graph_indicator.

source
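A short sketch (features are random, for illustration):

using Flux, GraphNeuralNetworks
using Statistics: mean

g = Flux.batch([rand_graph(4, 6), rand_graph(6, 8)])
x = rand(Float32, 3, g.num_nodes)

u = reduce_nodes(mean, g, x)        # graph-wise mean of the node features
size(u)                             # (3, g.num_graphs) == (3, 2)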
GraphNeuralNetworks.reduce_edgesFunction
reduce_edges(aggr, g, e)

For a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

source

Neighborhood operations

GraphNeuralNetworks.softmax_edge_neighborsFunction
softmax_edge_neighbors(g, e)

Softmax of the edge features e, computed over each node's incoming-edge neighborhood.

\[\mathbf{e}'_{j\to i} = \frac{e^{\mathbf{e}_{j\to i}}} - {\sum_{j'\in N(i)} e^{\mathbf{e}_{j'\to i}}}.\]

source
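A sketch of typical usage (e.g. normalizing attention scores; numbers are random):

using GraphNeuralNetworks

g = rand_graph(5, 10)
e = rand(Float32, 1, g.num_edges)       # one raw score per edge
α = softmax_edge_neighbors(g, e)        # scores normalized over each target node's incoming edges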

NNlib

Primitive functions implemented in NNlib.jl.

NNlib.gather!Function
NNlib.gather!(dst, src, idx)

Reverse operation of scatter!. Gathers data from source src and writes it in destination dst according to the index array idx. For each k in CartesianIndices(idx), assign values to dst according to

dst[:, ... , k] .= src[:, ... , idx[k]...]

Notice that if idx is a vector containing integers, and both dst and src are matrices, the previous expression simplifies to

dst[:, k] .= src[:, idx[k]]

and k will run over 1:length(idx).

The elements of idx can be integers or integer tuples and may be repeated. A single src column can end up being copied into zero, one, or multiple dst columns.

See gather for an allocating version.

NNlib.gatherFunction
NNlib.gather(src, idx) -> dst

Reverse operation of scatter. Gathers data from source src and writes it in a destination dst according to the index array idx. For each k in CartesianIndices(idx), assign values to dst according to

dst[:, ... , k] .= src[:, ... , idx[k]...]

Notice that if idx is a vector containing integers and src is a matrix, the previous expression simplifies to

dst[:, k] .= src[:, idx[k]]

and k will run over 1:length(idx).

The elements of idx can be integers or integer tuples and may be repeated. A single src column can end up being copied into zero, one, or multiple dst columns.

See gather! for an in-place version.

Examples

julia> NNlib.gather([1,20,300,4000], [2,4,2])
+Utils · GraphNeuralNetworks.jl

Utility Functions

Index

Docs

Graph-wise operations

GraphNeuralNetworks.reduce_nodesFunction
reduce_nodes(aggr, g, x)

For a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

See also: reduce_edges.

source
reduce_nodes(aggr, indicator::AbstractVector, x)

Return the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.

See also graph_indicator.

source
GraphNeuralNetworks.reduce_edgesFunction
reduce_edges(aggr, g, e)

For a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

source

Neighborhood operations

GraphNeuralNetworks.softmax_edge_neighborsFunction
softmax_edge_neighbors(g, e)

Softmax of the edge features e, computed over each node's incoming-edge neighborhood.

\[\mathbf{e}'_{j\to i} = \frac{e^{\mathbf{e}_{j\to i}}} + {\sum_{j'\in N(i)} e^{\mathbf{e}_{j'\to i}}}.\]

source

NNlib

Primitive functions implemented in NNlib.jl.

NNlib.gather!Function
NNlib.gather!(dst, src, idx)

Reverse operation of scatter!. Gathers data from source src and writes it in destination dst according to the index array idx. For each k in CartesianIndices(idx), assign values to dst according to

dst[:, ... , k] .= src[:, ... , idx[k]...]

Notice that if idx is a vector containing integers, and both dst and src are matrices, the previous expression simplifies to

dst[:, k] .= src[:, idx[k]]

and k will run over 1:length(idx).

The elements of idx can be integers or integer tuples and may be repeated. A single src column can end up being copied into zero, one, or multiple dst columns.

See gather for an allocating version.

NNlib.gatherFunction
NNlib.gather(src, idx) -> dst

Reverse operation of scatter. Gathers data from source src and writes it in a destination dst according to the index array idx. For each k in CartesianIndices(idx), assign values to dst according to

dst[:, ... , k] .= src[:, ... , idx[k]...]

Notice that if idx is a vector containing integers and src is a matrix, the previous expression simplifies to

dst[:, k] .= src[:, idx[k]]

and k will run over 1:length(idx).

The elements of idx can be integers or integer tuples and may be repeated. A single src column can end up being copied into zero, one, or multiple dst columns.

See gather! for an in-place version.

Examples

julia> NNlib.gather([1,20,300,4000], [2,4,2])
 3-element Vector{Int64}:
    20
  4000
@@ -45,4 +45,4 @@
     10
   2000
     10
-    10
+ 10
diff --git a/dev/datasets/index.html b/dev/datasets/index.html index fee6f8a62..788b6b765 100644 --- a/dev/datasets/index.html +++ b/dev/datasets/index.html @@ -10,4 +10,4 @@ targets => 2708-element Vector{Int64} train_mask => 2708-element BitVector val_mask => 2708-element BitVector - test_mask => 2708-element BitVectorsource + test_mask => 2708-element BitVectorsource diff --git a/dev/dev/index.html b/dev/dev/index.html index 2efd856f7..aec745735 100644 --- a/dev/dev/index.html +++ b/dev/dev/index.html @@ -13,4 +13,4 @@ julia> @load "perf_pr_20210803_mymachine.jld2" julia> compare(dfpr, dfmaster)

Caching tutorials

Tutorials in GraphNeuralNetworks.jl are written in Pluto and rendered using DemoCards.jl and PlutoStaticHTML.jl. Rendering a Pluto notebook is time- and resource-consuming, especially in a CI environment. So we use the caching functionality provided by PlutoStaticHTML.jl to reduce CI time.

If you are contributing a new tutorial or making changes to an existing notebook, generate the docs locally before committing/pushing. For caching to work, the cache environment (your local machine) and the Documenter CI should use the same Julia version (e.g. "v1.9.1"; the patch number must match as well). So use the Documenter CI Julia version when generating the docs locally.

julia --version # check julia version before generating docs
-julia --project=docs docs/make.jl

Note: Use juliaup for easy switching of Julia versions.

During the doc generation process, DemoCards.jl stores the cached notebooks in docs/pluto_output. So include any changes made in this folder in your git commit. Remember that every file in this folder is machine-generated and should not be edited manually.

git add docs/pluto_output # add generated cache

Check the documenter CI logs to ensure that it used the local cache:

+julia --project=docs docs/make.jl

Note: Use juliaup for easy switching of Julia versions.

During the doc generation process, DemoCards.jl stores the cached notebooks in docs/pluto_output. So include any changes made in this folder in your git commit. Remember that every file in this folder is machine-generated and should not be edited manually.

git add docs/pluto_output # add generated cache

Check the documenter CI logs to ensure that it used the local cache:

diff --git a/dev/gnngraph/index.html b/dev/gnngraph/index.html index ff93de350..0262392af 100644 --- a/dev/gnngraph/index.html +++ b/dev/gnngraph/index.html @@ -167,4 +167,4 @@ julia> GNNGraph(gd) GNNGraph: num_nodes: 10 - num_edges: 20 + num_edges: 20 diff --git a/dev/gsoc/index.html b/dev/gsoc/index.html index 0ccc0b1c8..73adcff43 100644 --- a/dev/gsoc/index.html +++ b/dev/gsoc/index.html @@ -1,2 +1,2 @@ -Summer Of Code · GraphNeuralNetworks.jl
+Summer Of Code · GraphNeuralNetworks.jl
diff --git a/dev/heterograph/index.html b/dev/heterograph/index.html index 91fc72a8f..317aec4ad 100644 --- a/dev/heterograph/index.html +++ b/dev/heterograph/index.html @@ -81,4 +81,4 @@ @assert g.num_nodes[:A] == 80 @assert size(g.ndata[:A].x) == (3, 80) # ... -end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.

+end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.

diff --git a/dev/index.html b/dev/index.html index 1430607d4..084a36b84 100644 --- a/dev/index.html +++ b/dev/index.html @@ -37,4 +37,4 @@ end @info (; epoch, train_loss=loss(model, train_loader), test_loss=loss(model, test_loader)) -end +end diff --git a/dev/messagepassing/index.html b/dev/messagepassing/index.html index 56f5d0f5f..211176be6 100644 --- a/dev/messagepassing/index.html +++ b/dev/messagepassing/index.html @@ -76,4 +76,4 @@ x = propagate(message, g, +, xj=x) return l.σ.(l.weight * x .+ l.bias) -end

See the GATConv implementation here for a more complex example.

Built-in message functions

In order to exploit optimized specializations of propagate, it is recommended to use built-in message functions such as copy_xj whenever possible.

+end

See the GATConv implementation here for a more complex example.

Built-in message functions

In order to exploit optimized specializations of propagate, it is recommended to use built-in message functions such as copy_xj whenever possible.

diff --git a/dev/models/index.html b/dev/models/index.html index a9b8ab57b..dab630cf6 100644 --- a/dev/models/index.html +++ b/dev/models/index.html @@ -66,4 +66,4 @@ X = randn(Float32, din, 10) # Pass only X as input, the model already contains the graph. -y = model(X)

An example of WithGraph usage is given in the graph neural ODE script in the examples folder.

+y = model(X)

An example of WithGraph usage is given in the graph neural ODE script in the examples folder.

diff --git a/dev/search/index.html b/dev/search/index.html index a404a9559..88de412a8 100644 --- a/dev/search/index.html +++ b/dev/search/index.html @@ -1,2 +1,2 @@ -Search · GraphNeuralNetworks.jl


    +Search · GraphNeuralNetworks.jl


      diff --git a/dev/temporalgraph/index.html b/dev/temporalgraph/index.html index fda9b4f86..61f9c3a64 100644 --- a/dev/temporalgraph/index.html +++ b/dev/temporalgraph/index.html @@ -92,4 +92,4 @@ julia> output = m(tg, tg.ndata.x); julia> size(output[1]) -(1, 10) +(1, 10) diff --git a/dev/tutorials/index.html b/dev/tutorials/index.html index 977c8e58b..f12dde007 100644 --- a/dev/tutorials/index.html +++ b/dev/tutorials/index.html @@ -15,4 +15,4 @@

      Traffic Prediction using GraphNeuralNetworks.jl


      Traffic Prediction using recurrent Temporal Graph Convolutional Network

      -

      Contributions

If you have a suggestion for adding new tutorials, feel free to create a new issue here. Users are invited to contribute demonstrations of their own. If you want to contribute new tutorials and are looking for inspiration, check out these tutorials from PyTorch Geometric. You are expected to use Pluto.jl notebooks with DemoCards.jl. Please check out the existing tutorials for more details.

      +

      Contributions

If you have a suggestion for adding new tutorials, feel free to create a new issue here. Users are invited to contribute demonstrations of their own. If you want to contribute new tutorials and are looking for inspiration, check out these tutorials from PyTorch Geometric. You are expected to use Pluto.jl notebooks with DemoCards.jl. Please check out the existing tutorials for more details.

      diff --git a/dev/tutorials/introductory_tutorials/gnn_intro_pluto/index.html b/dev/tutorials/introductory_tutorials/gnn_intro_pluto/index.html index 4abbea687..17046afd3 100644 --- a/dev/tutorials/introductory_tutorials/gnn_intro_pluto/index.html +++ b/dev/tutorials/introductory_tutorials/gnn_intro_pluto/index.html @@ -252,4 +252,4 @@

As one can see, our 3-layer GCN model manages to linearly separate the communities and classify most of the nodes correctly.

Furthermore, we did all of this with a few lines of code, thanks to GraphNeuralNetworks.jl, which helped us out with data handling and GNN implementations.

      -

This page was generated using DemoCards.jl and PlutoStaticHTML.jl

      +

This page was generated using DemoCards.jl and PlutoStaticHTML.jl

      diff --git a/dev/tutorials/introductory_tutorials/graph_classification_pluto/index.html b/dev/tutorials/introductory_tutorials/graph_classification_pluto/index.html index dbb4ec1cf..3f397a1ef 100644 --- a/dev/tutorials/introductory_tutorials/graph_classification_pluto/index.html +++ b/dev/tutorials/introductory_tutorials/graph_classification_pluto/index.html @@ -207,4 +207,4 @@

      Conclusion

      In this chapter, you have learned how to apply GNNs to the task of graph classification. You have learned how graphs can be batched together for better GPU utilization, and how to apply readout layers for obtaining graph embeddings rather than node embeddings.

      -

This page was generated using DemoCards.jl and PlutoStaticHTML.jl

      +

This page was generated using DemoCards.jl and PlutoStaticHTML.jl

      diff --git a/dev/tutorials/introductory_tutorials/node_classification_pluto/index.html b/dev/tutorials/introductory_tutorials/node_classification_pluto/index.html index 111bf5883..e338aab41 100644 --- a/dev/tutorials/introductory_tutorials/node_classification_pluto/index.html +++ b/dev/tutorials/introductory_tutorials/node_classification_pluto/index.html @@ -304,4 +304,4 @@

      Conclusion

      In this tutorial, we have seen how to apply GNNs to real-world problems, and, in particular, how they can effectively be used for boosting a model's performance. In the next tutorial, we will look into how GNNs can be used for the task of graph classification.

      -

This page was generated using DemoCards.jl and PlutoStaticHTML.jl

      +

This page was generated using DemoCards.jl and PlutoStaticHTML.jl

      diff --git a/dev/tutorials/introductory_tutorials/traffic_prediction/index.html b/dev/tutorials/introductory_tutorials/traffic_prediction/index.html index 68a5ca391..de30190b0 100644 --- a/dev/tutorials/introductory_tutorials/traffic_prediction/index.html +++ b/dev/tutorials/introductory_tutorials/traffic_prediction/index.html @@ -199,4 +199,4 @@

Training

Conclusion

      In this tutorial, we learned how to use a recurrent temporal graph convolutional network to predict traffic in a spatio-temporal setting. We used the TGCN model, which consists of a graph convolutional network (GCN) and a gated recurrent unit (GRU). We then trained the model for 100 epochs on a small subset of the METR-LA dataset. The accuracy of the model is not very good, but it can be improved by training on more data.

      -

This page was generated using DemoCards.jl and PlutoStaticHTML.jl

      +

This page was generated using DemoCards.jl and PlutoStaticHTML.jl