From b4f2ff64d1172854b536856ab5148d125b7f565c Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Mon, 9 Dec 2024 10:25:49 +0000 Subject: [PATCH] build based on 4988857 --- .../dev/.documenter-siteinfo.json | 2 +- docs/GNNGraphs.jl/dev/api/datasets/index.html | 2 +- docs/GNNGraphs.jl/dev/api/gnngraph/index.html | 38 +++++++++--------- .../dev/api/heterograph/index.html | 12 +++--- docs/GNNGraphs.jl/dev/api/samplers/index.html | 2 +- .../dev/api/temporalgraph/index.html | 10 ++--- .../dev/guides/datasets/index.html | 2 +- .../dev/guides/gnngraph/index.html | 2 +- .../dev/guides/heterograph/index.html | 2 +- .../dev/guides/temporalgraph/index.html | 2 +- docs/GNNGraphs.jl/dev/index.html | 2 +- docs/GNNLux.jl/dev/.documenter-siteinfo.json | 2 +- .../dev/GNNGraphs/api/datasets/index.html | 2 +- .../dev/GNNGraphs/api/gnngraph/index.html | 38 +++++++++--------- .../dev/GNNGraphs/api/heterograph/index.html | 12 +++--- .../dev/GNNGraphs/api/samplers/index.html | 2 +- .../GNNGraphs/api/temporalgraph/index.html | 10 ++--- .../dev/GNNGraphs/guides/datasets/index.html | 2 +- .../dev/GNNGraphs/guides/gnngraph/index.html | 2 +- .../GNNGraphs/guides/heterograph/index.html | 2 +- .../GNNGraphs/guides/temporalgraph/index.html | 2 +- docs/GNNLux.jl/dev/GNNGraphs/index.html | 2 +- .../dev/GNNlib/api/messagepassing/index.html | 4 +- .../GNNLux.jl/dev/GNNlib/api/utils/index.html | 4 +- .../GNNlib/guides/messagepassing/index.html | 2 +- docs/GNNLux.jl/dev/GNNlib/index.html | 2 +- docs/GNNLux.jl/dev/api/basic/index.html | 4 +- docs/GNNLux.jl/dev/api/conv/index.html | 36 ++++++++--------- .../GNNLux.jl/dev/api/temporalconv/index.html | 12 +++--- docs/GNNLux.jl/dev/guides/models/index.html | 2 +- docs/GNNLux.jl/dev/index.html | 2 +- .../dev/tutorials/gnn_intro/index.html | 2 +- docs/GNNlib.jl/dev/.documenter-siteinfo.json | 2 +- .../dev/GNNGraphs/api/datasets/index.html | 2 +- .../dev/GNNGraphs/api/gnngraph/index.html | 38 +++++++++--------- .../dev/GNNGraphs/api/heterograph/index.html | 12 +++--- .../dev/GNNGraphs/api/samplers/index.html | 2 +- .../GNNGraphs/api/temporalgraph/index.html | 10 ++--- .../dev/GNNGraphs/guides/datasets/index.html | 2 +- .../dev/GNNGraphs/guides/gnngraph/index.html | 2 +- .../GNNGraphs/guides/heterograph/index.html | 2 +- .../GNNGraphs/guides/temporalgraph/index.html | 2 +- docs/GNNlib.jl/dev/GNNGraphs/index.html | 2 +- .../dev/api/messagepassing/index.html | 4 +- docs/GNNlib.jl/dev/api/utils/index.html | 4 +- .../dev/guides/messagepassing/index.html | 2 +- docs/GNNlib.jl/dev/index.html | 2 +- .../dev/.documenter-siteinfo.json | 2 +- .../dev/GNNGraphs/api/datasets/index.html | 2 +- .../dev/GNNGraphs/api/gnngraph/index.html | 38 +++++++++--------- .../dev/GNNGraphs/api/heterograph/index.html | 12 +++--- .../dev/GNNGraphs/api/samplers/index.html | 2 +- .../GNNGraphs/api/temporalgraph/index.html | 10 ++--- .../dev/GNNGraphs/guides/datasets/index.html | 2 +- .../dev/GNNGraphs/guides/gnngraph/index.html | 2 +- .../GNNGraphs/guides/heterograph/index.html | 2 +- .../GNNGraphs/guides/temporalgraph/index.html | 2 +- .../dev/GNNGraphs/index.html | 2 +- .../dev/GNNlib/api/messagepassing/index.html | 4 +- .../dev/GNNlib/api/utils/index.html | 4 +- .../GNNlib/guides/messagepassing/index.html | 2 +- .../dev/GNNlib/index.html | 2 +- .../dev/api/basic/index.html | 6 +-- .../dev/api/conv/index.html | 40 +++++++++---------- .../dev/api/heteroconv/index.html | 2 +- .../dev/api/pool/index.html | 6 +-- .../dev/api/temporalconv/index.html | 12 +++--- .../GraphNeuralNetworks.jl/dev/dev/index.html | 2 +- 
.../dev/guides/models/index.html | 2 +- docs/GraphNeuralNetworks.jl/dev/index.html | 2 +- .../dev/tutorials/gnn_intro_pluto/index.html | 2 +- .../graph_classification_pluto/index.html | 2 +- .../node_classification_pluto/index.html | 2 +- .../index.html | 2 +- .../tutorials/traffic_prediction/index.html | 2 +- logo.svg | 31 ++++++++++++++ 76 files changed, 271 insertions(+), 240 deletions(-) create mode 100644 logo.svg diff --git a/docs/GNNGraphs.jl/dev/.documenter-siteinfo.json b/docs/GNNGraphs.jl/dev/.documenter-siteinfo.json index 3a23aa5a7..79ec40c77 100644 --- a/docs/GNNGraphs.jl/dev/.documenter-siteinfo.json +++ b/docs/GNNGraphs.jl/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-09T09:30:48","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-09T10:22:31","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/docs/GNNGraphs.jl/dev/api/datasets/index.html b/docs/GNNGraphs.jl/dev/api/datasets/index.html index a6e661f82..3cc92e26f 100644 --- a/docs/GNNGraphs.jl/dev/api/datasets/index.html +++ b/docs/GNNGraphs.jl/dev/api/datasets/index.html @@ -9,4 +9,4 @@ targets = 2708-element Vector{Int64} test_mask = 2708-element BitVector features = 1433×2708 Matrix{Float32} - train_mask = 2708-element BitVectorsource
\ No newline at end of file + train_mask = 2708-element BitVectorsource
\ No newline at end of file diff --git a/docs/GNNGraphs.jl/dev/api/gnngraph/index.html b/docs/GNNGraphs.jl/dev/api/gnngraph/index.html index c14273e71..fbd9c883c 100644 --- a/docs/GNNGraphs.jl/dev/api/gnngraph/index.html +++ b/docs/GNNGraphs.jl/dev/api/gnngraph/index.html @@ -32,7 +32,7 @@ # Collect edges' source and target nodes. # Both source and target are vectors of length num_edges -source, target = edge_index(g)

A GNNGraph can be sent to the GPU, for example by using Flux.jl's gpu function or MLDataDevices.jl's utilities.

source
Base.copyFunction
copy(g::GNNGraph; deep=false)

Create a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.

source
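A minimal usage sketch of the deep keyword, not part of the upstream docstring; the sharing behavior shown is assumed from the description above:

julia> using GNNGraphs

julia> g = rand_graph(5, 10, ndata = rand(Float32, 2, 5));

julia> gshallow = copy(g);            # shares the underlying arrays with g

julia> gdeep = copy(g; deep = true);  # fully independent copy

julia> gshallow.ndata.x === g.ndata.x  # same array object in the shallow copy
true

julia> gdeep.ndata.x === g.ndata.x     # distinct array object in the deep copy
false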

DataStore

GNNGraphs.DataStoreType
DataStore([n, data])
+source, target = edge_index(g)

A GNNGraph can be sent to the GPU, for example by using Flux.jl's gpu function or MLDataDevices.jl's utilities. ```

source
Base.copyFunction
copy(g::GNNGraph; deep=false)

Create a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.

source

DataStore

GNNGraphs.DataStoreType
DataStore([n, data])
 DataStore([n,] k1 = x1, k2 = x2, ...)

A container for feature arrays. The optional argument n enforces that numobs(x) == n for each array contained in the datastore.

At construction time, the data can be provided as any iterables of pairs of symbols and arrays or as keyword arguments:

julia> ds = DataStore(3, x = rand(Float32, 2, 3), y = rand(Float32, 3))
 DataStore(3) with 2 elements:
   y = 3-element Vector{Float32}
@@ -62,8 +62,8 @@
 3-element Vector{Float32}:
  1.0
  1.0
- 1.0
source

Query

GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
-adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

Denoting the returned adjacency list by a, if dir=:out then a[i] will contain the neighbors of node i through outgoing edges. If dir=:in, it will contain the neighbors from incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes of each edge in g.

s, t = edge_index(g)
source
GNNGraphs.get_graph_typeMethod
get_graph_type(g::GNNGraph)

Return the underlying representation for the graph g as a symbol.

Possible values are:

  • :coo: Coordinate list representation. The graph is stored as a tuple of vectors (s, t, w), where s and t are the source and target nodes of the edges, and w is the edge weights.
  • :sparse: Sparse matrix representation. The graph is stored as a sparse matrix representing the weighted adjacency matrix.
  • :dense: Dense matrix representation. The graph is stored as a dense matrix representing the weighted adjacency matrix.

The default representation for graph constructors in GNNGraphs.jl is :coo. The underlying representation can be accessed through the g.graph field.

See also GNNGraph.

Examples

The default representation for graph constructors in GNNGraphs.jl is :coo.

julia> g = rand_graph(5, 10)
+ 1.0
source

Query

GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
+adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

Denoting the returned adjacency list by a, if dir=:out then a[i] will contain the neighbors of node i through outgoing edges. If dir=:in, it will contain the neighbors from incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
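A minimal sketch of the dir keyword on a directed cycle; the exact printed output is assumed and may differ across versions:

julia> using GNNGraphs

julia> g = GNNGraph([1, 2, 3], [2, 3, 1]);  # directed cycle 1 -> 2 -> 3 -> 1

julia> adjacency_list(g)             # out-neighbors (dir = :out is the default)
3-element Vector{Vector{Int64}}:
 [2]
 [3]
 [1]

julia> adjacency_list(g; dir = :in)  # in-neighbors instead
3-element Vector{Vector{Int64}}:
 [3]
 [1]
 [2]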
GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes of each edge in g.

s, t = edge_index(g)
source
GNNGraphs.get_graph_typeMethod
get_graph_type(g::GNNGraph)

Return the underlying representation for the graph g as a symbol.

Possible values are:

  • :coo: Coordinate list representation. The graph is stored as a tuple of vectors (s, t, w), where s and t are the source and target nodes of the edges, and w is the edge weights.
  • :sparse: Sparse matrix representation. The graph is stored as a sparse matrix representing the weighted adjacency matrix.
  • :dense: Dense matrix representation. The graph is stored as a dense matrix representing the weighted adjacency matrix.

The default representation for graph constructors in GNNGraphs.jl is :coo. The underlying representation can be accessed through the g.graph field.

See also GNNGraph.

Examples

The default representation for graph constructors in GNNGraphs.jl is :coo.

julia> g = rand_graph(5, 10)
 GNNGraph:
   num_nodes: 5
   num_edges: 10
@@ -88,7 +88,7 @@
 julia> gcoo = GNNGraph(g, graph_type=:coo);
 
 julia> gcoo.graph
-([2, 3, 5], [1, 2, 4], [1, 1, 1])
source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
GNNGraphs.has_multi_edgesMethod
has_multi_edges(g::GNNGraph)

Return true if g has any multiple edges.

source
GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$, where $A$ is the adjacency matrix of the graph g.

source
GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

The user may specify the eltype T of the returned matrix.

If weighted=true, A will contain the edge weights if present; otherwise the elements of A will be either 0 or 1.

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, the element type is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the incoming edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
Graphs.has_self_loopsMethod
has_self_loops(g::GNNGraph)

Return true if g has any self loops.

source
Graphs.inneighborsMethod
inneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through incoming edges.

See also neighbors and outneighbors.

source
Graphs.outneighborsMethod
outneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through outgoing edges.

See also neighbors and inneighbors.

source
Graphs.neighborsMethod
neighbors(g::GNNGraph, i::Integer; dir=:out)

Return the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.

See also outneighbors, inneighbors.

source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])
+([2, 3, 5], [1, 2, 4], [1, 1, 1])
source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
GNNGraphs.has_multi_edgesMethod
has_multi_edges(g::GNNGraph)

Return true if g has any multiple edges.

source
GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$, where $A$ is the adjacency matrix of the graph g.

source
GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

The user may specify the eltype T of the returned matrix.

If weighted=true, A will contain the edge weights if present; otherwise the elements of A will be either 0 or 1.

source
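A small sketch of the dir semantics described above; the dense conversion is only for display and the entries follow from the definition:

julia> using GNNGraphs

julia> g = GNNGraph([1, 2], [2, 3]);  # edges 1 -> 2 and 2 -> 3

julia> Matrix(adjacency_matrix(g, Int; dir = :out))
3×3 Matrix{Int64}:
 0  1  0
 0  0  1
 0  0  0

julia> Matrix(adjacency_matrix(g, Int; dir = :in))  # transpose of the above
3×3 Matrix{Int64}:
 0  0  0
 1  0  0
 0  1  0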
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, the element type is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the incoming edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
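A small sketch of the dir keyword; the degree values follow directly from the edges of this tiny graph:

julia> using GNNGraphs

julia> g = GNNGraph([1, 1, 2], [2, 3, 3]);  # edges 1->2, 1->3, 2->3

julia> degree(g)            # out-degrees (dir = :out is the default)
3-element Vector{Int64}:
 2
 1
 0

julia> degree(g; dir = :in)
3-element Vector{Int64}:
 0
 1
 2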
Graphs.has_self_loopsMethod
has_self_loops(g::GNNGraph)

Return true if g has any self loops.

source
Graphs.inneighborsMethod
inneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through incoming edges.

See also neighbors and outneighbors.

source
Graphs.outneighborsMethod
outneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through outgoing edges.

See also neighbors and inneighbors.

source
Graphs.neighborsMethod
neighbors(g::GNNGraph, i::Integer; dir=:out)

Return the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.

See also outneighbors, inneighbors.

source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])
 add_edges(g::GNNGraph, (s, t); [edata])
 add_edges(g::GNNGraph, (s, t, w); [edata])

Add to graph g the edges with source nodes s and target nodes t. Optionally, pass the edge weight w and the features edata for the new edges. Returns a new graph sharing part of the underlying data with g.

If the s or t contain nodes that are not already present in the graph, they are added to the graph as well.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
@@ -110,9 +110,9 @@
 julia> add_edges(g, [1,2], [2,3])
 GNNGraph:
   num_nodes: 3
-  num_edges: 2
source
GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g but also adding edges connecting the nodes to themselves.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
+  num_edges: 2
source
GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g but also adding edges connecting the nodes to themselves.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
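A minimal sketch; the edge count follows from the description above (one new self-loop per node):

julia> using GNNGraphs

julia> g = GNNGraph([1, 2], [2, 3]);  # 3 nodes, 2 edges

julia> add_self_loops(g).num_edges
5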
GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
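A minimal sketch of pulling one component out of a batched graph; the node map values are assumed from the batching order:

julia> using GNNGraphs, MLUtils

julia> gbatched = MLUtils.batch([rand_graph(5, 6), rand_graph(4, 2)]);

julia> gsub, nodemap = getgraph(gbatched, 2; nmap = true);

julia> gsub.num_nodes   # the second component graph
4

julia> nodemap          # new node i corresponds to node nodemap[i] in gbatched
4-element Vector{Int64}:
 6
 7
 8
 9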
GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
                 num_neg_edges = g.num_edges, 
-                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the origin graph.

See also is_bidirected.

source
GNNGraphs.perturb_edgesMethod
perturb_edges([rng], g::GNNGraph, perturb_ratio)

Return a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops.

The function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.

Arguments

  • g::GNNGraph: The graph to be perturbed.
  • perturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.
  • rng: An optional random number generator to ensure reproducible results.

Examples

julia> g = GNNGraph((s, t, w))
+                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the origin graph.

See also is_bidirected.

source
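A minimal sketch, assuming (as the description implies) that the sampled negative edges are disjoint from the edges of g, so that intersect (documented below) returns an empty edge set:

julia> using GNNGraphs

julia> g = rand_graph(10, 20);

julia> gneg = negative_sample(g; num_neg_edges = 6);

julia> intersect(g, gneg).num_edges
0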
GNNGraphs.perturb_edgesMethod
perturb_edges([rng], g::GNNGraph, perturb_ratio)

Return a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops.

The function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.

Arguments

  • g::GNNGraph: The graph to be perturbed.
  • perturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.
  • rng: An optional random number generator to ensure reproducible results.

Examples

julia> g = GNNGraph((s, t, w))
 GNNGraph:
   num_nodes: 4
   num_edges: 5
@@ -120,7 +120,7 @@
 julia> perturbed_g = perturb_edges(g, 0.2)
 GNNGraph:
   num_nodes: 4
-  num_edges: 6
source
GNNGraphs.ppr_diffusionMethod
ppr_diffusion(g::GNNGraph{<:COO_T}, alpha =0.85f0) -> GNNGraph

Calculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. Reference: the paper The PageRank Citation Ranking: Bringing Order to the Web.

The function performs the following steps:

  1. Constructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha_f32) and I the identity matrix.
  2. Normalizes A to ensure each column sums to 1, representing transition probabilities.
  3. Applies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.
  4. Updates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.

Arguments

  • g::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.
  • alpha_f32::Float32: The damping factor used in PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.

Returns

  • A new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.
source
GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true makes sure that an edge and its reverse go into the same split. This option is supported only for bidirected graphs with no self-loops and multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
GNNGraphs.random_walk_peMethod
random_walk_pe(g, walk_length)

Return the random walk positional encoding from the paper Graph Neural Networks with Learnable Structural and Positional Representations of the given graph g and the length of the walk walk_length as a matrix of size (walk_length, g.num_nodes).

source
GNNGraphs.remove_edgesMethod
remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})
+  num_edges: 6
source
GNNGraphs.ppr_diffusionMethod
ppr_diffusion(g::GNNGraph{<:COO_T}, alpha =0.85f0) -> GNNGraph

Calculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. Reference: the paper The PageRank Citation Ranking: Bringing Order to the Web.

The function performs the following steps:

  1. Constructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha_f32) and I the identity matrix.
  2. Normalizes A to ensure each column sums to 1, representing transition probabilities.
  3. Applies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.
  4. Updates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.

Arguments

  • g::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.
  • alpha_f32::Float32: The damping factor used in PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.

Returns

  • A new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.
source
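A minimal construction sketch with hypothetical weights; the graph structure is preserved while the weights are replaced by PPR-derived values:

julia> using GNNGraphs

julia> s, t, w = [1, 2, 3], [2, 3, 1], [0.5f0, 1.0f0, 1.5f0];

julia> g = GNNGraph((s, t, w));

julia> g_ppr = ppr_diffusion(g);

julia> g_ppr.num_edges == g.num_edges  # same edges, new weights
true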
GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true makes sure that an edge and its reverse go into the same split. This option is supported only for bidirected graphs with no self-loops and multi-edges.

rand_edge_split is tipically used to create train/test splits in link prediction tasks.

source
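A minimal sketch of a train/test split; that the two graphs partition the original edges is assumed from the description above:

julia> using GNNGraphs

julia> g = rand_graph(10, 30);  # bidirected by default

julia> gtrain, gtest = rand_edge_split(g, 0.8);

julia> gtrain.num_edges + gtest.num_edges == g.num_edges
true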
GNNGraphs.random_walk_peMethod
random_walk_pe(g, walk_length)

Return the random walk positional encoding from the paper Graph Neural Networks with Learnable Structural and Positional Representations of the given graph g and the length of the walk walk_length as a matrix of size (walk_length, g.num_nodes).

source
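A minimal sketch of the output shape stated above:

julia> using GNNGraphs

julia> g = rand_graph(6, 10);

julia> pe = random_walk_pe(g, 3);

julia> size(pe)  # (walk_length, num_nodes)
(3, 6)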
GNNGraphs.remove_edgesMethod
remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})
 remove_edges(g::GNNGraph, p=0.5)

Remove specified edges from a GNNGraph, either by specifying edge indices or by randomly removing edges with a given probability.

Arguments

  • g: The input graph from which edges will be removed.
  • edges_to_remove: Vector of edge indices to be removed. This argument is only required for the first method.
  • p: Probability of removing each edge. This argument is only required for the second method and defaults to 0.5.

Returns

A new GNNGraph with the specified edges removed.

Example

julia> using GNNGraphs
 
 # Construct a GNNGraph
@@ -143,7 +143,7 @@
 julia> g_new
 GNNGraph:
   num_nodes: 3
-  num_edges: 2
source
GNNGraphs.remove_multi_edgesMethod
remove_multi_edges(g::GNNGraph; aggr=+)

Remove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, which can take the values +, min, max or mean.

See also remove_self_loops, has_multi_edges, and to_bidirected.

source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, p)

Returns a new graph obtained by dropping each node of g independently with probability p.

Examples

julia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])
+  num_edges: 2
source
GNNGraphs.remove_multi_edgesMethod
remove_multi_edges(g::GNNGraph; aggr=+)

Remove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, which can take the values +, min, max or mean.

See also remove_self_loops, has_multi_edges, and to_bidirected.

source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, p)

Returns a new graph obtained by dropping each node of g independently with probability p.

Examples

julia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])
 GNNGraph:
   num_nodes: 4
   num_edges: 6
@@ -151,7 +151,7 @@
 julia> g_new = remove_nodes(g, 0.5)
 GNNGraph:
   num_nodes: 2
-  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)

Remove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.

Arguments

  • g: The input graph from which nodes (and their edges) will be removed.
  • nodes_to_remove: Vector of node indices to be removed.

Returns

A new GNNGraph with the specified nodes and all edges associated with these nodes removed.

Example

using GNNGraphs
+  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)

Remove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.

Arguments

  • g: The input graph from which nodes (and their edges) will be removed.
  • nodes_to_remove: Vector of node indices to be removed.

Returns

A new GNNGraph with the specified nodes and all edges associated with these nodes removed.

Example

using GNNGraphs
 
 g = GNNGraph([1, 1, 2, 2, 3], [2, 3, 1, 3, 1])
 
@@ -159,7 +159,7 @@
 g_new = remove_nodes(g, [2, 3])
 
 # g_new now does not contain nodes 2 and 3, and any edges that were connected to these nodes.
-println(g_new)
source
GNNGraphs.remove_self_loopsMethod
remove_self_loops(g::GNNGraph)

Return a graph constructed from g where self-loops (edges from a node to itself) are removed.

See also add_self_loops and remove_multi_edges.

source
GNNGraphs.set_edge_weightMethod
set_edge_weight(g::GNNGraph, w::AbstractVector)

Set w as edge weights in the returned graph.

source
GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
+println(g_new)
source
GNNGraphs.remove_self_loopsMethod
remove_self_loops(g::GNNGraph)

Return a graph constructed from g where self-loops (edges from a node to itself) are removed.

See also add_self_loops and remove_multi_edges.

source
GNNGraphs.set_edge_weightMethod
set_edge_weight(g::GNNGraph, w::AbstractVector)

Set w as edge weights in the returned graph.

source
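A minimal sketch with hypothetical weights; with the default :coo representation, g.graph is the tuple (s, t, w), as noted in get_graph_type above:

julia> using GNNGraphs

julia> g = rand_graph(4, 6);

julia> gw = set_edge_weight(g, rand(Float32, 6));  # one weight per edge

julia> length(gw.graph[3])  # the stored weight vector
6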
GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
 julia> w = [1.0, 2.0, 3.0, 4.0, 5.0];
 
@@ -200,7 +200,7 @@
  20.0
  35.0
  35.0
- 50.0
source
GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each multiple edge between two nodes in g, keeps only one edge in a single direction.

source
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also MLUtils.unbatch.

Examples

julia> g1 = rand_graph(4, 4, ndata=ones(Float32, 3, 4))
+ 50.0
source
GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each multiple edge between two nodes in g, keeps only one edge in a single direction.

source
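A minimal sketch; the resulting edge count is assumed from the description above:

julia> using GNNGraphs

julia> g = GNNGraph([1, 2, 3, 3], [2, 1, 4, 4]);  # a 1 <-> 2 pair plus a repeated 3 -> 4 edge

julia> to_unidirected(g).num_edges  # one direction kept per multi-edge
2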
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also MLUtils.unbatch.

Examples

julia> g1 = rand_graph(4, 4, ndata=ones(Float32, 3, 4))
 GNNGraph:
   num_nodes: 4
   num_edges: 4
@@ -226,7 +226,7 @@
 3×9 Matrix{Float32}:
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
- 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.

See also MLUtils.batch and getgraph.

Examples

julia> using MLUtils
+ 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.

See also MLUtils.batch and getgraph.

Examples

julia> using MLUtils
 
 julia> gbatched = MLUtils.batch([rand_graph(5, 6), rand_graph(10, 8), rand_graph(4,2)])
 GNNGraph:
@@ -238,8 +238,8 @@
 3-element Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}:
  GNNGraph(5, 6) with no data
  GNNGraph(10, 8) with no data
- GNNGraph(4, 2) with no data
source
SparseArrays.blockdiagMethod
blockdiag(xs::GNNGraph...)

Equivalent to MLUtils.batch.

source

Utils

GNNGraphs.sort_edge_indexFunction
sort_edge_index(ei::Tuple) -> u', v'
-sort_edge_index(u, v) -> u', v'

Return a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi.

source
GNNGraphs.color_refinementFunction
color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters

The color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.

At each iteration the algorithm computes a hash of the coloring and the sorted list of colors of the neighbors of each node. This hash is used to determine if the coloring has changed.

$x_i' = \mathrm{hashmap}((x_i, \mathrm{sort}([x_j \text{ for } j \in N(i)])))$

This algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.

Arguments

  • g::GNNGraph: The graph to color.
  • x0::AbstractVector{<:Integer}: The initial coloring. If not provided, all nodes are colored with 1.

Returns

  • x::AbstractVector{<:Integer}: The final coloring.
  • num_colors::Int: The number of colors used.
  • niters::Int: The number of iterations until convergence.
source

Generate

GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
+ GNNGraph(4, 2) with no data
source
SparseArrays.blockdiagMethod
blockdiag(xs::GNNGraph...)

Equivalent to MLUtils.batch.

source

Utils

GNNGraphs.sort_edge_indexFunction
sort_edge_index(ei::Tuple) -> u', v'
+sort_edge_index(u, v) -> u', v'

Return a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi.

source
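A worked sketch of the lexicographic sort:

julia> using GNNGraphs

julia> u, v = [3, 1, 2, 1], [1, 3, 2, 2];

julia> sort_edge_index(u, v)  # pairs sorted as (1,2), (1,3), (2,2), (3,1)
([1, 1, 2, 3], [2, 3, 2, 1])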
GNNGraphs.color_refinementFunction
color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters

The color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.

At each iteration the algorithm computes a hash of the coloring and the sorted list of colors of the neighbors of each node. This hash is used to determine if the coloring has changed.

$x_i' = \mathrm{hashmap}((x_i, \mathrm{sort}([x_j \text{ for } j \in N(i)])))$

This algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.

Arguments

  • g::GNNGraph: The graph to color.
  • x0::AbstractVector{<:Integer}: The initial coloring. If not provided, all nodes are colored with 1.

Returns

  • x::AbstractVector{<:Integer}: The final coloring.
  • num_colors::Int: The number of colors used.
  • niters::Int: The number of iterations until convergence.
source
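A minimal sketch using the default all-ones initial coloring:

julia> using GNNGraphs

julia> g = rand_graph(8, 16);

julia> x, num_colors, niters = color_refinement(g);

julia> length(x) == g.num_nodes  # one color per node
true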

Generate

GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
           k::Int; 
           graph_indicator = nothing,
           self_loops = false, 
@@ -259,7 +259,7 @@
 GNNGraph:
     num_nodes = 10
     num_edges = 30
-    num_graphs = 2
source
GNNGraphs.radius_graphMethod
radius_graph(points::AbstractMatrix, 
+    num_graphs = 2
source
GNNGraphs.radius_graphMethod
radius_graph(points::AbstractMatrix, 
              r::AbstractFloat; 
              graph_indicator = nothing,
              self_loops = false, 
@@ -279,7 +279,7 @@
 GNNGraph:
     num_nodes = 10
     num_edges = 20
-    num_graphs = 2

References

Section B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models

source
GNNGraphs.rand_graphMethod
rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)

Generate a random (Erdős–Rényi) GNNGraph with n nodes and m edges.

If bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Pass a random number generator as the first argument to make the generation reproducible.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
+    num_graphs = 2

References

Section B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models

source
GNNGraphs.rand_graphMethod
rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)

Generate a random (Erdős–Rényi) GNNGraph with n nodes and m edges.

If bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Pass a random number generator as the first argument to make the generation reproducible.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
 GNNGraph:
   num_nodes: 5
   num_edges: 4
@@ -297,7 +297,7 @@
 
 # Each edge has a reverse
 julia> edge_index(g)
-([1, 1, 5, 3], [5, 3, 1, 1])
source

Operators

Base.intersectFunction
intersect(g1::GNNGraph, g2::GNNGraph)

Intersect two graphs by keeping only the common edges.

source

Sampling

GNNGraphs.sample_neighborsFunction
sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)

Sample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the graph returned will then contain all the nodes in the original graph, but only the sampled edges.

The returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.

Arguments

  • g. The graph.
  • nodes. A list of node IDs to sample neighbors from.
  • K. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.
  • dir. Determines whether to sample inbound (:in) or outbound (:out) edges (default :in).
  • replace. If true, sample with replacement.
  • dropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.

Examples

julia> g = rand_graph(20, 100)
+([1, 1, 5, 3], [5, 3, 1, 1])
source

Operators

Base.intersectFunction
intersect(g1::GNNGraph, g2::GNNGraph)

Intersect two graphs by keeping only the common edges.

source
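A minimal sketch, assuming edges are compared as (source, target) pairs:

julia> using GNNGraphs

julia> g1 = GNNGraph([1, 2, 3], [2, 3, 1]);

julia> g2 = GNNGraph([1, 2], [2, 1], num_nodes = 3);

julia> intersect(g1, g2).num_edges  # only the edge 1 -> 2 is common
1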

Sampling

GNNGraphs.sample_neighborsFunction
sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)

Sample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the graph returned will then contain all the nodes in the original graph, but only the sampled edges.

The returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.

Arguments

  • g. The graph.
  • nodes. A list of node IDs to sample neighbors from.
  • K. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.
  • dir. Determines whether to sample inbound (:in) or outbound (:out) edges (default :in).
  • replace. If true, sample with replacement.
  • dropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.

Examples

julia> g = rand_graph(20, 100)
 GNNGraph:
     num_nodes = 20
     num_edges = 100
@@ -336,7 +336,7 @@
     num_nodes = 20
     num_edges = 10
     edata:
-        EID => (10,)
source
Graphs.induced_subgraphMethod
induced_subgraph(graph, nodes)

Generates a subgraph from the original graph using the provided nodes. The function includes the nodes' neighbors and creates edges between nodes that are connected in the original graph. If a node has no neighbors, an isolated node will be added to the subgraph. Returns a new GNNGraph containing the subgraph with the specified nodes and their features.

Arguments

  • graph. The original GNNGraph containing nodes, edges, and node features.
  • nodes. A vector of node indices to include in the subgraph.

Examples

julia> s = [1, 2]
+        EID => (10,)
source
Graphs.induced_subgraphMethod
induced_subgraph(graph, nodes)

Generates a subgraph from the original graph using the provided nodes. The function includes the nodes' neighbors and creates edges between nodes that are connected in the original graph. If a node has no neighbors, an isolated node will be added to the subgraph. Returns a new GNNGraph containing the subgraph with the specified nodes and their features.

Arguments

  • graph. The original GNNGraph containing nodes, edges, and node features.
  • nodes. A vector of node indices to include in the subgraph.

Examples

julia> s = [1, 2]
 2-element Vector{Int64}:
  1
  2
@@ -369,4 +369,4 @@
         y = 2-element Vector{Float32}
         x = 32×2 Matrix{Float32}
   edata:
-        e = 1-element Vector{Float32}
source
\ No newline at end of file + e = 1-element Vector{Float32}source
\ No newline at end of file diff --git a/docs/GNNGraphs.jl/dev/api/heterograph/index.html b/docs/GNNGraphs.jl/dev/api/heterograph/index.html index 8de711009..6b2643fe6 100644 --- a/docs/GNNGraphs.jl/dev/api/heterograph/index.html +++ b/docs/GNNGraphs.jl/dev/api/heterograph/index.html @@ -39,7 +39,7 @@ julia> hg.ndata[:A].x 2×10 Matrix{Float64}: 0.825882 0.0797502 0.245813 0.142281 0.231253 0.685025 0.821457 0.888838 0.571347 0.53165 - 0.631286 0.316292 0.705325 0.239211 0.533007 0.249233 0.473736 0.595475 0.0623298 0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source

Query

GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes of each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)

Return a vector containing the degrees of the nodes of the GNNHeteroGraph g, considering only edges of the given edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, the element type is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the incoming edges are used. If dir = :both we have the sum of the two. Default dir = :out.
source
Graphs.has_edgeMethod
has_edge(g::GNNHeteroGraph, edge_t, i, j)

Return true if there is an edge of type edge_t from node i to node j in g.

Examples

julia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)
+    0.631286  0.316292   0.705325  0.239211  0.533007  0.249233  0.473736  0.595475  0.0623298  0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source
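A minimal sketch on a small random heterograph; the counts follow from the construction:

julia> using GNNGraphs

julia> g = rand_heterograph((:user => 4, :movie => 6), (:user, :rate, :movie) => 8);

julia> num_node_types(g), num_edge_types(g)
(2, 1)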

Query

GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes of each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)

Return a vector containing the degrees of the nodes of the GNNHeteroGraph g, considering only edges of the given edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, the element type is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the incoming edges are used. If dir = :both we have the sum of the two. Default dir = :out.
source
Graphs.has_edgeMethod
has_edge(g::GNNHeteroGraph, edge_t, i, j)

Return true if there is an edge of type edge_t from node i to node j in g.

Examples

julia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)
 GNNHeteroGraph:
   num_nodes: Dict(:A => 2, :B => 2)
   num_edges: Dict((:A, :to, :B) => 4, (:B, :to, :A) => 0)
@@ -48,10 +48,10 @@
 true
 
 julia> has_edge(g, (:B,:to,:A), 1, 1)
-false
source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
+false
source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
 add_edges(g::GNNHeteroGraph, edge_t => (s, t); [edata, num_nodes])
-add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
-add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but also add self-loops of the specified type, edge_t. Otherwise, it returns g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, self-loops are added to each node for every edge type in the graph whose source and destination node types are the same. This iterates over all edge types present in the graph, applying the self-loop addition logic to each applicable edge type.

source

Generate

GNNGraphs.rand_bipartite_heterographMethod
rand_bipartite_heterograph([rng,] 
+add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
+add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but also add self-loops of the specified type, edge_t. Otherwise, it returns g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, self-loops are added to each node for every edge type in the graph whose source and destination node types are the same. This iterates over all edge types present in the graph, applying the self-loop addition logic to each applicable edge type.

source

Generate

GNNGraphs.rand_bipartite_heterographMethod
rand_bipartite_heterograph([rng,] 
                            (n1, n2), (m12, m21); 
                            bidirected = true, 
                            node_t = (:A, :B), 
@@ -64,8 +64,8 @@
 julia> g = rand_bipartite_heterograph((10, 15), (20, 0), node_t=(:user, :item), edge_t=:-, bidirected=false)
 GNNHeteroGraph:
   num_nodes: Dict(:item => 15, :user => 10)
-  num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)
source
GNNGraphs.rand_heterographFunction
rand_heterograph([rng,] n, m; bidirected=false, kws...)

Construct a GNNHeteroGraph with random edges and with the number of nodes and edges specified by n and m respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.

Pass a random number generator as a first argument to make the generation reproducible.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
+  num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)
source
GNNGraphs.rand_heterographFunction
rand_heterograph([rng,] n, m; bidirected=false, kws...)

Construct a GNNHeteroGraph with random edges and with the number of nodes and edges specified by n and m respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.

Pass a random number generator as a first argument to make the generation reproducible.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
                             (:user, :rate, :movie) => 30)
 GNNHeteroGraph:
   num_nodes: Dict(:movie => 20, :user => 10)
-  num_edges: Dict((:user, :rate, :movie) => 30)
source
\ No newline at end of file + num_edges: Dict((:user, :rate, :movie) => 30)source
\ No newline at end of file diff --git a/docs/GNNGraphs.jl/dev/api/samplers/index.html b/docs/GNNGraphs.jl/dev/api/samplers/index.html index 447dc188a..756379efb 100644 --- a/docs/GNNGraphs.jl/dev/api/samplers/index.html +++ b/docs/GNNGraphs.jl/dev/api/samplers/index.html @@ -5,4 +5,4 @@ julia> for mini_batch_gnn in loader batch_counter += 1 println("Batch ", batch_counter, ": Nodes in mini-batch graph: ", nv(mini_batch_gnn)) - endsource
\ No newline at end of file + endsource
\ No newline at end of file diff --git a/docs/GNNGraphs.jl/dev/api/temporalgraph/index.html b/docs/GNNGraphs.jl/dev/api/temporalgraph/index.html index 6f72c2edb..ce93b7ea2 100644 --- a/docs/GNNGraphs.jl/dev/api/temporalgraph/index.html +++ b/docs/GNNGraphs.jl/dev/api/temporalgraph/index.html @@ -16,7 +16,7 @@ num_edges: [20, 20, 20, 20, 20] num_snapshots: 5 tgdata: - x = 4-element Vector{Float64}source
GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GNNGraphs
+        x = 4-element Vector{Float64}
source
GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GNNGraphs
 
 julia> snapshots = [rand_graph(10, 20) for i in 1:5];
 
@@ -30,7 +30,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10, 10]
   num_edges: [20, 20, 16, 20, 20, 20]
-  num_snapshots: 6
source
GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GNNGraphs
+  num_snapshots: 6
source
GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GNNGraphs
 
 julia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];
 
@@ -44,7 +44,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10]
   num_edges: [20, 22]
-  num_snapshots: 2
source

Random Generators

GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
+  num_snapshots: 2
source

Random Generators

GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
                            number_snapshots::Int,
                            speed::AbstractFloat,
                            r::AbstractFloat;
@@ -56,7 +56,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [90, 90, 90, 90, 90]
-  num_snapshots: 5
source
GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
+  num_snapshots: 5
source
GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
                                number_snapshots::Int;
                                α::Real,
                                R::Real,
@@ -69,4 +69,4 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [44, 46, 48, 42, 38]
-  num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source
\ No newline at end of file + num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source
\ No newline at end of file diff --git a/docs/GNNGraphs.jl/dev/guides/datasets/index.html b/docs/GNNGraphs.jl/dev/guides/datasets/index.html index 4a444fc03..af0f62139 100644 --- a/docs/GNNGraphs.jl/dev/guides/datasets/index.html +++ b/docs/GNNGraphs.jl/dev/guides/datasets/index.html @@ -1 +1 @@ -Datasets · GNNGraphs.jl

Datasets

GNNGraphs.jl doesn't come with its own datasets, but leverages those available in the Julia (and non-Julia) ecosystem. In particular, the examples in the GraphNeuralNetworks.jl repository make use of the MLDatasets.jl package. There you will find common graph datasets such as Cora, PubMed, Citeseer, TUDataset and many others. For graphs with static structures and temporal features, datasets such as METRLA, PEMSBAY, ChickenPox, and WindMillEnergy are available. For graphs featuring both temporal structures and temporal features, the TemporalBrains dataset is suitable.

GraphNeuralNetworks.jl provides the mldataset2gnngraph method for interfacing with MLDatasets.jl.

\ No newline at end of file +Datasets · GNNGraphs.jl

Datasets

GNNGraphs.jl doesn't come with its own datasets, but leverages those available in the Julia (and non-Julia) ecosystem. In particular, the examples in the GraphNeuralNetworks.jl repository make use of the MLDatasets.jl package. There you will find common graph datasets such as Cora, PubMed, Citeseer, TUDataset and many others. For graphs with static structures and temporal features, datasets such as METRLA, PEMSBAY, ChickenPox, and WindMillEnergy are available. For graphs featuring both temporal structures and temporal features, the TemporalBrains dataset is suitable.

GraphNeuralNetworks.jl provides the mldataset2gnngraph method for interfacing with MLDatasets.jl.

\ No newline at end of file diff --git a/docs/GNNGraphs.jl/dev/guides/gnngraph/index.html b/docs/GNNGraphs.jl/dev/guides/gnngraph/index.html index 94884a243..c5f04ab66 100644 --- a/docs/GNNGraphs.jl/dev/guides/gnngraph/index.html +++ b/docs/GNNGraphs.jl/dev/guides/gnngraph/index.html @@ -165,4 +165,4 @@ julia> GNNGraph(gd) GNNGraph: num_nodes: 10 - num_edges: 20 \ No newline at end of file + num_edges: 20 \ No newline at end of file diff --git a/docs/GNNGraphs.jl/dev/guides/heterograph/index.html b/docs/GNNGraphs.jl/dev/guides/heterograph/index.html index f488d1f2a..239cd5609 100644 --- a/docs/GNNGraphs.jl/dev/guides/heterograph/index.html +++ b/docs/GNNGraphs.jl/dev/guides/heterograph/index.html @@ -79,4 +79,4 @@ @assert g.num_nodes[:A] == 80 @assert size(g.ndata[:A].x) == (3, 80) # ... -end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.

\ No newline at end of file +end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.

\ No newline at end of file diff --git a/docs/GNNGraphs.jl/dev/guides/temporalgraph/index.html b/docs/GNNGraphs.jl/dev/guides/temporalgraph/index.html index 2abc843ed..2ba2919a1 100644 --- a/docs/GNNGraphs.jl/dev/guides/temporalgraph/index.html +++ b/docs/GNNGraphs.jl/dev/guides/temporalgraph/index.html @@ -86,4 +86,4 @@ julia> [ds.x for ds in tg.ndata]; # vector containing the x feature of each snapshot julia> [g.x for g in tg.snapshots]; # same vector as above, now accessing - # the x feature directly from the snapshots \ No newline at end of file + # the x feature directly from the snapshots \ No newline at end of file diff --git a/docs/GNNGraphs.jl/dev/index.html b/docs/GNNGraphs.jl/dev/index.html index a6a8e1c36..fc087886e 100644 --- a/docs/GNNGraphs.jl/dev/index.html +++ b/docs/GNNGraphs.jl/dev/index.html @@ -1 +1 @@ -Home · GNNGraphs.jl

GNNGraphs.jl

GNNGraphs.jl is a package that provides graph data structures and helper functions specifically designed for working with graph neural networks. This package allows storing not only the graph structure, but also features associated with nodes, edges, and the graph itself. It is the core foundation for the GNNlib.jl, GraphNeuralNetworks.jl, and GNNLux.jl packages.

It supports three types of graphs:

  • Static graph is the basic graph type represented by GNNGraph, where each node and edge can have associated features. This type of graph is used in typical graph neural network applications, where neural networks operate on both the structure of the graph and the features stored in it. It can be used to represent a graph where the structure does not change over time, but the features of the nodes and edges can change over time.

  • Heterogeneous graph is a graph that supports multiple types of nodes and edges, and is represented by GNNHeteroGraph. Each type can have its own properties and relationships. This is useful in scenarios with different entities and interactions, such as in citation graphs or multi-relational data.

  • Temporal graph is a graph that changes over time, and is represented by TemporalSnapshotsGNNGraph. Edges and features can change dynamically. This type of graph is useful for applications that involve tracking time-dependent relationships, such as social networks.

This package depends on the package Graphs.jl.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNGraphs
\ No newline at end of file +Home · GNNGraphs.jl

GNNGraphs.jl

GNNGraphs.jl is a package that provides graph data structures and helper functions specifically designed for working with graph neural networks. This package allows storing not only the graph structure, but also features associated with nodes, edges, and the graph itself. It is the core foundation for the GNNlib.jl, GraphNeuralNetworks.jl, and GNNLux.jl packages.

It supports three types of graphs:

  • Static graph is the basic graph type represented by GNNGraph, where each node and edge can have associated features. This type of graph is used in typical graph neural network applications, where neural networks operate on both the structure of the graph and the features stored in it. It can be used to represent a graph where the structure does not change over time, but the features of the nodes and edges can change over time.

  • Heterogeneous graph is a graph that supports multiple types of nodes and edges, and is represented by GNNHeteroGraph. Each type can have its own properties and relationships. This is useful in scenarios with different entities and interactions, such as in citation graphs or multi-relational data.

  • Temporal graph is a graph that changes over time, and is represented by TemporalSnapshotsGNNGraph. Edges and features can change dynamically. This type of graph is useful for applications that involve tracking time-dependent relationships, such as social networks.

This package depends on the package Graphs.jl.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNGraphs
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/.documenter-siteinfo.json b/docs/GNNLux.jl/dev/.documenter-siteinfo.json index dfd3a5a8b..65e22cd25 100644 --- a/docs/GNNLux.jl/dev/.documenter-siteinfo.json +++ b/docs/GNNLux.jl/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-09T09:32:12","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-09T10:23:59","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNGraphs/api/datasets/index.html b/docs/GNNLux.jl/dev/GNNGraphs/api/datasets/index.html index f5b1e1887..4d2e972f0 100644 --- a/docs/GNNLux.jl/dev/GNNGraphs/api/datasets/index.html +++ b/docs/GNNLux.jl/dev/GNNGraphs/api/datasets/index.html @@ -9,4 +9,4 @@ targets = 2708-element Vector{Int64} test_mask = 2708-element BitVector features = 1433×2708 Matrix{Float32} - train_mask = 2708-element BitVectorsource
\ No newline at end of file + train_mask = 2708-element BitVectorsource
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNGraphs/api/gnngraph/index.html b/docs/GNNLux.jl/dev/GNNGraphs/api/gnngraph/index.html index d36bfb24a..b5706c0d1 100644 --- a/docs/GNNLux.jl/dev/GNNGraphs/api/gnngraph/index.html +++ b/docs/GNNLux.jl/dev/GNNGraphs/api/gnngraph/index.html @@ -32,7 +32,7 @@ # Collect edges' source and target nodes. # Both source and target are vectors of length num_edges -source, target = edge_index(g)

A GNNGraph can be sent to the GPU, for example by using Flux.jl's gpu function or MLDataDevices.jl's utilities.

source
Base.copyFunction
copy(g::GNNGraph; deep=false)

Create a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.

source

DataStore

GNNGraphs.DataStoreType
DataStore([n, data])
+source, target = edge_index(g)

A GNNGraph can be sent to the GPU, for example by using Flux.jl's gpu function or MLDataDevices.jl's utilities.

source
Base.copyFunction
copy(g::GNNGraph; deep=false)

Create a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.

source

DataStore

GNNGraphs.DataStoreType
DataStore([n, data])
 DataStore([n,] k1 = x1, k2 = x2, ...)

A container for feature arrays. The optional argument n enforces that numobs(x) == n for each array contained in the datastore.

At construction time, the data can be provided as any iterables of pairs of symbols and arrays or as keyword arguments:

julia> ds = DataStore(3, x = rand(Float32, 2, 3), y = rand(Float32, 3))
 DataStore(3) with 2 elements:
   y = 3-element Vector{Float32}
@@ -62,8 +62,8 @@
 3-element Vector{Float32}:
  1.0
  1.0
- 1.0
source

Query

GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
-adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

Denoting the returned adjacency list by a: if dir=:out, then a[i] will contain the neighbors of node i through outgoing edges; if dir=:in, it will contain the neighbors through incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g.

s, t = edge_index(g)
source
GNNGraphs.get_graph_typeMethod
get_graph_type(g::GNNGraph)

Return the underlying representation for the graph g as a symbol.

Possible values are:

  • :coo: Coordinate list representation. The graph is stored as a tuple of vectors (s, t, w), where s and t are the source and target nodes of the edges, and w is the edge weights.
  • :sparse: Sparse matrix representation. The graph is stored as a sparse matrix representing the weighted adjacency matrix.
  • :dense: Dense matrix representation. The graph is stored as a dense matrix representing the weighted adjacency matrix.

The default representation for graph constructors in GNNGraphs.jl is :coo. The underlying representation can be accessed through the g.graph field.

See also GNNGraph.

Examples

The default representation for graph constructors in GNNGraphs.jl is :coo.

julia> g = rand_graph(5, 10)
+ 1.0
source

Query

GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
+adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

Denoting the returned adjacency list by a: if dir=:out, then a[i] will contain the neighbors of node i through outgoing edges; if dir=:in, it will contain the neighbors through incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
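A minimal illustrative sketch on a deterministic graph (not part of the original docstring; output shown indicatively):

julia> g = GNNGraph([1, 2, 3], [2, 3, 1]);  # directed triangle 1 → 2 → 3 → 1

julia> adjacency_list(g)         # outgoing neighbors by default
3-element Vector{Vector{Int64}}:
 [2]
 [3]
 [1]

julia> adjacency_list(g, dir = :in)
3-element Vector{Vector{Int64}}:
 [3]
 [1]
 [2]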
GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g.

s, t = edge_index(g)
source
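For illustration, a small deterministic sketch (assuming GNNGraphs is loaded; output indicative):

julia> g = GNNGraph([1, 2, 3], [2, 3, 1]);

julia> s, t = edge_index(g);

julia> s, t
([1, 2, 3], [2, 3, 1])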
GNNGraphs.get_graph_typeMethod
get_graph_type(g::GNNGraph)

Return the underlying representation for the graph g as a symbol.

Possible values are:

  • :coo: Coordinate list representation. The graph is stored as a tuple of vectors (s, t, w), where s and t are the source and target nodes of the edges, and w is the edge weights.
  • :sparse: Sparse matrix representation. The graph is stored as a sparse matrix representing the weighted adjacency matrix.
  • :dense: Dense matrix representation. The graph is stored as a dense matrix representing the weighted adjacency matrix.

The default representation for graph constructors in GNNGraphs.jl is :coo. The underlying representation can be accessed through the g.graph field.

See also GNNGraph.

Examples

The default representation for graph constructors in GNNGraphs.jl is :coo.

julia> g = rand_graph(5, 10)
 GNNGraph:
   num_nodes: 5
   num_edges: 10
@@ -88,7 +88,7 @@
 julia> gcoo = GNNGraph(g, graph_type=:coo);
 
 julia> gcoo.graph
-([2, 3, 5], [1, 2, 4], [1, 1, 1])
source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
GNNGraphs.has_multi_edgesMethod
has_multi_edges(g::GNNGraph)

Return true if g has any multiple edges.

source
GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$ where $A$ is the adjacency matrix of the graph g.

source
GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

User may specify the eltype T of the returned matrix.

If weighted=true, A will contain the edge weights if any; otherwise the elements of A will be either 0 or 1.

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, it is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
Graphs.has_self_loopsMethod
has_self_loops(g::GNNGraph)

Return true if g has any self loops.

source
Graphs.inneighborsMethod
inneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through incoming edges.

See also neighbors and outneighbors.

source
Graphs.outneighborsMethod
outneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through outgoing edges.

See also neighbors and inneighbors.

source
Graphs.neighborsMethod
neighbors(g::GNNGraph, i::Integer; dir=:out)

Return the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.

See also outneighbors, inneighbors.

source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])
+([2, 3, 5], [1, 2, 4], [1, 1, 1])
source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
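A minimal sketch on a batched graph (illustrative only; output shown indicatively):

julia> using MLUtils

julia> g = MLUtils.batch([rand_graph(3, 4), rand_graph(2, 2)]);

julia> graph_indicator(g)   # nodes 1–3 belong to graph 1, nodes 4–5 to graph 2
5-element Vector{Int64}:
 1
 1
 1
 2
 2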
GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
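An illustrative sketch (output indicative):

julia> g = GNNGraph([1, 2], [2, 1], num_nodes = 3);  # node 3 has no incident edges

julia> has_isolated_nodes(g)
true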
GNNGraphs.has_multi_edgesMethod
has_multi_edges(g::GNNGraph)

Return true if g has any multiple edges.

source
GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
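A minimal sketch (illustrative):

julia> is_bidirected(GNNGraph([1, 2], [2, 1]))   # each edge has its reverse
true

julia> is_bidirected(GNNGraph([1], [2]))
false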
GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$ where $A$ is the adjacency matrix of the graph g.

source
GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

User may specify the eltype T of the returned matrix.

If weighted=true, A will contain the edge weights if any; otherwise the elements of A will be either 0 or 1.

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, it is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
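For illustration, a deterministic sketch (output indicative):

julia> g = GNNGraph([1, 1, 2], [2, 3, 3]);

julia> degree(g)            # out-degrees by default
3-element Vector{Int64}:
 2
 1
 0

julia> degree(g; dir = :in)
3-element Vector{Int64}:
 0
 1
 2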
Graphs.has_self_loopsMethod
has_self_loops(g::GNNGraph)

Return true if g has any self loops.

source
Graphs.inneighborsMethod
inneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through incoming edges.

See also neighbors and outneighbors.

source
Graphs.outneighborsMethod
outneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through outgoing edges.

See also neighbors and inneighbors.

source
Graphs.neighborsMethod
neighbors(g::GNNGraph, i::Integer; dir=:out)

Return the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.

See also outneighbors, inneighbors.

source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])
 add_edges(g::GNNGraph, (s, t); [edata])
 add_edges(g::GNNGraph, (s, t, w); [edata])

Add to graph g the edges with source nodes s and target nodes t. Optionally, pass the edge weight w and the features edata for the new edges. Returns a new graph sharing part of the underlying data with g.

If the s or t contain nodes that are not already present in the graph, they are added to the graph as well.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
@@ -110,9 +110,9 @@
 julia> add_edges(g, [1,2], [2,3])
 GNNGraph:
   num_nodes: 3
-  num_edges: 2
source
GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g but also adding edges connecting the nodes to themselves.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
+  num_edges: 2
source
GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g but also adding edges connecting the nodes to themselves.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
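A minimal sketch (illustrative; output indicative):

julia> g = GNNGraph([1, 2], [2, 1]);

julia> add_self_loops(g)    # one extra edge per node
GNNGraph:
  num_nodes: 2
  num_edges: 4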
GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
                 num_neg_edges = g.num_edges, 
-                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the original graph.

See also is_bidirected.

source
GNNGraphs.perturb_edgesMethod
perturb_edges([rng], g::GNNGraph, perturb_ratio)

Return a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops.

The function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.

Arguments

  • g::GNNGraph: The graph to be perturbed.
  • perturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.
  • rng: An optional random number generator to ensure reproducible results.

Examples

julia> g = GNNGraph((s, t, w))
+                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the original graph.

See also is_bidirected.

source
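An illustrative sketch, checking that the sampled edges are indeed non-edges of g (intersect is documented under Operators below; output indicative):

julia> g = rand_graph(10, 20);   # bidirected by default

julia> g_neg = negative_sample(g, num_neg_edges = 6);

julia> intersect(g, g_neg).num_edges   # no sampled edge is present in g
0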
GNNGraphs.perturb_edgesMethod
perturb_edges([rng], g::GNNGraph, perturb_ratio)

Return a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops.

The function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.

Arguments

  • g::GNNGraph: The graph to be perturbed.
  • perturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.
  • rng: An optional random number generator to ensure reproducible results.

Examples

julia> g = GNNGraph((s, t, w))
 GNNGraph:
   num_nodes: 4
   num_edges: 5
@@ -120,7 +120,7 @@
 julia> perturbed_g = perturb_edges(g, 0.2)
 GNNGraph:
   num_nodes: 4
-  num_edges: 6
source
GNNGraphs.ppr_diffusionMethod
ppr_diffusion(g::GNNGraph{<:COO_T}, alpha = 0.85f0) -> GNNGraph

Calculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. Reference: the paper The PageRank Citation Ranking: Bringing Order to the Web.

The function performs the following steps:

  1. Constructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha) and I the identity matrix.
  2. Normalizes A to ensure each column sums to 1, representing transition probabilities.
  3. Applies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.
  4. Updates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.

Arguments

  • g::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.
  • alpha::Float32: The damping factor used in the PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.

Returns

  • A new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.
source
GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true, the split ensures that an edge and its reverse end up in the same partition. This option is supported only for bidirected graphs with no self-loops and no multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
GNNGraphs.random_walk_peMethod
random_walk_pe(g, walk_length)

Return the random walk positional encoding of the given graph g, as introduced in the paper Graph Neural Networks with Learnable Structural and Positional Representations. The encoding for a walk of length walk_length is returned as a matrix of size (walk_length, g.num_nodes).

source
GNNGraphs.remove_edgesMethod
remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})
+  num_edges: 6
source
GNNGraphs.ppr_diffusionMethod
ppr_diffusion(g::GNNGraph{<:COO_T}, alpha = 0.85f0) -> GNNGraph

Calculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. Reference: the paper The PageRank Citation Ranking: Bringing Order to the Web.

The function performs the following steps:

  1. Constructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha) and I the identity matrix.
  2. Normalizes A to ensure each column sums to 1, representing transition probabilities.
  3. Applies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.
  4. Updates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.

Arguments

  • g::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.
  • alpha::Float32: The damping factor used in the PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.

Returns

  • A new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.
source
GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true, the split ensures that an edge and its reverse end up in the same partition. This option is supported only for bidirected graphs with no self-loops and no multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
GNNGraphs.random_walk_peMethod
random_walk_pe(g, walk_length)

Return the random walk positional encoding of the given graph g, as introduced in the paper Graph Neural Networks with Learnable Structural and Positional Representations. The encoding for a walk of length walk_length is returned as a matrix of size (walk_length, g.num_nodes).

source
GNNGraphs.remove_edgesMethod
remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})
 remove_edges(g::GNNGraph, p=0.5)

Remove specified edges from a GNNGraph, either by specifying edge indices or by randomly removing edges with a given probability.

Arguments

  • g: The input graph from which edges will be removed.
  • edges_to_remove: Vector of edge indices to be removed. This argument is only required for the first method.
  • p: Probability of removing each edge. This argument is only required for the second method and defaults to 0.5.

Returns

A new GNNGraph with the specified edges removed.

Example

julia> using GNNGraphs
 
 # Construct a GNNGraph
@@ -143,7 +143,7 @@
 julia> g_new
 GNNGraph:
   num_nodes: 3
-  num_edges: 2
source
GNNGraphs.remove_multi_edgesMethod
remove_multi_edges(g::GNNGraph; aggr=+)

Remove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, which can take the values +, min, max, or mean.

See also remove_self_loops, has_multi_edges, and to_bidirected.

source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, p)

Returns a new graph obtained by dropping each node of g independently with probability p.

Examples

julia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])
+  num_edges: 2
source
GNNGraphs.remove_multi_edgesMethod
remove_multi_edges(g::GNNGraph; aggr=+)

Remove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, which can take the values +, min, max, or mean.

See also remove_self_loops, has_multi_edges, and to_bidirected.

source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, p)

Returns a new graph obtained by dropping each node of g independently with probability p.

Examples

julia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])
 GNNGraph:
   num_nodes: 4
   num_edges: 6
@@ -151,7 +151,7 @@
 julia> g_new = remove_nodes(g, 0.5)
 GNNGraph:
   num_nodes: 2
-  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)

Remove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.

Arguments

  • g: The input graph from which nodes (and their edges) will be removed.
  • nodes_to_remove: Vector of node indices to be removed.

Returns

A new GNNGraph with the specified nodes and all edges associated with these nodes removed.

Example

using GNNGraphs
+  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)

Remove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.

Arguments

  • g: The input graph from which nodes (and their edges) will be removed.
  • nodes_to_remove: Vector of node indices to be removed.

Returns

A new GNNGraph with the specified nodes and all edges associated with these nodes removed.

Example

using GNNGraphs
 
 g = GNNGraph([1, 1, 2, 2, 3], [2, 3, 1, 3, 1])
 
@@ -159,7 +159,7 @@
 g_new = remove_nodes(g, [2, 3])
 
 # g_new now does not contain nodes 2 and 3, and any edges that were connected to these nodes.
-println(g_new)
source
GNNGraphs.remove_self_loopsMethod
remove_self_loops(g::GNNGraph)

Return a graph constructed from g where self-loops (edges from a node to itself) are removed.

See also add_self_loops and remove_multi_edges.

source
GNNGraphs.set_edge_weightMethod
set_edge_weight(g::GNNGraph, w::AbstractVector)

Set w as edge weights in the returned graph.

source
GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
+println(g_new)
source
GNNGraphs.remove_self_loopsMethod
remove_self_loops(g::GNNGraph)

Return a graph constructed from g where self-loops (edges from a node to itself) are removed.

See also add_self_loops and remove_multi_edges.

source
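A minimal sketch (illustrative; output indicative):

julia> g = GNNGraph([1, 1, 2], [1, 2, 2]);  # self-loops on nodes 1 and 2

julia> remove_self_loops(g)
GNNGraph:
  num_nodes: 2
  num_edges: 1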
GNNGraphs.set_edge_weightMethod
set_edge_weight(g::GNNGraph, w::AbstractVector)

Set w as edge weights in the returned graph.

source
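For illustration (get_edge_weight is used here only to inspect the result; output indicative):

julia> g = GNNGraph([1, 2], [2, 3]);

julia> gw = set_edge_weight(g, [0.5, 2.0]);

julia> get_edge_weight(gw)
2-element Vector{Float64}:
 0.5
 2.0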
GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
 julia> w = [1.0, 2.0, 3.0, 4.0, 5.0];
 
@@ -200,7 +200,7 @@
  20.0
  35.0
  35.0
- 50.0
source
GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each multiple edge between two nodes in g, keeps only one edge in a single direction.

source
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also MLUtils.unbatch.

Examples

julia> g1 = rand_graph(4, 4, ndata=ones(Float32, 3, 4))
+ 50.0
source
GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each multiple edge between two nodes in g, keeps only one edge in a single direction.

source
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also MLUtils.unbatch.

Examples

julia> g1 = rand_graph(4, 4, ndata=ones(Float32, 3, 4))
 GNNGraph:
   num_nodes: 4
   num_edges: 4
@@ -226,7 +226,7 @@
 3×9 Matrix{Float32}:
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
- 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.

See also MLUtils.batch and getgraph.

Examples

julia> using MLUtils
+ 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.

See also MLUtils.batch and getgraph.

Examples

julia> using MLUtils
 
 julia> gbatched = MLUtils.batch([rand_graph(5, 6), rand_graph(10, 8), rand_graph(4,2)])
 GNNGraph:
@@ -238,8 +238,8 @@
 3-element Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}:
  GNNGraph(5, 6) with no data
  GNNGraph(10, 8) with no data
- GNNGraph(4, 2) with no data
source
SparseArrays.blockdiagMethod
blockdiag(xs::GNNGraph...)

Equivalent to MLUtils.batch.

source

Utils

GNNGraphs.sort_edge_indexFunction
sort_edge_index(ei::Tuple) -> u', v'
-sort_edge_index(u, v) -> u', v'

Return a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi.

source
GNNGraphs.color_refinementFunction
color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters

The color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.

At each iteration, the algorithm computes for each node a hash of its current color together with the sorted list of the colors of its neighbors. This hash is used to determine if the coloring has changed.

$x_i' = \mathrm{hashmap}\big(\big(x_i,\; \mathrm{sort}([x_j \text{ for } j \in N(i)])\big)\big)$

This algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.

Arguments

  • g::GNNGraph: The graph to color.
  • x0::AbstractVector{<:Integer}: The initial coloring. If not provided, all nodes are colored with 1.

Returns

  • x::AbstractVector{<:Integer}: The final coloring.
  • num_colors::Int: The number of colors used.
  • niters::Int: The number of iterations until convergence.
source

Generate

GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
+ GNNGraph(4, 2) with no data
source
SparseArrays.blockdiagMethod
blockdiag(xs::GNNGraph...)

Equivalent to MLUtils.batch.

source

Utils

GNNGraphs.sort_edge_indexFunction
sort_edge_index(ei::Tuple) -> u', v'
+sort_edge_index(u, v) -> u', v'

Return a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi.

source
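A worked sketch of the lexicographic sort (illustrative):

julia> u, v = [3, 1, 2, 1], [1, 3, 2, 2];

julia> sort_edge_index(u, v)   # pairs sorted as (1,2), (1,3), (2,2), (3,1)
([1, 1, 2, 3], [2, 3, 2, 1])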
GNNGraphs.color_refinementFunction
color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters

The color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.

At each iteration, the algorithm computes for each node a hash of its current color together with the sorted list of the colors of its neighbors. This hash is used to determine if the coloring has changed.

$x_i' = \mathrm{hashmap}\big(\big(x_i,\; \mathrm{sort}([x_j \text{ for } j \in N(i)])\big)\big)$

This algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.

Arguments

  • g::GNNGraph: The graph to color.
  • x0::AbstractVector{<:Integer}: The initial coloring. If not provided, all nodes are colored with 1.

Returns

  • x::AbstractVector{<:Integer}: The final coloring.
  • num_colors::Int: The number of colors used.
  • niters::Int: The number of iterations until convergence.
source
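An illustrative sketch on a path graph, where the two endpoints end up with a different color than the two interior nodes (output indicative):

julia> g = to_bidirected(GNNGraph([1, 2, 3], [2, 3, 4]));  # path 1-2-3-4

julia> x, num_colors, niters = color_refinement(g);

julia> num_colors
2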

Generate

GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
           k::Int; 
           graph_indicator = nothing,
           self_loops = false, 
@@ -259,7 +259,7 @@
 GNNGraph:
     num_nodes = 10
     num_edges = 30
-    num_graphs = 2
source
GNNGraphs.radius_graphMethod
radius_graph(points::AbstractMatrix, 
+    num_graphs = 2
source
GNNGraphs.radius_graphMethod
radius_graph(points::AbstractMatrix, 
              r::AbstractFloat; 
              graph_indicator = nothing,
              self_loops = false, 
@@ -279,7 +279,7 @@
 GNNGraph:
     num_nodes = 10
     num_edges = 20
-    num_graphs = 2

References

Section B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models

source
GNNGraphs.rand_graphMethod
rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)

Generate a random (Erdős–Rényi) GNNGraph with n nodes and m edges.

If bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Pass a random number generator as the first argument to make the generation reproducible.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
+    num_graphs = 2

References

Section B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models

source
GNNGraphs.rand_graphMethod
rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)

Generate a random (Erdős–Rényi) GNNGraph with n nodes and m edges.

If bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Pass a random number generator as the first argument to make the generation reproducible.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
 GNNGraph:
   num_nodes: 5
   num_edges: 4
@@ -297,7 +297,7 @@
 
 # Each edge has a reverse
 julia> edge_index(g)
-([1, 1, 5, 3], [5, 3, 1, 1])
source

Operators

Base.intersectFunction
intersect(g1::GNNGraph, g2::GNNGraph)

Intersect two graphs by keeping only the common edges.

source

Sampling

GNNGraphs.sample_neighborsFunction
sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)

Sample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the returned graph will contain all the nodes of the original graph, but only the sampled edges.

The returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.

Arguments

  • g. The graph.
  • nodes. A list of node IDs to sample neighbors from.
  • K. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.
  • dir. Determines whether to sample inbound (:in) or outbound (:out) edges (default :in).
  • replace. If true, sample with replacement.
  • dropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.

Examples

julia> g = rand_graph(20, 100)
+([1, 1, 5, 3], [5, 3, 1, 1])
source

Operators

Base.intersectFunction
intersect(g1::GNNGraph, g2::GNNGraph)

Intersect two graphs by keeping only the common edges.

source

Sampling

GNNGraphs.sample_neighborsFunction
sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)

Sample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the returned graph will contain all the nodes of the original graph, but only the sampled edges.

The returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.

Arguments

  • g. The graph.
  • nodes. A list of node IDs to sample neighbors from.
  • K. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.
  • dir. Determines whether to sample inbound (:in) or outbound (:out) edges (default :in).
  • replace. If true, sample with replacement.
  • dropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.

Examples

julia> g = rand_graph(20, 100)
 GNNGraph:
     num_nodes = 20
     num_edges = 100
@@ -336,7 +336,7 @@
     num_nodes = 20
     num_edges = 10
     edata:
-        EID => (10,)
source
Graphs.induced_subgraphMethod
induced_subgraph(graph, nodes)

Generates a subgraph from the original graph using the provided nodes. The function includes the nodes' neighbors and creates edges between nodes that are connected in the original graph. If a node has no neighbors, an isolated node will be added to the subgraph. Returns a new GNNGraph containing the subgraph with the specified nodes and their features.

Arguments

  • graph. The original GNNGraph containing nodes, edges, and node features.
  • nodes. A vector of node indices to include in the subgraph.

Examples

julia> s = [1, 2]
+        EID => (10,)
source
Graphs.induced_subgraphMethod
induced_subgraph(graph, nodes)

Generates a subgraph from the original graph using the provided nodes. The function includes the nodes' neighbors and creates edges between nodes that are connected in the original graph. If a node has no neighbors, an isolated node will be added to the subgraph. Returns a new GNNGraph containing the subgraph with the specified nodes and their features.

Arguments

  • graph. The original GNNGraph containing nodes, edges, and node features.
  • nodes. A vector of node indices to include in the subgraph.

Examples

julia> s = [1, 2]
 2-element Vector{Int64}:
  1
  2
@@ -369,4 +369,4 @@
         y = 2-element Vector{Float32}
         x = 32×2 Matrix{Float32}
   edata:
-        e = 1-element Vector{Float32}
source
\ No newline at end of file + e = 1-element Vector{Float32}source
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNGraphs/api/heterograph/index.html b/docs/GNNLux.jl/dev/GNNGraphs/api/heterograph/index.html index 5aaaad6b9..bc51cb926 100644 --- a/docs/GNNLux.jl/dev/GNNGraphs/api/heterograph/index.html +++ b/docs/GNNLux.jl/dev/GNNGraphs/api/heterograph/index.html @@ -39,7 +39,7 @@ julia> hg.ndata[:A].x 2×10 Matrix{Float64}: 0.825882 0.0797502 0.245813 0.142281 0.231253 0.685025 0.821457 0.888838 0.571347 0.53165 - 0.631286 0.316292 0.705325 0.239211 0.533007 0.249233 0.473736 0.595475 0.0623298 0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source

Query

GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)

Return a vector containing the degrees of the nodes of the GNNHeteroGraph g for the given edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, it is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :out.
source
Graphs.has_edgeMethod
has_edge(g::GNNHeteroGraph, edge_t, i, j)

Return true if there is an edge of type edge_t from node i to node j in g.

Examples

julia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)
+    0.631286  0.316292   0.705325  0.239211  0.533007  0.249233  0.473736  0.595475  0.0623298  0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source
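A minimal sketch for both counting functions (illustrative; output indicative):

julia> g = rand_bipartite_heterograph((4, 6), (8, 8));

julia> num_node_types(g)
2

julia> num_edge_types(g)   # (:A, :to, :B) and (:B, :to, :A)
2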

Query

GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)

Return a vector containing the degrees of the nodes of the GNNHeteroGraph g for the given edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, it is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :out.
source
Graphs.has_edgeMethod
has_edge(g::GNNHeteroGraph, edge_t, i, j)

Return true if there is an edge of type edge_t from node i to node j in g.

Examples

julia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)
 GNNHeteroGraph:
   num_nodes: Dict(:A => 2, :B => 2)
   num_edges: Dict((:A, :to, :B) => 4, (:B, :to, :A) => 0)
@@ -48,10 +48,10 @@
 true
 
 julia> has_edge(g, (:B,:to,:A), 1, 1)
-false
source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
+false
source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
 add_edges(g::GNNHeteroGraph, edge_t => (s, t); [edata, num_nodes])
-add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
-add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but with self-loops of the specified type edge_t added. Otherwise, return g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, a self-loop is added to each node for every edge type in the graph whose source and destination node types coincide: the function iterates over all edge types present in the graph and applies the self-loop addition to each applicable one.

source

Generate

GNNGraphs.rand_bipartite_heterographMethod
rand_bipartite_heterograph([rng,] 
+add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add edges of type edge_t to the heterograph g, with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
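A minimal sketch of the pair syntax (the node and edge types here are illustrative):

julia> g = GNNHeteroGraph((:user, :rate, :movie) => ([1, 1, 2], [1, 3, 1]));

julia> g = add_edges(g, (:user, :like, :movie) => ([1, 2], [2, 3]));  # adds a new edge type

julia> g.num_edges[(:user, :like, :movie)]
2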
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
+add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but with self-loops of the specified type edge_t added. Otherwise, return g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, a self-loop is added to each node for every edge type in the graph whose source and destination node types coincide: the function iterates over all edge types present in the graph and applies the self-loop addition to each applicable one.

source
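A minimal sketch, assuming one self-loop is added per node of the involved type (node counts are inferred from the maximum node id, so :user has 3 nodes here):

julia> g = GNNHeteroGraph((:user, :follows, :user) => ([1, 2], [2, 3]));

julia> g = add_self_loops(g, (:user, :follows, :user));

julia> g.num_edges[(:user, :follows, :user)]  # 2 original edges + 3 self-loops
5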

Generate

GNNGraphs.rand_bipartite_heterographMethod
rand_bipartite_heterograph([rng,] 
                            (n1, n2), (m12, m21); 
                            bidirected = true, 
                            node_t = (:A, :B), 
@@ -64,8 +64,8 @@
 julia> g = rand_bipartite_heterograph((10, 15), (20, 0), node_t=(:user, :item), edge_t=:-, bidirected=false)
 GNNHeteroGraph:
   num_nodes: Dict(:item => 15, :user => 10)
-  num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)
source
GNNGraphs.rand_heterographFunction
rand_heterograph([rng,] n, m; bidirected=false, kws...)

Construct a GNNHeteroGraph with random edges and with the number of nodes and edges specified by n and m respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.

Pass a random number generator as a first argument to make the generation reproducible.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
+  num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)
source
GNNGraphs.rand_heterographFunction
rand_heterograph([rng,] n, m; bidirected=false, kws...)

Construct a GNNHeteroGraph with random edges and with the number of nodes and edges specified by n and m respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.

Pass a random number generator as a first argument to make the generation reproducible.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
                             (:user, :rate, :movie) => 30)
 GNNHeteroGraph:
   num_nodes: Dict(:movie => 20, :user => 10)
-  num_edges: Dict((:user, :rate, :movie) => 30)
source
\ No newline at end of file + num_edges: Dict((:user, :rate, :movie) => 30)source
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNGraphs/api/samplers/index.html b/docs/GNNLux.jl/dev/GNNGraphs/api/samplers/index.html index 522a15ff6..559737c01 100644 --- a/docs/GNNLux.jl/dev/GNNGraphs/api/samplers/index.html +++ b/docs/GNNLux.jl/dev/GNNGraphs/api/samplers/index.html @@ -5,4 +5,4 @@ julia> for mini_batch_gnn in loader batch_counter += 1 println("Batch ", batch_counter, ": Nodes in mini-batch graph: ", nv(mini_batch_gnn)) - endsource
\ No newline at end of file + endsource
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNGraphs/api/temporalgraph/index.html b/docs/GNNLux.jl/dev/GNNGraphs/api/temporalgraph/index.html index c72b17d91..fecc4e1c0 100644 --- a/docs/GNNLux.jl/dev/GNNGraphs/api/temporalgraph/index.html +++ b/docs/GNNLux.jl/dev/GNNGraphs/api/temporalgraph/index.html @@ -16,7 +16,7 @@ num_edges: [20, 20, 20, 20, 20] num_snapshots: 5 tgdata: - x = 4-element Vector{Float64}source
GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GNNGraphs
+        x = 4-element Vector{Float64}
source
GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GNNGraphs
 
 julia> snapshots = [rand_graph(10, 20) for i in 1:5];
 
@@ -30,7 +30,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10, 10]
   num_edges: [20, 20, 16, 20, 20, 20]
-  num_snapshots: 6
source
GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GNNGraphs
+  num_snapshots: 6
source
GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GNNGraphs
 
 julia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];
 
@@ -44,7 +44,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10]
   num_edges: [20, 22]
-  num_snapshots: 2
source

Random Generators

GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
+  num_snapshots: 2
source

Random Generators

GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
                            number_snapshots::Int,
                            speed::AbstractFloat,
                            r::AbstractFloat;
@@ -56,7 +56,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [90, 90, 90, 90, 90]
-  num_snapshots: 5
source
GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
+  num_snapshots: 5
source
GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
                                number_snapshots::Int;
                                α::Real,
                                R::Real,
@@ -69,4 +69,4 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [44, 46, 48, 42, 38]
-  num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source
\ No newline at end of file + num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNGraphs/guides/datasets/index.html b/docs/GNNLux.jl/dev/GNNGraphs/guides/datasets/index.html index 41390f390..bee6db356 100644 --- a/docs/GNNLux.jl/dev/GNNGraphs/guides/datasets/index.html +++ b/docs/GNNLux.jl/dev/GNNGraphs/guides/datasets/index.html @@ -1 +1 @@ -Datasets · GNNLux.jl

Datasets

GNNGraphs.jl doesn't come with its own datasets, but leverages those available in the Julia (and non-Julia) ecosystem. In particular, the examples in the GraphNeuralNetworks.jl repository make use of the MLDatasets.jl package. There you will find common graph datasets such as Cora, PubMed, Citeseer, TUDataset and many others. For graphs with static structures and temporal features, datasets such as METRLA, PEMSBAY, ChickenPox, and WindMillEnergy are available. For graphs featuring both temporal structures and temporal features, the TemporalBrains dataset is suitable.

GraphNeuralNetworks.jl provides the mldataset2gnngraph method for interfacing with MLDatasets.jl.

\ No newline at end of file +Datasets · GNNLux.jl

Datasets

GNNGraphs.jl doesn't come with its own datasets, but leverages those available in the Julia (and non-Julia) ecosystem. In particular, the examples in the GraphNeuralNetworks.jl repository make use of the MLDatasets.jl package. There you will find common graph datasets such as Cora, PubMed, Citeseer, TUDataset and many others. For graphs with static structures and temporal features, datasets such as METRLA, PEMSBAY, ChickenPox, and WindMillEnergy are available. For graphs featuring both temporal structures and temporal features, the TemporalBrains dataset is suitable.

GraphNeuralNetworks.jl provides the mldataset2gnngraph method for interfacing with MLDatasets.jl.
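For instance, a minimal sketch (assuming MLDatasets.jl is installed; the Cora dataset is downloaded on first use):

using MLDatasets, GNNGraphs

dataset = Cora()                  # citation graph dataset from MLDatasets.jl
g = mldataset2gnngraph(dataset)   # convert it to a GNNGraph
g.num_nodes                       # 2708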

\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNGraphs/guides/gnngraph/index.html b/docs/GNNLux.jl/dev/GNNGraphs/guides/gnngraph/index.html index 6d28d5018..989eab753 100644 --- a/docs/GNNLux.jl/dev/GNNGraphs/guides/gnngraph/index.html +++ b/docs/GNNLux.jl/dev/GNNGraphs/guides/gnngraph/index.html @@ -165,4 +165,4 @@ julia> GNNGraph(gd) GNNGraph: num_nodes: 10 - num_edges: 20 \ No newline at end of file + num_edges: 20 \ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNGraphs/guides/heterograph/index.html b/docs/GNNLux.jl/dev/GNNGraphs/guides/heterograph/index.html index 438ca142a..32a8a1f46 100644 --- a/docs/GNNLux.jl/dev/GNNGraphs/guides/heterograph/index.html +++ b/docs/GNNLux.jl/dev/GNNGraphs/guides/heterograph/index.html @@ -79,4 +79,4 @@ @assert g.num_nodes[:A] == 80 @assert size(g.ndata[:A].x) == (3, 80) # ... -end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.

\ No newline at end of file +end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.

\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNGraphs/guides/temporalgraph/index.html b/docs/GNNLux.jl/dev/GNNGraphs/guides/temporalgraph/index.html index 48922ddec..b4e1e4898 100644 --- a/docs/GNNLux.jl/dev/GNNGraphs/guides/temporalgraph/index.html +++ b/docs/GNNLux.jl/dev/GNNGraphs/guides/temporalgraph/index.html @@ -86,4 +86,4 @@ julia> [ds.x for ds in tg.ndata]; # vector containing the x feature of each snapshot julia> [g.x for g in tg.snapshots]; # same vector as above, now accessing - # the x feature directly from the snapshots \ No newline at end of file + # the x feature directly from the snapshots \ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNGraphs/index.html b/docs/GNNLux.jl/dev/GNNGraphs/index.html index 26e07a849..636f6abad 100644 --- a/docs/GNNLux.jl/dev/GNNGraphs/index.html +++ b/docs/GNNLux.jl/dev/GNNGraphs/index.html @@ -1 +1 @@ -GNNGraphs.jl · GNNLux.jl

GNNGraphs.jl

GNNGraphs.jl is a package that provides graph data structures and helper functions specifically designed for working with graph neural networks. This package makes it possible to store not only the graph structure, but also features associated with nodes, edges, and the graph itself. It is the core foundation for the GNNlib.jl, GraphNeuralNetworks.jl, and GNNLux.jl packages.

It supports three types of graphs:

  • Static graph is the basic graph type represented by GNNGraph, where each node and edge can have associated features. This type of graph is used in typical graph neural network applications, where neural networks operate on both the structure of the graph and the features stored in it. It can be used to represent a graph where the structure does not change over time, but the features of the nodes and edges can change over time.

  • Heterogeneous graph is a graph that supports multiple types of nodes and edges, and is represented by GNNHeteroGraph. Each type can have its own properties and relationships. This is useful in scenarios with different entities and interactions, such as in citation graphs or multi-relational data.

  • Temporal graph is a graph that changes over time, and is represented by TemporalSnapshotsGNNGraph. Edges and features can change dynamically. This type of graph is useful for applications that involve tracking time-dependent relationships, such as social networks.

This package depends on the package Graphs.jl.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNGraphs
\ No newline at end of file +GNNGraphs.jl · GNNLux.jl

GNNGraphs.jl

GNNGraphs.jl is a package that provides graph data structures and helper functions specifically designed for working with graph neural networks. This package makes it possible to store not only the graph structure, but also features associated with nodes, edges, and the graph itself. It is the core foundation for the GNNlib.jl, GraphNeuralNetworks.jl, and GNNLux.jl packages.

It supports three types of graphs:

  • Static graph is the basic graph type represented by GNNGraph, where each node and edge can have associated features. This type of graph is used in typical graph neural network applications, where neural networks operate on both the structure of the graph and the features stored in it. It can be used to represent a graph where the structure does not change over time, but the features of the nodes and edges can change over time.

  • Heterogeneous graph is a graph that supports multiple types of nodes and edges, and is represented by GNNHeteroGraph. Each type can have its own properties and relationships. This is useful in scenarios with different entities and interactions, such as in citation graphs or multi-relational data.

  • Temporal graph is a graph that changes over time, and is represented by TemporalSnapshotsGNNGraph. Edges and features can change dynamically. This type of graph is useful for applications that involve tracking time-dependent relationships, such as social networks.
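As a minimal sketch, the three graph types above can be constructed from random data as follows:

using GNNGraphs

g_static   = rand_graph(10, 20)                                            # GNNGraph
g_hetero   = rand_bipartite_heterograph((5, 5), (10, 10))                  # GNNHeteroGraph
g_temporal = TemporalSnapshotsGNNGraph([rand_graph(10, 20) for _ in 1:3])  # TemporalSnapshotsGNNGraph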

This package depends on the package Graphs.jl.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNGraphs
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNlib/api/messagepassing/index.html b/docs/GNNLux.jl/dev/GNNlib/api/messagepassing/index.html index a5119ec38..089393c1c 100644 --- a/docs/GNNLux.jl/dev/GNNlib/api/messagepassing/index.html +++ b/docs/GNNLux.jl/dev/GNNlib/api/messagepassing/index.html @@ -1,5 +1,5 @@ Message Passing · GNNLux.jl

Message Passing

Interface

GNNlib.apply_edgesFunction
apply_edges(fmsg, g; [xi, xj, e])
-apply_edges(fmsg, g, xi, xj, e=nothing)

Returns the messages from node j to node i, obtained by applying the message function fmsg to the edges of graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).

The function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.

Arguments

  • g: An AbstractGNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but now to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension's size is the size of a batch of edges. The output of fmsg has to be an array (or a named tuple of arrays) with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).

See also propagate and aggregate_neighbors.

source
GNNlib.aggregate_neighborsFunction
aggregate_neighbors(g, aggr, m)

Given a graph g, edge features m, and an aggregation operator aggr (e.g. +, min, max, mean), returns the new node features

\[\mathbf{x}_i = \square_{j \in \mathcal{N}(i)} \mathbf{m}_{j\to i}\]

Neighborhood aggregation is the second step of propagate, where it comes after apply_edges.

source
GNNlib.propagateFunction
propagate(fmsg, g, aggr; [xi, xj, e])
+apply_edges(fmsg, g, xi, xj, e=nothing)

Returns the messages from node j to node i, obtained by applying the message function fmsg to the edges of graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).

The function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.

Arguments

  • g: An AbstractGNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but now to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension's size is the size of a batch of edges. The output of fmsg has to be an array (or a named tuple of arrays) with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).

See also propagate and aggregate_neighbors.

source
GNNlib.aggregate_neighborsFunction
aggregate_neighbors(g, aggr, m)

Given a graph g, edge features m, and an aggregation operator aggr (e.g. +, min, max, mean), returns the new node features

\[\mathbf{x}_i = \square_{j \in \mathcal{N}(i)} \mathbf{m}_{j\to i}\]

Neighborhood aggregation is the second step of propagate, where it comes after apply_edges.

source
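A minimal sketch of the two-step decomposition of message passing, using a random graph and features for illustration:

using GNNGraphs, GNNlib

g = rand_graph(4, 6)
x = rand(Float32, 2, 4)

m = apply_edges((xi, xj, e) -> xj .- xi, g, xi = x, xj = x)  # one message per edge, size (2, 6)
x̄ = aggregate_neighbors(g, +, m)                             # summed per target node, size (2, 4)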
GNNlib.propagateFunction
propagate(fmsg, g, aggr; [xi, xj, e])
 propagate(fmsg, g, aggr, xi, xj, e=nothing)

Performs message passing on graph g. Takes care of materializing the node features on each edge, applying the message function fmsg, and returning an aggregated message $\bar{\mathbf{m}}$ (depending on the return value of fmsg, an array or a named tuple of arrays with last dimension's size g.num_nodes).

It can be decomposed in two steps:

m = apply_edges(fmsg, g, xi, xj, e)
 m̄ = aggregate_neighbors(g, aggr, m)

GNN layers typically call propagate in their forward pass, providing a closure as the fmsg argument.

Arguments

  • g: A GNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but to be materialized on edges' sources.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A generic function that will be passed over to apply_edges. Has to take as inputs the edge-materialized xi, xj, and e (arrays or named tuples of arrays whose last dimension's size is the size of a batch of edges). Its output has to be an array or a named tuple of arrays with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).
  • aggr: Neighborhood aggregation operator. Use +, mean, max, or min.

Examples

using GraphNeuralNetworks, Flux
 
@@ -25,4 +25,4 @@
 end
 
 l = GNNConv(10 => 20)
-l(g, x)

See also apply_edges and aggregate_neighbors.

source

Built-in message functions

GNNlib.e_mul_xjFunction
e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj

Reshape e into a broadcast compatible shape with xj (by prepending singleton dimensions) then perform broadcasted multiplication.

source
GNNlib.w_mul_xjFunction
w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj

Similar to e_mul_xj but specialized on scalar edge features (weights).

source
\ No newline at end of file +l(g, x)

See also apply_edges and aggregate_neighbors.

source

Built-in message functions

GNNlib.copy_xiFunction
copy_xi(xi, xj, e) = xi
source
GNNlib.copy_xjFunction
copy_xj(xi, xj, e) = xj
source
GNNlib.xi_dot_xjFunction
xi_dot_xj(xi, xj, e) = sum(xi .* xj, dims=1)
source
GNNlib.xi_sub_xjFunction
xi_sub_xj(xi, xj, e) = xi .- xj
source
GNNlib.xj_sub_xiFunction
xj_sub_xi(xi, xj, e) = xj .- xi
source
GNNlib.e_mul_xjFunction
e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj

Reshape e into a broadcast compatible shape with xj (by prepending singleton dimensions) then perform broadcasted multiplication.

source
GNNlib.w_mul_xjFunction
w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj

Similar to e_mul_xj but specialized on scalar edge features (weights).

source
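For instance, propagate combined with copy_xj computes each node's sum of neighbor features; a minimal sketch with random data:

using GNNGraphs, GNNlib

g = rand_graph(5, 10)
x = rand(Float32, 3, 5)

sum_neighbors = propagate(copy_xj, g, +, xj = x)   # size (3, 5)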
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNlib/api/utils/index.html b/docs/GNNLux.jl/dev/GNNlib/api/utils/index.html index 20c015179..564d1484e 100644 --- a/docs/GNNLux.jl/dev/GNNlib/api/utils/index.html +++ b/docs/GNNLux.jl/dev/GNNlib/api/utils/index.html @@ -1,2 +1,2 @@ -Other Operators · GNNLux.jl

Utility Functions

Graph-wise operations

GNNlib.reduce_nodesFunction
reduce_nodes(aggr, g, x)

For a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

See also: reduce_edges.

source
reduce_nodes(aggr, indicator::AbstractVector, x)

Return the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.

See also graph_indicator.

source
GNNlib.reduce_edgesFunction
reduce_edges(aggr, g, e)

For a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

source
GNNlib.broadcast_nodesFunction
broadcast_nodes(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_nodes).

source
GNNlib.broadcast_edgesFunction
broadcast_edges(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_edges).

source

Neighborhood operations

GNNlib.softmax_edge_neighborsFunction
softmax_edge_neighbors(g, e)

Softmax over each node's neighborhood of the edge features e.

\[\mathbf{e}'_{j\to i} = \frac{e^{\mathbf{e}_{j\to i}}} - {\sum_{j'\in N(i)} e^{\mathbf{e}_{j'\to i}}}.\]

source

NNlib's gather and scatter functions

Primitive functions for message passing implemented in NNlib.jl:

\ No newline at end of file +Other Operators · GNNLux.jl

Utility Functions

Graph-wise operations

GNNlib.reduce_nodesFunction
reduce_nodes(aggr, g, x)

For a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

See also: reduce_edges.

source
reduce_nodes(aggr, indicator::AbstractVector, x)

Return the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.

See also graph_indicator.

source
GNNlib.reduce_edgesFunction
reduce_edges(aggr, g, e)

For a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

source
GNNlib.broadcast_nodesFunction
broadcast_nodes(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_nodes).

source
GNNlib.broadcast_edgesFunction
broadcast_edges(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_edges).

source
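A minimal sketch of how the graph-wise operations compose on a batched graph:

using GNNGraphs, GNNlib

g = batch([rand_graph(4, 8) for _ in 1:3])   # batched graph with g.num_graphs == 3
x = rand(Float32, 2, g.num_nodes)

xg = reduce_nodes(+, g, x)      # size (2, 3): one column per graph
xb = broadcast_nodes(g, xg)     # size (2, g.num_nodes): each graph's value copied to its nodes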

Neighborhood operations

GNNlib.softmax_edge_neighborsFunction
softmax_edge_neighbors(g, e)

Softmax over each node's neighborhood of the edge features e.

\[\mathbf{e}'_{j\to i} = \frac{e^{\mathbf{e}_{j\to i}}} + {\sum_{j'\in N(i)} e^{\mathbf{e}_{j'\to i}}}.\]

source
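For example, with random edge scores (the coefficients over each target node's incoming edges sum to 1):

using GNNGraphs, GNNlib

g = rand_graph(4, 6)
e = rand(Float32, 1, 6)
α = softmax_edge_neighbors(g, e)   # size (1, 6)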

NNlib's gather and scatter functions

Primitive functions for message passing implemented in NNlib.jl:

\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNlib/guides/messagepassing/index.html b/docs/GNNLux.jl/dev/GNNlib/guides/messagepassing/index.html index 78f93dca3..fdf679750 100644 --- a/docs/GNNLux.jl/dev/GNNlib/guides/messagepassing/index.html +++ b/docs/GNNLux.jl/dev/GNNlib/guides/messagepassing/index.html @@ -75,4 +75,4 @@ x = propagate(message, g, +, xj=x) return l.σ.(l.weight * x .+ l.bias) -end

See the GATConv implementation here for a more complex example.

Built-in message functions

In order to exploit optimized specializations of propagate, it is recommended to use built-in message functions such as copy_xj whenever possible.

\ No newline at end of file +end

See the GATConv implementation here for a more complex example.

Built-in message functions

In order to exploit optimized specializations of propagate, it is recommended to use built-in message functions such as copy_xj whenever possible.
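If the custom message function simply returns the neighbor features xj, as in the layer implemented above, the propagate call can be replaced by the optimized built-in as a drop-in sketch:

x = propagate(copy_xj, g, +, xj = x)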

\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/GNNlib/index.html b/docs/GNNLux.jl/dev/GNNlib/index.html index 85ceab333..86d85d819 100644 --- a/docs/GNNLux.jl/dev/GNNlib/index.html +++ b/docs/GNNLux.jl/dev/GNNlib/index.html @@ -1 +1 @@ -GNNlib.jl · GNNLux.jl

GNNlib.jl

GNNlib.jl is a package that provides the basic message-passing functions and functional implementations of graph convolutional layers, which are used to build graph neural networks in both the Flux.jl and Lux.jl machine learning frameworks, via the GraphNeuralNetworks.jl and GNNLux.jl packages respectively.

This package depends on GNNGraphs.jl and NNlib.jl, and is primarily intended for developers looking to create new GNN architectures. For most users, the higher-level GraphNeuralNetworks.jl and GNNLux.jl packages are recommended.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNlib
\ No newline at end of file +GNNlib.jl · GNNLux.jl

GNNlib.jl

GNNlib.jl is a package that provides the basic message-passing functions and functional implementations of graph convolutional layers, which are used to build graph neural networks in both the Flux.jl and Lux.jl machine learning frameworks, via the GraphNeuralNetworks.jl and GNNLux.jl packages respectively.

This package depends on GNNGraphs.jl and NNlib.jl, and is primarily intended for developers looking to create new GNN architectures. For most users, the higher-level GraphNeuralNetworks.jl and GNNLux.jl packages are recommended.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNlib
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/api/basic/index.html b/docs/GNNLux.jl/dev/api/basic/index.html index f5f037a27..f2974ed44 100644 --- a/docs/GNNLux.jl/dev/api/basic/index.html +++ b/docs/GNNLux.jl/dev/api/basic/index.html @@ -1,4 +1,4 @@ -Basic layers · GNNLux.jl

Basic Layers

GNNLux.GNNLayerType
abstract type GNNLayer <: AbstractLuxLayer end

An abstract type from which graph neural network layers are derived. It is derived from Lux's AbstractLuxLayer type.

See also GNNLux.GNNChain.

source
GNNLux.GNNChainType
GNNChain(layers...)
+Basic layers · GNNLux.jl

Basic Layers

GNNLux.GNNLayerType
abstract type GNNLayer <: AbstractLuxLayer end

An abstract type from which graph neural network layers are derived. It is derived from Lux's AbstractLuxLayer type.

See also GNNLux.GNNChain.

source
GNNLux.GNNChainType
GNNChain(layers...)
 GNNChain(name = layer, ...)

Collects multiple layers / functions to be called in sequence on a given input graph and input node features.

It allows composing layers in a sequential fashion, as Lux.Chain does, propagating the output of each layer to the next one. In addition, GNNChain handles the input graph as well, providing it as a first argument only to layers subtyping the GNNLayer abstract type.

GNNChain supports indexing and slicing, m[2] or m[1:end-1], and if names are given, m[:name] == m[1] etc.

Examples

julia> using Lux, GNNLux, Random
 
 julia> rng = Random.default_rng();
@@ -17,4 +17,4 @@
 julia> ps, st = LuxCore.setup(rng, m);
 
 julia> m(g, x, ps, st)     # First entry is the output, second entry is the state of the model
-(Float32[-0.15594329 -0.15594329 -0.15594329; 0.93431795 0.93431795 0.93431795; 0.27568763 0.27568763 0.27568763; 0.12568939 0.12568939 0.12568939], (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))
source
\ No newline at end of file +(Float32[-0.15594329 -0.15594329 -0.15594329; 0.93431795 0.93431795 0.93431795; 0.27568763 0.27568763 0.27568763; 0.12568939 0.12568939 0.12568939], (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))
source
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/api/conv/index.html b/docs/GNNLux.jl/dev/api/conv/index.html index 2cc44ffc6..399448de5 100644 --- a/docs/GNNLux.jl/dev/api/conv/index.html +++ b/docs/GNNLux.jl/dev/api/conv/index.html @@ -17,7 +17,7 @@ ps, st = LuxCore.setup(rng, l) # forward pass -y, st = l(g, x, ps, st) source
GNNLux.CGConvType
CGConv((in, ein) => out, act = identity; residual = false,
+y, st = l(g, x, ps, st)   
source
GNNLux.CGConvType
CGConv((in, ein) => out, act = identity; residual = false,
             use_bias = true, init_weight = glorot_uniform, init_bias = zeros32)
 CGConv(in => out, ...)

The crystal graph convolutional layer from the paper Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \sum_{j\in N(i)}\sigma(W_f \mathbf{z}_{ij} + \mathbf{b}_f)\, act(W_s \mathbf{z}_{ij} + \mathbf{b}_s)\]

where $\mathbf{z}_{ij}$ is the node and edge features concatenation $[\mathbf{x}_i; \mathbf{x}_j; \mathbf{e}_{j\to i}]$ and $\sigma$ is the sigmoid function. The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features.

If ein is not given, assumes that no edge features are passed as input in the forward pass.

  • out: The dimension of output node features.
  • act: Activation function.
  • residual: Add a residual connection.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples

using GNNLux, Lux, Random
 
@@ -40,7 +40,7 @@
 # No edge features
 l = CGConv(2 => 4, tanh)
 ps, st = LuxCore.setup(rng, l)
-y, st = l(g, x, ps, st)    # size: (4, num_nodes)
source
GNNLux.ChebConvType
ChebConv(in => out, k; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)

Chebyshev spectral graph convolutional layer from paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

Implements

\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[\begin{aligned} +y, st = l(g, x, ps, st) # size: (4, num_nodes)

source
GNNLux.ChebConvType
ChebConv(in => out, k; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)

Chebyshev spectral graph convolutional layer from paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

Implements

\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[\begin{aligned} Z^{(0)} &= X \\ Z^{(1)} &= \hat{L} X \\ Z^{(k)} &= 2 \hat{L} Z^{(k-1)} - Z^{(k-2)} @@ -62,7 +62,7 @@ ps, st = LuxCore.setup(rng, l) # forward pass -y, st = l(g, x, ps, st) # size of the output y: 5 × num_nodes

source
GNNLux.DConvType
DConv(in => out, k; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)

Diffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • k: Number of diffusion steps.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples

using GNNLux, Lux, Random
+y, st = l(g, x, ps, st)       # size of the output y:  5 × num_nodes
source
GNNLux.DConvType
DConv(in => out, k; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)

Diffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • k: Number of diffusion steps.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -76,7 +76,7 @@
 ps, st = LuxCore.setup(rng, dconv)
 
 # forward pass
-y, st = dconv(g, g.ndata.x, ps, st)   # size: (4, num_nodes)
source
GNNLux.EGNNConvType
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
+y, st = dconv(g, g.ndata.x, ps, st)   # size: (4, num_nodes)
source
GNNLux.EGNNConvType
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
 EGNNConv(in => out; hidden_size=2in, residual=false)

Equivariant Graph Convolutional Layer from E(n) Equivariant Graph Neural Networks.

The layer performs the following operation:

\[\begin{aligned} \mathbf{m}_{j\to i} &=\phi_e(\mathbf{h}_i, \mathbf{h}_j, \lVert\mathbf{x}_i-\mathbf{x}_j\rVert^2, \mathbf{e}_{j\to i}),\\ \mathbf{x}_i' &= \mathbf{x}_i + C_i\sum_{j\in\mathcal{N}(i)}(\mathbf{x}_i-\mathbf{x}_j)\phi_x(\mathbf{m}_{j\to i}),\\ @@ -98,7 +98,7 @@ ps, st = LuxCore.setup(rng, egnn) # forward pass -(hnew, xnew), st = egnn(g, h, x, ps, st)

source
GNNLux.EdgeConvType
EdgeConv(nn; aggr=max)

Edge convolutional layer from paper Dynamic Graph CNN for Learning on Point Clouds.

Performs the operation

\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]

where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • nn: A (possibly learnable) function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Examples:

using GNNLux, Lux, Random
+(hnew, xnew), st = egnn(g, h, x, ps, st)
source
GNNLux.EdgeConvType
EdgeConv(nn; aggr=max)

Edge convolutional layer from paper Dynamic Graph CNN for Learning on Point Clouds.

Performs the operation

\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]

where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • nn: A (possibly learnable) function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Examples:

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -118,7 +118,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, ps, st)
source
GNNLux.GATConvType
GATConv(in => out, σ = identity; heads = 1, concat = true, negative_slope = 0.2, init_weight = glorot_uniform, init_bias = zeros32, use_bias = true, add_self_loops = true, dropout=0.0)
+y, st = l(g, x, ps, st)
source
GNNLux.GATConvType
GATConv(in => out, σ = identity; heads = 1, concat = true, negative_slope = 0.2, init_weight = glorot_uniform, init_bias = zeros32, use_bias = true, add_self_loops = true, dropout=0.0)
 GATConv((in, ein) => out, ...)

Graph attentional layer from the paper Graph Attention Networks.

Implements the operation

\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W \mathbf{x}_j\]

where the attention coefficients $\alpha_{ij}$ are given by

\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_j]))\]

with $z_i$ a normalization factor.

In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as

\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))\]

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
  • out: The dimension of output node features.
  • σ: Activation function. Default identity.
  • heads: Number of attention heads. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • negative_slope: The negative slope of the LeakyReLU activation. Default 0.2.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default true.
  • dropout: Dropout probability on the normalized attention coefficient. Default 0.0.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
@@ -139,7 +139,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, ps, st)       
source
GNNLux.GATv2ConvType
GATv2Conv(in => out, σ = identity; heads = 1, concat = true, negative_slope = 0.2, init_weight = glorot_uniform, init_bias = zeros32, use_bias = true, add_self_loops = true, dropout=0.0)
+y, st = l(g, x, ps, st)       
source
GNNLux.GATv2ConvType
GATv2Conv(in => out, σ = identity; heads = 1, concat = true, negative_slope = 0.2, init_weight = glorot_uniform, init_bias = zeros32, use_bias = true, add_self_loops = true, dropout=0.0)
 GATv2Conv((in, ein) => out, ...)

GATv2 attentional layer from the paper How Attentive are Graph Attention Networks?.

Implements the operation

\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W_1 \mathbf{x}_j\]

where the attention coefficients $\alpha_{ij}$ are given by

\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_2 \mathbf{x}_i + W_1 \mathbf{x}_j))\]

with $z_i$ a normalization factor.

In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as

\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_3 \mathbf{e}_{j\to i} + W_2 \mathbf{x}_i + W_1 \mathbf{x}_j)).\]

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
  • out: The dimension of output node features.
  • σ: Activation function. Default identity.
  • heads: Number of attention heads. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • negative_slope: The negative slope of the LeakyReLU activation. Default 0.2.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default true.
  • dropout: Dropout probability on the normalized attention coefficient. Default 0.0.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
@@ -164,7 +164,7 @@
 e = randn(rng, Float32, ein, length(s))
 
 # forward pass
-y, st = l(g, x, e, ps, st)    
source
GNNLux.GCNConvType
GCNConv(in => out, σ=identity; [init_weight, init_bias, use_bias, add_self_loops, use_edge_weight])

Graph convolutional layer from paper Semi-supervised Classification with Graph Convolutional Networks.

Performs the operation

\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]

where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.

If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as

\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Forward

(::GCNConv)(g, x, [edge_weight], ps, st; norm_fn = d -> 1 ./ sqrt.(d), conv_weight=nothing)

Takes as input a graph g, a node feature matrix x of size [in, num_nodes], optionally an edge weight vector, and the parameters and state of the layer. Returns a node feature matrix of size [out, num_nodes].

The norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as argument. By default, it computes $\frac{1}{\sqrt{d}}$ i.e. the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix.

Examples

using GNNLux, Lux, Random
+y, st = l(g, x, e, ps, st)    
source
GNNLux.GCNConvType
GCNConv(in => out, σ=identity; [init_weight, init_bias, use_bias, add_self_loops, use_edge_weight])

Graph convolutional layer from paper Semi-supervised Classification with Graph Convolutional Networks.

Performs the operation

\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]

where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.

If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as

\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Forward

(::GCNConv)(g, x, [edge_weight], ps, st; norm_fn = d -> 1 ./ sqrt.(d), conv_weight=nothing)

Takes as input a graph g, a node feature matrix x of size [in, num_nodes], optionally an edge weight vector, and the parameters and state of the layer. Returns a node feature matrix of size [out, num_nodes].

The norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as argument. By default, it computes $\frac{1}{\sqrt{d}}$ i.e. the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -193,7 +193,7 @@
 g = GNNGraph(s, t, w)
 l = GCNConv(3 => 5, use_edge_weight=true)
 ps, st = Lux.setup(rng, l)
-y, st = l(g, x, ps, st) # same as l(g, x, w, ps, st)
source
GNNLux.GINConvType
GINConv(f, ϵ; aggr=+)

Graph Isomorphism convolutional layer from paper How Powerful are Graph Neural Networks?.

Implements the graph convolution

\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]

where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • f: A (possibly learnable) function acting on node features.
  • ϵ: Weighting factor.

Examples:

using GNNLux, Lux, Random
+y, st = l(g, x, ps, st) # same as l(g, x, w, ps, st)
source
GNNLux.GINConvType
GINConv(f, ϵ; aggr=+)

Graph Isomorphism convolutional layer from paper How Powerful are Graph Neural Networks?.

Implements the graph convolution

\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]

where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • f: A (possibly learnable) function acting on node features.
  • ϵ: Weighting factor.

Examples:

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -216,7 +216,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, ps, st)       # size:  out_channel × num_nodes
source
GNNLux.GMMConvType
GMMConv((in, ein) => out, σ=identity; K = 1, residual = false init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)

Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]

where $w^a_{k}(e^a)$ for feature a and kernel k is given by

\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]

$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.

The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: Number of input node features.
  • ein: Number of input edge features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • K: Number of kernels. Default 1.
  • residual: Residual connection. Default false.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples

using GNNLux, Lux, Random
+y, st = l(g, x, ps, st)       # size:  out_channel × num_nodes
source
GNNLux.GMMConvType
GMMConv((in, ein) => out, σ=identity; K = 1, residual = false init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)

Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]

where $w^a_{k}(e^a)$ for feature a and kernel k is given by

\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]

$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.

The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: Number of input node features.
  • ein: Number of input edge features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • K: Number of kernels. Default 1.
  • residual: Residual connection. Default false.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -236,7 +236,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, e, ps, st)       # size:  out × num_nodes
source
GNNLux.GatedGraphConvType
GatedGraphConv(out, num_layers; 
+y, st = l(g, x, e, ps, st)       # size:  out × num_nodes
source
GNNLux.GatedGraphConvType
GatedGraphConv(out, num_layers; 
         aggr = +, init_weight = glorot_uniform)

Gated graph convolution layer from Gated Graph Sequence Neural Networks.

Implements the recursion

\[\begin{aligned} \mathbf{h}^{(0)}_i &= [\mathbf{x}_i; \mathbf{0}] \\ \mathbf{h}^{(l)}_i &= GRU(\mathbf{h}^{(l-1)}_i, \square_{j \in N(i)} W \mathbf{h}^{(l-1)}_j) @@ -259,7 +259,7 @@ ps, st = LuxCore.setup(rng, l) # forward pass -y, st = l(g, x, ps, st) # size: out_channel × num_nodes

source
GNNLux.GraphConvType
GraphConv(in => out, σ = identity; aggr = +, init_weight = glorot_uniform,init_bias = zeros32, use_bias = true)

Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.

Performs:

\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples

using GNNLux, Lux, Random
+y, st = l(g, x, ps, st)       # size:  out_channel × num_nodes  
source
GNNLux.GraphConvType
GraphConv(in => out, σ = identity; aggr = +, init_weight = glorot_uniform,init_bias = zeros32, use_bias = true)

Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.

Performs:

\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -279,7 +279,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, ps, st)       # size of the output y:  5 × num_nodes
source
GNNLux.MEGNetConvType
MEGNetConv(ϕe, ϕv; aggr=mean)
+y, st = l(g, x, ps, st)       # size of the output y:  5 × num_nodes
source
GNNLux.MEGNetConvType
MEGNetConv(ϕe, ϕv; aggr=mean)
 MEGNetConv(in => out; aggr=mean)

Convolution layer from the Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals paper. In the forward pass, takes as inputs node features x and edge features e and returns updated features x' and e' according to

\[\begin{aligned} \mathbf{e}_{i\to j}' = \phi_e([\mathbf{x}_i;\, \mathbf{x}_j;\, \mathbf{e}_{i\to j}]),\\ \mathbf{x}_{i}' = \phi_v([\mathbf{x}_i;\, \square_{j\in \mathcal{N}(i)}\,\mathbf{e}_{j\to i}']). @@ -300,7 +300,7 @@ ps, st = LuxCore.setup(rng, m) # forward pass -(x′, e′), st = m(g, x, e, ps, st)

source
GNNLux.NNConvType
NNConv(in => out, f, σ=identity; aggr=+, init_bias = zeros32, use_bias = true, init_weight = glorot_uniform)

The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.

Performs the operation

\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]

where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.

Arguments

  • in: The dimension of input node features.
  • out: The dimension of output node features.
  • f: A (possibly learnable) function acting on edge features.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • σ: Activation function.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples:

using GNNLux, Lux, Random
+(x′, e′), st = m(g, x, e, ps, st)
source
GNNLux.NNConvType
NNConv(in => out, f, σ=identity; aggr=+, init_bias = zeros32, use_bias = true, init_weight = glorot_uniform)

The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.

Performs the operation

\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]

where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.

Arguments

  • in: The dimension of input node features.
  • out: The dimension of output node features.
  • f: A (possibly learnable) function acting on edge features.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • σ: Activation function.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples:

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -326,7 +326,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, e, ps, st)       # size:  n_out × num_nodes 
source
GNNLux.ResGatedGraphConvType
ResGatedGraphConv(in => out, act=identity; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)

The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.

The layer's forward pass is given by

\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]

where the edge gates $\eta_{ij}$ are given by

\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • act: Activation function.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples:

using GNNLux, Lux, Random
+y, st = l(g, x, e, ps, st)       # size:  n_out × num_nodes 
source
GNNLux.ResGatedGraphConvType
ResGatedGraphConv(in => out, act=identity; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true)

The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.

The layer's forward pass is given by

\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]

where the edge gates $\eta_{ij}$ are given by

\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • act: Activation function.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples:

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -346,7 +346,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, ps, st)       # size:  out_channel × num_nodes  
source
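For completeness, a minimal end-to-end sketch of the ResGatedGraphConv forward pass, assuming rand_graph is available and using illustrative dimensions:

using GNNLux, Lux, Random

rng = Random.default_rng()
g = rand_graph(rng, 5, 6)
x = randn(rng, Float32, 3, g.num_nodes)   # 3 input features per node

l = ResGatedGraphConv(3 => 5, relu)
ps, st = LuxCore.setup(rng, l)

# forward pass, y has size (5, num_nodes)
y, st = l(g, x, ps, st)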
GNNLux.SAGEConvType
SAGEConv(in => out, σ=identity; aggr=mean, init_weight = glorot_uniform, init_bias = zeros32, use_bias=true)

GraphSAGE convolution layer from paper Inductive Representation Learning on Large Graphs.

Performs:

\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples:

using GNNLux, Lux, Random
+y, st = l(g, x, ps, st)       # size:  out_channel × num_nodes  
source
GNNLux.SAGEConvType
SAGEConv(in => out, σ=identity; aggr=mean, init_weight = glorot_uniform, init_bias = zeros32, use_bias=true)

GraphSAGE convolution layer from paper Inductive Representation Learning on Large Graphs.

Performs:

\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples:

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -366,7 +366,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, ps, st)       # size:  out_channel × num_nodes   
source
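A minimal SAGEConv sketch under the same assumptions (rand_graph available, illustrative dimensions); mean is imported from Statistics to pass it as the aggregation operator:

using GNNLux, Lux, Random
using Statistics: mean

rng = Random.default_rng()
g = rand_graph(rng, 5, 6)
x = randn(rng, Float32, 3, g.num_nodes)

l = SAGEConv(3 => 5, relu, aggr = mean)
ps, st = LuxCore.setup(rng, l)

# forward pass, y has size (5, num_nodes)
y, st = l(g, x, ps, st)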
GNNLux.SGConvType
SGConv(in => out, k = 1; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true, add_self_loops = true, use_edge_weight = false)

SGC layer from the Simplifying Graph Convolutional Networks paper. Performs the operation

\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]

where $\tilde{A}$ is $A + I$.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k : Number of hops k. Default 1.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples

using GNNLux, Lux, Random
+y, st = l(g, x, ps, st)       # size:  out_channel × num_nodes   
source
GNNLux.SGConvType
SGConv(in => out, k = 1; init_weight = glorot_uniform, init_bias = zeros32, use_bias = true, add_self_loops = true, use_edge_weight = false)

SGC layer from the Simplifying Graph Convolutional Networks paper. Performs the operation

\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]

where $\tilde{A}$ is $A + I$.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k : Number of hops k. Default 1.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_bias: Bias initializer. Default zeros32.
  • use_bias: Add learnable bias. Default true.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -394,4 +394,4 @@
 g = GNNGraph(s, t, w)
 l = SGConv(3 => 5, add_self_loops = true, use_edge_weight=true) 
 ps, st = LuxCore.setup(rng, l)
-y, st = l(g, x, ps, st) # same as l(g, x, w) 
source
\ No newline at end of file +y, st = l(g, x, ps, st) # same as l(g, x, w) source
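Besides the weighted-graph usage shown above, a basic unweighted SGConv sketch (rand_graph and dimensions are illustrative assumptions) might look like this:

using GNNLux, Lux, Random

rng = Random.default_rng()
g = rand_graph(rng, 5, 6)
x = randn(rng, Float32, 3, g.num_nodes)

l = SGConv(3 => 5, 2)   # aggregate information from 2-hop neighborhoods
ps, st = LuxCore.setup(rng, l)

# forward pass, y has size (5, num_nodes)
y, st = l(g, x, ps, st)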
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/api/temporalconv/index.html b/docs/GNNLux.jl/dev/api/temporalconv/index.html index ccc043f47..cb7eeba17 100644 --- a/docs/GNNLux.jl/dev/api/temporalconv/index.html +++ b/docs/GNNLux.jl/dev/api/temporalconv/index.html @@ -14,7 +14,7 @@ ps, st = LuxCore.setup(rng, l) # forward pass -y, st = l(g, x, ps, st) # result size (6, 5)source
GNNLux.EvolveGCNOType
EvolveGCNO(ch; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Evolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.

Performs a graph convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the LSTM layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random
+y, st = l(g, x, ps, st)      # result size (6, 5)
source
GNNLux.EvolveGCNOType
EvolveGCNO(ch; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Evolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.

Performs a graph convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the LSTM layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -29,7 +29,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(tg, tg.ndata.x , ps, st)      # result size 3, size y[1] (5, 10)
source
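The EvolveGCNO example is truncated by the diff. A hedged reconstruction, assuming TemporalSnapshotsGNNGraph and rand_graph from GNNGraphs are in scope and choosing illustrative sizes consistent with the output comment above:

using GNNLux, Lux, Random

rng = Random.default_rng()

# temporal graph with 3 snapshots of 10 nodes each,
# each snapshot carrying 4 node features (illustrative sizes)
snapshots = [rand_graph(rng, 10, 20; ndata = randn(rng, Float32, 4, 10)) for _ in 1:3]
tg = TemporalSnapshotsGNNGraph(snapshots)

l = EvolveGCNO(4 => 5)
ps, st = LuxCore.setup(rng, l)

# forward pass: a 3-element vector, each entry of size (5, 10)
y, st = l(tg, tg.ndata.x, ps, st)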
GNNLux.DCGRUMethod
DCGRU(in => out, k; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Diffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.

Performs a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Diffusion step.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the GRU layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random
+y, st = l(tg, tg.ndata.x , ps, st)      # result size 3, size y[1] (5, 10)
source
GNNLux.DCGRUMethod
DCGRU(in => out, k; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Diffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.

Performs a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Diffusion step.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the GRU layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -45,7 +45,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, ps, st)      # result size (5, 5)
source
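A minimal DCGRU sketch matching the result-size comment above (rand_graph and dimensions are illustrative assumptions):

using GNNLux, Lux, Random

rng = Random.default_rng()
g = rand_graph(rng, 5, 10)
x = randn(rng, Float32, 2, 5)   # 2 input features per node

l = DCGRU(2 => 5, 2)   # diffusion step k = 2
ps, st = LuxCore.setup(rng, l)

# forward pass
y, st = l(g, x, ps, st)      # result size (5, 5)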
GNNLux.GConvGRUMethod
GConvGRU(in => out, k; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the GRU layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random
+y, st = l(g, x, ps, st)      # result size (5, 5)
source
GNNLux.GConvGRUMethod
GConvGRU(in => out, k; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the GRU layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -61,7 +61,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, ps, st)      # result size (5, 5)
source
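A minimal GConvGRU sketch under the same illustrative assumptions:

using GNNLux, Lux, Random

rng = Random.default_rng()
g = rand_graph(rng, 5, 10)
x = randn(rng, Float32, 2, 5)

l = GConvGRU(2 => 5, 2)   # Chebyshev polynomial order k = 2
ps, st = LuxCore.setup(rng, l)

# forward pass
y, st = l(g, x, ps, st)      # result size (5, 5)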
GNNLux.GConvLSTMMethod
GConvLSTM(in => out, k; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Graph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the LSTM layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random
+y, st = l(g, x, ps, st)      # result size (5, 5)
source
GNNLux.GConvLSTMMethod
GConvLSTM(in => out, k; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32)

Graph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the LSTM layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -77,7 +77,7 @@
 ps, st = LuxCore.setup(rng, l)
 
 # forward pass
-y, st = l(g, x, ps, st)      # result size (5, 5)
source
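A minimal GConvLSTM sketch, analogous to the GConvGRU one (illustrative assumptions as before):

using GNNLux, Lux, Random

rng = Random.default_rng()
g = rand_graph(rng, 5, 10)
x = randn(rng, Float32, 2, 5)

l = GConvLSTM(2 => 5, 2)   # Chebyshev polynomial order k = 2
ps, st = LuxCore.setup(rng, l)

# forward pass
y, st = l(g, x, ps, st)      # result size (5, 5)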
GNNLux.TGCNMethod
TGCN(in => out; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32, add_self_loops = false, use_edge_weight = true)

Temporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.

Performs a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the GRU layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Examples

using GNNLux, Lux, Random
+y, st = l(g, x, ps, st)      # result size (5, 5)
source
GNNLux.TGCNMethod
TGCN(in => out; use_bias = true, init_weight = glorot_uniform, init_state = zeros32, init_bias = zeros32, add_self_loops = false, use_edge_weight = true)

Temporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.

Performs a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • use_bias: Add learnable bias. Default true.
  • init_weight: Weights' initializer. Default glorot_uniform.
  • init_state: Initializer for the hidden state of the GRU layer. Default zeros32.
  • init_bias: Bias initializer. Default zeros32.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Examples

using GNNLux, Lux, Random
 
 # initialize random number generator
 rng = Random.default_rng()
@@ -93,4 +93,4 @@
 ps, st = LuxCore.setup(rng, tgcn)
 
 # forward pass
-y, st = tgcn(g, x, ps, st)      # result size (6, 5)
source
\ No newline at end of file +y, st = tgcn(g, x, ps, st) # result size (6, 5)source
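A minimal TGCN sketch consistent with the result-size comment above (rand_graph and dimensions are illustrative assumptions):

using GNNLux, Lux, Random

rng = Random.default_rng()
g = rand_graph(rng, 5, 10)
x = randn(rng, Float32, 2, 5)

tgcn = TGCN(2 => 6)
ps, st = LuxCore.setup(rng, tgcn)

# forward pass
y, st = tgcn(g, x, ps, st)      # result size (6, 5)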
\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/guides/models/index.html b/docs/GNNLux.jl/dev/guides/models/index.html index a61130b59..d71fb82b5 100644 --- a/docs/GNNLux.jl/dev/guides/models/index.html +++ b/docs/GNNLux.jl/dev/guides/models/index.html @@ -43,4 +43,4 @@ x -> relu.(x), GraphConv(d => d, relu), Dropout(0.5), - Dense(d, dout))

The GNNChain only propagates the graph and the node features. More complex scenarios, e.g. when edge features are also updated, have to be handled by defining the forward pass explicitly.

\ No newline at end of file + Dense(d, dout))

The GNNChain only propagates the graph and the node features. More complex scenarios, e.g. when edge features are also updated, have to be handled by defining the forward pass explicitly, as sketched below.
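Here is a hedged sketch of such an explicit forward pass, written as a Lux container layer. All names and dimensions are illustrative assumptions, not part of the library API:

using GNNLux, Lux

# A custom model that updates node features from edge features via NNConv,
# followed by a dense readout. Field names are arbitrary.
struct MyEdgeGNN{L1, L2} <: Lux.AbstractLuxContainerLayer{(:conv, :dense)}
    conv::L1
    dense::L2
end

MyEdgeGNN(din, d, dout, ein) =
    MyEdgeGNN(NNConv(din => d, Dense(ein => d * din), relu), Dense(d => dout))

# explicit forward pass threading graph, features, parameters and states
function (m::MyEdgeGNN)(g::GNNGraph, x, e, ps, st)
    x, st_conv = m.conv(g, x, e, ps.conv, st.conv)
    y, st_dense = m.dense(x, ps.dense, st.dense)
    return y, (conv = st_conv, dense = st_dense)
end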

\ No newline at end of file diff --git a/docs/GNNLux.jl/dev/index.html b/docs/GNNLux.jl/dev/index.html index 116cbe2ea..dfc598a0d 100644 --- a/docs/GNNLux.jl/dev/index.html +++ b/docs/GNNLux.jl/dev/index.html @@ -53,4 +53,4 @@ return model, ps, st end -train_model!(model, ps, st, train_graphs, test_graphs) \ No newline at end of file +train_model!(model, ps, st, train_graphs, test_graphs) \ No newline at end of file diff --git a/docs/GNNLux.jl/dev/tutorials/gnn_intro/index.html b/docs/GNNLux.jl/dev/tutorials/gnn_intro/index.html index 61b3145b4..bc167c041 100644 --- a/docs/GNNLux.jl/dev/tutorials/gnn_intro/index.html +++ b/docs/GNNLux.jl/dev/tutorials/gnn_intro/index.html @@ -181,4 +181,4 @@ Epoch: 1900 Loss: 0.0017966953 Epoch: 2000 Loss: 0.0016328939

Train accuracy:

(ŷ, emb_final), st = gcn(g, g.x, ps, st)
-mean(onecold(ŷ[:, train_mask]) .== onecold(g.y[:, train_mask]))
1.0

Test accuracy:

mean(onecold(ŷ[:, .!train_mask]) .== onecold(y[:, .!train_mask]))
0.7333333333333333

Final embedding:

visualize_embeddings(emb_final, colors = labels)

As one can see, our 3-layer GCN model manages to linearly separate the communities and classify most of the nodes correctly.

Furthermore, we did this all with a few lines of code, thanks to GNNLux.jl, which helped us out with data handling and GNN implementations.


This page was generated using Literate.jl.

\ No newline at end of file +mean(onecold(ŷ[:, train_mask]) .== onecold(g.y[:, train_mask]))
1.0

Test accuracy:

mean(onecold(ŷ[:, .!train_mask]) .== onecold(y[:, .!train_mask]))
0.7333333333333333

Final embedding:

visualize_embeddings(emb_final, colors = labels)

As one can see, our 3-layer GCN model manages to linearly separate the communities and classify most of the nodes correctly.

Furthermore, we did this all with a few lines of code, thanks to GNNLux.jl, which helped us out with data handling and GNN implementations.


This page was generated using Literate.jl.

\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/.documenter-siteinfo.json b/docs/GNNlib.jl/dev/.documenter-siteinfo.json index e420fc9a8..834922542 100644 --- a/docs/GNNlib.jl/dev/.documenter-siteinfo.json +++ b/docs/GNNlib.jl/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-09T09:31:26","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-09T10:23:10","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/docs/GNNlib.jl/dev/GNNGraphs/api/datasets/index.html b/docs/GNNlib.jl/dev/GNNGraphs/api/datasets/index.html index 95f00bd23..187f87a1c 100644 --- a/docs/GNNlib.jl/dev/GNNGraphs/api/datasets/index.html +++ b/docs/GNNlib.jl/dev/GNNGraphs/api/datasets/index.html @@ -9,4 +9,4 @@ targets = 2708-element Vector{Int64} test_mask = 2708-element BitVector features = 1433×2708 Matrix{Float32} - train_mask = 2708-element BitVectorsource
\ No newline at end of file + train_mask = 2708-element BitVectorsource
\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/GNNGraphs/api/gnngraph/index.html b/docs/GNNlib.jl/dev/GNNGraphs/api/gnngraph/index.html index 0b2ba6398..ae58bad15 100644 --- a/docs/GNNlib.jl/dev/GNNGraphs/api/gnngraph/index.html +++ b/docs/GNNlib.jl/dev/GNNGraphs/api/gnngraph/index.html @@ -32,7 +32,7 @@ # Collect edges' source and target nodes. # Both source and target are vectors of length num_edges -source, target = edge_index(g)

A GNNGraph can be sent to the GPU, for example by using Flux.jl's gpu function or MLDataDevices.jl's utilities.

source
Base.copyFunction
copy(g::GNNGraph; deep=false)

Create a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.

source

DataStore

GNNGraphs.DataStoreType
DataStore([n, data])
+source, target = edge_index(g)

A GNNGraph can be sent to the GPU, for example by using Flux.jl's gpu function or MLDataDevices.jl's utilities.

source
Base.copyFunction
copy(g::GNNGraph; deep=false)

Create a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.

source

DataStore

GNNGraphs.DataStoreType
DataStore([n, data])
 DataStore([n,] k1 = x1, k2 = x2, ...)

A container for feature arrays. The optional argument n enforces that numobs(x) == n for each array contained in the datastore.

At construction time, the data can be provided as any iterable of pairs of symbols and arrays, or as keyword arguments:

julia> ds = DataStore(3, x = rand(Float32, 2, 3), y = rand(Float32, 3))
 DataStore(3) with 2 elements:
   y = 3-element Vector{Float32}
@@ -62,8 +62,8 @@
 3-element Vector{Float32}:
  1.0
  1.0
- 1.0
source

Query

GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
-adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

For the returned adjacency list a, if dir=:out then a[i] will contain the neighbors of node i through outgoing edges. If dir=:in, it will contain neighbors from incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g.

s, t = edge_index(g)
source
GNNGraphs.get_graph_typeMethod
get_graph_type(g::GNNGraph)

Return the underlying representation for the graph g as a symbol.

Possible values are:

  • :coo: Coordinate list representation. The graph is stored as a tuple of vectors (s, t, w), where s and t are the source and target nodes of the edges, and w is the edge weights.
  • :sparse: Sparse matrix representation. The graph is stored as a sparse matrix representing the weighted adjacency matrix.
  • :dense: Dense matrix representation. The graph is stored as a dense matrix representing the weighted adjacency matrix.

The default representation for graph constructors in GNNGraphs.jl is :coo. The underlying representation can be accessed through the g.graph field.

See also GNNGraph.

Examples

The default representation for graph constructors in GNNGraphs.jl is :coo.

julia> g = rand_graph(5, 10)
+ 1.0
source

Query

GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
+adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

For the returned adjacency list a, if dir=:out then a[i] will contain the neighbors of node i through outgoing edges. If dir=:in, it will contain neighbors from incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
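A small illustrative example (the displayed output is indicative):

julia> using GNNGraphs

julia> g = GNNGraph([1, 2, 3], [2, 3, 1]);  # directed triangle 1→2→3→1

julia> adjacency_list(g)        # neighbors through outgoing edges
3-element Vector{Vector{Int64}}:
 [2]
 [3]
 [1]

julia> adjacency_list(g; dir = :in)
3-element Vector{Vector{Int64}}:
 [3]
 [1]
 [2]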
GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g.

s, t = edge_index(g)
source
GNNGraphs.get_graph_typeMethod
get_graph_type(g::GNNGraph)

Return the underlying representation for the graph g as a symbol.

Possible values are:

  • :coo: Coordinate list representation. The graph is stored as a tuple of vectors (s, t, w), where s and t are the source and target nodes of the edges, and w is the edge weights.
  • :sparse: Sparse matrix representation. The graph is stored as a sparse matrix representing the weighted adjacency matrix.
  • :dense: Dense matrix representation. The graph is stored as a dense matrix representing the weighted adjacency matrix.

The default representation for graph constructors in GNNGraphs.jl is :coo. The underlying representation can be accessed through the g.graph field.

See also GNNGraph.

Examples

The default representation for graph constructors in GNNGraphs.jl is :coo.

julia> g = rand_graph(5, 10)
 GNNGraph:
   num_nodes: 5
   num_edges: 10
@@ -88,7 +88,7 @@
 julia> gcoo = GNNGraph(g, graph_type=:coo);
 
 julia> gcoo.graph
-([2, 3, 5], [1, 2, 4], [1, 1, 1])
source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
GNNGraphs.has_multi_edgesMethod
has_multi_edges(g::GNNGraph)

Return true if g has any multiple edges.

source
GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$, where $A$ is the adjacency matrix of the graph g.

source
GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

User may specify the eltype T of the returned matrix.

If weighted=true, the A will contain the edge weights if any, otherwise the elements of A will be either 0 or 1.

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
Graphs.has_self_loopsMethod
has_self_loops(g::GNNGraph)

Return true if g has any self loops.

source
Graphs.inneighborsMethod
inneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through incoming edges.

See also neighbors and outneighbors.

source
Graphs.outneighborsMethod
outneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through outgoing edges.

See also neighbors and inneighbors.

source
Graphs.neighborsMethod
neighbors(g::GNNGraph, i::Integer; dir=:out)

Return the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.

See also outneighbors, inneighbors.

source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])
+([2, 3, 5], [1, 2, 4], [1, 1, 1])
source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
GNNGraphs.has_multi_edgesMethod
has_multi_edges(g::GNNGraph)

Return true if g has any multiple edges.

source
GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$, where $A$ is the adjacency matrix of the graph g.

source
GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

User may specify the eltype T of the returned matrix.

If weighted=true, the A will contain the edge weights if any, otherwise the elements of A will be either 0 or 1.

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
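A small illustrative example on an unweighted graph (the displayed output is indicative):

julia> using GNNGraphs

julia> g = GNNGraph([1, 1, 2, 3], [2, 3, 3, 1]);  # edges 1→2, 1→3, 2→3, 3→1

julia> degree(g)            # out-degrees
3-element Vector{Int64}:
 2
 1
 1

julia> degree(g; dir = :in) # in-degrees
3-element Vector{Int64}:
 1
 1
 2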
Graphs.has_self_loopsMethod
has_self_loops(g::GNNGraph)

Return true if g has any self loops.

source
Graphs.inneighborsMethod
inneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through incoming edges.

See also neighbors and outneighbors.

source
Graphs.outneighborsMethod
outneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through outgoing edges.

See also neighbors and inneighbors.

source
Graphs.neighborsMethod
neighbors(g::GNNGraph, i::Integer; dir=:out)

Return the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.

See also outneighbors, inneighbors.

source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])
 add_edges(g::GNNGraph, (s, t); [edata])
 add_edges(g::GNNGraph, (s, t, w); [edata])

Add to graph g the edges with source nodes s and target nodes t. Optionally, pass the edge weight w and the features edata for the new edges. Returns a new graph sharing part of the underlying data with g.

If s or t contain nodes that are not already present in the graph, they are added to the graph as well.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
@@ -110,9 +110,9 @@
 julia> add_edges(g, [1,2], [2,3])
 GNNGraph:
   num_nodes: 3
-  num_edges: 2
source
GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g, but with additional edges connecting each node to itself.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
+  num_edges: 2
source
GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g, but with additional edges connecting each node to itself.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
                 num_neg_edges = g.num_edges, 
-                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the original graph.

See also is_bidirected.

source
GNNGraphs.perturb_edgesMethod
perturb_edges([rng], g::GNNGraph, perturb_ratio)

Return a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops.

The function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.

Arguments

  • g::GNNGraph: The graph to be perturbed.
  • perturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.
  • rng: An optional random number generator to ensure reproducible results.

Examples

julia> g = GNNGraph((s, t, w))
+                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the original graph.

See also is_bidirected.

source
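A small illustrative example (the displayed output is indicative):

julia> using GNNGraphs

julia> g = rand_graph(10, 30);

julia> g_neg = negative_sample(g, num_neg_edges = 6)
GNNGraph:
  num_nodes: 10
  num_edges: 6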
GNNGraphs.perturb_edgesMethod
perturb_edges([rng], g::GNNGraph, perturb_ratio)

Return a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops.

The function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.

Arguments

  • g::GNNGraph: The graph to be perturbed.
  • perturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.
  • rng: An optional random number generator to ensure reproducible results.

Examples

julia> g = GNNGraph((s, t, w))
 GNNGraph:
   num_nodes: 4
   num_edges: 5
@@ -120,7 +120,7 @@
 julia> perturbed_g = perturb_edges(g, 0.2)
 GNNGraph:
   num_nodes: 4
-  num_edges: 6
source
GNNGraphs.ppr_diffusionMethod
ppr_diffusion(g::GNNGraph{<:COO_T}, alpha = 0.85f0) -> GNNGraph

Calculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. Reference paper: The PageRank citation ranking: Bringing order to the web.

The function performs the following steps:

  1. Constructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha) and I the identity matrix.
  2. Normalizes A to ensure each column sums to 1, representing transition probabilities.
  3. Applies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.
  4. Updates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.

Arguments

  • g::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.
  • alpha::Float32: The damping factor used in PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.

Returns

  • A new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.
source
GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true makes sure that an edge and its reverse go into the same split. This option is supported only for bidirected graphs with no self-loops and multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
GNNGraphs.random_walk_peMethod
random_walk_pe(g, walk_length)

Return the random walk positional encoding from the paper Graph Neural Networks with Learnable Structural and Positional Representations of the given graph g and the length of the walk walk_length as a matrix of size (walk_length, g.num_nodes).

source
GNNGraphs.remove_edgesMethod
remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})
+  num_edges: 6
source
GNNGraphs.ppr_diffusionMethod
ppr_diffusion(g::GNNGraph{<:COO_T}, alpha = 0.85f0) -> GNNGraph

Calculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. Reference paper: The PageRank citation ranking: Bringing order to the web.

The function performs the following steps:

  1. Constructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha) and I the identity matrix.
  2. Normalizes A to ensure each column sums to 1, representing transition probabilities.
  3. Applies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.
  4. Updates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.

Arguments

  • g::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.
  • alpha::Float32: The damping factor used in PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.

Returns

  • A new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.
source
GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true makes sure that an edge and its reverse go into the same split. This option is supported only for bidirected graphs with no self-loops and multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
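A small illustrative example (graph sizes are arbitrary):

julia> using GNNGraphs

julia> g = rand_graph(10, 30);  # bidirected by default: each edge comes with its reverse

julia> g1, g2 = rand_edge_split(g, 0.8);

julia> g1.num_edges + g2.num_edges == g.num_edges  # the edges are partitioned
true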
GNNGraphs.random_walk_peMethod
random_walk_pe(g, walk_length)

Return the random walk positional encoding from the paper Graph Neural Networks with Learnable Structural and Positional Representations of the given graph g and the length of the walk walk_length as a matrix of size (walk_length, g.num_nodes).

source
GNNGraphs.remove_edgesMethod
remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})
 remove_edges(g::GNNGraph, p=0.5)

Remove specified edges from a GNNGraph, either by specifying edge indices or by randomly removing edges with a given probability.

Arguments

  • g: The input graph from which edges will be removed.
  • edges_to_remove: Vector of edge indices to be removed. This argument is only required for the first method.
  • p: Probability of removing each edge. This argument is only required for the second method and defaults to 0.5.

Returns

A new GNNGraph with the specified edges removed.

Example

julia> using GNNGraphs
 
 # Construct a GNNGraph
@@ -143,7 +143,7 @@
 julia> g_new
 GNNGraph:
   num_nodes: 3
-  num_edges: 2
source
GNNGraphs.remove_multi_edgesMethod
remove_multi_edges(g::GNNGraph; aggr=+)

Remove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, which can take the values +, min, max, or mean.

See also remove_self_loops, has_multi_edges, and to_bidirected.

source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, p)

Returns a new graph obtained by dropping nodes from g with independent probabilities p.

Examples

julia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])
+  num_edges: 2
source
GNNGraphs.remove_multi_edgesMethod
remove_multi_edges(g::GNNGraph; aggr=+)

Remove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, which can take the values +, min, max, or mean.

See also remove_self_loops, has_multi_edges, and to_bidirected.

source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, p)

Returns a new graph obtained by dropping nodes from g with independent probabilities p.

Examples

julia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])
 GNNGraph:
   num_nodes: 4
   num_edges: 6
@@ -151,7 +151,7 @@
 julia> g_new = remove_nodes(g, 0.5)
 GNNGraph:
   num_nodes: 2
-  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)

Remove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.

Arguments

  • g: The input graph from which nodes (and their edges) will be removed.
  • nodes_to_remove: Vector of node indices to be removed.

Returns

A new GNNGraph with the specified nodes and all edges associated with these nodes removed.

Example

using GNNGraphs
+  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)

Remove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.

Arguments

  • g: The input graph from which nodes (and their edges) will be removed.
  • nodes_to_remove: Vector of node indices to be removed.

Returns

A new GNNGraph with the specified nodes and all edges associated with these nodes removed.

Example

using GNNGraphs
 
 g = GNNGraph([1, 1, 2, 2, 3], [2, 3, 1, 3, 1])
 
@@ -159,7 +159,7 @@
 g_new = remove_nodes(g, [2, 3])
 
 # g_new now does not contain nodes 2 and 3, and any edges that were connected to these nodes.
-println(g_new)
source
GNNGraphs.remove_self_loopsMethod
remove_self_loops(g::GNNGraph)

Return a graph constructed from g where self-loops (edges from a node to itself) are removed.

See also add_self_loops and remove_multi_edges.

source
GNNGraphs.set_edge_weightMethod
set_edge_weight(g::GNNGraph, w::AbstractVector)

Set w as edge weights in the returned graph.

source
GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
+println(g_new)
source
GNNGraphs.remove_self_loopsMethod
remove_self_loops(g::GNNGraph)

Return a graph constructed from g where self-loops (edges from a node to itself) are removed.

See also add_self_loops and remove_multi_edges.

source
GNNGraphs.set_edge_weightMethod
set_edge_weight(g::GNNGraph, w::AbstractVector)

Set w as edge weights in the returned graph.

source
GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
 julia> w = [1.0, 2.0, 3.0, 4.0, 5.0];
 
@@ -200,7 +200,7 @@
  20.0
  35.0
  35.0
- 50.0
source
GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each set of multiple edges between two nodes in g, keeps only one edge in a single direction.

source
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also MLUtils.unbatch.

Examples

julia> g1 = rand_graph(4, 4, ndata=ones(Float32, 3, 4))
+ 50.0
source
GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each set of multiple edges between two nodes in g, keeps only one edge in a single direction.

source
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also MLUtils.unbatch.

Examples

julia> g1 = rand_graph(4, 4, ndata=ones(Float32, 3, 4))
 GNNGraph:
   num_nodes: 4
   num_edges: 4
@@ -226,7 +226,7 @@
 3×9 Matrix{Float32}:
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
- 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.

See also MLUtils.batch and getgraph.

Examples

julia> using MLUtils
+ 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.

See also MLUtils.batch and getgraph.

Examples

julia> using MLUtils
 
 julia> gbatched = MLUtils.batch([rand_graph(5, 6), rand_graph(10, 8), rand_graph(4,2)])
 GNNGraph:
@@ -238,8 +238,8 @@
 3-element Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}:
  GNNGraph(5, 6) with no data
  GNNGraph(10, 8) with no data
- GNNGraph(4, 2) with no data
source
SparseArrays.blockdiagMethod
blockdiag(xs::GNNGraph...)

Equivalent to MLUtils.batch.

source

Utils

GNNGraphs.sort_edge_indexFunction
sort_edge_index(ei::Tuple) -> u', v'
-sort_edge_index(u, v) -> u', v'

Return a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi.

source
GNNGraphs.color_refinementFunction
color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters

The color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.

At each iteration the algorithm computes a hash of the coloring and the sorted list of colors of the neighbors of each node. This hash is used to determine if the coloring has changed.

\[x_i' = \mathrm{hashmap}\big(x_i,\ \mathrm{sort}([x_j \text{ for } j \in N(i)])\big)\]

This algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.

Arguments

  • g::GNNGraph: The graph to color.
  • x0::AbstractVector{<:Integer}: The initial coloring. If not provided, all nodes are colored with 1.

Returns

  • x::AbstractVector{<:Integer}: The final coloring.
  • num_colors::Int: The number of colors used.
  • niters::Int: The number of iterations until convergence.
source

Generate

GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
+ GNNGraph(4, 2) with no data
source
SparseArrays.blockdiagMethod
blockdiag(xs::GNNGraph...)

Equivalent to MLUtils.batch.

source

Utils

GNNGraphs.sort_edge_indexFunction
sort_edge_index(ei::Tuple) -> u', v'
+sort_edge_index(u, v) -> u', v'

Return a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi.

source
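A small illustrative example:

julia> using GNNGraphs

julia> u, v = [3, 1, 2, 1], [2, 3, 1, 1];

julia> sort_edge_index(u, v)  # pairs sorted as (1,1), (1,3), (2,1), (3,2)
([1, 1, 2, 3], [1, 3, 1, 2])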
GNNGraphs.color_refinementFunction
color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters

The color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.

At each iteration the algorithm computes a hash of the coloring and the sorted list of colors of the neighbors of each node. This hash is used to determine if the coloring has changed.

\[x_i' = \mathrm{hashmap}\big(x_i,\ \mathrm{sort}([x_j \text{ for } j \in N(i)])\big)\]

This algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.

Arguments

  • g::GNNGraph: The graph to color.
  • x0::AbstractVector{<:Integer}: The initial coloring. If not provided, all nodes are colored with 1.

Returns

  • x::AbstractVector{<:Integer}: The final coloring.
  • num_colors::Int: The number of colors used.
  • niters::Int: The number of iterations until convergence.
source
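A small illustrative usage sketch:

julia> using GNNGraphs

julia> g = rand_graph(10, 20);

julia> x, num_colors, niters = color_refinement(g);  # start from the all-ones coloring

julia> length(x) == g.num_nodes  # one final color per node
true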

Generate

GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
           k::Int; 
           graph_indicator = nothing,
           self_loops = false, 
@@ -259,7 +259,7 @@
 GNNGraph:
     num_nodes = 10
     num_edges = 30
-    num_graphs = 2
source
GNNGraphs.radius_graphMethod
radius_graph(points::AbstractMatrix, 
+    num_graphs = 2
source
GNNGraphs.radius_graphMethod
radius_graph(points::AbstractMatrix, 
              r::AbstractFloat; 
              graph_indicator = nothing,
              self_loops = false, 
@@ -279,7 +279,7 @@
 GNNGraph:
     num_nodes = 10
     num_edges = 20
-    num_graphs = 2

References

Section B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models

source
GNNGraphs.rand_graphMethod
rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)

Generate a random (Erdős-Rényi) GNNGraph with n nodes and m edges.

If bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Pass a random number generator as the first argument to make the generation reproducible.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
+    num_graphs = 2

References

Section B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models

source
GNNGraphs.rand_graphMethod
rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)

Generate a random (Erdős-Rényi) GNNGraph with n nodes and m edges.

If bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Pass a random number generator as the first argument to make the generation reproducible.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
 GNNGraph:
   num_nodes: 5
   num_edges: 4
@@ -297,7 +297,7 @@
 
 # Each edge has a reverse
 julia> edge_index(g)
-([1, 1, 5, 3], [5, 3, 1, 1])
source

Operators

Base.intersectFunction
intersect(g1::GNNGraph, g2::GNNGraph)

Intersect two graphs by keeping only the common edges.

source

Sampling

GNNGraphs.sample_neighborsFunction
sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)

Sample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the graph returned will then contain all the nodes in the original graph, but only the sampled edges.

The returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.

Arguments

  • g. The graph.
  • nodes. A list of node IDs to sample neighbors from.
  • K. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.
  • dir. Determines whether to sample inbound (:in) or outbound (:out) edges. Default :in.
  • replace. If true, sample with replacement.
  • dropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.

Examples

julia> g = rand_graph(20, 100)
+([1, 1, 5, 3], [5, 3, 1, 1])
source

Operators

Base.intersectFunction
intersect(g1::GNNGraph, g2::GNNGraph)

Intersect two graphs by keeping only the common edges.

source

Sampling

GNNGraphs.sample_neighborsFunction
sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)

Sample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the graph returned will then contain all the nodes in the original graph, but only the sampled edges.

The returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.

Arguments

  • g. The graph.
  • nodes. A list of node IDs to sample neighbors from.
  • K. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.
  • dir. Determines whether to sample inbound (:in) or outbound (:out) edges. Default :in.
  • replace. If true, sample with replacement.
  • dropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.

Examples

julia> g = rand_graph(20, 100)
 GNNGraph:
     num_nodes = 20
     num_edges = 100
@@ -336,7 +336,7 @@
     num_nodes = 20
     num_edges = 10
     edata:
-        EID => (10,)
source
Graphs.induced_subgraphMethod
induced_subgraph(graph, nodes)

Generates a subgraph from the original graph using the provided nodes. The function includes the nodes' neighbors and creates edges between nodes that are connected in the original graph. If a node has no neighbors, an isolated node will be added to the subgraph. Returns a new GNNGraph containing the subgraph with the specified nodes and their features.

Arguments

  • graph. The original GNNGraph containing nodes, edges, and node features.
  • nodes. A vector of node indices to include in the subgraph.

Examples

julia> s = [1, 2]
+        EID => (10,)
source
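As a further sketch (the node ids and the value of K are arbitrary), sampling at most two outbound edges per node looks like:

julia> sg = sample_neighbors(g, [1, 4, 5], 2; dir=:out);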
Graphs.induced_subgraphMethod
induced_subgraph(graph, nodes)

Generates a subgraph from the original graph using the provided nodes. The function includes the nodes' neighbors and creates edges between nodes that are connected in the original graph. If a node has no neighbors, an isolated node will be added to the subgraph. Returns a new GNNGraph containing the subgraph with the specified nodes and their features.

Arguments

  • graph. The original GNNGraph containing nodes, edges, and node features.
  • nodes. A vector of node indices to include in the subgraph.

Examples

julia> s = [1, 2]
 2-element Vector{Int64}:
  1
  2
@@ -369,4 +369,4 @@
         y = 2-element Vector{Float32}
         x = 32×2 Matrix{Float32}
   edata:
-        e = 1-element Vector{Float32}
source
\ No newline at end of file + e = 1-element Vector{Float32}source
\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/GNNGraphs/api/heterograph/index.html b/docs/GNNlib.jl/dev/GNNGraphs/api/heterograph/index.html index 7700e068e..031cbe9b8 100644 --- a/docs/GNNlib.jl/dev/GNNGraphs/api/heterograph/index.html +++ b/docs/GNNlib.jl/dev/GNNGraphs/api/heterograph/index.html @@ -39,7 +39,7 @@ julia> hg.ndata[:A].x 2×10 Matrix{Float64}: 0.825882 0.0797502 0.245813 0.142281 0.231253 0.685025 0.821457 0.888838 0.571347 0.53165 - 0.631286 0.316292 0.705325 0.239211 0.533007 0.249233 0.473736 0.595475 0.0623298 0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source

Query

GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :out)

Return a vector containing the degrees of the nodes of the GNNHeteroGraph g, computed over the edges of type edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :out.
source
Graphs.has_edgeMethod
has_edge(g::GNNHeteroGraph, edge_t, i, j)

Return true if there is an edge of type edge_t from node i to node j in g.

Examples

julia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)
+    0.631286  0.316292   0.705325  0.239211  0.533007  0.249233  0.473736  0.595475  0.0623298  0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
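For example, a hypothetical call keeping only the rating edges of a user/movie heterograph (the types are invented for illustration):

julia> g_rate = edge_type_subgraph(g, (:user, :rate, :movie));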
GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source

Query

GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :out)

Return a vector containing the degrees of the nodes of the GNNHeteroGraph g, computed over the edges of type edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :out.
source
Graphs.has_edgeMethod
has_edge(g::GNNHeteroGraph, edge_t, i, j)

Return true if there is an edge of type edge_t from node i to node j in g.

Examples

julia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)
 GNNHeteroGraph:
   num_nodes: Dict(:A => 2, :B => 2)
   num_edges: Dict((:A, :to, :B) => 4, (:B, :to, :A) => 0)
@@ -48,10 +48,10 @@
 true
 
 julia> has_edge(g, (:B,:to,:A), 1, 1)
-false
source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
+false
source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
 add_edges(g::GNNHeteroGraph, edge_t => (s, t); [edata, num_nodes])
-add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
-add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but with added self-loops of the specified type, edge_t. Otherwise, return g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, a self-loop is added to each node for every edge type in the graph whose source and destination node types are the same. That is, the function iterates over all edge types present in the graph and applies the self-loop addition to each applicable one.

source

Generate

GNNGraphs.rand_bipartite_heterographMethod
rand_bipartite_heterograph([rng,] 
+add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
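A minimal sketch (the :user/:movie types and the node indices are invented for illustration):

julia> g = add_edges(g, (:user, :rate, :movie) => ([1, 2], [3, 3]));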
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
+add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but with added self-loops of the specified type, edge_t. Otherwise, return g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, a self-loop is added to each node for every edge type in the graph whose source and destination node types are the same. That is, the function iterates over all edge types present in the graph and applies the self-loop addition to each applicable one.

source
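For instance, assuming a heterograph with an invented (:user, :follows, :user) edge type:

julia> g = add_self_loops(g, (:user, :follows, :user));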

Generate

GNNGraphs.rand_bipartite_heterographMethod
rand_bipartite_heterograph([rng,] 
                            (n1, n2), (m12, m21); 
                            bidirected = true, 
                            node_t = (:A, :B), 
@@ -64,8 +64,8 @@
 julia> g = rand_bipartite_heterograph((10, 15), (20, 0), node_t=(:user, :item), edge_t=:-, bidirected=false)
 GNNHeteroGraph:
   num_nodes: Dict(:item => 15, :user => 10)
-  num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)
source
GNNGraphs.rand_heterographFunction
rand_heterograph([rng,] n, m; bidirected=false, kws...)

Construct a GNNHeteroGraph with random edges and with number of nodes and edges specified by n and m respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.

Pass a random number generator as a first argument to make the generation reproducible.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
+  num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)
source
GNNGraphs.rand_heterographFunction
rand_heterograph([rng,] n, m; bidirected=false, kws...)

Construct a GNNHeteroGraph with random edges and with number of nodes and edges specified by n and m respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.

Pass a random number generator as a first argument to make the generation reproducible.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
                             (:user, :rate, :movie) => 30)
 GNNHeteroGraph:
   num_nodes: Dict(:movie => 20, :user => 10)
-  num_edges: Dict((:user, :rate, :movie) => 30)
source
\ No newline at end of file + num_edges: Dict((:user, :rate, :movie) => 30)source
\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/GNNGraphs/api/samplers/index.html b/docs/GNNlib.jl/dev/GNNGraphs/api/samplers/index.html index 19c9167a9..582f2e8fe 100644 --- a/docs/GNNlib.jl/dev/GNNGraphs/api/samplers/index.html +++ b/docs/GNNlib.jl/dev/GNNGraphs/api/samplers/index.html @@ -5,4 +5,4 @@ julia> for mini_batch_gnn in loader batch_counter += 1 println("Batch ", batch_counter, ": Nodes in mini-batch graph: ", nv(mini_batch_gnn)) - endsource
\ No newline at end of file + endsource
\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/GNNGraphs/api/temporalgraph/index.html b/docs/GNNlib.jl/dev/GNNGraphs/api/temporalgraph/index.html index f0f0683a2..15da19d4b 100644 --- a/docs/GNNlib.jl/dev/GNNGraphs/api/temporalgraph/index.html +++ b/docs/GNNlib.jl/dev/GNNGraphs/api/temporalgraph/index.html @@ -16,7 +16,7 @@ num_edges: [20, 20, 20, 20, 20] num_snapshots: 5 tgdata: - x = 4-element Vector{Float64}source
GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GNNGraphs
+        x = 4-element Vector{Float64}
source
GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GNNGraphs
 
 julia> snapshots = [rand_graph(10, 20) for i in 1:5];
 
@@ -30,7 +30,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10, 10]
   num_edges: [20, 20, 16, 20, 20, 20]
-  num_snapshots: 6
source
GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GNNGraphs
+  num_snapshots: 6
source
GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GNNGraphs
 
 julia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];
 
@@ -44,7 +44,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10]
   num_edges: [20, 22]
-  num_snapshots: 2
source

Random Generators

GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
+  num_snapshots: 2
source

Random Generators

GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
                            number_snapshots::Int,
                            speed::AbstractFloat,
                            r::AbstractFloat;
@@ -56,7 +56,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [90, 90, 90, 90, 90]
-  num_snapshots: 5
source
GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
+  num_snapshots: 5
source
GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
                                number_snapshots::Int;
                                α::Real,
                                R::Real,
@@ -69,4 +69,4 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [44, 46, 48, 42, 38]
-  num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source
\ No newline at end of file + num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source
\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/GNNGraphs/guides/datasets/index.html b/docs/GNNlib.jl/dev/GNNGraphs/guides/datasets/index.html index 94da1fa1b..251499b12 100644 --- a/docs/GNNlib.jl/dev/GNNGraphs/guides/datasets/index.html +++ b/docs/GNNlib.jl/dev/GNNGraphs/guides/datasets/index.html @@ -1 +1 @@ -Datasets · GNNlib.jl

Datasets

GNNGraphs.jl doesn't come with its own datasets, but leverages those available in the Julia (and non-Julia) ecosystem. In particular, the examples in the GraphNeuralNetworks.jl repository make use of the MLDatasets.jl package. There you will find common graph datasets such as Cora, PubMed, Citeseer, TUDataset and many others. For graphs with static structures and temporal features, datasets such as METRLA, PEMSBAY, ChickenPox, and WindMillEnergy are available. For graphs featuring both temporal structures and temporal features, the TemporalBrains dataset is suitable.

GraphNeuralNetworks.jl provides the mldataset2gnngraph method for interfacing with MLDatasets.jl.

\ No newline at end of file +Datasets · GNNlib.jl

Datasets

GNNGraphs.jl doesn't come with its own datasets, but leverages those available in the Julia (and non-Julia) ecosystem. In particular, the examples in the GraphNeuralNetworks.jl repository make use of the MLDatasets.jl package. There you will find common graph datasets such as Cora, PubMed, Citeseer, TUDataset and many others. For graphs with static structures and temporal features, datasets such as METRLA, PEMSBAY, ChickenPox, and WindMillEnergy are available. For graphs featuring both temporal structures and temporal features, the TemporalBrains dataset is suitable.

GraphNeuralNetworks.jl provides the mldataset2gnngraph method for interfacing with MLDatasets.jl.
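A minimal sketch of that interface, assuming MLDatasets.jl is installed:

julia> using MLDatasets, GNNGraphs

julia> g = mldataset2gnngraph(Cora());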

\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/GNNGraphs/guides/gnngraph/index.html b/docs/GNNlib.jl/dev/GNNGraphs/guides/gnngraph/index.html index de53ed5ef..cc660e734 100644 --- a/docs/GNNlib.jl/dev/GNNGraphs/guides/gnngraph/index.html +++ b/docs/GNNlib.jl/dev/GNNGraphs/guides/gnngraph/index.html @@ -165,4 +165,4 @@ julia> GNNGraph(gd) GNNGraph: num_nodes: 10 - num_edges: 20 \ No newline at end of file + num_edges: 20 \ No newline at end of file diff --git a/docs/GNNlib.jl/dev/GNNGraphs/guides/heterograph/index.html b/docs/GNNlib.jl/dev/GNNGraphs/guides/heterograph/index.html index 7d59b78d1..052012cb6 100644 --- a/docs/GNNlib.jl/dev/GNNGraphs/guides/heterograph/index.html +++ b/docs/GNNlib.jl/dev/GNNGraphs/guides/heterograph/index.html @@ -79,4 +79,4 @@ @assert g.num_nodes[:A] == 80 @assert size(g.ndata[:A].x) == (3, 80) # ... -end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.

\ No newline at end of file +end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.

\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/GNNGraphs/guides/temporalgraph/index.html b/docs/GNNlib.jl/dev/GNNGraphs/guides/temporalgraph/index.html index 4d73f36aa..92641a0fc 100644 --- a/docs/GNNlib.jl/dev/GNNGraphs/guides/temporalgraph/index.html +++ b/docs/GNNlib.jl/dev/GNNGraphs/guides/temporalgraph/index.html @@ -86,4 +86,4 @@ julia> [ds.x for ds in tg.ndata]; # vector containing the x feature of each snapshot julia> [g.x for g in tg.snapshots]; # same vector as above, now accessing - # the x feature directly from the snapshots \ No newline at end of file + # the x feature directly from the snapshots \ No newline at end of file diff --git a/docs/GNNlib.jl/dev/GNNGraphs/index.html b/docs/GNNlib.jl/dev/GNNGraphs/index.html index 8ac2a631c..f497ac70a 100644 --- a/docs/GNNlib.jl/dev/GNNGraphs/index.html +++ b/docs/GNNlib.jl/dev/GNNGraphs/index.html @@ -1 +1 @@ -GNNGraphs.jl · GNNlib.jl

GNNGraphs.jl

GNNGraphs.jl is a package that provides graph data structures and helper functions specifically designed for working with graph neural networks. It allows storing not only the graph structure, but also features associated with nodes, edges, and the graph itself. It is the core foundation for the GNNlib.jl, GraphNeuralNetworks.jl, and GNNLux.jl packages.

It supports three types of graphs:

  • Static graph is the basic graph type represented by GNNGraph, where each node and edge can have associated features. This type of graph is used in typical graph neural network applications, where neural networks operate on both the structure of the graph and the features stored in it. It can be used to represent a graph where the structure does not change over time, but the features of the nodes and edges can change over time.

  • Heterogeneous graph is a graph that supports multiple types of nodes and edges, and is represented by GNNHeteroGraph. Each type can have its own properties and relationships. This is useful in scenarios with different entities and interactions, such as in citation graphs or multi-relational data.

  • Temporal graph is a graph that changes over time, and is represented by TemporalSnapshotsGNNGraph. Edges and features can change dynamically. This type of graph is useful for applications that involve tracking time-dependent relationships, such as social networks.

This package depends on the package Graphs.jl.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNGraphs
\ No newline at end of file +GNNGraphs.jl · GNNlib.jl

GNNGraphs.jl

GNNGraphs.jl is a package that provides graph data structures and helper functions specifically designed for working with graph neural networks. It allows storing not only the graph structure, but also features associated with nodes, edges, and the graph itself. It is the core foundation for the GNNlib.jl, GraphNeuralNetworks.jl, and GNNLux.jl packages.

It supports three types of graphs:

  • Static graph is the basic graph type represented by GNNGraph, where each node and edge can have associated features. This type of graph is used in typical graph neural network applications, where neural networks operate on both the structure of the graph and the features stored in it. It can be used to represent a graph where the structure does not change over time, but the features of the nodes and edges can change over time.

  • Heterogeneous graph is a graph that supports multiple types of nodes and edges, and is represented by GNNHeteroGraph. Each type can have its own properties and relationships. This is useful in scenarios with different entities and interactions, such as in citation graphs or multi-relational data.

  • Temporal graph is a graph that changes over time, and is represented by TemporalSnapshotsGNNGraph. Edges and features can change dynamically. This type of graph is useful for applications that involve tracking time-dependent relationships, such as social networks.

This package depends on the package Graphs.jl.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNGraphs
\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/api/messagepassing/index.html b/docs/GNNlib.jl/dev/api/messagepassing/index.html index fde70bfda..c01afcbe9 100644 --- a/docs/GNNlib.jl/dev/api/messagepassing/index.html +++ b/docs/GNNlib.jl/dev/api/messagepassing/index.html @@ -1,5 +1,5 @@ Message Passing · GNNlib.jl

Message Passing

Interface

GNNlib.apply_edgesFunction
apply_edges(fmsg, g; [xi, xj, e])
-apply_edges(fmsg, g, xi, xj, e=nothing)

Returns the message from node j to node i applying the message function fmsg on the edges in graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).

The function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.

Arguments

  • g: An AbstractGNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but now to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension's size is the size of a batch of edges. The output of fmsg has to be an array (or a named tuple of arrays) with the same batch size. If layer is also passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).

See also propagate and aggregate_neighbors.

source
GNNlib.aggregate_neighborsFunction
aggregate_neighbors(g, aggr, m)

Given a graph g, edge features m, and an aggregation operator aggr (e.g. +, min, max, mean), returns the new node features

\[\mathbf{x}_i = \square_{j \in \mathcal{N}(i)} \mathbf{m}_{j\to i}\]

Neighborhood aggregation is the second step of propagate, where it comes after apply_edges.

source
GNNlib.propagateFunction
propagate(fmsg, g, aggr; [xi, xj, e])
+apply_edges(fmsg, g, xi, xj, e=nothing)

Returns the message from node j to node i applying the message function fmsg on the edges in graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).

The function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.

Arguments

  • g: An AbstractGNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but now to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension's size is the size of a batch of edges. The output of fmsg has to be an array (or a named tuple of arrays) with the same batch size. If layer is also passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).

See also propagate and aggregate_neighbors.

source
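A small sketch, assuming g is a GNNGraph and x a feature matrix whose last dimension equals g.num_nodes:

julia> x = rand(Float32, 4, g.num_nodes);

julia> m = apply_edges((xi, xj, e) -> xi .+ xj, g; xi = x, xj = x);  # one column per edge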
GNNlib.aggregate_neighborsFunction
aggregate_neighbors(g, aggr, m)

Given a graph g, edge features m, and an aggregation operator aggr (e.g. +, min, max, mean), returns the new node features

\[\mathbf{x}_i = \square_{j \in \mathcal{N}(i)} \mathbf{m}_{j\to i}\]

Neighborhood aggregation is the second step of propagate, where it comes after apply_edges.

source
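Continuing the sketch under apply_edges above, the per-edge messages m can then be aggregated node-wise:

julia> m̄ = aggregate_neighbors(g, +, m);  # sums the messages over each node's neighborhood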
GNNlib.propagateFunction
propagate(fmsg, g, aggr; [xi, xj, e])
 propagate(fmsg, g, aggr, xi, xj, e=nothing)

Performs message passing on graph g. Takes care of materializing the node features on each edge, applying the message function fmsg, and returning an aggregated message $\bar{\mathbf{m}}$ (depending on the return value of fmsg, an array or a named tuple of arrays with last dimension's size g.num_nodes).

It can be decomposed in two steps:

m = apply_edges(fmsg, g, xi, xj, e)
 m̄ = aggregate_neighbors(g, aggr, m)

GNN layers typically call propagate in their forward pass, providing as input fmsg a closure.

Arguments

  • g: A GNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A generic function that will be passed over to apply_edges. Has to take as inputs the edge-materialized xi, xj, and e (arrays or named tuples of arrays whose last dimension's size is the size of a batch of edges). Its output has to be an array or a named tuple of arrays with the same batch size. If layer is also passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).
  • aggr: Neighborhood aggregation operator. Use +, mean, max, or min.

Examples

using GraphNeuralNetworks, Flux
 
@@ -25,4 +25,4 @@
 end
 
 l = GNNConv(10 => 20)
-l(g, x)

See also apply_edges and aggregate_neighbors.

source

Built-in message functions

GNNlib.e_mul_xjFunction
e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj

Reshape e into a broadcast compatible shape with xj (by prepending singleton dimensions) then perform broadcasted multiplication.

source
GNNlib.w_mul_xjFunction
w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj

Similar to e_mul_xj but specialized on scalar edge features (weights).

source
\ No newline at end of file +l(g, x)

See also apply_edges and aggregate_neighbors.

source

Built-in message functions

GNNlib.copy_xiFunction
copy_xi(xi, xj, e) = xi
source
GNNlib.copy_xjFunction
copy_xj(xi, xj, e) = xj
source
GNNlib.xi_dot_xjFunction
xi_dot_xj(xi, xj, e) = sum(xi .* xj, dims=1)
source
GNNlib.xi_sub_xjFunction
xi_sub_xj(xi, xj, e) = xi .- xj
source
GNNlib.xj_sub_xiFunction
xj_sub_xi(xi, xj, e) = xj .- xi
source
GNNlib.e_mul_xjFunction
e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj

Reshape e into a broadcast compatible shape with xj (by prepending singleton dimensions) then perform broadcasted multiplication.

source
GNNlib.w_mul_xjFunction
w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj

Similar to e_mul_xj but specialized on scalar edge features (weights).

source
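For instance, a weighted neighborhood aggregation can be sketched as follows (w is an assumed edge-weight vector of length g.num_edges, and x a node feature matrix):

julia> x̄ = propagate(w_mul_xj, g, +; xj = x, e = w);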
\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/api/utils/index.html b/docs/GNNlib.jl/dev/api/utils/index.html index 07ecca638..8c057bf62 100644 --- a/docs/GNNlib.jl/dev/api/utils/index.html +++ b/docs/GNNlib.jl/dev/api/utils/index.html @@ -1,2 +1,2 @@ -Utils · GNNlib.jl

Utility Functions

Graph-wise operations

GNNlib.reduce_nodesFunction
reduce_nodes(aggr, g, x)

For a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

See also: reduce_edges.

source
reduce_nodes(aggr, indicator::AbstractVector, x)

Return the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.

See also graph_indicator.

source
GNNlib.reduce_edgesFunction
reduce_edges(aggr, g, e)

For a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

source
GNNlib.broadcast_nodesFunction
broadcast_nodes(g, x)

Graph-wise broadcast of the array x of size (*, g.num_graphs) to size (*, g.num_nodes).

source
GNNlib.broadcast_edgesFunction
broadcast_edges(g, x)

Graph-wise broadcast of the array x of size (*, g.num_graphs) to size (*, g.num_edges).

source

Neighborhood operations

GNNlib.softmax_edge_neighborsFunction
softmax_edge_neighbors(g, e)

Softmax over each node's neighborhood of the edge features e.

\[\mathbf{e}'_{j\to i} = \frac{e^{\mathbf{e}_{j\to i}}} - {\sum_{j'\in N(i)} e^{\mathbf{e}_{j'\to i}}}.\]

source

NNlib's gather and scatter functions

Primitive functions for message passing implemented in NNlib.jl:

\ No newline at end of file +Utils · GNNlib.jl

Utility Functions

Graph-wise operations

GNNlib.reduce_nodesFunction
reduce_nodes(aggr, g, x)

For a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

See also: reduce_edges.

source
reduce_nodes(aggr, indicator::AbstractVector, x)

Return the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.

See also graph_indicator.

source
GNNlib.reduce_edgesFunction
reduce_edges(aggr, g, e)

For a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

source
GNNlib.broadcast_nodesFunction
broadcast_nodes(g, x)

Graph-wise broadcast of the array x of size (*, g.num_graphs) to size (*, g.num_nodes).

source
GNNlib.broadcast_edgesFunction
broadcast_edges(g, x)

Graph-wise broadcast of the array x of size (*, g.num_graphs) to size (*, g.num_edges).

source
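A sketch combining the operations above (g an assumed batched graph with node features x):

julia> using Statistics: mean

julia> x̄ = reduce_nodes(mean, g, x);   # one column per graph: size (*, g.num_graphs)

julia> xb = broadcast_nodes(g, x̄);     # back to size (*, g.num_nodes)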

Neighborhood operations

GNNlib.softmax_edge_neighborsFunction
softmax_edge_neighbors(g, e)

Softmax over each node's neighborhood of the edge features e.

\[\mathbf{e}'_{j\to i} = \frac{e^{\mathbf{e}_{j\to i}}} + {\sum_{j'\in N(i)} e^{\mathbf{e}_{j'\to i}}}.\]

source
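As a sketch, attention-style coefficients can be obtained from raw edge scores e of assumed size (1, g.num_edges):

julia> α = softmax_edge_neighbors(g, e);  # each node's incoming coefficients sum to 1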

NNlib's gather and scatter functions

Primitive functions for message passing implemented in NNlib.jl:

\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/guides/messagepassing/index.html b/docs/GNNlib.jl/dev/guides/messagepassing/index.html index 576692fd3..4278b8535 100644 --- a/docs/GNNlib.jl/dev/guides/messagepassing/index.html +++ b/docs/GNNlib.jl/dev/guides/messagepassing/index.html @@ -75,4 +75,4 @@ x = propagate(message, g, +, xj=x) return l.σ.(l.weight * x .+ l.bias) -end

See the GATConv implementation here for a more complex example.

Built-in message functions

In order to exploit optimized specializations of the propagate, it is recommended to use built-in message functions such as copy_xj whenever possible.

\ No newline at end of file +end

See the GATConv implementation here for a more complex example.

Built-in message functions

In order to exploit optimized specializations of the propagate, it is recommended to use built-in message functions such as copy_xj whenever possible.
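For example, summing the neighbors' features is better expressed with the built-in copy_xj than with an equivalent anonymous closure:

julia> x̄ = propagate(copy_xj, g, +; xj = x);  # instead of propagate((xi, xj, e) -> xj, g, +; xj = x)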

\ No newline at end of file diff --git a/docs/GNNlib.jl/dev/index.html b/docs/GNNlib.jl/dev/index.html index a908b74cd..a5148ad96 100644 --- a/docs/GNNlib.jl/dev/index.html +++ b/docs/GNNlib.jl/dev/index.html @@ -1 +1 @@ -Home · GNNlib.jl

GNNlib.jl

GNNlib.jl is a package that provides the basic message passing functions and the functional implementation of graph convolutional layers. These are used to build graph neural networks in both the Flux.jl and Lux.jl machine learning frameworks, through the GraphNeuralNetworks.jl and GNNLux.jl packages respectively.

This package depends on GNNGraphs.jl and NNlib.jl, and is primarily intended for developers looking to create new GNN architectures. For most users, the higher-level GraphNeuralNetworks.jl and GNNLux.jl packages are recommended.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNlib
\ No newline at end of file +Home · GNNlib.jl

GNNlib.jl

GNNlib.jl is a package that provides the basic message passing functions and the functional implementation of graph convolutional layers. These are used to build graph neural networks in both the Flux.jl and Lux.jl machine learning frameworks, through the GraphNeuralNetworks.jl and GNNLux.jl packages respectively.

This package depends on GNNGraphs.jl and NNlib.jl, and is primarily intended for developers looking to create new GNN architectures. For most users, the higher-level GraphNeuralNetworks.jl and GNNLux.jl packages are recommended.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNlib
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/.documenter-siteinfo.json b/docs/GraphNeuralNetworks.jl/dev/.documenter-siteinfo.json index 43cd2052e..1901beb56 100644 --- a/docs/GraphNeuralNetworks.jl/dev/.documenter-siteinfo.json +++ b/docs/GraphNeuralNetworks.jl/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-09T09:33:07","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-09T10:24:56","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/datasets/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/datasets/index.html index 985b346a6..78923623b 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/datasets/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/datasets/index.html @@ -9,4 +9,4 @@ targets = 2708-element Vector{Int64} test_mask = 2708-element BitVector features = 1433×2708 Matrix{Float32} - train_mask = 2708-element BitVectorsource
\ No newline at end of file + train_mask = 2708-element BitVectorsource
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/gnngraph/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/gnngraph/index.html index d14230bb3..d083d7621 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/gnngraph/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/gnngraph/index.html @@ -32,7 +32,7 @@ # Collect edges' source and target nodes. # Both source and target are vectors of length num_edges -source, target = edge_index(g)

A GNNGraph can be sent to the GPU, for example by using Flux.jl's gpu function or MLDataDevices.jl's utilities.

source
Base.copyFunction
copy(g::GNNGraph; deep=false)

Create a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.

source

DataStore

GNNGraphs.DataStoreType
DataStore([n, data])
+source, target = edge_index(g)

A GNNGraph can be sent to the GPU, for example by using Flux.jl's gpu function or MLDataDevices.jl's utilities.

source
Base.copyFunction
copy(g::GNNGraph; deep=false)

Create a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.

source

DataStore

GNNGraphs.DataStoreType
DataStore([n, data])
 DataStore([n,] k1 = x1, k2 = x2, ...)

A container for feature arrays. The optional argument n enforces that numobs(x) == n for each array contained in the datastore.

At construction time, the data can be provided as any iterables of pairs of symbols and arrays or as keyword arguments:

julia> ds = DataStore(3, x = rand(Float32, 2, 3), y = rand(Float32, 3))
 DataStore(3) with 2 elements:
   y = 3-element Vector{Float32}
@@ -62,8 +62,8 @@
 3-element Vector{Float32}:
  1.0
  1.0
- 1.0
source

Query

GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
-adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

Calling the returned adjacency list a: if dir=:out, then a[i] will contain the neighbors of node i through outgoing edges; if dir=:in, it will contain the neighbors from incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g.

s, t = edge_index(g)
source
GNNGraphs.get_graph_typeMethod
get_graph_type(g::GNNGraph)

Return the underlying representation for the graph g as a symbol.

Possible values are:

  • :coo: Coordinate list representation. The graph is stored as a tuple of vectors (s, t, w), where s and t are the source and target nodes of the edges, and w is the edge weights.
  • :sparse: Sparse matrix representation. The graph is stored as a sparse matrix representing the weighted adjacency matrix.
  • :dense: Dense matrix representation. The graph is stored as a dense matrix representing the weighted adjacency matrix.

The default representation for graph constructors in GNNGraphs.jl is :coo. The underlying representation can be accessed through the g.graph field.

See also GNNGraph.

Examples

The default representation for graph constructors in GNNGraphs.jl is :coo.

julia> g = rand_graph(5, 10)
+ 1.0
source

Query

GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
+adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

Calling the returned adjacency list a: if dir=:out, then a[i] will contain the neighbors of node i through outgoing edges; if dir=:in, it will contain the neighbors from incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g.

s, t = edge_index(g)
source
GNNGraphs.get_graph_typeMethod
get_graph_type(g::GNNGraph)

Return the underlying representation for the graph g as a symbol.

Possible values are:

  • :coo: Coordinate list representation. The graph is stored as a tuple of vectors (s, t, w), where s and t are the source and target nodes of the edges, and w is the edge weights.
  • :sparse: Sparse matrix representation. The graph is stored as a sparse matrix representing the weighted adjacency matrix.
  • :dense: Dense matrix representation. The graph is stored as a dense matrix representing the weighted adjacency matrix.

The default representation for graph constructors in GNNGraphs.jl is :coo. The underlying representation can be accessed through the g.graph field.

See also GNNGraph.

Examples

The default representation for graph constructors in GNNGraphs.jl is :coo.

julia> g = rand_graph(5, 10)
 GNNGraph:
   num_nodes: 5
   num_edges: 10
@@ -88,7 +88,7 @@
 julia> gcoo = GNNGraph(g, graph_type=:coo);
 
 julia> gcoo.graph
-([2, 3, 5], [1, 2, 4], [1, 1, 1])
source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
GNNGraphs.has_multi_edgesMethod
has_multi_edges(g::GNNGraph)

Return true if g has any multiple edges.

source
GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$ where $A$ is the adjacency matrix of the graph g.

source
GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

User may specify the eltype T of the returned matrix.

If weighted=true, A will contain the edge weights if any; otherwise, the elements of A will be either 0 or 1.

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
Graphs.has_self_loopsMethod
has_self_loops(g::GNNGraph)

Return true if g has any self loops.

source
Graphs.inneighborsMethod
inneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through incoming edges.

See also neighbors and outneighbors.

source
Graphs.outneighborsMethod
outneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through outgoing edges.

See also neighbors and inneighbors.

source
Graphs.neighborsMethod
neighbors(g::GNNGraph, i::Integer; dir=:out)

Return the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.

See also outneighbors, inneighbors.

source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])
+([2, 3, 5], [1, 2, 4], [1, 1, 1])
source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
GNNGraphs.has_multi_edgesMethod
has_multi_edges(g::GNNGraph)

Return true if g has any multiple edges.

source
GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$ where $A$ is the adjacency matrix of the graph g.

source
GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

User may specify the eltype T of the returned matrix.

If weighted=true, A will contain the edge weights if any; otherwise, the elements of A will be either 0 or 1.

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
Graphs.has_self_loopsMethod
has_self_loops(g::GNNGraph)

Return true if g has any self loops.

source
Graphs.inneighborsMethod
inneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through incoming edges.

See also neighbors and outneighbors.

source
Graphs.outneighborsMethod
outneighbors(g::GNNGraph, i::Integer)

Return the neighbors of node i in the graph g through outgoing edges.

See also neighbors and inneighbors.

source
Graphs.neighborsMethod
neighbors(g::GNNGraph, i::Integer; dir=:out)

Return the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.

See also outneighbors, inneighbors.

source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])
 add_edges(g::GNNGraph, (s, t); [edata])
 add_edges(g::GNNGraph, (s, t, w); [edata])

Add to graph g the edges with source nodes s and target nodes t. Optionally, pass the edge weight w and the features edata for the new edges. Returns a new graph sharing part of the underlying data with g.

If s or t contain nodes that are not already present in the graph, they are added to the graph as well.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
@@ -110,9 +110,9 @@
 julia> add_edges(g, [1,2], [2,3])
 GNNGraph:
   num_nodes: 3
-  num_edges: 2
source
GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g, but with additional edges connecting each node to itself.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
+  num_edges: 2
source
GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g, but with additional edges connecting each node to itself.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
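A sketch on an assumed batched graph g (nmap=true also recovers the original node ids):

julia> g1, v = getgraph(g, 1; nmap = true);  # first component graph and its node mapping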
GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
                 num_neg_edges = g.num_edges, 
-                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the original graph.

See also is_bidirected.

source
GNNGraphs.perturb_edgesMethod
perturb_edges([rng], g::GNNGraph, perturb_ratio)

Return a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops.

The function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.

Arguments

  • g::GNNGraph: The graph to be perturbed.
  • perturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.
  • rng: An optional random number generator to ensure reproducible results.

Examples

julia> g = GNNGraph((s, t, w))
+                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the original graph.

See also is_bidirected.

source
GNNGraphs.perturb_edgesMethod
perturb_edges([rng], g::GNNGraph, perturb_ratio)

Return a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops.

The function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.

Arguments

  • g::GNNGraph: The graph to be perturbed.
  • perturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.
  • rng: An optional random number generator to ensure reproducible results.

Examples

julia> g = GNNGraph((s, t, w))
 GNNGraph:
   num_nodes: 4
   num_edges: 5
@@ -120,7 +120,7 @@
 julia> perturbed_g = perturb_edges(g, 0.2)
 GNNGraph:
   num_nodes: 4
-  num_edges: 6
source
GNNGraphs.ppr_diffusionMethod
ppr_diffusion(g::GNNGraph{<:COO_T}, alpha = 0.85f0) -> GNNGraph

Calculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. Reference paper: The PageRank citation ranking: Bringing order to the web

The function performs the following steps:

  1. Constructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha) and I the identity matrix.
  2. Normalizes A to ensure each column sums to 1, representing transition probabilities.
  3. Applies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.
  4. Updates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.

Arguments

  • g::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.
  • alpha::Float32: The damping factor used in the PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.

Returns

  • A new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.
source
GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true makes sure that an edge and its reverse go into the same split. This option is supported only for bidirected graphs with no self-loops and multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
GNNGraphs.random_walk_peMethod
random_walk_pe(g, walk_length)

Return the random walk positional encoding from the paper Graph Neural Networks with Learnable Structural and Positional Representations of the given graph g and the length of the walk walk_length as a matrix of size (walk_length, g.num_nodes).

source
GNNGraphs.remove_edgesMethod
remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})
+  num_edges: 6
source
GNNGraphs.ppr_diffusionMethod
ppr_diffusion(g::GNNGraph{<:COO_T}, alpha = 0.85f0) -> GNNGraph

Calculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. References paper: The pagerank citation ranking: Bringing order to the web

The function performs the following steps:

  1. Constructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha) and I the identity matrix.
  2. Normalizes A to ensure each column sums to 1, representing transition probabilities.
  3. Applies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.
  4. Updates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.

Arguments

  • g::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.
  • alpha::Float32: The damping factor used in the PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.

Returns

  • A new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.
source
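A minimal sketch of typical usage, assuming get_edge_weight is the standard GNNGraphs accessor for edge weights:

julia> s, t, w = [1, 2, 3], [2, 3, 1], [1.0f0, 2.0f0, 3.0f0];

julia> g = GNNGraph((s, t, w));

julia> g_ppr = ppr_diffusion(g);   # default damping factor 0.85f0

julia> get_edge_weight(g_ppr);     # new weights derived from the PPR matrix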
GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true makes sure that an edge and its reverse go into the same split. This option is supported only for bidirected graphs with no self-loops and multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
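For example, a sketch of a 90/10 train/test split for link prediction:

julia> g = rand_graph(10, 30);   # bidirected by default

julia> g_train, g_test = rand_edge_split(g, 0.9);

julia> g_train.num_nodes == g_test.num_nodes == g.num_nodes
true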
GNNGraphs.random_walk_peMethod
random_walk_pe(g, walk_length)

Return the random walk positional encoding from the paper Graph Neural Networks with Learnable Structural and Positional Representations of the given graph g and the length of the walk walk_length as a matrix of size (walk_length, g.num_nodes).

source
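A quick sketch; the output size follows directly from the description above:

julia> g = rand_graph(5, 10);

julia> pe = random_walk_pe(g, 3);

julia> size(pe)   # (walk_length, g.num_nodes)
(3, 5)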
GNNGraphs.remove_edgesMethod
remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})
 remove_edges(g::GNNGraph, p=0.5)

Remove specified edges from a GNNGraph, either by specifying edge indices or by randomly removing edges with a given probability.

Arguments

  • g: The input graph from which edges will be removed.
  • edges_to_remove: Vector of edge indices to be removed. This argument is only required for the first method.
  • p: Probability of removing each edge. This argument is only required for the second method and defaults to 0.5.

Returns

A new GNNGraph with the specified edges removed.

Example

julia> using GNNGraphs
 
 # Construct a GNNGraph
@@ -143,7 +143,7 @@
 julia> g_new
 GNNGraph:
   num_nodes: 3
-  num_edges: 2
source
GNNGraphs.remove_multi_edgesMethod
remove_multi_edges(g::GNNGraph; aggr=+)

Remove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, which can take the values +, min, max, or mean.

See also remove_self_loops, has_multi_edges, and to_bidirected.

source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, p)

Returns a new graph obtained by dropping nodes from g, each with independent probability p.

Examples

julia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])
+  num_edges: 2
source
GNNGraphs.remove_multi_edgesMethod
remove_multi_edges(g::GNNGraph; aggr=+)

Remove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, which can take the values +, min, max, or mean.

See also remove_self_loops, has_multi_edges, and to_bidirected.

source
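A minimal sketch (the duplicated edge 1→2 is collapsed into a single edge):

julia> g = GNNGraph([1, 1, 2], [2, 2, 3]);   # edge 1→2 appears twice

julia> remove_multi_edges(g).num_edges
2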
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, p)

Returns a new graph obtained by dropping nodes from g, each with independent probability p.

Examples

julia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])
 GNNGraph:
   num_nodes: 4
   num_edges: 6
@@ -151,7 +151,7 @@
 julia> g_new = remove_nodes(g, 0.5)
 GNNGraph:
   num_nodes: 2
-  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)

Remove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.

Arguments

  • g: The input graph from which nodes (and their edges) will be removed.
  • nodes_to_remove: Vector of node indices to be removed.

Returns

A new GNNGraph with the specified nodes and all edges associated with these nodes removed.

Example

using GNNGraphs
+  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)

Remove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.

Arguments

  • g: The input graph from which nodes (and their edges) will be removed.
  • nodes_to_remove: Vector of node indices to be removed.

Returns

A new GNNGraph with the specified nodes and all edges associated with these nodes removed.

Example

using GNNGraphs
 
 g = GNNGraph([1, 1, 2, 2, 3], [2, 3, 1, 3, 1])
 
@@ -159,7 +159,7 @@
 g_new = remove_nodes(g, [2, 3])
 
 # g_new now does not contain nodes 2 and 3, and any edges that were connected to these nodes.
-println(g_new)
source
GNNGraphs.remove_self_loopsMethod
remove_self_loops(g::GNNGraph)

Return a graph constructed from g where self-loops (edges from a node to itself) are removed.

See also add_self_loops and remove_multi_edges.

source
GNNGraphs.set_edge_weightMethod
set_edge_weight(g::GNNGraph, w::AbstractVector)

Set w as edge weights in the returned graph.

source
GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
+println(g_new)
source
GNNGraphs.remove_self_loopsMethod
remove_self_loops(g::GNNGraph)

Return a graph constructed from g where self-loops (edges from a node to itself) are removed.

See also add_self_loops and remove_multi_edges.

source
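A minimal sketch:

julia> g = GNNGraph([1, 2, 2], [1, 3, 2]);   # 1→1 and 2→2 are self-loops

julia> remove_self_loops(g).num_edges
1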
GNNGraphs.set_edge_weightMethod
set_edge_weight(g::GNNGraph, w::AbstractVector)

Set w as edge weights in the returned graph.

source
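A minimal sketch, assigning one weight per edge:

julia> g = rand_graph(4, 6);

julia> g = set_edge_weight(g, rand(Float32, 6));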
GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
 julia> w = [1.0, 2.0, 3.0, 4.0, 5.0];
 
@@ -200,7 +200,7 @@
  20.0
  35.0
  35.0
- 50.0
source
GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each multiple edge between two nodes in g, keeps only an edge in one direction.

source
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also MLUtils.unbatch.

Examples

julia> g1 = rand_graph(4, 4, ndata=ones(Float32, 3, 4))
+ 50.0
source
GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each multiple edge between two nodes in g, keeps only an edge in one direction.

source
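A minimal sketch; with every edge also present in reverse, one direction per node pair is expected to survive:

julia> s, t = [1, 2, 2, 3], [2, 1, 3, 2];

julia> to_unidirected(GNNGraph(s, t)).num_edges
2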
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also MLUtils.unbatch.

Examples

julia> g1 = rand_graph(4, 4, ndata=ones(Float32, 3, 4))
 GNNGraph:
   num_nodes: 4
   num_edges: 4
@@ -226,7 +226,7 @@
 3×9 Matrix{Float32}:
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
- 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.

See also MLUtils.batch and getgraph.

Examples

julia> using MLUtils
+ 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.

See also MLUtils.batch and getgraph.

Examples

julia> using MLUtils
 
 julia> gbatched = MLUtils.batch([rand_graph(5, 6), rand_graph(10, 8), rand_graph(4,2)])
 GNNGraph:
@@ -238,8 +238,8 @@
 3-element Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}:
  GNNGraph(5, 6) with no data
  GNNGraph(10, 8) with no data
- GNNGraph(4, 2) with no data
source
SparseArrays.blockdiagMethod
blockdiag(xs::GNNGraph...)

Equivalent to MLUtils.batch.

source

Utils

GNNGraphs.sort_edge_indexFunction
sort_edge_index(ei::Tuple) -> u', v'
-sort_edge_index(u, v) -> u', v'

Return a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi.

source
GNNGraphs.color_refinementFunction
color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters

The color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.

At each iteration the algorithm computes a hash of the coloring and the sorted list of colors of the neighbors of each node. This hash is used to determine if the coloring has changed.

\[x_i' = \textrm{hashmap}((x_i, \textrm{sort}([x_j \textrm{ for } j \in N(i)])))\]

This algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.

Arguments

  • g::GNNGraph: The graph to color.
  • x0::AbstractVector{<:Integer}: The initial coloring. If not provided, all nodes are colored with 1.

Returns

  • x::AbstractVector{<:Integer}: The final coloring.
  • num_colors::Int: The number of colors used.
  • niters::Int: The number of iterations until convergence.
source

Generate

GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
+ GNNGraph(4, 2) with no data
source
SparseArrays.blockdiagMethod
blockdiag(xs::GNNGraph...)

Equivalent to MLUtils.batch.

source

Utils

GNNGraphs.sort_edge_indexFunction
sort_edge_index(ei::Tuple) -> u', v'
+sort_edge_index(u, v) -> u', v'

Return a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi.

source
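For example:

julia> u, v = [3, 1, 2, 1], [1, 3, 1, 2];

julia> sort_edge_index(u, v)
([1, 1, 2, 3], [2, 3, 1, 1])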
GNNGraphs.color_refinementFunction
color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters

The color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.

At each iteration the algorithm computes a hash of the coloring and the sorted list of colors of the neighbors of each node. This hash is used to determine if the coloring has changed.

\[x_i' = \textrm{hashmap}((x_i, \textrm{sort}([x_j \textrm{ for } j \in N(i)])))\]

This algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.

Arguments

  • g::GNNGraph: The graph to color.
  • x0::AbstractVector{<:Integer}: The initial coloring. If not provided, all nodes are colored with 1.

Returns

  • x::AbstractVector{<:Integer}: The final coloring.
  • num_colors::Int: The number of colors used.
  • niters::Int: The number of iterations until convergence.
source
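A minimal sketch with the default initial coloring:

julia> g = rand_graph(6, 12);

julia> x, num_colors, niters = color_refinement(g);

julia> num_colors <= g.num_nodes
true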

Generate

GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
           k::Int; 
           graph_indicator = nothing,
           self_loops = false, 
@@ -259,7 +259,7 @@
 GNNGraph:
     num_nodes = 10
     num_edges = 30
-    num_graphs = 2
source
GNNGraphs.radius_graphMethod
radius_graph(points::AbstractMatrix, 
+    num_graphs = 2
source
GNNGraphs.radius_graphMethod
radius_graph(points::AbstractMatrix, 
              r::AbstractFloat; 
              graph_indicator = nothing,
              self_loops = false, 
@@ -279,7 +279,7 @@
 GNNGraph:
     num_nodes = 10
     num_edges = 20
-    num_graphs = 2

References

Section B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models

source
GNNGraphs.rand_graphMethod
rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)

Generate a random (Erdős-Rényi) GNNGraph with n nodes and m edges.

If bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Pass a random number generator as the first argument to make the generation reproducible.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
+    num_graphs = 2

References

Section B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models

source
GNNGraphs.rand_graphMethod
rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)

Generate a random (Erdős-Rényi) GNNGraph with n nodes and m edges.

If bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Pass a random number generator as the first argument to make the generation reproducible.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
 GNNGraph:
   num_nodes: 5
   num_edges: 4
@@ -297,7 +297,7 @@
 
 # Each edge has a reverse
 julia> edge_index(g)
-([1, 1, 5, 3], [5, 3, 1, 1])
source

Operators

Base.intersectFunction
intersect(g1::GNNGraph, g2::GNNGraph)

Intersect two graphs by keeping only the common edges.

source

Sampling

GNNGraphs.sample_neighborsFunction
sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)

Sample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the graph returned will then contain all the nodes in the original graph, but only the sampled edges.

The returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.

Arguments

  • g. The graph.
  • nodes. A list of node IDs to sample neighbors from.
  • K. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.
  • dir. Determines whether to sample inbound (:in) or outbound (:out) edges (default :in).
  • replace. If true, sample with replacement.
  • dropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.

Examples

julia> g = rand_graph(20, 100)
+([1, 1, 5, 3], [5, 3, 1, 1])
source

Operators

Base.intersectFunction
intersect(g1::GNNGraph, g2::GNNGraph)

Intersect two graphs by keeping only the common edges.

source
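A minimal sketch; only the edge present in both graphs is expected to survive:

julia> g1 = GNNGraph([1, 2], [2, 3]);

julia> g2 = GNNGraph([1, 3], [2, 1]);

julia> intersect(g1, g2).num_edges   # the shared edge 1→2
1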

Sampling

GNNGraphs.sample_neighborsFunction
sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)

Sample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the graph returned will then contain all the nodes in the original graph, but only the sampled edges.

The returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.

Arguments

  • g. The graph.
  • nodes. A list of node IDs to sample neighbors from.
  • K. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.
  • dir. Determines whether to sample inbound (:in) or outbound (:out) edges (default :in).
  • replace. If true, sample with replacement.
  • dropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.

Examples

julia> g = rand_graph(20, 100)
 GNNGraph:
     num_nodes = 20
     num_edges = 100
@@ -336,7 +336,7 @@
     num_nodes = 20
     num_edges = 10
     edata:
-        EID => (10,)
source
Graphs.induced_subgraphMethod
induced_subgraph(graph, nodes)

Generates a subgraph from the original graph using the provided nodes. The function includes the nodes' neighbors and creates edges between nodes that are connected in the original graph. If a node has no neighbors, an isolated node will be added to the subgraph. Returns a new GNNGraph containing the subgraph with the specified nodes and their features.

Arguments

  • graph. The original GNNGraph containing nodes, edges, and node features.
  • nodes. A vector of node indices to include in the subgraph.

Examples

julia> s = [1, 2]
+        EID => (10,)
source
Graphs.induced_subgraphMethod
induced_subgraph(graph, nodes)

Generates a subgraph from the original graph using the provided nodes. The function includes the nodes' neighbors and creates edges between nodes that are connected in the original graph. If a node has no neighbors, an isolated node will be added to the subgraph. Returns a new GNNGraph containing the subgraph with the specified nodes and their features.

Arguments

  • graph. The original GNNGraph containing nodes, edges, and node features.
  • nodes. A vector of node indices to include in the subgraph.

Examples

julia> s = [1, 2]
 2-element Vector{Int64}:
  1
  2
@@ -369,4 +369,4 @@
         y = 2-element Vector{Float32}
         x = 32×2 Matrix{Float32}
   edata:
-        e = 1-element Vector{Float32}
source
\ No newline at end of file + e = 1-element Vector{Float32}source
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/heterograph/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/heterograph/index.html index 5e1fa11d0..2d6cf07cb 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/heterograph/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/heterograph/index.html @@ -39,7 +39,7 @@ julia> hg.ndata[:A].x 2×10 Matrix{Float64}: 0.825882 0.0797502 0.245813 0.142281 0.231253 0.685025 0.821457 0.888838 0.571347 0.53165 - 0.631286 0.316292 0.705325 0.239211 0.533007 0.249233 0.473736 0.595475 0.0623298 0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source

Query

GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)

Return a vector containing the degrees of the nodes in the GNNHeteroGraph g for the given edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, it is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :in.
source
Graphs.has_edgeMethod
has_edge(g::GNNHeteroGraph, edge_t, i, j)

Return true if there is an edge of type edge_t from node i to node j in g.

Examples

julia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)
+    0.631286  0.316292   0.705325  0.239211  0.533007  0.249233  0.473736  0.595475  0.0623298  0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source
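For example, using rand_heterograph (documented below):

julia> g = rand_heterograph((:user => 10, :movie => 20), (:user, :rate, :movie) => 30);

julia> num_node_types(g), num_edge_types(g)
(2, 1)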

Query

GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
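A minimal sketch:

julia> g = rand_heterograph((:A => 3, :B => 3), (:A, :to, :B) => 4);

julia> s, t = edge_index(g, (:A, :to, :B));

julia> length(s) == length(t) == 4
true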
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)

Return a vector containing the degrees of the nodes in the GNNHeteroGraph g for the given edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, it is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :in.
source
Graphs.has_edgeMethod
has_edge(g::GNNHeteroGraph, edge_t, i, j)

Return true if there is an edge of type edge_t from node i to node j in g.

Examples

julia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)
 GNNHeteroGraph:
   num_nodes: Dict(:A => 2, :B => 2)
   num_edges: Dict((:A, :to, :B) => 4, (:B, :to, :A) => 0)
@@ -48,10 +48,10 @@
 true
 
 julia> has_edge(g, (:B,:to,:A), 1, 1)
-false
source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
+false
source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
 add_edges(g::GNNHeteroGraph, edge_t => (s, t); [edata, num_nodes])
-add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
-add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but also add self-loops of the specified type, edge_t. Otherwise, it returns g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, a self-loop is added to each node for every edge type in the graph whose source and destination node types are the same. This iterates over all edge types present in the graph, applying the self-loop addition logic to each applicable edge type.

source

Generate

GNNGraphs.rand_bipartite_heterographMethod
rand_bipartite_heterograph([rng,] 
+add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
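A minimal sketch, adding two edges to an existing edge type:

julia> g = rand_heterograph((:user => 3, :movie => 4), (:user, :rate, :movie) => 5);

julia> g = add_edges(g, (:user, :rate, :movie) => ([1, 2], [3, 4]));

julia> g.num_edges[(:user, :rate, :movie)]
7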
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
+add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but also add self-loops of the specified type, edge_t. Otherwise, it returns g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, a self-loop is added to each node for every edge type in the graph whose source and destination node types are the same. This iterates over all edge types present in the graph, applying the self-loop addition logic to each applicable edge type.

source

Generate

GNNGraphs.rand_bipartite_heterographMethod
rand_bipartite_heterograph([rng,] 
                            (n1, n2), (m12, m21); 
                            bidirected = true, 
                            node_t = (:A, :B), 
@@ -64,8 +64,8 @@
 julia> g = rand_bipartite_heterograph((10, 15), (20, 0), node_t=(:user, :item), edge_t=:-, bidirected=false)
 GNNHeteroGraph:
   num_nodes: Dict(:item => 15, :user => 10)
-  num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)
source
GNNGraphs.rand_heterographFunction
rand_heterograph([rng,] n, m; bidirected=false, kws...)

Construct a GNNHeteroGraph with random edges, with the numbers of nodes and edges specified by n and m respectively. n and m can be any iterables of pairs specifying node/edge types and their numbers.

Pass a random number generator as a first argument to make the generation reproducible.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
+  num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)
source
GNNGraphs.rand_heterographFunction
rand_heterograph([rng,] n, m; bidirected=false, kws...)

Construct a GNNHeteroGraph with random edges, with the numbers of nodes and edges specified by n and m respectively. n and m can be any iterables of pairs specifying node/edge types and their numbers.

Pass a random number generator as a first argument to make the generation reproducible.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
                             (:user, :rate, :movie) => 30)
 GNNHeteroGraph:
   num_nodes: Dict(:movie => 20, :user => 10)
-  num_edges: Dict((:user, :rate, :movie) => 30)
source
\ No newline at end of file + num_edges: Dict((:user, :rate, :movie) => 30)source
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/samplers/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/samplers/index.html index a9cb09c26..a4b2d981e 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/samplers/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/samplers/index.html @@ -5,4 +5,4 @@ julia> for mini_batch_gnn in loader batch_counter += 1 println("Batch ", batch_counter, ": Nodes in mini-batch graph: ", nv(mini_batch_gnn)) - endsource
\ No newline at end of file + endsource
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/temporalgraph/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/temporalgraph/index.html index e15eecf23..dc2f63483 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/temporalgraph/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/api/temporalgraph/index.html @@ -16,7 +16,7 @@ num_edges: [20, 20, 20, 20, 20] num_snapshots: 5 tgdata: - x = 4-element Vector{Float64}source
GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GNNGraphs
+        x = 4-element Vector{Float64}
source
GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GNNGraphs
 
 julia> snapshots = [rand_graph(10, 20) for i in 1:5];
 
@@ -30,7 +30,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10, 10]
   num_edges: [20, 20, 16, 20, 20, 20]
-  num_snapshots: 6
source
GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GNNGraphs
+  num_snapshots: 6
source
GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GNNGraphs
 
 julia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];
 
@@ -44,7 +44,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10]
   num_edges: [20, 22]
-  num_snapshots: 2
source

Random Generators

GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
+  num_snapshots: 2
source

Random Generators

GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
                            number_snapshots::Int,
                            speed::AbstractFloat,
                            r::AbstractFloat;
@@ -56,7 +56,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [90, 90, 90, 90, 90]
-  num_snapshots: 5
source
GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
+  num_snapshots: 5
source
GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
                                number_snapshots::Int;
                                α::Real,
                                R::Real,
@@ -69,4 +69,4 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [44, 46, 48, 42, 38]
-  num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source
\ No newline at end of file + num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/datasets/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/datasets/index.html index 6fdda62e3..c98ab5424 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/datasets/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/datasets/index.html @@ -1 +1 @@ -Datasets · GraphNeuralNetworks.jl

Datasets

GNNGraphs.jl doesn't come with its own datasets, but leverages those available in the Julia (and non-Julia) ecosystem. In particular, the examples in the GraphNeuralNetworks.jl repository make use of the MLDatasets.jl package. There you will find common graph datasets such as Cora, PubMed, Citeseer, TUDataset and many others. For graphs with static structures and temporal features, datasets such as METRLA, PEMSBAY, ChickenPox, and WindMillEnergy are available. For graphs featuring both temporal structures and temporal features, the TemporalBrains dataset is suitable.

GraphNeuralNetworks.jl provides the mldataset2gnngraph method for interfacing with MLDatasets.jl.

\ No newline at end of file +Datasets · GraphNeuralNetworks.jl

Datasets

GNNGraphs.jl doesn't come with its own datasets, but leverages those available in the Julia (and non-Julia) ecosystem. In particular, the examples in the GraphNeuralNetworks.jl repository make use of the MLDatasets.jl package. There you will find common graph datasets such as Cora, PubMed, Citeseer, TUDataset and many others. For graphs with static structures and temporal features, datasets such as METRLA, PEMSBAY, ChickenPox, and WindMillEnergy are available. For graphs featuring both temporal structures and temporal features, the TemporalBrains dataset is suitable.

GraphNeuralNetworks.jl provides the mldataset2gnngraph method for interfacing with MLDatasets.jl.
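As a sketch of the typical workflow (the first call may prompt to download the dataset; exact output omitted):

julia> using GNNGraphs, MLDatasets

julia> dataset = MLDatasets.Cora();

julia> g = mldataset2gnngraph(dataset);   # a GNNGraph carrying Cora's features and masks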

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/gnngraph/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/gnngraph/index.html index 82e38e0c0..d038f3af2 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/gnngraph/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/gnngraph/index.html @@ -165,4 +165,4 @@ julia> GNNGraph(gd) GNNGraph: num_nodes: 10 - num_edges: 20 \ No newline at end of file + num_edges: 20 \ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/heterograph/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/heterograph/index.html index 06bb78abe..f288d4afa 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/heterograph/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/heterograph/index.html @@ -79,4 +79,4 @@ @assert g.num_nodes[:A] == 80 @assert size(g.ndata[:A].x) == (3, 80) # ... -end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.

\ No newline at end of file +end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/temporalgraph/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/temporalgraph/index.html index 6a058860c..69abc33c4 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/temporalgraph/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/guides/temporalgraph/index.html @@ -86,4 +86,4 @@ julia> [ds.x for ds in tg.ndata]; # vector containing the x feature of each snapshot julia> [g.x for g in tg.snapshots]; # same vector as above, now accessing - # the x feature directly from the snapshots \ No newline at end of file + # the x feature directly from the snapshots \ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/index.html index 8aa24b947..c8c5fdbf9 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNGraphs/index.html @@ -1 +1 @@ -GNNGraphs.jl · GraphNeuralNetworks.jl

GNNGraphs.jl

GNNGraphs.jl is a package that provides graph data structures and helper functions specifically designed for working with graph neural networks. This package allows storing not only the graph structure, but also features associated with nodes, edges, and the graph itself. It is the core foundation for the GNNlib.jl, GraphNeuralNetworks.jl, and GNNLux.jl packages.

It supports three types of graphs:

  • Static graph is the basic graph type represented by GNNGraph, where each node and edge can have associated features. This type of graph is used in typical graph neural network applications, where neural networks operate on both the structure of the graph and the features stored in it. It can be used to represent a graph where the structure does not change over time, but the features of the nodes and edges can change over time.

  • Heterogeneous graph is a graph that supports multiple types of nodes and edges, and is represented by GNNHeteroGraph. Each type can have its own properties and relationships. This is useful in scenarios with different entities and interactions, such as in citation graphs or multi-relational data.

  • Temporal graph is a graph that changes over time, and is represented by TemporalSnapshotsGNNGraph. Edges and features can change dynamically. This type of graph is useful for applications that involve tracking time-dependent relationships, such as social networks.

This package depends on the package Graphs.jl.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNGraphs
\ No newline at end of file +GNNGraphs.jl · GraphNeuralNetworks.jl

GNNGraphs.jl

GNNGraphs.jl is a package that provides graph data structures and helper functions specifically designed for working with graph neural networks. This package allows storing not only the graph structure, but also features associated with nodes, edges, and the graph itself. It is the core foundation for the GNNlib.jl, GraphNeuralNetworks.jl, and GNNLux.jl packages.

It supports three types of graphs:

  • Static graph is the basic graph type represented by GNNGraph, where each node and edge can have associated features. This type of graph is used in typical graph neural network applications, where neural networks operate on both the structure of the graph and the features stored in it. It can be used to represent a graph where the structure does not change over time, but the features of the nodes and edges can change over time.

  • Heterogeneous graph is a graph that supports multiple types of nodes and edges, and is represented by GNNHeteroGraph. Each type can have its own properties and relationships. This is useful in scenarios with different entities and interactions, such as in citation graphs or multi-relational data.

  • Temporal graph is a graph that changes over time, and is represented by TemporalSnapshotsGNNGraph. Edges and features can change dynamically. This type of graph is useful for applications that involve tracking time-dependent relationships, such as social networks.

This package depends on the package Graphs.jl.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNGraphs
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNlib/api/messagepassing/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNlib/api/messagepassing/index.html index be55b9c14..1d06a8616 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNlib/api/messagepassing/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNlib/api/messagepassing/index.html @@ -1,5 +1,5 @@ Message Passing · GraphNeuralNetworks.jl

Message Passing

Interface

GNNlib.apply_edgesFunction
apply_edges(fmsg, g; [xi, xj, e])
-apply_edges(fmsg, g, xi, xj, e=nothing)

Returns the message from node j to node i applying the message function fmsg on the edges in graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).

The function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.

Arguments

  • g: An AbstractGNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but now to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension's size is the size of a batch of edges. The output of fmsg has to be an array (or a named tuple of arrays) with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).

See also propagate and aggregate_neighbors.

source
GNNlib.aggregate_neighborsFunction
aggregate_neighbors(g, aggr, m)

Given a graph g, edge features m, and an aggregation operator aggr (e.g. +, min, max, or mean), returns the new node features

\[\mathbf{x}_i = \square_{j \in \mathcal{N}(i)} \mathbf{m}_{j\to i}\]

Neighborhood aggregation is the second step of propagate, where it comes after apply_edges.

source
GNNlib.propagateFunction
propagate(fmsg, g, aggr; [xi, xj, e])
+apply_edges(fmsg, g, xi, xj, e=nothing)

Returns the message from node j to node i applying the message function fmsg on the edges in graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).

The function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.

Arguments

  • g: An AbstractGNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but now to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension's size is the size of a batch of edges. The output of fmsg has to be an array (or a named tuple of arrays) with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).

See also propagate and aggregate_neighbors.

source
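A minimal sketch computing one message per edge:

julia> g = rand_graph(4, 6);

julia> x = rand(Float32, 2, g.num_nodes);

julia> m = apply_edges((xi, xj, e) -> xi .+ xj, g; xi = x, xj = x);

julia> size(m)   # last dimension equals g.num_edges
(2, 6)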
GNNlib.aggregate_neighborsFunction
aggregate_neighbors(g, aggr, m)

Given a graph g, edge features m, and an aggregation operator aggr (e.g. +, min, max, or mean), returns the new node features

\[\mathbf{x}_i = \square_{j \in \mathcal{N}(i)} \mathbf{m}_{j\to i}\]

Neighborhood aggregation is the second step of propagate, where it comes after apply_edges.

source
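Continuing the apply_edges sketch above, the per-edge messages m can then be aggregated into per-node features:

julia> m̄ = aggregate_neighbors(g, +, m);

julia> size(m̄)   # last dimension equals g.num_nodes
(2, 4)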
GNNlib.propagateFunction
propagate(fmsg, g, aggr; [xi, xj, e])
 propagate(fmsg, g, aggr, xi, xj, e=nothing)

Performs message passing on graph g. Takes care of materializing the node features on each edge, applying the message function fmsg, and returning an aggregated message $\bar{\mathbf{m}}$ (depending on the return value of fmsg, an array or a named tuple of arrays with last dimension's size g.num_nodes).

It can be decomposed in two steps:

m = apply_edges(fmsg, g, xi, xj, e)
 m̄ = aggregate_neighbors(g, aggr, m)

GNN layers typically call propagate in their forward pass, providing a closure as the fmsg input.

Arguments

  • g: A GNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but to be materialized on edges' sources.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A generic function that will be passed over to apply_edges. Has to take as inputs the edge-materialized xi, xj, and e (arrays or named tuples of arrays whose last dimension's size is the size of a batch of edges). Its output has to be an array or a named tuple of arrays with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).
  • aggr: Neighborhood aggregation operator. Use +, mean, max, or min.

Examples

using GraphNeuralNetworks, Flux
 
@@ -25,4 +25,4 @@
 end
 
 l = GNNConv(10 => 20)
-l(g, x)

See also apply_edges and aggregate_neighbors.

source

Built-in message functions

GNNlib.e_mul_xjFunction
e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj

Reshape e into a broadcast compatible shape with xj (by prepending singleton dimensions) then perform broadcasted multiplication.

source
GNNlib.w_mul_xjFunction
w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj

Similar to e_mul_xj but specialized on scalar edge features (weights).

source
\ No newline at end of file +l(g, x)

See also apply_edges and aggregate_neighbors.

source

Built-in message functions

GNNlib.copy_xiFunction
copy_xi(xi, xj, e) = xi
source
GNNlib.copy_xjFunction
copy_xj(xi, xj, e) = xj
source
GNNlib.xi_dot_xjFunction
xi_dot_xj(xi, xj, e) = sum(xi .* xj, dims=1)
source
GNNlib.xi_sub_xjFunction
xi_sub_xj(xi, xj, e) = xi .- xj
source
GNNlib.xj_sub_xiFunction
xj_sub_xi(xi, xj, e) = xj .- xi
source
GNNlib.e_mul_xjFunction
e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj

Reshape e into a broadcast compatible shape with xj (by prepending singleton dimensions) then perform broadcasted multiplication.

source
GNNlib.w_mul_xjFunction
w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj

Similar to e_mul_xj but specialized on scalar edge features (weights).

source
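For example, a sketch of summing neighbor features with the built-in copy_xj, the pattern recommended for exploiting optimized specializations of propagate:

julia> g = rand_graph(4, 6);

julia> x = rand(Float32, 2, g.num_nodes);

julia> y = propagate(copy_xj, g, +; xj = x);   # sum of neighbor features

julia> size(y)
(2, 4)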
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNlib/api/utils/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNlib/api/utils/index.html index fa08c0ce2..7e3ab098f 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNlib/api/utils/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNlib/api/utils/index.html @@ -1,2 +1,2 @@ -Other Operators · GraphNeuralNetworks.jl

Utility Functions

Graph-wise operations

GNNlib.reduce_nodesFunction
reduce_nodes(aggr, g, x)

For a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

See also: reduce_edges.

source
reduce_nodes(aggr, indicator::AbstractVector, x)

Return the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.

See also graph_indicator.

source
GNNlib.reduce_edgesFunction
reduce_edges(aggr, g, e)

For a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

source
GNNlib.broadcast_nodesFunction
broadcast_nodes(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_nodes).

source
GNNlib.broadcast_edgesFunction
broadcast_edges(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_edges).

source

Neighborhood operations

GNNlib.softmax_edge_neighborsFunction
softmax_edge_neighbors(g, e)

Softmax over each node's neighborhood of the edge features e.

\[\mathbf{e}'_{j\to i} = \frac{e^{\mathbf{e}_{j\to i}}} - {\sum_{j'\in N(i)} e^{\mathbf{e}_{j'\to i}}}.\]

source

NNlib's gather and scatter functions

Primitive functions for message passing implemented in NNlib.jl:

\ No newline at end of file +Other Operators · GraphNeuralNetworks.jl

Utility Functions

Graph-wise operations

GNNlib.reduce_nodesFunction
reduce_nodes(aggr, g, x)

For a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

See also: reduce_edges.

source
reduce_nodes(aggr, indicator::AbstractVector, x)

Return the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.

See also graph_indicator.

source
GNNlib.reduce_edgesFunction
reduce_edges(aggr, g, e)

For a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

source
GNNlib.broadcast_nodesFunction
broadcast_nodes(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_nodes).

source
GNNlib.broadcast_edgesFunction
broadcast_edges(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_edges).

source
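A combined sketch of the graph-wise operations on a batched graph:

julia> using MLUtils

julia> g = MLUtils.batch([rand_graph(3, 4), rand_graph(5, 8)]);

julia> x = rand(Float32, 2, g.num_nodes);

julia> size(reduce_nodes(+, g, x))   # one column per graph
(2, 2)

julia> size(broadcast_nodes(g, rand(Float32, 2, g.num_graphs)))
(2, 8)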

Neighborhood operations

GNNlib.softmax_edge_neighborsFunction
softmax_edge_neighbors(g, e)

Softmax over each node's neighborhood of the edge features e.

\[\mathbf{e}'_{j\to i} = \frac{e^{\mathbf{e}_{j\to i}}} + {\sum_{j'\in N(i)} e^{\mathbf{e}_{j'\to i}}}.\]

source
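A minimal sketch; the output has the same size as the input edge features:

julia> g = rand_graph(4, 6);

julia> e = rand(Float32, 3, g.num_edges);

julia> α = softmax_edge_neighbors(g, e);

julia> size(α) == size(e)
true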

NNlib's gather and scatter functions

Primitive functions for message passing implemented in NNlib.jl:

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNlib/guides/messagepassing/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNlib/guides/messagepassing/index.html index feb4c2ec9..b92e45641 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNlib/guides/messagepassing/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNlib/guides/messagepassing/index.html @@ -75,4 +75,4 @@ x = propagate(message, g, +, xj=x) return l.σ.(l.weight * x .+ l.bias) -end

See the GATConv implementation here for a more complex example.

Built-in message functions

In order to exploit optimized specializations of propagate, it is recommended to use built-in message functions such as copy_xj whenever possible.

\ No newline at end of file +end

See the GATConv implementation here for a more complex example.

Built-in message functions

In order to exploit optimized specializations of propagate, it is recommended to use built-in message functions such as copy_xj whenever possible.

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/GNNlib/index.html b/docs/GraphNeuralNetworks.jl/dev/GNNlib/index.html index 67d247a8c..ad3f7b0a3 100644 --- a/docs/GraphNeuralNetworks.jl/dev/GNNlib/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/GNNlib/index.html @@ -1 +1 @@ -GNNlib.jl · GraphNeuralNetworks.jl

GNNlib.jl

GNNlib.jl is a package that provides the implementation of the basic message passing functions and a functional implementation of graph convolutional layers, which are used to build graph neural networks in both the Flux.jl and Lux.jl machine learning frameworks, created in the GraphNeuralNetworks.jl and GNNLux.jl packages, respectively.

This package depends on GNNGraphs.jl and NNlib.jl, and is primarily intended for developers looking to create new GNN architectures. For most users, the higher-level GraphNeuralNetworks.jl and GNNLux.jl packages are recommended.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNlib
\ No newline at end of file +GNNlib.jl · GraphNeuralNetworks.jl

GNNlib.jl

GNNlib.jl is a package that provides the implementation of the basic message passing functions and a functional implementation of graph convolutional layers, which are used to build graph neural networks in both the Flux.jl and Lux.jl machine learning frameworks, created in the GraphNeuralNetworks.jl and GNNLux.jl packages, respectively.

This package depends on GNNGraphs.jl and NNlib.jl, and is primarily intended for developers looking to create new GNN architectures. For most users, the higher-level GraphNeuralNetworks.jl and GNNLux.jl packages are recommended.

Installation

The package can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add GNNlib
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/api/basic/index.html b/docs/GraphNeuralNetworks.jl/dev/api/basic/index.html index 21c4c01a1..7d8de97b4 100644 --- a/docs/GraphNeuralNetworks.jl/dev/api/basic/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/api/basic/index.html @@ -8,7 +8,7 @@ julia> dotdec(g, rand(2, 5)) 1×6 Matrix{Float64}: - 0.345098 0.458305 0.106353 0.345098 0.458305 0.106353source
GraphNeuralNetworks.GNNChainType
GNNChain(layers...)
+ 0.345098  0.458305  0.106353  0.345098  0.458305  0.106353
source
GraphNeuralNetworks.GNNChainType
GNNChain(layers...)
 GNNChain(name = layer, ...)

Collects multiple layers / functions to be called in sequence on a given input graph and input node features.

It allows composing layers in a sequential fashion as Flux.Chain does, propagating the output of each layer to the next one. In addition, GNNChain handles the input graph as well, providing it as a first argument only to layers subtyping the GNNLayer abstract type.

GNNChain supports indexing and slicing, m[2] or m[1:end-1], and if names are given, m[:name] == m[1] etc.

Examples

julia> using Flux, GraphNeuralNetworks
 
 julia> m = GNNChain(GCNConv(2=>5), 
@@ -40,7 +40,7 @@
  2.90053  2.90053  2.90053  2.90053  2.90053  2.90053
 
 julia> m2[:enc](g, x) == m(g, x)
-true
source
GraphNeuralNetworks.GNNLayerType
abstract type GNNLayer end

An abstract type from which graph neural network layers are derived.

See also GNNChain.

source
GraphNeuralNetworks.WithGraphType
WithGraph(model, g::GNNGraph; traingraph=false)

A type wrapping the model and tying it to the graph g. In the forward pass, it can only take feature arrays as inputs, returning model(g, x...; kws...).

If traingraph=false, the graph's parameters won't be part of the trainable parameters in the gradient updates.

Examples

g = GNNGraph([1,2,3], [2,3,1])
+true
source
GraphNeuralNetworks.GNNLayerType
abstract type GNNLayer end

An abstract type from which graph neural network layers are derived.

See also GNNChain.
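As a hypothetical sketch (the type and its behavior are illustrative, not part of the API), a custom layer only needs to subtype GNNLayer and implement a (layer)(g, x) method for GNNChain to pass it the graph:

using GraphNeuralNetworks
using Statistics: mean

struct MeanAggregator <: GNNLayer end   # illustrative name

# GNNChain calls GNNLayer subtypes as layer(g, x).
# This toy layer ignores the graph and averages the node features.
(l::MeanAggregator)(g::GNNGraph, x) = mean(x, dims = 2)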

source
GraphNeuralNetworks.WithGraphType
WithGraph(model, g::GNNGraph; traingraph=false)

A type wrapping the model and tying it to the graph g. In the forward pass, it can only take feature arrays as inputs, returning model(g, x...; kws...).

If traingraph=false, the graph's parameters won't be part of the trainable parameters in the gradient updates.

Examples

g = GNNGraph([1,2,3], [2,3,1])
 x = rand(Float32, 2, 3)
 model = SAGEConv(2 => 3)
 wg = WithGraph(model, g)
@@ -50,4 +50,4 @@
 g2 = GNNGraph([1,1,2,3], [2,4,1,1])
 x2 = rand(Float32, 2, 4)
 # WithGraph will ignore the internal graph if fed with a new one. 
-@assert wg(g2, x2) == model(g2, x2)
source
\ No newline at end of file +@assert wg(g2, x2) == model(g2, x2)source
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/api/conv/index.html b/docs/GraphNeuralNetworks.jl/dev/api/conv/index.html index 4927a750b..c8b42d0d5 100644 --- a/docs/GraphNeuralNetworks.jl/dev/api/conv/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/api/conv/index.html @@ -9,7 +9,7 @@ l = AGNNConv(init_beta=2.0f0) # forward pass -y = l(g, x) source
GraphNeuralNetworks.CGConvType
CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)
+y = l(g, x)   
source
GraphNeuralNetworks.CGConvType
CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)
 CGConv(in => out, ...)

The crystal graph convolutional layer from the paper Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \sum_{j\in N(i)}\sigma(W_f \mathbf{z}_{ij} + \mathbf{b}_f)\, act(W_s \mathbf{z}_{ij} + \mathbf{b}_s)\]

where $\mathbf{z}_{ij}$ is the concatenation of the node and edge features $[\mathbf{x}_i; \mathbf{x}_j; \mathbf{e}_{j\to i}]$ and $\sigma$ is the sigmoid function. The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features.

If ein is not given, it is assumed that no edge features are passed as input in the forward pass.

  • out: The dimension of output node features.
  • act: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.
  • residual: Add a residual connection.

Examples

g = rand_graph(5, 6)
 x = rand(Float32, 2, g.num_nodes)
 e = rand(Float32, 3, g.num_edges)
@@ -19,7 +19,7 @@
 
 # No edge features
 l = CGConv(2 => 4, tanh)
-y = l(g, x)    # size: (4, num_nodes)
source
GraphNeuralNetworks.ChebConvType
ChebConv(in => out, k; bias=true, init=glorot_uniform)

Chebyshev spectral graph convolutional layer from paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

Implements

\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[\begin{aligned} +y = l(g, x) # size: (4, num_nodes)

source
GraphNeuralNetworks.ChebConvType
ChebConv(in => out, k; bias=true, init=glorot_uniform)

Chebyshev spectral graph convolutional layer from paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

Implements

\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[\begin{aligned} Z^{(0)} &= X \\ Z^{(1)} &= \hat{L} X \\ Z^{(k)} &= 2 \hat{L} Z^{(k-1)} - Z^{(k-2)} @@ -33,7 +33,7 @@ l = ChebConv(3 => 5, 5) # forward pass -y = l(g, x) # size: 5 × num_nodes

source
GraphNeuralNetworks.DConvType
DConv(ch::Pair{Int, Int}, k::Int; init = glorot_uniform, bias = true)

Diffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.

Arguments

  • ch: Pair of input and output dimensions.
  • k: Number of diffusion steps.
  • init: Weights' initializer. Default glorot_uniform.
  • bias: Add learnable bias. Default true.

Examples

julia> g = GNNGraph(rand(10, 10), ndata = rand(Float32, 2, 10));
+y = l(g, x)       # size:  5 × num_nodes
source
GraphNeuralNetworks.DConvType
DConv(ch::Pair{Int, Int}, k::Int; init = glorot_uniform, bias = true)

Diffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.

Arguments

  • ch: Pair of input and output dimensions.
  • k: Number of diffusion steps.
  • init: Weights' initializer. Default glorot_uniform.
  • bias: Add learnable bias. Default true.

Examples

julia> g = GNNGraph(rand(10, 10), ndata = rand(Float32, 2, 10));
 
 julia> dconv = DConv(2 => 4, 4)
 DConv(2 => 4, 4)
@@ -41,7 +41,7 @@
 julia> y = dconv(g, g.ndata.x);
 
 julia> size(y)
-(4, 10)
source
GraphNeuralNetworks.EGNNConvType
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
+(4, 10)
source
GraphNeuralNetworks.EGNNConvType
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
 EGNNConv(in => out; hidden_size=2in, residual=false)

Equivariant Graph Convolutional Layer from E(n) Equivariant Graph Neural Networks.

The layer performs the following operation:

\[\begin{aligned} \mathbf{m}_{j\to i} &=\phi_e(\mathbf{h}_i, \mathbf{h}_j, \lVert\mathbf{x}_i-\mathbf{x}_j\rVert^2, \mathbf{e}_{j\to i}),\\ \mathbf{x}_i' &= \mathbf{x}_i + C_i\sum_{j\in\mathcal{N}(i)}(\mathbf{x}_i-\mathbf{x}_j)\phi_x(\mathbf{m}_{j\to i}),\\ @@ -51,7 +51,7 @@ h = randn(Float32, 5, g.num_nodes) x = randn(Float32, 3, g.num_nodes) egnn = EGNNConv(5 => 6, 10) -hnew, xnew = egnn(g, h, x)

source
GraphNeuralNetworks.EdgeConvType
EdgeConv(nn; aggr=max)

Edge convolutional layer from paper Dynamic Graph CNN for Learning on Point Clouds.

Performs the operation

\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]

where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • nn: A (possibly learnable) function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Examples:

# create data
+hnew, xnew = egnn(g, h, x)
source
GraphNeuralNetworks.EdgeConvType
EdgeConv(nn; aggr=max)

Edge convolutional layer from paper Dynamic Graph CNN for Learning on Point Clouds.

Performs the operation

\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]

where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • nn: A (possibly learnable) function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -62,7 +62,7 @@
 l = EdgeConv(Dense(2 * in_channel, out_channel), aggr = +)
 
 # forward pass
-y = l(g, x)
source
GraphNeuralNetworks.GATConvType
GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
+y = l(g, x)
source
GraphNeuralNetworks.GATConvType
GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
 GATConv((in, ein) => out, ...)

Graph attentional layer from the paper Graph Attention Networks.

Implements the operation

\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W \mathbf{x}_j\]

where the attention coefficients $\alpha_{ij}$ are given by

\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_j]))\]

with $z_i$ a normalization factor.

In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as

\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))\]

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
  • out: The dimension of output node features.
  • σ: Activation function. Default identity.
  • bias: Learn the additive bias if true. Default true.
  • heads: Number of attention heads. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • negative_slope: The slope parameter of the LeakyReLU activation. Default 0.2.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default true.
  • dropout: Dropout probability on the normalized attention coefficient. Default 0.0.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
@@ -75,7 +75,7 @@
 l = GATConv(in_channel => out_channel, add_self_loops = false, bias = false; heads=2, concat=true)
 
 # forward pass
-y = l(g, x)       
source
GraphNeuralNetworks.GATv2ConvType
GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
+y = l(g, x)       
source
GraphNeuralNetworks.GATv2ConvType
GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
 GATv2Conv((in, ein) => out, ...)

GATv2 attentional layer from the paper How Attentive are Graph Attention Networks?.

Implements the operation

\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W_1 \mathbf{x}_j\]

where the attention coefficients $\alpha_{ij}$ are given by

\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_2 \mathbf{x}_i + W_1 \mathbf{x}_j))\]

with $z_i$ a normalization factor.

In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as

\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_3 \mathbf{e}_{j\to i} + W_2 \mathbf{x}_i + W_1 \mathbf{x}_j)).\]

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
  • out: The dimension of output node features.
  • σ: Activation function. Default identity.
  • bias: Learn the additive bias if true. Default true.
  • heads: Number of attention heads. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • negative_slope: The slope parameter of the LeakyReLU activation. Default 0.2.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default true.
  • dropout: Dropout probability on the normalized attention coefficient. Default 0.0.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
@@ -92,7 +92,7 @@
 e = randn(Float32, ein, length(s))
 
 # forward pass
-y = l(g, x, e)    
source
GraphNeuralNetworks.GCNConvType
GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])

Graph convolutional layer from paper Semi-supervised Classification with Graph Convolutional Networks.

Performs the operation

\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]

where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.

If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as

\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]

The input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Forward

(::GCNConv)(g::GNNGraph, x, edge_weight = nothing; norm_fn = d -> 1 ./ sqrt.(d), conv_weight = nothing) -> AbstractMatrix

Takes as input a graph g, a node feature matrix x of size [in, num_nodes], and optionally an edge weight vector. Returns a node feature matrix of size [out, num_nodes].

The norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as an argument. By default, it computes $\frac{1}{\sqrt{d}}$, i.e. the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix instead of the weights stored in the model.

Examples

# create data
+y = l(g, x, e)    
source
GraphNeuralNetworks.GCNConvType
GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])

Graph convolutional layer from paper Semi-supervised Classification with Graph Convolutional Networks.

Performs the operation

\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]

where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.

If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as

\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]

The input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Forward

(::GCNConv)(g::GNNGraph, x, edge_weight = nothing; norm_fn = d -> 1 ./ sqrt.(d), conv_weight = nothing) -> AbstractMatrix

Takes as input a graph g, a node feature matrix x of size [in, num_nodes], and optionally an edge weight vector. Returns a node feature matrix of size [out, num_nodes].

The norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as an argument. By default, it computes $\frac{1}{\sqrt{d}}$, i.e. the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix instead of the weights stored in the model.
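For instance, a minimal sketch of these two keyword options (g, x and the GCNConv(3 => 5) layer l are assumed to be defined as in the examples below):

y = l(g, x; norm_fn = d -> 1 ./ d)   # custom (random-walk style) normalization
W = rand(Float32, 5, 3)              # external weight matrix of size (out, in)
y = l(g, x; conv_weight = W)         # use W instead of the layer's own weights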

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s, t)
@@ -112,7 +112,7 @@
 # Edge weights can also be embedded in the graph.
 g = GNNGraph(s, t, w)
 l = GCNConv(3 => 5, use_edge_weight=true) 
-y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.GINConvType
GINConv(f, ϵ; aggr=+)

Graph Isomorphism convolutional layer from paper How Powerful are Graph Neural Networks?.

Implements the graph convolution

\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]

where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • f: A (possibly learnable) function acting on node features.
  • ϵ: Weighting factor.

Examples:

# create data
+y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.GINConvType
GINConv(f, ϵ; aggr=+)

Graph Isomorphism convolutional layer from paper How Powerful are Graph Neural Networks?.

Implements the graph convolution

\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]

where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • f: A (possibly learnable) function acting on node features.
  • ϵ: Weighting factor.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -126,7 +126,7 @@
 l = GINConv(nn, 0.01f0, aggr = mean)
 
 # forward pass
-y = l(g, x)  
source
GraphNeuralNetworks.GMMConvType
GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)

Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]

where $w^a_{k}(e^a)$ for feature a and kernel k is given by

\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]

$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.

The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: Number of input node features.
  • ein: Number of input edge features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • K: Number of kernels. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • residual: Residual connection. Default false.

Examples

# create data
+y = l(g, x)  
source
GraphNeuralNetworks.GMMConvType
GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)

Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]

where $w^a_{k}(e^a)$ for feature a and kernel k is given by

\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]

$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.

The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: Number of input node features.
  • ein: Number of input edge features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • K: Number of kernels. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • residual: Residual connection. Default false.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s,t)
@@ -138,7 +138,7 @@
 l = GMMConv((nin, ein) => out, K=K)
 
 # forward pass
-l(g, x, e)
source
GraphNeuralNetworks.GatedGraphConvType
GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)

Gated graph convolution layer from Gated Graph Sequence Neural Networks.

Implements the recursion

\[\begin{aligned} +l(g, x, e)

source
GraphNeuralNetworks.GatedGraphConvType
GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)

Gated graph convolution layer from Gated Graph Sequence Neural Networks.

Implements the recursion

\[\begin{aligned} \mathbf{h}^{(0)}_i &= [\mathbf{x}_i; \mathbf{0}] \\ \mathbf{h}^{(l)}_i &= GRU(\mathbf{h}^{(l-1)}_i, \square_{j \in N(i)} W \mathbf{h}^{(l-1)}_j) \end{aligned}\]

where $\mathbf{h}^{(l)}_i$ denotes the $l$-th hidden state passed through the GRU. The dimension of the input $\mathbf{x}_i$ needs to be less than or equal to out.

Arguments

  • out: The dimension of output features.
  • num_layers: The number of recursion steps.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • init: Weight initialization function.

Examples:

# create data
@@ -152,7 +152,7 @@
 l = GatedGraphConv(out_channel, num_layers)
 
 # forward pass
-y = l(g, x)   
source
GraphNeuralNetworks.GraphConvType
GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)

Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.

Performs:

\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples

# create data
+y = l(g, x)   
source
GraphNeuralNetworks.GraphConvType
GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)

Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.

Performs:

\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -164,7 +164,7 @@
 l = GraphConv(in_channel => out_channel, relu, bias = false, aggr = mean)
 
 # forward pass
-y = l(g, x)       
source
GraphNeuralNetworks.MEGNetConvType
MEGNetConv(ϕe, ϕv; aggr=mean)
+y = l(g, x)       
source
GraphNeuralNetworks.MEGNetConvType
MEGNetConv(ϕe, ϕv; aggr=mean)
 MEGNetConv(in => out; aggr=mean)

Convolution from the paper Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals. In the forward pass, takes as inputs node features x and edge features e and returns updated features x' and e' according to

\[\begin{aligned} \mathbf{e}_{i\to j}' = \phi_e([\mathbf{x}_i;\, \mathbf{x}_j;\, \mathbf{e}_{i\to j}]),\\ \mathbf{x}_{i}' = \phi_v([\mathbf{x}_i;\, \square_{j\in \mathcal{N}(i)}\,\mathbf{e}_{j\to i}']). @@ -172,7 +172,7 @@ x = randn(Float32, 3, 10) e = randn(Float32, 3, 30) m = MEGNetConv(3 => 3) -x′, e′ = m(g, x, e)

source
GraphNeuralNetworks.NNConvType
NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)

The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.

Performs the operation

\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]

where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.

Arguments

  • in: The dimension of input node features.
  • out: The dimension of output node features.
  • f: A (possibly learnable) function acting on edge features.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • σ: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

n_in = 3
+x′, e′ = m(g, x, e)
source
GraphNeuralNetworks.NNConvType
NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)

The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.

Performs the operation

\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]

where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.

Arguments

  • in: The dimension of input node features.
  • out: The dimension of output node features.
  • f: A (possibly learnable) function acting on edge features.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • σ: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

n_in = 3
 n_in_edge = 10
 n_out = 5
 
@@ -191,7 +191,7 @@
 e = randn(Float32, n_in_edge, g.num_edges)
 
 # forward pass
-y = l(g, x, e)  
source
GraphNeuralNetworks.ResGatedGraphConvType
ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)

The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.

The layer's forward pass is given by

\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]

where the edge gates $\eta_{ij}$ are given by

\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • act: Activation function.
  • init: Weight matrices' initializing function.
  • bias: Learn an additive bias if true.

Examples:

# create data
+y = l(g, x, e)  
source
GraphNeuralNetworks.ResGatedGraphConvType
ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)

The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.

The layer's forward pass is given by

\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]

where the edge gates $\eta_{ij}$ are given by

\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • act: Activation function.
  • init: Weight matrices' initializing function.
  • bias: Learn an additive bias if true.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -202,7 +202,7 @@
 l = ResGatedGraphConv(in_channel => out_channel, tanh, bias = true)
 
 # forward pass
-y = l(g, x)  
source
GraphNeuralNetworks.SAGEConvType
SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)

GraphSAGE convolution layer from paper Inductive Representation Learning on Large Graphs.

Performs:

\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

# create data
+y = l(g, x)  
source
GraphNeuralNetworks.SAGEConvType
SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)

GraphSAGE convolution layer from paper Inductive Representation Learning on Large Graphs.

Performs:

\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -213,7 +213,7 @@
 l = SAGEConv(in_channel => out_channel, tanh, bias = false, aggr = +)
 
 # forward pass
-y = l(g, x)   
source
GraphNeuralNetworks.SGConvType
SGConv(in => out, k=1; [bias, init, add_self_loops, use_edge_weight])

SGC layer from Simplifying Graph Convolutional Networks. Performs the operation

\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]

where $\tilde{A}$ is $A + I$.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k : Number of hops k. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.

Examples

# create data
+y = l(g, x)   
source
GraphNeuralNetworks.SGConvType
SGConv(in => out, k=1; [bias, init, add_self_loops, use_edge_weight])

SGC layer from Simplifying Graph Convolutional Networks. Performs the operation

\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]

where $\tilde{A}$ is $A + I$.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k : Number of hops k. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s, t)
@@ -232,7 +232,7 @@
 # Edge weights can also be embedded in the graph.
 g = GNNGraph(s, t, w)
 l = SGConv(3 => 5, add_self_loops = true, use_edge_weight=true) 
-y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.TAGConvType
TAGConv(in => out, k=3; bias=true, init=glorot_uniform, add_self_loops=true, use_edge_weight=false)

TAGConv layer from Topology Adaptive Graph Convolutional Networks. This layer extends the idea of graph convolutions by applying filters that adapt to the topology of the data. It performs the operation:

\[H^{K} = {\sum}_{k=0}^K (D^{-1/2} A D^{-1/2})^{k} X {\Theta}_{k}\]

where A is the adjacency matrix of the graph, D is the degree matrix, X is the input feature matrix, and ${\Theta}_{k}$ is a unique weight matrix for each hop k.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Maximum number of hops to consider. Default is 3.
  • bias: Whether to include a learnable bias term. Default is true.
  • init: Initialization function for the weights. Default is glorot_uniform.
  • add_self_loops: Whether to add self-loops to the adjacency matrix. Default is true.
  • use_edge_weight: If true, edge weights are considered in the computation (if available). Default is false.

Examples

# Example graph data
+y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.TAGConvType
TAGConv(in => out, k=3; bias=true, init=glorot_uniform, add_self_loops=true, use_edge_weight=false)

TAGConv layer from Topology Adaptive Graph Convolutional Networks. This layer extends the idea of graph convolutions by applying filters that adapt to the topology of the data. It performs the operation:

\[H^{K} = {\sum}_{k=0}^K (D^{-1/2} A D^{-1/2})^{k} X {\Theta}_{k}\]

where A is the adjacency matrix of the graph, D is the degree matrix, X is the input feature matrix, and ${\Theta}_{k}$ is a unique weight matrix for each hop k.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Maximum number of hops to consider. Default is 3.
  • bias: Whether to include a learnable bias term. Default is true.
  • init: Initialization function for the weights. Default is glorot_uniform.
  • add_self_loops: Whether to add self-loops to the adjacency matrix. Default is true.
  • use_edge_weight: If true, edge weights are considered in the computation (if available). Default is false.

Examples

# Example graph data
 s = [1, 1, 2, 3]
 t = [2, 3, 1, 1]
 g = GNNGraph(s, t)  # Create a graph
@@ -242,7 +242,7 @@
 l = TAGConv(3 => 5, k=3; add_self_loops=true)
 
 # Apply the TAGConv layer
-y = l(g, x)  # Output size: 5 × num_nodes
source
GraphNeuralNetworks.TransformerConvType
TransformerConv((in, ein) => out; [heads, concat, init, add_self_loops, bias_qkv,
+y = l(g, x)  # Output size: 5 × num_nodes
source
GraphNeuralNetworks.TransformerConvType
TransformerConv((in, ein) => out; [heads, concat, init, add_self_loops, bias_qkv,
     bias_root, root_weight, gating, skip_connection, batch_norm, ff_channels]))

The transformer-like multi head attention convolutional operator from the Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification paper, which also considers edge features. It further contains options to also be configured as the transformer-like convolutional operator from the Attention, Learn to Solve Routing Problems! paper, including a successive feed-forward network as well as skip layers and batch normalization.

The layer's basic forward pass is given by

\[x_i' = W_1x_i + \sum_{j\in N(i)} \alpha_{ij} (W_2 x_j + W_6e_{ij})\]

where the attention scores are

\[\alpha_{ij} = \mathrm{softmax}\left(\frac{(W_3x_i)^T(W_4x_j+ W_6e_{ij})}{\sqrt{d}}\right).\]

Optionally, the aggregated value can be combined with the transformed root node features through a gating mechanism:

\[x'_i = \beta_i W_1 x_i + (1 - \beta_i) \underbrace{\left(\sum_{j \in \mathcal{N}(i)} \alpha_{i,j} W_2 x_j \right)}_{=m_i}\]

with

\[\beta_i = \textrm{sigmoid}(W_5^{\top} [ W_1 x_i, m_i, W_1 x_i - m_i ]).\]

Arguments

  • in: Dimension of input features, which also corresponds to the dimension of the output features.
  • ein: Dimension of the edge features; if 0, no edge features will be used.
  • out: Dimension of the output.
  • heads: Number of heads in output. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • init: Weight matrices' initializing function. Default glorot_uniform.
  • add_self_loops: Add self loops to the input graph. Default false.
  • bias_qkv: If set, bias is used in the key, query and value transformations for nodes. Default true.
  • bias_root: If set, the layer will also learn an additive bias for the root when root weight is used. Default true.
  • root_weight: If set, the layer will add the transformed root node features to the output. Default true.
  • gating: If set, will combine aggregation and transformed root node features by a gating mechanism. Default false.
  • skip_connection: If set, a skip connection will be made from the input and added to the output. Default false.
  • batch_norm: If set, a batch normalization will be applied to the output. Default false.
  • ff_channels: If positive, a feed-forward NN is appended, with its first layer having the given number of hidden nodes; this NN also gets a skip connection and batch normalization if the respective parameters are set. Default: 0.

Examples

N, in_channel, out_channel = 4, 3, 5
@@ -251,4 +251,4 @@
 l = TransformerConv((in_channel, ein) => in_channel; heads, gating = true, bias_qkv = true)
 x = rand(Float32, in_channel, N)
 e = rand(Float32, ein, g.num_edges)
-l(g, x, e)
source
\ No newline at end of file +l(g, x, e)source
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/api/heteroconv/index.html b/docs/GraphNeuralNetworks.jl/dev/api/heteroconv/index.html index 206febd46..58281ce20 100644 --- a/docs/GraphNeuralNetworks.jl/dev/api/heteroconv/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/api/heteroconv/index.html @@ -12,4 +12,4 @@ julia> y = layer(g, x); # output is a named tuple julia> size(y.A) == (32, 10) && size(y.B) == (32, 15) -truesource
\ No newline at end of file +truesource
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/api/pool/index.html b/docs/GraphNeuralNetworks.jl/dev/api/pool/index.html index 54b0c25e7..b52aabc6e 100644 --- a/docs/GraphNeuralNetworks.jl/dev/api/pool/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/api/pool/index.html @@ -12,7 +12,7 @@ u = pool(g, g.ndata.x) -@assert size(u) == (chout, g.num_graphs)source
GraphNeuralNetworks.GlobalPoolType
GlobalPool(aggr)

Global pooling layer for graph neural networks. Takes a graph and feature nodes as inputs and performs the operation

\[\mathbf{u}_V = \square_{i \in V} \mathbf{x}_i\]

where $V$ is the set of nodes of the input graph and the type of aggregation represented by $\square$ is selected by the aggr argument. Commonly used aggregations are mean, max, and +.

See also reduce_nodes.

Examples

using Flux, GraphNeuralNetworks, Graphs
+@assert size(u) == (chout, g.num_graphs)
source
GraphNeuralNetworks.GlobalPoolType
GlobalPool(aggr)

Global pooling layer for graph neural networks. Takes a graph and feature nodes as inputs and performs the operation

\[\mathbf{u}_V = \square_{i \in V} \mathbf{x}_i\]

where $V$ is the set of nodes of the input graph and the type of aggregation represented by $\square$ is selected by the aggr argument. Commonly used aggregations are mean, max, and +.

See also reduce_nodes.

Examples

using Flux, GraphNeuralNetworks, Graphs
 
 pool = GlobalPool(mean)
 
@@ -23,7 +23,7 @@
 
 g = Flux.batch([GNNGraph(erdos_renyi(10, 4)) for _ in 1:5])
 X = rand(32, 50)
-pool(g, X) # => 32x5 matrix
source
GraphNeuralNetworks.Set2SetType
Set2Set(n_in, n_iters, n_layers = 1)

Set2Set layer from the paper Order Matters: Sequence to sequence for sets.

For each graph in the batch, the layer computes an output vector of size 2*n_in by iterating the following steps n_iters times:

\[\mathbf{q} = \mathrm{LSTM}(\mathbf{q}_{t-1}^*) +pool(g, X) # => 32x5 matrix

source
GraphNeuralNetworks.Set2SetType
Set2Set(n_in, n_iters, n_layers = 1)

Set2Set layer from the paper Order Matters: Sequence to sequence for sets.

For each graph in the batch, the layer computes an output vector of size 2*n_in by iterating the following steps n_iters times:

\[\mathbf{q} = \mathrm{LSTM}(\mathbf{q}_{t-1}^*) \alpha_{i} = \frac{\exp(\mathbf{q}^T \mathbf{x}_i)}{\sum_{j=1}^N \exp(\mathbf{q}^T \mathbf{x}_j)} \mathbf{r} = \sum_{i=1}^N \alpha_{i} \mathbf{x}_i -\mathbf{q}^*_t = [\mathbf{q}; \mathbf{r}]\]

where N is the number of nodes in the graph and LSTM is a Long Short-Term Memory network with n_layers layers, input size 2*n_in, and output size n_in.

Given a batch of graphs g and node features x, the layer returns a matrix of size (2*n_in, n_graphs).

source
GraphNeuralNetworks.TopKPoolType
TopKPool(adj, k, in_channel)

Top-k pooling layer.

Arguments

  • adj: Adjacency matrix of a graph.
  • k: Top-k nodes are selected to pool together.
  • in_channel: The dimension of the input channel.
source
\ No newline at end of file +\mathbf{q}^*_t = [\mathbf{q}; \mathbf{r}]\]

where N is the number of nodes in the graph and LSTM is a Long Short-Term Memory network with n_layers layers, input size 2*n_in, and output size n_in.

Given a batch of graphs g and node features x, the layer returns a matrix of size (2*n_in, n_graphs).
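A minimal usage sketch (not from the original docstring; the shapes follow the description above):

using Flux, GraphNeuralNetworks

g = Flux.batch([rand_graph(10, 30) for _ in 1:4])   # batch of 4 graphs
x = rand(Float32, 16, g.num_nodes)
l = Set2Set(16, 3)   # n_in = 16, n_iters = 3
y = l(g, x)          # size (32, 4), i.e. (2 * n_in, n_graphs)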

source
GraphNeuralNetworks.TopKPoolType
TopKPool(adj, k, in_channel)

Top-k pooling layer.

Arguments

  • adj: Adjacency matrix of a graph.
  • k: Top-k nodes are selected to pool together.
  • in_channel: The dimension of the input channel.
source
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/api/temporalconv/index.html b/docs/GraphNeuralNetworks.jl/dev/api/temporalconv/index.html index 501afaefc..ae7b6aea3 100644 --- a/docs/GraphNeuralNetworks.jl/dev/api/temporalconv/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/api/temporalconv/index.html @@ -13,7 +13,7 @@ julia> y = a3tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20)); julia> size(y) -(6, 5)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source
GraphNeuralNetworks.EvolveGCNOType
EvolveGCNO(ch; bias = true, init = glorot_uniform, init_state = Flux.zeros32)

Evolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.

Performs a Graph Convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the LSTM layer. Default zeros32.

Examples

julia> tg = TemporalSnapshotsGNNGraph([rand_graph(10,20; ndata = rand(4,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(4,10))])
+(6, 5)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source
GraphNeuralNetworks.EvolveGCNOType
EvolveGCNO(ch; bias = true, init = glorot_uniform, init_state = Flux.zeros32)

Evolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.

Performs a Graph Convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the LSTM layer. Default zeros32.

Examples

julia> tg = TemporalSnapshotsGNNGraph([rand_graph(10,20; ndata = rand(4,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(4,10))])
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10]
   num_edges: [20, 14, 22]
@@ -26,7 +26,7 @@
 (3,)
 
 julia> size(ev(tg, tg.ndata.x)[1])
-(5, 10)
source
GraphNeuralNetworks.DCGRUMethod
DCGRU(in => out, k, n; [bias, init, init_state])

Diffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.

Performs a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Diffusion step.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the GRU cell. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
+(5, 10)
source
GraphNeuralNetworks.DCGRUMethod
DCGRU(in => out, k, n; [bias, init, init_state])

Diffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.

Performs a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Diffusion step.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the GRU cell. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
 
 julia> dcgru = DCGRU(2 => 5, 2, g1.num_nodes);
 
@@ -40,7 +40,7 @@
 julia> z = dcgru(g2, x2);
 
 julia> size(z)
-(5, 5, 30)
source
GraphNeuralNetworks.GConvGRUMethod
GConvGRU(in => out, k, n; [bias, init, init_state])

Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the GRU layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
+(5, 5, 30)
source
GraphNeuralNetworks.GConvGRUMethod
GConvGRU(in => out, k, n; [bias, init, init_state])

Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the GRU layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
 
 julia> ggru = GConvGRU(2 => 5, 2, g1.num_nodes);
 
@@ -54,7 +54,7 @@
 julia> z = ggru(g2, x2);
 
 julia> size(z)
-(5, 5, 30)
source
GraphNeuralNetworks.GConvLSTMMethod
GConvLSTM(in => out, k, n; [bias, init, init_state])

Graph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the LSTM layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
+(5, 5, 30)
source
GraphNeuralNetworks.GConvLSTMMethod
GConvLSTM(in => out, k, n; [bias, init, init_state])

Graph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the LSTM layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
 
 julia> gclstm = GConvLSTM(2 => 5, 2, g1.num_nodes);
 
@@ -68,7 +68,7 @@
 julia> z = gclstm(g2, x2);
 
 julia> size(z)
-(5, 5, 30)
source
GraphNeuralNetworks.TGCNMethod
TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])

Temporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.

Performs a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the GRU layer. Default zeros32.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Examples

julia> tgcn = TGCN(2 => 6)
+(5, 5, 30)
source
GraphNeuralNetworks.TGCNMethod
TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])

Temporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.

Performs a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the GRU layer. Default zeros32.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Examples

julia> tgcn = TGCN(2 => 6)
 Recur(
   TGCNCell(
     GCNConv(2 => 6, σ),                 # 18 parameters
@@ -90,4 +90,4 @@
 julia> Flux.reset!(tgcn);
 
 julia> tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20)) |> size # batch size of 20
-(6, 5, 20)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source
\ No newline at end of file +(6, 5, 20)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source
\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/dev/index.html b/docs/GraphNeuralNetworks.jl/dev/dev/index.html index 0e7212909..10c222a3b 100644 --- a/docs/GraphNeuralNetworks.jl/dev/dev/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/dev/index.html @@ -45,4 +45,4 @@ julia> @load "perf_pr_20210803_mymachine.jld2" julia> compare(dfpr, dfmaster)

Caching tutorials

Tutorials in GraphNeuralNetworks.jl are written in Pluto and rendered using DemoCards.jl and PlutoStaticHTML.jl. Rendering a Pluto notebook is time- and resource-consuming, especially in a CI environment, so we use the caching functionality provided by PlutoStaticHTML.jl to reduce CI time.

If you are contributing a new tutorial or making changes to an existing notebook, generate the docs locally before committing/pushing. For caching to work, the caching environment (your local machine) and the documenter CI must run the same Julia version (e.g. "v1.9.1"; the patch number must match as well). So use the documenter CI Julia version when generating docs locally.

julia --version # check julia version before generating docs
-julia --project=docs docs/make.jl

Note: Use juliaup for easy switching of Julia versions.

During the doc generation process, DemoCards.jl stores the cached notebooks in docs/pluto_output, so include any changes made in this folder in your git commit. Remember that every file in this folder is machine-generated and should not be edited manually.

git add docs/pluto_output # add generated cache

Check the documenter CI logs to ensure that it used the local cache:

\ No newline at end of file +julia --project=docs docs/make.jl

Note: Use juliaup for easy switching of Julia versions.

During the doc generation process, DemoCards.jl stores the cached notebooks in docs/pluto_output, so include any changes made in this folder in your git commit. Remember that every file in this folder is machine-generated and should not be edited manually.

git add docs/pluto_output # add generated cache

Check the documenter CI logs to ensure that it used the local cache:

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/guides/models/index.html b/docs/GraphNeuralNetworks.jl/dev/guides/models/index.html index dab8af055..02817a418 100644 --- a/docs/GraphNeuralNetworks.jl/dev/guides/models/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/guides/models/index.html @@ -59,4 +59,4 @@ X = randn(Float32, din, 10) # Pass only X as input, the model already contains the graph. -y = model(X)

An example of WithGraph usage is given in the graph neural ODE script in the examples folder.

\ No newline at end of file +y = model(X)

An example of WithGraph usage is given in the graph neural ODE script in the examples folder.

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/index.html b/docs/GraphNeuralNetworks.jl/dev/index.html index dde514738..882de6abd 100644 --- a/docs/GraphNeuralNetworks.jl/dev/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/index.html @@ -41,4 +41,4 @@ title = {GraphNeuralNetworks.jl: a geometric deep learning library for the Julia programming language}, year = 2021, url = {https://github.com/JuliaGraphs/GraphNeuralNetworks.jl} -}

Acknowledgments

GraphNeuralNetworks.jl is largely inspired by PyTorch Geometric, Deep Graph Library, and GeometricFlux.jl.

\ No newline at end of file +}

Acknowledgments

GraphNeuralNetworks.jl is largely inspired by PyTorch Geometric, Deep Graph Library, and GeometricFlux.jl.

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/tutorials/gnn_intro_pluto/index.html b/docs/GraphNeuralNetworks.jl/dev/tutorials/gnn_intro_pluto/index.html index fb4d7fd45..ad346387b 100644 --- a/docs/GraphNeuralNetworks.jl/dev/tutorials/gnn_intro_pluto/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/tutorials/gnn_intro_pluto/index.html @@ -152,4 +152,4 @@ end end
ŷ, emb_final = model(g, g.ndata.x)
(Float32[-8.871021 -6.288402 … 7.8817716 7.3984337; 7.873129 5.5748186 … -8.054153 -7.562167; 0.6939411 2.6538918 … 0.1978332 0.633129; 0.42380208 -1.7143326 … -0.14687762 -0.5542332], Float32[-0.99049056 -0.9905237 … 0.99305063 0.87260294; -0.9905631 -0.40585023 … 0.9999852 0.99999404])
# train accuracy
 mean(onecold(ŷ[:, train_mask]) .== onecold(y[:, train_mask]))
1.0
# test accuracy
-mean(onecold(ŷ[:, .!train_mask]) .== onecold(y[:, .!train_mask]))
0.8
visualize_embeddings(emb_final, colors = labels)

As one can see, our 3-layer GCN model manages to linearly separate the communities and classify most of the nodes correctly.

Furthermore, we did this all with a few lines of code, thanks to GraphNeuralNetworks.jl, which helped us out with data handling and GNN implementations.

\ No newline at end of file +mean(onecold(ŷ[:, .!train_mask]) .== onecold(y[:, .!train_mask]))
0.8
visualize_embeddings(emb_final, colors = labels)

As one can see, our 3-layer GCN model manages to linearly separate the communities and classify most of the nodes correctly.

Furthermore, we did this all with a few lines of code, thanks to GraphNeuralNetworks.jl, which helped us out with data handling and GNN implementations.

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/tutorials/graph_classification_pluto/index.html b/docs/GraphNeuralNetworks.jl/dev/tutorials/graph_classification_pluto/index.html index 059287c9b..9e1ca7fcb 100644 --- a/docs/GraphNeuralNetworks.jl/dev/tutorials/graph_classification_pluto/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/tutorials/graph_classification_pluto/index.html @@ -119,4 +119,4 @@ nout = 2 model = create_model(nin, nh, nout) train!(model) -end

As one can see, our model reaches around 74% test accuracy. The fluctuations in accuracy can be explained by the rather small dataset (only 38 test graphs); they usually disappear once GNNs are applied to larger datasets.

(Optional) Exercise

Can we do better than this? As multiple papers pointed out (Xu et al. (2018), Morris et al. (2018)), applying neighborhood normalization decreases the expressivity of GNNs in distinguishing certain graph structures. An alternative formulation (Morris et al. (2018)) omits neighborhood normalization completely and adds a simple skip-connection to the GNN layer in order to preserve central node information:

$$\mathbf{x}_i^{(\ell+1)} = \mathbf{W}^{(\ell + 1)}_1 \mathbf{x}_i^{(\ell)} + \mathbf{W}^{(\ell + 1)}_2 \sum_{j \in \mathcal{N}(i)} \mathbf{x}_j^{(\ell)}$$

This layer is implemented under the name GraphConv in GraphNeuralNetworks.jl.

As an exercise, you are invited to complete the following code so that it makes use of GraphConv rather than GCNConv. This should bring you close to 82% test accuracy; a sketch of one possible solution is given below.
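
One possible solution, sketched under the assumption that the create_model skeleton shown earlier stays unchanged apart from the layer type (nin, nh, and nout are the dimensions defined earlier in the tutorial):

using Statistics: mean
using Flux, GraphNeuralNetworks

# Same skeleton as create_model above, with GraphConv in place of GCNConv:
# no neighborhood normalization, plus a learned skip-connection for the
# central node features.
function create_graphconv_model(nin, nh, nout)
    return GNNChain(GraphConv(nin => nh, relu),
                    GraphConv(nh => nh, relu),
                    GraphConv(nh => nh),
                    GlobalPool(mean),
                    Dropout(0.5),
                    Dense(nh, nout))
end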

Conclusion

In this chapter, you have learned how to apply GNNs to the task of graph classification. You have learned how graphs can be batched together for better GPU utilization, and how to apply readout layers for obtaining graph embeddings rather than node embeddings.
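
As a reminder of the batching idiom mentioned above, here is a toy sketch (not the tutorial's exact code; rand_graph comes from GNNGraphs.jl):

using MLUtils, GraphNeuralNetworks

# Merge four random graphs into a single disjoint-union GNNGraph,
# so that one forward pass processes the whole mini-batch.
gs = [rand_graph(10, 30; ndata = rand(Float32, 3, 10)) for _ in 1:4]
gbatch = MLUtils.batch(gs)
@assert gbatch.num_graphs == 4
@assert gbatch.num_nodes == 40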

\ No newline at end of file +end

As one can see, our model reaches around 74% test accuracy. The fluctuations in accuracy can be explained by the rather small dataset (only 38 test graphs), and they usually disappear once GNNs are applied to larger datasets.

(Optional) Exercise

Can we do better than this? As multiple papers have pointed out (Xu et al. (2018), Morris et al. (2018)), applying neighborhood normalization decreases the expressivity of GNNs in distinguishing certain graph structures. An alternative formulation (Morris et al. (2018)) omits neighborhood normalization completely and adds a simple skip-connection to the GNN layer in order to preserve central node information:

$$\mathbf{x}_i^{(\ell+1)} = \mathbf{W}^{(\ell + 1)}_1 \mathbf{x}_i^{(\ell)} + \mathbf{W}^{(\ell + 1)}_2 \sum_{j \in \mathcal{N}(i)} \mathbf{x}_j^{(\ell)}$$

This layer is implemented under the name GraphConv in GraphNeuralNetworks.jl.

As an exercise, you are invited to complete the following code so that it makes use of GraphConv rather than GCNConv. This should bring you close to 82% test accuracy; a sketch of one possible solution is given below.
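
One possible solution, sketched under the assumption that the create_model skeleton shown earlier stays unchanged apart from the layer type (nin, nh, and nout are the dimensions defined earlier in the tutorial):

using Statistics: mean
using Flux, GraphNeuralNetworks

# Same skeleton as create_model above, with GraphConv in place of GCNConv:
# no neighborhood normalization, plus a learned skip-connection for the
# central node features.
function create_graphconv_model(nin, nh, nout)
    return GNNChain(GraphConv(nin => nh, relu),
                    GraphConv(nh => nh, relu),
                    GraphConv(nh => nh),
                    GlobalPool(mean),
                    Dropout(0.5),
                    Dense(nh, nout))
end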

Conclusion

In this chapter, you have learned how to apply GNNs to the task of graph classification. You have learned how graphs can be batched together for better GPU utilization, and how to apply readout layers for obtaining graph embeddings rather than node embeddings.
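
As a reminder of the batching idiom mentioned above, here is a toy sketch (not the tutorial's exact code; rand_graph comes from GNNGraphs.jl):

using MLUtils, GraphNeuralNetworks

# Merge four random graphs into a single disjoint-union GNNGraph,
# so that one forward pass processes the whole mini-batch.
gs = [rand_graph(10, 30; ndata = rand(Float32, 3, 10)) for _ in 1:4]
gbatch = MLUtils.batch(gs)
@assert gbatch.num_graphs == 4
@assert gbatch.num_nodes == 40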

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/tutorials/node_classification_pluto/index.html b/docs/GraphNeuralNetworks.jl/dev/tutorials/node_classification_pluto/index.html index c6f7207d2..5d3b36747 100644 --- a/docs/GraphNeuralNetworks.jl/dev/tutorials/node_classification_pluto/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/tutorials/node_classification_pluto/index.html @@ -177,4 +177,4 @@ out_trained = gcn(g, x) |> transpose visualize_tsne(out_trained, g.ndata.targets) -end

(Optional) Exercises

  1. To achieve better model performance and to avoid overfitting, it is usually a good idea to select the best model based on an additional validation set. The Cora dataset provides a validation node set as g.ndata.val_mask, but we haven't used it yet. Can you modify the code to select and test the model with the highest validation performance? This should bring test performance to 82% accuracy.

  2. How does GCN behave when increasing the hidden feature dimensionality or the number of layers? Does increasing the number of layers help at all?

  3. You can try to use different GNN layers to see how model performance changes. What happens if you swap out all GCNConv instances with GATConv layers that make use of attention? Try to write a 2-layer GAT model that makes use of 8 attention heads in the first layer and 1 attention head in the second layer, uses a dropout ratio of 0.6 inside and outside each GATConv call, and uses a hidden_channels dimension of 8 per head (a hedged sketch follows this list).
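
A hedged sketch of the GAT model from exercise 3, assuming GATConv exposes a dropout keyword for its attention coefficients (nin and nout stand for the feature and class dimensions of the dataset):

using Flux, GraphNeuralNetworks

# 2-layer GAT: 8 heads of 8 channels each, then a single head;
# dropout of 0.6 both outside (Dropout layers) and inside each GATConv.
gat = GNNChain(Dropout(0.6),
               GATConv(nin => 8, relu; heads = 8, dropout = 0.6),
               Dropout(0.6),
               GATConv(8 * 8 => nout; heads = 1, dropout = 0.6))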

Conclusion

In this tutorial, we have seen how to apply GNNs to real-world problems and, in particular, how they can be used effectively to boost a model's performance. In the next tutorial, we will look into how GNNs can be used for the task of graph classification.

\ No newline at end of file +end

(Optional) Exercises

  1. To achieve better model performance and to avoid overfitting, it is usually a good idea to select the best model based on an additional validation set. The Cora dataset provides a validation node set as g.ndata.val_mask, but we haven't used it yet. Can you modify the code to select and test the model with the highest validation performance? This should bring test performance to 82% accuracy.

  2. How does GCN behave when increasing the hidden feature dimensionality or the number of layers? Does increasing the number of layers help at all?

  3. You can try to use different GNN layers to see how model performance changes. What happens if you swap out all GCNConv instances with GATConv layers that make use of attention? Try to write a 2-layer GAT model that makes use of 8 attention heads in the first layer and 1 attention head in the second layer, uses a dropout ratio of 0.6 inside and outside each GATConv call, and uses a hidden_channels dimension of 8 per head (a hedged sketch follows this list).
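
A hedged sketch of the GAT model from exercise 3, assuming GATConv exposes a dropout keyword for its attention coefficients (nin and nout stand for the feature and class dimensions of the dataset):

using Flux, GraphNeuralNetworks

# 2-layer GAT: 8 heads of 8 channels each, then a single head;
# dropout of 0.6 both outside (Dropout layers) and inside each GATConv.
gat = GNNChain(Dropout(0.6),
               GATConv(nin => 8, relu; heads = 8, dropout = 0.6),
               Dropout(0.6),
               GATConv(8 * 8 => nout; heads = 1, dropout = 0.6))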

Conclusion

In this tutorial, we have seen how to apply GNNs to real-world problems and, in particular, how they can be used effectively to boost a model's performance. In the next tutorial, we will look into how GNNs can be used for the task of graph classification.

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/tutorials/temporal_graph_classification_pluto/index.html b/docs/GraphNeuralNetworks.jl/dev/tutorials/temporal_graph_classification_pluto/index.html index c947dd3c7..d0aea2ed7 100644 --- a/docs/GraphNeuralNetworks.jl/dev/tutorials/temporal_graph_classification_pluto/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/tutorials/temporal_graph_classification_pluto/index.html @@ -119,4 +119,4 @@ end return model end; -
train(brain_dataset; usecuda = true)
GenderPredictionModel(GINConv(Chain(Dense(103 => 128, relu), Dense(128 => 128, relu)), 0.5), Chain(Dense(103 => 128, relu), Dense(128 => 128, relu)), GlobalPool{typeof(mean)}(Statistics.mean), var"#4#5"(), Dense(128 => 2))  # 30_082 parameters, plus 29_824 non-trainable

We set up the training on the GPU because training takes a lot of time, especially on the CPU; a minimal sketch of the device handling follows.
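
A minimal sketch of what usecuda = true boils down to, assuming CUDA.jl is installed and model and g come from the code above:

using CUDA, Flux

# Select the GPU when one is usable, otherwise stay on the CPU.
device = CUDA.functional() ? Flux.gpu : Flux.cpu
model = model |> device  # moves the parameters
g = g |> device          # moves graph structure and features alike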

Conclusions

In this tutorial, we implemented a very simple architecture to classify temporal graphs in the context of gender classification using brain data. We then trained the model on the GPU for 100 epochs on the TemporalBrains dataset. The accuracy of the model is approximately 75-80%, but can be improved by fine-tuning the parameters and training on more data.

\ No newline at end of file +
train(brain_dataset; usecuda = true)
GenderPredictionModel(GINConv(Chain(Dense(103 => 128, relu), Dense(128 => 128, relu)), 0.5), Chain(Dense(103 => 128, relu), Dense(128 => 128, relu)), GlobalPool{typeof(mean)}(Statistics.mean), var"#4#5"(), Dense(128 => 2))  # 30_082 parameters, plus 29_824 non-trainable

We set up the training on the GPU because training takes a lot of time, especially on the CPU; a minimal sketch of the device handling follows.
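
A minimal sketch of what usecuda = true boils down to, assuming CUDA.jl is installed and model and g come from the code above:

using CUDA, Flux

# Select the GPU when one is usable, otherwise stay on the CPU.
device = CUDA.functional() ? Flux.gpu : Flux.cpu
model = model |> device  # moves the parameters
g = g |> device          # moves graph structure and features alike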

Conclusions

In this tutorial, we implemented a very simple architecture to classify temporal graphs in the context of gender classification using brain data. We then trained the model on the GPU for 100 epochs on the TemporalBrains dataset. The accuracy of the model is approximately 75-80%, but can be improved by fine-tuning the parameters and training on more data.

\ No newline at end of file diff --git a/docs/GraphNeuralNetworks.jl/dev/tutorials/traffic_prediction/index.html b/docs/GraphNeuralNetworks.jl/dev/tutorials/traffic_prediction/index.html index 5df99bc7a..30afdae9a 100644 --- a/docs/GraphNeuralNetworks.jl/dev/tutorials/traffic_prediction/index.html +++ b/docs/GraphNeuralNetworks.jl/dev/tutorials/traffic_prediction/index.html @@ -85,4 +85,4 @@ plot!(p, collect(1:length(features)), grand_truth, color = :blue, label = "Ground Truth", xticks = ([i for i in 0:50:250], ["$(i)" for i in 0:4:24])) plot!(p, collect(1:length(features)), prediction, color = :red, label = "Prediction") return p -end
plot_predicted_data (generic function with 1 method)
plot_predicted_data(graph, features[301:588], targets[301:588], 1)
accuracy(ŷ, y) = 1 - LinearAlgebra.norm(y - ŷ) / LinearAlgebra.norm(y)
accuracy (generic function with 1 method)
mean([accuracy(model(graph, x), y) for (x, y) in test_loader])
0.47803628f0

The accuracy is not very good, but it can be improved by training on more data. We used a small subset of the dataset for this tutorial because of the computational cost of training the model. From the plot of the predictions, we can see that the model captures the general trend of the traffic speed, but it fails to capture the peaks.

Conclusion

In this tutorial, we learned how to use a recurrent temporal graph convolutional network to predict traffic in a spatio-temporal setting. We used the TGCN model, which consists of a graph convolutional network (GCN) and a gated recurrent unit (GRU). We then trained the model for 100 epochs on a small subset of the METR-LA dataset. The accuracy of the model is not very good, but it can be improved by training on more data.

\ No newline at end of file +end
plot_predicted_data (generic function with 1 method)
plot_predicted_data(graph, features[301:588], targets[301:588], 1)
accuracy(ŷ, y) = 1 - LinearAlgebra.norm(y - ŷ) / LinearAlgebra.norm(y)
accuracy (generic function with 1 method)
mean([accuracy(model(graph, x), y) for (x, y) in test_loader])
0.47803628f0

The accuracy is not very good, but it can be improved by training on more data. We used a small subset of the dataset for this tutorial because of the computational cost of training the model. From the plot of the predictions, we can see that the model captures the general trend of the traffic speed, but it fails to capture the peaks.

Conclusion

In this tutorial, we learned how to use a recurrent temporal graph convolutional network to predict traffic in a spatio-temporal setting. We used the TGCN model, which consists of a graph convolutional network (GCN) and a gated recurrent unit (GRU). We then trained the model for 100 epochs on a small subset of the METR-LA dataset. The accuracy of the model is not very good, but it can be improved by training on more data.

\ No newline at end of file diff --git a/logo.svg b/logo.svg new file mode 100644 index 000000000..cac604fcd --- /dev/null +++ b/logo.svg @@ -0,0 +1,31 @@ [31 added lines of SVG markup for the new logo; only the text label "V9" survives extraction] \ No newline at end of file