From 5e4e9f6e34e1ccb56eb0d9bae05ebb0718a5da7f Mon Sep 17 00:00:00 2001
From: robertturner <143536791+robertdhayanturner@users.noreply.github.com>
Date: Wed, 17 Jan 2024 04:18:34 -0500
Subject: [PATCH] Update node_representation_learning.md
Remove duplicate graph_sage png
---
docs/use_cases/node_representation_learning.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/use_cases/node_representation_learning.md b/docs/use_cases/node_representation_learning.md
index 2f68b1b02..97e3fe416 100644
--- a/docs/use_cases/node_representation_learning.md
+++ b/docs/use_cases/node_representation_learning.md
@@ -84,7 +84,7 @@ As opposed to BoW vectors, node embeddings are vector representations that captu
$P(\text{context}|\text{source}) = \frac{1}{Z}\exp(w_{c}^Tw_s)$
-->
-
+
Here, *w_c* and *w_s* are the embeddings of the context node *c* and the source node *s*, respectively. The variable *Z* is a normalization constant which, for computational efficiency, is never explicitly computed.
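To make the role of *Z* concrete: methods in this family typically sidestep it with negative sampling, scoring the true (source, context) pair against a handful of randomly drawn "negative" nodes instead of normalizing over every node in the graph. A minimal PyTorch-style sketch, for illustration only (the tensor names `w_s`, `w_c`, and `w_neg` are hypothetical, not from the article):

```python
import torch
import torch.nn.functional as F

def skipgram_loss(w_s, w_c, w_neg):
    # w_s:   (d,)   embedding of the source node s
    # w_c:   (d,)   embedding of an observed context node c
    # w_neg: (k, d) embeddings of k randomly sampled negative nodes
    pos = F.logsigmoid(torch.dot(w_c, w_s))   # pull the true pair together
    neg = F.logsigmoid(-(w_neg @ w_s)).sum()  # push random pairs apart, standing in for Z
    return -(pos + neg)
```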
@@ -209,7 +209,7 @@ The GraphSAGE layer is defined as follows:
-![GraphSAGE layer defintion](../assets/use_cases/node_representation_learning/sage_layer_eqn_v3.png)
+
Here σ is a nonlinear activation function, *W^k* is a learnable parameter of layer *k*, and *N(i)* is the set of nodes neighboring node *i*. As with traditional neural networks, we can stack multiple GNN layers. The resulting multi-layer GNN will have a wider receptive field. That is, it can incorporate information from more distant nodes, thanks to recursive neighborhood aggregation.
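The layer definition above maps to only a few lines of code. Below is a minimal sketch of a GraphSAGE-style layer with mean aggregation, assuming the σ / *W^k* / *N(i)* formulation just described; the class and argument names are hypothetical, not taken from any specific library:

```python
import torch
import torch.nn as nn

class SAGELayer(nn.Module):
    """One GraphSAGE-style layer: h_i = σ(W · [h_i ‖ mean_{j∈N(i)} h_j])."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        # W^k acts on the node's own features concatenated with the neighbor aggregate.
        self.W = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, neighbors):
        # h:         (n, in_dim) node features from layer k-1
        # neighbors: neighbors[i] is an index tensor listing N(i) (assumed non-empty)
        agg = torch.stack([h[nbrs].mean(dim=0) for nbrs in neighbors])
        return torch.relu(self.W(torch.cat([h, agg], dim=1)))
```

Stacking two such layers gives each node a two-hop receptive field, which is the recursive neighborhood aggregation the paragraph refers to.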