In the ggnn function, the input shape of dynamic_rnn is (self.batch_size * max_n_node, 1, 2 * hidden_size) = (# of all items in a batch, 1, 2 * hidden_size), and the initial state's shape is (self.batch_size * max_n_node, hidden_size) = (# of all items in a batch, hidden_size).
So there is only one time step for each sequence (the second element, max_time, in inputs = 1), which means the first item vector in inputs (shape = (2 * hidden_size,)) gets its embedding after interacting with the first initial hidden state (shape = (hidden_size,)) in a single time step, and the second item vector in inputs likewise gets its embedding from the second initial hidden state in a single time step. Is that right? If so, then there is no recurrent meaning in this RNN?
Because in my understanding, the items of a session should be fed into the RNN time step by time step, so that the hidden state (embedding) is affected by the previously clicked item. I wonder why the inputs shape is (# of all items in a batch, 1, 2 * hidden_size) instead of (self.batch_size, max_n_node, 2 * hidden_size). What is the meaning of this setting in the dynamic_rnn function?
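To make the shapes concrete, here is a minimal sketch of how I understand the call is set up (assuming TF 1.x and a GRUCell; av, state, and the concrete dimension values are placeholders, not the repo's exact code):

```python
import tensorflow as tf

batch_size, max_n_node, hidden_size = 100, 7, 100

# av: aggregated neighbor messages for every node; state: current node embeddings
av = tf.zeros([batch_size, max_n_node, 2 * hidden_size])
state = tf.zeros([batch_size, max_n_node, hidden_size])

cell = tf.nn.rnn_cell.GRUCell(hidden_size)
# inputs:        (batch_size * max_n_node, 1, 2 * hidden_size) -> a single "time step" per node
# initial_state: (batch_size * max_n_node, hidden_size)
_, new_state = tf.nn.dynamic_rnn(
    cell,
    tf.expand_dims(tf.reshape(av, [-1, 2 * hidden_size]), axis=1),
    initial_state=tf.reshape(state, [-1, hidden_size]))
# new_state: (batch_size * max_n_node, hidden_size) -- one GRU update per node,
# i.e. the "RNN" performs a single gated update rather than a recurrence over the session
```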