Hi, I read the blog post at https://blog.twitter.com/2015/autograd-for-torch and really appreciate the simplicity of using autograd, thanks! I have a question about the post.
In the autoencoder example, the weights of l4-l6 are tied to those of l3-l1:
-- Tie the weights in the decoding layers
l4.weight = l3.weight:t()
l4.gradWeight = l3.gradWeight:t()
l5.weight = l2.weight:t()
l5.gradWeight = l2.gradWeight:t()
l6.weight = l1.weight:t()
l6.gradWeight = l1.gradWeight:t()
Is this handled implicitly by autograd? If so, aren't the weights (for example, the ones in l4) updated twice in one backpropagation pass? The loss is propagated back through the network from l6 to l1, and since l4 and l3 share the same weights, won't those weights receive two updates?
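For reference, here is a rough sketch of how I imagine the tied weights would be expressed when the model is written directly as an autograd function (this is my own guess, not the blog's code; the names W1/b1/b2 are made up, and I'm assuming autograd's overloads cover torch.tanh, torch.t, torch.cmul, torch.sum and the * operator, as in the torch-autograd README example). Because the shared matrix appears only once in the params table, its gradient entry would accumulate both the encoder and the decoder contributions during a single backward pass, and the optimizer would then apply one update to it:
local grad = require 'autograd'
-- Hypothetical sizes: 100-d input, 50-d code. W1 is the only weight matrix;
-- the decoder reuses it transposed, so the weights are tied by construction.
local params = {
   W1 = torch.randn(100, 50):mul(0.01),
   b1 = torch.zeros(50),
   b2 = torch.zeros(100),
}
-- x is a 1x100 row vector (one sample)
local function autoencoder(p, x)
   local h    = torch.tanh(x * p.W1 + p.b1)           -- encode
   local xhat = torch.tanh(h * torch.t(p.W1) + p.b2)  -- decode with the transposed, tied weight
   local err  = xhat - x
   return torch.sum(torch.cmul(err, err))             -- squared reconstruction error
end
local dautoencoder = grad(autoencoder)                 -- gradients of the loss w.r.t. params
local grads, loss = dautoencoder(params, torch.randn(1, 100))
-- grads.W1 would already hold the sum of the encoder and decoder contributions,
-- so a single SGD step touches the tied weight exactly once:
for name, g in pairs(grads) do
   params[name]:add(-0.01, g)                          -- params <- params - lr * grads
end
If that reading is right, the tied weight would not be updated twice; the two gradient contributions would simply be summed into grads.W1 and applied in one step. I would appreciate confirmation that this is what actually happens under the hood.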