Hi, I just noticed that the code differs from the paper in one place — is there any intent behind this?
During deconvolution, the code sets next_layer = curr_layer + 2, which means conv4_1 is deconvolved from conv2_1 instead of conv3_1, whereas according to the paper next_layer should equal curr_layer + 1.
Because of this implementation, layer 1 ("conv4_1") is not computed by deconvolution.
@gxlcliqi
Hi, thanks for your interest in our project.
For deconvolution, the aim is to deconvolve a feature map from layer L back to layer L-1, but what we actually do is deconvolve it to layer L-2 and then forward it through the network to layer L-1. This is because a target feature map in layer L corresponds to many possible solutions in layer L-1. By incorporating part of the network (from L-2 to L-1) into the optimization, we constrain the solved feature map to follow the same distribution as a real network output of layer L-1.
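The idea above can be sketched as an optimization problem: solve for a candidate feature map at layer L-2, forward it through the fixed network slice to layer L, and match the target. The layers here are hypothetical toy stand-ins (random linear + ReLU maps), not the project's actual VGG layers — only the structure of the trick is illustrated.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy network slices standing in for the real conv layers.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 8))    # "layer L-2 -> L-1" weights
W2 = rng.standard_normal((8, 8))    # "layer L-1 -> L" weights

def forward_L2_to_L(x):
    """Forward a candidate layer-(L-2) feature map up to layer L."""
    h = np.maximum(W1 @ x, 0.0)     # L-2 -> L-1 (ReLU keeps h a plausible layer output)
    return np.maximum(W2 @ h, 0.0)  # L-1 -> L

# Target feature map at layer L that we want to invert.
target = forward_L2_to_L(rng.standard_normal(8))

def loss(x):
    diff = forward_L2_to_L(x) - target
    return float(diff @ diff)

# Optimize at layer L-2; the implied L-1 activations are then a genuine
# network output, i.e. they stay in-distribution by construction.
x0 = rng.standard_normal(8)
res = minimize(loss, x0)
h_rec = np.maximum(W1 @ res.x, 0.0)  # the recovered layer L-1 feature map
```

Because the recovered layer L-1 map is produced by the network itself (W1 followed by ReLU), it cannot be an arbitrary solution of the inverse problem.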
Another way to constrain the solution is to use L-BFGS-B with proper upper and lower bounds. In this way, we can deconvolve directly to layer L-1.
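A minimal sketch of this alternative, again with a hypothetical toy layer rather than the project's real network: post-ReLU activations are non-negative, so bounding the solution at zero from below (plus a loose upper bound) keeps the directly deconvolved layer L-1 map in a realistic range.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy "layer L-1 -> L" slice (random linear + ReLU).
rng = np.random.default_rng(1)
W2 = rng.standard_normal((8, 8))

def forward_L1_to_L(h):
    return np.maximum(W2 @ h, 0.0)

# Target at layer L, generated from a non-negative (post-ReLU-like) map.
target = forward_L1_to_L(np.abs(rng.standard_normal(8)))

def loss(h):
    d = forward_L1_to_L(h) - target
    return float(d @ d)

# Bounds encode the prior: ReLU outputs are >= 0, and an upper bound
# (10.0 here, an arbitrary illustrative choice) limits the magnitude.
bounds = [(0.0, 10.0)] * 8
h0 = np.full(8, 0.5)
res = minimize(loss, h0, method="L-BFGS-B", bounds=bounds)
h_rec = res.x  # directly recovered layer L-1 feature map, within bounds
```

Here the distribution constraint comes from the box bounds instead of from forwarding through an extra network stage.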