"mlcnn.m" has a method named: "updateParams".
Unfortunately in this version, it does not update the output layer's weights, so the whole network is like a car with a working engine but without a mechanism to transfer engine's power to the tires.
I have a suggestion to make it work, to do so I have changed the body of "updateParams" method to:
function self = updateParams(self)
%net = updateParams()
%--------------------------------------------------------------------------
%Update network parameters based on states of network gradient; perform
%regularization such as weight decay and weight rescaling
%--------------------------------------------------------------------------
    wPenalty = 0;
    for lL = 2:self.nLayers  % Changed from: for lL = 2:self.nLayers-1 by Masih Azad
        switch self.layers{lL}.type
            % CURRENTLY, ONLY UPDATE FILTERS AND FM BIASES
            % PERHAPS, IN THE FUTURE, WE'LL BE FANCY, AND DO FANCY UPDATES
            case {'conv'}  % Changed from: case {'conv','output'} by Masih Azad
                lRate = self.layers{lL}.lRate;
                for jM = 1:self.layers{lL}.nFM
                    % UPDATE FEATURE-MAP BIASES
                    self.layers{lL}.b(jM) = self.layers{lL}.b(jM) - ...
                                            lRate*self.layers{lL}.db(jM);
                    % UPDATE FILTERS
                    for iM = 1:self.layers{lL-1}.nFM
                        if self.wPenalty > 0      % L2 REGULARIZATION
                            wPenalty = self.layers{lL}.filter(:,:,iM,jM)*self.wPenalty;
                        elseif self.wPenalty < 0  % L1 REGULARIZATION (SUB-GRADIENTS)
                            wPenalty = sign(self.layers{lL}.filter(:,:,iM,jM))*abs(self.wPenalty);
                        end
                        self.layers{lL}.filter(:,:,iM,jM) = ...
                            self.layers{lL}.filter(:,:,iM,jM) - ...
                            lRate*(self.layers{lL}.dFilter(:,:,iM,jM) + wPenalty);
                    end
                end
            %% Added by Masih Azad
            case {'output'}
                % UPDATE OUTPUT-LAYER BIASES AND WEIGHTS
                lRate = self.layers{lL}.lRate;
                self.layers{lL}.b = self.layers{lL}.b - ...
                                    lRate*self.layers{lL}.db;
                if self.wPenalty > 0      % L2 REGULARIZATION
                    wPenalty = self.layers{lL}.W*self.wPenalty;
                elseif self.wPenalty < 0  % L1 REGULARIZATION (SUB-GRADIENTS)
                    wPenalty = sign(self.layers{lL}.W)*abs(self.wPenalty);
                end
                self.layers{lL}.W = self.layers{lL}.W - lRate*(self.layers{lL}.dW + wPenalty);
            %%
        end
    end
end
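For reference, the new 'output' case just applies the same gradient-descent step with an optional weight penalty that the conv layers already use, but on the fully connected weight matrix W. Below is a minimal standalone sketch of that step; the variables W, dW, lRate and wPenalty here are placeholders standing in for the corresponding fields of self.layers{end}, not part of mlcnn.m itself:

% Standalone illustration of the output-layer update (placeholder variables)
W        = randn(10, 5);   % output-layer weights, e.g. 10 classes x 5 inputs
dW       = randn(10, 5);   % gradient accumulated during backprop
lRate    = 0.1;            % learning rate
wPenalty = 1e-4;           % >0 -> L2 decay, <0 -> L1 sub-gradient, 0 -> none

if wPenalty > 0                       % L2 REGULARIZATION
    penalty = W*wPenalty;
elseif wPenalty < 0                   % L1 REGULARIZATION (SUB-GRADIENTS)
    penalty = sign(W)*abs(wPenalty);
else
    penalty = 0;
end

W = W - lRate*(dW + penalty);         % gradient-descent step on the output weights

With this change the output layer's W and b actually move during training, instead of staying at their initial values.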