
Problem with vl_nnconv while training a FCN #9

Open
omrysendik opened this issue Dec 12, 2015 · 2 comments
Comments

@omrysendik

I am running the fcnTrain function in order to train an FCN model, using the VGG-16 model as the initialization (just as the script proposes).
However, when I do so, I get this error:

Error using vl_nnconv
The number of elements of BIASES is not the same as the number of filters.

Note that the function is able to run up to this point; the error occurs in a specific layer in which:

size(params{1}) = 1 1 4096 21
size(params{2}) = 1000 21

I have a feeling it's got something to do with a missing transpose operation.

Thanks,
Omry
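
For context, vl_nnconv requires one bias element per output filter, i.e. numel(BIASES) must equal the number of filters (the last dimension of the filter bank). A minimal sketch of the mismatch, using the shapes reported above; the variable names here are illustrative, not from the actual fcnInitializeModel code:

% Illustrative sketch: vl_nnconv expects one bias per output filter,
% i.e. numel(biases) == size(filters, 4).
filters = zeros(1, 1, 4096, 21, 'single') ;  % 21 output classes
biasesBad  = zeros(1000, 21, 'single') ;     % 21000 elements ~= 21 filters -> error
biasesGood = zeros(21, 1, 'single') ;        % 21 elements == 21 filters -> accepted
% y = vl_nnconv(x, filters, biasesGood) ;    % would not raise the BIASES error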

@omrysendik
Author

I was able to find the problem and fix it by changing lines 49 to 60 in fcnInitializeModel to:

% Modify the last fully-connected layer to have 21 output classes
% Initialize the new filters to zero
Biases = 0 ;
for i = net.getParamIndex(net.layers(end-1).params)
  sz = size(net.params(i).value) ;
  if ~Biases
    % filter bank: resize the output-channel (last) dimension
    sz(end) = 21 ;
  else
    % biases: resize the second-to-last dimension
    sz(end-1) = 21 ;
  end
  net.params(i).value = zeros(sz, 'single') ;
  Biases = 1 ;
end

@codesteller86

@omrysendik: Thank you, it works.
