MateusAngelo97 changed the title from "Multi Hidden-Layer Network not working." to "Multi Hidden-Layer Network not learning appropriately." on Oct 17, 2022
There is something wrong with the learning process of networks with more than one hidden layer.
I have a small dataset: 500 rows, each with 10 inputs and 2 outputs (all values are already scaled between 0 and 1).
When I train a simple NN with 1 hidden layer, it optimizes within a few epochs and the error drops gradually until it reaches a plateau, as expected.
Ex:
var myPerceptron = new Architect.Perceptron(10, 10, 2);
var myTrainer = new Trainer(myPerceptron);
var trainingSet = [
  { input: [0,0,0,0,0,0,0,0,0,0], output: [0,1] },
  { input: [1,1,1,1,1,1,1,1,1,1], output: [1,0] },
  ...
];
myTrainer.train(trainingSet, {
  rate: .1,
  iterations: 10000,
  error: .0005,
  shuffle: true,
  log: 100,
  cost: Trainer.cost.CROSS_ENTROPY
});
But if I change its structure to 2 hidden layers, it never converges, even after hundreds or thousands of epochs.
var myPerceptron = new Architect.Perceptron(10, 10, 10, 2); // small change here
var myTrainer = new Trainer(myPerceptron);
var trainingSet = [
  { input: [0,0,0,0,0,0,0,0,0,0], output: [0,1] },
  { input: [1,1,1,1,1,1,1,1,1,1], output: [1,0] },
  ...
];
myTrainer.train(trainingSet, {
  rate: .1,
  iterations: 10000,
  error: .0005,
  shuffle: true,
  log: 100,
  cost: Trainer.cost.CROSS_ENTROPY
});
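For what it's worth, one plausible explanation (my assumption, not confirmed against synaptic's internals) is the vanishing-gradient effect of stacking logistic layers: each sigmoid layer scales the backpropagated error by at most 0.25, so every extra hidden layer shrinks the effective gradient. A minimal sketch of the arithmetic, independent of the library:

```javascript
// Sketch: why an extra sigmoid layer can stall learning (vanishing gradients).
// The logistic derivative sigma'(z) = sigma(z) * (1 - sigma(z)) peaks at 0.25,
// so each extra hidden layer multiplies the backpropagated error by <= 0.25.
function sigmoidDerivative(z) {
  var s = 1 / (1 + Math.exp(-z));
  return s * (1 - s);
}

// Upper bound on the gradient scale after passing through n sigmoid layers
// (ignoring weights, which can shrink it further early in training).
function maxGradientScale(nLayers) {
  return Math.pow(0.25, nLayers);
}

console.log(sigmoidDerivative(0));  // 0.25 — the peak of the derivative
console.log(maxGradientScale(1));   // 0.25
console.log(maxGradientScale(2));   // 0.0625 — a quarter of the 1-layer bound
```

If that is the cause, the deeper net is not broken so much as starved of gradient — which would fit the "never gets optimized" symptom.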
There is something wrong with the learning process when extra hidden layers are added. One extra layer should, at worst, slow learning down, not stop it entirely.
I can add the source code or input/output data if needed.
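In the meantime, here is a hedged stand-in for the data so others can try to reproduce this. The generation rule (label by whether the mean input exceeds 0.5) is invented purely for illustration; the real dataset may differ, but the shape matches what is described above: 500 rows, 10 inputs in [0,1], 2-element one-hot output.

```javascript
// Stand-in dataset generator in the same shape as the one described above:
// 500 rows, 10 random inputs in [0,1], 2 outputs. The labeling rule
// (mean > 0.5) is invented for reproduction purposes only.
function makeTrainingSet(rows) {
  var set = [];
  for (var i = 0; i < rows; i++) {
    var input = [];
    for (var j = 0; j < 10; j++) input.push(Math.random());
    var mean = input.reduce(function (a, b) { return a + b; }, 0) / input.length;
    set.push({ input: input, output: mean > 0.5 ? [1, 0] : [0, 1] });
  }
  return set;
}

var trainingSet = makeTrainingSet(500);
```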