
Normally you won't work with single neurons, but with Layers instead. A layer is basically an array of neurons; it can do pretty much the same things a neuron can, but using layers makes the programming process faster.

To create a layer you just have to specify its size (the number of neurons in that layer):

var myLayer = new Layer(5);

project

A layer can project a connection to another layer. You have to provide the layer that you want to connect to and the connectionType:

var A = new Layer(5);
var B = new Layer(3);
A.project(B, Layer.connectionType.ALL_TO_ALL); // All the neurons in layer A now project a connection to all the neurons in layer B

Layers can also self-connect:

A.project(A, Layer.connectionType.ONE_TO_ONE);

There are two connectionTypes:

  • Layer.connectionType.ALL_TO_ALL: It connects every neuron from layer A to every neuron in layer B.
  • Layer.connectionType.ONE_TO_ONE: It connects each neuron from layer A to one neuron in layer B. Both layers must be the same size for this to work.
  • Layer.connectionType.ALL_TO_ELSE: Useful only in self-connections. It connects every neuron from a layer to all the other neurons in that same layer, except itself. If this connectionType is used in a connection between different layers, it produces the same result as ALL_TO_ALL. (See the sketch after the note below.)

NOTE: If not specified, the connection type defaults to Layer.connectionType.ALL_TO_ALL when connecting two different layers, and to Layer.connectionType.ONE_TO_ONE when connecting a layer to itself (i.e. myLayer.project(myLayer)).
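The following minimal sketch illustrates both the ALL_TO_ELSE type and the defaults described in the note (the layer sizes are arbitrary):

var A = new Layer(4);
var B = new Layer(3);

A.project(A, Layer.connectionType.ALL_TO_ELSE); // every neuron in A connects to every other neuron in A, but not to itself
A.project(B); // no type given between different layers: defaults to ALL_TO_ALL

var C = new Layer(4);
C.project(C); // no type given on a self-connection: defaults to ONE_TO_ONE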

The method project returns a LayerConnection object that can be gated by another layer.

gate

A layer can gate a connection between two other layers, or a layer's self-connection.

var A = new Layer(5);
var B = new Layer(3);
var connection = A.project(B);

var C = new Layer(4);
C.gate(connection, Layer.gateType.INPUT_GATE); // now C gates the connection between A and B (input gate)

There are three gateTypes:

  • Layer.gateType.INPUT_GATE: If layer C is gating connections between layers A and B, all the neurons from C gate all the input connections to B.
  • Layer.gateType.OUTPUT_GATE: If layer C is gating connections between layers A and B, all the neurons from C gate all the output connections from A.
  • Layer.gateType.ONE_TO_ONE: If layer C is gating connections between layers A and B, each neuron from C gates one connection from A to B. This is useful for gating self-connected layers (see the sketch below). To use this gateType, A, B and C must be the same size.
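As a minimal sketch of that last case, here is a self-connected layer gated ONE_TO_ONE by another layer of the same size:

var A = new Layer(5);
var selfConnection = A.project(A, Layer.connectionType.ONE_TO_ONE);

var C = new Layer(5); // must be the same size as A
C.gate(selfConnection, Layer.gateType.ONE_TO_ONE); // each neuron in C gates one self-connection in A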

activate

When a layer activates, it just activates all the neurons in that layer in order and returns an array with their outputs. It accepts an array of activations as a parameter (for input layers):

var A = new Layer(5);
var B = new Layer(3);
A.project(B);

A.activate([1,0,1,0,1]); // [1,0,1,0,1]
B.activate(); // [0.3280457, 0.83243247, 0.5320423]

propagate

After an activation, you can teach the layer what the correct output should have been (a.k.a. training). This is done by backpropagating the error. To use the propagate method you have to provide a learning rate and a target value (an array of floats between 0 and 1).

For example, to train layer B to output [0,0] when layer A activates [1,0,1,0,1]:

var A = new Layer(5);
var B = new Layer(2);
A.project(B);

var learningRate = .3;

for (var i = 0; i < 20000; i++)
{
	// when A activates [1, 0, 1, 0, 1]
	A.activate([1,0,1,0,1]);

	// train B to activate [0,0]
	B.activate();
	B.propagate(learningRate, [0,0]);
}

// test it
A.activate([1,0,1,0,1]);
B.activate(); // [0.004606949693864496, 0.004606763721459169]

Squashing function and bias

You can set the squashing function and bias of all the neurons in a layer by using the method set:

myLayer.set({
	squash: Neuron.squash.TANH,
	bias: 0
})
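
For example, after switching a layer to TANH its activations fall in the range (-1, 1) rather than (0, 1). A minimal sketch (the printed values are illustrative):

var A = new Layer(2);
var B = new Layer(2);
A.project(B);

B.set({
	squash: Neuron.squash.TANH,
	bias: 0
});

A.activate([1, 0]);
B.activate(); // e.g. [-0.0421, 0.0573] (illustrative; outputs now lie between -1 and 1)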

neurons

The method neurons() returns an array with all the neurons in the layer, in activation order.
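For instance, it can be used to inspect or tweak individual neurons after the layer has been built (a sketch; it assumes the bias property described on the Neuron page):

var myLayer = new Layer(3);
var list = myLayer.neurons();

for (var i = 0; i < list.length; i++)
	console.log(list[i].bias); // prints each neuron's (randomly initialized) bias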
