Variable-size Graph Specification Language (VGSL) is a powerful tool that enables the creation of complex neural network architectures from a compact string definition.

VGSL operates through short definition strings. For instance:

`None,64,None,1 Cr3,3,32 Mp2,2,2,2 Cr3,3,64 Mp2,2,2,2 Rc Fc64 D20 Lrs128 D20 Lrs64 D20 O1s92`

In this example, the string defines a neural network with input layers, convolutional layers, pooling, reshaping, fully connected layers, LSTM and output layers. Each segment of the string corresponds to a specific layer or operation in the neural network. Moreover, VGSL provides the flexibility to specify the type of activation function for certain layers, enhancing customization.
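To give a feel for the format, here is a minimal sketch (not this repository's actual parser) that splits such a string into its per-layer tokens:

```python
# Minimal sketch: split a VGSL definition string into layer tokens.
# Illustrative only; this is not the parser used by this repository.
spec = "None,64,None,1 Cr3,3,32 Mp2,2,2,2 Cr3,3,64 Mp2,2,2,2 Rc Fc64 D20 Lrs128 D20 Lrs64 D20 O1s92"

tokens = spec.split()                     # each token describes one layer or operation
input_spec, layer_specs = tokens[0], tokens[1:]

print(input_spec)       # "None,64,None,1" -> batch, height, width, depth
print(layer_specs[0])   # "Cr3,3,32"       -> Conv2D, ReLU, 3x3 kernel, 32 filters
```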

| **Layer** | **Spec** | **Example** | **Description** |
|--------------------|------------------------------------------------|--------------------|--------------------------------------------------------------------------------------------------------------|
| Input | `batch,height,width,depth` | `None,64,None,1` | Input layer with variable batch_size & width, depth of 1 channel |
| Output             | `O(2\|1\|0)(l\|s)<n>`                          | `O1s10`            | Dense layer with a 1D sequence, 10 output classes, and softmax                                                 |
| Conv2D             | `C(s\|t\|r\|e\|l\|m)<x>,<y>[,<s_x>,<s_y>],<d>` | `Cr3,3,64`         | Conv2D layer with ReLU, a 3x3 filter, 1x1 stride and 64 filters                                                |
| Dense (FC) | `F(s\|t\|r\|l\|m)<d>` | `Fs64` | Dense layer with softmax and 64 units |
| LSTM               | `L(f\|r)[s]<n>[,D<rate>,Rd<rate>]`             | `Lf64`             | Forward-only LSTM cell with 64 units                                                                           |
| GRU                | `G(f\|r)[s]<n>[,D<rate>,Rd<rate>]`             | `Gr64`             | Reverse-only GRU cell with 64 units                                                                            |
| Bidirectional      | `B(g\|l)<n>[,D<rate>,Rd<rate>]`                | `Bl256`            | Bidirectional layer wrapping an LSTM RNN with 256 units                                                        |
| BatchNormalization | `Bn` | `Bn` | BatchNormalization layer |
| MaxPooling2D | `Mp<x>,<y>,<s_x>,<s_y>` | `Mp2,2,1,1` | MaxPooling2D layer with 2x2 pool size and 1x1 strides |
| AvgPooling2D | `Ap<x>,<y>,<s_x>,<s_y>` | `Ap2,2,2,2` | AveragePooling2D layer with 2x2 pool size and 2x2 strides |
| Dropout | `D<rate>` | `D25` | Dropout layer with `dropout` = 0.25 |
| Reshape | `Rc` | `Rc` | Reshape layer returns a new (collapsed) tf.Tensor with a different shape based on the previous layer outputs |
| ResidualBlock | `RB[d]<x>,<y>,<z>` | `RB3,3,64` | Residual Block with optional downsample. Has a kernel size of <x>,<y> and a depth of <z>. If `d` is provided, the block will downsample the input |

### Layer Details
#### Input

- **Spec**: `batch,height,width,depth`
- **Description**: Represents the input layer in TensorFlow, based on standard TF tensor dimensions.
- **Example**: `None,64,None,1` creates a `tf.layers.Input` with a variable batch size, height of 64, variable width and a depth of 1 (input channels)
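As an illustration, such an input spec roughly corresponds to the following Keras input (a sketch, not the repository's actual builder code):

```python
import tensorflow as tf

# Sketch: "None,64,None,1" -> variable batch size, height 64,
# variable width, and a depth of 1. Keras treats the batch dimension
# implicitly, so only (height, width, depth) is passed as shape.
inputs = tf.keras.Input(shape=(64, None, 1))
print(inputs.shape)  # (None, 64, None, 1)
```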

#### Output
- **Spec**: `O(2|1|0)(l|s)<n>`
- **Description**: Output layer producing a 2D, 1D, or 0D result with `n` output classes, using either a linear (`l`) or softmax (`s`) activation.
- **Example**: `O1s10` creates a Dense layer with a 1D sequence output, 10 output classes, and softmax activation.
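As an illustration, the dense part of such an output might look as follows in Keras (a sketch; the `(2|1|0)` dimensionality handling is omitted):

```python
import tensorflow as tf

# Sketch: "O1s10" -> a Dense layer with 10 output classes and softmax.
# Applied to a (batch, timesteps, features) tensor, Dense acts on the
# last axis, yielding one class distribution per timestep.
output_layer = tf.keras.layers.Dense(10, activation="softmax")
```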

#### Conv2D

- **Spec**: `C(s|t|r|e|l|m)<x>,<y>[,<s_x>,<s_y>],<d>`
- **Description**: Convolutional layer using an `x`,`y` window and `d` filters. Optionally, the stride window can be set with (`s_x`, `s_y`).
- **Examples**:
  - `Cr3,3,64` creates a Conv2D layer with ReLU activation, a 3x3 filter, the default 1x1 stride, and 64 filters.
  - `Cr3,3,2,2,64` creates a Conv2D layer with ReLU activation, a 3x3 filter, a 2x2 stride, and 64 filters.

#### Dense (FC)

- **Spec**: `F(s|t|r|l|m)<d>`
- **Description**: Fully connected (Dense) layer with `d` units and the given activation.
- **Example**: `Fs64` creates a Dense layer with softmax activation and 64 units.
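As a rough illustration, these specs might translate to Keras layers along these lines (a sketch, not this repository's actual builder):

```python
import tensorflow as tf

# Sketch: plausible Keras counterparts of the specs above.
conv = tf.keras.layers.Conv2D(
    filters=64,
    kernel_size=(3, 3),
    strides=(1, 1),        # "Cr3,3,64" omits the stride, so 1x1 is used
    activation="relu",
)
dense = tf.keras.layers.Dense(64, activation="softmax")  # "Fs64"
```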

#### LSTM

- **Spec**: `L(f|r)[s]<n>[,D<rate>,Rd<rate>]`
- **Description**: LSTM cell running either forward-only (`f`) or reversed-only (`r`), with `n` units. Optionally, the `rate` can be set for the `dropout` and/or the `recurrent_dropout`, where `rate` indicates a percentage between 0 and 100.
- **Example**: `Lf64` creates a forward-only LSTM cell with 64 units.
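As a sketch, a few LSTM specs and plausible Keras counterparts (illustrative only; `go_backwards` is one way to realize a reversed direction):

```python
import tensorflow as tf

lstm_f  = tf.keras.layers.LSTM(64)                          # "Lf64": forward-only
lstm_r  = tf.keras.layers.LSTM(64, go_backwards=True)       # "Lr64": reversed
lstm_fs = tf.keras.layers.LSTM(64, return_sequences=True)   # "Lfs64": returns sequences
lstm_d  = tf.keras.layers.LSTM(
    64, dropout=0.25, recurrent_dropout=0.10                # "Lf64,D25,Rd10"
)
```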

#### GRU

- **Spec**: `G(f|r)[s]<n>[,D<rate>,Rd<rate>]`
- **Description**: GRU cell running either forward-only (`f`) or reversed-only (`r`), with `n` units. Optionally, the `rate` can be set for the `dropout` and/or the `recurrent_dropout`, where `rate` indicates a percentage between 0 and 100.
- **Example**: `Gf64` creates a forward-only GRU cell with 64 units.

#### Bidirectional

- **Spec**: `B(g|l)<n>[,D<rate>,Rd<rate>]`
- **Description**: Bidirectional layer wrapping either an LSTM (`l`) or GRU (`g`) RNN layer, running in both directions, with `n` units. Optionally, the `rate` can be set for the `dropout` and/or the `recurrent_dropout`, where `rate` indicates a percentage between 0 and 100.
- **Example**: `Bl256` creates a Bidirectional RNN layer using an LSTM cell with 256 units.
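For illustration, a minimal Keras sketch of such a wrapper (not the repository's exact builder code):

```python
import tensorflow as tf

# Sketch: "Bl256" -> a Bidirectional wrapper around an LSTM with 256 units.
bidi = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(256))
```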
#### BatchNormalization

- **Spec**: `Bn`
- **Description**: Normalizes the activations of the previous layer, which helps stabilize and speed up training.
- **Example**: `Bn` creates a BatchNormalization layer.

#### MaxPooling2D

- **Spec**: `Mp<x>,<y>,<s_x>,<s_y>`
- **Description**: Max pooling layer with an `x`,`y` pool size and `s_x`,`s_y` strides.
- **Example**: `Mp2,2,1,1` creates a MaxPooling2D layer with a 2x2 pool size and 1x1 strides.

#### AvgPooling2D

- **Spec**: `Ap<x>,<y>,<s_x>,<s_y>`
- **Description**: Average pooling layer with an `x`,`y` pool size and `s_x`,`s_y` strides.
- **Example**: `Ap2,2,2,2` creates an AveragePooling2D layer with a 2x2 pool size and 2x2 strides.

#### Dropout

- **Spec**: `D<rate>`
- **Description**: Regularization layer that sets input units to 0 at a rate of `rate` during training. Used to prevent overfitting.
- **Example**: `D50` creates a Dropout layer with a dropout rate of 0.5 (the given `rate` divided by 100).

#### Reshape
- **Spec**: `Rc`
- **Description**: Reshape layer that returns a new (collapsed) tf.Tensor with a different shape, based on the previous layer's outputs.
- **Example**: `Rc` applies a specific transformation: `layers.Reshape((-1, prev_layer_y * prev_layer_x))`.

#### ResidualBlock
- **Spec**: `RB[d]<x>,<y>,<z>`
- **Description**: A Residual Block with a kernel size of <x>,<y> and a depth of <z>. If `d` is provided, the block will downsample the input. Residual blocks allow for deeper networks by adding skip connections, which helps prevent the vanishing gradient problem.
- **Example**: `RB3,3,64` creates a Residual Block with a 3x3 kernel size and a depth of 64 filters.
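For intuition, a minimal sketch of such a block in Keras is shown below; the projection shortcut, padding, and stride choices are assumptions for illustration, not this repository's exact implementation:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, depth, kernel=(3, 3), downsample=False):
    """Sketch of an "RB3,3,64"-style block; details are assumptions."""
    strides = (2, 2) if downsample else (1, 1)
    shortcut = x

    y = layers.Conv2D(depth, kernel, strides=strides, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(depth, kernel, padding="same")(y)
    y = layers.BatchNormalization()(y)

    # Project the shortcut when the shape changes, so the addition is valid.
    if downsample or x.shape[-1] != depth:
        shortcut = layers.Conv2D(depth, (1, 1), strides=strides, padding="same")(x)

    return layers.Activation("relu")(layers.Add()([y, shortcut]))
```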
This error usually indicates that there is a mismatch in the expected input dimensions, often because a recurrent layer feeds the next layer a single output vector instead of a full sequence.
To resolve this:
- Ensure that your VGSL string for the LSTM layer has an `s` in it, which will make the layer return sequences. For instance, instead of `Lf128`, use `Lfs128`.
- Re-run the script or command with the corrected VGSL string.
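The difference is visible in the output shapes, as this standalone Keras sketch shows:

```python
import tensorflow as tf

x = tf.random.normal((1, 32, 128))   # (batch, timesteps, features)

# "Lf128": only the last output -> shape (1, 128)
print(tf.keras.layers.LSTM(128)(x).shape)

# "Lfs128": the full sequence -> shape (1, 32, 128), which a following
# recurrent or per-timestep layer expects
print(tf.keras.layers.LSTM(128, return_sequences=True)(x).shape)
```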