diff --git a/1-introduction.html b/1-introduction.html
index 66b6918f..7dc11456 100644
--- a/1-introduction.html
+++ b/1-introduction.html
@@ -1063,16 +1063,16 @@
-Follow the instructions in the setup document to install
-Keras, Seaborn and scikit-learn.
+Follow the instructions in the setup
+document to install Keras, Seaborn and scikit-learn.
 Keras is available as a module within TensorFlow, as described in the
-setup. Let’s therefore
-check whether you have a suitable version of TensorFlow installed. Open
-up a new Jupyter notebook or interactive python console and run the
-following commands:
+setup. Let’s therefore check whether you
+have a suitable version of TensorFlow installed. Open up a new Jupyter
+notebook or interactive python console and run the following
+commands:
C: 3, one for each output variable class
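A version check along these lines would do; this is a minimal sketch (the exact commands in the setup document may differ), written so it does not crash when TensorFlow is absent:

```python
# Check that TensorFlow (which bundles Keras) is installed,
# without raising an ImportError if it is missing.
import importlib.util

if importlib.util.find_spec("tensorflow") is None:
    print("TensorFlow not found -- revisit the setup document.")
else:
    import tensorflow as tf
    print("TensorFlow version:", tf.__version__)
```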
Have a look at the output of model.summary():
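The parameter counts that `model.summary()` reports for a dense network can be reproduced by hand: each Dense layer has (inputs + 1) × outputs parameters, the +1 being the bias. The layer sizes below (4 inputs, two hidden layers of 10, 3 outputs) are illustrative, not necessarily the exact model in the lesson:

```python
# Reproduce by hand the per-layer parameter counts a summary would show
# for a dense network: 4 inputs -> 10 hidden -> 10 hidden -> 3 outputs.
layer_sizes = [4, 10, 10, 3]

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    params = (n_in + 1) * n_out   # weights + one bias per output unit
    total += params
    print(f"Dense({n_out}): {params} parameters")
print("Total:", total)  # 50 + 110 + 33 = 193
```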
The confusion matrix shows that the predictions for Adelie and Gentoo
are decent, but could be improved. However, Chinstrap is not predicted
diff --git a/3-monitor-the-model.html b/3-monitor-the-model.html
index fb36bceb..e1df1303 100644
--- a/3-monitor-the-model.html
+++ b/3-monitor-the-model.html
@@ -620,7 +620,7 @@
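A confusion matrix can be built by hand (in practice scikit-learn's `confusion_matrix` does this). The predictions below are made up purely to mirror the pattern described above, with Chinstrap often confused with the other species:

```python
# Confusion matrix by hand: rows are true species, columns are predictions.
# The labels below are the three penguin species; the predictions are invented.
species = ["Adelie", "Chinstrap", "Gentoo"]
true_y = ["Adelie", "Adelie", "Chinstrap", "Chinstrap", "Gentoo", "Gentoo"]
pred_y = ["Adelie", "Adelie", "Adelie", "Gentoo", "Gentoo", "Gentoo"]

matrix = [[0] * len(species) for _ in species]
for t, p in zip(true_y, pred_y):
    matrix[species.index(t)][species.index(p)] += 1
for label, row in zip(species, matrix):
    print(label, row)
```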
Correct answer: B. To find the weights that minimize the loss
function. The loss function quantifies the total error of the network,
@@ -1010,7 +1010,7 @@
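Minimizing a loss by adjusting weights is exactly what gradient descent does; a toy one-parameter sketch (the quadratic loss and learning rate are illustrative):

```python
# Gradient descent on a toy quadratic loss L(w) = (w - 3)**2:
# repeatedly nudge the weight against the gradient of the loss.
w = 0.0
learning_rate = 0.1
for _ in range(100):
    grad = 2 * (w - 3)         # dL/dw
    w -= learning_rate * grad
print(round(w, 4))  # converges towards 3, the minimum of the loss
```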
While the performance on the train set seems reasonable, the
performance on the test set is much worse. This is a common problem
@@ -1132,7 +1132,7 @@
The difference in the two curves shows that something is not
completely right here. The error for the model predictions on the
@@ -1277,7 +1277,7 @@
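The diverging curves can be summarised numerically: training loss keeps falling while validation loss bottoms out and rises again, which is the signature of overfitting. The loss values below are invented for illustration:

```python
# Made-up training history illustrating overfitting: train loss keeps
# dropping while validation loss starts rising after a few epochs.
train_loss = [1.0, 0.6, 0.4, 0.3, 0.25, 0.2]
val_loss = [1.1, 0.8, 0.7, 0.75, 0.9, 1.1]

best_epoch = val_loss.index(min(val_loss))
print("validation loss is lowest at epoch", best_epoch)
```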
Let’s first adapt our create_nn function so that we can tweak the
number of nodes in the 2 layers by passing arguments to the
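The effect of such arguments can be sketched without Keras by counting the parameters each choice of hidden-layer widths produces (a dense layer has (inputs + 1) × outputs parameters); the input width of 89 features is illustrative, not necessarily the lesson's dataset:

```python
# Sketch: make the two hidden-layer sizes tweakable and report how the
# total parameter count of the dense network changes.
def count_params(n_features, nodes1, nodes2, n_outputs=1):
    sizes = [n_features, nodes1, nodes2, n_outputs]
    return sum((a + 1) * b for a, b in zip(sizes, sizes[1:]))

print(count_params(89, 100, 50))  # default-sized network: 14101
print(count_params(89, 10, 10))   # much smaller network: 1021
```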
@@ -1566,7 +1566,7 @@
This is an open question. And we don’t actually know how far one
could push this sunshine hour prediction (try it out yourself if you
diff --git a/4-advanced-layer-types.html b/4-advanced-layer-types.html
index 0e2d9a8d..d10b28ef 100644
--- a/4-advanced-layer-types.html
+++ b/4-advanced-layer-types.html
@@ -504,7 +504,7 @@
The correct solution is C: 12288
There are 4096 pixels in one image (64 * 64), each pixel has 3
@@ -598,7 +598,7 @@
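The arithmetic behind answer C:

```python
# 64 x 64 pixels, each with 3 colour channel values (RGB),
# flattened into one input vector.
n_inputs = 64 * 64 * 3
print(n_inputs)  # 12288
```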
The correct answer is B: Each entry of the input dimensions, i.e. the
shape of one single data point, is connected with 100
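For a dense layer of 100 neurons on the flattened 12288-value input, the parameter count follows directly (each input connects to each neuron, plus one bias per neuron):

```python
# Fully connected layer: every one of the 12288 inputs connects to
# every one of the 100 neurons, plus one bias per neuron.
n_weights = 12288 * 100
n_params = n_weights + 100
print(n_params)  # 1228900
```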
@@ -727,7 +727,7 @@
There are different ways of dealing with border pixels. You can ignore
them, which means that your output image is slightly smaller than
@@ -765,7 +765,7 @@
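The two common border strategies are what Keras calls 'valid' padding (ignore border pixels, output shrinks) and 'same' padding (zero-pad the borders, output size is preserved); a sketch of the output-size arithmetic for a 3x3 kernel:

```python
# Output width of a convolution over a 64-pixel-wide image,
# for the two usual border-handling strategies.
def conv_output_size(input_size, kernel_size, padding):
    if padding == "valid":
        return input_size - kernel_size + 1  # border pixels are dropped
    return input_size                        # 'same': borders zero-padded

print(conv_output_size(64, 3, "valid"))  # 62
print(conv_output_size(64, 3, "same"))   # 64
```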
We have 100 matrices with 3 * 3 * 3 = 27 values each so that gives 27 *
100 = 2700 weights. This is roughly a factor of 500 fewer than the fully
@@ -841,7 +841,7 @@
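The comparison in numbers: 2700 convolutional weights versus 12288 × 100 weights for the equivalent fully connected layer, a factor of about 455:

```python
# Convolutional layer: 100 kernels of shape 3x3 over 3 input channels,
# versus a dense layer connecting all 12288 inputs to 100 neurons.
weights_conv = 3 * 3 * 3 * 100   # 2700
weights_dense = 12288 * 100      # 1228800
print(weights_dense // weights_conv)  # 455
```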
We add an extra Conv2D layer after the second pooling layer:
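One way to see what the extra Conv2D layer does is to track the feature-map size through the stack; the 32x32 starting size and the exact layer sequence below are illustrative assumptions, not necessarily the lesson's model:

```python
# Track the feature-map width through conv (3x3, 'valid') and
# 2x2 max-pooling layers, with an extra Conv2D after the second pool.
size = 32        # e.g. a 32x32 input image (illustrative)
size -= 2        # Conv2D 3x3        -> 30
size //= 2       # MaxPooling2D 2x2  -> 15
size -= 2        # Conv2D 3x3        -> 13
size //= 2       # MaxPooling2D 2x2  -> 6
size -= 2        # extra Conv2D 3x3  -> 4
print(size)  # 4
```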