Merge pull request #4 from chiachun0511/patch-1
NoobTW authored Oct 3, 2018
2 parents 680a948 + 16a681f commit 6305e87
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions tutorials/mnist.md
@@ -101,7 +101,7 @@ model.add(tf.layers.maxPooling2d({

### Add the remaining layers

Repeating layer structure is a common pattern in neural networks. Let's add a second convolutional layer to the model, followed by another pooling layer. Note that in our second convolutional layer, we raise the number of filters from 8 to 16. Also note that we don't specify an `inputShape`, since it can be infered automatically from the output of the previous layer.
Repeating layer structure is a common pattern in neural networks. Let's add a second convolutional layer to the model, followed by another pooling layer. Note that in our second convolutional layer, we raise the number of filters from 8 to 16. Also note that we don't specify an `inputShape`, since it can be inferred automatically from the output of the previous layer.

```javascript
model.add(tf.layers.conv2d({
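  // …the remaining options for this layer lie outside this diff hunk
```

For context, here is a minimal sketch of what this part of the tutorial's completed layer pair might look like. The diff does not show these lines; the specific option values (`kernelSize`, `strides`, `activation`, `kernelInitializer`) are assumptions modeled on the first convolutional layer described earlier in the tutorial.

```javascript
// Second convolutional layer: the number of filters rises from 8 to 16,
// and inputShape is omitted because TensorFlow.js infers it from the
// previous layer's output shape. Option values here are assumed, not
// taken from this diff.
model.add(tf.layers.conv2d({
  kernelSize: 5,
  filters: 16,
  strides: 1,
  activation: 'relu',
  kernelInitializer: 'varianceScaling'
}));

// A second max-pooling layer downsamples the feature maps again.
model.add(tf.layers.maxPooling2d({
  poolSize: [2, 2],
  strides: [2, 2]
}));
```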
@@ -300,4 +300,4 @@ const accuracy = history.history.acc[0];

- For more on convolutional networks, see Chris Olah's [Understanding Convolutions](http://colah.github.io/posts/2014-07-Understanding-Convolutions/).
- For more on loss, see [Descending into ML](https://developers.google.com/machine-learning/crash-course/descending-into-ml/) in the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/) for a deeper look at machine learning losses.
- For more on gradient descent and SGD, see [Reducing Loss](https://developers.google.com/machine-learning/crash-course/reducing-loss/) in the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/).
- For more on gradient descent and SGD, see [Reducing Loss](https://developers.google.com/machine-learning/crash-course/reducing-loss/) in the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/).
