diff --git a/site/en/guide/basic_training_loops.ipynb b/site/en/guide/basic_training_loops.ipynb index a1558b1903..0af32e1dd9 100644 --- a/site/en/guide/basic_training_loops.ipynb +++ b/site/en/guide/basic_training_loops.ipynb @@ -5,9 +5,7 @@ "metadata": { "id": "5rmpybwysXGV" }, - "source": [ - "##### Copyright 2020 The TensorFlow Authors." - ] + "source": "##### Copyright 2020 The TensorFlow Authors." }, { "cell_type": "code", @@ -16,71 +14,37 @@ "cellView": "form", "id": "m8y3rGtQsYP2" }, - "outputs": [], - "source": [ - "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", - "# you may not use this file except in compliance with the License.\n", - "# You may obtain a copy of the License at\n", - "#\n", - "# https://www.apache.org/licenses/LICENSE-2.0\n", - "#\n", - "# Unless required by applicable law or agreed to in writing, software\n", - "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", - "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", - "# See the License for the specific language governing permissions and\n", - "# limitations under the License." - ] + "outputs": [ + ], + "source": "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License." }, { "cell_type": "markdown", "metadata": { "id": "hrXv0rU9sIma" }, - "source": [ - "# Basic training loops" - ] + "source": "# Basic training loops" }, { "cell_type": "markdown", "metadata": { "id": "7S0BwJ_8sLu7" }, - "source": [ - "\n", - " \n", - " \n", - " \n", - " \n", - "
\n", - " View on TensorFlow.org\n", - " \n", - " Run in Google Colab\n", - " \n", - " View source on GitHub\n", - " \n", - " Download notebook\n", - "
" - ] + "source": "\n \n \n \n \n
\n View on TensorFlow.org\n \n Run in Google Colab\n \n View source on GitHub\n \n Download notebook\n
" }, { "cell_type": "markdown", "metadata": { "id": "k2o3TTG4TFpt" }, - "source": [ - "In the previous guides, you have learned about [tensors](./tensor.ipynb), [variables](./variable.ipynb), [gradient tape](autodiff.ipynb), and [modules](./intro_to_modules.ipynb). In this guide, you will fit these all together to train models.\n", - "\n", - "TensorFlow also includes the [tf.Keras API](https://www.tensorflow.org/guide/keras/overview), a high-level neural network API that provides useful abstractions to reduce boilerplate. However, in this guide, you will use basic classes." - ] + "source": "In the previous guides, you have learned about [tensors](./tensor.ipynb), [variables](./variable.ipynb), [gradient tape](autodiff.ipynb), and [modules](./intro_to_modules.ipynb). In this guide, you will fit these all together to train models.\n\nTensorFlow also includes the [tf.Keras API](https://www.tensorflow.org/guide/keras/overview), a high-level neural network API that provides useful abstractions to reduce boilerplate. However, in this guide, you will use basic classes." }, { "cell_type": "markdown", "metadata": { "id": "3LXMVuV0VhDr" }, - "source": [ - "## Setup" - ] + "source": "## Setup" }, { "cell_type": "code", @@ -88,51 +52,23 @@ "metadata": { "id": "NiolgWMPgpwI" }, - "outputs": [], - "source": [ - "import tensorflow as tf\n", - "\n", - "import matplotlib.pyplot as plt\n", - "\n", - "colors = plt.rcParams['axes.prop_cycle'].by_key()['color']" - ] + "outputs": [ + ], + "source": "import tensorflow as tf\n\nimport matplotlib.pyplot as plt\n\ncolors = plt.rcParams['axes.prop_cycle'].by_key()['color']" }, { "cell_type": "markdown", "metadata": { "id": "iKD__8kFCKNt" }, - "source": [ - "## Solving machine learning problems\n", - "\n", - "Solving a machine learning problem usually consists of the following steps:\n", - "\n", - " - Obtain training data.\n", - " - Define the model.\n", - " - Define a loss function.\n", - " - Run through the training data, calculating loss from the ideal value\n", - " - Calculate gradients for that loss and use an *optimizer* to adjust the variables to fit the data.\n", - " - Evaluate your results.\n", - "\n", - "For illustration purposes, in this guide you'll develop a simple linear model, $f(x) = x * W + b$, which has two variables: $W$ (weights) and $b$ (bias).\n", - "\n", - "This is the most basic of machine learning problems: Given $x$ and $y$, try to find the slope and offset of a line via [simple linear regression](https://en.wikipedia.org/wiki/Linear_regression#Simple_and_multiple_linear_regression)." - ] + "source": "## Solving machine learning problems\n\nSolving a machine learning problem usually consists of the following steps:\n\n - Obtain training data.\n - Define the model.\n - Define a loss function.\n - Run through the training data, calculating loss from the ideal value\n - Calculate gradients for that loss and use an *optimizer* to adjust the variables to fit the data.\n - Evaluate your results.\n\nFor illustration purposes, in this guide you'll develop a simple linear model, $f(x) = x * W + b$, which has two variables: $W$ (weights) and $b$ (bias).\n\nThis is the most basic of machine learning problems: Given $x$ and $y$, try to find the slope and offset of a line via [simple linear regression](https://en.wikipedia.org/wiki/Linear_regression#Simple_and_multiple_linear_regression)." 
}, { "cell_type": "markdown", "metadata": { "id": "qutT_fkl_CBc" }, - "source": [ - "## Data\n", - "\n", - "Supervised learning uses *inputs* (usually denoted as *x*) and *outputs* (denoted *y*, often called *labels*). The goal is to learn from paired inputs and outputs so that you can predict the value of an output from an input.\n", - "\n", - "Each input of your data, in TensorFlow, is almost always represented by a tensor, and is often a vector. In supervised training, the output (or value you'd like to predict) is also a tensor.\n", - "\n", - "Here is some data synthesized by adding Gaussian (Normal) noise to points along a line." - ] + "source": "## Data\n\nSupervised learning uses *inputs* (usually denoted as *x*) and *outputs* (denoted *y*, often called *labels*). The goal is to learn from paired inputs and outputs so that you can predict the value of an output from an input.\n\nEach input of your data, in TensorFlow, is almost always represented by a tensor, and is often a vector. In supervised training, the output (or value you'd like to predict) is also a tensor.\n\nHere is some data synthesized by adding Gaussian (Normal) noise to points along a line." }, { "cell_type": "code", @@ -140,27 +76,9 @@ "metadata": { "id": "NzivK2ATByOz" }, - "outputs": [], - "source": [ - "# The actual line\n", - "TRUE_W = 3.0\n", - "TRUE_B = 2.0\n", - "\n", - "NUM_EXAMPLES = 201\n", - "\n", - "# A vector of random x values\n", - "x = tf.linspace(-2,2, NUM_EXAMPLES)\n", - "x = tf.cast(x, tf.float32)\n", - "\n", - "def f(x):\n", - " return x * TRUE_W + TRUE_B\n", - "\n", - "# Generate some noise\n", - "noise = tf.random.normal(shape=[NUM_EXAMPLES])\n", - "\n", - "# Calculate y\n", - "y = f(x) + noise" - ] + "outputs": [ + ], + "source": "# The actual line\nTRUE_W = 3.0\nTRUE_B = 2.0\n\nNUM_EXAMPLES = 201\n\n# A vector of random x values\nx = tf.linspace(-2,2, NUM_EXAMPLES)\nx = tf.cast(x, tf.float32)\n\ndef f(x):\n return x * TRUE_W + TRUE_B\n\n# Generate some noise\nnoise = tf.random.normal(shape=[NUM_EXAMPLES])\n\n# Calculate y\ny = f(x) + noise" }, { "cell_type": "code", @@ -168,36 +86,23 @@ "metadata": { "id": "IlFd_HVBFGIF" }, - "outputs": [], - "source": [ - "# Plot all the data\n", - "plt.plot(x, y, '.')\n", - "plt.show()" - ] + "outputs": [ + ], + "source": "# Plot all the data\nplt.plot(x, y, '.')\nplt.show()" }, { "cell_type": "markdown", "metadata": { "id": "UH95XUzhL99d" }, - "source": [ - "Tensors are usually gathered together in *batches*, or groups of inputs and outputs stacked together. Batching can confer some training benefits and works well with accelerators and vectorized computation. Given how small this dataset is, you can treat the entire dataset as a single batch." - ] + "source": "Tensors are usually gathered together in *batches*, or groups of inputs and outputs stacked together. Batching can confer some training benefits and works well with accelerators and vectorized computation. Given how small this dataset is, you can treat the entire dataset as a single batch." }, { "cell_type": "markdown", "metadata": { "id": "gFzH64Jn9PIm" }, - "source": [ - "## Define the model\n", - "\n", - "Use `tf.Variable` to represent all weights in a model. A `tf.Variable` stores a value and provides this in tensor form as needed. See the [variable guide](./variable.ipynb) for more details.\n", - "\n", - "Use `tf.Module` to encapsulate the variables and the computation. 
You could use any Python object, but this way it can be easily saved.\n",
-    "\n",
-    "Here, you define both *w* and *b* as variables."
-   ]
+   "source": "## Define the model\n\nUse `tf.Variable` to represent all weights in a model. A `tf.Variable` stores a value and provides this in tensor form as needed. See the [variable guide](./variable.ipynb) for more details.\n\nUse `tf.Module` to encapsulate the variables and the computation. You could use any Python object, but this way it can be easily saved.\n\nHere, you define both *w* and *b* as variables."
   },
   {
    "cell_type": "code",
@@ -205,15 +110,16 @@
    "metadata": {
     "id": "_WRu7Pze7wk8"
    },
-   "outputs": [],
+   "outputs": [
+   ],
    "source": [
     "class MyModel(tf.Module):\n",
     "  def __init__(self, **kwargs):\n",
     "    super().__init__(**kwargs)\n",
     "    # Initialize the weights to `5.0` and the bias to `0.0`\n",
     "    # In practice, these should be randomly initialized\n",
     "    self.w = tf.Variable(5.0)\n",
     "    self.b = tf.Variable(0.0)\n",
     "\n",
     "  def __call__(self, x):\n",
     "    return self.w * x + self.b\n",
@@ -232,20 +140,14 @@
    "metadata": {
     "id": "rdpN_3ssG9D5"
    },
-   "source": [
-    "The initial variables are set here in a fixed way, but Keras comes with any of a number of [initializers](https://www.tensorflow.org/api_docs/python/tf/keras/initializers) you could use, with or without the rest of Keras."
-   ]
+   "source": "The initial variables are set here in a fixed way, but Keras comes with any of a number of [initializers](https://www.tensorflow.org/api_docs/python/tf/keras/initializers) you could use, with or without the rest of Keras."
   },
   {
    "cell_type": "markdown",
    "metadata": {
     "id": "xa6j_yXa-j79"
    },
-   "source": [
-    "### Define a loss function\n",
-    "\n",
-    "A loss function measures how well the output of a model for a given input matches the target output. The goal is to minimize this difference during training. Define the standard L2 loss, also known as the \"mean squared\" error:"
-   ]
+   "source": "### Define a loss function\n\nA loss function measures how well the output of a model for a given input matches the target output. The goal is to minimize this difference during training. Define the standard L2 loss, also known as the \"mean squared\" error:"
   },
   {
    "cell_type": "code",
@@ -253,21 +155,16 @@
    "metadata": {
     "id": "Y0ysUFGY924U"
    },
-   "outputs": [],
-   "source": [
-    "# This computes a single loss value for an entire batch\n",
-    "def loss(target_y, predicted_y):\n",
-    "  return tf.reduce_mean(tf.square(target_y - predicted_y))"
-   ]
+   "outputs": [
+   ],
+   "source": "# This computes a single loss value for an entire batch\ndef loss(target_y, predicted_y):\n  return tf.reduce_mean(tf.square(target_y - predicted_y))"
   },
   {
    "cell_type": "markdown",
    "metadata": {
     "id": "-50nq-wPBsAW"
    },
-   "source": [
-    "Before training the model, you can visualize the loss value by plotting the model's predictions in red and the training data in blue:"
-   ]
+   "source": "Before training the model, you can visualize the loss value by plotting the model's predictions in red and the training data in blue:"
   },
   {
    "cell_type": "code",
@@ -275,36 +172,16 @@
    "metadata": {
     "id": "_eb83LtrB4nt"
    },
-   "outputs": [],
-   "source": [
-    "plt.plot(x, y, '.', label=\"Data\")\n",
-    "plt.plot(x, f(x), label=\"Ground truth\")\n",
-    "plt.plot(x, model(x), label=\"Predictions\")\n",
-    "plt.legend()\n",
-    "plt.show()\n",
-    "\n",
-    "print(\"Current loss: %1.6f\" % loss(y, model(x)).numpy())"
-   ]
+   "outputs": [
+   ],
+   "source": "plt.plot(x, y, '.', label=\"Data\")\nplt.plot(x, f(x), label=\"Ground truth\")\nplt.plot(x, model(x), label=\"Predictions\")\nplt.legend()\nplt.show()\n\nprint(\"Current loss: %1.6f\" % loss(y, model(x)).numpy())"
   },
   {
    "cell_type": "markdown",
    "metadata": {
     "id": "sSDP-yeq_4jE"
    },
-   "source": [
-    "### Define a training loop\n",
-    "\n",
-    "The training loop consists of repeatedly doing three tasks in order:\n",
-    "\n",
-    "* Sending a batch of inputs through the model to generate outputs\n",
-    "* Calculating the loss by comparing the outputs to the output (or label)\n",
-    "* Using gradient tape to find the gradients\n",
-    "* Optimizing the variables with those gradients\n",
-    "\n",
-    "For this example, you can train the model using [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent).\n",
-    "\n",
-    "There are many variants of the gradient descent scheme that are captured in `tf.keras.optimizers`. But in the spirit of building from first principles, here you will implement the basic math yourself with the help of `tf.GradientTape` for automatic differentiation and `tf.assign_sub` for decrementing a value (which combines `tf.assign` and `tf.sub`):"
-   ]
+   "source": "### Define a training loop\n\nThe training loop consists of repeatedly doing four tasks in order:\n\n* Sending a batch of inputs through the model to generate outputs\n* Calculating the loss by comparing the outputs to the target output (or label)\n* Using gradient tape to find the gradients\n* Optimizing the variables with those gradients\n\nFor this example, you can train the model using [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent).\n\nThere are many variants of the gradient descent scheme that are captured in `tf.keras.optimizers`. But in the spirit of building from first principles, here you will implement the basic math yourself with the help of `tf.GradientTape` for automatic differentiation and `tf.Variable.assign_sub` for decrementing a value (which combines `tf.Variable.assign` and `tf.subtract`):"
   },
   {
    "cell_type": "code",
@@ -312,31 +189,16 @@
    "metadata": {
     "id": "MBIACgdnA55X"
    },
-   "outputs": [],
-   "source": [
-    "# Given a callable model, inputs, outputs, and a learning rate...\n",
-    "def train(model, x, y, learning_rate):\n",
-    "\n",
-    "  with tf.GradientTape() as t:\n",
-    "    # Trainable variables are automatically tracked by GradientTape\n",
-    "    current_loss = loss(y, model(x))\n",
-    "\n",
-    "  # Use GradientTape to calculate the gradients with respect to W and b\n",
-    "  dw, db = t.gradient(current_loss, [model.w, model.b])\n",
-    "\n",
-    "  # Subtract the gradient scaled by the learning rate\n",
-    "  model.w.assign_sub(learning_rate * dw)\n",
-    "  model.b.assign_sub(learning_rate * db)"
-   ]
+   "outputs": [
+   ],
+   "source": "# Given a callable model, inputs, outputs, and a learning rate...\ndef train(model, x, y, learning_rate):\n\n  with tf.GradientTape() as t:\n    # Trainable variables are automatically tracked by GradientTape\n    current_loss = loss(y, model(x))\n\n  # Use GradientTape to calculate the gradients with respect to W and b\n  dw, db = t.gradient(current_loss, [model.w, model.b])\n\n  # Subtract the gradient scaled by the learning rate\n  model.w.assign_sub(learning_rate * dw)\n  model.b.assign_sub(learning_rate * db)"
   },
   {
    "cell_type": "markdown",
    "metadata": {
     "id": "RwWPaJryD2aN"
    },
-   "source": [
-    "For a look at training, you can send the same batch of *x* and *y* through the training loop, and see how `W` and `b` evolve."
-   ]
+   "source": "For a look at training, you can send the same batch of *x* and *y* through the training loop, and see how `W` and `b` evolve."
}, { "cell_type": "code", @@ -344,43 +206,16 @@ "metadata": { "id": "XdfkR223D9dW" }, - "outputs": [], - "source": [ - "model = MyModel()\n", - "\n", - "# Collect the history of W-values and b-values to plot later\n", - "weights = []\n", - "biases = []\n", - "epochs = range(10)\n", - "\n", - "# Define a training loop\n", - "def report(model, loss):\n", - " return f\"W = {model.w.numpy():1.2f}, b = {model.b.numpy():1.2f}, loss={loss:2.5f}\"\n", - "\n", - "\n", - "def training_loop(model, x, y):\n", - "\n", - " for epoch in epochs:\n", - " # Update the model with the single giant batch\n", - " train(model, x, y, learning_rate=0.1)\n", - "\n", - " # Track this before I update\n", - " weights.append(model.w.numpy())\n", - " biases.append(model.b.numpy())\n", - " current_loss = loss(y, model(x))\n", - "\n", - " print(f\"Epoch {epoch:2d}:\")\n", - " print(\" \", report(model, current_loss))" - ] + "outputs": [ + ], + "source": "model = MyModel()\n\n# Collect the history of W-values and b-values to plot later\nweights = []\nbiases = []\nepochs = range(10)\n\n# Define a training loop\ndef report(model, loss):\n return f\"W = {model.w.numpy():1.2f}, b = {model.b.numpy():1.2f}, loss={loss:2.5f}\"\n\n\ndef training_loop(model, x, y):\n\n for epoch in epochs:\n # Update the model with the single giant batch\n train(model, x, y, learning_rate=0.1)\n\n # Track this before I update\n weights.append(model.w.numpy())\n biases.append(model.b.numpy())\n current_loss = loss(y, model(x))\n\n print(f\"Epoch {epoch:2d}:\")\n print(\" \", report(model, current_loss))" }, { "cell_type": "markdown", "metadata": { "id": "8dKKLU4KkQEq" }, - "source": [ - "Do the training" - ] + "source": "Do the training" }, { "cell_type": "code", @@ -388,24 +223,16 @@ "metadata": { "id": "iRuNUghs1lHY" }, - "outputs": [], - "source": [ - "current_loss = loss(y, model(x))\n", - "\n", - "print(f\"Starting:\")\n", - "print(\" \", report(model, current_loss))\n", - "\n", - "training_loop(model, x, y)" - ] + "outputs": [ + ], + "source": "current_loss = loss(y, model(x))\n\nprint(f\"Starting:\")\nprint(\" \", report(model, current_loss))\n\ntraining_loop(model, x, y)" }, { "cell_type": "markdown", "metadata": { "id": "JPJgimg8kSA4" }, - "source": [ - "Plot the evolution of the weights over time:" - ] + "source": "Plot the evolution of the weights over time:" }, { "cell_type": "code", @@ -413,28 +240,16 @@ "metadata": { "id": "ND1fQw8sbTNr" }, - "outputs": [], - "source": [ - "plt.plot(epochs, weights, label='Weights', color=colors[0])\n", - "plt.plot(epochs, [TRUE_W] * len(epochs), '--',\n", - " label = \"True weight\", color=colors[0])\n", - "\n", - "plt.plot(epochs, biases, label='bias', color=colors[1])\n", - "plt.plot(epochs, [TRUE_B] * len(epochs), \"--\",\n", - " label=\"True bias\", color=colors[1])\n", - "\n", - "plt.legend()\n", - "plt.show()" - ] + "outputs": [ + ], + "source": "plt.plot(epochs, weights, label='Weights', color=colors[0])\nplt.plot(epochs, [TRUE_W] * len(epochs), '--',\n label = \"True weight\", color=colors[0])\n\nplt.plot(epochs, biases, label='bias', color=colors[1])\nplt.plot(epochs, [TRUE_B] * len(epochs), \"--\",\n label=\"True bias\", color=colors[1])\n\nplt.legend()\nplt.show()" }, { "cell_type": "markdown", "metadata": { "id": "zhlwj1ojkcUP" }, - "source": [ - "Visualize how the trained model performs" - ] + "source": "Visualize how the trained model performs" }, { "cell_type": "code", @@ -442,29 +257,16 @@ "metadata": { "id": "tpTEjWWex568" }, - "outputs": [], - "source": [ - "plt.plot(x, y, '.', 
label=\"Data\")\n", - "plt.plot(x, f(x), label=\"Ground truth\")\n", - "plt.plot(x, model(x), label=\"Predictions\")\n", - "plt.legend()\n", - "plt.show()\n", - "\n", - "print(\"Current loss: %1.6f\" % loss(model(x), y).numpy())" - ] + "outputs": [ + ], + "source": "plt.plot(x, y, '.', label=\"Data\")\nplt.plot(x, f(x), label=\"Ground truth\")\nplt.plot(x, model(x), label=\"Predictions\")\nplt.legend()\nplt.show()\n\nprint(\"Current loss: %1.6f\" % loss(model(x), y).numpy())" }, { "cell_type": "markdown", "metadata": { "id": "DODMMmfLIiOC" }, - "source": [ - "## The same solution, but with Keras\n", - "\n", - "It's useful to contrast the code above with the equivalent in Keras.\n", - "\n", - "Defining the model looks exactly the same if you subclass `tf.keras.Model`. Remember that Keras models inherit ultimately from module." - ] + "source": "## The same solution, but with Keras\n\nIt's useful to contrast the code above with the equivalent in Keras.\n\nDefining the model looks exactly the same if you subclass `tf.keras.Model`. Remember that Keras models inherit ultimately from module." }, { "cell_type": "code", @@ -472,38 +274,16 @@ "metadata": { "id": "Z86hCI0x1YX3" }, - "outputs": [], - "source": [ - "class MyModelKeras(tf.keras.Model):\n", - " def __init__(self, **kwargs):\n", - " super().__init__(**kwargs)\n", - " # Initialize the weights to `5.0` and the bias to `0.0`\n", - " # In practice, these should be randomly initialized\n", - " self.w = tf.Variable(5.0)\n", - " self.b = tf.Variable(0.0)\n", - "\n", - " def call(self, x):\n", - " return self.w * x + self.b\n", - "\n", - "keras_model = MyModelKeras()\n", - "\n", - "# Reuse the training loop with a Keras model\n", - "training_loop(keras_model, x, y)\n", - "\n", - "# You can also save a checkpoint using Keras's built-in support\n", - "keras_model.save_weights(\"my_checkpoint\")" - ] + "outputs": [ + ], + "source": "class MyModelKeras(tf.keras.Model):\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n # Initialize the weights to `5.0` and the bias to `0.0`\n # In practice, these should be randomly initialized\n self.w = tf.Variable(5.0)\n self.b = tf.Variable(0.0)\n\n def call(self, x):\n return self.w * x + self.b\n\nkeras_model = MyModelKeras()\n\n# Reuse the training loop with a Keras model\ntraining_loop(keras_model, x, y)\n\n# You can also save a checkpoint using Keras's built-in support\nkeras_model.save_weights(\"my_checkpoint\")" }, { "cell_type": "markdown", "metadata": { "id": "6kw5P4jt2Az8" }, - "source": [ - "Rather than write new training loops each time you create a model, you can use the built-in features of Keras as a shortcut. This can be useful when you do not want to write or debug Python training loops.\n", - "\n", - "If you do, you will need to use `model.compile()` to set the parameters, and `model.fit()` to train. It can be less code to use Keras implementations of L2 loss and gradient descent, again as a shortcut. Keras losses and optimizers can be used outside of these convenience functions, too, and the previous example could have used them." - ] + "source": "Rather than write new training loops each time you create a model, you can use the built-in features of Keras as a shortcut. This can be useful when you do not want to write or debug Python training loops.\n\nIf you do, you will need to use `model.compile()` to set the parameters, and `model.fit()` to train. It can be less code to use Keras implementations of L2 loss and gradient descent, again as a shortcut. 
Keras losses and optimizers can be used outside of these convenience functions, too, and the previous example could have used them." }, { "cell_type": "code", @@ -511,36 +291,16 @@ "metadata": { "id": "-nbLLfPE2pEl" }, - "outputs": [], - "source": [ - "keras_model = MyModelKeras()\n", - "\n", - "# compile sets the training parameters\n", - "keras_model.compile(\n", - " # By default, fit() uses tf.function(). You can\n", - " # turn that off for debugging, but it is on now.\n", - " run_eagerly=False,\n", - "\n", - " # Using a built-in optimizer, configuring as an object\n", - " optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),\n", - "\n", - " # Keras comes with built-in MSE error\n", - " # However, you could use the loss function\n", - " # defined above\n", - " loss=tf.keras.losses.mean_squared_error,\n", - ")" - ] + "outputs": [ + ], + "source": "keras_model = MyModelKeras()\n\n# compile sets the training parameters\nkeras_model.compile(\n # By default, fit() uses tf.function(). You can\n # turn that off for debugging, but it is on now.\n run_eagerly=False,\n\n # Using a built-in optimizer, configuring as an object\n optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),\n\n # Keras comes with built-in MSE error\n # However, you could use the loss function\n # defined above\n loss=tf.keras.losses.mean_squared_error,\n)" }, { "cell_type": "markdown", "metadata": { "id": "lrlHODiZccu2" }, - "source": [ - "Keras `fit` expects batched data or a complete dataset as a NumPy array. NumPy arrays are chopped into batches and default to a batch size of 32.\n", - "\n", - "In this case, to match the behavior of the hand-written loop, you should pass `x` in as a single batch of size 1000." - ] + "source": "Keras `fit` expects batched data or a complete dataset as a NumPy array. NumPy arrays are chopped into batches and default to a batch size of 32.\n\nIn this case, to match the behavior of the hand-written loop, you should pass `x` in as a single batch of size 1000." }, { "cell_type": "code", @@ -548,35 +308,23 @@ "metadata": { "id": "zfAYqtu136PO" }, - "outputs": [], - "source": [ - "print(x.shape[0])\n", - "keras_model.fit(x, y, epochs=10, batch_size=1000)" - ] + "outputs": [ + ], + "source": "print(x.shape[0])\nkeras_model.fit(x, y, epochs=10, batch_size=1000)" }, { "cell_type": "markdown", "metadata": { "id": "8zKZIO9P5s1G" }, - "source": [ - "Note that Keras prints out the loss after training, not before, so the first loss appears lower, but otherwise this shows essentially the same training performance." - ] + "source": "Note that Keras prints out the loss after training, not before, so the first loss appears lower, but otherwise this shows essentially the same training performance." }, { "cell_type": "markdown", "metadata": { "id": "vPnIVuaSJwWz" }, - "source": [ - "## Next steps\n", - "\n", - "In this guide, you have seen how to use the core classes of tensors, variables, modules, and gradient tape to build and train a model, and further how those ideas map to Keras.\n", - "\n", - "This is, however, an extremely simple problem. For a more practical introduction, see [Custom training walkthrough](../tutorials/customization/custom_training_walkthrough.ipynb).\n", - "\n", - "For more on using built-in Keras training loops, see [this guide](https://www.tensorflow.org/guide/keras/train_and_evaluate). For more on training loops and Keras, see [this guide](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch). 
For writing custom distributed training loops, see [this guide](distributed_training.ipynb#using_tfdistributestrategy_with_basic_training_loops_loops)." - ] + "source": "## Next steps\n\nIn this guide, you have seen how to use the core classes of tensors, variables, modules, and gradient tape to build and train a model, and further how those ideas map to Keras.\n\nThis is, however, an extremely simple problem. For a more practical introduction, see [Custom training walkthrough](../tutorials/customization/custom_training_walkthrough.ipynb).\n\nFor more on using built-in Keras training loops, see [this guide](https://www.tensorflow.org/guide/keras/train_and_evaluate). For more on training loops and Keras, see [this guide](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch). For writing custom distributed training loops, see [this guide](distributed_training.ipynb#using_tfdistributestrategy_with_basic_training_loops_loops)." } ], "metadata": { @@ -595,4 +343,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} +} \ No newline at end of file