From 98b9e0d789266db04b8665cd056e52a770919bcc Mon Sep 17 00:00:00 2001
From: Naomi Pentrel <5212232+npentrel@users.noreply.github.com>
Date: Mon, 9 Oct 2023 21:59:06 +0200
Subject: [PATCH] DOCS-1135: Improve ML flow (#1979)
---
docs/manage/ml/_index.md | 53 +++++++++++++++------
docs/manage/ml/train-model.md | 31 ++++++++++--
docs/manage/ml/upload-model.md | 29 +++++++++--
docs/services/ml/_index.md | 26 ++++++++--
docs/services/vision/classification.md | 24 +++++++++-
docs/services/vision/detection.md | 22 +++++++++
docs/tutorials/services/constrain-motion.md | 8 ++--
7 files changed, 162 insertions(+), 31 deletions(-)
diff --git a/docs/manage/ml/_index.md b/docs/manage/ml/_index.md
index 7e7ff7f572..4afd778add 100644
--- a/docs/manage/ml/_index.md
+++ b/docs/manage/ml/_index.md
@@ -10,31 +10,54 @@ description: "Use Viam's built-in machine learning capabilities to train image c
---
-Viam includes a built-in [machine learning (ML) service](/services/ml/) which provides your robot with the ability to learn from data and adjust its behavior based on insights gathered from that data.
+Viam includes a built-in [machine learning (ML) service](/services/ml/), which enables your robot to learn from data and adjust its behavior based on insights gathered from that data.
-Common use cases include object detection, image classification, natural language processing, and speech recognition and synthesis, but your robot can make use of machine learning with nearly any kind of data.
+Common use cases include:
+
+- Object detection and classification, which enable smart machines to detect people, animals, plants, or other objects with bounding boxes, and to perform actions when they are detected.
+- Speech recognition, natural language processing, and speech synthesis, which enable smart machines to verbally communicate with us.
+
+However, your robot can make use of machine learning with nearly any kind of data.
Viam natively supports [TensorFlow Lite](https://www.tensorflow.org/lite) ML models as long as your models adhere to the [model requirements](/services/ml/#tflite_cpu-limitations).
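+
+If you are bringing your own model, you can sanity-check it against these requirements before uploading it. The following is a minimal sketch using the standard TensorFlow Lite interpreter; `model.tflite` is a placeholder path:
+
+```python {class="line-numbers linkable-line-numbers"}
+# Sketch: inspect a TensorFlow Lite model's input and output tensors.
+# "model.tflite" is a placeholder; point it at your own model file.
+import tensorflow as tf
+
+interpreter = tf.lite.Interpreter(model_path="model.tflite")
+interpreter.allocate_tensors()
+
+# Print tensor shapes and dtypes to check against the tflite_cpu limitations.
+print(interpreter.get_input_details())
+print(interpreter.get_output_details())
+```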
-You can [train your own image classification models](/manage/ml/train-model/) or [add an existing model](/manage/ml/upload-model/) for object detection and classification within the platform using data from the [data management service](../../services/data/).
-Object detection and classification models are commonly used to enable robots to detect people, animals, plants, or other objects with bounding boxes, and to perform actions when they are detected.
+## Use machine learning with your smart machine
+
+{{< cards >}}
+{{% manualcard %}}
+
+
+Train or upload an ML model
+
+You can [train your own models](/manage/ml/train-model/) for object detection and classification using data from the [data management service](/services/data/), or [add an existing model](/manage/ml/upload-model/).
+
+{{% /manualcard %}}
+{{% manualcard %}}
-To make use of ML models with your robot, you can use the built-in [ML model service](/services/ml/) to deploy and run the model.
+Deploy your ML model
-Once you have [deployed the ML model service](/services/ml/#create-an-ml-model-service) to your robot, you can then add another service to make use of the model.
+To use ML models with your smart machine, deploy and run the model with the built-in [ML model service](/services/ml/).
-- For object detection and classification, you can use the [vision service](/services/vision/), which provides both [mlmodel detector](/services/vision/detection/#configure-an-mlmodel-detector) and [mlmodel classifier](/services/vision/classification/#configure-an-mlmodel-classifier) models.
-- For other usage, you can create a [modular resource](/extend/modular-resources/) to integrate it with your robot.
- For an example, see [this tutorial](/extend/modular-resources/examples/tflite-module/) which adds a modular-resource-based service that uses TensorFlow Lite to classify audio samples.
+{{% /manualcard %}}
+{{% manualcard %}}
-The video below shows the training process for an object detection model using a bounding box:
+Configure a service
-{{}}
+For object detection and classification, you can use the [vision service](/services/vision/), which provides both [`mlmodel` detector](/services/vision/detection/#configure-an-mlmodel-detector) and [`mlmodel` classifier](/services/vision/classification/#configure-an-mlmodel-classifier) models.
+
+For other usage, you can create a [modular resource](/extend/modular-resources/) to integrate the model with your robot.
+
+{{% /manualcard %}}
+{{% manualcard %}}
+
+Test your detector or classifier
+
+Test your [`mlmodel` detector](/services/vision/detection/#test-your-detector) or [`mlmodel` classifier](/services/vision/classification/#test-your-classifier). You can also query either one from code, as sketched below.
+
+{{% /manualcard %}}
+
+{{< /cards >}}
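+
+Once your detector or classifier is configured, you can query it from code. The following is a minimal sketch using the Viam Python SDK: the service name `my_detector`, the camera name `cam`, and the credential placeholders are assumptions to replace with your robot's own values.
+
+```python {class="line-numbers linkable-line-numbers"}
+# Sketch: query a configured mlmodel detector with the Viam Python SDK.
+import asyncio
+
+from viam.robot.client import RobotClient
+from viam.rpc.dial import Credentials, DialOptions
+from viam.services.vision import VisionClient
+
+
+async def main():
+    # Replace the address and secret with your robot's values.
+    creds = Credentials(type="robot-location-secret", payload="<SECRET>")
+    opts = RobotClient.Options(refresh_interval=0, dial_options=DialOptions(credentials=creds))
+    robot = await RobotClient.at_address("<ROBOT ADDRESS>", opts)
+
+    # "my_detector" is the configured vision service; "cam" is a camera name.
+    detector = VisionClient.from_robot(robot, "my_detector")
+    detections = await detector.get_detections_from_camera("cam")
+    for d in detections:
+        print(d.class_name, d.confidence)
+
+    await robot.close()
+
+
+asyncio.run(main())
+```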
-## Next Steps
+## Tutorials
{{< cards >}}
-{{% card link="/manage/ml/train-model" %}}
-{{% card link="/manage/ml/upload-model" %}}
-{{% card link="/services/ml" customTitle="Deploy Model" %}}
-{{% card link="/tutorials/projects/pet-treat-dispenser/" customTitle="Tutorial: Smart Pet Feeder" %}}
+{{% card link="/tutorials/projects/pet-treat-dispenser/" customTitle="Smart Pet Feeder" %}}
{{% card link="/extend/modular-resources/examples/tflite-module/" %}}
{{< /cards >}}
diff --git a/docs/manage/ml/train-model.md b/docs/manage/ml/train-model.md
index 5737a9f722..50435b8a80 100644
--- a/docs/manage/ml/train-model.md
+++ b/docs/manage/ml/train-model.md
@@ -10,7 +10,7 @@ description: "Train an image classification model on labeled image data."
# SME: Aaron Casas
---
-You can label or add bounding boxes to [images collected](../../../services/data/configure-data-capture/) by robots and use the annotated data to train a **Single Label Classification Model**, **Multi Label Classification Model** or **Object Detection Model** within Viam.
+You can label or add bounding boxes to [images collected](/services/data/configure-data-capture/) by robots and use the annotated data to train a **Single Label Classification Model**, **Multi Label Classification Model**, or **Object Detection Model** within Viam.
{{}}
@@ -52,7 +52,7 @@ Once the model has finished training, it becomes visible in the **Models** secti
### Train a new version of a model
-If you [deploy a model](../../../services/ml/) to a robot, Viam automatically assumes that this is the `latest` version of the model and that you would always like to deploy the `latest` version of the model to the robot.
+If you [deploy a model](/services/ml/) to a robot, Viam automatically assumes it is the `latest` version of the model and that you always want the `latest` version deployed to the robot.
If you train a new version of that model, Viam will automatically deploy the new version to the robot and replace the old version.
{{< alert title="Important" color="note" >}}
@@ -60,7 +60,7 @@ The previous model remains unchanged when you are training a new version of a mo
-If you are training a new model, you need to again select the images to train on because the model will be built from scratch.
+If you are training a new model, you need to select the images to train on again, because the model will be built from scratch.
{{< /alert >}}
-If you do not want Viam to automatically deploy the `latest` version of the model, you can change `packages` configuration in the [Raw JSON robot configuration](../../configuration/#the-config-tab).
+If you do not want Viam to automatically deploy the `latest` version of the model, you can change the `packages` configuration in the [Raw JSON robot configuration](/manage/configuration/#the-config-tab).
-You can get the version number from a specific model version by clicking on **COPY** on the model on the model page.
+You can get the version number for a specific model version by clicking **COPY** on the model on the model page.
The model package config looks like this:
@@ -75,4 +75,27 @@ The model package config looks like this:
## Next Steps
-To deploy your model to your robot, see [deploy model](../../../services/ml/).
+{{< cards >}}
+{{% manualcard link="/services/ml/" %}}
+
+Deploy your model
+
+Create an ML model service to deploy your machine learning model to your smart machine.
+
+{{% /manualcard %}}
+{{% manualcard link="/services/vision/detection/#configure-an-mlmodel-detector" %}}
+
+Create a detector with your model
+
+Configure an `mlmodel` detector.
+
+{{% /manualcard %}}
+{{% manualcard link="/services/vision/classification/#configure-an-mlmodel-classifier" %}}
+
+Create a classifier with your model
+
+Configure an `mlmodel` classifier.
+
+{{% /manualcard %}}
+
+{{< /cards >}}
diff --git a/docs/manage/ml/upload-model.md b/docs/manage/ml/upload-model.md
index 55426b4376..b9dfc0b343 100644
--- a/docs/manage/ml/upload-model.md
+++ b/docs/manage/ml/upload-model.md
@@ -29,10 +29,10 @@ Once the model has finished training, it becomes visible in the **Models** secti
### Upload a new version of a model
-If you [deploy a model](../../../services/ml/) to a robot, Viam automatically assumes that this is the `latest` version of the model and that you would always like to deploy the `latest` version of the model to the robot.
+If you [deploy a model](/services/ml/) to a robot, Viam automatically assumes it is the `latest` version of the model and that you always want the `latest` version deployed to the robot.
If you upload a new version of that model, Viam will automatically deploy the new version to the robot and replace the old version.
-If you do not want Viam to automatically deploy the `latest` version of the model, you can change `packages` configuration in the [Raw JSON robot configuration](../../configuration/#the-config-tab).
+If you do not want Viam to automatically deploy the `latest` version of the model, you can change the `packages` configuration in the [Raw JSON robot configuration](/manage/configuration/#the-config-tab).
-You can get the version number from a specific model version by clicking on **COPY** on the model on the model page.
+You can get the version number for a specific model version by clicking **COPY** on the model on the model page.
The model package config looks like this:
@@ -47,4 +47,27 @@ The model package config looks like this:
## Next Steps
-To deploy your model to your robot, see [deploy model](../../../services/ml/).
+{{< cards >}}
+{{% manualcard link="/services/ml/" %}}
+
+Deploy your model
+
+Create an ML model service to deploy your machine learning model to your smart machine.
+
+{{% /manualcard %}}
+{{% manualcard link="/services/vision/detection/#configure-an-mlmodel-detector" %}}
+
+Create a detector with your model
+
+Configure an `mlmodel` detector.
+
+{{% /manualcard %}}
+{{% manualcard link="/services/vision/classification/#configure-an-mlmodel-classifier" %}}
+
+Create a classifier with your model
+
+Configure an `mlmodel` classifier.
+
+{{% /manualcard %}}
+
+{{< /cards >}}
diff --git a/docs/services/ml/_index.md b/docs/services/ml/_index.md
index 16d031537d..1bafa33f3b 100644
--- a/docs/services/ml/_index.md
+++ b/docs/services/ml/_index.md
@@ -11,7 +11,7 @@ icon: "/services/icons/ml.svg"
# SME: Aaron Casas
---
-The Machine Learning (ML) model service allows you to deploy machine learning models to your smart machine.
+Once you have [trained](/manage/ml/train-model/) or [uploaded](/manage/ml/upload-model/) a machine learning model, the Machine Learning (ML) model service allows you to deploy it to your smart machine.
## Create an ML model service
@@ -132,7 +132,25 @@ You can use one of these architectures or build your own.
## Next Steps
-To make use of your new model, follow the instructions to create:
+To make use of your model with your smart machine, add a [vision service](/services/vision/) or a [modular resource](/extend/):
-- a [`mlmodel` detector](../vision/detection/#configure-an-mlmodel-detector) or
-- a [`mlmodel` classifier](../vision/classification/#configure-an-mlmodel-classifier)
+{{< cards >}}
+
+{{% manualcard link="/services/vision/detection/#configure-an-mlmodel-detector" %}}
+
+Create a detector with your model
+
+Configure an `mlmodel` detector.
+
+{{% /manualcard %}}
+{{% manualcard link="/services/vision/classification/#configure-an-mlmodel-classifier" %}}
+
+Create a classifier with your model
+
+Configure an `mlmodel` classifier.
+
+{{% /manualcard %}}
+
+{{% card link="/extend/modular-resources/examples/tflite-module/" customTitle="Example: TensorFlow Lite Modular Service" %}}
+
+{{< /cards >}}
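+
+If you want to work with a deployed model directly rather than through the vision service, the ML model service also exposes its own client API. The following is a minimal sketch using the Viam Python SDK; the service name `mlmodel` is a placeholder, method names may vary between SDK versions, and connection boilerplate is omitted:
+
+```python {class="line-numbers linkable-line-numbers"}
+# Sketch: read a deployed model's metadata through the ML model service client.
+# Assumes an already-connected RobotClient `robot` and a service named "mlmodel".
+from viam.services.mlmodel import MLModelClient
+
+
+async def print_model_metadata(robot):
+    mlmodel = MLModelClient.from_robot(robot, "mlmodel")
+    metadata = await mlmodel.metadata()  # tensor names, shapes, and types
+    print(metadata)
+```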
diff --git a/docs/services/vision/classification.md b/docs/services/vision/classification.md
index 218c2e6dc8..ee5c489f13 100644
--- a/docs/services/vision/classification.md
+++ b/docs/services/vision/classification.md
@@ -27,7 +27,29 @@ The types of classifiers supported are:
## Configure an `mlmodel` classifier
-To create an `mlmodel` classifier, you need an [ML model service with a suitable model](../../ml/).
+To create an `mlmodel` classifier, you first need to:
+
+{{< cards >}}
+{{% manualcard %}}
+
+Train or upload an ML model
+
+You can [add an existing model](/manage/ml/upload-model/) or [train your own models](/manage/ml/train-model/) for object detection and classification using data from the [data management service](/services/data/).
+
+{{% /manualcard %}}
+{{% manualcard %}}
+
+Deploy your model
+
+To use ML models with your smart machine, deploy and run the model with the built-in [ML model service](/services/ml/).
+
+{{% /manualcard %}}
+
+{{< /cards >}}
+
+Once you have deployed your ML model, configure your `mlmodel` classifier:
{{< tabs >}}
{{% tab name="Builder" %}}
diff --git a/docs/services/vision/detection.md b/docs/services/vision/detection.md
index 1e954be341..6a3d6a0063 100644
--- a/docs/services/vision/detection.md
+++ b/docs/services/vision/detection.md
@@ -142,6 +142,28 @@ Proceed to [test your detector](#test-your-detector).
## Configure an `mlmodel` detector
+To create an `mlmodel` detector, you first need to:
+
+{{< cards >}}
+{{% manualcard %}}
+
+Train or upload an ML model
+
+You can [add an existing model](/manage/ml/upload-model/) or [train your own models](/manage/ml/train-model/) for object detection and classification using data from the [data management service](/services/data/).
+
+{{% /manualcard %}}
+{{% manualcard %}}
+
+Deploy your model
+
+To use ML models with your smart machine, deploy and run the model with the built-in [ML model service](/services/ml/).
+
+{{% /manualcard %}}
+
+{{< /cards >}}
+
-A machine learning detector that draws bounding boxes according to the specified tensorflow-lite model file available on the robot’s hard drive.
-To create a `mlmodel` classifier, you need an [ML model service with a suitable model](../../ml/).
+An `mlmodel` detector is a machine learning detector that draws bounding boxes according to the specified TensorFlow Lite model file available on the robot's hard drive.
+
+Once you have deployed your ML model, configure your `mlmodel` detector:
diff --git a/docs/tutorials/services/constrain-motion.md b/docs/tutorials/services/constrain-motion.md
index 58c2b8e5ca..54fae1e9d1 100644
--- a/docs/tutorials/services/constrain-motion.md
+++ b/docs/tutorials/services/constrain-motion.md
@@ -46,7 +46,7 @@ Before starting this tutorial, you must:
## Configure your robot
-Use the same robot configuration from [the previous tutorial](../plan-motion-with-arm-gripper/) for this tutorial, including the [arm](../../../components/arm/) and [gripper](../../../components/gripper/) components with [frames](../../../services/frame-system/) configured.
+Use the same robot configuration from [the previous tutorial](../plan-motion-with-arm-gripper/) for this tutorial, including the [arm](../../../components/arm/) and [gripper](../../../components/gripper/) components with [frames](/services/frame-system/) configured.
Make one change: Change the Z translation of the gripper frame from `90` to `0`.
The motion service is one of the "built-in" services, so you don't need to do anything to enable it on your robot.
@@ -211,7 +211,7 @@ If the axes are different from those described above, take these differences int
Imagine your cup is 120 millimeters tall with a radius of 45 millimeters.
You need to take this space into account to avoid bumping objects on the table with the cup.
-You can pass transforms to the [motion service `move` method](../../../services/motion/#move) to represent objects that are connected to the robot but are not actual robotic components.
+You can pass transforms to the [motion service `move` method](/services/motion/#move) to represent objects that are connected to the robot but are not actual robotic components.
To represent the drinking cup held in your robot's gripper, create a transform with the cup's measurements:
```python {class="line-numbers linkable-line-numbers"}
@@ -289,10 +289,10 @@ If we changed it to `theta=90` or `theta=270`, the gripper jaws would open verti
## Add a motion constraint
-To keep the cup upright as the arm moves it from one place on the table to another, create a [linear constraint](../../../services/motion/constraints/#linear-constraint).
+To keep the cup upright as the arm moves it from one place on the table to another, create a [linear constraint](/services/motion/constraints/#linear-constraint).
When you tell the robot to move the cup from one upright position to another, the linear constraint forces the gripper to move linearly and to maintain the upright orientation of the cup throughout the planned path.
-You could try using an [orientation constraint](../../../services/motion/constraints/#orientation-constraint) instead, which would also constrain the orientation.
+You could try using an [orientation constraint](/services/motion/constraints/#orientation-constraint) instead, which would also constrain the orientation.
However, since this opens up many more options for potential paths, it is much more computationally intensive than the linear constraint.
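+
+For reference, a linear constraint is a small proto message passed to the motion service through a `Constraints` object. A sketch of constructing one with the Python SDK (the 0.2 mm tolerance is illustrative, not prescriptive):
+
+```python {class="line-numbers linkable-line-numbers"}
+# Sketch: build a linear constraint for the motion service's move() call.
+from viam.proto.service.motion import Constraints, LinearConstraint
+
+# line_tolerance_mm bounds how far the gripper may deviate from a straight line.
+constraints = Constraints(linear_constraint=[LinearConstraint(line_tolerance_mm=0.2)])
+```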
The code below creates a linear constraint and then uses that constraint to keep the cup upright and move it in a series of linear paths along the predetermined route while avoiding the obstacles we've defined: