DOCS-1135: Improve ML flow (#1979)
npentrel authored Oct 9, 2023
1 parent 2496e8d commit 98b9e0d
Showing 7 changed files with 162 additions and 31 deletions.
53 changes: 38 additions & 15 deletions docs/manage/ml/_index.md
@@ -10,31 +10,54 @@ description: "Use Viam's built-in machine learning capabilities to train image c
---

Viam includes a built-in [machine learning (ML) service](/services/ml/) which provides your robot with the ability to learn from data and adjust its behavior based on insights gathered from that data.
Common use cases include object detection, image classification, natural language processing, and speech recognition and synthesis, but your robot can make use of machine learning with nearly any kind of data.
Common use cases include:

- Object detection and classification, which enable smart machines to detect people, animals, plants, or other objects with bounding boxes, and to perform actions when they are detected.
- Speech recognition, natural language processing, and speech synthesis, which enable smart machines to verbally communicate with us.

However, your robot can make use of machine learning with nearly any kind of data.

Viam natively supports [TensorFlow Lite](https://www.tensorflow.org/lite) ML models as long as your models adhere to the [model requirements](/services/ml/#tflite_cpu-limitations).

You can [train your own image classification models](/manage/ml/train-model/) or [add an existing model](/manage/ml/upload-model/) for object detection and classification within the platform using data from the [data management service](../../services/data/).
Object detection and classification models are commonly used to enable robots to detect people, animals, plants, or other objects with bounding boxes, and to perform actions when they are detected.

## Use machine learning with your smart machine

{{< cards >}}
{{% manualcard %}}

<h4>Train or upload an ML model</h4>

You can [train your own models](/manage/ml/train-model/) for object detection and classification using data from the [data management service](../../services/data/) or [add an existing model](/manage/ml/upload-model/).

{{% /manualcard %}}
{{% manualcard %}}

To make use of ML models with your robot, you can use the built-in [ML model service](/services/ml/) to deploy and run the model.
<h4>Deploy your ML model</h4>

Once you have [deployed the ML model service](/services/ml/#create-an-ml-model-service) to your robot, you can then add another service to make use of the model.
To make use of ML models with your smart machine, use the built-in [ML model service](/services/ml/) to deploy and run the model.

- For object detection and classification, you can use the [vision service](/services/vision/), which provides both [mlmodel detector](/services/vision/detection/#configure-an-mlmodel-detector) and [mlmodel classifier](/services/vision/classification/#configure-an-mlmodel-classifier) models.
- For other usage, you can create a [modular resource](/extend/modular-resources/) to integrate it with your robot.
For an example, see [this tutorial](/extend/modular-resources/examples/tflite-module/) which adds a modular-resource-based service that uses TensorFlow Lite to classify audio samples.
{{% /manualcard %}}
{{% manualcard %}}

The video below shows the training process for an object detection model using a bounding box:
<h4>Configure a service</h4>

{{<youtube embed_url="https://www.youtube-nocookie.com/embed/CP14LR0Pq64">}}
For object detection and classification, you can use the [vision service](/services/vision/), which provides an [`mlmodel` detector](/services/vision/detection/#configure-an-mlmodel-detector) and an [`mlmodel` classifier](/services/vision/classification/#configure-an-mlmodel-classifier).

For other usage, you can use a [modular resource](/extend/modular-resources/) to integrate it with your robot.

{{% /manualcard %}}
{{% manualcard %}}

<h4>Test your detector or classifier</h4>

Test your [`mlmodel detector`](/services/vision/detection/#test-your-detector) or [`mlmodel classifier`](/services/vision/classification/#test-your-classifier).

{{% /manualcard %}}

{{< /cards >}}

## Next Steps
## Tutorials

{{< cards >}}
{{% card link="/manage/ml/train-model" %}}
{{% card link="/manage/ml/upload-model" %}}
{{% card link="/services/ml" customTitle="Deploy Model" %}}
{{% card link="/tutorials/projects/pet-treat-dispenser/" customTitle="Tutorial: Smart Pet Feeder" %}}
{{% card link="/tutorials/projects/pet-treat-dispenser/" customTitle="Smart Pet Feeder" %}}
{{% card link="/extend/modular-resources/examples/tflite-module/" %}}
{{< /cards >}}
31 changes: 27 additions & 4 deletions docs/manage/ml/train-model.md
@@ -10,7 +10,7 @@ description: "Train an image classification model on labeled image data."
# SME: Aaron Casas
---

You can label or add bounding boxes to [images collected](../../../services/data/configure-data-capture/) by robots and use the annotated data to train a **Single Label Classification Model**, **Multi Label Classification Model** or **Object Detection Model** within Viam.
You can label or add bounding boxes to [images collected](/services/data/configure-data-capture/) by robots and use the annotated data to train a **Single Label Classification Model**, **Multi Label Classification Model**, or **Object Detection Model** within Viam.

{{<youtube embed_url="https://www.youtube-nocookie.com/embed/CP14LR0Pq64">}}

@@ -52,15 +52,15 @@ Once the model has finished training, it becomes visible in the **Models** secti

### Train a new version of a model

If you [deploy a model](../../../services/ml/) to a robot, Viam automatically assumes that this is the `latest` version of the model and that you would always like to deploy the `latest` version of the model to the robot.
If you [deploy a model](/services/ml/) to a robot, Viam automatically assumes that this is the `latest` version of the model and that you would always like to deploy the `latest` version of the model to the robot.
If you train a new version of that model, Viam will automatically deploy the new version to the robot and replace the old version.

{{< alert title="Important" color="note" >}}
When you train a new version of a model, the previous model remains unchanged and is not used as input.
To train a new model version, you need to select the images to train on again, because the model is built from scratch.
{{< /alert >}}

If you do not want Viam to automatically deploy the `latest` version of the model, you can change `packages` configuration in the [Raw JSON robot configuration](../../configuration/#the-config-tab).
If you do not want Viam to automatically deploy the `latest` version of the model, you can change the `packages` configuration in the [Raw JSON robot configuration](/manage/configuration/#the-config-tab).

To get the version number for a specific model version, click **COPY** on the model on the model page.
The model package config looks like this:
@@ -75,4 +75,27 @@
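The config block itself is collapsed in this diff. As a rough sketch of the shape a pinned `packages` entry takes (the field names and the version string here are illustrative assumptions, not authoritative — copy the exact values from the model page):

```json
{
  "packages": [
    {
      "name": "<model_name>",
      "package": "<organization_id>/<model_name>",
      "version": "2023-09-01T12-00-00"
    }
  ]
}
```

Pinning `version` to a specific value instead of `latest` prevents Viam from automatically redeploying newly trained versions.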

## Next Steps

To deploy your model to your robot, see [deploy model](../../../services/ml/).
{{< cards >}}
{{% manualcard link="/services/ml/" %}}

<h4>Deploy your model</h4>

Create an ML model service to deploy your machine learning model to your smart machine.

{{% /manualcard %}}
{{% manualcard link="/services/vision/detection/#configure-an-mlmodel-detector" %}}

<h4>Create a detector with your model</h4>

Configure an `mlmodel detector`.

{{% /manualcard %}}
{{% manualcard link="/services/vision/classification/#configure-an-mlmodel-classifier" %}}

<h4>Create a classifier with your model</h4>

Configure your `mlmodel classifier`.

{{% /manualcard %}}

{{< /cards >}}
29 changes: 26 additions & 3 deletions docs/manage/ml/upload-model.md
@@ -29,10 +29,10 @@ Once the model has finished training, it becomes visible in the **Models** secti

### Upload a new version of a model

If you [deploy a model](../../../services/ml/) to a robot, Viam automatically assumes that this is the `latest` version of the model and that you would always like to deploy the `latest` version of the model to the robot.
If you [deploy a model](/services/ml/) to a robot, Viam automatically assumes that this is the `latest` version of the model and that you would always like to deploy the `latest` version of the model to the robot.
If you upload a new version of that model, Viam will automatically deploy the new version to the robot and replace the old version.

If you do not want Viam to automatically deploy the `latest` version of the model, you can change `packages` configuration in the [Raw JSON robot configuration](../../configuration/#the-config-tab).
If you do not want Viam to automatically deploy the `latest` version of the model, you can change the `packages` configuration in the [Raw JSON robot configuration](/manage/configuration/#the-config-tab).

To get the version number for a specific model version, click **COPY** on the model on the model page.
The model package config looks like this:
@@ -47,4 +47,27 @@

## Next Steps

To deploy your model to your robot, see [deploy model](../../../services/ml/).
{{< cards >}}
{{% manualcard link="/services/ml/" %}}

<h4>Deploy your model</h4>

Create an ML model service to deploy your machine learning model to your smart machine.

{{% /manualcard %}}
{{% manualcard link="/services/vision/detection/#configure-an-mlmodel-detector" %}}

<h4>Create a detector with your model</h4>

Configure your `mlmodel detector`.

{{% /manualcard %}}
{{% manualcard link="/services/vision/classification/#configure-an-mlmodel-classifier" %}}

<h4>Create a classifier with your model</h4>

Configure your `mlmodel classifier`.

{{% /manualcard %}}

{{< /cards >}}
26 changes: 22 additions & 4 deletions docs/services/ml/_index.md
@@ -11,7 +11,7 @@ icon: "/services/icons/ml.svg"
# SME: Aaron Casas
---

The Machine Learning (ML) model service allows you to deploy machine learning models to your smart machine.
Once you have [trained](/manage/ml/train-model/) or [uploaded](/manage/ml/upload-model/) your model, the Machine Learning (ML) model service allows you to deploy machine learning models to your smart machine.

## Create an ML model service

@@ -132,7 +132,25 @@ You can use one of these architectures or build your own.

## Next Steps

To make use of your new model, follow the instructions to create:
To make use of your model with your smart machine, add a [vision service](/services/vision/) or a [modular resource](/extend/):

- a [`mlmodel` detector](../vision/detection/#configure-an-mlmodel-detector) or
- a [`mlmodel` classifier](../vision/classification/#configure-an-mlmodel-classifier)
{{< cards >}}

{{% manualcard link="/services/vision/detection/#configure-an-mlmodel-detector" %}}

<h4>Create a detector with your model</h4>

Configure an `mlmodel detector`.

{{% /manualcard %}}
{{% manualcard link="/services/vision/classification/#configure-an-mlmodel-classifier" %}}

<h4>Create a classifier with your model</h4>

Configure your `mlmodel classifier`.

{{% /manualcard %}}

{{% card link="/extend/modular-resources/examples/tflite-module/" customTitle="Example: TensorFlow Lite Modular Service" %}}

{{< /cards >}}
24 changes: 23 additions & 1 deletion docs/services/vision/classification.md
@@ -27,7 +27,29 @@ The types of classifiers supported are:

## Configure an `mlmodel` classifier

To create an `mlmodel` classifier, you need an [ML model service with a suitable model](../../ml/).
To create an `mlmodel` classifier, you first need to:

{{< cards >}}
{{% manualcard %}}

<h4>Train or upload an ML model</h4>

You can [add an existing model](/manage/ml/upload-model/) or [train your own models](/manage/ml/train-model/) for object detection and classification using data from the [data management service](/services/data/).

{{% /manualcard %}}
{{% manualcard %}}

<h4>Deploy your model</h4>

To make use of ML models with your smart machine, use the built-in [ML model service](/services/ml/) to deploy and run the model.

{{% /manualcard %}}

{{< /cards >}}

<br>

Once you have deployed your ML model, configure your `mlmodel` classifier:

{{< tabs >}}
{{% tab name="Builder" %}}
22 changes: 22 additions & 0 deletions docs/services/vision/detection.md
@@ -142,6 +142,28 @@ Proceed to [test your detector](#test-your-detector).

## Configure an `mlmodel` detector

To create an `mlmodel` detector, you first need to:

{{< cards >}}
{{% manualcard %}}

<h4>Train or upload an ML model</h4>

You can [add an existing model](/manage/ml/upload-model/) or [train your own models](/manage/ml/train-model/) for object detection and classification using data from the [data management service](/services/data/).

{{% /manualcard %}}
{{% manualcard %}}

<h4>Deploy your model</h4>

To make use of ML models with your smart machine, use the built-in [ML model service](/services/ml/) to deploy and run the model.

{{% /manualcard %}}

{{< /cards >}}

<br>

An `mlmodel` detector is a machine learning detector that draws bounding boxes according to the specified TensorFlow Lite model file available on the robot's hard drive.
To create an `mlmodel` detector, you need an [ML model service with a suitable model](../../ml/).
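For reference, a raw-JSON sketch of what such a vision service entry might look like (the `mlmodel_name` attribute and overall layout are assumptions based on the pattern described above — consult the configuration docs linked here for the authoritative schema):

```json
{
  "name": "my_detector",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "my_model"
  }
}
```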

8 changes: 4 additions & 4 deletions docs/tutorials/services/constrain-motion.md
@@ -46,7 +46,7 @@ Before starting this tutorial, you must:

## Configure your robot

Use the same robot configuration from [the previous tutorial](../plan-motion-with-arm-gripper/) for this tutorial, including the [arm](../../../components/arm/) and [gripper](../../../components/gripper/) components with [frames](../../../services/frame-system/) configured.
Use the same robot configuration from [the previous tutorial](../plan-motion-with-arm-gripper/) for this tutorial, including the [arm](../../../components/arm/) and [gripper](../../../components/gripper/) components with [frames](/services/frame-system/) configured.
Make one change: change the Z translation of the gripper frame from `90` to `0`.

The motion service is one of the "built-in" services, so you don't need to do anything to enable it on your robot.
@@ -211,7 +211,7 @@ If the axes are different from those described above, take these differences int
Imagine your cup is 120 millimeters tall with a radius of 45 millimeters.
You need to take this space into account to avoid bumping objects on the table with the cup.

You can pass transforms to the [motion service `move` method](../../../services/motion/#move) to represent objects that are connected to the robot but are not actual robotic components.
You can pass transforms to the [motion service `move` method](/services/motion/#move) to represent objects that are connected to the robot but are not actual robotic components.
To represent the drinking cup held in your robot's gripper, create a transform with the cup's measurements:

```python {class="line-numbers linkable-line-numbers"}
@@ -289,10 +289,10 @@ If we changed it to `theta=90` or `theta=270`, the gripper jaws would open verti

## Add a motion constraint

To keep the cup upright as the arm moves it from one place on the table to another, create a [linear constraint](../../../services/motion/constraints/#linear-constraint).
To keep the cup upright as the arm moves it from one place on the table to another, create a [linear constraint](/services/motion/constraints/#linear-constraint).
When you tell the robot to move the cup from one upright position to another, the linear constraint forces the gripper to move linearly and to maintain the upright orientation of the cup throughout the planned path.

You could try using an [orientation constraint](../../../services/motion/constraints/#orientation-constraint) instead, which would also constrain the orientation.
You could try using an [orientation constraint](/services/motion/constraints/#orientation-constraint) instead, which would also constrain the orientation.
However, since this opens up many more options for potential paths, it is much more computationally intensive than the linear constraint.

The code below creates a linear constraint and then uses that constraint to keep the cup upright and move it in a series of linear paths along the predetermined route while avoiding the obstacles we've defined:
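The idea behind the linear constraint can be sketched without the SDK: every waypoint lies on the straight segment between start and goal, and the orientation component is held fixed, which is what keeps the cup upright (positions and the orientation placeholder below are illustrative, not Viam API objects):

```python
# SDK-free illustration of a linearly constrained move: positions are
# sampled on the straight segment from start to goal, and the orientation
# is held constant so the "cup" never tilts along the way.


def lerp(a, b, t):
    """Linear interpolation between two equal-length position tuples."""
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))


def linear_path(start_xyz, goal_xyz, orientation, steps=4):
    """List of (position, orientation) waypoints along a straight segment."""
    return [(lerp(start_xyz, goal_xyz, i / steps), orientation)
            for i in range(steps + 1)]


# Slide the cup 300 mm across the table at a constant 120 mm height,
# keeping the gripper orientation (an arbitrary placeholder here) fixed.
path = linear_path((0.0, 0.0, 120.0), (300.0, 0.0, 120.0), "upright")
```

An orientation-only constraint would keep the second element of each waypoint fixed but allow the positions to leave the straight segment, which is why the planner has many more candidate paths to search.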
