DOCS-1999: Move tflite_cpu to its own page (#2672)
sguequierre authored Mar 20, 2024
1 parent 51177fd commit 224b110
Showing 7 changed files with 187 additions and 137 deletions.
2 changes: 1 addition & 1 deletion docs/appendix/changelog.md
@@ -716,7 +716,7 @@ You will need to first register the machine learning model file with the [ML mod

{{% changelog date="2023-03-31" color="added" title="Machine learning for image classification models" %}}

You can now [train](/ml/train-model/) and [deploy](/ml/deploy/#create-an-ml-model-service) image classification models with the [data management service](/data/) and use your machine's image data directly within Viam.
You can now [train](/ml/train-model/) and [deploy](/ml/deploy/) image classification models with the [data management service](/data/) and use your machine's image data directly within Viam.
Additionally, you can [upload and use](/ml/upload-model/) existing machine learning models with your machines.
For more information on using data synced to the cloud to train machine learning models, read [Train a model](/ml/train-model/).

2 changes: 1 addition & 1 deletion docs/ml/_index.md
@@ -25,7 +25,7 @@ However, your machine can make use of machine learning with nearly any kind of d

Viam supports:

- [TensorFlow Lite](https://www.tensorflow.org/lite) ML models as long as your models adhere to the [model requirements](/ml/deploy/#tflite_cpu-limitations)
- [TensorFlow Lite](https://www.tensorflow.org/lite) ML models as long as your models adhere to the [model requirements](/ml/deploy/tflite_cpu/#model-requirements)
- TensorFlow
- PyTorch
- ONNX
166 changes: 35 additions & 131 deletions docs/ml/deploy.md → docs/ml/deploy/_index.md
@@ -15,24 +15,25 @@ images: ["/services/icons/ml.svg"]
---

The Machine Learning (ML) model service allows you to deploy machine learning models to your machine.
This can mean deploying:

- a model from [the registry](https://app.viam.com/registry)
- a model trained outside the Viam platform that you have [uploaded](/ml/upload-model/)
- a model that's already available on your machine

After deploying your model, you need to configure an additional service to use the deployed model.
For example, you can configure an [`mlmodel` vision service](/ml/vision/) and a [`transform` camera](/components/camera/transform/) to visualize the predictions your model makes.
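As a minimal sketch, such a vision service could be configured like this (the service names `fruit_vision` and `fruit_classifier` are illustrative, and the `mlmodel_name` attribute is assumed from the `mlmodel` vision service; see the [vision service](/ml/vision/) documentation for the authoritative fields):

```json {class="line-numbers linkable-line-numbers"}
{
  "name": "fruit_vision",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "fruit_classifier"
  }
}
```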

## Supported models

### Built-in models

You can use the following built-in model of the service:

<!-- prettier-ignore -->
| Model | Description |
| ----- | ----------- |
| [`"tflite_cpu"` model](#create-an-ml-model-service) | Runs a tensorflow lite model that you have [trained](/ml/train-model/) or [uploaded](/ml/upload-model/). |
| [`tflite_cpu`](./tflite_cpu/) | Runs a [trained](/ml/train-model/) or [uploaded](/ml/upload-model/) TensorFlow Lite model on your machine's CPU. |

## Used with

{{< cards >}}
{{< relatedcard link="/ml/vision/">}}
{{< relatedcard link="/components/board/">}}
{{< relatedcard link="/components/camera/">}}
{{< /cards >}}

After deploying your model, you need to configure an additional service to use the deployed model.
For example, you can configure an [`mlmodel` vision service](/ml/vision/) and a [`transform` camera](/components/camera/transform/) to visualize the predictions your model makes.

### Modular resources

@@ -46,110 +47,24 @@ Follow [these instructions](/registry/advanced/mlmodel-design/) to design your m
{{< /alert >}}

{{< alert title="Note" color="note" >}}
For some models of the ML model service, like the [Triton ML model service](https://github.com/viamrobotics/viam-mlmodelservice-triton/tree/main/) for Jetson boards, you can configure the service to use the available CPU or GPU.
For some models of the ML model service, like the [Triton ML model service](https://github.com/viamrobotics/viam-mlmodelservice-triton/tree/main/) for Jetson boards, you can configure the service to use either the available CPU or a dedicated GPU.
{{< /alert >}}

## Create an ML model service

You can use the ML model service to deploy:
You can use an ML model service to deploy:

- [a model from the registry](https://app.viam.com/registry)
- a model trained outside the Viam platform that you have [uploaded](/ml/upload-model/)
- a model from [the registry](https://app.viam.com/registry)
- a model trained outside the Viam platform that you have uploaded
- a model available on your machine

{{< tabs >}}
{{% tab name="Builder" %}}

Navigate to your machine's **Config** tab on the [Viam app](https://app.viam.com/robots).
Click the **Services** subtab and click **Create service** in the lower-left corner.
Select the `ML Model` type, then select the `TFLite CPU` model.
Enter a name for your service and click **Create**.

You can choose to configure your service with an existing model on the machine or deploy a model onto your machine:

{{< tabs >}}
{{% tab name="Deploy Model on Robot" %}}

1. To configure your service and deploy a model onto your machine, select **Deploy Model On Robot** for the **Deployment** field.

2. Click on **Models** to open a dropdown with all of the ML models available to you privately, as well as all of the ML models available in [the registry](https://app.viam.com), which are shared by users.
You can select from any of these models to deploy on your robot.

{{<imgproc src="/services/deploy-model-menu.png" resize="700x" alt="Models dropdown menu with models from the registry.">}}

{{% alert title="Tip" color="tip" %}}
To see more details about a model, open its page in [the registry](https://app.viam.com).
{{% /alert %}}

3. Optionally, select the **Number of threads**.

{{<imgproc src="/services/deploy-model.png" resize="700x" alt="Create a machine learning models service with a model to be deployed">}}

{{% /tab %}}
{{% tab name="Path to Existing Model On Robot" %}}

1. To configure your service with an existing model on the machine, select **Path to Existing Model On Robot** for the **Deployment** field.
2. Then specify the absolute **Model Path** and any **Optional Settings** such as the absolute **Label Path** and the **Number of threads**.

![Create a machine learning models service with an existing model](/services/available-models.png)

{{% /tab %}}
{{< /tabs >}}

{{% /tab %}}
{{% tab name="JSON Template" %}}

Add the `tflite_cpu` ML model object to the services array in your raw JSON configuration:

```json {class="line-numbers linkable-line-numbers"}
"services": [
{
"name": "<mlmodel_name>",
"type": "mlmodel",
"model": "tflite_cpu",
"attributes": {
"model_path": "${packages.ml_model.<model_name>}/<model-name>.tflite",
"label_path": "${packages.ml_model.<model_name>}/labels.txt",
"num_threads": <number>
}
},
... // Other services
]
```

{{% /tab %}}
{{% tab name="JSON Example" %}}

```json {class="line-numbers linkable-line-numbers"}
"services": [
{
"name": "fruit_classifier",
"type": "mlmodel",
"model": "tflite_cpu",
"attributes": {
"model_path": "${packages.ml_model.my_fruit_model}/my_fruit_model.tflite",
"label_path": "${packages.ml_model.my_fruit_model}/labels.txt",
"num_threads": 1
}
}
]
```

{{% /tab %}}
{{< /tabs >}}

The following parameters are available for a `"tflite_cpu"` model:

<!-- prettier-ignore -->
| Parameter | Inclusion | Description |
| --------- | --------- | ----------- |
| `model_path` | **Required** | The absolute path to the `.tflite` model file, as a `string`. |
| `label_path` | Optional | The absolute path to a `.txt` file that holds class labels for your TFLite model, as a `string`. This text file should contain an ordered listing of class labels. Without this file, classes will read as "1", "2", and so on. |
| `num_threads` | Optional | An integer that defines how many CPU threads to use to run inference. Default: `1`. |
Save the configuration.

## Used with

{{< cards >}}
{{< relatedcard link="/ml/vision/">}}
{{< relatedcard link="/components/board/">}}
{{< relatedcard link="/components/camera/">}}
{{< /cards >}}

### Models from registry
## Models from registry

You can search the machine learning models that are available to deploy on this service from the registry here:

@@ -166,37 +81,26 @@ You can search the machine learning models that are available to deploy on this
<div id="paginationML"></div>
</div>

### Versioning for deployed models
## Versioning for deployed models

If you upload or train a new version of a model, Viam automatically deploys the `latest` version of the model to the machine.
If you do not want Viam to automatically deploy the `latest` version of the model, you can change the `packages` configuration in the [Raw JSON machine configuration](/build/configure/#the-config-tab).
If you do not want Viam to automatically deploy the `latest` version of the model, you can edit the `"packages"` array in the [JSON configuration](/build/configure/#the-config-tab) of your machine.
This array is automatically created when you deploy the model and is not embedded in your service configuration.

You can get the version number for a specific model version by navigating to the [models page](https://app.viam.com/data/models), finding the model's row, clicking the right-side menu marked with **_..._**, and selecting **Copy package JSON**. For example: `2024-02-28T13-36-51`.
The model package config looks like this:

```json
{
  "package": "<model_id>/<model_name>",
  "version": "YYYY-MM-DDThh-mm-ss",
  "name": "<model_name>",
  "type": "ml_model"
}
"packages": [
  {
    "package": "<model_id>/<model_name>",
    "version": "YYYY-MM-DDThh-mm-ss",
    "name": "<model_name>",
    "type": "ml_model"
  }
]
```
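For example, to pin the deployment to the specific version copied above rather than `latest`, a `packages` entry for the hypothetical `my_fruit_model` used elsewhere on this page would look like this:

```json {class="line-numbers linkable-line-numbers"}
"packages": [
  {
    "package": "39c34811-9999-4fff-bd91-26a0e4e90644/my_fruit_model",
    "version": "2024-02-28T13-36-51",
    "name": "my_fruit_model",
    "type": "ml_model"
  }
]
```

With an explicit timestamp in `version`, the machine keeps that exact model until you edit the value yourself.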

### `tflite_cpu` limitations

We strongly recommend that you package your `tflite_cpu` model with metadata in [the standard form](https://github.com/tensorflow/tflite-support/blob/560bc055c2f11772f803916cb9ca23236a80bf9d/tensorflow_lite_support/metadata/metadata_schema.fbs).

In the absence of metadata, your `tflite_cpu` model must satisfy the following requirements:

- A single input tensor representing the image of type UInt8 (expecting values from 0 to 255) or Float 32 (values from -1 to 1).
- At least 3 output tensors (the rest won’t be read) containing the bounding boxes, class labels, and confidence scores (in that order).
- Bounding box output tensor must be ordered [x x y y], where x is an x-boundary (xmin or xmax) of the bounding box and the same is true for y.
Each value should be between 0 and 1, designating the percentage of the image at which the boundary can be found.

These requirements are satisfied by a few publicly available model architectures including EfficientDet, MobileNet, and SSD MobileNet V1.
You can use one of these architectures or build your own.

## API

The MLModel service supports the following methods:
146 changes: 146 additions & 0 deletions docs/ml/deploy/tflite_cpu.md
@@ -0,0 +1,146 @@
---
title: "Configure a tflite_cpu"
linkTitle: "tflite_cpu"
weight: 60
type: "docs"
tags: ["data management", "ml", "model training"]
description: "Configure a tflite_cpu ML model service to deploy TensorFlow Lite models to your machine."
icon: true
images: ["/services/icons/ml.svg"]
# SME: Khari
---

The `tflite_cpu` ML model service allows you to deploy [TensorFlow Lite](https://www.tensorflow.org/lite) ML models as long as your models adhere to the [model requirements](#model-requirements).
It is supported on any CPU, on Linux, Raspbian, macOS, and Android machines.

To work with the `tflite_cpu` ML model service, an ML model consists of a <file>.tflite</file> model file, which defines the model, and optionally a <file>.txt</file> labels file, which provides the text labels for your model.
With the `tflite_cpu` ML model service, you can deploy:

- [a model from the registry](https://app.viam.com/registry)
- a model trained outside the Viam platform that you have [uploaded](/ml/upload-model/)
- a model available on your machine

To configure a `tflite_cpu` ML model service:

{{< tabs >}}
{{% tab name="Builder" %}}

Navigate to your machine's **Config** tab on the [Viam app](https://app.viam.com/robots).
Click the **Services** subtab and click **Create service** in the lower-left corner.
Select the `ML Model` type, then select the `TFLite CPU` model.
Enter a name for your service and click **Create**.

You can choose to configure your service with an existing model on the machine or deploy a model onto your machine:

{{< tabs >}}
{{% tab name="Deploy Model on Robot" %}}

1. To configure your service and deploy a model onto your machine, select **Deploy Model On Robot** for the **Deployment** field.

2. Click on **Models** to open a dropdown with all of the ML models available to you privately, as well as all of the ML models available in [the registry](https://app.viam.com), which are shared by users.
You can select from any of these models to deploy on your robot.

{{<imgproc src="/services/deploy-model-menu.png" resize="700x" alt="Models dropdown menu with models from the registry.">}}

{{% alert title="Tip" color="tip" %}}
To see more details about a model, open its page in [the registry](https://app.viam.com).
{{% /alert %}}

3. Optionally, select the **Number of threads**.

{{<imgproc src="/services/deploy-model.png" resize="700x" alt="Create a machine learning models service with a model to be deployed">}}

{{% /tab %}}
{{% tab name="Path to Existing Model On Robot" %}}

1. To configure your service with an existing model on the machine, select **Path to Existing Model On Robot** for the **Deployment** field.
2. Then specify the absolute **Model Path** and any **Optional Settings** such as the absolute **Label Path** and the **Number of threads**.

![Create a machine learning models service with an existing model](/services/available-models.png)

{{% /tab %}}
{{< /tabs >}}

{{% /tab %}}
{{% tab name="JSON Template" %}}

Add the `tflite_cpu` ML model object to the services array in your raw JSON configuration:

```json {class="line-numbers linkable-line-numbers"}
"services": [
{
"name": "<mlmodel_name>",
"type": "mlmodel",
"model": "tflite_cpu",
"attributes": {
"model_path": "${packages.ml_model.<model_name>}/<model-name>.tflite",
"label_path": "${packages.ml_model.<model_name>}/labels.txt",
"num_threads": <number>
}
},
... // Other services
]
```

{{% /tab %}}
{{% tab name="JSON Example" %}}

```json {class="line-numbers linkable-line-numbers"}
{
"packages": [
{
"package": "39c34811-9999-4fff-bd91-26a0e4e90644/my_fruit_model",
"version": "YYYY-MM-DDThh-mm-ss",
"name": "my_fruit_model",
"type": "ml_model"
}
],
... // Other configuration: "components", "modules", etc.
"services": [
{
"name": "fruit_classifier",
"type": "mlmodel",
"model": "tflite_cpu",
"attributes": {
"model_path": "${packages.ml_model.my_fruit_model}/my_fruit_model.tflite",
"label_path": "${packages.ml_model.my_fruit_model}/labels.txt",
"num_threads": 1
}
}
]
}
```

The `"packages"` array shown above is automatically created when you deploy the model.
You do not need to edit the configuration yourself, except if you wish to change the model version, as described in [Versioning for deployed models](/ml/deploy/#versioning-for-deployed-models).

{{% /tab %}}
{{< /tabs >}}

The following parameters are available for a `"tflite_cpu"` model:

<!-- prettier-ignore -->
| Parameter | Inclusion | Description |
| --------- | --------- | ----------- |
| `model_path` | **Required** | The absolute path to the `.tflite` model file, as a `string`. |
| `label_path` | Optional | The absolute path to a `.txt` file that holds class labels for your TFLite model, as a `string`. This text file should contain an ordered listing of class labels. Without this file, classes will read as "1", "2", and so on. |
| `num_threads` | Optional | An integer that defines how many CPU threads to use to run inference. Default: `1`. |

Save the configuration.
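Once the service is running, you can query it from code. The following is a minimal sketch using the Viam Python SDK, assuming a connected machine, the `fruit_classifier` service from the example above, and an input tensor name and shape typical for image models; check your model's metadata for the real names and shapes:

```python {class="line-numbers linkable-line-numbers"}
import numpy as np

from viam.services.mlmodel import MLModelClient


async def classify(machine):
    # "fruit_classifier" matches the example service name above.
    mlmodel = MLModelClient.from_robot(machine, "fruit_classifier")

    # Dummy 224x224 RGB image; the tensor name "image" and the shape are
    # assumptions -- use the names and shapes from your model's metadata.
    image = np.zeros((1, 224, 224, 3), dtype=np.uint8)
    output_tensors = await mlmodel.infer({"image": image})

    for name, tensor in output_tensors.items():
        print(name, tensor.shape)
```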

## Model requirements

{{% alert title="Tip" color="tip" %}}
Models [trained](/ml/train-model/) in the Viam app meet these requirements by design.
{{% /alert %}}

We strongly recommend that you package your TensorFlow Lite model with metadata in [the standard form](https://github.com/tensorflow/tflite-support/blob/560bc055c2f11772f803916cb9ca23236a80bf9d/tensorflow_lite_support/metadata/metadata_schema.fbs).

In the absence of metadata, your `tflite_cpu` model must satisfy the following requirements:

- A single input tensor representing the image, of type UInt8 (expecting values from 0 to 255) or Float32 (values from -1 to 1).
- At least 3 output tensors (the rest won’t be read) containing the bounding boxes, class labels, and confidence scores (in that order).
- Bounding box output tensor must be ordered [x x y y], where x is an x-boundary (xmin or xmax) of the bounding box and the same is true for y.
Each value should be between 0 and 1, designating the percentage of the image at which the boundary can be found.

These requirements are satisfied by a few publicly available model architectures including EfficientDet, MobileNet, and SSD MobileNet V1.
You can use one of these architectures or build your own.
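If you are unsure whether a model you built meets these requirements, a quick sketch with the TensorFlow Lite interpreter can inspect its tensor signature (this assumes the `tensorflow` package is installed and a hypothetical local <file>model.tflite</file>):

```python {class="line-numbers linkable-line-numbers"}
import numpy as np
import tensorflow as tf

# Load the model and allocate its tensors so the details can be read.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inputs = interpreter.get_input_details()
outputs = interpreter.get_output_details()

# A single image input tensor of type UInt8 or Float32.
assert len(inputs) == 1, "expected a single input tensor"
assert inputs[0]["dtype"] in (np.uint8, np.float32), "input must be UInt8 or Float32"

# At least 3 output tensors: bounding boxes, class labels, confidence scores.
assert len(outputs) >= 3, "expected at least 3 output tensors"
for detail in outputs[:3]:
    print(detail["name"], detail["shape"], detail["dtype"])
```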
2 changes: 1 addition & 1 deletion docs/ml/vision/_index.md
@@ -104,7 +104,7 @@ Model | Description <a name="model-table"></a>
[`obstacles_depth`](./obstacles_depth/) | A segmenter for depth cameras that returns the perceived obstacles as a set of 3-dimensional bounding boxes, each with a Pose as a vector.
[`obstacles_distance`](./obstacles_distance/) | A segmenter that takes point clouds from a camera input and returns the average single closest point to the camera as a perceived obstacle.

### Modular Resources
### Modular resources

{{<modular-resources api="rdk:service:vision" type="vision">}}

2 changes: 1 addition & 1 deletion docs/registry/_index.md
@@ -70,7 +70,7 @@ You can search the available ML models from the Viam registry here:
</div>
</noscript>

To use an existing model from the registry, [deploy the ML model to your robot](/ml/deploy//#create-an-ml-model-service) and use a [Vision service](/ml/vision/) to make detections or classifications on-machine.
To use an existing model from the registry, [deploy the ML model to your robot](/ml/deploy/) and use a [Vision service](/ml/vision/) to make detections or classifications on-machine.

## Modular resources
