From 224b110adaefb8c49239c355da5e40d23eeb9273 Mon Sep 17 00:00:00 2001
From: Sierra Guequierre
Date: Wed, 20 Mar 2024 17:03:25 -0400
Subject: [PATCH] DOCS-1999: Move `tflite_cpu` to its own page (#2672)

---
 docs/appendix/changelog.md                 | 2 +-
 docs/ml/_index.md                          | 2 +-
 docs/ml/{deploy.md => deploy/_index.md}    | 166 +++++----------------
 docs/ml/deploy/tflite_cpu.md               | 146 ++++++++++++++++++
 docs/ml/vision/_index.md                   | 2 +-
 docs/registry/_index.md                    | 2 +-
 docs/tutorials/projects/filtered-camera.md | 4 +-
 7 files changed, 187 insertions(+), 137 deletions(-)
 rename docs/ml/{deploy.md => deploy/_index.md} (70%)
 create mode 100644 docs/ml/deploy/tflite_cpu.md

diff --git a/docs/appendix/changelog.md b/docs/appendix/changelog.md
index 4842431af8..c51fa3cf5e 100644
--- a/docs/appendix/changelog.md
+++ b/docs/appendix/changelog.md
@@ -716,7 +716,7 @@ You will need to first register the machine learning model file with the [ML mod
 
 {{% changelog date="2023-03-31" color="added" title="Machine learning for image classification models" %}}
 
-You can now [train](/ml/train-model/) and [deploy](/ml/deploy/#create-an-ml-model-service) image classification models with the [data management service](/data/) and use your machine's image data directly within Viam.
+You can now [train](/ml/train-model/) and [deploy](/ml/deploy/) image classification models with the [data management service](/data/) and use your machine's image data directly within Viam.
 Additionally, you can [upload and use](/ml/upload-model/) existing machine learning models with your machines.
 For more information on using data synced to the cloud to train machine learning models, read [Train a model](/ml/train-model/).
 
diff --git a/docs/ml/_index.md b/docs/ml/_index.md
index 76d7a9d019..96f8e732f8 100644
--- a/docs/ml/_index.md
+++ b/docs/ml/_index.md
@@ -25,7 +25,7 @@ However, your machine can make use of machine learning with nearly any kind of d
 Viam supports:
 
-- [TensorFlow Lite](https://www.tensorflow.org/lite) ML models as long as your models adhere to the [model requirements](/ml/deploy/#tflite_cpu-limitations)
+- [TensorFlow Lite](https://www.tensorflow.org/lite) ML models as long as your models adhere to the [model requirements](/ml/deploy/tflite_cpu/#model-requirements)
 - TensorFlow
 - PyTorch
 - ONNX
 
diff --git a/docs/ml/deploy.md b/docs/ml/deploy/_index.md
similarity index 70%
rename from docs/ml/deploy.md
rename to docs/ml/deploy/_index.md
index afe8a7a8d5..69efe2af1b 100644
--- a/docs/ml/deploy.md
+++ b/docs/ml/deploy/_index.md
@@ -15,24 +15,25 @@ images: ["/services/icons/ml.svg"]
 ---
 
 The Machine Learning (ML) model service allows you to deploy machine learning models to your machine.
+This can mean deploying:
+
+- a model from [the registry](https://app.viam.com/registry)
+- a model trained outside the Viam platform that you have [uploaded](/ml/upload-model/)
+- a model that's already available on your machine
+
+After deploying your model, you need to configure an additional service to use the deployed model.
+For example, you can configure an [`mlmodel` vision service](/ml/vision/) and a [`transform` camera](/components/camera/transform/) to visualize the predictions your model makes.
+
+## Supported models
+
+### Built-in models
 
 You can use the following built-in model of the service:
 
 | Model | Description |
 | ----- | ----------- |
-| [`"tflite_cpu"` model](#create-an-ml-model-service) | Runs a tensorflow lite model that you have [trained](/ml/train-model/) or [uploaded](/ml/upload-model/). <br> |
-
-## Used with
-
-{{< cards >}}
-{{< relatedcard link="/ml/vision/">}}
-{{< relatedcard link="/components/board/">}}
-{{< relatedcard link="/components/camera/">}}
-{{< /cards >}}
-
-After deploying your model, you need to configure an additional service to use the deployed model.
-For example, you can configure an [`mlmodel` vision service](/ml/vision/) and a [`transform` camera](/components/camera/transform/) to visualize the predictions your model makes.
+| [`tflite_cpu`](./tflite_cpu/) | Runs, on the CPU of your machine, a TensorFlow Lite model that you have [trained](/ml/train-model/) or [uploaded](/ml/upload-model/). |
 
 ### Modular resources
 
 {{}}
 
 {{< alert title="Add support for other models" color="tip" >}}
 ML models must be designed in particular shapes to work with the `mlmodel` [classification](/ml/vision/mlmodel/) or [detection](/ml/vision/mlmodel/) model of Viam's [vision service](/ml/vision/).
 Follow [these instructions](/registry/advanced/mlmodel-design/) to design your modular ML model service.
 {{< /alert >}}
 
 {{< alert title="Note" color="note" >}}
-For some models of the ML model service, like the [Triton ML model service](https://github.com/viamrobotics/viam-mlmodelservice-triton/tree/main/) for Jetson boards, you can configure the service to use the available CPU or GPU.
+For some models of the ML model service, like the [Triton ML model service](https://github.com/viamrobotics/viam-mlmodelservice-triton/tree/main/) for Jetson boards, you can configure the service to use either the available CPU or a dedicated GPU.
 {{< /alert >}}
 
-## Create an ML model service
-
-You can use the ML model service to deploy:
+You can use an ML model service to deploy:
 
-- [a model from the registry](https://app.viam.com/registry)
-- a model trained outside the Viam platform that you have [uploaded](/ml/upload-model/)
+- a model from [the registry](https://app.viam.com/registry)
+- a model trained outside the Viam platform that you have uploaded
 - a model available on your machine
 
-{{< tabs >}}
-{{% tab name="Builder" %}}
-
-Navigate to your machine's **Config** tab on the [Viam app](https://app.viam.com/robots).
-Click the **Services** subtab and click **Create service** in the lower-left corner.
-Select the `ML Model` type, then select the `TFLite CPU` model.
-Enter a name for your service and click **Create**.
-
-You can choose to configure your service with an existing model on the machine or deploy a model onto your machine:
-
-{{< tabs >}}
-{{% tab name="Deploy Model on Robot" %}}
-
-1. To configure your service and deploy a model onto your machine, select **Deploy Model On Robot** for the **Deployment** field.
-
-2. Click on **Models** to open a dropdown with all of the ML models available to you privately, as well as all of the ML models available in [the registry](https://app.viam.com), which are shared by users.
-   You can select from any of these models to deploy on your robot.
-
-{{}}
-
-{{% alert title="Tip" color="tip" %}}
-To see more details about a model, open its page in [the registry](https://app.viam.com).
-{{% /alert %}}
-
-3. Also, optionally select the **Number of threads**.
-
-{{}}
-
-{{% /tab %}}
-{{% tab name="Path to Existing Model On Robot" %}}
-
-1. To configure your service with an existing model on the machine, select **Path to Existing Model On Robot** for the **Deployment** field.
-2. Then specify the absolute **Model Path** and any **Optional Settings** such as the absolute **Label Path** and the **Number of threads**.
-
-![Create a machine learning models service with an existing model](/services/available-models.png)
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% /tab %}}
-{{% tab name="JSON Template" %}}
-
-Add the `tflite_cpu` ML model object to the services array in your raw JSON configuration:
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
-  {
-    "name": "<service_name>",
-    "type": "mlmodel",
-    "model": "tflite_cpu",
-    "attributes": {
-      "model_path": "${packages.ml_model.<model_name>}/<model_name>.tflite",
-      "label_path": "${packages.ml_model.<model_name>}/labels.txt",
-      "num_threads": <number>
-    }
-  },
-  ... // Other services
-]
-```
-
-{{% /tab %}}
-{{% tab name="JSON Example" %}}
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
-  {
-    "name": "fruit_classifier",
-    "type": "mlmodel",
-    "model": "tflite_cpu",
-    "attributes": {
-      "model_path": "${packages.ml_model.my_fruit_model}/my_fruit_model.tflite",
-      "label_path": "${packages.ml_model.my_fruit_model}/labels.txt",
-      "num_threads": 1
-    }
-  }
-]
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
-The following parameters are available for a `"tflite_cpu"` model:
-
-| Parameter | Inclusion | Description |
-| --------- | --------- | ----------- |
-| `model_path` | **Required** | The absolute path to the `.tflite model` file, as a `string`. |
-| `label_path` | Optional | The absolute path to a `.txt` file that holds class labels for your TFLite model, as a `string`. This text file should contain an ordered listing of class labels. Without this file, classes will read as "1", "2", and so on. |
-| `num_threads` | Optional | An integer that defines how many CPU threads to use to run inference. Default: `1`. |
-
-Save the configuration.
-
-### Models from registry
+## Used with
+
+{{< cards >}}
+{{< relatedcard link="/ml/vision/">}}
+{{< relatedcard link="/components/board/">}}
+{{< relatedcard link="/components/camera/">}}
+{{< /cards >}}
+
+## Models from registry
 
 You can search the machine learning models that are available to deploy on this service from the registry here:
 
 {{<mlmodels>}}
 
 <br>
 
-### Versioning for deployed models
+## Versioning for deployed models
 
 If you upload or train a new version of a model, Viam automatically deploys the `latest` version of the model to the machine.
-If you do not want Viam to automatically deploy the `latest` version of the model, you can change the `packages` configuration in the [Raw JSON machine configuration](/build/configure/#the-config-tab).
+If you do not want Viam to automatically deploy the `latest` version of the model, you can edit the `"packages"` array in the [JSON configuration](/build/configure/#the-config-tab) of your machine.
+This array is automatically created when you deploy the model and is not embedded in your service configuration.
 
 You can get the version number for a specific model version by navigating to the [models page](https://app.viam.com/data/models), finding the model's row, clicking on the right-side menu marked with **_..._**, and selecting **Copy package JSON**. For example: `2024-02-28T13-36-51`.
 
 The model package config looks like this:
 
 ```json
 "packages": [
   {
     "package": "<model_id>/<model_name>",
     "version": "YYYY-MM-DDThh-mm-ss",
     "name": "<model_name>",
     "type": "ml_model"
   }
 ]
 ```
 
-### `tflite_cpu` limitations
-
-We strongly recommend that you package your `tflite_cpu` model with metadata in [the standard form](https://github.com/tensorflow/tflite-support/blob/560bc055c2f11772f803916cb9ca23236a80bf9d/tensorflow_lite_support/metadata/metadata_schema.fbs).
-
-In the absence of metadata, your `tflite_cpu` model must satisfy the following requirements:
-
-- A single input tensor representing the image of type UInt8 (expecting values from 0 to 255) or Float 32 (values from -1 to 1).
-- At least 3 output tensors (the rest won't be read) containing the bounding boxes, class labels, and confidence scores (in that order).
-- Bounding box output tensor must be ordered [x x y y], where x is an x-boundary (xmin or xmax) of the bounding box and the same is true for y.
-  Each value should be between 0 and 1, designating the percentage of the image at which the boundary can be found.
-
-These requirements are satisfied by a few publicly available model architectures including EfficientDet, MobileNet, and SSD MobileNet V1.
-You can use one of these architectures or build your own.
-
 ## API
 
 The MLModel service supports the following methods:
 
diff --git a/docs/ml/deploy/tflite_cpu.md b/docs/ml/deploy/tflite_cpu.md
new file mode 100644
index 0000000000..feeb7289db
--- /dev/null
+++ b/docs/ml/deploy/tflite_cpu.md
@@ -0,0 +1,146 @@
+---
+title: "Configure a tflite_cpu"
+linkTitle: "tflite_cpu"
+weight: 60
+type: "docs"
+tags: ["data management", "ml", "model training"]
+description: "Configure a tflite_cpu ML model service to deploy TensorFlow Lite models to your machine."
+icon: true
+images: ["/services/icons/ml.svg"]
+# SME: Khari
+---
+
+The `tflite_cpu` ML model service allows you to deploy [TensorFlow Lite](https://www.tensorflow.org/lite) ML models as long as your models adhere to the [model requirements](#model-requirements).
+It is supported on any CPU, on Linux, Raspbian, macOS, and Android machines.
+
+To work with the `tflite_cpu` ML model service, an ML model consists of a `.tflite` model file, which defines the model, and optionally a `.txt` labels file, which provides the text labels for your model.
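+For illustration, a labels file is a plain text file with one class label per line, listed in the order in which the model outputs its classes.
+A minimal sketch (these label names are hypothetical):
+
+```txt
+apple
+banana
+orange
+```
+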
+With the `tflite_cpu` ML model service, you can deploy:
+
+- [a model from the registry](https://app.viam.com/registry)
+- a model trained outside the Viam platform that you have [uploaded](/ml/upload-model/)
+- a model available on your machine
+
+To configure a `tflite_cpu` ML model service:
+
+{{< tabs >}}
+{{% tab name="Builder" %}}
+
+Navigate to your machine's **Config** tab on the [Viam app](https://app.viam.com/robots).
+Click the **Services** subtab and click **Create service** in the lower-left corner.
+Select the `ML Model` type, then select the `TFLite CPU` model.
+Enter a name for your service and click **Create**.
+
+You can choose to configure your service with an existing model on the machine or deploy a model onto your machine:
+
+{{< tabs >}}
+{{% tab name="Deploy Model on Robot" %}}
+
+1. To configure your service and deploy a model onto your machine, select **Deploy Model On Robot** for the **Deployment** field.
+
+2. Click on **Models** to open a dropdown with all of the ML models available to you privately, as well as all of the ML models available in [the registry](https://app.viam.com), which are shared by users.
+   You can select from any of these models to deploy on your robot.
+
+{{}}
+
+{{% alert title="Tip" color="tip" %}}
+To see more details about a model, open its page in [the registry](https://app.viam.com).
+{{% /alert %}}
+
+3. Optionally, select the **Number of threads**.
+
+{{}}
+
+{{% /tab %}}
+{{% tab name="Path to Existing Model On Robot" %}}
+
+1. To configure your service with an existing model on the machine, select **Path to Existing Model On Robot** for the **Deployment** field.
+2. Then specify the absolute **Model Path** and any **Optional Settings** such as the absolute **Label Path** and the **Number of threads**.
+
+![Create a machine learning models service with an existing model](/services/available-models.png)
+
+{{% /tab %}}
+{{< /tabs >}}
+
+{{% /tab %}}
+{{% tab name="JSON Template" %}}
+
+Add the `tflite_cpu` ML model object to the services array in your raw JSON configuration:
+
+```json {class="line-numbers linkable-line-numbers"}
+"services": [
+  {
+    "name": "<service_name>",
+    "type": "mlmodel",
+    "model": "tflite_cpu",
+    "attributes": {
+      "model_path": "${packages.ml_model.<model_name>}/<model_name>.tflite",
+      "label_path": "${packages.ml_model.<model_name>}/labels.txt",
+      "num_threads": <number>
+    }
+  },
+  ... // Other services
+]
+```
+
+{{% /tab %}}
+{{% tab name="JSON Example" %}}
+
+```json {class="line-numbers linkable-line-numbers"}
+{
+"packages": [
+  {
+    "package": "39c34811-9999-4fff-bd91-26a0e4e90644/my_fruit_model",
+    "version": "YYYY-MM-DDThh-mm-ss",
+    "name": "my_fruit_model",
+    "type": "ml_model"
+  }
+], ... // < Insert "components", "modules" etc. >
+"services": [
+  {
+    "name": "fruit_classifier",
+    "type": "mlmodel",
+    "model": "tflite_cpu",
+    "attributes": {
+      "model_path": "${packages.ml_model.my_fruit_model}/my_fruit_model.tflite",
+      "label_path": "${packages.ml_model.my_fruit_model}/labels.txt",
+      "num_threads": 1
+    }
+  }
+]
+}
+```
+
+The `"packages"` array shown above is automatically created when you deploy the model.
+You do not need to edit the configuration yourself, except if you wish to change the [versioning for deployed models](/ml/deploy/#versioning-for-deployed-models).
+
+{{% /tab %}}
+{{< /tabs >}}
+
+The following parameters are available for a `"tflite_cpu"` model:
+
+| Parameter | Inclusion | Description |
+| --------- | --------- | ----------- |
+| `model_path` | **Required** | The absolute path to the `.tflite` model file, as a `string`. |
+| `label_path` | Optional | The absolute path to a `.txt` file that holds class labels for your TFLite model, as a `string`. This text file should contain an ordered listing of class labels. Without this file, classes will read as "1", "2", and so on. |
+| `num_threads` | Optional | An integer that defines how many CPU threads to use to run inference. Default: `1`. |
+
+Save the configuration.
+
+## Model requirements
+
+{{% alert title="Tip" color="tip" %}}
+Models [trained](/ml/train-model/) in the Viam app meet these requirements by design.
+{{% /alert %}}
+
+We strongly recommend that you package your TensorFlow Lite model with metadata in [the standard form](https://github.com/tensorflow/tflite-support/blob/560bc055c2f11772f803916cb9ca23236a80bf9d/tensorflow_lite_support/metadata/metadata_schema.fbs).
+
+In the absence of metadata, your `tflite_cpu` model must satisfy the following requirements:
+
+- A single input tensor representing the image, of type UInt8 (expecting values from 0 to 255) or Float 32 (values from -1 to 1).
+- At least 3 output tensors (the rest won't be read) containing the bounding boxes, class labels, and confidence scores (in that order).
+- A bounding box output tensor ordered [x x y y], where x is an x-boundary (xmin or xmax) of the bounding box and the same is true for y.
+  Each value should be between 0 and 1, designating the fraction of the image width or height at which the boundary can be found.
+
+These requirements are satisfied by a few publicly available model architectures, including EfficientDet, MobileNet, and SSD MobileNet V1.
+You can use one of these architectures or build your own.

diff --git a/docs/ml/vision/_index.md b/docs/ml/vision/_index.md
index a496939f4c..29bc7222b0 100644
--- a/docs/ml/vision/_index.md
+++ b/docs/ml/vision/_index.md
@@ -104,7 +104,7 @@ Model | Description
 [`obstacles_depth`](./obstacles_depth/) | A segmenter for depth cameras that returns the perceived obstacles as a set of 3-dimensional bounding boxes, each with a Pose as a vector.
 [`obstacles_distance`](./obstacles_distance/) | A segmenter that takes point clouds from a camera input and returns the average single closest point to the camera as a perceived obstacle.
 
-### Modular Resources
+### Modular resources
 
 {{}}
 
diff --git a/docs/registry/_index.md b/docs/registry/_index.md
index da94de5f31..c622a33c51 100644
--- a/docs/registry/_index.md
+++ b/docs/registry/_index.md
@@ -70,7 +70,7 @@ You can search the available ML models from the Viam registry here:
 
-To use an existing model from the registry, [deploy the ML model to your robot](/ml/deploy//#create-an-ml-model-service) and use a [Vision service](/ml/vision/) to make detections or classifications on-machine.
+To use an existing model from the registry, [deploy the ML model to your robot](/ml/deploy/) and use a [Vision service](/ml/vision/) to make detections or classifications on-machine.
 
 ## Modular resources
 
diff --git a/docs/tutorials/projects/filtered-camera.md b/docs/tutorials/projects/filtered-camera.md
index eaaa13a5bf..94e68a9b75 100644
--- a/docs/tutorials/projects/filtered-camera.md
+++ b/docs/tutorials/projects/filtered-camera.md
@@ -219,7 +219,7 @@ Your uploaded model is immediately available for use after upload.
 
 {{< imgproc src="/tutorials/filtered-camera-module/upload-model-complete.png" alt="The models subtab under the data tab in the Viam app, showing a model that has been uploaded and is ready for use" resize="1200x" >}}
 
-If you are designing your own model, see [`tflite_cpu` limitations](/ml/deploy/#tflite_cpu-limitations) for guidance on structuring your own model.
+If you are designing your own TensorFlow Lite model, see [model requirements](/ml/deploy/tflite_cpu/#model-requirements) for guidance on structuring it.
 
 For more information, see [Upload an existing model](/ml/upload-model/).
 
@@ -244,7 +244,7 @@ Add the ML model service to your machine to be able to deploy and update ML mode
 
 {{< imgproc src="/tutorials/filtered-camera-module/configure-mlmodel-service.png" alt="The ML model service configuration pane with deploy model on robot selected, and the my-viam-figure-model added" resize="600x" >}}
 
-For more information, see [Create an ML model service](/ml/deploy/#create-an-ml-model-service).
+For more information, see [the ML model service](/ml/deploy/).
 
 ### Add the vision service
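+
+The vision service will use the model served by the ML model service you just configured.
+As a rough sketch (not this tutorial's exact configuration), an `mlmodel` vision service that reads from an ML model service might look like this in raw JSON; the names here are placeholders:
+
+```json {class="line-numbers linkable-line-numbers"}
+"services": [
+  {
+    "name": "my-detector",
+    "type": "vision",
+    "model": "mlmodel",
+    "attributes": {
+      "mlmodel_name": "my-mlmodel-service"
+    }
+  }
+]
+```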