diff --git a/.github/workflows/lexi-lint.yml b/.github/workflows/lexi-lint.yml index d68a8ae296..058678933a 100644 --- a/.github/workflows/lexi-lint.yml +++ b/.github/workflows/lexi-lint.yml @@ -17,7 +17,7 @@ jobs: with: ref: ${{github.event.pull_request.head.sha}} fetch-depth: 0 - - uses: npentrel/lexi@v1.1 + - uses: Rebilly/lexi@v2 with: github-token: ${{ secrets.PR_TOKEN }} glob: '**/*.md' diff --git a/assets/services/ml-models-service.png b/assets/services/ml-models-service.png deleted file mode 100644 index 70aa6028ea..0000000000 Binary files a/assets/services/ml-models-service.png and /dev/null differ diff --git a/assets/services/vision/color_detector.png b/assets/services/vision/color_detector.png deleted file mode 100644 index ecf8d7df74..0000000000 Binary files a/assets/services/vision/color_detector.png and /dev/null differ diff --git a/assets/services/vision/detector_3d_segmenter.png b/assets/services/vision/detector_3d_segmenter.png deleted file mode 100644 index 30083de135..0000000000 Binary files a/assets/services/vision/detector_3d_segmenter.png and /dev/null differ diff --git a/assets/services/vision/mlmodel.png b/assets/services/vision/mlmodel.png deleted file mode 100644 index 5da9a8f0d4..0000000000 Binary files a/assets/services/vision/mlmodel.png and /dev/null differ diff --git a/assets/services/vision/obstacles_pointcloud.png b/assets/services/vision/obstacles_pointcloud.png deleted file mode 100644 index 8b609232ac..0000000000 Binary files a/assets/services/vision/obstacles_pointcloud.png and /dev/null differ diff --git a/assets/tutorials/confetti-bot/app-board-create.png b/assets/tutorials/confetti-bot/app-board-create.png deleted file mode 100644 index 93a18dfa10..0000000000 Binary files a/assets/tutorials/confetti-bot/app-board-create.png and /dev/null differ diff --git a/assets/tutorials/tipsy/app-board-create.png b/assets/tutorials/tipsy/app-board-create.png deleted file mode 100644 index aa793a2f2b..0000000000 Binary files a/assets/tutorials/tipsy/app-board-create.png and /dev/null differ diff --git a/assets/tutorials/tipsy/app-motor-create.png b/assets/tutorials/tipsy/app-motor-create.png deleted file mode 100644 index f1748535a9..0000000000 Binary files a/assets/tutorials/tipsy/app-motor-create.png and /dev/null differ diff --git a/assets/tutorials/tipsy/app-service-ml-create.png b/assets/tutorials/tipsy/app-service-ml-create.png deleted file mode 100644 index 0d68ce60fa..0000000000 Binary files a/assets/tutorials/tipsy/app-service-ml-create.png and /dev/null differ diff --git a/assets/tutorials/tipsy/app-service-vision-create.png b/assets/tutorials/tipsy/app-service-vision-create.png deleted file mode 100644 index 3c70df5b3e..0000000000 Binary files a/assets/tutorials/tipsy/app-service-vision-create.png and /dev/null differ diff --git a/docs/extend/modular-resources/_index.md b/docs/extend/modular-resources/_index.md index f26f666ea3..61b4e367df 100644 --- a/docs/extend/modular-resources/_index.md +++ b/docs/extend/modular-resources/_index.md @@ -35,7 +35,7 @@ Once the module has been uploaded to the Registry, you can [deploy the module](/ ### Uploading to Viam Registry After you finish programming your module, you can [upload your module to the Viam registry](/extend/modular-resources/upload/) to make it available for deployment to robots. -As part of the upload process, you decide whether your module is *public* (visible to all users) or *private* (visible only to other members of your [organization](/manage/fleet/organizations/)). 
+As part of the upload process, you decide whether your module is *private* (visible only to other members of your [organization](/manage/fleet/organizations/)), or *public* (visible to all Viam users). You can see details about each module in the [Viam registry](https://app.viam.com/registry) on its module details page. See the [Odrive module](https://app.viam.com/module/viam/odrive) for an example. @@ -47,7 +47,8 @@ When you make changes to your module, you can [uploaded the newer version](/exte Once you [upload a module to the Viam registry](/extend/modular-resources/upload/), you can [deploy the module](/extend/modular-resources/configure/) to any robot in your organization from [the Viam app](https://app.viam.com/). Navigate to your robot's **Configuration** tab, click the **+ Create component** button, then start typing the name of the module you would like to deploy. -If you uploaded your module and set its visibility to private, the module will only appear for users within your [organization](/manage/fleet/organizations/). + +By default, a newly-created module is *private*, meaning that the module will only appear for users within your [organization](/manage/fleet/organizations/), but you can later [update your module](/extend/modular-resources/upload/#update-an-existing-module) to set it to be *public*, which makes your module available to all Viam users. When you deploy a module to your robot, you can [choose how to update that module](/extend/modular-resources/configure/#configure-version-update-management-for-a-registry-module) when new versions become available. diff --git a/docs/extend/modular-resources/upload/_index.md b/docs/extend/modular-resources/upload/_index.md index 363f1719b0..979b53e179 100644 --- a/docs/extend/modular-resources/upload/_index.md +++ b/docs/extend/modular-resources/upload/_index.md @@ -55,7 +55,7 @@ To upload your custom module to the [Viam registry](https://app.viam.com/registr visibility string Required - Whether the module is visible to all Viam users (public), or accessible only to members of your organization (private). You can change this setting later using the viam module update command.

Default: private + Whether the module is accessible only to members of your organization (private), or visible to all Viam users (public). You can change this setting later using the viam module update command.

Default: private url @@ -102,7 +102,7 @@ To upload your custom module to the [Viam registry](https://app.viam.com/registr ``` {{% alert title="Important" color="note" %}} - If you are publishing a public module (`visibility: "public"`), the [namespace of your model](/extend/modular-resources/key-concepts/#naming-your-model) must match the [namespace of your organization](/manage/fleet/organizations/#create-a-namespace-for-your-organization). + If you are publishing a public module (`"visibility": "public"`), the [namespace of your model](/extend/modular-resources/key-concepts/#naming-your-model) must match the [namespace of your organization](/manage/fleet/organizations/#create-a-namespace-for-your-organization). In the example above, the model namespace is set to `acme` to match the owning organization's namespace. If the two namespaces do not match, the command will return an error. {{% /alert %}} diff --git a/docs/manage/CLI.md b/docs/manage/CLI.md index 824b0c1862..c9f2bfe6f5 100644 --- a/docs/manage/CLI.md +++ b/docs/manage/CLI.md @@ -356,7 +356,7 @@ All of the `module` commands accept either the `--org-id` or `--public-namespace * Use the `--public-namespace` argument to supply the [namespace](/manage/fleet/organizations/#create-a-namespace-for-your-organization) of your organization, suitable for uploading your module to the Viam registry and sharing with other users. * Use the `--org-id` to provide your organization ID instead, suitable for sharing your module privately within your organization. -You may use either argument for the `viam module create` command, but must use `--public-namespace` for the `update` and `upload` commands when uploading as a public module (`visibility: "public"`) to the Viam registry. +You may use either argument for the `viam module create` command, but must use `--public-namespace` for the `update` and `upload` commands when uploading as a public module (`"visibility": "public"`) to the Viam registry. ##### Using the `--platform` argument @@ -420,7 +420,7 @@ The `meta.json` file includes the following configuration options: visibility string Required - Whether the module is visible to all Viam users (public), or accessible only to members of your organization (private). You can change this setting later using the viam module update command.

Default: private + Whether the module is accessible only to members of your organization (private), or visible to all Viam users (public). You can change this setting later using the viam module update command.

Default: private url @@ -467,7 +467,7 @@ For example, the following represents the configuration of an example `my-module ``` {{% alert title="Important" color="note" %}} -If you are publishing a public module (`visibility: "public"`), the [namespace of your model](/extend/modular-resources/key-concepts/#naming-your-model) must match the [namespace of your organization](/manage/fleet/organizations/#create-a-namespace-for-your-organization). +If you are publishing a public module (`"visibility": "public"`), the [namespace of your model](/extend/modular-resources/key-concepts/#naming-your-model) must match the [namespace of your organization](/manage/fleet/organizations/#create-a-namespace-for-your-organization). In the example above, the model namespace is set to `acme` to match the owning organization's namespace. If the two namespaces do not match, the command will return an error. {{% /alert %}} diff --git a/docs/services/base-rc/_index.md b/docs/services/base-rc/_index.md index 9eda6df84d..bf2c671d00 100644 --- a/docs/services/base-rc/_index.md +++ b/docs/services/base-rc/_index.md @@ -1,6 +1,6 @@ --- title: "Base Remote Control Service" -linkTitle: "Remote Control" +linkTitle: "Base Remote Control" weight: 60 type: "docs" description: "The base remote control service allows you to remotely control a base with an input controller like a gamepad." @@ -33,12 +33,11 @@ You must configure a [base](/components/base/) with a [movement sensor](/compone {{% tab name="Config Builder" %}} Navigate to the **Config** tab of your robot's page in [the Viam app](https://app.viam.com). -Click on the **Services** subtab and navigate to the **Create service** menu. -Select the type `Navigation` and enter a name for your service. +Click the **Services** subtab, then click **Create service** in the lower-left corner. +Select the type `Base Remote Control`. +Enter a name for your service, then click **Create**. -Click **Create service**: - -![An example configuration for a Base Remote Control service in the Viam app Config Builder.](/services/base-rc/base-rc-ui-config.png) +![An example configuration for a base remote control service in the Viam app Config Builder.](/services/base-rc/base-rc-ui-config.png) {{% /tab %}} {{% tab name="JSON Template" %}} @@ -72,8 +71,8 @@ Click **Create service**: {{% /tab %}} {{< /tabs >}} -Next, add the JSON `"attributes"` you want the service to have. -The following attributes are available for Base Remote Control services: +Edit and fill in the attributes as applicable. +The following attributes are available for base remote control services: | Name | Type | Inclusion | Description | | ---- | ---- | --------- | ----------- | diff --git a/docs/services/data/configure-data-capture.md b/docs/services/data/configure-data-capture.md index 86bb60f9b7..85e5015ecf 100644 --- a/docs/services/data/configure-data-capture.md +++ b/docs/services/data/configure-data-capture.md @@ -12,10 +12,10 @@ tags: ["data management", "cloud", "sync"] To capture data from one or more robots, you must first add the [data management service](../): -1. On your robot's **Config** page, navigate to the **Services** tab. -2. At the bottom of the page you can create a service. +1. From your robot's **Config** tab, navigate to the **Services** subtab. +2. Click **Create service** in the lower-left corner of the page. Choose `Data Management` as the type and specify a name for your data management service, for example `data-manager`. -3. Then click `Create Service`. +3. Click **Create**. 4. 
On the panel that appears, you can manage the capturing and syncing functions individually and specify the interval and directory. If the capture frequency or the directory is not specified, the data management service captures data at the default frequency every 0.1 minutes (after every 6 second interval) in the default `~/.viam/capture` directory. diff --git a/docs/services/ml/_index.md b/docs/services/ml/_index.md index 9eba643eb8..c2fadafe2a 100644 --- a/docs/services/ml/_index.md +++ b/docs/services/ml/_index.md @@ -18,18 +18,10 @@ The ML Model service allows you to deploy machine learning models to your robots {{< tabs >}} {{% tab name="Builder" %}} -Navigate to the [robot page on the Viam app](https://app.viam.com/robots). -Click on the robot you wish to add the ML model service to. -Select the **Config** tab, and click on **Services**. - -Scroll to the **Create Service** section. - -1. Select `mlmodel` as the **Type**. -2. Enter a name as the **Name**. -3. Select `tflite_cpu` as the **Model**. -4. Click **Create Service**. - -![Create a machine learning models service](/services/ml-models-service.png) +Navigate to your robot's **Config** tab on the [Viam app](https://app.viam.com/robots). +Click the **Services** subtab and click **Create service** in the lower-left corner. +Select the `ML Model` type, then select the `TFLite CPU` model. +Enter a name for your service and click **Create**. You can choose to configure your service with an existing model on the robot or deploy a model onto your robot: diff --git a/docs/services/motion/_index.md b/docs/services/motion/_index.md index bfc92db19b..ed58a2f49e 100644 --- a/docs/services/motion/_index.md +++ b/docs/services/motion/_index.md @@ -437,6 +437,13 @@ Translation in obstacles is not supported by the [navigation service](/services/ - `movement_sensor_name` [(ResourceName)](https://python.viam.dev/autoapi/viam/gen/common/v1/common_pb2/index.html#viam.gen.common.v1.common_pb2.ResourceName): The `ResourceName` of the [movement sensor](/components/movement-sensor/) that you want to use to check the robot's location. - `obstacles` [(Optional[Sequence[GeoObstacle]])](https://python.viam.dev/autoapi/viam/gen/common/v1/common_pb2/index.html#viam.gen.common.v1.common_pb2.GeoObstacle): Obstacles to consider when planning the motion of the component, with each represented as a `GeoObstacle`. - `heading` [(Optional[float])](https://docs.python.org/library/typing.html#typing.Optional): The compass heading, in degrees, that the robot's movement sensor should report at the `destination` point. +- `configuration` [(Optional[MotionConfiguration])](https://python.viam.dev/autoapi/viam/proto/service/motion/index.html#viam.proto.service.motion.MotionConfiguration): The configuration you want to set across this robot for this motion service. This parameter and each of its fields are optional. + - `vision_services` [([ResourceName])](https://python.viam.dev/autoapi/viam/gen/common/v1/common_pb2/index.html#viam.gen.common.v1.common_pb2.ResourceName): The name you configured for each vision service you want to use while moving this resource. + - `position_polling_frequency_hz` [(float)](https://docs.python.org/3/library/functions.html#float): The frequency in hz to poll the position of the robot. + - `obstacle_polling_frequency_hz` [(float)](https://docs.python.org/3/library/functions.html#float): The frequency in hz to poll the vision service for new obstacles. 
+ - `plan_deviation_m` [(float)](https://docs.python.org/3/library/functions.html#float): The distance in meters that the machine can deviate from the motion plan. + - `linear_m_per_sec` [(float)](https://docs.python.org/3/library/functions.html#float): Linear velocity this machine should target when moving. + - `angular_degs_per_sec` [(float)](https://docs.python.org/3/library/functions.html#float): Angular velocity this machine should target when turning. - `extra` [(Optional\[Dict\[str, Any\]\])](https://docs.python.org/library/typing.html#typing.Optional): Extra options to pass to the underlying RPC call. - `timeout` [(Optional\[float\])](https://docs.python.org/library/typing.html#typing.Optional): An option to set how long to wait (in seconds) before calling a time-out and closing the underlying RPC call. @@ -470,6 +477,13 @@ success = await motion.move_on_globe(component_name=my_base_resource_name, desti - `heading` [(float64)](https://pkg.go.dev/builtin#float64): The compass heading, in degrees, that the robot's movement sensor should report at the `destination` point. - `movementSensorName` [(resource.Name)](https://pkg.go.dev/go.viam.com/rdk/resource#Name): The `resource.Name` of the [movement sensor](/components/movement-sensor/) that you want to use to check the robot's location. - `obstacles` [([]*spatialmath.GeoObstacle)](https://pkg.go.dev/go.viam.com/rdk/spatialmath#GeoObstacle): Obstacles to consider when planning the motion of the component, with each represented as a `GeoObstacle`. +- `motionConfig` [(*MotionConfiguration)](https://pkg.go.dev/go.viam.com/rdk/services/motion#MotionConfiguration): The configuration you want to set across this robot for this motion service. This parameter and each of its fields are optional. + - `VisionSvc` [([]resource.Name)](https://pkg.go.dev/go.viam.com/rdk/resource#Name): The name you configured for each vision service you want to use while moving this resource. + - `PositionPollingFreqHz` [(float64)](https://pkg.go.dev/builtin#float64): The frequency in hz to poll the position of the robot. + - `ObstaclePollingFreqHz` [(float64)](https://pkg.go.dev/builtin#float64): The frequency in hz to poll the vision service for new obstacles. + - `PlanDeviationM` [(float64)](https://pkg.go.dev/builtin#float64): The distance in meters that the machine can deviate from the motion plan. + - `LinearMPerSec` [(float64)](https://pkg.go.dev/builtin#float64): Linear velocity this machine should target when moving. + - `AngularDegsPerSec` [(float64)](https://pkg.go.dev/builtin#float64): Angular velocity this machine should target when turning. - `extra` [(map\[string\]interface{})](https://go.dev/blog/maps): Extra options to pass to the underlying RPC call. **Returns:** diff --git a/docs/services/navigation/_index.md b/docs/services/navigation/_index.md index 8b4a9dc427..7449bb1011 100644 --- a/docs/services/navigation/_index.md +++ b/docs/services/navigation/_index.md @@ -33,10 +33,9 @@ Make sure the [movement sensor](/components/movement-sensor/) you use supports [ {{% tab name="Config Builder" %}} Navigate to the **Config** tab of your robot's page in [the Viam app](https://app.viam.com). -Click on the **Services** subtab and navigate to the **Create service** menu. -Select the type `navigation` and enter a name for your service. - -Click **Create service**: +Click the **Services** subtab, then click **Create service** in the lower-left corner. +Select the type `Navigation`. +Enter a name for your service, then click **Create**. 
![An example configuration for a Navigation service in the Viam app Config Builder.](/services/navigation/navigation-ui-config.png)

 {{% /tab %}}
 {{% tab name="JSON Template" %}}
@@ -106,7 +105,7 @@ Click **Create service**:
 {{% /tab %}}
 {{< /tabs >}}

-Next, add the JSON `"attributes"` you want the service to have.
+Edit and fill in the attributes as applicable.
 The following attributes are available for `Navigation` services:

 | Name | Type | Inclusion | Description |
diff --git a/docs/services/vision/classification.md b/docs/services/vision/classification.md
index 598890450a..4a5c259f40 100644
--- a/docs/services/vision/classification.md
+++ b/docs/services/vision/classification.md
@@ -27,23 +27,15 @@ The types of classifiers supported are:

 ## Configure an `mlmodel` classifier

-To create a `mlmodel` classifier, you need an [ML model service with a suitable model](../../ml/).
-
-Navigate to the [robot page on the Viam app](https://app.viam.com/robots).
-Click on the robot you wish to add the vision service to.
-Select the **Config** tab, and click on **Services**.
-
-Scroll to the **Create Service** section.
+To create an `mlmodel` classifier, you need an [ML model service with a suitable model](../../ml/).

 {{< tabs >}}
 {{% tab name="Builder" %}}

-1. Select `vision` as the **Type**.
-2. Enter a name as the **Name**.
-3. Select **ML Model** as the **Model**.
-4. Click **Create Service**.
-
-![Create vision service for mlmodel](/services/vision/mlmodel.png)
+Navigate to your robot's **Config** tab on the [Viam app](https://app.viam.com/robots).
+Click the **Services** subtab and click **Create service** in the lower-left corner.
+Select the `Vision` type, then select the `ML Model` model.
+Enter a name for your service and click **Create**.

 In your vision service's panel, fill in the **Attributes** field.

diff --git a/docs/services/vision/detection.md b/docs/services/vision/detection.md
index 02335d6e5a..1340ae369f 100644
--- a/docs/services/vision/detection.md
+++ b/docs/services/vision/detection.md
@@ -51,19 +51,10 @@ If the color is not reliably detected, increase the `hue_tolerance_pct`.
 {{< tabs >}}
 {{% tab name="Builder" %}}

-Navigate to the [robot page on the Viam app](https://app.viam.com/robots).
-Click on the robot you wish to add the vision service to.
-Select the **Config** tab, and click on **Services**.
-
-Scroll to the **Create Service** section.
-To create a [vision service](/services/vision/):
-
-1. Select `vision` as the **Type**.
-2. Enter a name as the **Name**.
-3. Select **Color Detector** as the **Model**.
-4. Click **Create Service**.
-
-![Create vision service for color detector](/services/vision/color_detector.png)
+Navigate to your robot's **Config** tab on the [Viam app](https://app.viam.com/robots).
+Click the **Services** subtab and click **Create service** in the lower-left corner.
+Select the `Vision` type, then select the `Color Detector` model.
+Enter a name for your service and click **Create**.

 In your vision service's panel, select the color your vision service will be detecting, as well as a hue tolerance and a segment size (in pixels):

@@ -153,21 +144,13 @@ Proceed to [test your detector](#test-your-detector).
 A machine learning detector that draws bounding boxes according to the specified TensorFlow Lite model file available on the robot’s hard drive.
 To create an `mlmodel` detector, you need an [ML model service with a suitable model](../../ml/).

-Navigate to the [robot page on the Viam app](https://app.viam.com/robots).
-Click on the robot you wish to add the vision service to.
-Select the **Config** tab, and click on **Services**. - -Scroll to the **Create Service** section. - {{< tabs >}} {{% tab name="Builder" %}} -1. Select `vision` as the **Type**. -2. Enter a name as the **Name**. -3. Select **ML Model** as the **Model**. -4. Click **Create Service**. - -![Create vision service for mlmodel](/services/vision/mlmodel.png) +Navigate to your robot's **Config** tab on the [Viam app](https://app.viam.com/robots). +Click the **Services** subtab and click **Create service** in the lower-left corner. +Select the `Vision` type, then select the `ML Model` model. +Enter a name for your service and click **Create**. In your vision service's panel, fill in the **Attributes** field. diff --git a/docs/services/vision/segmentation.md b/docs/services/vision/segmentation.md index 23178b3857..cebc25f508 100644 --- a/docs/services/vision/segmentation.md +++ b/docs/services/vision/segmentation.md @@ -29,19 +29,10 @@ It is slower than other segmenters and can take up to 30 seconds to segment a sc {{< tabs >}} {{% tab name="Builder" %}} -Navigate to the [robot page on the Viam app](https://app.viam.com/robots). -Click on the robot you wish to add the vision service to. -Select the **Config** tab, and click on **Services**. - -Scroll to the **Create Service** section. -To create a [vision service](/services/vision/): - -1. Select `vision` as the **Type**. -2. Enter a name as the **Name**. -3. Select **Radius Clustering Segmenter** as the **Model**. -4. Click **Create Service**. - -![Create vision service for obstacles_pointcloud](/services/vision/obstacles_pointcloud.png) +Navigate to your robot's **Config** tab on the [Viam app](https://app.viam.com/robots). +Click the **Services** subtab and click **Create service** in the lower-left corner. +Select the `Vision` type, then select the `Radius Clustering Segmenter` model. +Enter a name for your service and click **Create**. In your vision service's panel, fill in the **Attributes** field. @@ -138,19 +129,10 @@ The label and the pixels associated with the 2D detections become the label and {{< tabs >}} {{% tab name="Builder" %}} -Navigate to the [robot page on the Viam app](https://app.viam.com/robots). -Click on the robot you wish to add the vision service to. -Select the **Config** tab, and click on **Services**. - -Scroll to the **Create Service** section. -To create a [vision service](/services/vision/): - -1. Select `vision` as the **Type**. -2. Enter a name as the **Name**. -3. Select **Detector to 3D Segmenter** as the **Model**. -4. Click **Create Service**. - -![Create vision service for detector_3d_segmenter](/services/vision/detector_3d_segmenter.png) +Navigate to your robot's **Config** tab on the [Viam app](https://app.viam.com/robots). +Click the **Services** subtab and click **Create service** in the lower-left corner. +Select the `Vision` type, then select the `Detector to 3D Segmenter` model. +Enter a name for your service and click **Create**. In your vision service's panel, fill in the **Attributes** field. 
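For reference, the filled-in **Attributes** field for a detector-to-3D-segmenter might look like the following minimal sketch. It assumes the model's documented attributes (`detector_name`, `confidence_threshold_pct`, `mean_k`, and `sigma`); the detector name and the numeric values are illustrative, not settings taken from this change:

```json
{
  "detector_name": "my_detector",
  "confidence_threshold_pct": 0.5,
  "mean_k": 50,
  "sigma": 1.25
}
```

Here `mean_k` and `sigma` tune the statistical outlier filtering applied to the point cloud before segmentation, so adjust them to your sensor's noise.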
diff --git a/docs/tutorials/get-started/confetti-bot.md b/docs/tutorials/get-started/confetti-bot.md index b98e5d2eaa..31ee54ae1f 100644 --- a/docs/tutorials/get-started/confetti-bot.md +++ b/docs/tutorials/get-started/confetti-bot.md @@ -34,7 +34,7 @@ You can expand on this project to turn a motor based on other types of inputs, s ### Hardware * A macOS or Linux computer -* A [Raspberry Pi](https://a.co/d/bxEdcAT), with a [microSD card](https://www.amazon.com/Lexar-Micro-microSDHC-Memory-Adapter/dp/B08XQ7NGG1/ref=sr_1_13), set up using [these instructions](https://docs.viam.com/installation/prepare/rpi-setup/). +* A [Raspberry Pi](https://a.co/d/bxEdcAT), with a [microSD card](https://www.amazon.com/Lexar-Micro-microSDHC-Memory-Adapter/dp/B08XQ7NGG1/ref=sr_1_13), set up using [these instructions](/installation/prepare/rpi-setup/). * A big button, like [this one](https://www.amazon.com/EG-STARTS-Buttons-Illuminated-Machine/dp/B01LZMANZ7/ref=sxts_b2b_sx_reorder_acb_business). Check the wiring diagram for the specific model you have as you wire the button. * A mini confetti cannon, like [this one](https://www.amazon.com/Confetti-Poppers-Party-Accessory-Pack/dp/B074SP7FZH/ref=sr_1_4) @@ -52,7 +52,7 @@ The STL files we use for 3D printing are adapted to the size of this motor, but * [Python3](https://www.python.org/download/releases/3.0/) * [pip](https://pip.pypa.io/en/stable/#) -* [viam-server](https://docs.viam.com/installation/#install-viam-server) +* [viam-server](/installation/#install-viam-server) * [Viam Python SDK](https://python.viam.dev/) ## Set up your hardware @@ -121,33 +121,31 @@ We named ours ConfettiBot. ![A robot page header in the Viam app, its under the location work, and named ConfettiBot.](/tutorials/confetti-bot/app-name-confettibot.png) -Then navigate to the robot’s **CONFIG** tab to start configuring your components. +Then navigate to the robot’s **Config** tab to start configuring your components. {{< tabs >}} {{% tab name="Builder UI" %}} ### Configure the Pi as a board -Click on the **Components** subtab and navigate to the **Create component** menu. +Click on the **Components** subtab and click **Create component** in the lower-left corner of the page. -Add your {{< glossary_tooltip term_id="board" text="board" >}} with the name `party`, type `board` and model `pi`. -Click **Create Component**. +Add your {{< glossary_tooltip term_id="board" text="board" >}} with type `board` and model `pi`. +Enter `party` for the name of your [board component](/components/board/), then click **Create**. -![Create component panel, with the name attribute filled as party, type attribute filled as board and model attribute filled as pi.](/tutorials/confetti-bot/app-board-create.png) - -You can name your board whatever you want as long as you refer to it the same way in your code, we picked `party` for fun. +You can name your board whatever you want as long as you refer to it the same way in your code; we picked `party` for fun. Your board configuration should now look like this: ![Board component configured in the Viam app, the component tab is named party, with a type attribute board and model attribute pi.](/tutorials/confetti-bot/app-board-attribute.png) ### Configure the motor -Add your [motor](https://docs.viam.com/components/motor/) with the name “start”, type `motor`, and model `gpio`. +Click on the **Components** subtab and click **Create component** in the lower-left corner of the page. +Select `motor` for the type and `gpio` for the model. 
+Enter `start` for the name of your [motor component](/components/motor/), then click **Create**. Again, we named it “start” to refer to the button being pressed, but this name is up to you as long as you remember the name and use the same name in the code later. -![Create component panel, with the name attribute filled as start, type attribute filled as motor and model attribute filled as gpio.](/tutorials/confetti-bot/app-motor-create.png) - -After clicking **Create Component**, there is a pin assignment type toggle. +After clicking **Create**, there is a pin assignment type toggle. Select **In1/In2** since that is compatible with the type of input our motor controller expects. In the drop downs for A/In1 and B/In2, choose `13 GPIO 27` and `15 GPIO 22` and for PWM choose `11 GPIO 17` corresponding to our wiring. diff --git a/docs/tutorials/projects/bedtime-songs-bot.md b/docs/tutorials/projects/bedtime-songs-bot.md index 62b4ff81d1..4459da14cc 100644 --- a/docs/tutorials/projects/bedtime-songs-bot.md +++ b/docs/tutorials/projects/bedtime-songs-bot.md @@ -63,10 +63,10 @@ First, add your personal computer's webcam to your robot as a [camera](/componen {{< tabs >}} {{% tab name="Builder UI" %}} -Click on the **Components** subtab and navigate to the **Create component** menu. +Click the **Components** subtab, then click **Create component** in the lower-left corner of the page. -Add your [camera](https://docs.viam.com/components/board/) with the name `cam`, type `camera`, and model `webcam`. -Click **Create Component**. +Select `camera` for the type, then select `webcam` for the model. +Enter `cam` for the name of your [camera component](/components/camera/), then click **Create**. ![Creation of a `webcam` camera in the Viam app config builder. The user is selecting the video_path configuration attribute from the drop-down menu](../../tutorials/bedtime-songs-bot/video-path-ui.png) diff --git a/docs/tutorials/projects/guardian.md b/docs/tutorials/projects/guardian.md index 1263dc61b3..087b0e4ec1 100644 --- a/docs/tutorials/projects/guardian.md +++ b/docs/tutorials/projects/guardian.md @@ -260,59 +260,66 @@ scp labels.txt pi@guardian.local:/home/pi/labels.txt {{% tab name="Builder UI" %}} Next, navigate to the **Config** tab of your robot's page in [the Viam app](https://app.viam.com). -Click on the **Services** subtab and navigate to the **Create service** menu. +Click the **Services** subtab. 1. **Add an ML model service.** - The [ML model service](/services/ml/) allows you to deploy the provided machine learning model to your robot. - Create an ML model with the name `mlmodel`, the type `mlmodel` and the model `tflite_cpu`. - Then click **Create Service**. + The [ML model service](/services/ml/) allows you to deploy the provided machine learning model to your robot. - In the new ML Model panel, select **Path to Existing Model On Robot** for the **Deployment**. + Click **Create service** in the lower-left corner of the page. + Select type `ML Model`, then select model `TFLite CPU`. + Enter `mlmodel` as the name for your ML model service, then click **Create**. - Then specify the absolute **Model Path** as `/home/pi/effdet0.tflite` and the **Label Path** as `/home/pi/labels.txt`. + In the new ML Model panel, select **Path to existing model on robot** for the **Deployment**. + + Then specify the absolute **Model path** as `/home/pi/effdet0.tflite` and the **Label path** as `/home/pi/labels.txt`. 2. 
**Add a vision service.** - Next, add a [detector](/services/vision/detection/) as a vision service to be able to make use of the ML model. - Create an vision service with the name `detector`, the type `vision` and the model `mlmodel`. - Then click **Create Service**. + Next, add a [detector](/services/vision/detection/) as a vision service to be able to make use of the ML model. + + Click **Create service** in the lower-left corner of the page. + Select type `Vision`, then select model `ML Model`. + Enter `detector` as the name, then click **Create**. - In the new detector panel, select the `mlmodel` you configured in the previous step. + In the new detector panel, select the `mlmodel` you configured in the previous step. - Click **Save config** in the bottom left corner of the screen. + Click **Save config** in the bottom left corner of the screen. 3. **Add a `transform` camera.** - To be able to test that the vision service is working, add a `transform` camera which will add bounding boxes and labels around the objects the service detects. + To be able to test that the vision service is working, add a `transform` camera which will add bounding boxes and labels around the objects the service detects. - Click on the **Components** subtab and navigate to the **Create component** menu. - Create a [transform camera](/components/camera/transform/) with the name `transform_cam`, the type `camera` and the model `transform`. + Navigate to the **Components** subtab of the **Config** tab. + Click **Create component** in the lower-left corner of the page. - Replace the attributes JSON object with the following object which specifies the camera source that the `transform` camera will be using and defines a pipeline that adds the defined `detector`: + Select `camera` for the type, then select `transform` for the model. + Enter `transform_cam` as the name for your [transform camera](/components/camera/transform/), then click **Create**. - ```json - { - "source": "cam", - "pipeline": [ - { - "type": "detections", - "attributes": { - "detector_name": "detector", - "confidence_threshold": 0.6 - } - } - ] - } - ``` + Replace the attributes JSON object with the following object which specifies the camera source that the `transform` camera will be using and defines a pipeline that adds the defined `detector`: + + ```json + { + "source": "cam", + "pipeline": [ + { + "type": "detections", + "attributes": { + "detector_name": "detector", + "confidence_threshold": 0.6 + } + } + ] + } + ``` - Click **Save config** in the bottom left corner of the screen. + Click **Save config** in the bottom left corner of the screen. 
{{% /tab %}} {{% tab name="Raw JSON" %}} -Next, on the [`Raw JSON` tab](/manage/configuration/#the-config-tab), replace the configuration with the following configuration which configures the [ML model service](/services/ml/), the [vision service](/services/vision/), and a [transform camera](/components/camera/transform/): +Next, on the [**Raw JSON** tab](/manage/configuration/#the-config-tab), replace the configuration with the following configuration which configures the [ML model service](/services/ml/), the [vision service](/services/vision/), and a [transform camera](/components/camera/transform/): ```json {class="line-numbers linkable-line-numbers" data-line="31-48,50-69"} { diff --git a/docs/tutorials/projects/light-up.md b/docs/tutorials/projects/light-up.md index f7137b7db6..db954503b3 100644 --- a/docs/tutorials/projects/light-up.md +++ b/docs/tutorials/projects/light-up.md @@ -87,43 +87,37 @@ If you want to train your own, you can [train a model](/manage/ml/train-model/). To use the provided Machine Learning model, copy the [effdet0.tflite](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/effdet0.tflite) file and the [labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt) to your project directory. -Click on the **Services** subtab and navigate to the **Create service** menu. +Navigate to the **Services** subtab of your robot's **Config** tab. -1. **Configure the ML model service** +### Configure the ML model service - Add an [mlmodel](/services/ml/) service with the name `people`, type `mlmodel`, and model `tflite_cpu`. - Click **Create service**. +Click **Create service** in the lower-left corner of the page. +Select `ML Model` for the type, then select `TFLite CPU` for the model. +Enter `people` as the name for your [mlmodel](/services/ml/), then click **Create**. - ![Create service panel, with the type attribute filled as mlmodel, name attribute filled as people, and model attribute filled as tflite_cpu.](/tutorials/tipsy/app-service-ml-create.png) +In the new ML Model service panel, configure your service. - In the new ML Model service panel, configure your service. +![mlmodel service panel with empty sections for Model Path, and Optional Settings such as Label Path and Number of threads.](/tutorials/tipsy/app-service-ml-before.png) - ![mlmodel service panel with empty sections for Model Path, and Optional Settings such as Label Path and Number of threads.](/tutorials/tipsy/app-service-ml-before.png) +Select the **Path to existing model on robot** for the **Deployment** field. +Then specify the absolute **Model path** as where your tflite file lives and any **Optional settings** such as the absolute **Label path** as where your labels.txt file lives and the **Number of threads** as `1`. - Select the **Path to Existing Model On Robot** for the **Deployment** field. - Then specify the absolute **Model Path** as where your tflite file lives and any **Optional Settings** such as the absolute **Label Path** as where your labels.txt file lives and the **Number of threads** as 1. +### Configure an mlmodel detector - 1. **Configure an mlmodel detector** +Click **Create service** in the lower-left corner of the page. +For your [vision service](/services/vision/), select type `vision` and model `mlmodel`. +Enter `myPeopleDetector` for the name, then click **Create**. - Add a [vision service](/services/vision/) with the name `myPeopleDetector`, type `vision` and model `mlmodel`. - Click **Create service**. 
+In the new vision service panel, configure your service. - ![Create service panel, with the type attribute filled as mlmodel, name attribute filled as people, and model attributed filled as tflite_cpu.](/tutorials/tipsy/app-service-vision-create.png) +From the **Select model** drop-down, select the name of the TFLite model (`people`). - In the new vision service panel, configure your service. - - ![vision service panel called myPeopleDetector with empty Attributes section](/tutorials/tipsy/app-service-vision-before.png) - - Name the ml model name `people`. - - ![vision service panel called myPeopleDetector with filled Attributes section, mlmodel_name is “people”.](/tutorials/tipsy/app-service-vision-after.png) - -## Configure the detection camera +### Configure the detection camera To be able to test that the vision service is working, add a `transform` camera which will add bounding boxes and labels around the objects the service detects. Click the **Components** subtab and click the **Create component** button in the lower-left corner. -Create a [transform camera](/components/camera/transform/) with type `camera` and model `transform`. +Create a [transform camera](/components/camera/transform/) by selecting type `camera` and model `transform`. Name it `detectionCam` and click **Create**. ![detectionCam component panel with type camera and model transform, Attributes section has source and pipeline but they are empty.](/tutorials/tipsy/app-detection-before.png) diff --git a/docs/tutorials/projects/pet-treat-dispenser.md b/docs/tutorials/projects/pet-treat-dispenser.md index ece1a1980f..1656f11488 100644 --- a/docs/tutorials/projects/pet-treat-dispenser.md +++ b/docs/tutorials/projects/pet-treat-dispenser.md @@ -100,20 +100,20 @@ Now that you've set up your robot, you can start configuring and testing it. ### Configure your {{< glossary_tooltip term_id="board" text="board" >}} Head to the **Config** tab on your robot's page. -Click on the **Components** subtab and navigate to the **Create component** menu. +Click on the **Components** subtab and click the **Create component** button in the lower-left corner. Select `board` as the type and `pi` as the model. -Name the component `pi`. +Name the component `pi`, then click **Create**. ![The Viam app showing the configuration page for a board component with name pi.](/tutorials/pet-treat-dispenser/app-board-pi.png) ### Configure your [webcam](/components/camera/webcam/) -Add another component with the type `camera` component and the model `webcam`. -Name the component `petcam`. +Click **Create component** and add your webcam with type `camera` and model `webcam`. +Name the component `petcam`, then click **Create**. Click on the **video path**. -If the robot is connected, a drop down with available cameras will appear. +If the robot is connected, a drop-down menu with available cameras will appear. Select your camera. ![The Viam app showing the configuration page for a camera component with model webcam.](/tutorials/pet-treat-dispenser/app-camera-webcam.png) @@ -124,11 +124,11 @@ If you are unsure which camera to select, selecte one, save the configuration an ### Configure your [stepper motor](/components/motor/gpiostepper/) -Finally, add another component with the type `motor` component and the model `gpiostepper`. +Finally, click **Create component** and add another component with type `motor` and model `gpiostepper`. -1. 
If you used the same pins as in the wiring diagram, set the `direction` to pin 15 GPIO 22, and the `step` logic to pin 16 GPIO 23. -1. Enable the pin setting as low and configure it to pin 18 GPIO 24. -1. Set the `ticks per rotation` to `400` and select your board model,`pi`. +1. If you used the same pins as in the wiring diagram, set the **direction** to pin `15 GPIO 22`, and the **step** logic to pin `16 GPIO 23`. +1. Set the **Enable pins** toggle to `low`, then set the resulting **Enabled Low** drop-down to pin `18 GPIO 24`. +1. Set the **ticks per rotation** to `400` and select your board model, `pi`. ![The Viam app showing the configuration page for a stepper motor component with model gpiostepper.](/tutorials/pet-treat-dispenser/app-stepper-gpiostepper.png) @@ -298,9 +298,7 @@ Once the model has finished training, deploy it by adding a [ML model service](/ 1. Create a new service, select **ML Model** as the **Type**, and name it `puppymodel`. Select `tflite_cpu` as the **Model**. -![The ML model service panel with the name puppymodel.](/tutorials/pet-treat-dispenser/app-service-mlmodel.png) - -3. To configure your service and deploy a model onto your robot, select **Deploy Model On Robot** for the **Deployment** field. +1. To configure your service and deploy a model onto your robot, select **Deploy Model On Robot** for the **Deployment** field. 1. Select your trained model (`puppymodel`) as your desired **Model**. ### Use the vision service to detect your pet diff --git a/docs/tutorials/projects/send-security-photo.md b/docs/tutorials/projects/send-security-photo.md index 8a588eec3f..2dcb10fb4e 100644 --- a/docs/tutorials/projects/send-security-photo.md +++ b/docs/tutorials/projects/send-security-photo.md @@ -99,14 +99,15 @@ If you want to train your own, you can [train a model](/manage/ml/train-model/). To use the provided Machine Learning model, copy the [effdet0.tflite](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/effdet0.tflite) file and the [labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt) to your project directory. -Click on the **Services** subtab and navigate to the **Create service** menu. +Click the **Services** subtab. 1. **Configure the ML model service** - Add an [mlmodel](/services/ml/) service with the name `people`, type `mlmodel`, and model `tflite_cpu`. - Click **Create service**. + Add an [mlmodel](/services/ml/) service: - ![Create service panel, with the type attribute filled as mlmodel, name attribute filled as people, and model attribute filled as tflite_cpu.](/tutorials/tipsy/app-service-ml-create.png) + Click **Create service** in the lower-left corner of the **Services** subtab. + Select type `mlmodel`, then select model `tflite_cpu`. + Enter `people` as the name, then click **Create**. In the new ML Model service panel, configure your service. @@ -120,8 +121,6 @@ Click on the **Services** subtab and navigate to the **Create service** menu. Add a [vision service](/services/vision/) with the name `myPeopleDetector`, type `vision` and model `mlmodel`. Click **Create service**. - ![Create service panel, with the type attribute filled as mlmodel, name attribute filled as people, and model attributed filled as tflite_cpu.](/tutorials/tipsy/app-service-vision-create.png) - In the new vision service panel, configure your service. 
![vision service panel called myPeopleDetector with empty Attributes section](/tutorials/tipsy/app-service-vision-before.png) diff --git a/docs/tutorials/projects/tipsy.md b/docs/tutorials/projects/tipsy.md index caece880b1..f89aca1ae9 100644 --- a/docs/tutorials/projects/tipsy.md +++ b/docs/tutorials/projects/tipsy.md @@ -76,14 +76,15 @@ Follow the instructions on the **Setup** tab to install `viam-server` on your Ra {{% tab name="Builder UI" %}} Navigate to the **Config** tab of your robot's page in [the Viam app](https://app.viam.com). -Click on the **Components** subtab and navigate to the **Create component** menu. +Click on the **Components** subtab. -1. **Configure the Pi as a board** +1. **Configure the board** - Add your {{< glossary_tooltip term_id="board" text="board" >}} with the name `local`, type `board`, and model `pi`. - Click **Create component**. + Add a {{< glossary_tooltip term_id="board" text="board component" >}} to represent the Raspberry Pi: - ![Create component panel, with the name attribute filled as local, type attribute filled as board and model attribute filled as Pi.](/tutorials/tipsy/app-board-create.png) + Click the **Create component** button in the lower-left corner of the page. + Select type `board` and model `pi`. + Enter `local` as the name, then click **Create**. You can name your board whatever you want as long as you refer to it by the same name in your code. @@ -93,9 +94,7 @@ Click on the **Components** subtab and navigate to the **Create component** menu Add your right [motor](/components/motor/) with the name `rightMotor`, type `motor`, and model `gpio`. - ![Create component panel, with the name attribute filled as rightMotor, type attribute filled as motor and model attribute filled as gpio.](/tutorials/tipsy/app-motor-create.png) - - After clicking **Create component**, a panel will pop up with empty sections for Attributes, Component Pin Assignment, and other information. + After clicking **Create**, a panel will pop up with empty sections for Attributes, Component Pin Assignment, and other information. ![Alt text: rightMotor component panel with empty sections for Attributes, Component Pin Assignment, and other information.](/tutorials/tipsy/app-motor-attribute.png) @@ -373,30 +372,32 @@ scp labels.txt tipsy@tipsy.local:/home/tipsy/labels.txt {{< tabs >}} {{% tab name="Builder UI" %}} -Click on the **Services** subtab and navigate to the **Create service** menu. +Click on the **Services** subtab. 1. **Configure the ML model service** - Add an [mlmodel](/services/ml/) service with the name `people`, type `mlmodel`, and model `tflite_cpu`. - Click **Create service**. + Add an [mlmodel](/services/ml/) service: - ![Create service panel, with the type attribute filled as mlmodel, name attribute filled as people, and model attribute filled as tflite_cpu.](/tutorials/tipsy/app-service-ml-create.png) + Click **Create service** in the lower-left corner of the page. + Select type `ML Model` and model `TFLite CPU`. + Enter `people` for the name of your service, then click **Create**. In the new ML Model service panel, configure your service. ![mlmodel service panel with empty sections for Model Path, and Optional Settings such as Label Path and Number of threads.](/tutorials/tipsy/app-service-ml-before.png) - Select the **Path to Existing Model On Robot** for the **Deployment** field. 
- Then specify the absolute **Model Path** as /home/tipsy/effdet0.tflite and any **Optional Settings** such as the absolute **Label Path** as /home/tipsy/labels.txt and the **Number of threads** as 1. + Select the **Path to existing model on robot** for the **Deployment** field. + Then specify the absolute **Model path** as /home/tipsy/effdet0.tflite and any **Optional settings** such as the absolute **Label path** as /home/tipsy/labels.txt and the **Number of threads** as 1. ![mlmodel service panel, Deployment selected as Path to Existing Model On Robot, Model Path filled as /home/tipsy/effdet0.tflite and Label Path filled as /home/tipsy/labels.txt, Number of threads is 1.](/tutorials/tipsy/app-service-ml-after.png) -1. **Configure an mlmodel detector** +1. **Configure an ML model detector** - Add a [vision service](/services/vision/) with the name `myPeopleDetector`, type `vision`, and model `mlmodel`. - Click **Create service**. + Add a [vision service](/services/vision/) detector: - ![Create service panel, with the type attribute filled as mlmodel, name attribute filled as people, and model attributed filled as tflite_cpu.](/tutorials/tipsy/app-service-vision-create.png) + Click **Create service** in the lower-left corner of the page. + Select type `Vision`, then select model `mlmodel`. + Enter `myPeopleDetector` as the name, then click **Create**. In the new vision service panel, configure your service. @@ -408,10 +409,11 @@ Click on the **Services** subtab and navigate to the **Create service** menu. 1. **Configure the detection camera** - To be able to test that the vision service is working, add a `transform` camera which will add bounding boxes and labels around the objects the service detects. + To be able to test that the vision service is working, add a [transform camera](/components/camera/transform/) which will add bounding boxes and labels around the objects the service detects. - Click on the **Components** subtab and navigate to the **Create component** menu. - Create a [transform camera](/components/camera/transform/) with the name `detectionCam`, the type `camera`, and the model `transform`. + Click on the **Components** subtab, then click **Create component** in the lower-left corner of the page. + Select type `camera`, then select model `transform`. + Enter `detectionCam` as the name, then click **Create**. ![detectionCam component panel with type camera and model transform, Attributes section has source and pipeline but they are empty.](/tutorials/tipsy/app-detection-before.png) diff --git a/docs/tutorials/services/accessing-and-moving-robot-arm.md b/docs/tutorials/services/accessing-and-moving-robot-arm.md index 2227ae01e2..0015f7ddc9 100644 --- a/docs/tutorials/services/accessing-and-moving-robot-arm.md +++ b/docs/tutorials/services/accessing-and-moving-robot-arm.md @@ -53,12 +53,12 @@ If you are connecting to a real robotic arm during this tutorial, make sure your 2. Create a new robot. 3. Follow the instructions on the **Setup** tab. 4. Select the **Config** tab. -5. Under the **Components** section, create a component with the following attributes: +5. Under the **Components** subtab, click **Create component** in the lower-left corner and create a component with the following attributes: - * Choose `Arm` as the **Type** selection - * Choose your desired model in the **Model** selection - * If you're using an xArm 6, choose the `xArm6` model from the drop-down list - * Enter `myArm` as the **Name** for this component + * Choose `Arm` as the type. 
+ * Choose your desired model. + * If you're using an xArm 6, choose the `xArm6` model from the drop-down list. + * Enter `myArm` as the **Name** for this component, then click **Create**. 5. In the newly created `myArm` component panel, fill in some additional details: diff --git a/docs/tutorials/services/color-detection-scuttle.md b/docs/tutorials/services/color-detection-scuttle.md index 7ce1bef810..2f3a549859 100644 --- a/docs/tutorials/services/color-detection-scuttle.md +++ b/docs/tutorials/services/color-detection-scuttle.md @@ -48,20 +48,15 @@ Turn on the power to the rover. This tutorial uses the color `#a13b4c` or `rgb(161,59,76)` (a reddish color). +To create a [color detector vision service](/services/vision/detection/): + {{< tabs >}} {{% tab name="Builder" %}} -Navigate to the [robot page on the Viam app](https://app.viam.com/robots). -Click on the robot you wish to add the vision service to. -Select the **Config** tab, and click on **Services**. - -Scroll to the **Create Service** section. -To create a [color detector vision service](/services/vision/detection/): - -1. Select `vision` as the **Type**. -2. Enter `my_color_detector` as the **Name**. -3. Select **Color Detector** as the **Model**. -4. Click **Create Service**. +Navigate to your robot's **Config** tab on the [Viam app](https://app.viam.com/robots). +Click the **Services** subtab and click **Create service** in the lower-left corner. +Select the `Vision` type, then select the `Color Detector` model. +Enter `my_color_detector` as the name for your service and click **Create**. In your vision service's panel, set the following **Attributes**: diff --git a/docs/tutorials/services/try-viam-color-detection.md b/docs/tutorials/services/try-viam-color-detection.md index f5c1c5c27d..0215b41738 100644 --- a/docs/tutorials/services/try-viam-color-detection.md +++ b/docs/tutorials/services/try-viam-color-detection.md @@ -55,20 +55,17 @@ This tutorial uses the color `#7a4f5c` or `rgb(122, 79, 92)` (a reddish color). **Hex color #7a4f5c**: {{}} +Navigate to your robot's **Config** tab on the [Viam app](https://app.viam.com/robots) and configure your [vision service color detector](/services/vision/detection/): + {{< tabs >}} {{% tab name="Builder" %}} -Navigate to the [robot page on the Viam app](https://app.viam.com/robots). -Click on the robot you wish to add the vision service to. -Select the **Config** tab, and click on **Services**. +1. Click the **Services** subtab and click **Create service** in the lower-left corner. + +1. Select the `Vision` type, then select the `ML Model` model. -Scroll to the **Create Service** section. -To create a [vision service](/services/vision/): +1. Enter `my_color_detector` as the name for your detector and click **Create**. -1. Select `Vision` as the **Type**. -1. Enter `my_color_detector` as the **Name**. -1. Select **Color Detector** as the **Model**. -1. Click **Create Service**. 1. In the resulting vision service panel, click the color picker box to set the color to be detected. For this tutorial, set the color to `rgb(122, 79, 92)` or use hex code `#7a4f5c`. 
diff --git a/docs/tutorials/services/webcam-line-follower-robot.md b/docs/tutorials/services/webcam-line-follower-robot.md index af52fe8a1e..609c963995 100644 --- a/docs/tutorials/services/webcam-line-follower-robot.md +++ b/docs/tutorials/services/webcam-line-follower-robot.md @@ -224,45 +224,47 @@ Now, let's configure the color detector so your rover can detect the line: {{% tab name="Builder UI" %}} Next, navigate to the **Config** tab of your robot's page in [the Viam app](https://app.viam.com). -Click on the **Services** subtab and navigate to the **Create service** menu. +Click on the **Services** subtab. 1. **Add a vision service.** - Next, add a vision service [detector](/services/vision/detection/). - Create an vision service with the name `green_detector`, the type `vision` and the model `color_detector`. - Then click **Create Service**. + Next, add a vision service [detector](/services/vision/detection/): - In your vision service’s panel, select the color your vision service will be detecting, as well as a hue tolerance and a segment size (in pixels). - Use a color picker like [colorpicker.me](https://colorpicker.me/) to approximate the color of your line and get the corresponding rgb or hex value. - We used `rgb(25,255,217)` or `#19FFD9` to match the color of our green electrical tape, and specified a segment size of 100 pixels with a tolerance of 0.06, but you can tweak these later to fine tune your line follower. + Click the **Create service** button in the lower-left corner of the **Services** subtab. + Select type `Vision` and model `Color Detector`. + Enter `green_detector` for the name, then click **Create**. + + In your vision service’s panel, select the color your vision service will be detecting, as well as a hue tolerance and a segment size (in pixels). + Use a color picker like [colorpicker.me](https://colorpicker.me/) to approximate the color of your line and get the corresponding rgb or hex value. + We used `rgb(25,255,217)` or `#19FFD9` to match the color of our green electrical tape, and specified a segment size of 100 pixels with a tolerance of 0.06, but you can tweak these later to fine tune your line follower. 2. Click **Save config** in the bottom left corner of the screen. 3. (optional) **Add a `transform` camera as a visualizer** - If you'd like to see the bounding boxes that the color detector identifies, you'll need to configure a [transform camera](/components/camera/transform/). - This isn't another piece of hardware, but rather a virtual "camera" that takes in the stream from the webcam we just configured and outputs a stream overlaid with bounding boxes representing the color detections. + If you'd like to see the bounding boxes that the color detector identifies, you'll need to configure a [transform camera](/components/camera/transform/). + This isn't another piece of hardware, but rather a virtual "camera" that takes in the stream from the webcam we just configured and outputs a stream overlaid with bounding boxes representing the color detections. - Click on the **Components** subtab and click **Create component**. - Add a [transform camera](/components/camera/transform/) with type `camera` and model `transform`. - Name it `transform_cam` and click **Create**. + Click on the **Components** subtab and click **Create component**. + Add a [transform camera](/components/camera/transform/) with type `camera` and model `transform`. + Name it `transform_cam` and click **Create**. 
- Replace the attributes JSON object with the following object which specifies the camera source that the `transform` camera will be using and defines a pipeline that adds the defined `detector`: + Replace the attributes JSON object with the following object which specifies the camera source that the `transform` camera will be using and defines a pipeline that adds the defined `detector`: - ```json - { - "source": "my_camera", - "pipeline": [ - { - "type": "detections", - "attributes": { - "detector_name": "green_detector", - "confidence_threshold": 0.6 - } + ```json + { + "source": "my_camera", + "pipeline": [ + { + "type": "detections", + "attributes": { + "detector_name": "green_detector", + "confidence_threshold": 0.6 } - ] - } - ``` + } + ] + } + ``` 4. Click **Save config** in the bottom left corner of the screen.
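Once the detector and transform camera are saved, you can sanity-check detections from code. The following is a minimal sketch with the Viam Python SDK under these assumptions: the address and location secret placeholders come from your robot's **Code sample** tab (the exact connection boilerplate varies by SDK version), and the service and camera names (`green_detector`, `my_camera`) match the configuration above:

```python
import asyncio

from viam.robot.client import RobotClient
from viam.rpc.dial import Credentials, DialOptions
from viam.services.vision import VisionClient


async def connect() -> RobotClient:
    # Copy the address and secret from your robot's Code sample tab.
    creds = Credentials(type="robot-location-secret", payload="<LOCATION-SECRET>")
    opts = RobotClient.Options(refresh_interval=0, dial_options=DialOptions(credentials=creds))
    return await RobotClient.at_address("<ROBOT-ADDRESS>", opts)


async def main():
    robot = await connect()
    # Ask the color detector for detections on the configured webcam.
    detector = VisionClient.from_robot(robot, "green_detector")
    detections = await detector.get_detections_from_camera("my_camera")
    for d in detections:
        print(d.class_name, d.confidence)
    await robot.close()


if __name__ == "__main__":
    asyncio.run(main())
```

If the tape is in view, each detection prints with its label and confidence; an empty list usually means the color, tolerance, or lighting needs adjusting.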