From 38bf29dfa635bf53099e550606d48e31c0c20521 Mon Sep 17 00:00:00 2001
From: Naomi Pentrel <5212232+npentrel@users.noreply.github.com>
Date: Thu, 29 Feb 2024 16:40:40 +0100
Subject: [PATCH 1/2] Update bill:effdet0 to viam-labs:EfficientDet-COCO

---
docs/tutorials/projects/guardian.md | 4 ++--
docs/tutorials/projects/integrating-viam-with-openai.md | 4 ++--
docs/tutorials/projects/light-up.md | 4 ++--
docs/tutorials/projects/send-security-photo.md | 4 ++--
docs/tutorials/projects/tipsy.md | 4 ++--
docs/tutorials/projects/verification-system.md | 4 ++--
6 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/docs/tutorials/projects/guardian.md b/docs/tutorials/projects/guardian.md
index 8956cb3a2b..cf6e41f53b 100644
--- a/docs/tutorials/projects/guardian.md
+++ b/docs/tutorials/projects/guardian.md
@@ -245,7 +245,7 @@ Then test the components on the [machine's Control tab](/fleet/machines/#control
## Detect persons and pets
-For the guardian to be able to detect living beings, you will use a Machine Learning model from the Viam Registry called [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+For the guardian to be able to detect living beings, you will use a Machine Learning model from the Viam Registry called [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
The model can detect a variety of things which you can see in [labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt) file.
You can also [train your own custom model](/ml/train-model/) based on images from your robot but the provided Machine Learning model is a good one to start with.
@@ -265,7 +265,7 @@ Select type `ML Model`, then select model `TFLite CPU`.
Enter `mlmodel` as the name for your ML model service, then click **Create**.
Select the **Deploy model on robot** for the **Deployment** field.
-Then select the `bill:effdet0` model from the **Models** dropdown.
+Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.
2. **Add a vision service.**
diff --git a/docs/tutorials/projects/integrating-viam-with-openai.md b/docs/tutorials/projects/integrating-viam-with-openai.md
index fb47e2f392..91ef39d2af 100644
--- a/docs/tutorials/projects/integrating-viam-with-openai.md
+++ b/docs/tutorials/projects/integrating-viam-with-openai.md
@@ -233,7 +233,7 @@ We found that if set up this way, the following positions accurately show the co
### 2. Configure the ML Model and vision services to use the detector
The [ML model service](/ml/) allows you to deploy a machine learning model to your robot.
-This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
This model can detect a variety of objects, which you can find in the provided [labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt) file.
Click the **Config** tab and then the **Services** subtab.
@@ -243,7 +243,7 @@ Your robot will register this as a machine learning model and make it available
{{}}
Select the **Deploy model on robot** for the **Deployment** field.
-Then select the `bill:effdet0` model from the **Models** dropdown.
+Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.
Now, create a new service of **type** `vision`, **model** `ML Model` named 'vis-stuff-detector'.
Your companion robot will use this to interface with the machine learning model allowing you to - well, detect stuff!
diff --git a/docs/tutorials/projects/light-up.md b/docs/tutorials/projects/light-up.md
index affa2c9c9b..54b74ded4e 100644
--- a/docs/tutorials/projects/light-up.md
+++ b/docs/tutorials/projects/light-up.md
@@ -78,7 +78,7 @@ Navigate to the **Control** tab where you can see your camera working.
## Configure your services
-This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
This model can detect a variety of objects, which you can find in the provided [labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt) file.
If you want to train your own model instead, follow the instructions to [train a model](/ml/train-model/).
@@ -98,7 +98,7 @@ In the new ML Model service panel, configure your service.
![mlmodel service panel with empty sections for Model Path, and Optional Settings such as Label Path and Number of threads.](/tutorials/tipsy/app-service-ml-before.png)
Select the **Deploy model on robot** for the **Deployment** field.
-Then select the `bill:effdet0` model from the **Models** dropdown.
+Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.
### Configure an mlmodel detector
diff --git a/docs/tutorials/projects/send-security-photo.md b/docs/tutorials/projects/send-security-photo.md
index 045f7b367b..d5e256a68a 100644
--- a/docs/tutorials/projects/send-security-photo.md
+++ b/docs/tutorials/projects/send-security-photo.md
@@ -89,7 +89,7 @@ Navigate to the **Control** tab where you can see your camera working.
### Configure your services
-This tutorial uses a pre-trained Machine Learning model from the Viam Registry called [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+This tutorial uses a pre-trained machine learning model from the Viam registry called [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
The model can detect a variety of things, including `Persons`.
You can see a full list of what the model can detect in [labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt) file.
@@ -111,7 +111,7 @@ Click the **Services** subtab.
![mlmodel service panel with empty sections for Model Path, and Optional Settings such as Label Path and Number of threads.](/tutorials/tipsy/app-service-ml-before.png)
Select the **Deploy model on robot** for the **Deployment** field.
- Then select the `bill:effdet0` model from the **Models** dropdown.
+ Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.
1. **Configure an mlmodel detector**
diff --git a/docs/tutorials/projects/tipsy.md b/docs/tutorials/projects/tipsy.md
index 5baf41a0c9..b8cbfc6cd2 100644
--- a/docs/tutorials/projects/tipsy.md
+++ b/docs/tutorials/projects/tipsy.md
@@ -306,7 +306,7 @@ On the control tab, you will see panels for each of your configured components.
## Configure your services
-This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
This model can detect a variety of objects, which you can find in the provided [labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt) file.
If you want to train your own model instead, follow the instructions to [train a model](/ml/train-model/).
@@ -330,7 +330,7 @@ Click on the **Services** subtab.
![mlmodel service panel with empty sections for Model Path, and Optional Settings such as Label Path and Number of threads.](/tutorials/tipsy/app-service-ml-before.png)
Select the **Deploy model on robot** for the **Deployment** field.
- Then select the `bill:effdet0` model from the **Models** dropdown.
+ Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.
1. **Configure an ML model detector**
diff --git a/docs/tutorials/projects/verification-system.md b/docs/tutorials/projects/verification-system.md
index 534115cf8c..789a674ab8 100644
--- a/docs/tutorials/projects/verification-system.md
+++ b/docs/tutorials/projects/verification-system.md
@@ -82,7 +82,7 @@ In order for your machine's camera to be able to detect the presence of a person
### Use an existing ML model
The [ML model service](/ml/) allows you to deploy a machine learning model to your robot.
-For your machine to be able to detect people, you will use a Machine Learning model from the Viam registry called [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+For your machine to be able to detect people, you will use a machine learning model from the Viam registry called [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
The model can detect a variety of things which you can see in [labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt) file including `person`s.
1. Navigate to your machine's **Config** tab on the [Viam app](https://app.viam.com/Machines).
@@ -90,7 +90,7 @@ The model can detect a variety of things which you can see in [labels.txt]
3. Select type `ML Model`, then select model `TFLite CPU`.
4. Enter `persondetect` as the name for your ML model service, then click **Create**.
5. Select the **Deploy model on robot** for the **Deployment** field.
-6. Then select the `bill:effdet0` model from the **Models** dropdown.
+6. Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.
Finally, configure an `mlmodel` detector vision service to use your new `"persondetect"` ML model:
From 9a66a95a42089380871820c5186d65032869070c Mon Sep 17 00:00:00 2001
From: Naomi Pentrel <5212232+npentrel@users.noreply.github.com>
Date: Thu, 29 Feb 2024 17:54:56 +0100
Subject: [PATCH 2/2] Update docs/tutorials/projects/guardian.md

Co-authored-by: andf-viam <132301587+andf-viam@users.noreply.github.com>
---
docs/tutorials/projects/guardian.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tutorials/projects/guardian.md b/docs/tutorials/projects/guardian.md
index cf6e41f53b..1397c54f2d 100644
--- a/docs/tutorials/projects/guardian.md
+++ b/docs/tutorials/projects/guardian.md
@@ -245,7 +245,7 @@ Then test the components on the [machine's Control tab](/fleet/machines/#control
## Detect persons and pets
-For the guardian to be able to detect living beings, you will use a Machine Learning model from the Viam Registry called [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
+For the guardian to be able to detect living beings, you will use a machine learning model from the Viam registry called [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
The model can detect a variety of things which you can see in [labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt) file.
You can also [train your own custom model](/ml/train-model/) based on images from your robot but the provided Machine Learning model is a good one to start with.
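Note for reviewers (context only, not applied by this series): each tutorial touched here deploys the `viam-labs:EfficientDet-COCO` model via the ML model service and then points an `mlmodel` vision service at it. As a rough sketch of what the resulting raw JSON config looks like — using the `persondetect` service name from verification-system.md; other tutorials use their own names — the detector entry is approximately:

```json
{
  "name": "myPeopleDetector",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "persondetect"
  }
}
```

The `mlmodel_name` attribute must match the name given to the ML model service in each tutorial, so only the registry entry selected in the **Models** dropdown changes with this patch; the vision service wiring in the docs stays the same.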