Update bill:effdet0 to EfficientDet-COCO #2600

Merged 2 commits on Feb 29, 2024
4 changes: 2 additions & 2 deletions docs/tutorials/projects/guardian.md
@@ -245,7 +245,7 @@ Then test the components on the [machine's Control tab](/fleet/machines/#control

## Detect persons and pets

- For the guardian to be able to detect living beings, you will use a Machine Learning model from the Viam Registry called [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+ For the guardian to be able to detect living beings, you will use a machine learning model from the Viam registry called [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
The model can detect a variety of things, which you can see in the <file>[labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt)</file> file.

You can also [train your own custom model](/ml/train-model/) based on images from your robot, but the provided machine learning model is a good one to start with.
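The linked labels file is a plain list of class names, one per line, with each line's position corresponding to a class index the detector can report. A minimal Python sketch of that mapping (the sample file contents and index below are hypothetical, not taken from the actual labels.txt):

```python
# Sketch: parse a labels.txt-style file (one class name per line) and map
# a detector's numeric class index to a human-readable label.
# The sample contents below are hypothetical, not the real labels.txt.

def load_labels(text: str) -> list[str]:
    """Parse labels-file contents into an ordered list of class names."""
    return [line.strip() for line in text.splitlines() if line.strip()]

sample = """Person
Dog
Cat
"""

labels = load_labels(sample)
class_index = 1  # index as reported by the detector
print(labels[class_index])  # "Dog" for this sample file
```

The same index-to-name lookup is what the vision service does internally when it attaches readable labels to detections.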
@@ -265,7 +265,7 @@ Select type `ML Model`, then select model `TFLite CPU`.
Enter `mlmodel` as the name for your ML model service, then click **Create**.

Select **Deploy model on robot** for the **Deployment** field.
- Then select the `bill:effdet0` model from the **Models** dropdown.
+ Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.

2. **Add a vision service.**

4 changes: 2 additions & 2 deletions docs/tutorials/projects/integrating-viam-with-openai.md
@@ -233,7 +233,7 @@ We found that if set up this way, the following positions accurately show the co
### 2. Configure the ML Model and vision services to use the detector

The [ML model service](/ml/) allows you to deploy a machine learning model to your robot.
- This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+ This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
This model can detect a variety of objects, which you can find in the provided <file>[labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt)</file> file.

Click the **Config** tab and then the **Services** subtab.
@@ -243,7 +243,7 @@ Your robot will register this as a machine learning model and make it available
{{<imgproc src="/tutorials/ai-integration/mlmodels_service_add.png" resize="500x" declaredimensions=true alt="Adding the ML Models Service." style="border:1px solid #000">}}

Select **Deploy model on robot** for the **Deployment** field.
- Then select the `bill:effdet0` model from the **Models** dropdown.
+ Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.

Now, create a new service of **type** `vision`, **model** `ML Model` named 'vis-stuff-detector'.
Your companion robot will use this to interface with the machine learning model, allowing you to, well, detect stuff!
4 changes: 2 additions & 2 deletions docs/tutorials/projects/light-up.md
@@ -78,7 +78,7 @@ Navigate to the **Control** tab where you can see your camera working.

## Configure your services

- This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+ This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
This model can detect a variety of objects, which you can find in the provided <file>[labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt)</file> file.

If you want to train your own model instead, follow the instructions to [train a model](/ml/train-model/).
@@ -98,7 +98,7 @@ In the new ML Model service panel, configure your service.
![mlmodel service panel with empty sections for Model Path, and Optional Settings such as Label Path and Number of threads.](/tutorials/tipsy/app-service-ml-before.png)

Select **Deploy model on robot** for the **Deployment** field.
- Then select the `bill:effdet0` model from the **Models** dropdown.
+ Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.

### Configure an mlmodel detector

4 changes: 2 additions & 2 deletions docs/tutorials/projects/send-security-photo.md
@@ -89,7 +89,7 @@ Navigate to the **Control** tab where you can see your camera working.

### Configure your services

- This tutorial uses a pre-trained Machine Learning model from the Viam Registry called [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+ This tutorial uses a pre-trained Machine Learning model from the Viam Registry called [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
The model can detect a variety of things, including `Persons`.
You can see a full list of what the model can detect in the <file>[labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt)</file> file.
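Once the detector is running, application code typically filters its detections for the `Person` label above a confidence threshold before acting, such as sending a photo. A minimal sketch of that check, using plain dictionaries to stand in for the vision service's detection objects (the `class_name` and `confidence` keys and the threshold value are assumptions for illustration, not the tutorial's actual API):

```python
# Sketch: decide whether a security photo should be sent, based on a list
# of detections. Each detection is a plain dict here; the "class_name" and
# "confidence" keys are assumptions modeled on typical detector output.

def person_detected(detections: list[dict], threshold: float = 0.6) -> bool:
    """Return True if any detection is a Person at or above the threshold."""
    return any(
        d["class_name"] == "Person" and d["confidence"] >= threshold
        for d in detections
    )

detections = [
    {"class_name": "Dog", "confidence": 0.91},
    {"class_name": "Person", "confidence": 0.72},
]
print(person_detected(detections))  # True: a Person was seen at 0.72 >= 0.6
```

Thresholding like this keeps low-confidence matches from triggering spurious notifications.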

@@ -111,7 +111,7 @@ Click the **Services** subtab.
![mlmodel service panel with empty sections for Model Path, and Optional Settings such as Label Path and Number of threads.](/tutorials/tipsy/app-service-ml-before.png)

Select **Deploy model on robot** for the **Deployment** field.
- Then select the `bill:effdet0` model from the **Models** dropdown.
+ Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.

1. **Configure an mlmodel detector**

4 changes: 2 additions & 2 deletions docs/tutorials/projects/tipsy.md
@@ -306,7 +306,7 @@ On the control tab, you will see panels for each of your configured components.

## Configure your services

- This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+ This tutorial uses a pre-trained machine learning (ML) model from the Viam registry named [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
This model can detect a variety of objects, which you can find in the provided <file>[labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt)</file> file.

If you want to train your own model instead, follow the instructions to [train a model](/ml/train-model/).
@@ -330,7 +330,7 @@ Click on the **Services** subtab.
![mlmodel service panel with empty sections for Model Path, and Optional Settings such as Label Path and Number of threads.](/tutorials/tipsy/app-service-ml-before.png)

Select **Deploy model on robot** for the **Deployment** field.
- Then select the `bill:effdet0` model from the **Models** dropdown.
+ Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.

1. **Configure an ML model detector**

4 changes: 2 additions & 2 deletions docs/tutorials/projects/verification-system.md
@@ -82,15 +82,15 @@ In order for your machine's camera to be able to detect the presence of a person
### Use an existing ML model

The [ML model service](/ml/) allows you to deploy a machine learning model to your robot.
- For your machine to be able to detect people, you will use a Machine Learning model from the Viam registry called [`effdet0`](https://app.viam.com/ml-model/bill/effdet0).
+ For your machine to be able to detect people, you will use a Machine Learning model from the Viam registry called [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO).
The model can detect a variety of things, which you can see in the <file>[labels.txt](https://github.com/viam-labs/devrel-demos/raw/main/Light%20up%20bot/labels.txt)</file> file, including `person`s.

1. Navigate to your machine's **Config** tab on the [Viam app](https://app.viam.com/Machines).
2. Click **Create service** in the lower-left corner of the page.
3. Select type `ML Model`, then select model `TFLite CPU`.
4. Enter `persondetect` as the name for your ML model service, then click **Create**.
5. Select **Deploy model on robot** for the **Deployment** field.
- 6. Then select the `bill:effdet0` model from the **Models** dropdown.
+ 6. Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.

Finally, configure an `mlmodel` detector vision service to use your new `"persondetect"` ML model:
