docs: Fixes and adding front matter to example README files (#610)
Co-authored-by: caitlinwheeless <[email protected]>
caitlinwheeless authored Aug 19, 2024
1 parent ea4f97f commit 98cdd5e
Showing 5 changed files with 75 additions and 16 deletions.
10 changes: 5 additions & 5 deletions label_studio_ml/examples/grounding_sam/README.md
@@ -1,28 +1,28 @@
<!--
---
title: Zero-shot object detection and image segmentation with Grounding DINO
title: Zero-shot object detection and image segmentation with Grounding DINO and SAM
type: guide
tier: all
order: 15
hide_menu: true
hide_frontmatter_title: true
meta_title: Image segmentation in Label Studio using a Grounding DINO backend
meta_description: Label Studio tutorial for using Grounding DINO for zero-shot object detection in images
meta_title: Image segmentation in Label Studio using a Grounding DINO backend and SAM
meta_description: Label Studio tutorial for using Grounding DINO and SAM for zero-shot object detection in images
categories:
- Computer Vision
- Image Annotation
- Object Detection
- Zero-shot Image Segmentation
- Grounding DINO
- Segment Anything Model
image: "/tutorials/grounding-dino.png"
image: "/tutorials/grounding-sam.png"
---
-->

https://github.com/HumanSignal/label-studio-ml-backend/assets/106922533/d1d2f233-d7c0-40ac-ba6f-368c3c01fd36


# Grounding DINO backend integration
# Grounding DINO backend integration with SAM enabled

This integration will allow you to:

24 changes: 21 additions & 3 deletions label_studio_ml/examples/segment_anything_2_image/README.md
@@ -1,9 +1,27 @@

Segment Anything 2, or SAM 2, is a model releaed by Meta in July 2024. An update to the original Segment Anything Model,
<!--
---
title: SAM2 with Images
type: guide
tier: all
order: 15
hide_menu: true
hide_frontmatter_title: true
meta_title: Using SAM2 with Label Studio for Image Annotation
categories:
- Computer Vision
- Image Annotation
- Object Detection
- Segment Anything Model
image: "/tutorials/sam2-images.png"
---
-->

# Using SAM2 with Label Studio for Image Annotation

Segment Anything 2, or SAM 2, is a model released by Meta in July 2024. An update to the original Segment Anything Model,
SAM 2 provides even better object segmentation for both images and video. In this guide, we'll show you how to use
SAM 2 for better image labeling with Label Studio.

## Using SAM2 with Label Studio (tutorial)
Click on the image below to watch our ML Evangelist Micaela Kaplan explain how to link SAM 2 to your Label Studio Project.
You'll need to follow the instructions below to stand up an instance of SAM2 before you can link your model!
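Standing up the SAM2 backend follows the same pattern as the other examples in this repository; a minimal sketch, assuming the repo's example directory layout and its bundled `docker-compose.yml` (paths and the default port are assumptions, not taken from this diff):

```shell
# Sketch: start the SAM2 image backend locally (assumes the example
# ships a docker-compose.yml, as the other ML backend examples do).
git clone https://github.com/HumanSignal/label-studio-ml-backend.git
cd label-studio-ml-backend/label_studio_ml/examples/segment_anything_2_image

# Build and start the backend; ML backends listen on port 9090 by default.
docker-compose up --build
```

Once the container is up, connect `http://localhost:9090` as the ML backend URL in your project's model settings.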

22 changes: 21 additions & 1 deletion label_studio_ml/examples/segment_anything_2_video/README.md
@@ -1,3 +1,23 @@
<!--
---
title: SAM2 with Videos
type: guide
tier: all
order: 15
hide_menu: true
hide_frontmatter_title: true
meta_title: Using SAM2 with Label Studio for Video Annotation
categories:
- Computer Vision
- Video Annotation
- Object Detection
- Segment Anything Model
image: "/tutorials/sam2-video.png"
---
-->

# Using SAM2 with Label Studio for Video Annotation

This guide describes the simplest way to start using **SegmentAnything 2** with Label Studio.

This repository is specifically for working with object tracking in videos. For working with images,
@@ -51,7 +71,7 @@ For your project, you can use any labeling config with video properties. Here's
}-->


# Known limitiations
# Known limitations
- As of 8/11/2024, SAM2 only runs on GPU servers.
- Currently, we only support the tracking of one object in video, although SAM2 can support multiple.
- Currently, we do not support video segmentation.
4 changes: 2 additions & 2 deletions label_studio_ml/examples/tesseract/README.md
@@ -45,7 +45,7 @@ Launch Label Studio. You can follow the guide from the [official documentation](
docker run -it \
-p 8080:8080 \
-v `pwd`/mydata:/label-studio/data \
humansignal/label-studio:latest
heartex/label-studio:latest
```

Optionally, you may enable local file serving in Label Studio
@@ -56,7 +56,7 @@ Launch Label Studio. You can follow the guide from the [official documentation](
-v `pwd`/mydata:/label-studio/data \
--env LABEL_STUDIO_LOCAL_FILES_SERVING_ENABLED=true \
--env LABEL_STUDIO_LOCAL_FILES_DOCUMENT_ROOT=/label-studio/data/images \
humansignal/label-studio:latest
heartex/label-studio:latest
```
If you're using local file serving, be sure to [get a copy of the API token](https://labelstud.io/guide/user_account#Access-token) from
Label Studio to connect the model.
31 changes: 26 additions & 5 deletions label_studio_ml/examples/watsonx_llm/README.md
@@ -1,6 +1,23 @@

<!--
---
title: Integrate WatsonX with Label Studio
type: guide
tier: all
order: 15
hide_menu: true
hide_frontmatter_title: true
meta_title: Using WatsonX with Label Studio
categories:
- Computer Vision
- Video Annotation
- Object Detection
- Segment Anything Model
image: "/tutorials/watsonx.png"
---
-->

# Integrate WatsonX with Label Studio

WatsonX offers a suite of machine learning tools, including access to many LLMs, prompt
refinement interfaces, and datastores via WatsonX.data. When you integrate WatsonX with Label Studio, you get
access to these models and can automatically keep your annotated data up to date in your WatsonX.data tables.
@@ -13,8 +30,12 @@ on webhooks, see [our documentation](https://labelstud.io/guide/webhooks)

See the configuration notes at the bottom for details on how to set up your environment variables to get the system to work.

For a video demonstration, see [Integrating Label Studio with IBM WatsonX](https://www.youtube.com/watch?v=9iP2yO4Geqc).

<video src="https://www.youtube.com/watch?v=9iP2yO4Geqc" controls="controls" style="max-width: 800px;" class="gif-border" />

## Setting up your label_config
For this project, we reccoment you start with the labeling config as defined below, but you can always edit it or expand it to
For this project, we recommend you start with the labeling config as defined below, but you can always edit it or expand it to
meet your needs! Crucially, there must be a `<TextArea>` tag for the model to insert its response into.

<View>
@@ -138,13 +159,13 @@ The following parameters allow you to link the WatsonX models to Label Studio:

The following parameters allow you to use the webhook connection to transfer data from Label Studio to WatsonX.data:

-`WATSONX_ENG_USERNAME`- MUST be `ibmlhapikey` for the intergration to work.
-`WATSONX_ENG_USERNAME`- MUST be `ibmlhapikey` for the integration to work.

To get the host and port information below, you can folllow the steps under [Pre-requisites](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-con-presto-serv#conn-to-prestjava).
To get the host and port information below, you can follow the steps under [Pre-requisites](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-con-presto-serv#conn-to-prestjava).

- `WATSONX_ENG_HOST` - the host information for your WatsonX.data Engine
- `WATSONX_ENG_PORT` - the port information for your WatsonX.data Engine
- `WATSONX_CATALOG` - the name of the catalog for the table you'll insert your data into. Must be created in the WatsonX.data platform.
- `WATSONX_SCHEMA` - the name of the schema for the table you'll insert your data into. Must be created in the WatsonX.data platofrm.
- `WATSONX_SCHEMA` - the name of the schema for the table you'll insert your data into. Must be created in the WatsonX.data platform.
- `WATSONX_TABLE` - the name of the table you'll insert your data into. Does not need to be already created.
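Taken together, the webhook variables above can be exported before starting the backend. A minimal sketch with placeholder values (the host, port, catalog, schema, and table names are illustrative, not real endpoints):

```shell
# Placeholder values for illustration -- substitute your own
# WatsonX.data connection details from the Pre-requisites steps.
export WATSONX_ENG_USERNAME=ibmlhapikey            # must be exactly this value
export WATSONX_ENG_HOST=example.lakehouse.appdomain.cloud
export WATSONX_ENG_PORT=30912
export WATSONX_CATALOG=my_catalog                  # must already exist in WatsonX.data
export WATSONX_SCHEMA=my_schema                    # must already exist in WatsonX.data
export WATSONX_TABLE=label_studio_annotations      # created for you if absent
```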
