diff --git a/docs/source/blog/index.html b/docs/source/blog/index.html index 34305ee43ebf..384485bd6b62 100644 --- a/docs/source/blog/index.html +++ b/docs/source/blog/index.html @@ -31,6 +31,19 @@ + +
+ +
+
+
+
+
release notes
+
16 September 2021, 5 min read
+
Perform Interactive ML-Assisted Labeling with Label Studio 1.3.0
+
+
+
diff --git a/docs/source/blog/release-130.md b/docs/source/blog/release-130.md new file mode 100644 index 000000000000..fbf9a04bb375 --- /dev/null +++ b/docs/source/blog/release-130.md @@ -0,0 +1,75 @@ +--- +title: Perform Interactive ML-Assisted Labeling with Label Studio 1.3.0 +type: blog +image: /images/release-130/predict-owl-region.gif +order: 91 +meta_title: Label Studio Release Notes 1.3.0 +meta_description: Release notes and information about Label Studio version 1.3.0, featuring ML-assisted labeling +--- + +At Label Studio, we're always looking for ways to help you accelerate your data annotation process. With the release of version 1.3.0, you can perform [model-assisted labeling with any connected machine learning backend](/guide/ml.html#Get-interactive-preannotations). + +By interactively predicting annotations, expert human annotators can work alongside pretrained machine learning models or rule-based heuristics to more efficiently complete labeling tasks, helping you get more value from your annotation process and make progress in your machine learning workflow sooner. + +
Gif of using the smart keypoint tool to add a keypoint to an image of an adorable owl while auto-annotation is selected. A spinner icon appears and turns into a checkmark, and when it finishes, a gray brush mask covers the owl, which is then labeled as Bird. + +You can perform ML-assisted labeling with many different types of data. Supplement your image segmentation and object detection tasks that use rectangles, ellipses, polygons, brush masks, and keypoints, and even automatically infer complex shapes like masks or polygons by interacting with simple primitives such as rectangles or keypoints. + +Beyond image labeling use cases, you can also use ML-assisted labeling for your named entity recognition tasks with HTML and text, for example to automatically find repetitive or semantically similar substring patterns within long text samples. + +[Upgrade to the latest version](/guide/install.html) and select **Use for interactive preannotations** when you set up an ML backend, or edit an existing ML backend connection and toggle that option to get started today! + +## Interactive preannotations with images + +Set up an object detection or image segmentation machine learning backend, and you can perform interactive preannotation with images! For example, you can send a keypoint to the machine learning model and it can return a predicted mask region for your image. +
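Conceptually, the backend replies in the standard Label Studio predictions format. As a rough sketch (not actual model output; the tag names and RLE values below are placeholders), a keypoint-to-mask response might carry a result like this:

```python
# Sketch of an interactive preannotation response in Label Studio's
# predictions format. The from_name/to_name values and the RLE payload
# are placeholders, not output from a real model.
prediction = {
    "result": [
        {
            "from_name": "brush",      # hypothetical BrushLabels tag name
            "to_name": "image",        # hypothetical Image tag name
            "type": "brushlabels",
            "value": {
                "format": "rle",       # brush masks are run-length encoded
                "rle": [0, 15, 1, 20], # placeholder RLE data
                "brushlabels": ["Bird"],
            },
        }
    ],
    "score": 0.95,  # optional confidence score for the prediction
}
```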
Same workflow as the owl gif, but this time featuring a robin or woodpecker-like bird. + +Depending on whether speed or precision is more important in your labeling process, you can choose whether to automatically accept the predicted labels. If you deselect the option to auto accept annotation suggestions, you can manually accept predicted regions before submitting an annotation. + +
Gif of using the smart keypoint tool in labeling to add a keypoint to a bird in auto-annotation mode without having predicted regions auto-accepted. After the predicted brush mask appears, an x and a checkmark appear underneath the brush mask to let you accept or reject the mask. + +Of course, with the eraser tool (improved with new granularity in this release!) you can manually correct any incorrectly predicted regions. +
Gif of selecting a brush mask region, then using the eraser tool to remove the parts of the mask covering some flying geese. The region was supposed to cover everything that was not a bird, but it mistakenly included some geese that were out of focus in the image. + +## Selective preannotation with images + +If you only want to perform ML-assisted labeling selectively, that's an option too! When you're labeling, you can toggle **Auto-Annotation** for specific tasks, so you can still label more complicated tasks manually. +
Gif showing the manual brush tool labeling some tree branches as not bird, then enabling auto-annotation and using the smart keypoint tool to select the bird, after which a gray brush mask appears on the bird and that region is labeled as bird. + +For example, with this labeling configuration, the Brush mask tool is visible for manual labeling and the KeyPoint labeling tool is visible only when in auto-annotation mode.
+```xml
+<View>
+  <Image name="image" value="$image" zoom="true"/>
+  <BrushLabels name="brush" toName="image">
+    <Label value="Bird" background="green"/>
+    <Label value="Not Bird" background="red"/>
+  </BrushLabels>
+  <KeyPointLabels name="kp" toName="image" smartOnly="true">
+    <Label value="Bird" background="green"/>
+    <Label value="Not Bird" background="red"/>
+  </KeyPointLabels>
+</View>
+```
+This lets you create traditional brush mask annotations for some tasks, or use the smart keypoint labeling tool to assign keypoints to images and prompt a trained ML backend to predict brush mask regions based on the keypoints. +
Screenshot of the Label Studio UI with smart keypoint tools appearing in purple at the bottom of the toolbar because auto-annotation is enabled. + +## Interactive preannotations for text + +You can also get interactive preannotations for text or HTML when performing named entity recognition (NER) tasks. For example, if you have a long sample of text with multiple occurrences of a word or phrase, you can set up a machine learning backend to identify identical or similar text spans based on your selection. Amplify your text labeling efficiency with this functionality! + +As an example, you can label all instances of opossum in this excerpt from [Ecology of the Opossum on a Natural Area in Northeastern Kansas by Henry S. Fitch et al.](https://www.gutenberg.org/ebooks/37199) using the [Text Named Entity Recognition Template](/templates/named_entity.html). +
Gif scrolling through a long excerpt from the mentioned text about opossums in the Label Studio UI, then enabling auto-annotation and selecting the MISC tag and labeling the word opossum. After a few seconds, all other instances of opossum in the text are similarly labeled. + +You can try this yourself by downloading this example [machine learning backend for substring matching](https://github.com/heartexlabs/label-studio-ml-backend/blob/master/label_studio_ml/examples/substring_matching/substring_matching.py), or take it to the next level using a more sophisticated NLP model like a [transformer](https://github.com/heartexlabs/label-studio-transformers). See more about how to [create your own machine learning backend](/guide/ml_create.html). + +Install or upgrade Label Studio and [start using ML-assisted labeling with interactive preannotations](/guide/ml.html#Get-interactive-preannotations) today! + +## Other improvements + +ML-assisted labeling is the most exciting part of this release, but it's not the only improvement we've made. We improved the filtering options on the data manager, and also improved semantic segmentation workflows. We also added new capabilities for exporting the results of large labeling projects by introducing export files. Start by [creating an export file](/api#operation/api_projects_exports_create) and then [downloading the export file](/api#operation/api_projects_exports_download_read) with the results. + +Check out the full list of improvements and bug fixes in the [release notes on GitHub](https://github.com/heartexlabs/label-studio/releases/tag/v1.3.0). + diff --git a/docs/source/guide/labeling.md index 257ecfe066c1..2720131d72ac 100644 --- a/docs/source/guide/labeling.md +++ b/docs/source/guide/labeling.md @@ -130,6 +130,13 @@ In Label Studio Enterprise, if you're an administrator or project manager, you c 3. 
Select names of annotators and click the `>` arrow to assign them to the selected tasks. 4. Click **Assign**. +## Perform ML-assisted labeling with interactive preannotations + +If you have a machine learning backend set up to [get interactive preannotations](ml.html#Get-interactive-preannotations), you can choose whether to use those predictions while you label. + +1. After you start labeling, you can enable **Auto-Annotation** to see and use the smart option to draw a shape or mask, or to assign a keypoint. After using the smart option to draw on an image, or labeling a text or HTML span, the ML backend returns predictions. +2. For image labeling, you can choose whether to **Auto accept annotation suggestions** after you enable auto-annotation. If you automatically accept annotation suggestions, regions show up automatically and are immediately created. If you don't automatically accept suggestions, the regions appear but you can reject or approve them manually, either individually or all at once. Predicted text regions are automatically accepted. + +## Use keyboard shortcuts to label regions faster Use keyboard shortcuts (hotkeys) to improve your labeling performance. When performing a labeling task, click the gear icon to see more details about hotkeys or to enable or disable hotkeys. diff --git a/docs/source/guide/ml.md index 4368fcab9aac..e3de3496b568 100644 --- a/docs/source/guide/ml.md +++ b/docs/source/guide/ml.md @@ -122,6 +122,28 @@ If you want to retrieve predictions manually for a list of tasks **using only an ] } ``` + +### Get interactive preannotations + +ML-assisted labeling with interactive preannotations works with image segmentation and object detection tasks using rectangles, ellipses, polygons, brush masks, and keypoints, as well as with HTML and text named entity recognition tasks. 
Your ML backend must support the type of labeling that you're performing, recognize the input that you create, and be able to respond with the relevant output for a prediction. + +1. Set up your machine learning backend for ML-assisted labeling. + 1. For your project, open **Settings > Machine Learning**. + 2. Click **Add Model** or select **Edit** for an existing machine learning backend. + 3. Type a **Title** for the machine learning backend. + 4. Enter the **URL** for the running machine learning backend. For example, `http://example.com:9090`. + 5. Enable **Use for interactive preannotations**. + 6. Click **Validate and Save**. +2. For image labeling, you can update your labeling configuration to include the `smart="true"` option for the type of labeling you're performing. Smart tools appear by default if auto-annotation is enabled.
This option is supported for Rectangle, Ellipse, Polygon, KeyPoint, and Brush tags. See the [tag documentation](/tags). If you only want the smart option to appear and don't want to perform manual labeling at all, use `smartOnly="true"`. + 1. For your project, open **Settings > Labeling Interface**. + 2. Click **Code** to view the XML labeling configuration. + 3. For the relevant tag type that you want to use to generate predictions with your ML backend, add the `smart="true"` parameter. For example: `<KeyPoint name="kp" toName="image" smart="true"/>`. + 4. Save your changes. +3. After you start labeling, enable **Auto-Annotation** to see and use the smart option to draw a shape or mask, or to assign a keypoint. +4. For image labeling, after you enable auto-annotation you can choose whether to **Auto accept annotation suggestions**. If you automatically accept annotation suggestions, regions show up automatically and are immediately created. If you don't automatically accept suggestions, the regions appear but you can reject or approve them manually, either individually or all at once. + +
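Putting the configuration steps together, a labeling configuration that pairs a manual tool with a smart tool might look like the following sketch; the tag names and label values here are illustrative, not required:

```xml
<View>
  <Image name="image" value="$image" zoom="true"/>
  <!-- Manual brush masks are always available -->
  <BrushLabels name="brush" toName="image">
    <Label value="Bird" background="green"/>
  </BrushLabels>
  <!-- smart="true" adds the smart keypoint option when auto-annotation is on -->
  <KeyPointLabels name="kp" toName="image" smart="true">
    <Label value="Bird" background="green"/>
  </KeyPointLabels>
</View>
```

With a configuration like this, annotators can draw brush masks manually, or use the smart keypoint tool to request a predicted region from the connected ML backend.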
### Delete predictions @@ -179,8 +201,8 @@ The process of creating annotated training data for supervised machine learning You can select a task ordering like `Predictions score` on Data manager and the sampling strategy will fit the active learning scenario. Label Studio will send a train signal to ML Backend automatically on each annotation submit/update. You can enable these train signals on the **machine learning** settings page for your project. -* If you need to retrieve and save predictions for all tasks, check recommendations from a [topic below](ml.html#Get-predictions-from-a-model). -* If you want to delete all predictions after your model is retrained, check [this topic](ml.html#Delete-predictions). +* If you need to retrieve and save predictions for all tasks, check recommendations for [retrieving predictions from a model](ml.html#Get-predictions-from-a-model). +* If you want to delete all predictions after your model is retrained, see how to [delete predictions](ml.html#Delete-predictions).
diff --git a/docs/source/guide/ml_create.md index a9878d70fe4e..47df48f68ff9 100644 --- a/docs/source/guide/ml_create.md +++ b/docs/source/guide/ml_create.md @@ -88,3 +88,17 @@ def fit(self, completions, workdir=None, **kwargs): After you wrap your model code with the class, define the loaders, and define the methods, you're ready to run your model as an ML backend with Label Studio. See the [Quickstart](ml.html#Quickstart). +## Support interactive preannotations in your ML backend + +If you want to support interactive preannotations in your machine learning backend, refer to [this code example for substring matching](https://github.com/heartexlabs/label-studio-ml-backend/tree/master/label_studio_ml/examples/substring_matching). + +Do the following in your code: +- Define an inference call with the **predict** method as outlined in the [inference section of this guide](ml_create.html#Inference-call). +- Within that predict method, read the task data from the `tasks` parameter, which contains details about the task being preannotated, and the context details from `kwargs['context']`, which describes the actions performed in Label Studio. +- Using the task and context data, construct a prediction. +- Return a result in the Label Studio predictions format. + +Refer to the code example for more details. + + + diff --git a/docs/source/guide/predictions.md index 714a4af7f820..0f0c6d0a1a16 100644 --- a/docs/source/guide/predictions.md +++ b/docs/source/guide/predictions.md @@ -9,6 +9,8 @@ meta_description: Import predicted labels, predictions, pre-annotations, or pre- If you have predictions generated for your dataset from a model, either as pre-annotated tasks or pre-labeled tasks, you can import the predictions with your dataset into Label Studio for review and correction. 
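To make the substring-matching flow above concrete, here is a simplified sketch of the matching logic such a predict method might use for NER preannotations. The function name and the `from_name`, `to_name`, and label values are assumptions for illustration; the real substring_matching example also handles the ML backend request/response plumbing:

```python
def substring_results(text, selection, from_name="label", to_name="text", label="MISC"):
    """Build Label Studio-style NER results for every occurrence of the
    substring that the annotator selected (a simplified sketch; names are
    hypothetical, not taken from the real example)."""
    results = []
    start = text.find(selection)
    while start != -1:
        results.append({
            "from_name": from_name,
            "to_name": to_name,
            "type": "labels",
            "value": {
                "start": start,
                "end": start + len(selection),
                "text": selection,
                "labels": [label],
            },
        })
        # Continue searching after the current match
        start = text.find(selection, start + len(selection))
    return results

# Example: two occurrences of "opossum" produce two labeled spans
spans = substring_results("The opossum met another opossum.", "opossum")
```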
Label Studio automatically displays the pre-annotations that you import on the Labeling page for each task. +> To generate interactive pre-annotations with a machine learning model while labeling, see [Set up machine learning with Label Studio](ml.html). + To import predicted labels into Label Studio, you must use the [Basic Label Studio JSON format](tasks.html#Basic-Label-Studio-JSON-format) and set up your tasks with the `predictions` JSON key. The Label Studio ML backend also outputs tasks in this format. For image pre-annotations, Label Studio expects the x, y, width, and height of image annotations to be provided in percentages of overall image dimension. See [Units for image annotations](predictions.html#Units_for_image_annotations) on this page for more about how to convert formats. diff --git a/docs/themes/htx/layout/partials/index.ejs b/docs/themes/htx/layout/partials/index.ejs index 4eab8525d783..4a892581a89a 100644 --- a/docs/themes/htx/layout/partials/index.ejs +++ b/docs/themes/htx/layout/partials/index.ejs @@ -15,11 +15,11 @@ menu && menu.classList.add('main__page__header')

Data Science starts with data.
Label Studio removes the pain of labeling it.

- + New -

Release 1.2

+

Release 1.3

-

Webhooks Support

+

ML-Assisted Labeling

diff --git a/docs/themes/htx/source/images/release-130/accept-predictions.gif b/docs/themes/htx/source/images/release-130/accept-predictions.gif new file mode 100644 index 000000000000..b4a26caa6960 Binary files /dev/null and b/docs/themes/htx/source/images/release-130/accept-predictions.gif differ diff --git a/docs/themes/htx/source/images/release-130/combo-manual-auto.gif b/docs/themes/htx/source/images/release-130/combo-manual-auto.gif new file mode 100644 index 000000000000..29f43b658920 Binary files /dev/null and b/docs/themes/htx/source/images/release-130/combo-manual-auto.gif differ diff --git a/docs/themes/htx/source/images/release-130/edit-predicted-mask.gif b/docs/themes/htx/source/images/release-130/edit-predicted-mask.gif new file mode 100644 index 000000000000..23265bb66194 Binary files /dev/null and b/docs/themes/htx/source/images/release-130/edit-predicted-mask.gif differ diff --git a/docs/themes/htx/source/images/release-130/labeling-yes-auto.png b/docs/themes/htx/source/images/release-130/labeling-yes-auto.png new file mode 100644 index 000000000000..0eaeaaf6289b Binary files /dev/null and b/docs/themes/htx/source/images/release-130/labeling-yes-auto.png differ diff --git a/docs/themes/htx/source/images/release-130/possum-text-annotation.gif b/docs/themes/htx/source/images/release-130/possum-text-annotation.gif new file mode 100644 index 000000000000..544ed430b730 Binary files /dev/null and b/docs/themes/htx/source/images/release-130/possum-text-annotation.gif differ diff --git a/docs/themes/htx/source/images/release-130/predict-bird-region.gif b/docs/themes/htx/source/images/release-130/predict-bird-region.gif new file mode 100644 index 000000000000..224ff95d4175 Binary files /dev/null and b/docs/themes/htx/source/images/release-130/predict-bird-region.gif differ diff --git a/docs/themes/htx/source/images/release-130/predict-owl-region.gif b/docs/themes/htx/source/images/release-130/predict-owl-region.gif new file mode 100644 index 
000000000000..da3139ed1064 Binary files /dev/null and b/docs/themes/htx/source/images/release-130/predict-owl-region.gif differ