Add more video links, clean up doc (computational-cell-analytics#594)
Update the documentation
constantinpape authored May 8, 2024
1 parent 034b101 commit e0a3685
Showing 8 changed files with 26 additions and 24 deletions.
6 changes: 3 additions & 3 deletions doc/annotation_tools.md
@@ -99,9 +99,9 @@ Most elements are the same as in [the 2d annotator](#annotator-2d):
7. The menu for committing the current tracking result.
8. The menu for clearing the current annotations.

The tracking annotator only supports 2d image data, volumetric data is not supported. We also do not support automatic tracking yet.
The tracking annotator only supports 2d image data with a time dimension; volumetric data plus time is not supported. We also do not support automatic tracking yet.

Check out [the video tutorial](TODO) (coming soon!) for an in-depth explanation on how to use this tool.
Check out [the video tutorial](https://youtu.be/1gg8OPHqOyc) for an in-depth explanation on how to use this tool.


## Image Series Annotator
@@ -121,7 +121,7 @@ Once you click `Annotate Images` the images from the folder you have specified w

This menu will not open if you start the image series annotator from the command line or via python. In this case the input folder and other settings are passed as parameters instead.

Check out [the video tutorial](TODO) (coming soon!) for an in-depth explanation on how to use the image series annotator.
Check out [the video tutorial](https://youtu.be/HqRoImdTX3c) for an in-depth explanation on how to use the image series annotator.


## Finetuning UI
2 changes: 1 addition & 1 deletion doc/bioimageio/em_organelles_v2.md
@@ -13,5 +13,5 @@ See [the dataset overview](https://github.com/computational-cell-analytics/micro
## Validation

The easiest way to validate the model is to visually check the segmentation quality for your data.
If you have annotations you can use for validation, you can also run a quantitative validation; see [here for details](https://github.com/computational-cell-analytics/micro-sam/blob/master/doc/bioimageio/validation.md).
If you have annotations you can use for validation, you can also run a quantitative validation; see [here for details](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#9-how-can-i-evaluate-a-model-i-have-finetuned).
Please note that the required quality for segmentation always depends on the analysis task you want to solve.
2 changes: 1 addition & 1 deletion doc/bioimageio/lm_v2.md
@@ -13,5 +13,5 @@ See [the dataset overview](https://github.com/computational-cell-analytics/micro
## Validation

The easiest way to validate the model is to visually check the segmentation quality for your data.
If you have annotations you can use for validation, you can also run a quantitative validation; see [here for details](https://github.com/computational-cell-analytics/micro-sam/blob/master/doc/bioimageio/validation.md).
If you have annotations you can use for validation, you can also run a quantitative validation; see [here for details](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#9-how-can-i-evaluate-a-model-i-have-finetuned).
Please note that the required quality for segmentation always depends on the analysis task you want to solve.
21 changes: 12 additions & 9 deletions doc/faq.md
@@ -7,7 +7,7 @@ If you encounter a problem or question not addressed here feel free to [open an


### 1. How to install `micro_sam`?
The [installation](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#installation) of `micro_sam` is supported in three ways: [from mamba](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#from-mamba) (recommended), [from source](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#from-source) and [from installers](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#from-installer). Check out our [tutorial video](TODO) to get started with `micro_sam`; it briefly walks you through the installation process and how to start the tool.
The [installation](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#installation) of `micro_sam` is supported in three ways: [from mamba](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#from-mamba) (recommended), [from source](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#from-source) and [from installers](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#from-installer). Check out our [tutorial video](https://youtu.be/gcv0fa84mCc) to get started with `micro_sam`; it briefly walks you through the installation process and how to start the tool.


### 2. I cannot install `micro_sam` using the installer; I am getting some errors.
@@ -113,7 +113,7 @@ We want to remove these errors, so we would be very grateful if you can [open an


### 10. The objects are not segmented in my 3d data using the interactive annotation tool.
The first thing to check is: a) make sure you are using the latest version of `micro_sam` (pull the latest commit from master if your installation is from source, or update the installation from conda / mamba using `mamba update micro_sam`), and b) try out the steps from the [3d annotator tutorial video](TODO) to verify if this shows the same behaviour (or the same errors) as you faced. For 3d images, it's important to pass the inputs in the python axis convention, ZYX.
The first thing to check is: a) make sure you are using the latest version of `micro_sam` (pull the latest commit from master if your installation is from source, or update the installation from conda / mamba using `mamba update micro_sam`), and b) try out the steps from the [3d annotation tutorial video](https://youtu.be/nqpyNQSyu74) to check whether they reproduce the behaviour (or the errors) you are seeing. For 3d images, it's important to pass the inputs in the python axis convention, ZYX.
c) try using a different model and change the projection mode for 3d segmentation. This is also explained in the video.
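For example, a quick way to check and fix the axis order before opening a volume in the 3d annotator (the file name, and the assumption that the z-axis is stored last, are hypothetical):

```python
import numpy as np
import tifffile

volume = tifffile.imread("my_volume.tif")  # hypothetical file
print(volume.shape)  # the annotator expects the axes in (Z, Y, X) order

# If the volume is stored as (Y, X, Z) instead, move the z-axis to the front.
volume_zyx = np.moveaxis(volume, -1, 0)
```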


@@ -143,6 +143,9 @@ Yes, you can fine-tune Segment Anything on your own dataset. Here's how you can
Yes, you can fine-tune Segment Anything on your custom datasets on Kaggle (and [BAND](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#using-micro_sam-on-band)). Check out our [tutorial notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/micro-sam-finetuning.ipynb) for this.


<!---
TODO: we should improve this explanation and add a small image that visualizes the labels.
-->
### 3. What kind of annotations do I need to finetune Segment Anything?
Annotations refer to the instance segmentation labels, i.e. each object of interest in your microscopy images has an individual id that uniquely identifies it. You can obtain such labels with `micro_sam`'s annotation tools. For finetuning Segment Anything with the additional decoder, dense segmentations are expected (i.e. all objects per image are annotated); for finetuning Segment Anything without the additional decoder, sparse segmentations (i.e. only a few objects per image are annotated) are also fine.
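As a small illustration of this label format (a toy example, not real data): an instance segmentation is an integer image in which 0 marks the background and every object has its own id.

```python
import numpy as np

# Toy instance segmentation: 0 = background, 1 and 2 are two separate objects.
labels = np.array([
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 2, 2],
    [0, 0, 0, 0, 2, 2],
    [0, 0, 0, 0, 0, 0],
], dtype=np.uint16)

print(np.unique(labels))  # [0 1 2]
# Dense annotation: every object in the image has an id like this.
# Sparse annotation: only some objects are labeled, the rest stays 0.
```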

@@ -156,20 +159,20 @@ If you are using the python library or CLI you can specify this path with the `c
`micro_sam` introduces a new segmentation decoder to the Segment Anything backbone, which enables faster and more accurate automatic instance segmentation by predicting the [distances to the object center and boundary](https://github.com/constantinpape/torch-em/blob/main/torch_em/transform/label.py#L284) as well as the foreground, and performing [seeded watershed-based postprocessing](https://github.com/constantinpape/torch-em/blob/main/torch_em/util/segmentation.py#L122) to obtain the instances.
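To illustrate the idea behind this postprocessing, and not the actual `torch-em` / `micro_sam` implementation, here is a simplified, self-contained sketch of a seeded watershed on distance predictions; the function name and thresholds are made up for illustration:

```python
from scipy import ndimage
from skimage.segmentation import watershed

def toy_distance_watershed(foreground, center_distances, fg_threshold=0.5, seed_threshold=0.3):
    """Simplified instance postprocessing from decoder-style predictions.

    foreground:       per-pixel probability of belonging to any object
    center_distances: normalized distance to the nearest object center
                      (low at the center, high towards the object boundary)
    """
    fg_mask = foreground > fg_threshold
    # Seeds: connected regions close to object centers, restricted to the foreground.
    seeds, _ = ndimage.label((center_distances < seed_threshold) & fg_mask)
    # Grow the seeds with a watershed; the center distances act as the elevation map,
    # so touching objects are separated at the ridges between their centers.
    return watershed(center_distances, markers=seeds, mask=fg_mask)
```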


### 6: I want to finetune only the Segment Anything model without the additional instance decoder.
The additional instance segmentation decoder is a flexible wrap around the Segment Anything model, which enables the users to either fine-tune the Segment Anything model as it is, or to fine-tune the Segment Anything model with the additional instance segmentation decoder for improved automatic instance segmentation experience (see the [example](https://github.com/computational-cell-analytics/micro-sam/tree/master/examples/finetuning#example-for-model-finetuning) for finetuning with both the objectives).
### 6. I want to finetune only the Segment Anything model without the additional instance decoder.
The instance segmentation decoder is optional, so you can finetune either SAM alone or SAM together with the additional decoder. Finetuning with the decoder increases training times, but enables you to use AIS. See [this example](https://github.com/computational-cell-analytics/micro-sam/tree/master/examples/finetuning#example-for-model-finetuning) for finetuning with both objectives, and the sketch after the note below.

> NOTE: To try out the reverse (i.e. the automatic instance segmentation framework without the interactive capability, in other words a UNETR: a vision transformer encoder with a convolutional decoder), you can take inspiration from this [example on LIVECell](https://github.com/constantinpape/torch-em/blob/main/experiments/vision-transformer/unetr/for_vimunet_benchmarking/run_livecell.py).
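As a rough sketch of how finetuning with or without the extra decoder looks in code (assuming train and validation loaders already exist, see FAQ 8 below; the parameter names follow the linked finetuning example, so please check the `micro_sam.training` API for the exact signature):

```python
import micro_sam.training as sam_training

sam_training.train_sam(
    name="sam_without_decoder",       # checkpoint name, made up for this example
    model_type="vit_b",               # the SAM backbone to finetune
    train_loader=train_loader,        # assumed to be defined already (see FAQ 8)
    val_loader=val_loader,
    n_epochs=50,
    with_segmentation_decoder=False,  # set to True to also train the AIS decoder
)
```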

### 7. I have an NVIDIA RTX 4090Ti GPU with 24GB VRAM. Can I finetune Segment Anything?
Finetuning Segment Anything is possible on most consumer-grade GPU and CPU resources (though training is a lot slower on the CPU). With the mentioned resource, it should be possible to finetune a ViT Base (also abbreviated as `vit_b`) by reducing the number of objects per image to 15.
This parameter has the biggest impact on the VRAM consumption and quality of the finetuned model.
You can find an overview of the resources we have tested for finetuning [here](TODO).
You can find an overview of the resources we have tested for finetuning [here](#training-your-own-model).
We also provide the convenience function `micro_sam.training.train_sam_for_configuration` that selects the best training settings for a given configuration. This function is also used by the finetuning UI.
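A hedged sketch of how this convenience function can be called; the configuration name, the checkpoint name, and the exact parameter names are assumptions, so please check `micro_sam.training.train_sam_for_configuration` for the supported values:

```python
import micro_sam.training as sam_training

sam_training.train_sam_for_configuration(
    name="sam_finetuned_rtx",   # hypothetical checkpoint name
    configuration="rtx5000",    # pick the configuration closest to your hardware (assumed value)
    train_loader=train_loader,  # assumed to be defined already (see FAQ 8)
    val_loader=val_loader,
    with_segmentation_decoder=True,
)
```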

### 8. I want to create a dataloader for my data, for finetuning Segment Anything.
### 8. I want to create a dataloader for my data, to finetune Segment Anything.
Thanks to `torch-em` there are two convenient options: a) creating PyTorch datasets and dataloaders with the python library, which supports various data formats and data structures.
See the [tutorial notebook](https://github.com/constantinpape/torch-em/blob/main/notebooks/tutorial_create_dataloaders.ipynb) on how to create dataloaders using `torch-em` and the [documentation](https://github.com/constantinpape/torch-em/blob/main/doc/datasets_and_dataloaders.md) for details on creating your own datasets and dataloaders (a minimal loader sketch follows the note below); or b) using the finetuning UI of the `napari` tool, which eases this process by letting you enter the input parameters (paths to the directories with the inputs and labels, etc.) directly in the tool.
> NOTE: If you have images with large input shapes and only sparse instance annotations, we recommend using a [`sampler`](https://github.com/constantinpape/torch-em/blob/main/torch_em/data/sampler.py) to choose patches that contain valid segmentations for finetuning (see the [example](https://github.com/computational-cell-analytics/micro-sam/blob/master/finetuning/specialists/training/light_microscopy/plantseg_root_finetuning.py#L29) for the PlantSeg (Root) specialist model in `micro_sam`).
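Here is the minimal loader sketch referenced above, following the `torch-em` tutorial; the folder layout, glob patterns, and sampler arguments are assumptions for illustration:

```python
import torch_em
from torch_em.data import MinInstanceSampler

patch_shape = (512, 512)  # 2d patches; use (z, y, x) for volumetric data

train_loader = torch_em.default_segmentation_loader(
    raw_paths="data/images", raw_key="*.tif",      # hypothetical image folder + glob pattern
    label_paths="data/labels", label_key="*.tif",  # hypothetical label folder + glob pattern
    patch_shape=patch_shape, batch_size=2,
    # Only sample patches that contain annotated objects, which is useful for
    # large images with sparse annotations (see the note above).
    sampler=MinInstanceSampler(min_num_instances=2),
)
```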
@@ -179,8 +182,8 @@ See the [tutorial notebook](https://github.com/constantinpape/torch-em/blob/main
To validate a Segment Anything model for your data, you have different options, depending on the task you want to solve and whether you have segmentation annotations for your data.

- If you don't have any annotations you will have to validate the model visually. We suggest doing this with the `micro_sam` GUI tools. You can learn how to use them in the `micro_sam` documentation.
- If you have segmentation annotations you can use the `micro_sam` library to evaluate the segmentation quality of different SAM models. We provide functionality to evaluate the models for interactive and for automatic segmentation:
- You can use `run_inference_with_iterative_prompting` to evaluate models for interactive segmentation.
- You can use `run_instance_segmentation_grid_search_and_inference` to evaluate models for automatic segmentation.
- If you have segmentation annotations you can use the `micro_sam` python library to evaluate the segmentation quality of different models. We provide functionality to evaluate the models for interactive and for automatic segmentation:
- You can use `micro_sam.evaluation.evaluation.run_evaluation_for_iterative_prompting` to evaluate models for interactive segmentation.
- You can use `micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search_and_inference` to evaluate models for automatic segmentation.

We provide an [example notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/inference_and_evaluation.ipynb) that shows how to use this evaluation functionality.
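As a self-contained illustration of what such a quantitative validation measures, here is a generic matched-IoU score written with plain numpy (this is not the metric implementation used by `micro_sam`, just a sketch):

```python
import numpy as np

def mean_matched_iou(ground_truth, prediction):
    """Average IoU between each ground-truth object and its best-overlapping prediction."""
    gt_ids = np.unique(ground_truth)
    gt_ids = gt_ids[gt_ids != 0]  # 0 is background
    ious = []
    for gt_id in gt_ids:
        gt_mask = ground_truth == gt_id
        # Find the predicted object that overlaps this ground-truth object the most.
        pred_ids, counts = np.unique(prediction[gt_mask], return_counts=True)
        keep = pred_ids != 0
        pred_ids, counts = pred_ids[keep], counts[keep]
        if len(pred_ids) == 0:
            ious.append(0.0)  # the object was missed completely
            continue
        pred_mask = prediction == pred_ids[np.argmax(counts)]
        intersection = np.logical_and(gt_mask, pred_mask).sum()
        union = np.logical_or(gt_mask, pred_mask).sum()
        ious.append(intersection / union)
    return float(np.mean(ious)) if ious else 0.0
```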
10 changes: 5 additions & 5 deletions doc/finetuned_models.md
@@ -1,7 +1,7 @@
# Finetuned Models

In addition to the original Segment Anything models, we provide models that are finetuned on microscopy data.
The additional models are available in the [BioImage.IO Model Zoo](https://bioimage.io/#/) and are also hosted on Zenodo.
They are available in the [BioImage.IO Model Zoo](https://bioimage.io/#/) and are also hosted on Zenodo.

We currently offer the following models:

@@ -53,14 +53,14 @@ Previous versions of our models are available on Zenodo:

We do not recommend using these models since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.

We also provide additional models that were used for experiments in our publication on zenodo:
We provide additional models that were used for experiments in our publication on Zenodo:
- [LIVECell Specialist Models](https://doi.org/10.5281/zenodo.11115426)
- [TissueNet Specialist Models](https://doi.org/10.5281/zenodo.11115998)
- [NeurIPS CellSeg Specialist Models](https://doi.org/10.5281/zenodo.11116407)
- [DeepBacs Specialist Models](https://doi.org/10.5281/zenodo.11115827)
- [PlantSeg (Root) Specialist Models](https://doi.org/10.5281/zenodo.11116603)
- [CREMI Specialist Models](https://doi.org/10.5281/zenodo.11117314)
- [ASEM (ER) Specialist Models](https://doi.org/10.5281/zenodo.11117144)
- `vit_h_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Huge backbone. ([Zenodo](https://doi.org/10.5281/zenodo.11117559))
- `vit_h_em_organelles`: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Huge backbone. ([Zenodo](https://doi.org/10.5281/zenodo.11117495))
- [User Study Finetuned Models](https://doi.org/10.5281/zenodo.11117615)
- [The LM Generalist Model with ViT-H backbone (vit_h_lm)](https://doi.org/10.5281/zenodo.11117559)
- [The EM Generalist Model with ViT-H backbone (vit_h_em_organelles)](https://doi.org/10.5281/zenodo.11117495)
- [Finetuned Models for the user studies](https://doi.org/10.5281/zenodo.11117615)
4 changes: 2 additions & 2 deletions doc/installation.md
@@ -10,7 +10,7 @@ You can find more information on the installation and how to troubleshoot it in
## From mamba

[mamba](https://mamba.readthedocs.io/en/latest/) is a drop-in replacement for conda, but much faster.
While the steps below may also work with `conda`, we highly recommend using `mamba`.
The steps below may also work with `conda`, but we recommend using `mamba`, especially if the installation does not work with `conda`.
You can follow the instructions [here](https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html) to install `mamba`.

**IMPORTANT**: Make sure to avoid installing anything in the base environment.
@@ -25,7 +25,7 @@ $ mamba create -c conda-forge -n micro-sam micro_sam
```
If you want to use the GPU, you need to install PyTorch from the `pytorch` channel instead of `conda-forge`. For example:
```bash
$ mamba create -c pytorch -c nvidia -c conda-forge micro_sam pytorch pytorch-cuda=12.1
$ mamba create -c pytorch -c nvidia -c conda-forge -n micro-sam micro_sam pytorch pytorch-cuda=12.1
```
You may need to change this command to install the correct CUDA version for your system, see [https://pytorch.org/](https://pytorch.org/) for details.

4 changes: 2 additions & 2 deletions doc/python_library.md
@@ -38,7 +38,7 @@ The notebook explains how to train it together with the rest of SAM and how to t

More advanced examples, including quantitative and qualitative evaluation, can be found in [the finetuning directory](https://github.com/computational-cell-analytics/micro-sam/tree/master/finetuning), which contains the code for training and evaluating [our models](finetuned-models). You can find further information on model training in the [FAQ section](fine-tuning-questions).

Here is a list of resources (with the recommended settings), on which `micro_sam` has been tested for finetuning Segment Anything:
Here is a list of resources, together with their recommended training settings, for which we have tested model finetuning:

| Resource Name | Capacity | Model Type | Batch Size | Finetuned Parts | Number of Objects|
|-----------------------------|----------|------------|------------|------------------------------|------------------|
@@ -52,4 +52,4 @@ Here is a list of resources (with the recommended settings), on which `micro_sam
| GPU (NVIDIA A100) | 80GB | ViT Large | 2 | *all* | 30 |
| GPU (NVIDIA A100) | 80GB | ViT Huge | 2 | *all* | 25 |

> NOTE: The parameters can be altered based on your choice, make sure you are aware of the impact of the parameters as the number of objects per image, the batch size, and the type of model have a strong impact on the cost of your compute memory. See the [example finetuning script on HeLa](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/finetuning/finetune_hela.py) or the [finetuning notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/automatic_segmentation.ipynb) for a brief introduction to these parameters.
> NOTE: If you use the [finetuning UI](#finetuning-ui) or `micro_sam.training.training.train_sam_for_configuration` you can specify the hardware configuration and the best settings for it will be set automatically. If your hardware is not in the settings we have tested, choose the closest match. You can set the training parameters yourself when using `micro_sam.training.training.train_sam`. Be aware that the number of objects per image, the batch size, and the type of model have a strong impact on the VRAM needed for training and on the training duration. See the [finetuning notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb) for an overview of these parameters.
1 change: 0 additions & 1 deletion micro_sam/__init__.py
@@ -6,7 +6,6 @@
.. include:: ../doc/finetuned_models.md
.. include:: ../doc/faq.md
.. include:: ../doc/contributing.md
.. include:: ../doc/development.md
.. include:: ../doc/band.md
"""
import os
