Fix merge conflicts, add image series & segmentation datasets to napari plugin
GenevieveBuckley committed Aug 31, 2023
2 parents dbbd4ca + 4e4df74 commit 171dbf8
Showing 84 changed files with 5,480 additions and 301 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -3,3 +3,4 @@ __pycache__/
*.pth
*.tif
examples/data/*
*.out
26 changes: 16 additions & 10 deletions README.md
@@ -16,31 +16,26 @@ We implement napari applications for:
<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/dfca3d9b-dba5-440b-b0f9-72a0683ac410" width="256">
<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/aefbf99f-e73a-4125-bb49-2e6592367a64" width="256">

**Beta version**

This is an advanced beta version. While many features are still under development, we aim to keep the user interface and python library stable.
Any feedback is welcome, but please be aware that the functionality is under active development and that some features may not be thoroughly tested yet.
We will soon provide a stand-alone application for running the `micro_sam` annotation tools, and plan to also release it as a [napari plugin](https://napari.org/stable/plugins/index.html) in the future.

If you run into any problems or have questions, please open an issue on GitHub or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.
If you run into any problems or have questions regarding our tool, please open an issue on GitHub or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.


## Installation and Usage

You can install `micro_sam` via conda:
```
conda install -c conda-forge micro_sam
conda install -c conda-forge micro_sam napari pyqt
```
You can then start the `micro_sam` tools by running `$ micro_sam.annotator` in the command line.

For an introduction to how to use the napari-based annotation tools check out [the video tutorials](https://www.youtube.com/watch?v=ket7bDUP9tI&list=PLwYZXQJ3f36GQPpKCrSbHjGiH39X4XjSO&pp=gAQBiAQB).
Please check out [the documentation](https://computational-cell-analytics.github.io/micro-sam/) for more details on the installation and usage of `micro_sam`.
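
The annotation tools can also be started from python. A minimal sketch (the image path and embedding path are placeholders):
```
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

# Load any 2d image and start the annotator; the embeddings are
# cached to the given path so they are only computed once.
image = imageio.imread("my_image.tif")
annotator_2d(image, embedding_path="embeddings.zarr")
```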


## Citation

If you are using this repository in your research, please cite
- [SegmentAnything](https://arxiv.org/abs/2304.02643)
- and our repository on [zenodo](https://doi.org/10.5281/zenodo.7919746) (we are working on a publication)
- Our [preprint](https://doi.org/10.1101/2023.08.21.554208)
- and the original [Segment Anything publication](https://arxiv.org/abs/2304.02643)


## Related Projects
@@ -56,6 +51,17 @@ Compared to these we support more applications (2d, 3d and tracking), and provid

## Release Overview

**New in version 0.2.1 and 0.2.2**

- Several bugfixes for the newly introduced functionality in 0.2.0.

**New in version 0.2.0**

- Functionality for training / finetuning and evaluation of Segment Anything Models
- Full support for our finetuned segment anything models
- Improvements of the automated instance segmentation functionality in the 2d annotator
- And several other small improvements

**New in version 0.1.1**

- Fine-tuned segment anything models for microscopy (experimental)
8 changes: 3 additions & 5 deletions deployment/construct.yaml
Original file line number Diff line number Diff line change
@@ -8,8 +8,6 @@ header_image: ../doc/images/micro-sam-logo.png
icon_image: ../doc/images/micro-sam-logo.png
channels:
- conda-forge
welcome_text: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua.
conclusion_text: Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris
nisi ut aliquip ex ea commodo consequat.
initialize_by_default: false
welcome_text: Install Segment Anything for Microscopy.
conclusion_text: Segment Anything for Microscopy has been installed.
initialize_by_default: false
25 changes: 19 additions & 6 deletions doc/annotation_tools.md
Original file line number Diff line number Diff line change
@@ -13,9 +13,22 @@ The annotation tools can be started from the `micro_sam` GUI, the command line o
$ micro_sam.annotator
```

They are built with [napari](https://napari.org/stable/) to implement the viewer and user interaction.
They are built using [napari](https://napari.org/stable/) and [magicgui](https://pyapp-kit.github.io/magicgui/) to provide the viewer and user interface.
If you are not familiar with napari yet, [start here](https://napari.org/stable/tutorials/fundamentals/quick_start.html).
The `micro_sam` applications are mainly based on [the point layer](https://napari.org/stable/howtos/layers/points.html), [the shape layer](https://napari.org/stable/howtos/layers/shapes.html) and [the label layer](https://napari.org/stable/howtos/layers/labels.html).
The `micro_sam` tools use [the point layer](https://napari.org/stable/howtos/layers/points.html), [shape layer](https://napari.org/stable/howtos/layers/shapes.html) and [label layer](https://napari.org/stable/howtos/layers/labels.html).

The annotation tools are explained in detail below. In addition to the documentation here we also provide [video tutorials](https://www.youtube.com/watch?v=ket7bDUP9tI&list=PLwYZXQJ3f36GQPpKCrSbHjGiH39X4XjSO).
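
If you want to get a feeling for these layer types outside of `micro_sam`, here is a small self-contained napari sketch (synthetic data, for illustration only):
```
import napari
import numpy as np

viewer = napari.Viewer()
viewer.add_image(np.random.rand(256, 256), name="image")
# Point layer: one point prompt.
viewer.add_points(np.array([[128.0, 128.0]]), name="point prompts")
# Shape layer: one rectangular box prompt, given by its four corners.
box = np.array([[64, 64], [64, 192], [192, 192], [192, 64]])
viewer.add_shapes([box], shape_type="rectangle", name="box prompts")
# Label layer: an (initially empty) segmentation.
viewer.add_labels(np.zeros((256, 256), dtype=np.uint32), name="segmentation")
napari.run()
```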


## Starting via GUI

The annotation tools can be started from a central GUI, which is launched with the command `$ micro_sam.annotator` or via the executable [from an installer](#from-installer).

In the GUI you can select which of the four annotation tools you want to use:
<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/micro-sam-gui.png">

After selecting one of them, a new window will open where you can set the input file path and other optional parameters. Then click the top button to start the tool. **Note: if you do not start the annotation tool with a path to pre-computed embeddings, it can take several minutes for napari to open after pressing the button, because the embeddings are being computed.**


## Annotator 2D

@@ -44,7 +57,7 @@ It contains the following elements:

Note that point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time. With box prompts you can segment several objects at once.

Check out [this video](https://youtu.be/DfWE_XRcqN8) for an example of how to use the interactive 2d annotator.
Check out [this video](https://youtu.be/ket7bDUP9tI) for a tutorial on the 2d annotation tool.

We also provide the `image series annotator`, which can be used to run the 2d annotator for several images in a folder. You can start it by clicking `Image series annotator` in the GUI, by running `micro_sam.image_series_annotator` in the command line, or from a [python script](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/image_series_annotator.py).
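
A sketch of the scripted usage, assuming `image_series_annotator` takes the image list and an output folder as in the linked example script (the paths are placeholders):
```
from glob import glob

import imageio.v3 as imageio
from micro_sam.sam_annotator import image_series_annotator

# Annotate the images one after the other; the resulting
# segmentations are saved in the output folder.
images = [imageio.imread(path) for path in sorted(glob("data/*.tif"))]
image_series_annotator(images, output_folder="annotations")
```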

@@ -69,7 +82,7 @@ Most elements are the same as in [the 2d annotator](#annotator-2d):

Note that you can only segment one object at a time with the 3d annotator.

Check out [this video](https://youtu.be/5Jo_CtIefTM) for an overview of the interactive 3d segmentation functionality.
Check out [this video](https://youtu.be/PEy9-rTCdS4) for a tutorial on the 3d annotation tool.

## Annotator Tracking

@@ -93,7 +106,7 @@ Most elements are the same as in [the 2d annotator](#annotator-2d):

Note that the tracking annotator only supports 2d image data; volumetric data is not supported.

Check out [this video](https://youtu.be/PBPW0rDOn9w) for an overview of the interactive tracking functionality.
Check out [this video](https://youtu.be/Xi5pRWMO6_w) for a tutorial on how to use the tracking annotation tool.

## Tips & Tricks

@@ -105,7 +118,7 @@ You can activate tiling by passing the parameters `tile_shape`, which determines
- If you're using the command line functions you can pass them via the options `--tile_shape 1024 1024 --halo 128 128`
- Note that prediction with tiling only works when the embeddings are cached to file, so you must specify an `embedding_path` (`-e` in the CLI).
- You should choose the `halo` such that it is larger than half of the maximal radius of the objects you're segmenting (see the sketch after this list).
- The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
- The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_state` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
- Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_b` or to `vit_l` (default is `vit_h`). However, this may lead to worse results.
- You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File -> Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.
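
Here is a sketch of how pre-computing tiled embeddings and starting the annotator could look in python, assuming the `get_sam_model` and `precompute_image_embeddings` helpers in `micro_sam.util` and that the annotator accepts the same tiling parameters (paths and tile sizes are placeholders):
```
import imageio.v3 as imageio
from micro_sam import util
from micro_sam.sam_annotator import annotator_2d

image = imageio.imread("large_image.tif")

# Pre-compute the embeddings tile by tile and cache them to file;
# tiled prediction only works with embeddings cached to file.
predictor = util.get_sam_model(model_type="vit_b")
util.precompute_image_embeddings(
    predictor, image, save_path="embeddings.zarr",
    tile_shape=(1024, 1024), halo=(128, 128),
)

# The annotator loads the cached embeddings from the same path.
annotator_2d(
    image, embedding_path="embeddings.zarr",
    tile_shape=(1024, 1024), halo=(128, 128), model_type="vit_b",
)
```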
36 changes: 36 additions & 0 deletions doc/finetuned_models.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,36 @@
# Finetuned models

We provide models that were finetuned on microscopy data using `micro_sam.training`. They are hosted on Zenodo. We currently offer the following models:
- `vit_h`: Default Segment Anything model with vit-h backbone.
- `vit_l`: Default Segment Anything model with vit-l backbone.
- `vit_b`: Default Segment Anything model with vit-b backbone.
- `vit_h_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-h backbone.
- `vit_b_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone.
- `vit_h_em`: Finetuned Segment Anything model for neurites and cells in electron microscopy data with vit-h backbone.
- `vit_b_em`: Finetuned Segment Anything model for neurites and cells in electron microscopy data with vit-b backbone.

The two figures below show the improvements from the finetuned models for LM and EM data.

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/lm_comparison.png" width="768">

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/em_comparison.png" width="768">

You can select which of the models is used in the annotation tools by selecting the corresponding name from the `Model Type` menu:

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/model-type-selector.png" width="256">

To use a specific model in the python library you need to pass the corresponding name as the value of the `model_type` parameter exposed by all relevant functions.
See for example the [2d annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/annotator_2d.py#L62) where `use_finetuned_model` can be set to `True` to use the `vit_h_lm` model.
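
For instance, a minimal sketch of opening the 2d annotator with the finetuned light microscopy model (the image and embedding paths are placeholders):
```
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

image = imageio.imread("cells.tif")
# Select the finetuned LM model by passing its name as `model_type`.
annotator_2d(image, embedding_path="embeddings-lm.zarr", model_type="vit_h_lm")
```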

## Which model should I choose?

As a rule of thumb:
- Use the `_lm` models for segmenting cells or nuclei in light microscopy.
- Use the `_em` models for segmenting cells or neurites in electron microscopy.
- Note that these models do not work well for segmenting mitochondria or other organelles, because they are biased towards segmenting the full cell / cellular compartment.
- For other cases use the default models.

See also the figures above for examples where the finetuned models work better than the vanilla models.
Currently the model `vit_h` is used by default.

We are working on releasing more fine-tuned models, in particular for mitochondria and other organelles in EM.
Binary file added doc/images/em_comparison.png
Binary file added doc/images/lm_comparison.png
Binary file added doc/images/micro-sam-gui.png
Binary file added doc/images/model-type-selector.png
Binary file removed doc/images/vanilla-v-finetuned.png
73 changes: 70 additions & 3 deletions doc/installation.md
Original file line number Diff line number Diff line change
@@ -1,16 +1,38 @@
# Installation

`micro_sam` requires the following dependencies:
We provide three different ways of installing `micro_sam`:
- [From conda](#from-conda) is the recommended way if you want to use all functionality.
- [From source](#from-source) for setting up a development environment to change and potentially contribute to our software.
- [From installer](#from-installer) to install without having to use conda. This mode of installation is still experimental! It only provides the annotation tools and does not enable model finetuning.

Our software requires the following dependencies:
- [PyTorch](https://pytorch.org/get-started/locally/)
- [SegmentAnything](https://github.com/facebookresearch/segment-anything#installation)
- [napari](https://napari.org/stable/)
- [elf](https://github.com/constantinpape/elf)
- [napari](https://napari.org/stable/) (for the interactive annotation tools)
- [torch_em](https://github.com/constantinpape/torch-em) (for the training functionality)

## From conda

It is available as a conda package and can be installed via
`micro_sam` is available as a conda package and can be installed via
```
$ conda install -c conda-forge micro_sam
```

This command does not install the additional dependencies required for the annotation tools or for training / finetuning.
To use the annotation functionality you also need to install `napari`:
```
$ conda install -c conda-forge napari pyqt
```
And to use the training functionality, install `torch_em`:
```
$ conda install -c conda-forge torch_em
```

In case the installation via conda takes too long, consider using [mamba](https://mamba.readthedocs.io/en/latest/).
Once you have it installed you can simply replace the `conda` commands with `mamba`.
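
After the installation, a quick sanity check is to verify that the core dependencies import correctly:
```
import torch
import micro_sam

# Print the pytorch version and whether a GPU is visible.
print("torch:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
print("micro_sam installed at:", micro_sam.__file__)
```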


## From source

To install `micro_sam` from source, we recommend first setting up a conda environment with the necessary requirements:
@@ -54,3 +76,48 @@ $ pip install -e .
- Install `micro_sam` by running `pip install -e .` in this folder.
- **Note:** we have seen many issues with the pytorch installation on Mac. If a wrong pytorch version is installed for you (which will cause pytorch errors once you run the application), please try again with a clean `mambaforge` installation. Please install the `OS X, arm64` version from [here](https://github.com/conda-forge/miniforge#mambaforge).
- Some Macs require a specific installation order of packages. If the steps laid out above don't work for you, please check out the procedure described [in this GitHub issue](https://github.com/computational-cell-analytics/micro-sam/issues/77).


## From installer

We also provide installers for Linux and Windows:
- [Linux](https://owncloud.gwdg.de/index.php/s/Cw9RmA3BlyqKJeU)
- [Windows](https://owncloud.gwdg.de/index.php/s/1iD1eIcMZvEyE6d)
<!---
- [Mac](https://owncloud.gwdg.de/index.php/s/7YupGgACw9SHy2P)
-->

**The installers are still experimental and not fully tested.** Mac is not supported yet, but we are working on providing an installer for it as well.

If you encounter problems with them, please consider installing `micro_sam` via [conda](#from-conda) instead.

**Linux Installer:**

To use the installer:
- Unpack the zip file you have downloaded.
- Make the installer executable: `$ chmod +x micro_sam-0.2.0post1-Linux-x86_64.sh`
- Run the installer: `$ ./micro_sam-0.2.0post1-Linux-x86_64.sh`
- You can select where to install `micro_sam` during the installation. By default it will be installed in `$HOME/micro_sam`.
- The installer will unpack all `micro_sam` files to the installation directory.
- After the installation you can start the annotator with the command `.../micro_sam/bin/micro_sam.annotator`.
- To make it easier to run the annotation tool you can add `.../micro_sam/bin` to your `PATH` or set a softlink to `.../micro_sam/bin/micro_sam.annotator`.

<!---
**Mac Installer:**
To use the Mac installer you will need to enable installing unsigned applications. Please follow [the instructions for 'Disabling Gatekeeper for one application only' here](https://disable-gatekeeper.github.io/).
Alternative link on how to disable gatekeeper.
https://www.makeuseof.com/how-to-disable-gatekeeper-mac/
TODO detailed instruction
-->

**Windows Installer:**

- Unpack the zip file you have downloaded.
- Run the installer by double clicking on it.
- Choose installation type: `Just Me (recommended)` or `All Users (requires admin privileges)`.
- Choose the installation path. By default it will be installed in `C:\Users\<Username>\micro_sam` for a `Just Me` installation or in `C:\ProgramData\micro_sam` for an `All Users` installation.
- The installer will unpack all micro_sam files to the installation directory.
- After the installation you can start the annotator by double clicking on `.\micro_sam\Scripts\micro_sam.annotator.exe` or with the command `.\micro_sam\Scripts\micro_sam.annotator.exe` from the Command Prompt.
