TensorFlow Compatibility Notice

DeepTrack2 version 2.0 and later does not support TensorFlow. If you need TensorFlow support, please install the legacy version 1.7.

A comprehensive deep learning framework for digital microscopy.


We provide tools to create physical simulations of optical systems, to generate and train neural network models, and to analyze experimental data.

Installation

DeepTrack 2.1 requires Python 3.6 or later.

To install DeepTrack 2.1, open a terminal or command prompt and run:

pip install deeptrack

If you have a very recent version of Python, you may need to install NumPy before DeepTrack; this is a known issue with scikit-image.
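Since DeepTrack 2.1 requires Python 3.6 or later, a quick check before installing can save a failed build. This snippet is only an illustration, not part of DeepTrack:

```python
import sys

# DeepTrack 2.1 requires Python 3.6 or later.
required = (3, 6)
if sys.version_info < required:
    raise RuntimeError(
        "Python %d.%d detected; DeepTrack 2.1 requires Python %d.%d or later."
        % (sys.version_info[0], sys.version_info[1], required[0], required[1])
    )
print("Python version OK for DeepTrack 2.1")
```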

Updating to 2.1 from 2.0

If you are already using DeepTrack 2.0 (PyPI version 0.x.x), updating to DeepTrack 2.1 (PyPI version 1.x.x) is painless. If you have followed the deprecation warnings, no change to your code is needed. There are two breaking changes:

  • The deprecated operator + for chaining features has been removed. Use the >> operator instead.
  • The deprecated operator ** for duplicating a feature has been removed. Use the ^ operator instead.
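The semantics of the two surviving operators can be illustrated with a minimal toy sketch. Note that this is not DeepTrack's actual Feature class, just a self-contained re-implementation of the idea that a >> b chains features (b receives a's output) and f ^ n produces n copies of a feature:

```python
# Toy sketch of DeepTrack 2.1's feature-composition operators.
# This is NOT DeepTrack's implementation; it only illustrates the
# semantics of `>>` (chaining) and `^` (duplication).

class Feature:
    def __init__(self, fn, name=None):
        self.fn = fn
        self.name = name or fn.__name__

    def __call__(self, x):
        return self.fn(x)

    def __rshift__(self, other):
        # a >> b : chain features, feeding a's output into b
        return Feature(lambda x: other(self(x)),
                       name="%s >> %s" % (self.name, other.name))

    def __xor__(self, n):
        # f ^ n : n independent copies of the feature
        return [Feature(self.fn, self.name) for _ in range(n)]


add_noise = Feature(lambda x: x + 1, "add_noise")
normalize = Feature(lambda x: x / 2, "normalize")

pipeline = add_noise >> normalize   # replaces the removed `+` operator
print(pipeline(3))                  # (3 + 1) / 2 = 2.0

copies = add_noise ^ 3              # replaces the removed `**` operator
print([f(0) for f in copies])       # [1, 1, 1]
```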

If you notice any other changes in behavior, please report them to us in the issues tab.

Examples of applications using DeepTrack

DeepTrack is a general-purpose deep learning framework for microscopy, meaning you can use it for any task you like. Here, we show some common applications!


Single particle tracking


Training a CNN-based single particle tracker using simulated data
Unsupervised training of a single particle tracker using LodeSTAR


Multi-particle tracking


Training LodeSTAR to detect multiple cells from a single image
Training a UNet-based multi-particle tracker using simulated data


Particle tracing


Training MAGIK to trace migrating cells

Basics to learn DeepTrack 2.1

Everybody learns in different ways! Depending on your preferences, and what you want to do with DeepTrack, you may want to check out one or more of these resources.

Getting-started guides

We have a set of four notebooks which aim to teach you everything you need to know to use DeepTrack to its fullest, with a focus on applications.

  1. deeptrack_introduction_tutorial Gives an overview of how to use DeepTrack 2.1.
  2. tracking_particle_cnn_tutorial Demonstrates how to track a point particle with a convolutional neural network (CNN).
  3. tracking_multiple_particles_unet_tutorial Demonstrates how to track multiple particles using a U-net.
  4. distinguishing_particles_in_brightfield_tutorial Demonstrates how to use a U-net to track and distinguish particles of different sizes in brightfield microscopy.
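The tutorials above share one core idea: simulate images with known ground truth and train a network on them. As a rough, DeepTrack-free illustration of that kind of simulated sample, here is a NumPy-only sketch that renders a point particle as a Gaussian spot with its centre as the training label (the image size, spot width, and noise level are arbitrary choices, not DeepTrack defaults):

```python
import numpy as np

def simulate_particle(size=64, sigma=2.0, rng=None):
    """Render one point particle as a Gaussian spot.

    Returns (image, position), where `position` is the ground-truth
    (row, col) label a particle tracker would be trained to predict.
    """
    rng = rng or np.random.default_rng()
    # Place the particle away from the edges.
    position = rng.uniform(size * 0.2, size * 0.8, size=2)
    rows, cols = np.mgrid[0:size, 0:size]
    image = np.exp(-((rows - position[0]) ** 2 + (cols - position[1]) ** 2)
                   / (2 * sigma ** 2))
    image += rng.normal(0, 0.05, image.shape)  # camera-like noise
    return image.astype(np.float32), position

image, label = simulate_particle()
print(image.shape, label)
```

DeepTrack wraps this kind of physical simulation in composable features with a proper optics model; the sketch only shows the image-plus-label pattern the tutorials build on.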

DeepTrack 2.1 in action

Additionally, we have six case studies which are less documented but give additional insight into how to use DeepTrack with real datasets:

  1. Single Particle Tracking Tracks experimental videos of a single particle. (Requires opencv-python compiled with ffmpeg.)
  2. Multi-Particle tracking Detects quantum dots in a low-SNR image.
  3. Particle Feature Extraction Extracts the radius and refractive index of particles.
  4. Cell Counting Counts the number of cells in fluorescence images.
  5. 3D Multi-Particle tracking
  6. GAN image generation Uses a GAN to create cell images from masks.

Model-specific examples

We also have examples specific to certain models. These include:

  • LodeSTAR for label-free particle tracking.
  • MAGIK for graph-based particle linking and trace characterization.

Documentation

The detailed documentation of DeepTrack 2.1 is available at the following link: https://deeptrackai.github.io/DeepTrack2

Video Tutorials

Videos are currently being updated to match the current version of DeepTrack.

Cite us!

If you use DeepTrack 2.1 in your project, please cite us here:

Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe.
"Quantitative Digital Microscopy with Deep Learning."
Applied Physics Reviews 8 (2021), 011310.
https://doi.org/10.1063/5.0034891

See also:

https://www.nature.com/articles/s41467-022-35004-y:

Midtvedt, B., Pineda, J., Skärberg, F. et al. 
"Single-shot self-supervised object detection in microscopy." 
Nat Commun 13, 7492 (2022).

https://arxiv.org/abs/2202.06355:

Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, and Carlo Manzo.
"Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion."
arXiv 2202.06355 (2022).

https://doi.org/10.1364/OPTICA.6.000506:

Saga Helgadottir, Aykut Argun, and Giovanni Volpe.
"Digital video microscopy enhanced by deep learning."
Optica 6.4 (2019): 506-513.

https://github.com/softmatterlab/DeepTrack.git:

Saga Helgadottir, Aykut Argun, and Giovanni Volpe.
"DeepTrack." (2019)

Funding

This work was supported by the ERC Starting Grant ComplexSwimmers (Grant No. 677511), the ERC Starting Grant MAPEI (101001267), and the Knut and Alice Wallenberg Foundation.