1.5.0 (#148)
* chore: autopublish 2022-07-26T13:54:44Z

* Remove create-badges job

* Delete test.py

* Add multi-head masked attention

* Update multi-head gated attention to match parent layer

* Update documentation

* Test multi-head masked attention

* allow gated attention layers to use bias

* test bias in gated attention layers

* set return_attention_weights to False to avoid multi-outputs

Use MultiHeadSelfAttention and MultiHeadGatedSelfAttention if you want to return the attention weights (see the stock-Keras sketch below)
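
For context, the single- vs. multi-output distinction can be reproduced with Keras' stock attention layer; this is an illustration only, not dt's own layer or API:

import tensorflow as tf

# Stock Keras layer, used here only to show why returning attention
# weights turns a layer into a multi-output node in the model graph.
mha = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=24)
x = tf.random.normal((8, 100, 96))  # (batch, tokens, features)

out = mha(x, x)  # single output: (8, 100, 96)
out, weights = mha(x, x, return_attention_scores=True)  # two outputs
print(out.shape, weights.shape)  # (8, 100, 96) (8, 4, 100, 100)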

* reformat gnns/layers.py

This commit adds new message-passing graph layers (MPNs) and graph convolutional layers to DeepTrack2 (dt), including a vanilla MPN, GRUMPN, Masked-attention FGNN, and GraphTransformer (a generic message-passing sketch follows).
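
For readers new to message passing, a minimal generic sketch of one step; this is not dt's implementation, and mpn_step and its MLP arguments are illustrative names:

import tensorflow as tf

def mpn_step(nodes, edges, message_mlp, update_mlp):
    # nodes: (N, F) node features; edges: (E, 2) (sender, receiver) index pairs.
    senders = tf.gather(nodes, edges[:, 0])    # (E, F)
    receivers = tf.gather(nodes, edges[:, 1])  # (E, F)
    messages = message_mlp(tf.concat([senders, receivers], axis=-1))
    # Sum incoming messages at each receiving node.
    aggregated = tf.math.unsorted_segment_sum(
        messages, edges[:, 1], num_segments=tf.shape(nodes)[0]
    )
    return update_mlp(tf.concat([nodes, aggregated], axis=-1))

# Example wiring with plain Dense layers:
msg = tf.keras.layers.Dense(64, activation="relu")
upd = tf.keras.layers.Dense(32)
nodes = tf.random.normal((5, 16))
edges = tf.constant([[0, 1], [1, 2], [3, 4]], dtype=tf.int32)
new_nodes = mpn_step(nodes, edges, msg, upd)  # (5, 32)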

* Update layers.py

* Update test_layers.py

* Update models.py

* Update test_models.py

* Update test_models.py

* Fix indexing problems related to tf.gather

* Allow multi-inputs in ContinuousGenerator

* Fix bad conversion to integer

* version bump

* Fix phase correction at focus and offset calculation

* Fix phase correction in propagation

* Fix mie phase out of focus

* Fix mie phase out of focus

* Update README.md

* Bm/version 1.4.0 (#137)

* Update layers.py

* Update convolutional.py

Transformer-based models can now be reused and expanded quickly and easily

* Update documentation

* Update Transformer-based models

* Delete classifying_MNIST_vit_tutorial.ipynb

* Create classifying_MNIST_vit_tutorial.ipynb

* Update datasets.py

* Allows kwargs as inputs in single_layer_call

* Update embeddings.py

* masked transformers

* reformat transformer models

* Create trajectory_analysis_tutorial.ipynb

* Add Variational autoencoders

* Add variational autoencoders

* Update vae.py

* Create MNIST_VAE_tutorial.ipynb

* Update MNIST_VAE_tutorial.ipynb

* Create folder for course examples

* Update README.md

* Update README.md

* Update examples

* Update README.md

* Update README.md

* Update MNIST VAE examples

* Added MLP regression example

* Update README.md

* Create image_segmentation_Unet.ipynb

* Update README.md

* Documented and tested cell_counting_tutorial.ipynb

* improve dnn example

* Shift variant mie

* Position mie scatterer correctly

* implement set z

* implement mnist v1

* implement z dependence

* remove logging

* Implement flattening methods

* Implement pooling and resizing

* Implement TensorflowDataset

* Finalize MNIST

* Implement Malaria classification

* alpha0 release

* fix batchsize in fit

* implement dataset.take

* Implement datasets

* fix phase in mie

* Fix mie positioning and focusing

* Commit to new branch

* add tensorflow datasets dependency

* remove test

Co-authored-by: Jesús Pineda <[email protected]>
Co-authored-by: Jesús Pineda <[email protected]>
Co-authored-by: Benjamin Midtvedt <[email protected]>
Co-authored-by: Ccx55 <[email protected]>

* Add tensorflow datasets to the list of dependencies.

* Read requirements.txt into setup.py

* remove sphinx from build

* remove create badges

* Create CITATION.cff

* Create .zenodo.json

* Update transformer models

* Update pint_definition.py

* Update requirements.txt

* create TimeDistributed CNN

* small fixes to lodestar

Co-authored-by: BenjaminMidtvedt <[email protected]>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Jesús Pineda <[email protected]>
Co-authored-by: Benjamin Midtvedt <[email protected]>
Co-authored-by: Jesús Pineda <[email protected]>
Co-authored-by: Ccx55 <[email protected]>
7 people authored Nov 1, 2022
1 parent 1c9f3f0 commit 8685af1
Showing 16 changed files with 432 additions and 73 deletions.
51 changes: 51 additions & 0 deletions .zenodo.json
@@ -0,0 +1,51 @@
{
  "creators": [
    {
      "orcid": "0000-0001-9386-4753",
      "affiliation": "Gothenburg University",
      "name": "Midtvedt, Benjamin"
    },
    {
      "orcid": "0000-0002-9197-3451",
      "affiliation": "Gothenburg University",
      "name": "Pineda, Jesus"
    },
    {
      "orcid": "0000-0001-7275-6921",
      "affiliation": "Chalmers University of Technology",
      "name": "Klein Morberg, Henrik"
    },
    {
      "orcid": "0000-0002-8625-0996",
      "affiliation": "University of Vic",
      "name": "Manzo, Carlo"
    },
    {
      "orcid": "0000-0001-5057-1846",
      "affiliation": "Gothenburg University",
      "name": "Volpe, Giovanni"
    }
  ],

  "title": "DeepTrack2",

  "related_identifiers": [
    {
      "scheme": "doi",
      "identifier": "10.1063/5.0034891",
      "relation": "isDocumentedBy",
      "resource_type": "publication-article"
    }
  ],

  "description": "A Python software platform for microscopy enhanced by deep learning.",

  "keywords": ["Deep Learning", "Software", "Microscopy", "Particle Tracking", "Python"],

  "upload_type": "software",

  "communities": [
    {"identifier": "www.deeptrack.org"},
    {"identifier": "https://github.com/softmatterlab/DeepTrack2"}
  ]
}
30 changes: 30 additions & 0 deletions CITATION.cff
@@ -0,0 +1,30 @@
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: DeepTrack2
message: >-
  If you use this software, please cite it through
  this publication: Benjamin Midtvedt, Saga
  Helgadottir, Aykut Argun, Jesús Pineda, Daniel
  Midtvedt, Giovanni Volpe. "Quantitative Digital
  Microscopy with Deep Learning." Applied Physics
  Reviews 8 (2021), 011310.
  https://doi.org/10.1063/5.0034891
type: software
authors:
  - given-names: Benjamin
    family-names: Midtvedt
    orcid: 'https://orcid.org/0000-0001-9386-4753'
  - given-names: Jesus
    family-names: Pineda
    orcid: 'https://orcid.org/0000-0002-9197-3451'
  - given-names: Henrik
    family-names: Klein Morberg
    orcid: 'https://orcid.org/0000-0001-7275-6921'
  - given-names: Carlo
    family-names: Manzo
    orcid: 'https://orcid.org/0000-0002-8625-0996'
  - given-names: Giovanni
    family-names: Volpe
    orcid: 'https://orcid.org/0000-0001-5057-1846'
3 changes: 2 additions & 1 deletion deeptrack/backend/pint_definition.py
@@ -306,7 +306,8 @@
 reciprocal_centimeter = 1 / cm = cm_1 = kayser
 # Velocity
-[velocity] = [length] / [time] = [speed]
+[velocity] = [length] / [time]
+[speed] = [velocity]
 knot = nautical_mile / hour = kt = knot_international = international_knot
 mile_per_hour = mile / hour = mph = MPH
 kilometer_per_hour = kilometer / hour = kph = KPH
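The split defines [speed] as a derived dimension of [velocity] rather than an inline alias. The effect can be sketched with a stock pint registry (DeepTrack2 loads its own definition file, so take this as an approximation):

import pint

ureg = pint.UnitRegistry()  # stock registry, for illustration only
v = 3 * ureg.meter / ureg.second
print(v.check("[length] / [time]"))  # True
print(v.check("[velocity]"))         # True; [speed] resolves to the same dimensionality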
1 change: 1 addition & 0 deletions deeptrack/datasets/__init__.py
@@ -3,4 +3,5 @@
     segmentation_ssTEM_drosophila,
     regression_holography_nanoparticles,
     segmentation_fluorescence_u2os,
+    detection_holography_nanoparticles,
 )
3 changes: 3 additions & 0 deletions deeptrack/datasets/detection_holography_nanoparticles/__init__.py
@@ -0,0 +1,3 @@
"""detection_holography_nanoparticles dataset."""

from .detection_holography_nanoparticles import DetectionHolographyNanoparticles
3 changes: 3 additions & 0 deletions deeptrack/datasets/detection_holography_nanoparticles/checksums.tsv
@@ -0,0 +1,3 @@
# TODO(detection_holography_nanoparticles): If your dataset downloads files, then the checksums
# will be automatically added here when running
# `tfds build --register_checksums`.
77 changes: 77 additions & 0 deletions deeptrack/datasets/detection_holography_nanoparticles/detection_holography_nanoparticles.py
@@ -0,0 +1,77 @@
"""detection_holography_nanoparticles dataset."""

import tensorflow_datasets as tfds
import tensorflow as tf
import numpy as np

# TODO(detection_holography_nanoparticles): Markdown description that will appear on the catalog page.
_DESCRIPTION = """
"""

# TODO(detection_holography_nanoparticles): BibTeX citation
_CITATION = """
"""


class DetectionHolographyNanoparticles(tfds.core.GeneratorBasedBuilder):
    """DatasetBuilder for detection_holography_nanoparticles dataset."""

    VERSION = tfds.core.Version("1.0.2")
    RELEASE_NOTES = {
        "1.0.0": "Initial release.",
    }

    def _info(self) -> tfds.core.DatasetInfo:
        """Returns the dataset metadata."""
        # TODO(detection_holography_nanoparticles): Specifies the tfds.core.DatasetInfo object
        return tfds.core.DatasetInfo(
            builder=self,
            description=_DESCRIPTION,
            features=tfds.features.FeaturesDict(
                {
                    # These are the features of your dataset like images, labels ...
                    "image": tfds.features.Tensor(
                        shape=(972, 729, 2), dtype=tf.float64
                    ),
                    "label": tfds.features.Tensor(shape=(None, 7), dtype=tf.float64),
                }
            ),
            # If there's a common (input, target) tuple from the
            # features, specify them here. They'll be used if
            # `as_supervised=True` in `builder.as_dataset`.
            supervised_keys=("image", "label"),  # Set to `None` to disable
            homepage="https://dataset-homepage/",
            citation=_CITATION,
            disable_shuffling=True,
        )

    def _split_generators(self, dl_manager: tfds.download.DownloadManager):
        """Returns SplitGenerators."""
        # TODO(detection_holography_nanoparticles): Downloads the data and defines the splits
        path = dl_manager.download_and_extract(
            "https://drive.google.com/u/1/uc?id=1uAZVr9bldhZhxuXAXvdd1-Ks4m9HPRtM&export=download"
        )

        # TODO(detection_holography_nanoparticles): Returns the Dict[split names, Iterator[Key, Example]]
        return {
            "train": self._generate_examples(path),
        }

    def _generate_examples(self, path):
        """Yields examples."""
        # TODO(detection_holography_nanoparticles): Yields (key, example) tuples from the dataset

        fields = path.glob("f*.npy")
        labels = path.glob("d*.npy")

        # sort the files
        fields = sorted(fields, key=lambda x: int(x.stem[1:]))
        labels = sorted(labels, key=lambda x: int(x.stem[1:]))

        for field, label in zip(fields, labels):
            field_data = np.load(field)
            field_data = np.stack((field_data.real, field_data.imag), axis=-1)
            yield field.stem, {
                "image": field_data,
                "label": np.load(label),
            }
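
Once the builder has been built and registered (e.g. with the tfds CLI), loading should follow the standard TFDS pattern; a sketch under that assumption:

import tensorflow_datasets as tfds

ds = tfds.load(
    "detection_holography_nanoparticles",
    split="train",
    as_supervised=True,  # yields (image, label) pairs, per supervised_keys above
)
for image, label in ds.take(1):
    print(image.shape)  # (972, 729, 2): real and imaginary field channels
    print(label.shape)  # (n_particles, 7)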
24 changes: 24 additions & 0 deletions deeptrack/datasets/detection_holography_nanoparticles/detection_holography_nanoparticles_test.py
@@ -0,0 +1,24 @@
"""detection_holography_nanoparticles dataset."""

import tensorflow_datasets as tfds
from . import detection_holography_nanoparticles


class DetectionHolographyNanoparticlesTest(tfds.testing.DatasetBuilderTestCase):
    """Tests for detection_holography_nanoparticles dataset."""
    # TODO(detection_holography_nanoparticles):
    DATASET_CLASS = detection_holography_nanoparticles.DetectionHolographyNanoparticles
    SPLITS = {
        'train': 3,  # Number of fake train examples
        'test': 1,  # Number of fake test examples
    }

    # If you are calling `download/download_and_extract` with a dict, like:
    # dl_manager.download({'some_key': 'http://a.org/out.txt', ...})
    # then the tests need to provide the fake output paths relative to the
    # fake data directory
    # DL_EXTRACT_RESULT = {'some_key': 'output_file1.txt', ...}


if __name__ == '__main__':
    tfds.testing.test_main()
15 changes: 12 additions & 3 deletions deeptrack/extras/datasets.py
@@ -46,7 +46,12 @@
"CellData": ("1CJW7msDiI7xq7oMce4l9tRkNN6O5eKtj", "CellData", ""),
"CellMigData": ("1vRsWcxjbTz6rffCkrwOfs_ezPvUjPwGw", "CellMigData", ""),
"BFC2Cells": ("1lHgJdG5I3vRnU_DRFwTr_c69nx1Xkd3X", "BFC2Cells", ""),
"STrajCh": ("1wXCSzvHuLwz1dywxUu2aQXlqbgf2V8r3", "STrajCh", "")
"STrajCh": ("1wXCSzvHuLwz1dywxUu2aQXlqbgf2V8r3", "STrajCh", ""),
"TrajectoryDiffusion": (
"1YhECLQrWPZgc_TVY2Sl2OwDcNxmA_jR5",
"TrajectoryDiffusion",
"",
),
}


@@ -109,7 +114,9 @@ def load(key):

     # If the extracted folder is another folder with the same name, move it.
     if os.path.isdir(f"datasets/{folder_name}/{folder_name}"):
-        os.rename(f"datasets/{folder_name}/{folder_name}", f"datasets/{folder_name}")
+        os.rename(
+            f"datasets/{folder_name}/{folder_name}", f"datasets/{folder_name}"
+        )


def load_model(key):
@@ -171,7 +178,9 @@ def load_model(key):

     # If the extracted folder is another folder with the same name, move it.
     if os.path.isdir(f"models/{folder_name}/{folder_name}"):
-        os.rename(f"models/{folder_name}/{folder_name}", f"models/{folder_name}")
+        os.rename(
+            f"models/{folder_name}/{folder_name}", f"models/{folder_name}"
+        )

return f"models/{folder_name}"

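With the new registry entry, fetching the dataset should follow the module's existing load pattern; a sketch (the download target path assumes the current working directory):

from deeptrack.extras import datasets

# Downloads and extracts the archive into ./datasets/TrajectoryDiffusion.
datasets.load("TrajectoryDiffusion")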