
Blobdetect lc #15

Open · wants to merge 2 commits into main
Changes from 1 commit
3 changes: 2 additions & 1 deletion .gitignore
@@ -2,4 +2,5 @@
*.snakemake/
*__pycache__/
spimquant/results/
spimquant/workflow/scripts/experiment/
spimquant/workflow/scripts/experiment/

6 changes: 4 additions & 2 deletions docs/getting_started/installation.md
@@ -1,9 +1,11 @@
## SPIMquant

SPIMquant is a BIDS App for processing SPIM (lightsheet) microscopy datasets, performing registration to a template, and quantifying microscopic features from the SPIM data.
SPIMquant is a parallel processing library designed to process SPIM scans
and quantify the number of cells within them.

Hardware requirements: If run locally, make sure you have sufficient memory
(at least quite a bit more than 16G of memory in total), as the `greedy` diffeormorphic registration we rely on can consume a significant amount of memory during the template registration process.
(well over 16G of memory in total), as the `greedy` command
line tool we rely on consumes a large amount of memory during the template registration process.

Software requirements: A Linux machine with Singularity or Apptainer installed is
recommended. Otherwise, on a Windows machine, you will want to have the following libraries
20 changes: 10 additions & 10 deletions poetry.lock


2 changes: 1 addition & 1 deletion pyproject.toml
@@ -32,7 +32,7 @@ pybids = "^0.16.5"
sparse = "^0.15.1"
bokeh = "^3.4.1"
zarrnii = "0.1.3a1"
cvpl_tools = "^0.6.3"
cvpl_tools = "^0.6.11"

[tool.poetry.scripts]
spimquant = "spimquant.run:app.run"
3 changes: 0 additions & 3 deletions spimquant/run.py
@@ -2,9 +2,6 @@
from pathlib import Path

from snakebids import bidsapp, plugins
import os
import shutil


app = bidsapp.app(
[
18 changes: 18 additions & 0 deletions spimquant/workflow/rules/blobdetect.smk
@@ -303,3 +303,21 @@ rule map_volume_tsv_dseg_to_template_nii:
),
script:
"../scripts/map_tsv_dseg_to_nii.py"

# ---------------------------- Part 2: Negatively Masked Counting -------------------------------

rule negatively_masked_counting:
"""
Work in progress
"""
input:
zarr='/path_to_bids_root/bids/sub-onuska21/micr/sub-onuska21_sample_brain_acq-prestitched_SPIM.ome.zarr?slices=[1]',
neg_mask='/path_to_bids_root/resources/onuska21_patched.tiff',
Member:

I know still a work-in-progress, but we should use a different format for the neg_mask -- either ome.zarr or nifti (and use zarrnii to get in the same space)

Contributor Author:

Thanks for the pointer! I was wondering which format would work better in this case

Contributor Author:

Is there a rescaling component on the original OME ZARR image? Or only neg_mask is scaled but the OME ZARR is kept as scale=diag(1, 1, 1, 1)

Member:

> Is there a rescaling component on the original OME ZARR image? Or only neg_mask is scaled but the OME ZARR is kept as scale=diag(1, 1, 1, 1)

You're asking about how to go from voxel to physical coordinates? The vox2ras matrix from zarrnii does that for you. Perhaps can go over on a call or when I'm back on Friday.

(A sketch of this voxel-to-physical mapping follows the diff below.)

params:
neg_mask_scale=(1, 1, 1),
tmp_path=f'{config["output_dir"]}/tmp'
output:
found_lc=directory(f'{config["output_dir"]}/found_lc')
threads: 32
script:
"../scripts/contour_counting/negatively_masked_counting.py"
91 changes: 91 additions & 0 deletions spimquant/workflow/scripts/contour_counting/negatively_masked_counting.py
@@ -0,0 +1,91 @@
"""
This file finds, globally in a 3d brightness image, all the contours' size and location.

Inputs to this file:
1. Path to an ome-zarr 3d single channel brightness image
2. Path to a tiffile 3d binary mask that mask off regions of the image in 1) with false positives
3. A scaling factor tuple, when multiplied from left, transforms pixel location of 2) to 1)
4. An integer (starting from 0) specifies the index of input channel to use

Output of this file is a NDBlock list of centroids saved in local file, which can be loaded
and read as a global list of all contours' size and location.
"""


if __name__ == '__main__':
import cvpl_tools.im.process.qsetup as qsetup
# IMPORT YOUR LIBRARIES HERE
import cvpl_tools.im.seg_process as seg_process
import cvpl_tools.im.process.bs_to_os as sp_bs_to_os
import cvpl_tools.im.process.os_to_cc as sp_os_to_cc
import cvpl_tools.im.process.any_to_any as sp_any_to_any
import cvpl_tools.ome_zarr.io as ome_io
import cvpl_tools.im.ndblock as ndblock
import dask.array as da
import numcodecs
import tifffile
import shutil

class Pipeline(seg_process.SegProcess):
def __init__(self):
super().__init__()
self.in_to_bs = seg_process.SimpleThreshold(.45)
self.bs_to_os = sp_bs_to_os.DirectBSToOS(is_global=True)
self.os_to_cc = sp_os_to_cc.CountOSBySize(
size_threshold=200.,
volume_weight=5.15e-3,
border_params=(3., -.5, 2.3),
min_size=8,
reduce=False,
is_global=True
)

def forward(self, im, cptr, viewer_args: dict = None):
cdir = cptr.subdir()
bs = self.in_to_bs.forward(im, cptr=cdir.cache(cid='in_to_bs'), viewer_args=viewer_args)
os = self.bs_to_os.forward(bs, cptr=cdir.cache(cid='bs_to_os'), viewer_args=viewer_args)
cc = self.os_to_cc.forward(os, cptr=cdir.cache(cid='os_to_cc'), viewer_args=viewer_args)
return cc

TMP_PATH = snakemake.params.tmp_path
with qsetup.PLComponents(TMP_PATH, 'CacheDirectoryNegMaskedCounting',
client_args=dict(threads_per_worker=12, n_workers=1),
viewer_args=dict(use_viewer=False)) as plc:
# DO DASK COMPUTATION (results are not displayed, since use_viewer=False above)
src_im = ome_io.load_dask_array_from_path(snakemake.input.zarr, mode='r', level=0)
pipeline = Pipeline()
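# Normalize intensities for thresholding: the raw brightness is divided by 1000
# (an assumed dataset-specific scale) and clipped to [0, 1] so that the
# SimpleThreshold(.45) step above operates on a normalized image.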
src_im = da.clip(src_im / 1000, 0., 1.)
assert src_im.ndim == 3
print(f'Computing centroid sizes and locations. Masking the image, imshape={src_im.shape}.')
src_im = src_im.rechunk(chunks=(128, 256, 256))
storage_options = dict(
dimension_separator='/',
preferred_chunksize=None, # don't re-chunk when saving and loading
multiscale=0,
compressor=numcodecs.Blosc(cname='lz4', clevel=9, shuffle=numcodecs.Blosc.BITSHUFFLE)
)
viewer_args = dict(
viewer=None,
display_points=False,
display_checkerboard=False,
client=plc.dask_client,
storage_options=storage_options
)

up_sampler = sp_any_to_any.UpsamplingByIntFactor(factor=snakemake.params.neg_mask_scale, order=0)
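# order=0 here avoids interpolation, so the upsampled negative mask stays binary while
# being enlarged by the integer factor neg_mask_scale to match the image's voxel grid.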

def compute_masking():
neg_mask = da.from_array(tifffile.imread(snakemake.input.neg_mask), chunks=(64, 64, 64))
neg_mask = up_sampler.forward(neg_mask, cptr=plc.cache_root.cache('neg_mask_upsampling'),
viewer_args=viewer_args | dict(is_label=True))
return src_im * (1 - neg_mask)
# Keep the result so the pipeline below counts on the masked image rather than the raw one
src_im = plc.cache_root.cache_im(compute_masking, cid='masked_src_im', viewer_args=viewer_args)
cc = pipeline.forward(src_im, plc.cache_root.cache(cid='global_label'), viewer_args=viewer_args)
object_scores = cc.reduce(force_numpy=True)
print(f'Total object score (not n_object nor volume) estimated: {object_scores.sum().item()}')

tgt_folder = snakemake.output.found_lc
shutil.move(f'{plc.cache_root.abs_path}/dir_cache_global_label/dir_cache_os_to_cc/dir_cache_os_to_lc/file_cache_lc_ndblock',
f'{tgt_folder}')
lc_snapshot = ndblock.NDBlock.load(tgt_folder).reduce(force_numpy=True)[:20]
print('Snapshot of list of centroids\n', lc_snapshot)
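As the docstring above notes, the saved found_lc directory can be loaded back by downstream code. The following is a minimal sketch reusing only calls already shown in the script (NDBlock.load and reduce); '/path/to/output_dir/found_lc' is a placeholder for the rule's found_lc output directory, and depending on how the blocks were saved a Dask client may need to be running, as in the script above.

import cvpl_tools.im.ndblock as ndblock

# Load the list-of-centroids NDBlock written by the negatively_masked_counting rule
# and flatten it into a single numpy array (one row per detected contour).
found_lc_path = '/path/to/output_dir/found_lc'
lc = ndblock.NDBlock.load(found_lc_path).reduce(force_numpy=True)
print(lc.shape)
print(lc[:20])  # same kind of snapshot the script prints at the end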