Blobdetect lc #15

Open · wants to merge 2 commits into base: main

Changes from all commits
3 changes: 2 additions & 1 deletion .gitignore
@@ -2,4 +2,5 @@
*.snakemake/
*__pycache__/
spimquant/results/
-spimquant/workflow/scripts/experiment/
+spimquant/workflow/scripts/experiment/
+
13 changes: 4 additions & 9 deletions docs/getting_started/installation.md
@@ -1,9 +1,11 @@
## SPIMquant

-SPIMquant is a BIDS App for processing SPIM (lightsheet) microscopy datasets, performing registration to a template, and quantifying microscopic features from the SPIM data.
+SPIMquant is a parallel processing library designed to process SPIM scans and
+quantify the number of cells within them.

Hardware requirements: If run locally, make sure you have sufficient memory
-(at least quite a bit more than 16G of memory in total), as the `greedy` diffeomorphic registration we rely on can consume a significant amount of memory during the template registration process.
+(well over 16G of memory in total), as the `greedy` command line tool we
+rely on consumes memory heavily during the template registration process.

Software requirements: A linux machine with Singularity or Apptainer installed is
recommended. Otherwise, on a Windows machine, you will want the following libraries
@@ -28,13 +30,6 @@ pip install -e git+https://github.com/khanlab/spimquant#egg=spimquant

Note: you can re-run this command to re-install with the latest version

-Before running the app, you need to specify a config file to use. "SPIMquant/examples/snakebids_template.yml"
-provides a starting point for specifying a config. If you are using the example dataset provided in the
-above section, then a config file is also included in the zip file.
-
-To specify the config, copy the config file into SPIMquant/spimquant/config/snakebids.yml, and change the
-properties in the config file to ensure paths to the directory are properly set.
-
## Running the app

Do a dry-run first (`-n`) and simply print (`-p`) what would be run:
37 changes: 36 additions & 1 deletion docs/usage/cli.md
@@ -1,8 +1,43 @@
-## Core command-line interface
+## The `spimquant` Interface

The spimquant tool command-line interface is a composition of the core
(app-based) arguments and options, and the options related to Snakemake.

As an end user, the command-line interface (CLI) is the recommended way to specify
input/output data paths and configuration settings. Arguments passed this way
overwrite some of the default options defined in the config file
`spimquant/config/snakebids.yml`, adapting the workflow to different datasets.

Under the hood, SPIMquant uses `snakemake` to manage the workflow and its config
file. `spimquant` is a program that writes to that config file, and accepts
all Snakemake arguments.

The steps to run a typical job look like the following:
```bash
spimquant bids_dir output_dir {participant} [snakemake_args]
# now that spimquant has run, the contents of config/snakebids.yml have changed
# and we can run snakemake with the updated configuration
snakemake result_file_path
```

> **Review comment (Member):** not sure why you need this -- the spimquant cli runs snakemake already --

All three arguments to `spimquant` above are required, not optional.
The first command above does the following:
1. Logs the configuration used to run snakemake in the location output_dir,
   including the snakebids.yml file and logs.
2. Passes bids_dir over to overwrite the bids_dir defined in the default
   snakebids.yml.
3. Starts snakemake using the snakemake_args provided.

A more concrete example:
```bash
spimquant test_bids_dir test_out participant
snakemake results/result_image.nii -n
```

> **@akhanf (Member), Oct 8, 2024:** You can specify targets using the spimquant cli too.. e.g. `spimquant test_bids_dir test_out participant -n results/result_image.nii`
>
> So there isn't a need to instruct the user to run snakemake directly..

Here we use `-n` so that snakemake only prints a plan without generating any
files. In an actual run, you can use `--cores all` to use all available cores,
and `--use-apptainer` to download and run the prebuilt SPIMquant container
image, which has all dependencies installed, so you don't have to set up the
environment yourself.
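
Putting those flags together with the reviewer's suggestion above, a full
invocation might look like the following (a sketch; the paths reuse the example
above and the target file is illustrative):

```bash
# dry-run: print the execution plan only, generate no files
spimquant test_bids_dir test_out participant -n results/result_image.nii

# actual run: all cores, inside the prebuilt container image
spimquant test_bids_dir test_out participant --cores all --use-apptainer
```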

The core BIDS App arguments and app-specific options are listed below.


20 changes: 10 additions & 10 deletions poetry.lock

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -32,7 +32,7 @@ pybids = "^0.16.5"
sparse = "^0.15.1"
bokeh = "^3.4.1"
zarrnii = "0.1.3a1"
-cvpl_tools = "^0.6.3"
+cvpl_tools = "^0.6.11"

[tool.poetry.scripts]
spimquant = "spimquant.run:app.run"
3 changes: 0 additions & 3 deletions spimquant/run.py
@@ -2,9 +2,6 @@
from pathlib import Path

from snakebids import bidsapp, plugins
-import os
-import shutil
-

app = bidsapp.app(
[
17 changes: 17 additions & 0 deletions spimquant/workflow/rules/blobdetect.smk
@@ -303,3 +303,20 @@ rule map_volume_tsv_dseg_to_template_nii:
        ),
    script:
        "../scripts/map_tsv_dseg_to_nii.py"


# ---------------------------- Part 2: Negatively Masked Counting -------------------------------

rule negatively_masked_counting:
    """
    Work in progress
    """
    # NOTE: the absolute Windows paths below are work-in-progress placeholders
    input:
        neg_mask='C:/ProgrammingTools/ComputerVision/RobartsResearch/data/lightsheet/spimquant_ubuntu/resources/onuska21_patched.tiff',
    params:
        zarr='C:/ProgrammingTools/ComputerVision/RobartsResearch/data/lightsheet/spimquant_ubuntu/bids/sub-onuska21/micr/sub-onuska21_sample_brain_acq-prestitched_SPIM.ome.zarr?slices=[1]',
        tmp_path=f'{config["output_dir"]}/tmp'
    output:
        found_lc=directory(f'{config["output_dir"]}/found_lc')
    threads: 32
    script:
        "../scripts/contour_counting/negatively_masked_counting.py"
95 changes: 95 additions & 0 deletions spimquant/workflow/scripts/contour_counting/negatively_masked_counting.py
@@ -0,0 +1,95 @@
"""
This file finds, globally in a 3d brightness image, all the contours' size and location.

Inputs to this file:
1. Path to an ome-zarr 3d single channel brightness image
2. Path to a tiffile 3d binary mask that mask off regions of the image in 1) with false positives
3. A scaling factor tuple, when multiplied from left, transforms pixel location of 2) to 1)
4. An integer (starting from 0) specifies the index of input channel to use

Output of this file is a NDBlock list of centroids saved in local file, which can be loaded
and read as a global list of all contours' size and location.
"""


if __name__ == '__main__':
    import cvpl_tools.im.process.qsetup as qsetup
    # IMPORT YOUR LIBRARIES HERE
    import cvpl_tools.im.seg_process as seg_process
    import cvpl_tools.im.process.bs_to_os as sp_bs_to_os
    import cvpl_tools.im.process.os_to_cc as sp_os_to_cc
    import cvpl_tools.im.process.any_to_any as sp_any_to_any
    import cvpl_tools.im.algs.dask_resize as im_resize
    import cvpl_tools.ome_zarr.io as ome_io
    import cvpl_tools.im.ndblock as ndblock
    import dask.array as da
    import numcodecs
    import tifffile
    import shutil

    class Pipeline(seg_process.SegProcess):
        def __init__(self):
            super().__init__()
            # threshold -> binary segmentation -> ordinal segmentation -> cell count estimate
            self.in_to_bs = seg_process.SimpleThreshold(.45)
            self.bs_to_os = sp_bs_to_os.DirectBSToOS(is_global=True)
            self.os_to_cc = sp_os_to_cc.CountOSBySize(
                size_threshold=200.,
                volume_weight=5.15e-3,
                border_params=(3., -.5, 2.3),
                min_size=8,
                reduce=False,
                is_global=True
            )

        def forward(self, im, cptr, viewer_args: dict = None):
            cdir = cptr.subdir()
            bs = self.in_to_bs.forward(im, cptr=cdir.cache(cid='in_to_bs'), viewer_args=viewer_args)
            os = self.bs_to_os.forward(bs, cptr=cdir.cache(cid='bs_to_os'), viewer_args=viewer_args)
            cc = self.os_to_cc.forward(os, cptr=cdir.cache(cid='os_to_cc'), viewer_args=viewer_args)
            return cc

    TMP_PATH = snakemake.params.tmp_path
    with qsetup.PLComponents(TMP_PATH, 'CacheDirectoryNegMaskedCounting',
                             client_args=dict(threads_per_worker=12, n_workers=1),
                             viewer_args=dict(use_viewer=False)) as plc:
        # DO DASK COMPUTATION, AND SHOW RESULTS IN plc.viewer
        src_im = ome_io.load_dask_array_from_path(snakemake.params.zarr, mode='r', level=0)
        pipeline = Pipeline()
        src_im = da.clip(src_im / 1000, 0., 1.)
        assert src_im.ndim == 3
        print(f'Saving results in {plc.cache_root.abs_path}')
        print(f'Computing centroids size and location. Masking the image, imshape={src_im.shape}.')
        src_im = src_im.rechunk(chunks=(128, 256, 256))
        storage_options = dict(
            dimension_separator='/',
            preferred_chunksize=None,  # don't re-chunk when saving and loading
            multiscale=0,
            compressor=numcodecs.Blosc(cname='lz4', clevel=9, shuffle=numcodecs.Blosc.BITSHUFFLE)
        )
        viewer_args = dict(
            viewer=None,
            display_points=False,
            display_checkerboard=False,
            client=plc.dask_client,
            storage_options=storage_options
        )

        def compute_masking():
            # upsample the negative mask to the source image's shape, then zero out masked regions
            neg_mask = da.from_array(tifffile.imread(snakemake.input.neg_mask), chunks=(64, 64, 64))
            neg_mask = im_resize.upsample_pad_crop_fit(
                src_arr=neg_mask,
                tgt_arr=src_im,
                cptr=plc.cache_root.cache('neg_mask_upsampling'),
                viewer_args=viewer_args | dict(is_label=True),
            )
            return src_im * (1 - neg_mask)
        # assign the cached masked image back, so the pipeline runs on the masked volume
        src_im = plc.cache_root.cache_im(compute_masking, cid='masked_src_im', viewer_args=viewer_args)
        cc = pipeline.forward(src_im, plc.cache_root.cache(cid='global_label'), viewer_args=viewer_args)
        object_scores = cc.reduce(force_numpy=True)
        print(f'Total object score (not n_object nor volume) estimated: {object_scores.sum().item()}')

        tgt_folder = snakemake.output.found_lc
        shutil.move(f'{plc.cache_root.abs_path}/dir_cache_global_label/dir_cache_os_to_cc/dir_cache_os_to_lc/file_cache_lc_ndblock',
                    f'{tgt_folder}')
        lc_snapshot = ndblock.NDBlock.load(tgt_folder).reduce(force_numpy=True)[:20]
        print('Snapshot of list of centroids\n', lc_snapshot)
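
For reference, a minimal sketch of how a downstream step might read the saved
centroid list back, mirroring the script's own final lines (the path is an
assumed example; use whatever `found_lc` directory the rule produced):

```python
# hypothetical downstream loader (sketch): reads the NDBlock list of centroids
# written by the rule above; 'test_out/found_lc' is an illustrative path
import cvpl_tools.im.ndblock as ndblock

lc = ndblock.NDBlock.load('test_out/found_lc').reduce(force_numpy=True)
print(f'{lc.shape[0]} centroid rows; first few:\n{lc[:5]}')
```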