Blobdetect lc #15
Open: Karl5766 wants to merge 2 commits into main from blobdetect-lc
spimquant/workflow/scripts/contour_counting/negatively_masked_counting.py (91 additions, 0 deletions)
"""
This file finds, globally in a 3d brightness image, the size and location of every contour.

Inputs to this file:
1. Path to an OME-Zarr 3d single-channel brightness image
2. Path to a tifffile 3d binary mask that masks off regions of the image in 1) containing false positives
3. A scaling factor tuple that, when multiplied from the left, transforms pixel locations of 2) to 1)
4. An integer (starting from 0) specifying the index of the input channel to use

The output of this file is an NDBlock list of centroids saved to a local file, which can be
loaded and read as a global list of every contour's size and location.
"""


if __name__ == '__main__':
    import cvpl_tools.im.process.qsetup as qsetup
    # IMPORT YOUR LIBRARIES HERE
    import cvpl_tools.im.seg_process as seg_process
    import cvpl_tools.im.process.bs_to_os as sp_bs_to_os
    import cvpl_tools.im.process.os_to_cc as sp_os_to_cc
    import cvpl_tools.im.process.any_to_any as sp_any_to_any
    import cvpl_tools.ome_zarr.io as ome_io
    import cvpl_tools.im.ndblock as ndblock
    import dask.array as da
    import numcodecs
    import tifffile
    import shutil

    class Pipeline(seg_process.SegProcess):
        def __init__(self):
            super().__init__()
            self.in_to_bs = seg_process.SimpleThreshold(.45)
            self.bs_to_os = sp_bs_to_os.DirectBSToOS(is_global=True)
            self.os_to_cc = sp_os_to_cc.CountOSBySize(
                size_threshold=200.,
                volume_weight=5.15e-3,
                border_params=(3., -.5, 2.3),
                min_size=8,
                reduce=False,
                is_global=True
            )

        def forward(self, im, cptr, viewer_args: dict = None):
            cdir = cptr.subdir()
            bs = self.in_to_bs.forward(im, cptr=cdir.cache(cid='in_to_bs'), viewer_args=viewer_args)
            os = self.bs_to_os.forward(bs, cptr=cdir.cache(cid='bs_to_os'), viewer_args=viewer_args)
            cc = self.os_to_cc.forward(os, cptr=cdir.cache(cid='os_to_cc'), viewer_args=viewer_args)
            return cc

    TMP_PATH = snakemake.params.tmp_path
    with qsetup.PLComponents(TMP_PATH, 'CacheDirectoryNegMaskedCounting',
                             client_args=dict(threads_per_worker=12, n_workers=1),
                             viewer_args=dict(use_viewer=False)) as plc:
        # DO DASK COMPUTATION, AND SHOW RESULTS IN plc.viewer
        src_im = ome_io.load_dask_array_from_path(snakemake.input.zarr, mode='r', level=0)
        pipeline = Pipeline()
        src_im = da.clip(src_im / 1000, 0., 1.)
        assert src_im.ndim == 3
        print(f'Computing centroid sizes and locations. Masking the image, imshape={src_im.shape}.')
        src_im = src_im.rechunk(chunks=(128, 256, 256))
        storage_options = dict(
            dimension_separator='/',
            preferred_chunksize=None,  # don't re-chunk when saving and loading
            multiscale=0,
            compressor=numcodecs.Blosc(cname='lz4', clevel=9, shuffle=numcodecs.Blosc.BITSHUFFLE)
        )
        viewer_args = dict(
            viewer=None,
            display_points=False,
            display_checkerboard=False,
            client=plc.dask_client,
            storage_options=storage_options
        )

        up_sampler = sp_any_to_any.UpsamplingByIntFactor(factor=snakemake.params.neg_mask_scale, order=0)

        def compute_masking():
            neg_mask = da.from_array(tifffile.imread(snakemake.input.neg_mask), chunks=(64, 64, 64))
            neg_mask = up_sampler.forward(neg_mask, cptr=plc.cache_root.cache('neg_mask_upsampling'),
                                          viewer_args=viewer_args | dict(is_label=True))
            return src_im * (1 - neg_mask)

        # assign the cached result so the pipeline below runs on the masked image
        src_im = plc.cache_root.cache_im(compute_masking, cid='masked_src_im', viewer_args=viewer_args)
        cc = pipeline.forward(src_im, plc.cache_root.cache(cid='global_label'), viewer_args=viewer_args)
        object_scores = cc.reduce(force_numpy=True)
        print(f'Total object score (not object count nor volume) estimated: {object_scores.sum().item()}')

        tgt_folder = snakemake.output.found_lc
        shutil.move(f'{plc.cache_root.abs_path}/dir_cache_global_label/dir_cache_os_to_cc/dir_cache_os_to_lc/file_cache_lc_ndblock',
                    f'{tgt_folder}')
        lc_snapshot = ndblock.NDBlock.load(tgt_folder).reduce(force_numpy=True)[:20]
        print('Snapshot of list of centroids\n', lc_snapshot)
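The masking step in the script multiplies the brightness image by the complement of a nearest-neighbour-upsampled binary mask. A minimal NumPy sketch of that idea (the array shapes, values, and the integer factor are illustrative, not taken from the actual data; `UpsamplingByIntFactor` with `order=0` is approximated here by axis-wise `repeat`):

```python
import numpy as np

# illustrative data: a 2x2x2 brightness block and a 1x1x1 negative mask
im = np.arange(8, dtype=np.float64).reshape(2, 2, 2) / 8.0
neg_mask = np.array([[[1]]], dtype=np.uint8)  # 1 marks a false-positive region

# nearest-neighbour upsampling by an integer factor (order=0):
# repeat each voxel of the mask along every axis
factor = 2
up = neg_mask.repeat(factor, axis=0).repeat(factor, axis=1).repeat(factor, axis=2)

# zero out the masked-off voxels, keep the rest unchanged
masked = im * (1 - up)
print(masked.sum())  # every voxel is masked here, so the sum is 0.0
```

With a real mask only part of the volume is zeroed, which is why the pipeline's thresholding and counting downstream see no objects inside the negatively masked regions.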
I know it's still a work-in-progress, but we should use a different format for the neg_mask: either ome.zarr or nifti (and use zarrnii to get it in the same space)
Thanks for the pointer! I was wondering which format would work better in this case
Is there a rescaling component on the original OME ZARR image? Or is only neg_mask scaled, while the OME ZARR is kept at scale=diag(1, 1, 1, 1)?
You're asking about how to go from voxel to physical coordinates? The vox2ras matrix from zarrnii does that for you. Perhaps we can go over it on a call, or when I'm back on Friday.
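For reference, a vox2ras matrix maps voxel indices to physical (RAS) coordinates by a single multiply in homogeneous coordinates. A sketch with plain NumPy and a made-up affine (the spacing and offset values are hypothetical, and zarrnii's actual accessor for the matrix is not shown here):

```python
import numpy as np

# hypothetical vox2ras affine: 2 mm isotropic spacing plus a translation offset
vox2ras = np.array([
    [2.0, 0.0, 0.0, -10.0],
    [0.0, 2.0, 0.0, -20.0],
    [0.0, 0.0, 2.0, -30.0],
    [0.0, 0.0, 0.0,   1.0],
])

# voxel indices (i, j, k) of a centroid, in homogeneous coordinates
vox = np.array([5.0, 10.0, 15.0, 1.0])
ras = vox2ras @ vox
print(ras[:3])  # physical coordinates in mm: [0. 0. 0.]
```

Applying the same affine (or its inverse) to both the image and the mask is what puts them in a common space, which is the point of the format suggestion above.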