Update nightly #136
Open
anthonyprinaldi wants to merge 274 commits into nightly from dev
Conversation
anthonyprinaldi (Member) commented on Jul 3, 2024
- Merge all recent changes from dev (ACE related) into nightly release
- This is in preparation for testing the new Python version in the nightly branch
Sometimes a literal "None" string is passed through the CLI; without a check, the script would try to load the path None. We now verify that the label path is neither empty nor "None".
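A minimal sketch of that guard (the function name and signature are hypothetical, not the actual code):

```python
from pathlib import Path
from typing import Optional

def resolve_label_path(raw: Optional[str]) -> Optional[Path]:
    """Return a usable Path, or None when no label path was supplied.

    Some CLI wrappers forward the literal string "None" when the flag
    is omitted, so guard against that as well as empty input.
    """
    if raw is None or raw.strip() in ("", "None"):
        return None
    return Path(raw)
```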
The ACE heatmap used to be for whole brain only. This commit adds logic to load half-brain atlas labels if a half brain is passed to the ACE workflow. It also adds a heatmap flag that masks out voxels that are outside the brain, as determined by the atlas.
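A sketch of the masking step, assuming the atlas annotation uses label 0 for background (the function name is hypothetical):

```python
import numpy as np

def mask_heatmap(heatmap: np.ndarray, atlas_labels: np.ndarray) -> np.ndarray:
    """Zero out heatmap voxels that fall outside the brain.

    Assumes atlas label 0 marks background, the usual convention
    for Allen-style annotation volumes.
    """
    masked = heatmap.copy()
    masked[atlas_labels == 0] = 0
    return masked
```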
By default, the model output classifies a voxel as a neuron using a 0.5 threshold. We now add an argument so this threshold can be set to any value.
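Roughly what the new argument could look like with argparse (the flag name here is a placeholder, not the actual CLI flag):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--threshold",            # hypothetical flag name
    type=float,
    default=0.5,              # matches the previous hard-coded behaviour
    help="probability above which a voxel is classified as a neuron",
)
args = parser.parse_args()
# downstream: neuron_mask = probabilities >= args.threshold
```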
Previously, output storage arrays were created based on an assumed patch size of 512^3. We now take the patch size from the input so that it works with any patch size.
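A sketch of the idea, with a hypothetical helper name:

```python
import numpy as np

def allocate_output(patch: np.ndarray) -> np.ndarray:
    # Derive output storage from the actual input shape instead of
    # assuming a fixed 512**3 patch.
    return np.zeros(patch.shape, dtype=np.float32)
```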
The function plots the p-values from the ACE clusterwise analysis and overlays them onto a brain template from the atlas.
The ACE flow will now plot p-values overlaid on an atlas after clusterwise analysis is complete.
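A rough illustration of such an overlay for a single slice, using matplotlib (all names and the significance cutoff are assumptions, not the ACE implementation):

```python
import matplotlib.pyplot as plt
import numpy as np

def overlay_pvals(template_slice: np.ndarray,
                  pval_slice: np.ndarray,
                  alpha: float = 0.05) -> None:
    # Show -log10(p) only where p is significant, over the grayscale template.
    # NaN voxels render transparent, so the template shows through.
    sig = np.where(pval_slice < alpha,
                   -np.log10(np.maximum(pval_slice, 1e-10)),
                   np.nan)
    plt.imshow(template_slice, cmap="gray")
    plt.imshow(sig, cmap="hot", alpha=0.7)
    plt.colorbar(label="-log10(p)")
    plt.axis("off")
    plt.show()
```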
Clusterwise analysis was previously whole brain only. Additional arguments are passed to allow a half-brain atlas to be used.
The generate patches function calculates the percentage of each patch that overlaps a brain region, based on Otsu thresholding. A JSON mapping each patch file to its brain-region percentage is saved during patch generation.
The JSON file containing the per-patch brain-region percentage previously used full file paths as keys; it now uses only the filenames.
A new argument lets the user specify what percentage of a patch must overlap the brain for the patch to be processed by the deep learning model.
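A sketch tying the last three items together, assuming patches are stored as `.npy` arrays (all names are hypothetical):

```python
import json
from pathlib import Path

import numpy as np
from skimage.filters import threshold_otsu

def brain_fraction(patch: np.ndarray) -> float:
    """Fraction of voxels above the Otsu threshold (a proxy for brain tissue)."""
    return float((patch > threshold_otsu(patch)).mean())

def save_patch_fractions(patch_files, out_json: Path) -> None:
    # Keys are file *names*, not full paths, so the JSON stays portable.
    fractions = {Path(p).name: brain_fraction(np.load(p)) for p in patch_files}
    out_json.write_text(json.dumps(fractions, indent=2))

def keep_patch(fractions: dict, fname: str, min_fraction: float) -> bool:
    # Only patches above the user-supplied percentage go to the model.
    return fractions.get(fname, 0.0) >= min_fraction
```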
Other locations in the codebase have changed so we need to update this file to reflect these changes.
Previously, the atlas directory was set from an arbitrary, hard-coded string. The parser now takes the correct default value (based on an environment variable) and allows custom directory paths to be set.
Better to work with pathlib objects than just strings.
Turn global variables into arguments for functions
Adds ability to number blobs in binary segmentation files. This allows for things such as counting unique blobs, filtering based on area, or computing stats.
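A sketch of blob numbering using scipy.ndimage (helper names are hypothetical):

```python
import numpy as np
from scipy import ndimage

def number_blobs(binary_seg: np.ndarray):
    """Assign a unique integer label to each connected blob.

    Returns the labeled volume and the number of blobs found.
    """
    labeled, n_blobs = ndimage.label(binary_seg)
    return labeled, n_blobs

def blob_areas(labeled: np.ndarray) -> np.ndarray:
    # areas[i] is the voxel count of blob i (index 0 is background),
    # enabling area-based filtering and summary stats.
    return np.bincount(labeled.ravel())
```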
Remove some unused parser arguments. Update stats cluster-wise TFCE permutation group of flags.
ACE correlation is not currently available. A print statement was added so users know it is being skipped; it will become available once we update the Python version.
- Parser had unneeded args
- Added a verbose help option since the parser was getting too long
- Parser args need to be optional (to be usable with `miracl flow ace`), so a hard check was added that the args are passed when running `miracl stats ace`
- Updated docs to reflect all changes to the parser
… helpful user msg The script would fail because of the GPU check. The `nvidia-smi` query would return multiple lines that were parsed incorrectly, causing the script to exit because no GPUs were detected. I changed the query by counting the lines of the output with `| wc -l`. However, to prevent this bug from recurring with different `nvidia-smi` versions or future changes, the function no longer exits the installation script when no GPUs are detected. Instead, the GPU entry is omitted from the `docker-compose.yml` and the installation proceeds as normal. At the end, the user is informed that the GPU checks failed. If they think this is a mistake, and they are confident that one or more NVIDIA GPUs with CUDA support are present, they are shown the entry they will have to add to the `docker-compose.yml` and asked to do so manually.
Third round of Andrew's feedback on ACE. Fixed the installation script errors caused by the `nvidia-smi` cmd check. The install script was updated to be more robust to the command's output.
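The install script itself is Bash, but the line-counting idea translates roughly like this (a Python sketch, not the actual script):

```python
import shutil
import subprocess

def count_nvidia_gpus() -> int:
    """Count GPUs by counting the lines of `nvidia-smi -L` output.

    Returns 0 instead of raising, mirroring the install script's new
    behaviour of warning the user rather than aborting the install.
    """
    if shutil.which("nvidia-smi") is None:
        return 0
    try:
        out = subprocess.run(
            ["nvidia-smi", "-L"],
            capture_output=True, text=True, check=True,
        ).stdout
    except subprocess.CalledProcessError:
        return 0
    return sum(1 for line in out.splitlines() if line.strip())
```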
Prep for 2.4.1 release
2.4.1 release
This is the release that contains the official ACE models! [feat] minor formatting improvement
Added information that the models will now be included in MIRACL automatically during installation as they are now publicly available.
This might seem redundant since the models will already have been downloaded through the download directive in the Dockerfile. However, because we mount the 'miracl' folder of the local repo by default, the 'models' folder will be overwritten by the local repo folder. If the models are not also downloaded to the local repo folder, they will not be present when it is mounted. If the flag that disables mounting is set during installation, the models will have been downloaded through the Dockerfile download directive and will be available in the container. It might be a good idea to forward the variables used in the install.sh script for the model URLs to the Dockerfile, to make sure that changing one also changes the other. I will open an internal issue for that.
…lation steps output Docker build cache is disabled by default to guarantee clean builds when rebuilding from the same local repo; setting the flag enables the cache. The user can also choose to enable verbose output for the installation steps. This is useful for debugging, especially when used with the '-l' flag, which enables writing a log file.
Instead of not adding the miracl scripts folder to the mounted volumes when the corresponding flag is set, the folder is now still added but commented out. The user will be informed that the folder can be mounted by uncommenting the line. The if/else statement is a bit more robust now as well.
[PR] official ACE models release
[PR] Merge into `master` for new release that officially includes the ACE models
Clicking the `Edit on GitHub` link on RTD directs to an incorrect URL, resulting in a 404. This is likely because our docs are not at the root level of the repo. The `conf_py_path` entry should fix that.
[docs] Fix for incorrect RTD link and addition of ACE paper citation
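For reference, that fix typically lives in the Sphinx `html_context`; a sketch of the relevant entries, where the user/repo/branch values are assumptions rather than taken from this PR:

```python
# docs/conf.py -- sketch of the "Edit on GitHub" link configuration
html_context = {
    "display_github": True,
    "github_user": "AICONSlab",   # assumed org name
    "github_repo": "MIRACL",
    "github_version": "master",   # assumed branch
    "conf_py_path": "/docs/",     # docs live under docs/, not the repo root
}
```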
Now that ACE has been released, we need to be more agile to quickly respond to issues and user requests. The biggest bottleneck currently is building and uploading Singularity images. This requires a new token every month, and we have a maximum traffic limit for builds that restricts our MIRACL releases to <5/month. In the long term we need to find a better hosting solution and a different way to build the sif files that does not come with restrictions. However, for now I am disabling the Singularity build/upload completely in the CircleCI workflow as a short-term solution. This will make publishing new MIRACL releases easier and more agile, but we will also have to manually build and upload Singularity images for the time being.
Also, change the instructions for pulling from Sylabs to downloading Apptainer binaries directly from HuggingFace using wget or curl.
[feat] disable Singularity build/upload in CircleCI workflow [docs] change Singularity docs to Apptainer docs
Registering a nifti stack that was converted with a channel prefix and number combination would break resampling tiff slices to original space. The registration function did not check for prefix and channel numbers, i.e., all files in a data folder containing, e.g., both signal and autofluorescence tiffs distinguished by a prefix/number would be parsed. I added two new flags to the registration script: '-n' for channel number and '-x' for channel prefix. They have to be supplied in workflows as well.
Use a default value of '-999999' for 'cn' and 'cp' to prevent the Bash parser from failing.
Added a parser to the Python script that now expects positional args. The args are positional because I didn't want to duplicate flags between the Python and Bash scripts; obviously this will change once the Pydantic implementation is finished. I also changed the method for calling the Python script from within the Bash script to calling it directly.
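A sketch of what such a positional parser might look like (the argument names are hypothetical):

```python
import argparse

# Positional args mirror the Bash flags one-to-one so the two parsers
# don't drift apart (to be replaced by the Pydantic implementation).
parser = argparse.ArgumentParser()
parser.add_argument("chan_num", help="channel number ('-999999' means unset)")
parser.add_argument("chan_prefix", help="channel prefix ('-999999' means unset)")
args = parser.parse_args()
```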
This fixes several major issues in the Clarity Allen registration script:
1. Channel prefix and number are now flags in the Clarity Allen registration method. This is necessary for registering scans that used a channel prefix and number for conversion (e.g. LaVision). Previously this would fail because the registration script had no way to differentiate between, e.g., autofluorescence and signal channels in the RAW tif folder delimited by the channel prefix/number combination.
2. The tif stack extraction now uses tifffile instead of c3d. The latter was failing with larger tif files (e.g. SMARTSpim vs. LaVision). Debugging/profiling indicated that larger files (SMARTSpim) triggered parallel processing in c3d, which potentially led to race conditions that weren't apparent when extracting smaller files (LaVision). The result was that the stack got extracted with the slides out of order.
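A sketch of the tifffile-based extraction, where the glob pattern is an assumption about the prefix/number naming scheme:

```python
from pathlib import Path

import tifffile

def read_channel_stack(tiff_dir: Path, prefix: str, chan_num: int):
    """Read one channel's tiff slices into a stack, in slide order.

    Sorting the file list keeps the slide order deterministic, unlike
    the parallel c3d extraction that could reorder slices.
    """
    files = sorted(tiff_dir.glob(f"{prefix}{chan_num}*.tif"))
    return tifffile.imread([str(f) for f in files])
```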