This README provides instructions on how to use the preprocessing code to transform and reorganize data for use with ZephIR, a software tool designed for tracking neurons in freely behaving and deformable brains.
Before you begin, ensure that your dataset includes the following:
- Green fluorescence imaging data acquired from the Zyla 4.2
- Red fluorescence imaging data acquired from the Zyla 4.2
- Worm centerline data from IR videos, used to extract head direction
These data are essential for the successful application of the preprocessing steps. Note that if you don't have worm centerline data, the code falls back to a PCA-based method that identifies head direction from the fluorophore image stacks, which also works well.
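For intuition, here is a rough, illustrative sketch of what a PCA-based body-axis estimate from a single fluorophore volume could look like. This is not the actual implementation in the preprocessing scripts; `vol` and the threshold are placeholders, and the head/tail sign still has to be resolved separately:

```matlab
% Illustrative PCA-style body-axis estimate (not the actual preprocessing code).
% 'vol' is assumed to be a Y-by-X-by-Z fluorophore volume.
mip    = max(double(vol), [], 3);            % maximum-intensity projection over z
[y, x] = find(mip > 0.5 * max(mip(:)));      % coordinates of bright pixels
coords = [x, y] - mean([x, y], 1);           % center the point cloud
[~, ~, V] = svd(coords, 'econ');             % principal axes via SVD
body_axis = V(:, 1);                         % first principal direction (head vs. tail still ambiguous)
```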
Locate the `load_data.m` MATLAB file under the `scripts` directory. You will need to modify this file to specify the location of your dataset.
- Open `load_data.m` in MATLAB or a text editor.
- Change the `data_directory` variable to the path where your dataset is stored.
- Update the `filenames` variable to match the names of your data files (see the example below).
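For example, after editing, the relevant lines of `load_data.m` might look like the following. The path and file names are placeholders, and the exact format expected for `filenames` (e.g. a cell array versus separate variables) depends on how the script is written:

```matlab
% Example edits in load_data.m -- replace the placeholders with your own paths and names.
data_directory = '/path/to/your/dataset';              % folder holding the raw Zyla 4.2 recordings
filenames = {'green_channel.tif', 'red_channel.tif'};  % green and red fluorescence stacks
```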
Consider whether you want to bin your raw images. Binning reduces the resolution of your images but can significantly speed up processing in ZephIR. Also note that ZephIR currently only supports `uint8` data.
- If you choose to bin your images, make sure the binning step is incorporated into your preprocessing (a minimal sketch is shown below).
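As a rough sketch, 2x2 spatial binning plus the `uint8` conversion could look like this in MATLAB. The variable names are placeholders, and the actual preprocessing scripts may implement this differently:

```matlab
% Minimal 2x2 spatial binning sketch; 'stack' is assumed to be a Y-by-X-by-Z array.
stack  = double(stack);                                   % avoid integer saturation when averaging
binned = (stack(1:2:end-1, 1:2:end-1, :) + stack(2:2:end, 1:2:end-1, :) + ...
          stack(1:2:end-1, 2:2:end, :)   + stack(2:2:end, 2:2:end, :)) / 4;
binned = binned - min(binned(:));                         % shift minimum to zero
binned = uint8(255 * binned / max(binned(:)));            % rescale to [0, 255]; ZephIR expects uint8
```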
With your data directory and filenames set, and your decision on binning made, run the preprocessing scripts in MATLAB:
```matlab
cd scripts
load_data;
transform_and_reorganize_imagedata;
```
`load_data` will load your specified dataset. `transform_and_reorganize_imagedata` will apply the necessary transformations and reorganize the data for ZephIR compatibility.
After running the scripts, the reorganized data will be saved in the same folder as `data.h5`. This file is now ready to be used with ZephIR for neuron tracking analysis.
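To sanity-check the output before copying it to the server, you can print the HDF5 layout of `data.h5` from MATLAB; this simply lists whatever groups and datasets the preprocessing wrote, so no particular layout is assumed:

```matlab
% Print the group/dataset structure of the preprocessed file.
h5disp('data.h5');
```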
To run ZephIR on the computational server, follow these steps:
First, navigate to the `data_directory` where your preprocessed data is stored:

```
cd path/to/data_directory
```
Replace `path/to/data_directory` with the actual path to your data directory. Copy `getters.py` and `metadata.json` to the `data_directory` as well. Modify `metadata.json` if the image stack size is different.
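One way to check the stack size is to read the dataset dimensions from `data.h5` in MATLAB and compare them against the entries in `metadata.json`. The dataset name `/data` below is an assumption; run `h5disp('data.h5')` first if you are unsure, and note that MATLAB reports HDF5 dimensions in the reverse order of Python/h5py:

```matlab
% Look up the dimensions of the preprocessed image stack (dataset name '/data' is assumed).
info = h5info('data.h5', '/data');
disp(info.Dataspace.Size);   % compare these sizes against metadata.json
```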
Activate the ZephIR Conda environment by running:
```
conda activate ZephIR
```
Next, follow the detailed instructions provided in the ZephIR guide (see the ZephIR documentation for the full walkthrough). The three major steps are:
- Find a few reference frames (the number is set by `--n_frames`) on which to first perform manual annotations:

  ```
  recommend_frames --dataset=. --n_frames=10
  ```

  ZephIR will assess the similarity between imaging frames and recommend `n_frames` root frames.
- Run the following command to open the annotation GUI, then perform manual annotation on the reference frames and save the results:

  ```
  annotator --dataset=. --port=5001
  ```
- Close the annotation GUI. Now perform machine tracking of the remaining frames. Copy `args.json` to the dataset folder; this file contains tuning parameters that I find work well for sparsely labelled neurons in crawling worms. Then run:

  ```
  zephir --dataset=. --load_args=True
  ```
- Reopen the annotation GUI to check and proofread the results.
When using Visual Studio Code (VSCode) on the server, make sure you forward the port passed to `annotator` (5001 in the example above) to your local computer. This setup lets you access the annotation GUI at `localhost:5001`. If that port is unavailable or does not work, try a different port number and adjust both the `--port` argument and your port forwarding settings accordingly.
Once you have finished tracking all the neurons and saved your results in `annotations.h5` using ZephIR, you can run the following script to compute (in `annotations_edited.h5`) the normalized coordinates of these neurons in the original image stacks:

```matlab
cd scripts
annotations
```
The new `annotations_edited.h5` should now contain the following and more (a MATLAB loading sketch follows the list):
- t_idx: time index of each annotation, starting from 0
- x: x-coordinate as a float between (0, 1)
- y: y-coordinate as a float between (0, 1)
- z: z-coordinate as a float between (0, 1)
- x_original: x-coordinate as a float between (0, 1) in the original image stacks
- y_original: y-coordinate as a float between (0, 1) in the original image stacks
- worldline_id: track or worldline ID as an integer
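Below is a minimal sketch for reading some of these fields back into MATLAB, assuming each field listed above is stored as its own dataset at the root of the file; check the actual layout with `h5disp('annotations_edited.h5')` and adjust the dataset paths if needed:

```matlab
% Read selected per-annotation fields from annotations_edited.h5 (dataset paths are assumed).
t_idx        = h5read('annotations_edited.h5', '/t_idx');
x_original   = h5read('annotations_edited.h5', '/x_original');
y_original   = h5read('annotations_edited.h5', '/y_original');
worldline_id = h5read('annotations_edited.h5', '/worldline_id');
```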