Home
Welcome to the scMultipleX Wiki!
Create a new conda environment with Python 3.8+:
```bash
conda create -n scmpx python=3.8
```
Activate the conda environment:
```bash
conda activate scmpx
```
Install scMultipleX:
```bash
pip install git+https://github.com/fmi-basel/gliberal-scMultipleX.git
```
- SSH or Remote Desktop into vcl1060.fmi.ch. For SSH you can use the command shown below. Note to Windows users: use PowerShell or PuTTY.
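A minimal SSH invocation, shown as a sketch; USERNAME is a placeholder for your own account name, not a value from this wiki:
```bash
# Replace USERNAME with your FMI account name (placeholder)
ssh USERNAME@vcl1060.fmi.ch
```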
- Create an output directory (e.g. username/scMultiplex-demo-test):
```bash
mkdir /tungstenfs/scratch/gliberal/Users/MY_SAVE_DIR
```
- Copy the demo config file (demo.ini) to your own user folder:
```bash
cp -t /tungstenfs/scratch/gliberal/Users/MY_SAVE_DIR /tungstenfs/scratch/gliberal/Code/Common/Repositories/gliberal-scMultipleX/resources/scMultipleX_testdata/demo.ini
```
- Check that demo.ini was copied over:
```bash
ls /tungstenfs/scratch/gliberal/Users/MY_SAVE_DIR
```
You should see the demo.ini file listed in the directory.
- Edit this config file:
  - Remote into your favorite virtual machine
  - Navigate to demo.ini and open it in your favorite text editor (e.g. Notepad++)
  - Change base_dir_save to your MY_SAVE_DIR path and save (see the sketch below)
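A minimal sketch of the edited line in demo.ini, assuming the save directory created above; everything else in the file stays unchanged:
```ini
; demo.ini (only the edited key shown)
base_dir_save = /tungstenfs/scratch/gliberal/Users/MY_SAVE_DIR
```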
- This step needs to be performed only once per machine and user. Make a symlink in your home bin directory that points to the run_scmultiplex executable:
```bash
cd $HOME
mkdir -p bin
ln -s -t bin /tungstenfs/scratch/gliberal/Code/Common/Repositories/gliberal-scMultipleX/run_scmultiplex
ls -l bin
```
- Run scMultipleX on the test dataset:
```bash
run_scmultiplex --help
run_scmultiplex --cpus 12 --config /tungstenfs/scratch/gliberal/Users/MY_SAVE_DIR/demo.ini --tasks 0 1 2 3 4 5 6 7
```
- Check the output folder!
General parameters for initializing the FAIM-HCS experiment structure:
well_pattern = Regex pattern for recognizing well ID
raw_ch_pattern = Regex pattern for recognizing channel ID in raw image files
mask_ending = Suffix of organoid segmentation image
base_dir_raw = Path to raw data directory (folder contains rounds)
base_dir_save = Path to save directory
spacing = Z,Y,X pixel spacing of region-extracted data in um/pix, comma-separated
overview_spacing = Y,X pixel spacing of well overview images, comma-separated
round_names = Names of multiplexing rounds, comma-separated
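An illustrative sketch of how these general parameters might look in a config file; all values below are assumptions for illustration, not shipped defaults, and the regex patterns in particular will depend on your file naming:
```ini
; illustrative values only; adapt patterns and paths to your data
well_pattern = _([A-Z][0-9]{2})_
raw_ch_pattern = C[0-9]{2}\.tif
mask_ending = MASK
base_dir_raw = /path/to/raw_data
base_dir_save = /tungstenfs/scratch/gliberal/Users/MY_SAVE_DIR
spacing = 0.6,0.216,0.216
overview_spacing = 0.216,0.216
round_names = R0,R1
```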
Round-specific parameters for initializing the FAIM-HCS experiment structure. Include this subsection for each round and update the name, e.g. round_R1:
name = Round name
nuc_ending = Suffix of nuclear segmentation image
mem_ending = Suffix of membrane segmentation image
root_dir = Path to raw data directory for this round
fname_barcode_index = Number of underscores in Yokogawa barcode, integer
organoid_seg_channel = Image channel used for organoid segmentation, e.g. C01
nuclear_seg_channel = Image channel used for nuclear segmentation, e.g. C01
membrane_seg_channel = Image channel used for membrane segmentation, e.g. C04
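A hedged example of one such round subsection; the section header follows the round_R1 naming mentioned above and the channel values reuse the examples from this list, while the suffixes, path, and barcode index are placeholders:
```ini
[round_R1]
; suffixes, path, and barcode index below are placeholders
name = R1
nuc_ending = NUC_SEG
mem_ending = MEM_SEG
root_dir = /path/to/raw_data/R1
fname_barcode_index = 4
organoid_seg_channel = C01
nuclear_seg_channel = C01
membrane_seg_channel = C04
```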
Parameters used during feature extraction
excluded_plates = Folder names of plates to exclude from analysis (e.g. day2,day3), comma-separated
excluded_wells = Well IDs to exclude from analysis (e.g. A01,C06), comma-separated
ovr_channel = Image channel used for organoid segmentation, e.g. C01
name_ovr = Naming of regionprops file; always keep as regionprops_ovr_
iop_cutoff = Float value 0 to 1 for the cutoff threshold for calling a nucleus inside a membrane. Recommended value is 0.6
iop = number of pixels in the intersection of the membrane and nuclear labels / number of pixels in the nuclear label
Values closer to 1 indicate a better match
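A sketch of these feature-extraction settings in the config, reusing the example values and the recommended cutoff from the descriptions above:
```ini
; example values taken from the parameter descriptions above
excluded_plates = day2,day3
excluded_wells = A01,C06
ovr_channel = C01
name_ovr = regionprops_ovr_
iop_cutoff = 0.6
```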
Parameters used during organoid linking
iou_cutoff = Float value 0 to 1 for the cutoff threshold for matching an RX object to an R0 object. Recommended value is 0.2
iou = number of pixels in the intersection of the R0 and RX object labels / number of pixels in the union of the R0 and RX object labels
Values closer to 1 indicate a better match
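And the corresponding linking setting in the config, using the recommended value above:
```ini
; recommended value from above
iou_cutoff = 0.2
```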
scMultipleX is installed on tungsten at scratch/gliberal/Code/Common/Repositories/gliberal-scMultipleX and can be run on any Linux machine.
```bash
run_scmultiplex --cpus [NUM CORES, INT] --config [PATH TO .INI CONFIG] --tasks [TASKS TO RUN]
```
Use `run_scmultiplex --help` for details on arguments.
Note:
- --cpus defaults to the number of cores available to the process on the machine
- --tasks accepts integers 0 to 7 (see the example below)
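For instance, a sketch of rerunning only tasks 0 and 1 with a smaller core count, reusing the demo config path from above:
```bash
run_scmultiplex --cpus 4 --config /tungstenfs/scratch/gliberal/Users/MY_SAVE_DIR/demo.ini --tasks 0 1
```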
To run on the CPU cluster, configure the clusterme.sh file. From vcl1043, navigate to the folder containing clusterme.sh and run:
```bash
sbatch clusterme.sh
```
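A minimal sketch of what clusterme.sh might contain; the SBATCH directives and values here are assumptions to adapt, not the contents of the actual file:
```bash
#!/bin/bash
#SBATCH --job-name=scmultiplex     # assumed job name
#SBATCH --cpus-per-task=12         # match the --cpus argument below
#SBATCH --mem=64G                  # assumed memory request
#SBATCH --time=12:00:00            # assumed wall-time limit

# Call the shared executable directly (path from the install location above)
/tungstenfs/scratch/gliberal/Code/Common/Repositories/gliberal-scMultipleX/run_scmultiplex \
  --cpus 12 \
  --config /tungstenfs/scratch/gliberal/Users/MY_SAVE_DIR/demo.ini \
  --tasks 0 1 2 3 4 5 6 7
```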