
PM_WGS_pipeline


cd /xdisk/bhurwitz/mig2020/rsgrps/bhurwitz/kai/planet-microbe-functional-annotation/

Submit the main Snakemake job, which will then submit the other jobs:

sh submit_snakemake.sh // new version submission; JOBNAME can be changed per batch

For run_snakemake.sh // modify the -j job number to set the number of snakejobs. Note I have slightly modified this to run on windfall.
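For reference, a rough sketch of the kind of snakemake call run_snakemake.sh wraps; the exact flags and cluster string in the repo's script may differ, so treat this as an assumption:

# assumed shape of the call inside run_snakemake.sh; check the actual script
snakemake -j 24 \
    --cluster-config cluster.yml \
    --cluster "sbatch --account={cluster.account} --partition={cluster.partition} --time={cluster.time}" \
    --latency-wait 60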

Make sure to clean out the out and err directories between runs.
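For example, assuming the slurm log directories at the repo root are named out/ and err/:

# clear old slurm logs before a new batch (directory names assumed)
rm -f out/* err/*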

squeue -u kblumberg

scancel job_ID_number

scancel -u kblumberg // cancel all my jobs

va // shows allocation remaining on the HPC

uquota // shows the quota for the group

du -sh kai/ // size of a directory

scp [email protected]:/xdisk/bhurwitz/mig2020/rsgrps/bhurwitz/kai/planet-microbe-functional-annotation/bash/check_qc.sh .

Initial setup

modify the cluster.yml and config.yml files

add out and err directories

Change pm_env to snakemake in run_snakemake.sh and change the cd path to my version of the repo.

Copied Matt's version's bowtie.simg to my singularity dir.

The bowtie index folder is in /xdisk/bhurwitz/mig2020/rsgrps/bhurwitz/planet-microbe-functional-annotation/data; copy that into my version of the git repo so that the whole thing is portable.

bash/run_start_lookup_server.sh has the wall time for the interpro lookup server; this was only set to 12 hours, which is why things were failing. Matt will make some fixes, and I'll re-clone the repo and start it again, making sure that jobs finish correctly. I can also use ElGato windfall to potentially get lots of nodes.
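If the wall time needs bumping by hand, it is a single sbatch header line in that script; the exact line and time format below are assumptions:

# in bash/run_start_lookup_server.sh, raise the lookup server wall time, e.g.
#SBATCH --time=48:00:00   # was 12 hours, which caused jobs to fail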

TODO

Write 3 scripts to get quality-report, normalization, and sample-cutoff information (a combined sketch follows this list):

  1. For step 2: grep for > to count the number of reads that passed QC, then pipe to wc -l.

  2. For step 4: get the number of predicted ORFs from the .faa files, again by grepping for >, e.g. grep -c "^>" file.fasta

  3. For the data files: do something like gunzip -c SRR1786608_1.fastq.gz | wc -l to see how many reads we actually have (divide the line count by 4). We should drop samples whose read count falls too much between this and step 2.
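A minimal combined sketch of the three checks; the step directory and file names here are assumptions, so adjust them to the real results layout:

# hypothetical per-sample counting sketch
SAMPLE=SRR1786608

# 1. reads that passed QC (step 2): count sequence headers in the QC'd file
grep -c "^>" results/${SAMPLE}/step_02_qc/${SAMPLE}_trimmed_qcd.fasta

# 2. predicted ORFs (step 4): count headers in the .faa output
grep -c "^>" results/${SAMPLE}/step_04_orfs/${SAMPLE}.faa

# 3. raw reads in the input fastq: line count divided by 4
echo $(( $(gunzip -c data/${SAMPLE}_1.fastq.gz | wc -l) / 4 ))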

Redo Lists

Run_1

SRR1787940_1/
SRR1790676_1/
SRR5002313/
SRR5002373/
SRR5002379/
SRR5002384/
SRR5002401/
SRR5123274/
SRR5123275/
SRR5720230_1/
SRR5720255_1/
SRR5720261_1/
SRR9178082_1/
SRR9178218_1/
SRR9178274_1/
SRR9178275_1/
SRR9178287_1/
SRR9178317_1/
SRR9178372_1/
SRR9178498_1/
SRR9178506_1/

sbatch headers

#SBATCH --account=bhurwitz
#SBATCH --partition=standard
#SBATCH --partition=windfall
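The two partition lines are alternatives; a given job script uses one or the other, e.g.:

#SBATCH --account=bhurwitz
#SBATCH --partition=windfall   # or --partition=standard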

Cleaning up to save space

in results/completed/

rm -r */bowtie/

rm -r */step_01_trimming/

rm -r */step_05_chunk_reads/

rm -r */step_06_get_orfs/
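To sweep all completed samples in one pass, something along these lines (a sketch; double-check the glob before deleting anything):

# from results/completed/, remove the bulky intermediates of every sample directory
for d in */ ; do
    rm -rf "${d}bowtie" "${d}step_01_trimming" "${d}step_05_chunk_reads" "${d}step_06_get_orfs"
done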

irods

iinit // standard command to get irods started; it doesn't work on the head node but does in an interactive session

Command to copy files over from CyVerse to my data directory:

iget -PT /iplant/home/shared/planetmicrobe/sra/SRR4831663.fastq.gz /xdisk/bhurwitz/mig2020/rsgrps/bhurwitz/kai/planet-microbe-functional-annotation/data
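To pull a whole redo list in one go, a sketch along these lines (accessions.txt is a hypothetical file with one run accession per line):

while read -r acc; do
    iget -PT /iplant/home/shared/planetmicrobe/sra/${acc}.fastq.gz /xdisk/bhurwitz/mig2020/rsgrps/bhurwitz/kai/planet-microbe-functional-annotation/data
done < accessions.txt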

UA hpc

https://public.confluence.arizona.edu/display/UAHPC/HPC+Documentation

https://public.confluence.arizona.edu/display/UAHPC/Puma+Quick+Start

old

sbatch run_snakemake.sh // submission for old single-threaded version

sbatch submit_snakemake.sh // submission for old multi-threaded version

Debugging the snakemake pipeline; the steps should be (example commands follow the list):

  1. Check the slurm output file and see which rule crashed.
  2. Check the error file and see if there's useful information about the crash.
  3. Check the log file (found at e.g. results/SRR4831664/step_01_trimming/log) for specific information about how that step's executable ran.
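
A few concrete commands for those checks; the out/ and err/ file names are assumptions about how the log directories are set up:

# 1. which rule crashed? scan the slurm output files
grep -i "error in rule" out/*

# 2. anything useful in the matching error file?
tail -n 50 err/JOBID.err   # substitute the failed job's ID

# 3. the step's own log, e.g. trimming for one sample
tail -n 50 results/SRR4831664/step_01_trimming/log
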
interactive

source ~/.bashrc

conda env create -f kraken2.yml

conda env create -f bracken.yml

conda env create -f pm_env.yml   // this failed; make a new pm_env.yml with snakemake

# steps to create pm_env again; do this in an interactive session
conda create -n pm_env

conda activate pm_env

conda install -n base -c conda-forge mamba

mamba create -c conda-forge -c bioconda -n snakemake snakemake

snakemake conda env

conda install -n base -c conda-forge mamba // install mamba in order to install snakemake

mamba create -c conda-forge -c bioconda -n snakemake snakemake // install snakemake; this made a new conda environment called snakemake

conda install -c conda-forge biopython // added biopython

conda install -c anaconda java-1.7.0-openjdk-cos6-x86_64 // also added java this way, but it didn't stay after logging back in; maybe it didn't get added to PATH? Try conda install -c conda-forge openjdk instead.
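A quick sanity check after reinstalling, to confirm which java the snakemake env actually resolves to:

# verify the env's java is the one on PATH and reports version 11+
conda activate snakemake
which java
java -version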

other

interpro docs (interproscan.sh bash script usage): https://interproscan-docs.readthedocs.io/en/latest/UserDocs.html

/groups/bhurwitz/tools/interproscan-5.46-81.0/interproscan.sh -appl Pfam -i results/SRR4831664/step_05_chunk_reads/SRR4831664_trimmed_qcd_frags_2047.faa -b results/SRR4831664/step_06_get_orfs/SRR4831664_trimmed_qcd_frags_2047_interpro -goterms -iprlookup -dra -cpu 4

After installing Java 11 this works, but I'm still getting the log for step 6 saying:

Java version 11 is required to run InterProScan. Detected version 1.8.0_292. Please install the correct version.

Changed the Snakemake interproscan rule to activate the snakemake conda env
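The shell command that rule ends up running looks roughly like this; it is a sketch of the pattern only, with {input} and {output_base} standing in for the rule's actual wildcards:

bash -c '
    . $HOME/.bashrc
    conda activate snakemake
    /groups/bhurwitz/tools/interproscan-5.46-81.0/interproscan.sh -appl Pfam -i {input} -b {output_base} -goterms -iprlookup -dra -cpu 4
'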

Switch the pipeline between running up to step 4 and running step 7

The directory job_runs/snakefile_versions has the files regular_Snakefile (to run step 7) and Snakefile_upto_step4 (to run only up to step 4).

From /xdisk/bhurwitz/mig2020/rsgrps/bhurwitz/kai/planet-microbe-functional-annotation:

cp job_runs/snakefile_versions/Snakefile_upto_step4 Snakefile

cp job_runs/snakefile_versions/regular_Snakefile Snakefile

When running Snakefile_upto_step4, make sure to raise the time to 48 hours in run_snakemake.sh, and set it back to 24 hours for regular_Snakefile. Do the same in the cluster.yml file.
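If that time limit keeps getting flipped back and forth, a sed one-liner can toggle it; the exact --time string in run_snakemake.sh (and the matching entry in cluster.yml) is an assumption here:

# bump to 48 h before the step-4 run; reverse the substitution for the regular Snakefile
sed -i 's/--time=24:00:00/--time=48:00:00/' run_snakemake.sh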

job runs

Amazon Plume

Ran 23 samples (1 was done in previous testing) for 48 hours; all but 3 finished (some were <1 GB). Not using the multi-threaded version.

Amazon River

Ran 24 samples for nearly 48 hours; the snakejob ended but not all samples finished. The log file said the job had timed out and there were jobs remaining, but there was no snakemake job running, so I canceled the snakejobs and resubmitted. Not using the multi-threaded version.

HOT Chisholm

First job ran the smallest 24 samples. This first attempt was made with the shell parameter mistake.

tests

When testing the multi-threaded version, a single job submission sent out (at least at the time I captured it) 117 of the multi-threaded ips_0 jobs doing the interproscan step. Interproscan wasn't working again because I had pulled Matt's version of the Snakefile without the following shell parameters; I added them back in and re-ran the job for rule run_pipeline. They might also need to be added to rule interproscan.

bash -c '
. $HOME/.bashrc
conda activate snakemake 