Slurm workflow submits initial jobs but then hangs until ctrl-c #157

Open
freekvh opened this issue Oct 15, 2024 · 7 comments

Comments


freekvh commented Oct 15, 2024

Software Versions

$ snakemake --version
8.23.0
$ mamba list | grep "snakemake-executor-plugin-slurm"
$ conda list | grep "snakemake-executor-plugin-slurm"
snakemake-executor-plugin-slurm 0.11.0 pyhdfd78af_0 bioconda
snakemake-executor-plugin-slurm-jobstep 0.2.1 pyhdfd78af_0 bioconda
$ sinfo --version
slurm 24.05.3

Describe the bug
When starting a Snakemake workflow on a Slurm cluster (SURF/SARA Snellius), the workflow submits its initial jobs but then hangs on them, even though those jobs appear to complete and disappear from the squeue overview. It's as if Snakemake never gets the signal that the jobs have finished. This is my config:

executor: slurm
default-resources:
  slurm_partition: "rome"
  time: 1h
  # slurm_extra: "'-o cluster_outputs/smk.{rule}.{jobid}.out -e cluster_outputs/smk.{rule}.{jobid}.err'"
printshellcmds: True
jobs: 100
restart-times: 3
latency-wait: 60
rerun-incomplete: True
use-conda: True
conda-prefix: /home/me/projects/snaqs_files/snakemake_envs

set-threads:
    salmon: 16

set-resources:
  # rule specific resources
  fastqc:
    slurm_partition: staging

Logs
This is how it ends (I pressed ctrl-c at the line "^CTerminating processes on user request, this might take some time."):

[Tue Oct 15 18:56:16 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    jobid: 16
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=staging, time=1h


        fastqc fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz --outdir=qc/fastqc
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
Job 16 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8178969 (log: /gpfs/home5/me/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8178969.log).
^CTerminating processes on user request, this might take some time.
WorkflowError:
Unable to cancel jobs with scancel (exit code 127): scancel: unrecognized option '--exclusive'
Try "scancel --help" for more information
/bin/sh: line 2: sbatch:: command not found
/bin/sh: line 3: sbatch:: command not found
/bin/sh: line 4: sbatch:: command not found
/bin/sh: line 9: 8178965: command not found
/bin/sh: line 10: sbatch:: command not found
/bin/sh: line 11: sbatch:: command not found
/bin/sh: line 12: sbatch:: command not found
/bin/sh: line 17: 8178967: command not found
/bin/sh: line 18: sbatch:: command not found
/bin/sh: line 19: sbatch:: command not found
/bin/sh: line 20: sbatch:: command not found
/bin/sh: line 25: 8178969: command not found

Minimal example
This is a big effort; if it's really required, I'll try to make a minimal pipeline. Apologies.

Additional context
I have also posted a question at https://bioinformatics.stackexchange.com/questions/22963/snakemake-on-a-slurm-cluster, although there I also ask for support with the generic executor (which doesn't let me specify the partition).

In general, I think some more extensive example profiles in which all options are used would be nice to have. And perhaps Snellius deviates from standard Slurm?

@cmeesters
Member

Hi,

this is really weird. scancel, as triggered by the plugin, does not use an --exclusive flag. And why would a SLURM cluster report that the command sbatch is not found?

Right now I am travelling, but I will find some time next week to look into this. Meanwhile, can you please indicate where you submitted your workflow (within a job or on a login/head node)? And could you run it in verbose mode (just add --verbose to the command line) and attach a full Snakemake log, please?

I would like to know whether sbatch points to a binary or has been overridden by a wrapper (the rather informative output is not a default, and admins have several ways of giving you that feedback). Can you post the output of which sbatch, too?

I'm afraid I am not familiar with Snellius. What is your output of sacct during or after the run (same day)? (Background: the plugin keeps track of job states using SLURM's accounting mechanism.)
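
Roughly speaking, and only as a sketch rather than the plugin's literal code, such an accounting check boils down to an sacct call filtered on the workflow's run ID (used as the job name), reading JobIdRaw/State pairs; the real plugin's flags may differ slightly:

import subprocess

def query_job_states(run_uuid, starttime):
    # Sketch of an sacct-based status poll: ask accounting for all jobs
    # named after the workflow's run ID and map job ID -> state.
    cmd = [
        "sacct", "-X", "--parsable2", "--noheader",
        "--format=JobIdRaw,State",
        "--starttime", starttime, "--endtime", "now",
        "--name", run_uuid,
    ]
    out = subprocess.check_output(cmd, text=True)
    states = {}
    for line in out.splitlines():
        if line.strip():
            jobid, state = line.split("|", 1)
            states[jobid] = state
    return states

# usage on the cluster (placeholders, not real values):
# query_job_states("<slurm-run-id-of-the-workflow>", "<submission-day>")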


freekvh commented Oct 16, 2024

Hi, thank you for your fast reply!
Here are my answers:

It's strange, because the cluster-generic executor works, and its submit command is also sbatch.

I submit from a login/head node. There are some restrictions there (e.g., you can't run processes longer than 1 hour), but my tests finish in 5 minutes (with 10k-read files). Anyway, the cluster-generic executor works (but it does not select the right partitions for my lightweight jobs).

$ which sbatch
/usr/bin/sbatch
$ head -n 2 `which sbatch`
@)(@@@@@�@@@@ , ,00@0@__��@�@�L�L����@��@�����@��@�88@8@0hh@h@DDS�td88@8@0P�td����@��@��Q�tdR�td����@��@  /lib64/ld-linux-x86-64.so.2 GNU���GNUI�XG7���
��0>15o

(looks binary to me :))
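
If useful, I could also check whether those informative "sbatch:" messages go to stdout or to stderr, since --parsable is supposed to leave only the job ID on stdout. Something like this (an untested sketch, submitting a throw-away job):

import subprocess

# Submit a trivial one-minute job and inspect where the "sbatch:" banner ends up.
res = subprocess.run(
    ["sbatch", "--parsable", "--time=00:01:00", "--wrap", "true"],
    capture_output=True, text=True,
)
print("stdout:", repr(res.stdout))  # ideally just the numeric job ID
print("stderr:", repr(res.stderr))  # the informational banner would normally land here
# remember to scancel the submitted test job afterwards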

This is during a run ->

$ sacct
JobID           JobName  Partition    Account  AllocCPUS      State ExitCode 
------------ ---------- ---------- ---------- ---------- ---------- -------- 
8183061      56d42312-+       rome    eccdcdc         16  COMPLETED      0:0 
8183061.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183061.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183061.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183062      56d42312-+       rome    eccdcdc         16     FAILED      1:0 
8183062.bat+      batch               eccdcdc         16     FAILED      1:0 
8183062.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183062.0    python3.12               eccdcdc         16 OUT_OF_ME+    0:125 
8183063      56d42312-+       rome    eccdcdc         16  COMPLETED      0:0 
8183063.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183063.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183063.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183064      56d42312-+       rome    eccdcdc         16  COMPLETED      0:0 
8183064.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183064.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183064.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183065      56d42312-+       rome    eccdcdc         16  COMPLETED      0:0 
8183065.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183065.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183065.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183066      56d42312-+       rome    eccdcdc         16     FAILED      1:0 
8183066.bat+      batch               eccdcdc         16     FAILED      1:0 
8183066.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183066.0    python3.12               eccdcdc         16 OUT_OF_ME+    0:125 
8183088      e985391f-+       rome    eccdcdc         16     FAILED      1:0 
8183088.bat+      batch               eccdcdc         16     FAILED      1:0 
8183088.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183088.0    python3.12               eccdcdc          1     FAILED      1:0 
8183089      e985391f-+       rome    eccdcdc         16  COMPLETED      0:0 
8183089.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183089.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183089.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183090      e985391f-+       rome    eccdcdc         16     FAILED      1:0 
8183090.bat+      batch               eccdcdc         16     FAILED      1:0 
8183090.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183090.0    python3.12               eccdcdc          1     FAILED      1:0 
8183091      e985391f-+       rome    eccdcdc         16  COMPLETED      0:0 
8183091.bat+      batch               eccdcdc         16  COMPLETED      0:0 
8183091.ext+     extern               eccdcdc         16  COMPLETED      0:0 
8183091.0    python3.12               eccdcdc          1  COMPLETED      0:0 
8183223      c91cbade-+       rome    eccdcdc          0    PENDING      0:0 
8183224      c91cbade-+    staging    eccdcdc          1    RUNNING      0:0 
8183224.bat+      batch               eccdcdc          1    RUNNING      0:0 
8183224.ext+     extern               eccdcdc          1    RUNNING      0:0 
8183225      c91cbade-+    staging    eccdcdc          1    RUNNING      0:0 
8183225.bat+      batch               eccdcdc          1    RUNNING      0:0 
8183225.ext+     extern               eccdcdc          1    RUNNING      0:0 
8183226      c91cbade-+       rome    eccdcdc          0    PENDING      0:0 
8183227      c91cbade-+    staging    eccdcdc          1    RUNNING      0:0 
8183227.bat+      batch               eccdcdc          1    RUNNING      0:0 
8183227.ext+     extern               eccdcdc          1    RUNNING      0:0 
8183228      c91cbade-+    staging    eccdcdc          1    RUNNING      0:0 
8183228.bat+      batch               eccdcdc          1    RUNNING      0:0 
8183228.ext+     extern               eccdcdc          1    RUNNING      0:0

I waited for my processes to finish (no more jobs when checking with squeue), but no new jobs were submitted... I then hit ctrl-c. The complete output, with --verbose, is here:

$ snakemake --workflow-profile ./cluster_configs --verbose
Using workflow specific profile ./cluster_configs for setting default command line arguments.
host: int4
Building DAG of jobs...
Your conda installation is not configured to use strict channel priorities. This is however important for having robust and correct environments (for details, see https://conda-forge.org/docs/user/tipsandtricks.html). Please consider to configure strict priorities by executing 'conda config --set channel_priority strict'.
shared_storage_local_copies: True
remote_exec: False
SLURM run ID: c91cbade-e19f-4be6-8871-2c5a7f0b8fe0
Using shell: /usr/bin/bash
Provided remote nodes: 100
Job stats:
job                           count
--------------------------  -------
all                               1
complexity_20mer_counter          4
create_flagged_sampletable        1
create_pcs_raw_files              2
customqc_parameters               2
customqc_report                   1
fastqc                            4
qc_flagging                       2
rnaseq_multiqc                    1
salmon                            2
seqrun_expression_reports         1
tpm4_normalization                2
trim_galore                       2
total                            25

Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 9223372036854775807}
Ready jobs: 6
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/496608b438a441e8a9c28881aa8fdb12-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/496608b438a441e8a9c28881aa8fdb12-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 45 RHS
At line 49 BOUNDS
At line 56 ENDATA
Problem MODEL has 3 rows, 6 columns and 18 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 12 - 0.04 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 12 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                12.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.14
Time (Wallclock seconds):       0.09

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.20   (Wallclock seconds):       0.09

Selected jobs: 6
Resources after job selection: {'_cores': 9223372036854775801, '_nodes': 94, '_job_count': 9223372036854775807}
Execute 6 jobs...

[Wed Oct 16 09:10:46 2024]
rule trim_galore:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz, fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
    output: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip
    jobid: 2
    benchmark: benchmark/0053_P2017BB3S19R_S1.trim_galore_pe.trim_galore.benchmark.tsv
    reason: Missing output files: qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h


        trim_galore --fastqc --gzip -o fastq_trimmed --paired --retain_unpaired fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
        # Move all qc reports from the fastq_trimmed directory to the trim_galore qc directory
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip qc/trim_galore
        
No SLURM account given, trying to guess.
Guessed SLURM account: eccdcdc
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_trim_galore/0053_P2017BB3S19R_S1/%j.log' --export=ALL --comment rule_trim_galore_wildcards_0053_P2017BB3S19R_S1 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'trim_galore:sample=0053_P2017BB3S19R_S1' --allowed-rules 'trim_galore' --cores 94 --attempt 1 --force-use-threads  --unneeded-temp-files 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz' --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz' 'fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 2 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183223 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_trim_galore/0053_P2017BB3S19R_S1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183223.log).

[Wed Oct 16 09:10:46 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip
    jobid: 19
    benchmark: benchmark/0053_P2017BB3S20R_S2_R2.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S20R_S2, read=R2
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=staging, time=1h


        fastqc fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz --outdir=qc/fastqc
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S20R_S2_R2/%j.log' --export=ALL --comment rule_fastqc_wildcards_0053_P2017BB3S20R_S2_R2 -A eccdcdc -p staging --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S20R_S2,read=R2' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 19 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183224 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S20R_S2_R2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183224.log).

[Wed Oct 16 09:10:46 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip
    jobid: 18
    benchmark: benchmark/0053_P2017BB3S20R_S2_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S20R_S2, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=staging, time=1h


        fastqc fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz --outdir=qc/fastqc
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S20R_S2_R1/%j.log' --export=ALL --comment rule_fastqc_wildcards_0053_P2017BB3S20R_S2_R1 -A eccdcdc -p staging --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S20R_S2,read=R1' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 18 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183225 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S20R_S2_R1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183225.log).

[Wed Oct 16 09:10:47 2024]
rule trim_galore:
    input: fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz, fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
    output: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip
    jobid: 4
    benchmark: benchmark/0053_P2017BB3S20R_S2.trim_galore_pe.trim_galore.benchmark.tsv
    reason: Missing output files: fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h


        trim_galore --fastqc --gzip -o fastq_trimmed --paired --retain_unpaired fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
        # Move all qc reports from the fastq_trimmed directory to the trim_galore qc directory
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip qc/trim_galore
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_trim_galore/0053_P2017BB3S20R_S2/%j.log' --export=ALL --comment rule_trim_galore_wildcards_0053_P2017BB3S20R_S2 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'trim_galore:sample=0053_P2017BB3S20R_S2' --allowed-rules 'trim_galore' --cores 94 --attempt 1 --force-use-threads  --unneeded-temp-files 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz' --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz' 'fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 4 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183226 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_trim_galore/0053_P2017BB3S20R_S2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183226.log).

[Wed Oct 16 09:10:47 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip
    jobid: 17
    benchmark: benchmark/0053_P2017BB3S19R_S1_R2.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S19R_S1, read=R2
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=staging, time=1h


        fastqc fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz --outdir=qc/fastqc
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R2/%j.log' --export=ALL --comment rule_fastqc_wildcards_0053_P2017BB3S19R_S1_R2 -A eccdcdc -p staging --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S19R_S1,read=R2' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 17 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183227 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183227.log).

[Wed Oct 16 09:10:47 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    jobid: 16
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=staging, time=1h


        fastqc fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz --outdir=qc/fastqc
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env params code mtime input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R1/%j.log' --export=ALL --comment rule_fastqc_wildcards_0053_P2017BB3S19R_S1_R1 -A eccdcdc -p staging --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S19R_S1,read=R1' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.0phlmft5' 'fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env params code mtime input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage source-cache storage-local-copies sources software-deployment persistence input-output --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 16 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183228 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_fastqc/0053_P2017BB3S19R_S1_R1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183228.log).
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0
It took: 0.03329920768737793 seconds
The output is:
'8183223|RUNNING
8183224|COMPLETED
8183225|COMPLETED
8183226|RUNNING
8183227|COMPLETED
8183228|COMPLETED
'

status_of_jobs after sacct is: {'8183223': 'RUNNING', '8183224': 'COMPLETED', '8183225': 'COMPLETED', '8183226': 'RUNNING', '8183227': 'COMPLETED', '8183228': 'COMPLETED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0
It took: 0.031170368194580078 seconds
The output is:
'8183223|COMPLETED
8183224|COMPLETED
8183225|COMPLETED
8183226|COMPLETED
8183227|COMPLETED
8183228|COMPLETED
'

status_of_jobs after sacct is: {'8183223': 'COMPLETED', '8183224': 'COMPLETED', '8183225': 'COMPLETED', '8183226': 'COMPLETED', '8183227': 'COMPLETED', '8183228': 'COMPLETED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0
It took: 0.032552242279052734 seconds
The output is:
'8183223|COMPLETED
8183224|COMPLETED
8183225|COMPLETED
8183226|COMPLETED
8183227|COMPLETED
8183228|COMPLETED
'

status_of_jobs after sacct is: {'8183223': 'COMPLETED', '8183224': 'COMPLETED', '8183225': 'COMPLETED', '8183226': 'COMPLETED', '8183227': 'COMPLETED', '8183228': 'COMPLETED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name c91cbade-e19f-4be6-8871-2c5a7f0b8fe0
It took: 0.03798389434814453 seconds
The output is:
'8183223|COMPLETED
8183224|COMPLETED
8183225|COMPLETED
8183226|COMPLETED
8183227|COMPLETED
8183228|COMPLETED
'

status_of_jobs after sacct is: {'8183223': 'COMPLETED', '8183224': 'COMPLETED', '8183225': 'COMPLETED', '8183226': 'COMPLETED', '8183227': 'COMPLETED', '8183228': 'COMPLETED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
^CTerminating processes on user request, this might take some time.
unlocking
removing lock
removing lock
removed all locks
Full Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/scheduler.py", line 189, in schedule
    self._open_jobs.acquire()
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/threading.py", line 507, in acquire
    self._cond.wait(timeout)
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/threading.py", line 355, in wait
    waiter.acquire()
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_executor_plugin_slurm/__init__.py", line 416, in cancel_jobs
    subprocess.check_output(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/subprocess.py", line 466, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'scancel sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183223 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183224 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183225 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183226 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183227 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 32 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 32 jobs.
sbatch: By default shared jobs get 7168 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 1 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183228 --clusters=all' returned non-zero exit status 127.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/cli.py", line 2091, in args_to_api
    dag_api.execute_workflow(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/api.py", line 595, in execute_workflow
    workflow.execute(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/workflow.py", line 1264, in execute
    raise e
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/workflow.py", line 1260, in execute
    success = self.scheduler.schedule()
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/scheduler.py", line 318, in schedule
    self._executor.cancel()
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_interface_executor_plugins/executors/remote.py", line 109, in cancel
    self.cancel_jobs(active_jobs)
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_executor_plugin_slurm/__init__.py", line 429, in cancel_jobs
    raise WorkflowError(
snakemake_interface_common.exceptions.WorkflowError: Unable to cancel jobs with scancel (exit code 127): scancel: unrecognized option '--exclusive'
Try "scancel --help" for more information
/bin/sh: line 2: sbatch:: command not found
/bin/sh: line 3: sbatch:: command not found
/bin/sh: line 4: sbatch:: command not found
/bin/sh: line 9: 8183224: command not found
/bin/sh: line 10: sbatch:: command not found
/bin/sh: line 11: sbatch:: command not found
/bin/sh: line 12: sbatch:: command not found
/bin/sh: line 17: 8183226: command not found
/bin/sh: line 18: sbatch:: command not found
/bin/sh: line 19: sbatch:: command not found
/bin/sh: line 20: sbatch:: command not found
/bin/sh: line 25: 8183228: command not found

WorkflowError:
Unable to cancel jobs with scancel (exit code 127): scancel: unrecognized option '--exclusive'
Try "scancel --help" for more information
/bin/sh: line 2: sbatch:: command not found
/bin/sh: line 3: sbatch:: command not found
/bin/sh: line 4: sbatch:: command not found
/bin/sh: line 9: 8183224: command not found
/bin/sh: line 10: sbatch:: command not found
/bin/sh: line 11: sbatch:: command not found
/bin/sh: line 12: sbatch:: command not found
/bin/sh: line 17: 8183226: command not found
/bin/sh: line 18: sbatch:: command not found
/bin/sh: line 19: sbatch:: command not found
/bin/sh: line 20: sbatch:: command not found
/bin/sh: line 25: 8183228: command not found

Restart snakemake

If I then restart Snakemake, it picks up the correct remaining tasks and finishes those, but then it hangs again. Here is the output of the second run:

$ snakemake --workflow-profile ./cluster_configs --verbose
Using workflow specific profile ./cluster_configs for setting default command line arguments.
host: int4
Building DAG of jobs...
Your conda installation is not configured to use strict channel priorities. This is however important for having robust and correct environments (for details, see https://conda-forge.org/docs/user/tipsandtricks.html). Please consider to configure strict priorities by executing 'conda config --set channel_priority strict'.
shared_storage_local_copies: True
remote_exec: False
SLURM run ID: 659f8275-565c-40d3-bdfb-2a9135623e26
Using shell: /usr/bin/bash
Provided remote nodes: 100
Job stats:
job                           count
--------------------------  -------
all                               1
complexity_20mer_counter          4
create_flagged_sampletable        1
create_pcs_raw_files              2
customqc_parameters               2
customqc_report                   1
qc_flagging                       2
rnaseq_multiqc                    1
salmon                            2
seqrun_expression_reports         1
tpm4_normalization                2
total                            19

Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 9223372036854775807}
Ready jobs: 6
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/115d8df167be4821887d4855d1c1c86b-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/115d8df167be4821887d4855d1c1c86b-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 16 COLUMNS
At line 89 RHS
At line 101 BOUNDS
At line 116 ENDATA
Problem MODEL has 11 rows, 14 columns and 38 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 72.0033 - 0.00 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 72.0033 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                72.00330996
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.00
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.14   (Wallclock seconds):       0.10

Selected jobs: 6
Resources after job selection: {'_cores': 9223372036854775771, '_nodes': 94, '_job_count': 9223372036854775807}
Execute 6 jobs...

[Wed Oct 16 09:16:04 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt
    jobid: 12
    benchmark: benchmark/0053_P2017BB3S20R_S2_R2.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt
    wildcards: sample=0053_P2017BB3S20R_S2, read=R2
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h


        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz > qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt
        
No SLURM account given, trying to guess.
Guessed SLURM account: eccdcdc
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S20R_S2_R2/%j.log' --export=ALL --comment rule_complexity_20mer_counter_wildcards_0053_P2017BB3S20R_S2_R2 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S20R_S2,read=R2' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 12 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183255 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S20R_S2_R2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183255.log).

[Wed Oct 16 09:16:04 2024]
rule salmon:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz
    output: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf
    jobid: 1
    benchmark: benchmark/0053_P2017BB3S19R_S1.salmon.salmon.benchmark.tsv
    reason: Missing output files: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf
    wildcards: sample=0053_P2017BB3S19R_S1
    threads: 16
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h


        salmon quant         --index /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index         --geneMap /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf         --libType A         --mates1 fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz         --mates2 fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz         --validateMappings         --threads 16         --output analyzed/salmon_0053_P2017BB3S19R_S1
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_salmon/0053_P2017BB3S19R_S1/%j.log' --export=ALL --comment rule_salmon_wildcards_0053_P2017BB3S19R_S1 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=16 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'salmon:sample=0053_P2017BB3S19R_S1' --allowed-rules 'salmon' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/fdd42d6c6ccfbbce54b3edf8d70cf513_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 1 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183256 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_salmon/0053_P2017BB3S19R_S1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183256.log).

[Wed Oct 16 09:16:04 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt
    jobid: 9
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h


        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz > qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S19R_S1_R1/%j.log' --export=ALL --comment rule_complexity_20mer_counter_wildcards_0053_P2017BB3S19R_S1_R1 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S19R_S1,read=R1' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 9 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183257 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S19R_S1_R1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183257.log).

[Wed Oct 16 09:16:05 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt
    jobid: 10
    benchmark: benchmark/0053_P2017BB3S19R_S1_R2.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt
    wildcards: sample=0053_P2017BB3S19R_S1, read=R2
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h


        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz > qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S19R_S1_R2/%j.log' --export=ALL --comment rule_complexity_20mer_counter_wildcards_0053_P2017BB3S19R_S1_R2 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S19R_S1,read=R2' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 10 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183258 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S19R_S1_R2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183258.log).

[Wed Oct 16 09:16:05 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt
    jobid: 11
    benchmark: benchmark/0053_P2017BB3S20R_S2_R1.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt
    wildcards: sample=0053_P2017BB3S20R_S2, read=R1
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h


        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz > qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S20R_S2_R1/%j.log' --export=ALL --comment rule_complexity_20mer_counter_wildcards_0053_P2017BB3S20R_S2_R1 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=1 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S20R_S2,read=R1' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 11 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183259 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_complexity_20mer_counter/0053_P2017BB3S20R_S2_R1/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183259.log).

[Wed Oct 16 09:16:05 2024]
rule salmon:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz
    output: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    jobid: 3
    benchmark: benchmark/0053_P2017BB3S20R_S2.salmon.salmon.benchmark.tsv
    reason: Missing output files: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    wildcards: sample=0053_P2017BB3S20R_S2
    threads: 16
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=rome, time=1h


        salmon quant         --index /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index         --geneMap /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf         --libType A         --mates1 fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz         --mates2 fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz         --validateMappings         --threads 16         --output analyzed/salmon_0053_P2017BB3S20R_S2
        
No wall time information given. This might or might not work on your cluster. If not, specify the resource runtime in your rule or as a reasonable default via --default-resources.
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers software-env mtime params code input', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA==', '']
sbatch call: sbatch --parsable --job-name 659f8275-565c-40d3-bdfb-2a9135623e26 --output '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_salmon/0053_P2017BB3S20R_S2/%j.log' --export=ALL --comment rule_salmon_wildcards_0053_P2017BB3S20R_S2 -A eccdcdc -p rome --mem 1000 --ntasks=1 --cpus-per-task=16 -D /gpfs/home5/fvhemert/temp/test_pipeline --wrap="/home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'salmon:sample=0053_P2017BB3S20R_S2' --allowed-rules 'salmon' --cores 94 --attempt 1 --force-use-threads  --resources 'mem_mb=1000' 'mem_mib=954' 'disk_mb=1000' 'disk_mib=954' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.zqg1jppu' 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/fdd42d6c6ccfbbce54b3edf8d70cf513_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers software-env mtime params code input --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage persistence input-output source-cache storage-local-copies software-deployment sources --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnNsdXJtX3BhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//bWVtX21iPW1pbihtYXgoMippbnB1dC5zaXplX21iLCAxMDAwKSwgODAwMCk= base64//ZGlza19tYj1tYXgoMippbnB1dC5zaXplX21iLCAxMDAwKQ== base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= base64//c2x1cm1fcGFydGl0aW9uPXJvbWU= base64//dGltZT0xaA== --executor slurm-jobstep --jobs 1 --mode remote"
Job 3 has been submitted with SLURM jobid sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183260 (log: /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/slurm_logs/rule_salmon/0053_P2017BB3S20R_S2/sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183260.log).
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name 659f8275-565c-40d3-bdfb-2a9135623e26
It took: 0.0313105583190918 seconds
The output is:
'8183255|COMPLETED
8183256|FAILED
8183257|COMPLETED
8183258|COMPLETED
8183259|COMPLETED
8183260|FAILED
'

status_of_jobs after sacct is: {'8183255': 'COMPLETED', '8183256': 'FAILED', '8183257': 'COMPLETED', '8183258': 'COMPLETED', '8183259': 'COMPLETED', '8183260': 'FAILED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name 659f8275-565c-40d3-bdfb-2a9135623e26
It took: 0.03162860870361328 seconds
The output is:
'8183255|COMPLETED
8183256|FAILED
8183257|COMPLETED
8183258|COMPLETED
8183259|COMPLETED
8183260|FAILED
'

status_of_jobs after sacct is: {'8183255': 'COMPLETED', '8183256': 'FAILED', '8183257': 'COMPLETED', '8183258': 'COMPLETED', '8183259': 'COMPLETED', '8183260': 'FAILED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name 659f8275-565c-40d3-bdfb-2a9135623e26
It took: 0.02939462661743164 seconds
The output is:
'8183255|COMPLETED
8183256|FAILED
8183257|COMPLETED
8183258|COMPLETED
8183259|COMPLETED
8183260|FAILED
'

status_of_jobs after sacct is: {'8183255': 'COMPLETED', '8183256': 'FAILED', '8183257': 'COMPLETED', '8183258': 'COMPLETED', '8183259': 'COMPLETED', '8183260': 'FAILED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name 659f8275-565c-40d3-bdfb-2a9135623e26
It took: 0.03914189338684082 seconds
The output is:
'8183255|COMPLETED
8183256|FAILED
8183257|COMPLETED
8183258|COMPLETED
8183259|COMPLETED
8183260|FAILED
'

status_of_jobs after sacct is: {'8183255': 'COMPLETED', '8183256': 'FAILED', '8183257': 'COMPLETED', '8183258': 'COMPLETED', '8183259': 'COMPLETED', '8183260': 'FAILED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
The job status was queried with command: sacct -X --parsable2 --clusters all --noheader --format=JobIdRaw,State --starttime 2024-10-14T09:00 --endtime now --name 659f8275-565c-40d3-bdfb-2a9135623e26
It took: 0.03030538558959961 seconds
The output is:
'8183255|COMPLETED
8183256|FAILED
8183257|COMPLETED
8183258|COMPLETED
8183259|COMPLETED
8183260|FAILED
'

status_of_jobs after sacct is: {'8183255': 'COMPLETED', '8183256': 'FAILED', '8183257': 'COMPLETED', '8183258': 'COMPLETED', '8183259': 'COMPLETED', '8183260': 'FAILED'}
active_jobs_ids_with_current_sacct_status are: set()
active_jobs_seen_by_sacct are: set()
missing_sacct_status are: set()
^CTerminating processes on user request, this might take some time.
unlocking
removing lock
removing lock
removed all locks
Full Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/scheduler.py", line 189, in schedule
    self._open_jobs.acquire()
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/threading.py", line 507, in acquire
    self._cond.wait(timeout)
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/threading.py", line 355, in wait
    waiter.acquire()
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_executor_plugin_slurm/__init__.py", line 416, in cancel_jobs
    subprocess.check_output(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/subprocess.py", line 466, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'scancel sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183255 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183256 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183257 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183258 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183259 sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
8183260 --clusters=all' returned non-zero exit status 127.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/cli.py", line 2091, in args_to_api
    dag_api.execute_workflow(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/api.py", line 595, in execute_workflow
    workflow.execute(
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/workflow.py", line 1264, in execute
    raise e
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/workflow.py", line 1260, in execute
    success = self.scheduler.schedule()
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake/scheduler.py", line 318, in schedule
    self._executor.cancel()
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_interface_executor_plugins/executors/remote.py", line 109, in cancel
    self.cancel_jobs(active_jobs)
  File "/home/fvhemert/miniforge3/envs/snaqs/lib/python3.12/site-packages/snakemake_executor_plugin_slurm/__init__.py", line 429, in cancel_jobs
    raise WorkflowError(
snakemake_interface_common.exceptions.WorkflowError: Unable to cancel jobs with scancel (exit code 127): scancel: unrecognized option '--exclusive'
Try "scancel --help" for more information
/bin/sh: line 2: sbatch:: command not found
/bin/sh: line 3: sbatch:: command not found
/bin/sh: line 4: sbatch:: command not found
/bin/sh: line 9: 8183256: command not found
/bin/sh: line 10: sbatch:: command not found
/bin/sh: line 11: sbatch:: command not found
/bin/sh: line 12: sbatch:: command not found
/bin/sh: line 17: 8183258: command not found
/bin/sh: line 18: sbatch:: command not found
/bin/sh: line 19: sbatch:: command not found
/bin/sh: line 20: sbatch:: command not found
/bin/sh: line 25: 8183260: command not found

WorkflowError:
Unable to cancel jobs with scancel (exit code 127): scancel: unrecognized option '--exclusive'
Try "scancel --help" for more information
/bin/sh: line 2: sbatch:: command not found
/bin/sh: line 3: sbatch:: command not found
/bin/sh: line 4: sbatch:: command not found
/bin/sh: line 9: 8183256: command not found
/bin/sh: line 10: sbatch:: command not found
/bin/sh: line 11: sbatch:: command not found
/bin/sh: line 12: sbatch:: command not found
/bin/sh: line 17: 8183258: command not found
/bin/sh: line 18: sbatch:: command not found
/bin/sh: line 19: sbatch:: command not found
/bin/sh: line 20: sbatch:: command not found
/bin/sh: line 25: 8183260: command not found

Looks like Salmon did not produce the expected output? (Or it was deleted by Snakemake; I see some "untracked" files, so I'm not sure what happened there. I do know the workflow works when run in other ways, so it's not the workflow itself.)
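As a side note (purely a hypothetical sketch, not the plugin's actual code): the scancel tracebacks above suggest that the entire captured sbatch output, including the Snellius "sbatch: ..." informational lines, is being stored as the SLURM jobid, which is why the later scancel/shell commands end up malformed. Assuming the numeric id still lands on its own line of the captured output, something along these lines would recover it:

import re

def extract_jobid(sbatch_output: str) -> str:
    """Return the last all-numeric line from captured sbatch output.

    Hypothetical helper: on clusters like Snellius that emit extra
    'sbatch: ...' informational lines, the --parsable jobid is still
    the last purely numeric line of the captured text.
    """
    ids = re.findall(r"^(\d+)\s*$", sbatch_output, flags=re.MULTILINE)
    if not ids:
        raise ValueError(f"no SLURM jobid found in: {sbatch_output!r}")
    return ids[-1]

# Example resembling the captured output above (abbreviated):
captured = (
    "sbatch: Single-node jobs run on a shared node by default. "
    "Add --exclusive if you want to use a node exclusively.\n"
    "8183255\n"
)
print(extract_jobid(captured))  # prints 8183255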


freekvh commented Oct 16, 2024

For reference, this is the same workflow run with the cluster-generic executor (which finished successfully).

The configuration:

executor: cluster-generic
cluster-generic-submit-cmd:
  sbatch
    --cpus-per-task=16
    --job-name={rule}-{jobid}
    --output=cluster_outputs/{rule}/{rule}-{wildcards}-%j.out
    --parsable
    --partition=rome
restart-times: 3
max-jobs-per-second: 10
max-status-checks-per-second: 1
local-cores: 1
latency-wait: 60
jobs: 100 # Check what the max is
keep-going: True
rerun-incomplete: True
printshellcmds: True
use-conda: True
conda-prefix: /home/fvhemert/projects/snaqs_files/snakemake_envs

set-threads:
  salmon: 16

set-resources:
  fastqc:
    partition: staging

The workflow output:

$ snakemake --workflow-profile ./cluster_configs --verbose
Using workflow specific profile ./cluster_configs for setting default command line arguments.
host: int4
Building DAG of jobs...
Your conda installation is not configured to use strict channel priorities. This is however important for having robust and correct environments (for details, see https://conda-forge.org/docs/user/tipsandtricks.html). Please consider to configure strict priorities by executing 'conda config --set channel_priority strict'.
shared_storage_local_copies: True
remote_exec: False
Using shell: /usr/bin/bash
Provided remote nodes: 100
Job stats:
job                           count
--------------------------  -------
all                               1
complexity_20mer_counter          4
create_flagged_sampletable        1
create_pcs_raw_files              2
customqc_parameters               2
customqc_report                   1
fastqc                            4
qc_flagging                       2
rnaseq_multiqc                    1
salmon                            2
seqrun_expression_reports         1
tpm4_normalization                2
trim_galore                       2
total                            25

Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 9223372036854775807}
Ready jobs: 6
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/6fb55e55c9e14a9489496e0e262ab7c5-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/6fb55e55c9e14a9489496e0e262ab7c5-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 45 RHS
At line 49 BOUNDS
At line 56 ENDATA
Problem MODEL has 3 rows, 6 columns and 18 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 12 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 12 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                12.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.09
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.20   (Wallclock seconds):       0.09

Selected jobs: 6
Resources after job selection: {'_cores': 9223372036854775801, '_nodes': 94, '_job_count': 10}
Execute 6 jobs...

[Wed Oct 16 09:56:29 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip
    jobid: 18
    benchmark: benchmark/0053_P2017BB3S20R_S2_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S20R_S2, read=R1
    resources: tmpdir=<TBD>, partition=staging


        fastqc fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz --outdir=qc/fastqc
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "fastqc", "local": false, "input": ["fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz"], "output": ["qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S20R_S2", "read": "R1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>", "partition": "staging"}, "jobid": 18}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S20R_S2,read=R1' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/18.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/18.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 18 with external jobid '8183872'.

[Wed Oct 16 09:56:29 2024]
rule trim_galore:
    input: fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz, fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
    output: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip
    jobid: 4
    benchmark: benchmark/0053_P2017BB3S20R_S2.trim_galore_pe.trim_galore.benchmark.tsv
    reason: Missing output files: qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: tmpdir=<TBD>


        trim_galore --fastqc --gzip -o fastq_trimmed --paired --retain_unpaired fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
        # Move all qc reports from the fastq_trimmed directory to the trim_galore qc directory
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip qc/trim_galore
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "trim_galore", "local": false, "input": ["fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz", "fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz"], "output": ["fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz", "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz", "fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz", "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {"trimming_report_read1": "fastq_trimmed/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt", "trimming_report_read2": "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt", "fastqc_html_read1": "fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html", "fastqc_html_read2": "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html", "fastqc_zip_read1": "fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip", "fastqc_zip_read2": "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip"}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 4}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'trim_galore:sample=0053_P2017BB3S20R_S2' --allowed-rules 'trim_galore' --cores 94 --attempt 1 --force-use-threads  --unneeded-temp-files 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S20R_S2_R1_001.fastq.gz' 'fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/4.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/4.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 4 with external jobid '8183873'.

[Wed Oct 16 09:56:29 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip
    jobid: 17
    benchmark: benchmark/0053_P2017BB3S19R_S1_R2.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S19R_S1, read=R2
    resources: tmpdir=<TBD>, partition=staging


        fastqc fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz --outdir=qc/fastqc
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "fastqc", "local": false, "input": ["fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz"], "output": ["qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S19R_S1", "read": "R2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>", "partition": "staging"}, "jobid": 17}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S19R_S1,read=R2' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/17.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/17.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 17 with external jobid '8183874'.

[Wed Oct 16 09:56:29 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    jobid: 16
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: tmpdir=<TBD>, partition=staging


        fastqc fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz --outdir=qc/fastqc
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "fastqc", "local": false, "input": ["fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz"], "output": ["qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S19R_S1", "read": "R1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>", "partition": "staging"}, "jobid": 16}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S19R_S1,read=R1' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/16.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/16.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 16 with external jobid '8183875'.

[Wed Oct 16 09:56:30 2024]
rule trim_galore:
    input: fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz, fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
    output: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip
    jobid: 2
    benchmark: benchmark/0053_P2017BB3S19R_S1.trim_galore_pe.trim_galore.benchmark.tsv
    reason: Missing output files: qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: tmpdir=<TBD>


        trim_galore --fastqc --gzip -o fastq_trimmed --paired --retain_unpaired fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz
        # Move all qc reports from the fastq_trimmed directory to the trim_galore qc directory
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip qc/trim_galore
        mv fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip qc/trim_galore
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "trim_galore", "local": false, "input": ["fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz", "fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz"], "output": ["fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz", "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz", "fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz", "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {"trimming_report_read1": "fastq_trimmed/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt", "trimming_report_read2": "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt", "fastqc_html_read1": "fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html", "fastqc_html_read2": "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html", "fastqc_zip_read1": "fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip", "fastqc_zip_read2": "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip"}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 2}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'trim_galore:sample=0053_P2017BB3S19R_S1' --allowed-rules 'trim_galore' --cores 94 --attempt 1 --force-use-threads  --unneeded-temp-files 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz' --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S19R_S1_R1_001.fastq.gz' 'fastq_raw/0053_P2017BB3S19R_S1_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/2.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/2.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 2 with external jobid '8183876'.

[Wed Oct 16 09:56:30 2024]
rule fastqc:
    input: fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz
    output: qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip
    jobid: 19
    benchmark: benchmark/0053_P2017BB3S20R_S2_R2.fastqc.fastqc.benchmark.tsv
    reason: Missing output files: qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html
    wildcards: sample=0053_P2017BB3S20R_S2, read=R2
    resources: tmpdir=<TBD>, partition=staging


        fastqc fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz --outdir=qc/fastqc
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "fastqc", "local": false, "input": ["fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz"], "output": ["qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip"], "wildcards": {"sample": "0053_P2017BB3S20R_S2", "read": "R2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>", "partition": "staging"}, "jobid": 19}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'fastqc:sample=0053_P2017BB3S20R_S2,read=R2' --allowed-rules 'fastqc' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_raw/0053_P2017BB3S20R_S2_R2_001.fastq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/5828be7853581bcea2e7a443f005b3a4_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/19.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/19.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 19 with external jobid '8183877'.
[Wed Oct 16 09:57:24 2024]
Finished job 18.
1 of 25 steps (4%) done
[Wed Oct 16 09:57:25 2024]
Finished job 4.
2 of 25 steps (8%) done
Removing temporary output fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_unpaired_1.fq.gz.
Removing temporary output fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_unpaired_2.fq.gz.
Resources before job selection: {'_cores': 9223372036854775803, '_nodes': 96, '_job_count': 10}
Ready jobs: 3
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/83010442c6a54ae5a8b39f8d6f21fa28-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/83010442c6a54ae5a8b39f8d6f21fa28-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 12 COLUMNS
At line 49 RHS
At line 57 BOUNDS
At line 65 ENDATA
Problem MODEL has 7 rows, 7 columns and 19 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 36.0016 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 36.0016 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                36.00164002
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.11
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.22   (Wallclock seconds):       0.09

Selected jobs: 3
Resources after job selection: {'_cores': 9223372036854775785, '_nodes': 93, '_job_count': 10}
Execute 3 jobs...

[Wed Oct 16 09:57:25 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt
    jobid: 11
    benchmark: benchmark/0053_P2017BB3S20R_S2_R1.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt; Input files updated by another job: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz
    wildcards: sample=0053_P2017BB3S20R_S2, read=R1
    resources: tmpdir=<TBD>


        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz > qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "complexity_20mer_counter", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz"], "output": ["qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt"], "wildcards": {"sample": "0053_P2017BB3S20R_S2", "read": "R1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 11}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S20R_S2,read=R1' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/11.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/11.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 11 with external jobid '8183882'.

[Wed Oct 16 09:57:25 2024]
rule salmon:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz
    output: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    jobid: 3
    benchmark: benchmark/0053_P2017BB3S20R_S2.salmon.salmon.benchmark.tsv
    reason: Missing output files: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf; Input files updated by another job: fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz
    wildcards: sample=0053_P2017BB3S20R_S2
    threads: 16
    resources: tmpdir=<TBD>


        salmon quant         --index /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index         --geneMap /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf         --libType A         --mates1 fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz         --mates2 fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz         --validateMappings         --threads 16         --output analyzed/salmon_0053_P2017BB3S20R_S2
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "salmon", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz", "fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz"], "output": ["analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {"salmon_index": "/home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index", "gtf_file": "/home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf"}, "log": [], "threads": 16, "resources": {"tmpdir": "<TBD>"}, "jobid": 3}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'salmon:sample=0053_P2017BB3S20R_S2' --allowed-rules 'salmon' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/fdd42d6c6ccfbbce54b3edf8d70cf513_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/3.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/3.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 3 with external jobid '8183883'.

[Wed Oct 16 09:57:25 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt
    jobid: 12
    benchmark: benchmark/0053_P2017BB3S20R_S2_R2.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt; Input files updated by another job: fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz
    wildcards: sample=0053_P2017BB3S20R_S2, read=R2
    resources: tmpdir=<TBD>


        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz > qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "complexity_20mer_counter", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz"], "output": ["qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt"], "wildcards": {"sample": "0053_P2017BB3S20R_S2", "read": "R2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 12}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S20R_S2,read=R2' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/12.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/12.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 12 with external jobid '8183884'.
[Wed Oct 16 09:57:26 2024]
Finished job 17.
3 of 25 steps (12%) done
[Wed Oct 16 09:57:27 2024]
Finished job 16.
4 of 25 steps (16%) done
[Wed Oct 16 09:57:28 2024]
Finished job 2.
5 of 25 steps (20%) done
Removing temporary output fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_unpaired_1.fq.gz.
Removing temporary output fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_unpaired_2.fq.gz.
Resources before job selection: {'_cores': 9223372036854775788, '_nodes': 96, '_job_count': 10}
Ready jobs: 3
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/ab50c915c1604451a597f96291d02981-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/ab50c915c1604451a597f96291d02981-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 12 COLUMNS
At line 49 RHS
At line 57 BOUNDS
At line 65 ENDATA
Problem MODEL has 7 rows, 7 columns and 19 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 36.0017 - 0.00 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 36.0017 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                36.00166994
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.00
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.00   (Wallclock seconds):       0.00

Selected jobs: 3
Resources after job selection: {'_cores': 9223372036854775770, '_nodes': 93, '_job_count': 10}
Execute 3 jobs...

[Wed Oct 16 09:57:28 2024]
rule salmon:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz
    output: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf
    jobid: 1
    benchmark: benchmark/0053_P2017BB3S19R_S1.salmon.salmon.benchmark.tsv
    reason: Missing output files: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf; Input files updated by another job: fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz, fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz
    wildcards: sample=0053_P2017BB3S19R_S1
    threads: 16
    resources: tmpdir=<TBD>


        salmon quant         --index /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index         --geneMap /home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf         --libType A         --mates1 fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz         --mates2 fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz         --validateMappings         --threads 16         --output analyzed/salmon_0053_P2017BB3S19R_S1
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "salmon", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz", "fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz"], "output": ["analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {"salmon_index": "/home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/salmon_index", "gtf_file": "/home/fvhemert/projects/snaqs_files/reference_files/gencode_salmon/gencode.v46.annotation.gtf"}, "log": [], "threads": 16, "resources": {"tmpdir": "<TBD>"}, "jobid": 1}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'salmon:sample=0053_P2017BB3S19R_S1' --allowed-rules 'salmon' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz' '/home/fvhemert/projects/snaqs_files/snakemake_envs/fdd42d6c6ccfbbce54b3edf8d70cf513_' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/1.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/1.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 1 with external jobid '8183885'.

[Wed Oct 16 09:57:28 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt
    jobid: 9
    benchmark: benchmark/0053_P2017BB3S19R_S1_R1.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt; Input files updated by another job: fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz
    wildcards: sample=0053_P2017BB3S19R_S1, read=R1
    resources: tmpdir=<TBD>


        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz > qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "complexity_20mer_counter", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz"], "output": ["qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt"], "wildcards": {"sample": "0053_P2017BB3S19R_S1", "read": "R1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 9}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S19R_S1,read=R1' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/9.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/9.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 9 with external jobid '8183886'.

[Wed Oct 16 09:57:28 2024]
rule complexity_20mer_counter:
    input: fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz
    output: qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt
    jobid: 10
    benchmark: benchmark/0053_P2017BB3S19R_S1_R2.20mer_counter.complexity_20mer_counter.benchmark.tsv
    reason: Missing output files: qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt; Input files updated by another job: fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz
    wildcards: sample=0053_P2017BB3S19R_S1, read=R2
    resources: tmpdir=<TBD>


        perl scripts/20mer_counter.pl fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz > qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "complexity_20mer_counter", "local": false, "input": ["fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz"], "output": ["qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt"], "wildcards": {"sample": "0053_P2017BB3S19R_S1", "read": "R2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 10}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'complexity_20mer_counter:sample=0053_P2017BB3S19R_S1,read=R2' --allowed-rules 'complexity_20mer_counter' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/10.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/10.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 10 with external jobid '8183887'.
[Wed Oct 16 09:57:29 2024]
Finished job 19.
6 of 25 steps (24%) done
[Wed Oct 16 09:57:54 2024]
Finished job 11.
7 of 25 steps (28%) done
[Wed Oct 16 09:58:24 2024]
Finished job 12.
8 of 25 steps (32%) done
[Wed Oct 16 09:58:26 2024]
Finished job 9.
9 of 25 steps (36%) done
[Wed Oct 16 09:58:27 2024]
Finished job 10.
10 of 25 steps (40%) done
[Wed Oct 16 09:59:10 2024]
Finished job 3.
11 of 25 steps (44%) done
Removing temporary output fastq_trimmed/0053_P2017BB3S20R_S2_R2_001_val_2.fq.gz.
Removing temporary output fastq_trimmed/0053_P2017BB3S20R_S2_R1_001_val_1.fq.gz.
Resources before job selection: {'_cores': 9223372036854775791, '_nodes': 99, '_job_count': 10}
Ready jobs: 1
Select jobs to execute...
Selecting jobs to run using greedy solver.
Selected jobs: 1
Resources after job selection: {'_cores': 9223372036854775790, '_nodes': 98, '_job_count': 10}
Execute 1 jobs...

[Wed Oct 16 09:59:10 2024]
rule tpm4_normalization:
    input: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    output: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    jobid: 8
    benchmark: benchmark/0053_P2017BB3S20R_S2.tpm4_normalization_salmon.tpm4_normalization.benchmark.tsv
    reason: Missing output files: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results; Input files updated by another job: analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "tpm4_normalization", "local": false, "input": ["analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf"], "output": ["analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 8}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'tpm4_normalization:sample=0053_P2017BB3S20R_S2' --allowed-rules 'tpm4_normalization' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/8.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/8.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 8 with external jobid '8183922'.
[Wed Oct 16 09:59:44 2024]
Finished job 1.
12 of 25 steps (48%) done
Removing temporary output fastq_trimmed/0053_P2017BB3S19R_S1_R2_001_val_2.fq.gz.
Removing temporary output fastq_trimmed/0053_P2017BB3S19R_S1_R1_001_val_1.fq.gz.
Resources before job selection: {'_cores': 9223372036854775806, '_nodes': 99, '_job_count': 10}
Ready jobs: 2
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/955cce5d485c44c5b6356a7e25d76490-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/955cce5d485c44c5b6356a7e25d76490-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 21 RHS
At line 25 BOUNDS
At line 28 ENDATA
Problem MODEL has 3 rows, 2 columns and 6 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 4 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 4 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                4.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.11
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.20   (Wallclock seconds):       0.01

Selected jobs: 2
Resources after job selection: {'_cores': 9223372036854775804, '_nodes': 97, '_job_count': 10}
Execute 2 jobs...

[Wed Oct 16 09:59:44 2024]
rule tpm4_normalization:
    input: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf
    output: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results
    jobid: 6
    benchmark: benchmark/0053_P2017BB3S19R_S1.tpm4_normalization_salmon.tpm4_normalization.benchmark.tsv
    reason: Missing output files: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results; Input files updated by another job: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "tpm4_normalization", "local": false, "input": ["analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf"], "output": ["analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 6}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'tpm4_normalization:sample=0053_P2017BB3S19R_S1' --allowed-rules 'tpm4_normalization' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/6.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/6.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 6 with external jobid '8183926'.

[Wed Oct 16 09:59:44 2024]
rule rnaseq_multiqc:
    input: qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    output: qc/multiqc_report.html
    jobid: 21
    benchmark: benchmark/rnaseq_multiqc_salmon.rnaseq_multiqc.benchmark.tsv
    reason: Missing output files: qc/multiqc_report.html; Input files updated by another job: qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html
    resources: tmpdir=<TBD>


        export LC_ALL=en_US.UTF-8
        export LANG=en_US.UTF-8
        multiqc qc analyzed -o qc -f
        
General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "rnaseq_multiqc", "local": false, "input": ["qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip", "analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf", "analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf"], "output": ["qc/multiqc_report.html"], "wildcards": {}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 21}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'rnaseq_multiqc:' --allowed-rules 'rnaseq_multiqc' --cores 94 --attempt 1 --force-use-threads  --wait-for-files-file /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/snakejob.rnaseq_multiqc.21.sh.waitforfilesfile.txt --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/21.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/21.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 21 with external jobid '8183927'.
[Wed Oct 16 10:00:20 2024]
Finished job 21.
13 of 25 steps (52%) done
[Wed Oct 16 10:00:41 2024]
Finished job 8.
14 of 25 steps (56%) done
Resources before job selection: {'_cores': 9223372036854775806, '_nodes': 99, '_job_count': 10}
Ready jobs: 2
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/f7d4cc79144d4ae9bfc9d51ff453c539-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/f7d4cc79144d4ae9bfc9d51ff453c539-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 21 RHS
At line 25 BOUNDS
At line 28 ENDATA
Problem MODEL has 3 rows, 2 columns and 6 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 4 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 4 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                4.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.06
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.09   (Wallclock seconds):       0.01

Selected jobs: 2
Resources after job selection: {'_cores': 9223372036854775804, '_nodes': 97, '_job_count': 10}
Execute 2 jobs...

[Wed Oct 16 10:00:41 2024]
rule customqc_parameters:
    input: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    output: qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz
    jobid: 15
    benchmark: benchmark/0053_P2017BB3S20R_S2.customqc_parameters.customqc_parameters.benchmark.tsv
    reason: Missing output files: qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png; Input files updated by another job: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "customqc_parameters", "local": false, "input": ["analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results"], "output": ["qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 15}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'customqc_parameters:sample=0053_P2017BB3S20R_S2' --allowed-rules 'customqc_parameters' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/15.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/15.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 15 with external jobid '8183955'.

[Wed Oct 16 10:00:41 2024]
rule create_pcs_raw_files:
    input: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    output: pcs/test_pipeline/raw/0053_P2017BB3S20R_S2_rna-seq_salmonTPM4.tsv
    jobid: 7
    benchmark: benchmark/0053_P2017BB3S20R_S2.create_pcs_raw_files.create_pcs_raw_files.benchmark.tsv
    reason: Missing output files: pcs/test_pipeline/raw/0053_P2017BB3S20R_S2_rna-seq_salmonTPM4.tsv; Input files updated by another job: analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "create_pcs_raw_files", "local": false, "input": ["analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results"], "output": ["pcs/test_pipeline/raw/0053_P2017BB3S20R_S2_rna-seq_salmonTPM4.tsv"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 7}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'create_pcs_raw_files:sample=0053_P2017BB3S20R_S2' --allowed-rules 'create_pcs_raw_files' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/7.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/7.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 7 with external jobid '8183956'.
[Wed Oct 16 10:01:17 2024]
Finished job 7.
15 of 25 steps (60%) done
[Wed Oct 16 10:01:18 2024]
Finished job 6.
16 of 25 steps (64%) done
Resources before job selection: {'_cores': 9223372036854775806, '_nodes': 99, '_job_count': 10}
Ready jobs: 3
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/44f15623e1c74abe93f61157816bbb61-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/44f15623e1c74abe93f61157816bbb61-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 27 RHS
At line 31 BOUNDS
At line 35 ENDATA
Problem MODEL has 3 rows, 3 columns and 9 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 6 - 0.03 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 6 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                6.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.08
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.15   (Wallclock seconds):       0.10

Selected jobs: 3
Resources after job selection: {'_cores': 9223372036854775803, '_nodes': 96, '_job_count': 10}
Execute 3 jobs...

[Wed Oct 16 10:01:18 2024]
rule customqc_parameters:
    input: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results
    output: qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz
    jobid: 14
    benchmark: benchmark/0053_P2017BB3S19R_S1.customqc_parameters.customqc_parameters.benchmark.tsv
    reason: Missing output files: qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz; Input files updated by another job: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "customqc_parameters", "local": false, "input": ["analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results"], "output": ["qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 14}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'customqc_parameters:sample=0053_P2017BB3S19R_S1' --allowed-rules 'customqc_parameters' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/14.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/14.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 14 with external jobid '8183962'.

[Wed Oct 16 10:01:18 2024]
rule seqrun_expression_reports:
    input: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results, analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    output: /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_values.tsv
    jobid: 23
    benchmark: benchmark/seqrun_expression_reports.seqrun_expression_reports.benchmark.tsv
    reason: Missing output files: /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_log2values.tsv; Input files updated by another job: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results, analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "seqrun_expression_reports", "local": false, "input": ["analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results", "analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results"], "output": ["/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_log2values.tsv", "/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_log2values.tsv", "/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_log2values.tsv", "/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_values.tsv", "/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_values.tsv", "/gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_values.tsv"], "wildcards": {}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 23}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'seqrun_expression_reports:' --allowed-rules 'seqrun_expression_reports' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results' 'analyzed/0053_P2017BB3S20R_S2.genes.tpm4.results' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/23.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/23.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 23 with external jobid '8183963'.

[Wed Oct 16 10:01:18 2024]
rule create_pcs_raw_files:
    input: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results
    output: pcs/test_pipeline/raw/0053_P2017BB3S19R_S1_rna-seq_salmonTPM4.tsv
    jobid: 5
    benchmark: benchmark/0053_P2017BB3S19R_S1.create_pcs_raw_files.create_pcs_raw_files.benchmark.tsv
    reason: Missing output files: pcs/test_pipeline/raw/0053_P2017BB3S19R_S1_rna-seq_salmonTPM4.tsv; Input files updated by another job: analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "create_pcs_raw_files", "local": false, "input": ["analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results"], "output": ["pcs/test_pipeline/raw/0053_P2017BB3S19R_S1_rna-seq_salmonTPM4.tsv"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 5}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'create_pcs_raw_files:sample=0053_P2017BB3S19R_S1' --allowed-rules 'create_pcs_raw_files' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'analyzed/0053_P2017BB3S19R_S1.genes.tpm4.results' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/5.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/5.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 5 with external jobid '8183964'.
[Wed Oct 16 10:01:43 2024]
Finished job 23.
17 of 25 steps (68%) done
[Wed Oct 16 10:01:44 2024]
Finished job 5.
18 of 25 steps (72%) done
[Wed Oct 16 10:02:16 2024]
Finished job 15.
19 of 25 steps (76%) done
[Wed Oct 16 10:02:47 2024]
Finished job 14.
20 of 25 steps (80%) done
Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 10}
Ready jobs: 3
Select jobs to execute...
Welcome to the CBC MILP Solver 
Version: 2.10.12 
Build Date: Sep  3 2024 

command line - cbc /scratch-local/70716/8f4ef96319b64ee79f501ed3d3b536ad-pulp.mps -max -sec 10 -threads 2 -timeMode elapsed -branch -printingOptions all -solution /scratch-local/70716/8f4ef96319b64ee79f501ed3d3b536ad-pulp.sol (default strategy 1)
At line 2 NAME          MODEL
At line 3 ROWS
At line 8 COLUMNS
At line 27 RHS
At line 31 BOUNDS
At line 35 ENDATA
Problem MODEL has 3 rows, 3 columns and 9 elements
Coin0008I MODEL read with 0 errors
seconds was changed from 1e+100 to 10
threads was changed from 0 to 2
Option for timeMode changed from cpu to elapsed
Continuous objective value is 6 - 0.02 seconds
Cgl0004I processed model has 0 rows, 0 columns (0 integer (0 of which binary)) and 0 elements
Cbc3007W No integer variables - nothing to do
Cuts at root node changed objective from 6 to 1.79769e+308
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                6.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.11
Time (Wallclock seconds):       0.00

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.17   (Wallclock seconds):       0.01

Selected jobs: 3
Resources after job selection: {'_cores': 9223372036854775804, '_nodes': 97, '_job_count': 10}
Execute 3 jobs...

[Wed Oct 16 10:02:47 2024]
rule customqc_report:
    input: qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz
    output: qc/customqc_report.html, qc/customqc/cumulative_percentage_of_raw_reads.png, qc/customqc/normalized_refgene_pattern_all.png, qc/customqc/refgene_pattern_all.png
    jobid: 22
    benchmark: benchmark/customqc_report_salmon.customqc_report.benchmark.tsv
    reason: Missing output files: qc/customqc_report.html; Input files updated by another job: qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "customqc_report", "local": false, "input": ["qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_total.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_total.tsv.gz"], "output": ["qc/customqc_report.html", "qc/customqc/cumulative_percentage_of_raw_reads.png", "qc/customqc/normalized_refgene_pattern_all.png", "qc/customqc/refgene_pattern_all.png"], "wildcards": {}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 22}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'customqc_report:' --allowed-rules 'customqc_report' --cores 94 --attempt 1 --force-use-threads  --wait-for-files-file /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/snakejob.customqc_report.22.sh.waitforfilesfile.txt --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/22.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/22.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 22 with external jobid '8183987'.

[Wed Oct 16 10:02:48 2024]
rule qc_flagging:
    input: qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    output: qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json
    jobid: 13
    benchmark: benchmark/0053_P2017BB3S19R_S1.qc_flagging_salmon.qc_flagging.benchmark.tsv
    reason: Missing output files: qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json; Input files updated by another job: qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz
    wildcards: sample=0053_P2017BB3S19R_S1
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "qc_flagging", "local": false, "input": ["qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip", "qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip", "analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf", "analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf"], "output": ["qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json"], "wildcards": {"sample": "0053_P2017BB3S19R_S1"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 13}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'qc_flagging:sample=0053_P2017BB3S19R_S1' --allowed-rules 'qc_flagging' --cores 94 --attempt 1 --force-use-threads  --wait-for-files-file /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/snakejob.qc_flagging.13.sh.waitforfilesfile.txt --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/13.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/13.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 13 with external jobid '8183988'.

[Wed Oct 16 10:02:48 2024]
rule qc_flagging:
    input: qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf
    output: qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json
    jobid: 20
    benchmark: benchmark/0053_P2017BB3S20R_S2.qc_flagging_salmon.qc_flagging.benchmark.tsv
    reason: Missing output files: qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json; Input files updated by another job: qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png, qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html, qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt, qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt, qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png, qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip, qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz, qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html, qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz, qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip, qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip, qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz, qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz, qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip, qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz
    wildcards: sample=0053_P2017BB3S20R_S2
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "qc_flagging", "local": false, "input": ["qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_protein_coding_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.cumulative_percentage_of_raw_reads.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S20R_S2.tpm4_cumulative_plot.png", "qc/customqc/0053_P2017BB3S19R_S1.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.expression_heterogeneity.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern.png", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern.png", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_plot.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.refgene_pattern_score.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_biotypes.tsv.gz", "qc/customqc/0053_P2017BB3S19R_S1.sorted_tpm4_values.tsv.gz", "qc/customqc/0053_P2017BB3S20R_S2.sorted_tpm4_values.tsv.gz", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.html", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.html", "qc/fastqc/0053_P2017BB3S19R_S1_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S19R_S1_R2_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R1_001_fastqc.zip", "qc/fastqc/0053_P2017BB3S20R_S2_R2_001_fastqc.zip", "qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt", "qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001.fastq.gz_trimming_report.txt", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.html", "qc/trim_galore/0053_P2017BB3S19R_S1_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R1_001_val_1_fastqc.zip", "qc/trim_galore/0053_P2017BB3S19R_S1_R2_001_val_2_fastqc.zip", "qc/trim_galore/0053_P2017BB3S20R_S2_R2_001_val_2_fastqc.zip", "analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf", "analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf"], "output": ["qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json"], "wildcards": {"sample": "0053_P2017BB3S20R_S2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 20}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'qc_flagging:sample=0053_P2017BB3S20R_S2' --allowed-rules 'qc_flagging' --cores 94 --attempt 1 --force-use-threads  --wait-for-files-file /gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/snakejob.qc_flagging.20.sh.waitforfilesfile.txt --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/20.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/20.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 20 with external jobid '8183989'.
[Wed Oct 16 10:03:21 2024]
Finished job 22.
21 of 25 steps (84%) done
[Wed Oct 16 10:03:22 2024]
Finished job 13.
22 of 25 steps (88%) done
[Wed Oct 16 10:03:23 2024]
Finished job 20.
23 of 25 steps (92%) done
Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 10}
Ready jobs: 1
Select jobs to execute...
Selecting jobs to run using greedy solver.
Selected jobs: 1
Resources after job selection: {'_cores': 9223372036854775806, '_nodes': 99, '_job_count': 10}
Execute 1 jobs...

[Wed Oct 16 10:03:23 2024]
rule create_flagged_sampletable:
    input: qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json, qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json
    output: results/test_pipeline_samples.txt
    jobid: 24
    benchmark: benchmark/create_flagged_sampletable.create_flagged_sampletable.benchmark.tsv
    reason: Missing output files: results/test_pipeline_samples.txt; Input files updated by another job: qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json, qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json
    resources: tmpdir=<TBD>

General args: ['--force', '--target-files-omit-workdir-adjustment', '--keep-storage-local-copies', '--max-inventory-time 0', '--nocolor', '--notemp', '--no-hooks', '--nolock', '--ignore-incomplete', '', '--verbose ', '--rerun-triggers input params mtime code software-env', '', '', '--deployment-method conda', '--conda-frontend conda', '--conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs', '--conda-base-path /home/fvhemert/miniforge3', '', '', '', '--shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache', '', '--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/', '', '', '', '', '--printshellcmds ', '', '--latency-wait 60', '--scheduler ilp', '', '--local-storage-prefix .snakemake/storage', '--scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin', '', '', '--set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n', '--set-threads base64//c2FsbW9uPTE2', '', '--default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI=']
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "create_flagged_sampletable", "local": false, "input": ["qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json", "qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json"], "output": ["results/test_pipeline_samples.txt"], "wildcards": {}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "<TBD>"}, "jobid": 24}
cd /gpfs/home5/fvhemert/temp/test_pipeline && /home/fvhemert/miniforge3/envs/snaqs/bin/python3.12 -m snakemake --snakefile /gpfs/home5/fvhemert/temp/test_pipeline/Snakefile --target-jobs 'create_flagged_sampletable:' --allowed-rules 'create_flagged_sampletable' --cores 94 --attempt 1 --force-use-threads  --wait-for-files '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf' 'qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json' 'qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json' --force --target-files-omit-workdir-adjustment --keep-storage-local-copies --max-inventory-time 0 --nocolor --notemp --no-hooks --nolock --ignore-incomplete --verbose  --rerun-triggers input params mtime code software-env --deployment-method conda --conda-frontend conda --conda-prefix /home/fvhemert/projects/snaqs_files/snakemake_envs --conda-base-path /home/fvhemert/miniforge3 --shared-fs-usage input-output storage-local-copies persistence software-deployment sources source-cache --wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ --printshellcmds  --latency-wait 60 --scheduler ilp --local-storage-prefix .snakemake/storage --scheduler-solver-path /home/fvhemert/miniforge3/envs/snaqs/bin --set-resources base64//ZmFzdHFjOnBhcnRpdGlvbj1zdGFnaW5n --set-threads base64//c2FsbW9uPTE2 --default-resources base64//dG1wZGlyPXN5c3RlbV90bXBkaXI= --mode remote && touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/24.jobfinished' || (touch '/gpfs/home5/fvhemert/temp/test_pipeline/.snakemake/tmp.8x1wwatf/24.jobfailed'; exit 1)

sbatch: Single-node jobs run on a shared node by default. Add --exclusive if you want to use a node exclusively.
sbatch: A full node consists of 128 CPU cores, 229376 MiB of memory and 0 GPUs and can be shared by up to 8 jobs.
sbatch: By default shared jobs get 1792 MiB of memory per CPU core, unless explicitly overridden with --mem-per-cpu, --mem-per-gpu or --mem.
sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 24 with external jobid '8183997'.
[Wed Oct 16 10:03:43 2024]
Finished job 24.
24 of 25 steps (96%) done
Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100, '_job_count': 10}
Ready jobs: 1
Select jobs to execute...
Selecting jobs to run using greedy solver.
Selected jobs: 1
Resources after job selection: {'_cores': 9223372036854775806, '_nodes': 99, '_job_count': 10}
Execute 1 jobs...

[Wed Oct 16 10:03:43 2024]
localrule all:
    input: analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf, pcs/test_pipeline/raw/0053_P2017BB3S19R_S1_rna-seq_salmonTPM4.tsv, pcs/test_pipeline/raw/0053_P2017BB3S20R_S2_rna-seq_salmonTPM4.tsv, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json, qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json, qc/multiqc_report.html, qc/customqc_report.html, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_values.tsv, results/test_pipeline_samples.txt
    jobid: 0
    reason: Input files updated by another job: qc/customqc_report.html, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_log2values.tsv, analyzed/salmon_0053_P2017BB3S19R_S1/quant.genes.sf, qc/qc_flags/0053_P2017BB3S20R_S2_qcflags.json, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_log2values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM4_allgenes_values.tsv, qc/20mer_counter/0053_P2017BB3S20R_S2_R1_20mer_counter.txt, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_log2values.tsv, results/test_pipeline_samples.txt, qc/multiqc_report.html, pcs/test_pipeline/raw/0053_P2017BB3S20R_S2_rna-seq_salmonTPM4.tsv, pcs/test_pipeline/raw/0053_P2017BB3S19R_S1_rna-seq_salmonTPM4.tsv, qc/20mer_counter/0053_P2017BB3S19R_S1_R2_20mer_counter.txt, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_PCTPM_allgenes_values.tsv, /gpfs/home5/fvhemert/temp/test_pipeline/results/test_pipeline_TPM_allgenes_values.tsv, analyzed/salmon_0053_P2017BB3S20R_S2/quant.genes.sf, qc/20mer_counter/0053_P2017BB3S20R_S2_R2_20mer_counter.txt, qc/20mer_counter/0053_P2017BB3S19R_S1_R1_20mer_counter.txt, qc/qc_flags/0053_P2017BB3S19R_S1_qcflags.json
    resources: tmpdir=/scratch-local/70716

[Wed Oct 16 10:03:43 2024]
Finished job 0.
25 of 25 steps (100%) done
Complete log: .snakemake/log/2024-10-16T095626.528041.snakemake.log
unlocking
removing lock
removing lock
removed all locks

@cmeesters (Member)

This does not make any sense to me: scancel sbatch .... You can clearly see that all the code does is attempt to cancel job IDs. Is the cluster-generic code in the same environment, or configuration components thereof?

@freekvh (Author) commented Oct 16, 2024

This does not make any sense to me: scancel sbatch .... You can clearly see that all the code does is attempt to cancel job IDs. Is the cluster-generic code in the same environment, or configuration components thereof?

It is the same environment and the same pipeline; I just change the config.yaml. The cancelling happens because I hit ctrl-c on the main process when it appears to hang (no follow-up jobs are submitted). If I ctrl-c the cluster-generic workflow, it ends as expected:

sbatch: You will be charged for 16 CPUs, based on the number of CPUs and the amount memory that you've requested.
Submitted job 10 with external jobid '8189916'.
[Wed Oct 16 20:03:09 2024]
Finished job 19.
6 of 25 steps (24%) done
^CTerminating processes on user request, this might take some time.
No --cluster-cancel given. Will exit after finishing currently running jobs.
Complete log: .snakemake/log/2024-10-16T200207.048442.snakemake.log
WorkflowError:
At least one job did not complete successfully.
(snaqs) [fvhemert@int4 test_pipeline]$ 

It could be me... if you have a working config.yaml, I could also try that. I didn't find many examples...
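
For reference, a minimal sketch of a workflow profile for the slurm executor plugin; the profile directory ~/.config/snakemake/slurm_simple, the partition name, and the runtime value are placeholders (assumptions, not taken from this issue) to adapt to your cluster:

mkdir -p ~/.config/snakemake/slurm_simple
cat > ~/.config/snakemake/slurm_simple/config.yaml <<'EOF'
# minimal Slurm executor profile; all values below are placeholders
executor: slurm
jobs: 100
latency-wait: 60
default-resources:
  slurm_partition: "rome"
  runtime: 60   # wall time in minutes, so jobs are not submitted without a time limit
EOF

# run the workflow with this profile
snakemake --profile ~/.config/snakemake/slurm_simple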

@r-blanchet

Hello, I have the same problem, starting from slurm plugin version 0.10.0. Downgrading to 0.9 fixed the issue.

@freekvh (Author) commented Oct 17, 2024

Hello, I have the same problem, starting from slurm plugin version 0.10.0. Downgrading to 0.9 fixed the issue.

This prompted me to test...

  • 0.10.2 does not work: same issue as 0.11.0 (described above).
  • 0.9.0 does not work: only the initial jobs are submitted and finish; however, hitting ctrl-c does end the workflow correctly.
  • 0.8.0 works as expected! The workflow finishes and executes on the specified partitions with the requested resources.

@cmeesters let me know if you need more info to pinpoint this; for now I will keep going with 0.8.0.
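
As a stop-gap, pinning the plugin to the last release that worked here is one way to keep running while the regression is investigated; a sketch assuming a conda/mamba environment with the bioconda and conda-forge channels:

# downgrade only the executor plugin inside the active environment
mamba install -c conda-forge -c bioconda "snakemake-executor-plugin-slurm=0.8.0"

Afterwards, conda list should report 0.8.0 for the plugin; this works around the hang but does not address its underlying cause.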
