
No signature of method: java.lang.Boolean.call() is applicable for argument types #393

Closed
FerrenaAlexander opened this issue Nov 26, 2024 · 1 comment
Labels: bug (Something isn't working)

Comments

@FerrenaAlexander

Description of the bug

Hello, I am getting this error while trying to run nf-core/atacseq.

nf-core/atacseq execution completed unsuccessfully!
The exit status of the task that caused the workflow execution to fail was: null.

The full error message was:

Error executing process > 'NFCORE_ATACSEQ:ATACSEQ:FASTQ_FASTQC_UMITOOLS_TRIMGALORE:TRIMGALORE (NoPFOA_LPS_REP2_T1)'

Caused by:
  No signature of method: java.lang.Boolean.call() is applicable for argument types: (_nf_config_c7082799$_run_closure3$_closure4$_closure5) values: [_nf_config_c7082799$_run_closure3$_closure4$_closure5@718b8b52]
Possible solutions: wait(), any(), wait(long), and(java.lang.Boolean), each(groovy.lang.Closure), tap(groovy.lang.Closure)

I am using Nextflow version 23.04.4 and pipeline revision 2.1.2, with a custom config file modified from the NYU BigPurple profile; the full config, which uses Singularity, is included below. The "test" profile runs successfully when I pass the same custom config with -c.
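
For reference, this kind of MissingMethodException means Groovy ended up invoking a Boolean value as if it were callable, passing a closure as the argument. Below is a minimal Groovy sketch of that pattern (my own illustration with made-up names, not taken from the pipeline or from my config) that raises the same class of error:

def n = 14

def label = {
    if (n <= 12) {
        'small'
    }
    else (n > 12) {      // Groovy parses this as (n > 12).call({ 'large' }), i.e. calling a Boolean
        'large'
    }
}

label()   // -> MissingMethodException: No signature of method: java.lang.Boolean.call()
          //    is applicable for argument types: (...closure...)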

Command used and terminal output

module load singularity/3.11.5
module load squashfs-tools/4.6.1
module load nextflow/23.04.4




#date
d1=$(date +%s)

cd /gpfs/data/newmanlab/ATACseq_aug2024/nfcore_results/atactest_2024.11.26



echo "submitting runscript"


#prepare paths:

#sample sheet
#samplesheet=/gpfs/data/newmanlab/ATACseq_aug2024/nfcore_results/SharrineATAC_nfcore_sampsheet.csv
samplesheet=/gpfs/data/newmanlab/ATACseq_aug2024/nfcore_results/SharrineATAC_nfcore_sampsheet_noControl.csv


#reference --> using a custom reference; can also just specify "--genome" if not using a custom one
fasta=/gpfs/data/newmanlab/ATACseq_aug2024/refs/Homo_sapiens.GRCh38.dna.primary_assembly.fa
gtf=/gpfs/data/newmanlab/ATACseq_aug2024/refs/Homo_sapiens.GRCh38.113.gtf


#custom config, set some resource limits and tell which queues
custom_config=/gpfs/data/newmanlab/ATACseq_aug2024/nfcore_results/custom_config.config


## for ATAC, we must provide either read_length or macs_gsize
# read_length=51
# * --read_length: '51' is not a valid choice (Available choices: 50, 75, 100, 150, 200)
read_length=50

printf '\n\n\nLAUNCHING RUN\n\n\n'


nextflow run nf-core/atacseq \
  -r 2.1.2 \
  --input $samplesheet \
  --fasta $fasta \
  --gtf $gtf \
  --read_length $read_length \
  --outdir results/ \
  -c $custom_config

Relevant files

Below is my custom_config file; it is copied and modified from the NYU BigPurple profile:

singularityDir = "/gpfs/scratch/${USER}/singularity_images_nextflow"
 
params {
    config_profile_description = """
    NYU School of Medicine BigPurple cluster profile provided by nf-core/configs.
    module load both singularity/3.1 and squashfs-tools/4.3 before running the pipeline with this profile!!
    Run from your scratch or lab directory - Nextflow makes a lot of files!!
    Also consider running the pipeline on a compute node (srun --pty /bin/bash -t=01:00:00) the first time, as it will be pulling the docker image, which will be converted into a singularity image, which is heavy on the login node and will take some time. Subsequent runs can be done on the login node, as the docker image will only be pulled and converted once. By default the images will be stored in ${singularityDir}
    """.stripIndent()
    config_profile_contact     = 'Tobias Schraink (@tobsecret)'
    config_profile_url         = 'https://github.com/nf-core/configs/blob/master/docs/bigpurple.md'
}
 
singularity {
    enabled    = true
    autoMounts = true
    cacheDir   = singularityDir
}
 
process {
    beforeScript = """
    module load singularity/3.11.5
    module load squashfs-tools/4.6.1
    """.stripIndent()
    executor       = 'slurm'
    resourceLimits = [
        cpus: 16,
        memory: 64.GB,
        time: 120.h
    ]
    queue = {
        if (task.time <= 12.h) {
            'cpu_short'
        }
        else (task.time > 12.h) {
            'cpu_medium'
        }
    }
}
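
One thing worth noting about the queue block above: directives assigned a closure are dynamic directives, which Nextflow evaluates once per task at run time (with task.time, task.attempt, etc. available). That is why the config loads without complaint and the error only appears when a process such as TRIMGALORE actually runs. A minimal sketch of a working dynamic directive of this kind (values are illustrative, not from my setup):

process {
    // re-evaluated for every task; scales memory with the retry attempt number
    memory = { 2.GB * task.attempt }
}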

System information

  • Nextflow version: 23.04.4
  • Hardware: HPC
  • Executor: SLURM
  • Container engine: Singularity
  • OS: Red Hat Enterprise Linux version 8.8
  • Version of nf-core/atacseq: 2.1.2
FerrenaAlexander added the bug label on Nov 26, 2024
@FerrenaAlexander (Author)

Looking more closely at my config file, I realized I had made an error in the queue selection block: replacing else (task.time > 12.h) with just else appears to have fixed this. I think that parenthesized condition after the else was what triggered the Java error. I guess the test profile worked because it caused the pipeline to simply ignore my custom config.
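
For anyone who hits the same error, here is a sketch of the corrected block (the queue names and the 12-hour cutoff are just what our cluster uses):

process {
    // everything else in this block is unchanged; only the queue closure differs
    queue = {
        if (task.time <= 12.h) {
            'cpu_short'
        }
        else {
            'cpu_medium'
        }
    }
}

The same selection could also be written as a one-line ternary, e.g. queue = { task.time <= 12.h ? 'cpu_short' : 'cpu_medium' }.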

Thanks for all of your hard work!
