diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 9cdc30bcd..1b90d8bf0 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -1,136 +1,111 @@ -**abims** @lecorguille -**adcra** @kalayaneech -**alice** @osteobjorn -**apollo** @coh.org -**aws_tower** @ggabernet -**awsbatch** @apeltzer -**azurebatch** @adamrtalbot -**azurebatchdev** @abhi18av -**bi** @apeltzer -**bigpurple** @tobsecret -**binac** @apeltzer -**biohpc_gen** @phue -**biowulf** @hpc.nih.gov' -**cambridge** @cam.ac.uk -**cbe** @phue -**ccga_dx** @marchoeppner -**ccga_med** @marchoeppner -**cedars** @rajewski -**ceres** @MillironX -**cfc** @FriederikeHanssen -**cfc_dev** @FriederikeHanssen -**cheaha** @uab.edu -**computerome** @marcmtk -**crg** @joseespinosa -**crick** @chris-cheshire -**crukmi** @sppearce -**czbiohub_aws** @olgabot -**denbi_qbic** @apeltzer -**dkfz** @dkfz-heidelberg.de' -**ebc** @marcel-keller -**ebi_codon** @saulpierotti-ebi -**ebi_codon_slurm** @saulpierotti-ebi -**eddie** @ameynert -**engaging** @PhilPalmer -**ethz_euler** @hest.ethz.ch -**eva** @jfy133 -**fgcz** @fgcz.ethz.ch" -**fub_curta** @zedat.fu-berlin.de' -**genotoul** @inra.fr' -**genouest** @abretaud -**gis** @andreas-wilm -**google** @evanfloden -**googlebatch** @hnawar' -**googlels** @hnawar' -**hasta** config_profile_contact = 'Clinical Genomics, Stockholm' -**hki** @jfy133 @jfy133 @jfy133 @jfy133 -**hypatia** @lusacristan -**icr_davros** @adrlar -**ifb_core** config_profile_contact = 'https://community.france-bioinformatique.fr' -**imperial** @imperial.ac.uk -**incliva** @incliva.es' -**ipop_up** @parisepigenetics.com' -**janelia** @janelia.hhmi.org -**jax** @flynnb -**ku_sund_dangpu** @sund.ku.dk>' -**leicester** @cam.ac.uk' -**lugh** @BarryDigby -**maestro** @pierrespc -**mana** config_profile_contact = 'Cedric Arisdakessian' -**marvin** @gmail.com (Pablo Carrion -**medair** @gu.se -**mjolnir_globe** @ashildv -**mpcdf** @jfy133 -**munin** @maxulysse -**nci_gadi** @mattdton -**nu_genomics** @NUjon -**oist** @oist.jp>' -**pasteur** @rplanel -**pawsey_nimbus** @SarahBeecroft' -**pawsey_setonix** @georgiesamaha -**pdc_kth** @pontus -**phoenix** @apeltzer -**binac** @apeltzer -**uppmax** @lnu.se -**aws_tower** @emiller88 -**crick** @ChristopherBarrington -**eva** @jfy133 @jfy133 @jfy133 -**maestro** -**mpcdf** @jfy133 @jfy133 -**hki** @jfy133 -**engaging** @PhilPalmer -**eva** @jfy133 -**crg** @joseespinosa -**hasta** -**hasta** -**munin** -**azurebatch_pools_Edv4** @vsmalladi -**eddie** -**mpcdf** @jfy133 -**utd_sysbio** @emiller88 -**munin** @praveenraj2018 -**cfc** @FriederikeHanssen -**eddie** -**eva** @jfy133 -**icr_davros** -**munin** @maxulysse -**uppmax** @MaxUlysse -**imperial** config_profile_contact = 'NA' -**eva** @jfy133 -**hasta** @sofstam -**eddie** -**genomes** -**prince** @tobsecret -**psmn** @l-modolo -**rosalind** config_profile_contact = 'Theo Portlock' -**rosalind_uge** @gregorysprenger -**sage** @BrunoGrandePhD -**sahmri** @sahmri.com -**sanbi_ilifu** @pvanheus -**sanger** @priyanka-surana -**scw** @bangor.ac.uk' -**seawulf** @davidecarlson -**seg_globe** @ashildv -**software_license** @maxulysse -**tigem** @giusmar -**tubingen_apg** @sc13-bioinf -**tuos_stanage** @sheffield.ac.uk -**ucd_sonic** @brucemoran -**ucl_myriad** @ucl.ac.uk -**uct_hpc** @kviljoen -**uge** @gregorysprenger -**unc_lccc** @alanhoyle -**unibe_ibu** @bioinformatics.unibe.ch" -**uod_hpc** @dundee.ac.uk -**uppmax** @ewels -**utd_ganymede** @emiller88 -**utd_sysbio** @emiller88 -**uw_hyak_pedslabs** @CarsonJM -**uzh** @apeltzer -**vai** @njspix 
-**vsc_kul_uhasselt** @kuleuven.be' @kuleuven.be' @kuleuven.be' -**vsc_ugent** @nvnieuwk @matthdsm ict@cmgg.be -**wcm** @DoaneAS -**wehi** @wehi.edu.au -**wustl_htcf** @wustl.edu>" -**xanadu** @uconn.edu' +**/abims** @lecorguille +**/adcra** @kalayaneech +**/alice** @bbartholdy +**/apollo** @drejom +**/aws_tower** @ggabernet +**/awsbatch** @apeltzer +**/azurebatch** @adamrtalbot +**/azurebatchdev** @abhi18av +**/bi** @apeltzer +**/bigpurple** @tobsecret +**/binac** @apeltzer +**/biohpc_gen** @phue +**/biowulf** @qiyubio +**/cambridge** @EmelineFavreau +**/cbe** @phue +**/ccga_dx** @marchoeppner +**/ccga_med** @marchoeppner +**/cedars** @rajewski +**/ceres** @MillironX +**/cfc** @FriederikeHanssen +**/cfc_dev** @FriederikeHanssen +**/cheaha** @lianov @atrull314 +**/computerome** @marcmtk +**/crg** @joseespinosa +**/crick** @chris-cheshire @ChristopherBarrington +**/crukmi** @sppearce +**/czbiohub_aws** @olgabot +**/denbi_qbic** @apeltzer +**/dkfz** @kubranarci +**/ebc** @marcel-keller +**/ebi_codon** @saulpierotti +**/ebi_codon_slurm** @saulpierotti +**/eddie** @ameynert +**/engaging** @PhilPalmer +**/ethz_euler** @jpadesousa +**/eva** @jfy133 +**/fgcz** @zajacn +**/fub_curta** @wassimsalam01 +**/genotoul** @chklopp +**/genouest** @abretaud +**/gis** @andreas-wilm +**/google** @FIXME +**/googlebatch** @hnawar +**/googlels** @hnawar +**/hki** @jfy133 +**/hypatia** @lusacristan +**/icr_davros** @adrlar +**/ifb_core** @FIXME +# **imperial** @FIXME +# **incliva** @FIXME +# **ipop_up** @FIXME +# **janelia** @FIXME +**/jax** @flynnb +# **ku_sund_dangpu** @FIXME +# **leicester** @FIXME +**/lugh** @BarryDigby +**/maestro** @pierrespc +# **mana** @FIXME +# **marvin** @FIXME +# **medair** @FIXME +**/mjolnir_globe** @ashildv +**/mpcdf** @jfy133 +**/munin** @praveenraj2018 @maxulysse +**/nci_gadi** @mattdton +**/nu_genomics** @RoganGrant @NUjon +# **oist** @FIXME +**/pasteur** @rplanel +**/pawsey_nimbus** @marcodelapierre @SarahBeecroft +**/pawsey_setonix** @georgiesamaha @SarahBeecroft +**/pdc_kth** @pontus +**/phoenix** @apeltzer +**/uppmax** @ewels @MaxUlysse +**/demultiplex** @nf-core/demultiplex +**/azurebatch_pools_Edv4** @vsmalladi +# **icr_davros** @FIXME +**/hasta** @sofstam +**/psmn** @l-modolo +**/rosalind** @theoportlock +**/rosalind_uge** @gregorysprenger +**/sage** @BrunoGrandePhD +# **sahmri** @FIXME +**/ilifu** @pvanheus +**/sanger** @priyanka-surana +# **scw** @FIXME +**/seawulf** @davidecarlson +**/seg_globe** @ashildv +**/software_license** @maxulysse +**/tigem** @giusmar +**/tubingen_apg** @sc13-bioinf +# **tuos_stanage** @FIXME +**/ucd_sonic** @brucemoran +# **ucl_myriad** @FIXME +**/uct_hpc** @kviljoen +**/uge** @gregorysprenger +**/unc_lccc** @alanhoyle +**/unc_longleaf** @ahepperla +# **unibe_ibu** @FIXME +# **/uod_hpc** @FIXME +**/utd_europa** @edmundmiller +**/utd_ganymede** @edmundmiller @alyssa-ab +**/utd_sysbio** @edmundmiller +**/uw_hyak_pedslabs** @CarsonJM +**/uzh** @apeltzer +**/vai** @njspix +# **/vsc_kul_uhasselt** @FIXME +**/vsc_ugent** @nvnieuwk @matthdsm +**/wcm** @DoaneAS +# **/wehi** @FIXME +# **/wustl_htcf** @FIXME +# **/xanadu** @FIXME +**/tufts** @zhan4429 diff --git a/.github/workflows/fix-linting.yml b/.github/workflows/fix-linting.yml index 9b7e51629..2f1843890 100644 --- a/.github/workflows/fix-linting.yml +++ b/.github/workflows/fix-linting.yml @@ -4,7 +4,7 @@ on: types: [created] jobs: - deploy: + fix-linting: # Only run if comment is on a PR with the main repo, and if it contains the magic keywords if: > contains(github.event.comment.html_url, '/pull/') 
&& @@ -13,10 +13,17 @@ jobs: runs-on: ubuntu-latest steps: # Use the @nf-core-bot token to check out so we can push later - - uses: actions/checkout@v3 + - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4 with: token: ${{ secrets.nf_core_bot_auth_token }} + # indication that the linting is being fixed + - name: React on comment + uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4 + with: + comment-id: ${{ github.event.comment.id }} + reactions: eyes + # Action runs on the issue comment, so we don't get the PR by default # Use the gh cli to check out the PR - name: Checkout Pull Request @@ -24,32 +31,59 @@ jobs: env: GITHUB_TOKEN: ${{ secrets.nf_core_bot_auth_token }} - - uses: actions/setup-node@v2 + # Install and run pre-commit + - uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5 + with: + python-version: 3.11 - - name: Install Prettier - run: npm install -g prettier @prettier/plugin-php + - name: Install pre-commit + run: pip install pre-commit - # Check that we actually need to fix something - - name: Run 'prettier --check' - id: prettier_status - run: | - if prettier --check ${GITHUB_WORKSPACE}; then - echo "::set-output name=result::pass" - else - echo "::set-output name=result::fail" - fi + - name: Run pre-commit + id: pre-commit + run: pre-commit run --all-files + continue-on-error: true - - name: Run 'prettier --write' - if: steps.prettier_status.outputs.result == 'fail' - run: prettier --write ${GITHUB_WORKSPACE} + # indication that the linting has finished + - name: react if linting finished successfully + if: steps.pre-commit.outcome == 'success' + uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4 + with: + comment-id: ${{ github.event.comment.id }} + reactions: "+1" - name: Commit & push changes - if: steps.prettier_status.outputs.result == 'fail' + id: commit-and-push + if: steps.pre-commit.outcome == 'failure' run: | git config user.email "core@nf-co.re" git config user.name "nf-core-bot" git config push.default upstream git add . git status - git commit -m "[automated] Fix linting with Prettier" + git commit -m "[automated] Fix code linting" git push + + - name: react if linting errors were fixed + id: react-if-fixed + if: steps.commit-and-push.outcome == 'success' + uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4 + with: + comment-id: ${{ github.event.comment.id }} + reactions: hooray + + - name: react if linting errors were not fixed + if: steps.commit-and-push.outcome == 'failure' + uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4 + with: + comment-id: ${{ github.event.comment.id }} + reactions: confused + + - name: comment if linting errors were not fixed + if: steps.commit-and-push.outcome == 'failure' + uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4 + with: + issue-number: ${{ github.event.issue.number }} + body: | + @${{ github.actor }} I tried to fix the linting errors, but it didn't work. Please fix them manually. + See [CI log](https://github.com/nf-core/configs/actions/runs/${{ github.run_id }}) for more details.
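Contributors can reproduce the checks this workflow runs in CI locally before pushing — a minimal sketch, assuming Python 3 is available and the repository's `.pre-commit-config.yaml` (added later in this diff) is present in the working directory:

```bash
# Install pre-commit and run every configured hook against the whole repository,
# mirroring the `pre-commit run --all-files` step used by the workflows above.
pip install pre-commit
pre-commit run --all-files

# Optionally register pre-commit as a git hook so the same checks run on each commit.
pre-commit install
```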
diff --git a/.github/workflows/linting.yml b/.github/workflows/linting.yml index 6bf333438..34e8cb0fd 100644 --- a/.github/workflows/linting.yml +++ b/.github/workflows/linting.yml @@ -1,22 +1,21 @@ -name: Code Linting +name: Lint tools code formatting on: - pull_request: push: branches: - master + pull_request: + +# Cancel if a newer run is started +concurrency: + group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }} + cancel-in-progress: true jobs: - prettier: + pre-commit: runs-on: ubuntu-latest + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} steps: - - name: Check out repository - uses: actions/checkout@v2 - - - name: Install NodeJS - uses: actions/setup-node@v2 - - - name: Install Prettier - run: npm install -g prettier - - - name: Run Prettier --check - run: prettier --check ${GITHUB_WORKSPACE} + - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4 + - uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # v5 + - uses: pre-commit/action@v3.0.1 diff --git a/.github/workflows/linting_comment.yml b/.github/workflows/linting_comment.yml new file mode 100644 index 000000000..2f19378f0 --- /dev/null +++ b/.github/workflows/linting_comment.yml @@ -0,0 +1,28 @@ +name: nf-core linting comment +# This workflow is triggered after the linting action is complete +# It posts an automated comment to the PR, even if the PR is coming from a fork {%- raw %} + +on: + workflow_run: + workflows: ["nf-core linting"] + +jobs: + test: + runs-on: ubuntu-latest + steps: + - name: Download lint results + uses: dawidd6/action-download-artifact@f6b0bace624032e30a85a8fd9c1a7f8f611f5737 # v3 + with: + workflow: linting.yml + workflow_conclusion: completed + + - name: Get PR number + id: pr_number + run: echo "pr_number=$(cat linting-logs/PR_number.txt)" >> $GITHUB_OUTPUT + + - name: Post PR comment + uses: marocchino/sticky-pull-request-comment@331f8f5b4215f0445d3c07b4967662a32a2d3e31 # v2 + with: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + number: ${{ steps.pr_number.outputs.pr_number }} + path: linting-logs/lint_results.md diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index 68fa28e58..11a410cca 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -2,6 +2,11 @@ name: Configs tests on: [pull_request, push] +# Cancel if a newer run is started +concurrency: + group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }} + cancel-in-progress: true + jobs: test_all_profiles: runs-on: ubuntu-latest @@ -33,6 +38,7 @@ jobs: - "adcra" - "alice" - "apollo" + - "arcc" - "aws_tower" - "awsbatch" - "azurebatch" @@ -90,6 +96,7 @@ jobs: - "ku_sund_dangpu" - "leicester" - "lugh" + - "m3c" - "maestro" - "mana" - "marvin" @@ -108,6 +115,7 @@ jobs: - "pdc_kth" - "phoenix" - "psmn" + - "qmul_apocrita" - "rosalind" - "rosalind_uge" - "sage" @@ -120,7 +128,9 @@ jobs: - "software_license" - "tigem" - "tubingen_apg" + - "tufts" - "tuos_stanage" + - "ucl_cscluster" - "ucl_myriad" - "uct_hpc" - "ucd_sonic" @@ -130,24 +140,27 @@ jobs: - "unc_longleaf" - "uod_hpc" - "uppmax" + - "utd_europa" - "utd_ganymede" - "utd_sysbio" - "uw_hyak_pedslabs" - "uzh" - "uzl_omics" - "vai" + - "vsc_calcua" - "vsc_kul_uhasselt" - "vsc_ugent" - "wehi" - "wustl_htcf" - "xanadu" + - "york_viking" steps: - uses: actions/checkout@v4 - name: Install Nextflow - run: | - wget -qO- get.nextflow.io | bash - sudo mv nextflow /usr/local/bin/ + uses: nf-core/setup-nextflow@v2 + with: + version: "latest-everything" - name: Check ${{ 
matrix.profile }} profile env: SCRATCH: "~" diff --git a/.gitignore b/.gitignore index 8aa0735bb..ce13dbe0d 100644 --- a/.gitignore +++ b/.gitignore @@ -3,4 +3,4 @@ work/ data/ results/ .DS_Store -*.code-workspace \ No newline at end of file +*.code-workspace diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml new file mode 100644 index 000000000..edf1bc116 --- /dev/null +++ b/.pre-commit-config.yaml @@ -0,0 +1,29 @@ +repos: + - repo: https://github.com/astral-sh/ruff-pre-commit + rev: v0.2.2 + hooks: + - id: ruff # linter + args: [--fix, --exit-non-zero-on-fix] # sort imports and fix + - id: ruff-format # formatter + + - repo: https://github.com/pre-commit/mirrors-prettier + rev: "v3.1.0" + hooks: + - id: prettier + + - repo: https://github.com/editorconfig-checker/editorconfig-checker.python + rev: "2.7.3" + hooks: + - id: editorconfig-checker + alias: ec + + - repo: local + hooks: + - id: codeowners-validator + name: CODEOWNERS validator + language: docker_image + pass_filenames: false + entry: > + -e REPOSITORY_PATH="." + -e CHECKS="files,duppatterns,syntax" + ghcr.io/mszostok/codeowners-validator:v0.7.4 diff --git a/README.md b/README.md index d0146726e..112dc021d 100644 --- a/README.md +++ b/README.md @@ -91,6 +91,7 @@ Currently documentation is available for the following systems: - [ADCRA](docs/adcra.md) - [ALICE](docs/alice.md) - [APOLLO](docs/apollo.md) +- [ARCC](docs/arcc.md) - [AWSBATCH](docs/awsbatch.md) - [AWS_TOWER](docs/aws_tower.md) - [AZUREBATCH](docs/azurebatch.md) @@ -146,6 +147,7 @@ Currently documentation is available for the following systems: - [Jex](docs/jex.md) - [KU SUND DANGPU](docs/ku_sund_dangpu.md) - [LUGH](docs/lugh.md) +- [M3C](docs/m3c.md) - [MAESTRO](docs/maestro.md) - [Mana](docs/mana.md) - [MARVIN](docs/marvin.md) @@ -164,18 +166,22 @@ Currently documentation is available for the following systems: - [PDC](docs/pdc_kth.md) - [PHOENIX](docs/phoenix.md) - [PSMN](docs/psmn.md) +- [QMUL_APOCRITA](docs/qmul_apocrita.md) - [ROSALIND](docs/rosalind.md) - [ROSALIND_UGE](docs/rosalind_uge.md) - [SAGE BIONETWORKS](docs/sage.md) - [SANGER](docs/sanger.md) +- [SEATTLECHILDRENS](docs/seattlechildrens.md) - [SEAWULF](docs/seawulf.md) - [SEG_GLOBE](docs/seg_globe.md) - [self-hosted-runner](docs/self-hosted-runner.md) - [Super Computing Wales](docs/scw.md) - [TIGEM](docs/tigem.md) - [TUBINGEN_APG](docs/tubingen_apg.md) +- [TUFTS](docs/tufts.md) - [TUOS_STANAGE](docs/tuos_stanage.md) - [UCD_SONIC](docs/ucd_sonic.md) +- [UCL_CSCLUSTER](docs/ucl_cscluster.md) - [UCL_MYRIAD](docs/ucl_myriad.md) - [UCT_HPC](docs/uct_hpc.md) - [UNC_LCCC](docs/unc_lccc.md) @@ -184,17 +190,20 @@ Currently documentation is available for the following systems: - [UNIBE_IBU](docs/unibe_ibu.md) - [UOD_HPC](docs/uod_hpc.md) - [UPPMAX](docs/uppmax.md) +- [UTD_EUROPA](docs/utd_europa.md) - [UTD_GANYMEDE](docs/utd_ganymede.md) - [UTD_SYSBIO](docs/utd_sysbio.md) - [UW_HYAK_PEDSLABS](docs/uw_hyak_pedslabs.md) - [UZH](docs/uzh.md) - [UZL_OMICS](docs/uzl_omics.md) - [VAI](docs/vai.md) +- [VSC_CALCUA](docs/vsc_calcua.md) - [VSC_KUL_UHASSELT](docs/vsc_kul_uhasselt.md) - [VSC_UGENT](docs/vsc_ugent.md) - [WEHI](docs/wehi.md) - [WUSTL_HTCF](docs/wustl_htcf.md) - [XANADU](docs/xanadu.md) +- [YORK_VIKING](docs/york_viking.md) ### Uploading to `nf-core/configs` diff --git a/bin/cchecker.py b/bin/cchecker.py index cac7e4aa2..10b8d54c5 100644 --- a/bin/cchecker.py +++ b/bin/cchecker.py @@ -6,7 +6,6 @@ ####################################################################### 
####################################################################### -import os import sys import argparse import re @@ -18,13 +17,17 @@ ############################################ ############################################ -Description = 'Double check custom config file and github actions file to test all cases' -Epilog = """Example usage: python cchecker.py """ +Description = ( + "Double check custom config file and github actions file to test all cases" +) +Epilog = ( + """Example usage: python cchecker.py """ +) argParser = argparse.ArgumentParser(description=Description, epilog=Epilog) ## REQUIRED PARAMETERS -argParser.add_argument('CUSTOM_CONFIG', help="Input nfcore_custom.config.") -argParser.add_argument('GITHUB_CONFIG', help="Input Github Actions YAML") +argParser.add_argument("CUSTOM_CONFIG", help="Input nfcore_custom.config.") +argParser.add_argument("GITHUB_CONFIG", help="Input Github Actions YAML") args = argParser.parse_args() @@ -34,27 +37,26 @@ ############################################ ############################################ -def check_config(Config, Github): - regex = 'includeConfig*' - ERROR_STR = 'ERROR: Please check config file! Did you really update the profiles?' +def check_config(Config, Github): + regex = "includeConfig*" ## CHECK Config First config_profiles = set() - with open(Config, 'r') as cfg: + with open(Config, "r") as cfg: for line in cfg: if re.search(regex, line): - hit = line.split('/')[2].split('.')[0] + hit = line.split("/")[2].split(".")[0] config_profiles.add(hit.strip()) ### Check Github Config now tests = set() ### Ignore these profiles - ignore_me = ['czbiohub_aws'] + ignore_me = ["czbiohub_aws"] tests.update(ignore_me) # parse yaml GitHub actions file try: - with open(Github, 'r') as ghfile: + with open(Github, "r") as ghfile: wf = yaml.safe_load(ghfile) profile_list = wf["jobs"]["profile_test"]["strategy"]["matrix"]["profile"] except Exception as e: @@ -67,9 +69,12 @@ def check_config(Config, Github): ###Check if sets are equal try: assert tests == config_profiles - except (AssertionError): - print("Tests don't seem to test these profiles properly. Please check whether you added the profile to the Github Actions testing YAML.\n") + except AssertionError: + print( + "Tests don't seem to test these profiles properly. Please check whether you added the profile to the Github Actions testing YAML.\n" + ) print(config_profiles.symmetric_difference(tests)) sys.exit(1) -check_config(Config=args.CUSTOM_CONFIG,Github=args.GITHUB_CONFIG) + +check_config(Config=args.CUSTOM_CONFIG, Github=args.GITHUB_CONFIG) diff --git a/conf/abims.config b/conf/abims.config index 0c93e70f6..312ca4607 100644 --- a/conf/abims.config +++ b/conf/abims.config @@ -1,25 +1,25 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'The ABiMS cluster profile' - config_profile_contact = 'Gildas Le Corguillé (@lecorguille)' - config_profile_url = 'https://abims.sb-roscoff.fr' + config_profile_description = 'The ABiMS cluster profile' + config_profile_contact = 'Gildas Le Corguillé (@lecorguille)' + config_profile_url = 'https://abims.sb-roscoff.fr' } singularity { - enabled = true - autoMounts = true - runOptions = '-B /shared:/shared' - cacheDir = "/shared/software/singularity/images/nf-core/" + enabled = true + autoMounts = true + runOptions = '-B /shared:/shared' + cacheDir = "/shared/software/singularity/images/nf-core/" } process { - executor = 'slurm' - queue = { task.memory <= 250.GB ? (task.time <= 24.h ? 
'fast' : 'long') : 'bigmem' } + executor = 'slurm' + queue = { task.memory <= 250.GB ? (task.time <= 24.h ? 'fast' : 'long') : 'bigmem' } } params { - igenomes_ignore = true - max_memory = 750.GB - max_cpus = 200 - max_time = 30.d + igenomes_ignore = true + max_memory = 750.GB + max_cpus = 200 + max_time = 30.d } diff --git a/conf/adcra.config b/conf/adcra.config index 8ed7f6493..430a354e9 100644 --- a/conf/adcra.config +++ b/conf/adcra.config @@ -5,36 +5,36 @@ */ params { - config_profile_name = 'adcra' - config_profile_description = 'CRA HPC profile provided by nf-core/configs' - config_profile_contact = 'Kalayanee Chairat (@kalayaneech)' - config_profile_url = 'https://bioinformatics.kmutt.ac.th/' - } - -params { - max_cpus = 16 - max_memory = 128.GB - max_time = 120.h + config_profile_name = 'adcra' + config_profile_description = 'CRA HPC profile provided by nf-core/configs' + config_profile_contact = 'Kalayanee Chairat (@kalayaneech)' + config_profile_url = 'https://bioinformatics.kmutt.ac.th/' + } + +params { + max_cpus = 16 + max_memory = 128.GB + max_time = 120.h } // Specify the job scheduler -executor { - name = 'slurm' - queueSize = 20 - submitRateLimit = '6/1min' +executor { + name = 'slurm' + queueSize = 20 + submitRateLimit = '6/1min' } singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } process { - scratch = true - queue = 'unlimit' - queueStatInterval = '10 min' - maxRetries = 3 - errorStrategy = { task.attempt <=3 ? 'retry' : 'finish' } - cache = 'lenient' - exitStatusReadTimeoutMillis = '2700000' + scratch = true + queue = 'unlimit' + queueStatInterval = '10 min' + maxRetries = 3 + errorStrategy = { task.attempt <=3 ? 'retry' : 'finish' } + cache = 'lenient' + exitStatusReadTimeoutMillis = '2700000' } diff --git a/conf/alice.config b/conf/alice.config index e935ef2d4..473db6802 100644 --- a/conf/alice.config +++ b/conf/alice.config @@ -1,39 +1,39 @@ params { - config_profile_name = 'ALICE' - config_profile_description = 'Profile for use on Academic Leiden Interdisciplinary Cluster Environment (ALICE).' - config_profile_contact = 'Bjorn Peare Bartholdy (@osteobjorn)' - config_profile_url = 'https://wiki.alice.universiteitleiden.nl/' - max_cpus = 24 - max_memory = 240.GB - max_time = 168.h + config_profile_name = 'ALICE' + config_profile_description = 'Profile for use on Academic Leiden Interdisciplinary Cluster Environment (ALICE).' + config_profile_contact = 'Bjorn Peare Bartholdy (@bbartholdy)' + config_profile_url = 'https://wiki.alice.universiteitleiden.nl/' + max_cpus = 24 + max_memory = 240.GB + max_time = 168.h } process { - executor = 'slurm' - queue = { task.time < 3.h ? 'cpu-short' : task.time < 24.h ? 'cpu-medium' : 'cpu-long' } + executor = 'slurm' + queue = { task.time < 3.h ? 'cpu-short' : task.time < 24.h ? 'cpu-medium' : 'cpu-long' } } singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } // Preform work directory cleanup after a successful run cleanup = true - // Profile to deactivate automatic cleanup of work directory after a successful run. Overwrites cleanup option. +// Profile to deactivate automatic cleanup of work directory after a successful run. Overwrites cleanup option. 
profiles { - mem { - params { - max_cpus = 24 - max_memory = 2.TB - max_time = 336.h + mem { + params { + max_cpus = 24 + max_memory = 2.TB + max_time = 336.h + } + process { + queue = 'mem' + } } - process { - queue = 'mem' + debug { + cleanup = false } - } - debug { - cleanup = false - } } diff --git a/conf/apollo.config b/conf/apollo.config index 50debb4cf..a37caf1b7 100644 --- a/conf/apollo.config +++ b/conf/apollo.config @@ -29,7 +29,7 @@ executor { cleanup = true profiles { - debug { + debug { cleanup = false - } + } } diff --git a/conf/arcc.config b/conf/arcc.config new file mode 100644 index 000000000..b99e44430 --- /dev/null +++ b/conf/arcc.config @@ -0,0 +1,147 @@ +params { + // Config Params + config_profile_name = 'ARCC - Beartooth, Moran, Teton Partitions' + config_profile_description = 'Advanced Research Computing Center (ARCC) for the University of Wyoming' + config_profile_contact = 'Robert A. Petit III (@rpetit3)' + config_profile_url = 'http://www.uwyo.edu/arcc/' + + // Default Resources + max_memory = 64.GB + max_cpus = 16 +} + +process { + executor = 'slurm' + scratch = false + clusterOptions = "--account=${System.getenv('SBATCH_ACCOUNT')}" + + // Default partitions + queue = 'beartooth,moran,teton' +} + +singularity { + enabled = true + autoMounts = true +} + +profiles { + beartooth { + params { + config_profile_name = 'ARCC - Beartooth Partition' + config_profile_description = 'Advanced Research Computing Center (ARCC) for the University of Wyoming' + config_profile_contact = 'Robert A. Petit III (@rpetit3)' + config_profile_url = 'http://www.uwyo.edu/arcc/' + max_memory = 256.GB + max_cpus = 56 + } + + process { + queue = 'beartooth' + } + } + + beartooth_bigmem { + params { + config_profile_name = 'ARCC - Beartooth BigMem Partition' + config_profile_description = 'Advanced Research Computing Center (ARCC) for the University of Wyoming' + config_profile_contact = 'Robert A. Petit III (@rpetit3)' + config_profile_url = 'http://www.uwyo.edu/arcc/' + max_memory = 512.GB + max_cpus = 56 + } + + process { + queue = 'beartooth-bigmem' + } + } + + beartooth_hugemem { + params { + config_profile_name = 'ARCC - Beartooth HugeMem Partition' + config_profile_description = 'Advanced Research Computing Center (ARCC) for the University of Wyoming' + config_profile_contact = 'Robert A. Petit III (@rpetit3)' + config_profile_url = 'http://www.uwyo.edu/arcc/' + max_memory = 1024.GB + max_cpus = 56 + } + + process { + queue = 'beartooth-hugemem' + } + } + + moran { + params { + config_profile_name = 'ARCC - Moran Partition' + config_profile_description = 'Advanced Research Computing Center (ARCC) for the University of Wyoming' + config_profile_contact = 'Robert A. Petit III (@rpetit3)' + config_profile_url = 'http://www.uwyo.edu/arcc/' + max_memory = 64.GB + max_cpus = 16 + } + + process { + queue = 'moran' + } + } + + teton { + params { + config_profile_name = 'ARCC - Teton Partition' + config_profile_description = 'Advanced Research Computing Center (ARCC) for the University of Wyoming' + config_profile_contact = 'Robert A. Petit III (@rpetit3)' + config_profile_url = 'http://www.uwyo.edu/arcc/' + max_memory = 128.GB + max_cpus = 32 + } + + process { + queue = 'teton' + } + } + + teton_cascade { + params { + config_profile_name = 'ARCC - Teton Cascade Partition' + config_profile_description = 'Advanced Research Computing Center (ARCC) for the University of Wyoming' + config_profile_contact = 'Robert A. 
Petit III (@rpetit3)' + config_profile_url = 'http://www.uwyo.edu/arcc/' + max_memory = 768.GB + max_cpus = 40 + } + + process { + queue = 'teton-cascade' + } + } + + teton_hugemem { + params { + config_profile_name = 'ARCC - Teton HugeMem Partition' + config_profile_description = 'Advanced Research Computing Center (ARCC) for the University of Wyoming' + config_profile_contact = 'Robert A. Petit III (@rpetit3)' + config_profile_url = 'http://www.uwyo.edu/arcc/' + max_memory = 1024.GB + max_cpus = 32 + } + + process { + queue = 'teton-hugemem' + } + } + + teton_knl { + params { + config_profile_name = 'ARCC - Teton KNL Partition' + config_profile_description = 'Advanced Research Computing Center (ARCC) for the University of Wyoming' + config_profile_contact = 'Robert A. Petit III (@rpetit3)' + config_profile_url = 'http://www.uwyo.edu/arcc/' + max_memory = 384.GB + max_cpus = 72 + } + + process { + queue = 'teton-knl' + } + } +} diff --git a/conf/awsbatch.config b/conf/awsbatch.config index a8b61b856..d879e7599 100644 --- a/conf/awsbatch.config +++ b/conf/awsbatch.config @@ -1,28 +1,28 @@ -//Nextflow config file for running on AWS batch -params { - config_profile_description = 'AWSBATCH Cloud Profile' - config_profile_contact = 'Alexander Peltzer (@apeltzer)' - config_profile_url = 'https://aws.amazon.com/batch/' - - awsqueue = false - awsregion = 'eu-west-1' - awscli = '/home/ec2-user/miniconda/bin/aws' -} - -timeline { - overwrite = true -} -report { - overwrite = true -} -trace { - overwrite = true -} -dag { - overwrite = true -} - -process.executor = 'awsbatch' -process.queue = params.awsqueue -aws.region = params.awsregion -aws.batch.cliPath = params.awscli +//Nextflow config file for running on AWS batch +params { + config_profile_description = 'AWSBATCH Cloud Profile' + config_profile_contact = 'Alexander Peltzer (@apeltzer)' + config_profile_url = 'https://aws.amazon.com/batch/' + + awsqueue = false + awsregion = 'eu-west-1' + awscli = '/home/ec2-user/miniconda/bin/aws' +} + +timeline { + overwrite = true +} +report { + overwrite = true +} +trace { + overwrite = true +} +dag { + overwrite = true +} + +process.executor = 'awsbatch' +process.queue = params.awsqueue +aws.region = params.awsregion +aws.batch.cliPath = params.awscli diff --git a/conf/azurebatch.config b/conf/azurebatch.config index 2d3de2a65..797e1477f 100644 --- a/conf/azurebatch.config +++ b/conf/azurebatch.config @@ -1,30 +1,30 @@ //Nextflow config file for running on Azure batch params { - config_profile_description = 'Azure BATCH Cloud Profile' - config_profile_contact = 'Venkat Malladi (@vsmalladi) & Adam Talbot (@adamrtalbot)' - config_profile_url = 'https://azure.microsoft.com/services/batch/' + config_profile_description = 'Azure BATCH Cloud Profile' + config_profile_contact = 'Venkat Malladi (@vsmalladi) & Adam Talbot (@adamrtalbot)' + config_profile_url = 'https://azure.microsoft.com/services/batch/' - // Storage - storage_name = null - storage_key = null - storage_sas = null + // Storage + storage_name = null + storage_key = null + storage_sas = null - // Batch - az_location = "westus2" - batch_name = null - batch_key = null + // Batch + az_location = "westus2" + batch_name = null + batch_key = null - vm_type = "Standard_D8s_v3" - autopoolmode = true - allowpoolcreation = true - deletejobs = true - deletepools = true - az_worker_pool = "auto" + vm_type = "Standard_D8s_v3" + autopoolmode = true + allowpoolcreation = true + deletejobs = true + deletepools = true + az_worker_pool = "auto" - // ACR - 
acr_registry = null - acr_username = null - acr_password = null + // ACR + acr_registry = null + acr_username = null + acr_password = null } @@ -33,35 +33,35 @@ process { } azure { - process { - queue = params.az_worker_pool - } - storage { - accountName = params.storage_name - accountKey = params.storage_key - sasToken = params.storage_sas - } - batch { - location = params.az_location - accountName = params.batch_name - accountKey = params.batch_key - tokenDuration = "24h" - autoPoolMode = params.autopoolmode - allowPoolCreation = params.allowpoolcreation - deleteJobsOnCompletion = params.deletejobs - deletePoolsOnCompletion = params.deletepools - pools { - auto { - vmType = params.vm_type - autoScale = true - vmCount = 1 - maxVmCount = 12 - } - } - } - registry { - server = params.acr_registry - userName = params.acr_username - password = params.acr_password - } + process { + queue = params.az_worker_pool + } + storage { + accountName = params.storage_name + accountKey = params.storage_key + sasToken = params.storage_sas + } + batch { + location = params.az_location + accountName = params.batch_name + accountKey = params.batch_key + tokenDuration = "24h" + autoPoolMode = params.autopoolmode + allowPoolCreation = params.allowpoolcreation + deleteJobsOnCompletion = params.deletejobs + deletePoolsOnCompletion = params.deletepools + pools { + auto { + vmType = params.vm_type + autoScale = true + vmCount = 1 + maxVmCount = 12 + } + } + } + registry { + server = params.acr_registry + userName = params.acr_username + password = params.acr_password + } } diff --git a/conf/azurebatchdev.config b/conf/azurebatchdev.config index 0ca92501f..3266145d3 100644 --- a/conf/azurebatchdev.config +++ b/conf/azurebatchdev.config @@ -1,31 +1,31 @@ -//Nextflow config file for running on Azure batch +// Nextflow config file for running on Azure batch params { - config_profile_description = 'Azure BATCH Dev Cloud Profile' - config_profile_contact = 'Venkat Malladi (@vsmalladi)'; Abhinav Sharma (@abhi18av)' - config_profile_url = 'https://azure.microsoft.com/services/batch/' + config_profile_description = 'Azure BATCH Dev Cloud Profile' + config_profile_contact = 'Venkat Malladi (@vsmalladi)'; Abhinav Sharma (@abhi18av)' + config_profile_url = 'https://azure.microsoft.com/services/batch/' - // Active Directory - principal_id = null - principal_secret = null - tenant_id = null + // Active Directory + principal_id = null + principal_secret = null + tenant_id = null - // Storage - storage_name = null + // Storage + storage_name = null - // Batch - az_location = "westus2" - batch_name = null + // Batch + az_location = "westus2" + batch_name = null - vm_type = "Standard_D8s_v3" - autopoolmode = false - allowpoolcreation = true - deletejobs = true - deletepools = false + vm_type = "Standard_D8s_v3" + autopoolmode = false + allowpoolcreation = true + deletejobs = true + deletepools = false - // ACR - acr_registry = null - acr_username = null - acr_password = null + // ACR + acr_registry = null + acr_username = null + acr_password = null } @@ -34,90 +34,90 @@ process { } azure { - process { - queue = 'Standard_D2d_v4' - withLabel:process_low {queue = 'Standard_D4d_v4'} - withLabel:process_medium {queue = 'Standard_D16d_v4'} - withLabel:process_high {queue = 'Standard_D32d_v4'} - withLabel:process_high_memory {queue = 'Standard_D48d_v4'} - } - activeDirectory { - servicePrincipalId = params.principal_id - servicePrincipalSecret = params.principal_secret - tenantId = params.tenant_id - } - storage { - accountName = 
params.storage_name - } - batch { - location = params.az_location - accountName = params.batch_name - tokenDuration = "24h" - autoPoolMode = params.autopoolmode - allowPoolCreation = params.allowpoolcreation - deleteJobsOnCompletion = params.deletejobs - deletePoolsOnCompletion = params.deletepools - pools { - Standard_D2d_v4 { - autoScale = true - vmType = 'Standard_D2d_v4' - vmCount = 2 - maxVmCount = 20 - scaleFormula = ''' + process { + queue = 'Standard_D2d_v4' + withLabel:process_low {queue = 'Standard_D4d_v4'} + withLabel:process_medium {queue = 'Standard_D16d_v4'} + withLabel:process_high {queue = 'Standard_D32d_v4'} + withLabel:process_high_memory {queue = 'Standard_D48d_v4'} + } + activeDirectory { + servicePrincipalId = params.principal_id + servicePrincipalSecret = params.principal_secret + tenantId = params.tenant_id + } + storage { + accountName = params.storage_name + } + batch { + location = params.az_location + accountName = params.batch_name + tokenDuration = "24h" + autoPoolMode = params.autopoolmode + allowPoolCreation = params.allowpoolcreation + deleteJobsOnCompletion = params.deletejobs + deletePoolsOnCompletion = params.deletepools + pools { + Standard_D2d_v4 { + autoScale = true + vmType = 'Standard_D2d_v4' + vmCount = 2 + maxVmCount = 20 + scaleFormula = ''' $TargetLowPriorityNodes = 1; $TargetDedicatedNodes = 0; $NodeDeallocationOption = taskcompletion; - ''' - } - Standard_D4d_v4 { - autoScale = true - vmType = 'Standard_D4d_v4' - vmCount = 2 - maxVmCount = 20 - scaleFormula = ''' + ''' + } + Standard_D4d_v4 { + autoScale = true + vmType = 'Standard_D4d_v4' + vmCount = 2 + maxVmCount = 20 + scaleFormula = ''' $TargetLowPriorityNodes = 1; $TargetDedicatedNodes = 0; $NodeDeallocationOption = taskcompletion; - ''' - } - Standard_D16d_v4 { - autoScale = true - vmType = 'Standard_D16d_v4' - vmCount = 2 - maxVmCount = 20 - scaleFormula = ''' + ''' + } + Standard_D16d_v4 { + autoScale = true + vmType = 'Standard_D16d_v4' + vmCount = 2 + maxVmCount = 20 + scaleFormula = ''' $TargetLowPriorityNodes = 1; $TargetDedicatedNodes = 0; $NodeDeallocationOption = taskcompletion; - ''' - } - Standard_D32d_v4 { - autoScale = true - vmType = 'Standard_D32d_v4' - vmCount = 2 - maxVmCount = 20 - scaleFormula = ''' + ''' + } + Standard_D32d_v4 { + autoScale = true + vmType = 'Standard_D32d_v4' + vmCount = 2 + maxVmCount = 20 + scaleFormula = ''' $TargetLowPriorityNodes = 1; $TargetDedicatedNodes = 0; $NodeDeallocationOption = taskcompletion; - ''' - } - Standard_D48d_v4 { - autoScale = true - vmType = 'Standard_D48d_v4' - vmCount = 2 - maxVmCount = 10 - scaleFormula = ''' + ''' + } + Standard_D48d_v4 { + autoScale = true + vmType = 'Standard_D48d_v4' + vmCount = 2 + maxVmCount = 10 + scaleFormula = ''' $TargetLowPriorityNodes = 1; $TargetDedicatedNodes = 0; $NodeDeallocationOption = taskcompletion; - ''' + ''' + } } - } - } - registry { - server = params.acr_registry - userName = params.acr_username - password = params.acr_password - } + } + registry { + server = params.acr_registry + userName = params.acr_username + password = params.acr_password + } } diff --git a/conf/bi.config b/conf/bi.config index 1a218f10e..b04251921 100644 --- a/conf/bi.config +++ b/conf/bi.config @@ -1,7 +1,7 @@ params{ - config_profile_description = 'Boehringer Ingelheim internal profile provided by nf-core/configs.' 
- config_profile_contact = 'Alexander Peltzer (@apeltzer)' - config_profile_url = 'https://www.boehringer-ingelheim.com/' + config_profile_description = 'Boehringer Ingelheim internal profile provided by nf-core/configs.' + config_profile_contact = 'Alexander Peltzer (@apeltzer)' + config_profile_url = 'https://www.boehringer-ingelheim.com/' } params.globalConfig = System.getenv('NXF_GLOBAL_CONFIG') diff --git a/conf/bigpurple.config b/conf/bigpurple.config index 5fdf7e825..235e8d7ed 100644 --- a/conf/bigpurple.config +++ b/conf/bigpurple.config @@ -2,11 +2,11 @@ singularityDir = "/gpfs/scratch/${USER}/singularity_images_nextflow" params { config_profile_description = """ - NYU School of Medicine BigPurple cluster profile provided by nf-core/configs. - module load both singularity/3.1 and squashfs-tools/4.3 before running the pipeline with this profile!! - Run from your scratch or lab directory - Nextflow makes a lot of files!! - Also consider running the pipeline on a compute node (srun --pty /bin/bash -t=01:00:00) the first time, as it will be pulling the docker image, which will be converted into a singularity image, which is heavy on the login node and will take some time. Subsequent runs can be done on the login node, as the docker image will only be pulled and converted once. By default the images will be stored in $singularityDir - """.stripIndent() + NYU School of Medicine BigPurple cluster profile provided by nf-core/configs. + module load both singularity/3.1 and squashfs-tools/4.3 before running the pipeline with this profile!! + Run from your scratch or lab directory - Nextflow makes a lot of files!! + Also consider running the pipeline on a compute node (srun --pty /bin/bash -t=01:00:00) the first time, as it will be pulling the docker image, which will be converted into a singularity image, which is heavy on the login node and will take some time. Subsequent runs can be done on the login node, as the docker image will only be pulled and converted once. By default the images will be stored in $singularityDir + """.stripIndent() config_profile_contact = 'Tobias Schraink (@tobsecret)' config_profile_url = 'https://github.com/nf-core/configs/blob/master/docs/bigpurple.md' } @@ -19,9 +19,8 @@ singularity { process { beforeScript = """ - module load singularity/3.1 - module load squashfs-tools/4.3 - """ - .stripIndent() + module load singularity/3.1 + module load squashfs-tools/4.3 + """.stripIndent() executor = 'slurm' } diff --git a/conf/binac.config b/conf/binac.config index d3624dfac..919d35eb8 100644 --- a/conf/binac.config +++ b/conf/binac.config @@ -1,29 +1,29 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'BINAC cluster profile provided by nf-core/configs.' - config_profile_contact = 'Alexander Peltzer (@apeltzer)' - config_profile_url = 'https://www.bwhpc-c5.de/wiki/index.php/Category:BwForCluster_BinAC' + config_profile_description = 'BINAC cluster profile provided by nf-core/configs.' + config_profile_contact = 'Alexander Peltzer (@apeltzer)' + config_profile_url = 'https://www.bwhpc-c5.de/wiki/index.php/Category:BwForCluster_BinAC' } singularity { - enabled = true - envWhitelist = 'TZ' + enabled = true + envWhitelist = 'TZ' } process { - beforeScript = 'module load devel/singularity/3.4.2' - executor = 'pbs' - queue = { task.memory >= 128.GB ? 'smp': task.time <= 20.m ? 'tiny' : task.time > 48.h ? 'long' : 'short'} + beforeScript = 'module load devel/singularity/3.4.2' + executor = 'pbs' + queue = { task.memory >= 128.GB ? 
'smp': task.time <= 20.m ? 'tiny' : task.time > 48.h ? 'long' : 'short'} } params { - igenomes_base = '/nfsmounts/igenomes' - max_memory = 1000.GB - max_cpus = 28 - max_time = 168.h + igenomes_base = '/nfsmounts/igenomes' + max_memory = 1000.GB + max_cpus = 28 + max_time = 168.h } weblog{ - enabled = true - url = 'https://services.qbic.uni-tuebingen.de/flowstore/workflows' + enabled = true + url = 'https://services.qbic.uni-tuebingen.de/flowstore/workflows' } diff --git a/conf/biohpc_gen.config b/conf/biohpc_gen.config index 0cdc78948..539eeb05b 100755 --- a/conf/biohpc_gen.config +++ b/conf/biohpc_gen.config @@ -1,26 +1,26 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'BioHPC Genomics (biohpc_gen) cluster profile provided by nf-core/configs' - config_profile_contact = 'Patrick Hüther (@phue)' - config_profile_url = 'https://collab.lmu.de/display/BioHPCGenomics/BioHPC+Genomics' + config_profile_description = 'BioHPC Genomics (biohpc_gen) cluster profile provided by nf-core/configs' + config_profile_contact = 'Patrick Hüther (@phue)' + config_profile_url = 'https://collab.lmu.de/display/BioHPCGenomics/BioHPC+Genomics' } env { - SLURM_CLUSTERS='biohpc_gen' + SLURM_CLUSTERS='biohpc_gen' } process { - executor = 'slurm' - queue = { task.memory <= 1536.GB ? (task.time > 2.d || task.memory > 384.GB ? 'biohpc_gen_production' : 'biohpc_gen_normal') : 'biohpc_gen_highmem' } - module = 'charliecloud/0.30' + executor = 'slurm' + queue = { task.memory <= 1536.GB ? (task.time > 2.d || task.memory > 384.GB ? 'biohpc_gen_production' : 'biohpc_gen_normal') : 'biohpc_gen_highmem' } + module = 'charliecloud/0.30' } charliecloud { - enabled = true + enabled = true } params { - params.max_time = 14.d - params.max_cpus = 80 - params.max_memory = 3.TB + params.max_time = 14.d + params.max_cpus = 80 + params.max_memory = 3.TB } diff --git a/conf/biowulf.config b/conf/biowulf.config index 1d59ef4f7..b38be5176 100644 --- a/conf/biowulf.config +++ b/conf/biowulf.config @@ -1,17 +1,17 @@ params { - config_profile_description = 'Biowulf nf-core config' - config_profile_contact = 'staff@hpc.nih.gov' - config_profile_url = 'https://hpc.nih.gov/apps/nextflow.html' - max_memory = '224 GB' - max_cpus = 32 - max_time = '72 h' - - igenomes_base = '/fdb/igenomes/' + config_profile_description = 'Biowulf nf-core config' + config_profile_contact = 'staff@hpc.nih.gov' + config_profile_url = 'https://hpc.nih.gov/apps/nextflow.html' + max_memory = '224 GB' + max_cpus = 32 + max_time = '72 h' + + igenomes_base = '/fdb/igenomes/' } executor { - + $slurm { queue = 'norm' queueSize = 200 @@ -49,6 +49,6 @@ process { // for running pipeline on group sharing data directory, this can avoid inconsistent files timestamps cache = 'lenient' } - + diff --git a/conf/cambridge.config b/conf/cambridge.config index 8e5e3805c..c74f24ec0 100644 --- a/conf/cambridge.config +++ b/conf/cambridge.config @@ -1,22 +1,23 @@ // Description is overwritten with user specific flags params { - config_profile_description = 'Cambridge HPC cluster profile.' - config_profile_contact = 'Andries van Tonder (ajv37@cam.ac.uk)' - config_profile_url = "https://docs.hpc.cam.ac.uk/hpc" - partition = null - project = null - max_memory = 192.GB - max_cpus = 56 - max_time = 12.h + config_profile_description = 'Cambridge HPC cluster profile.' 
+ // FIXME EmelineFavreau was the last to edit this + config_profile_contact = 'Andries van Tonder (ajv37@cam.ac.uk)' + config_profile_url = "https://docs.hpc.cam.ac.uk/hpc" + partition = null + project = null + max_memory = 192.GB + max_cpus = 56 + max_time = 12.h } -// Description is overwritten with user specific flags +// Description is overwritten with user specific flags singularity { - enabled = true - autoMounts = true -} + enabled = true + autoMounts = true +} process { - executor = 'slurm' - clusterOptions = "-A $params.project -p $params.partition" + executor = 'slurm' + clusterOptions = "-A $params.project -p $params.partition" } diff --git a/conf/cbe.config b/conf/cbe.config index fbc0812ba..e1ccadb48 100755 --- a/conf/cbe.config +++ b/conf/cbe.config @@ -1,25 +1,25 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'CLIP BATCH ENVIRONMENT (CBE) cluster profile provided by nf-core/configs' - config_profile_contact = 'Patrick Hüther (@phue)' - config_profile_url = 'https://clip.science' + config_profile_description = 'CLIP BATCH ENVIRONMENT (CBE) cluster profile provided by nf-core/configs' + config_profile_contact = 'Patrick Hüther (@phue)' + config_profile_url = 'https://clip.science' } process { - executor = 'slurm' - queue = { task.memory <= 120.GB ? 'c' : 'm' } - module = ['build-env/.f2021', 'build-env/f2021', 'anaconda3/2021.11'] - clusterOptions = { ( task.queue == 'g' ? '--gres gpu:1 ' : '' ) << ( (task.queue == 'c' & task.time <= 1.h) ? '--qos rapid' : ( task.time <= 8.h ? '--qos short': ( task.time <= 48.h ? '--qos medium' : '--qos long' ) ) ) } + executor = 'slurm' + queue = { task.memory <= 120.GB ? 'c' : 'm' } + module = ['build-env/.f2021', 'build-env/f2021', 'anaconda3/2021.11'] + clusterOptions = { ( task.queue == 'g' ? '--gres gpu:1 ' : '' ) << ( (task.queue == 'c' & task.time <= 1.h) ? '--qos rapid' : ( task.time <= 8.h ? '--qos short': ( task.time <= 48.h ? '--qos medium' : '--qos long' ) ) ) } } singularity { - enabled = true - cacheDir = '/resources/containers' + enabled = true + cacheDir = '/resources/containers' } params { - params.max_time = 14.d - params.max_cpus = 36 - params.max_memory = 1800.GB - igenomes_base = '/resources/references/igenomes' + params.max_time = 14.d + params.max_cpus = 36 + params.max_memory = 1800.GB + igenomes_base = '/resources/references/igenomes' } diff --git a/conf/ccga_dx.config b/conf/ccga_dx.config index 595d3e102..3f4ab9022 100644 --- a/conf/ccga_dx.config +++ b/conf/ccga_dx.config @@ -1,8 +1,8 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'CCGA DX cluster profile provided by nf-core/configs.' - config_profile_contact = 'Marc Hoeppner (@marchoeppner)' - config_profile_url = 'https://www.ccga.uni-kiel.de/' + config_profile_description = 'CCGA DX cluster profile provided by nf-core/configs.' 
+ config_profile_contact = 'Marc Hoeppner (@marchoeppner)' + config_profile_url = 'https://www.ccga.uni-kiel.de/' } /* @@ -12,27 +12,27 @@ params { */ singularity { - enabled = true - runOptions = "-B /mnt -B /work_ifs" + enabled = true + runOptions = "-B /mnt -B /work_ifs" } executor { - queueSize=100 + queueSize=100 } process { - // Global process config - executor = 'slurm' - queue = 'htc' + // Global process config + executor = 'slurm' + queue = 'htc' } params { - // illumina iGenomes reference file paths on DX Cluster - igenomes_base = '/work_ifs/ikmb_repository/references/iGenomes/references/' - saveReference = true - max_memory = 250.GB - max_cpus = 20 - max_time = 240.h + // illumina iGenomes reference file paths on DX Cluster + igenomes_base = '/work_ifs/ikmb_repository/references/iGenomes/references/' + saveReference = true + max_memory = 250.GB + max_cpus = 20 + max_time = 240.h } diff --git a/conf/ccga_med.config b/conf/ccga_med.config index c9b7b4407..5ba399f67 100644 --- a/conf/ccga_med.config +++ b/conf/ccga_med.config @@ -1,8 +1,8 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'CCGA Med cluster profile provided by nf-core/configs.' - config_profile_contact = 'Marc Hoeppner (@marchoeppner)' - config_profile_url = 'https://www.ccga.uni-kiel.de/' + config_profile_description = 'CCGA Med cluster profile provided by nf-core/configs.' + config_profile_contact = 'Marc Hoeppner (@marchoeppner)' + config_profile_url = 'https://www.ccga.uni-kiel.de/' } /* @@ -12,28 +12,28 @@ params { */ singularity { - enabled = true - runOptions = "-B /work_ifs -B /scratch -B /work_beegfs" - cacheDir = "/work_beegfs/ikmb_repository/singularity_cache/" + enabled = true + runOptions = "-B /work_ifs -B /scratch -B /work_beegfs" + cacheDir = "/work_beegfs/ikmb_repository/singularity_cache/" } executor { - queueSize=100 + queueSize=100 } process { - // Global process config - executor = 'slurm' - queue = 'all' + // Global process config + executor = 'slurm' + queue = 'all' } params { - // illumina iGenomes reference file paths on RZCluster - igenomes_base = '/work_beegfs/ikmb_repository/references/iGenomes/references/' - saveReference = true - max_memory = 250.GB - max_cpus = 24 - max_time = 120.h + // illumina iGenomes reference file paths on RZCluster + igenomes_base = '/work_beegfs/ikmb_repository/references/iGenomes/references/' + saveReference = true + max_memory = 250.GB + max_cpus = 24 + max_time = 120.h } diff --git a/conf/cedars.config b/conf/cedars.config index d9b902735..97cef6a27 100644 --- a/conf/cedars.config +++ b/conf/cedars.config @@ -1,26 +1,26 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Cedars-Sinai Medical Center HPC Profile' - config_profile_contact = 'Alex Rajewski (@rajewski)' - config_profile_url = 'https://www.cedars-sinai.edu/research/cores/informatics-computing/resources.html' - max_memory = 90.GB - max_cpus = 10 - max_time = 240.h + config_profile_description = 'Cedars-Sinai Medical Center HPC Profile' + config_profile_contact = 'Alex Rajewski (@rajewski)' + config_profile_url = 'https://www.cedars-sinai.edu/research/cores/informatics-computing/resources.html' + max_memory = 90.GB + max_cpus = 10 + max_time = 240.h } // Specify the queing system executor { - name = "sge" + name = "sge" } process { - penv = 'smp' - beforeScript = - """ - module load 'singularity/3.6.0' - """ + penv = 'smp' + beforeScript = + """ + module load 'singularity/3.6.0' + """ } singularity { - enabled = true + enabled = 
true } diff --git a/conf/ceres.config b/conf/ceres.config index 8a96dabc5..b4866913d 100644 --- a/conf/ceres.config +++ b/conf/ceres.config @@ -1,41 +1,41 @@ params { - config_profile_description = 'USDA ARS SCINet Ceres Cluster profile' - config_profile_contact = 'Thomas A. Christensen II (@MillironX)' - config_profile_url = 'https://scinet.usda.gov/guide/ceres/' + config_profile_description = 'USDA ARS SCINet Ceres Cluster profile' + config_profile_contact = 'Thomas A. Christensen II (@MillironX)' + config_profile_url = 'https://scinet.usda.gov/guide/ceres/' - max_memory = 640.GB - max_cpus = 36 - max_time = 60.d + max_memory = 640.GB + max_cpus = 36 + max_time = 60.d } singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } process { - executor = 'slurm' - scratch = true - queue = { - switch (task.memory) { - case { it >= 216.GB }: - switch (task.time) { - case { it >= 7.d }: - return 'longmem' - default: - return 'mem' - } - default: - switch (task.time) { - case { it >= 21.d }: - return 'long60' - case { it >= 7.d }: - return 'long' - case { it >= 48.h }: - return 'medium' - default: - return 'short' - } + executor = 'slurm' + scratch = true + queue = { + switch (task.memory) { + case { it >= 216.GB }: + switch (task.time) { + case { it >= 7.d }: + return 'longmem' + default: + return 'mem' + } + default: + switch (task.time) { + case { it >= 21.d }: + return 'long60' + case { it >= 7.d }: + return 'long' + case { it >= 48.h }: + return 'medium' + default: + return 'short' + } + } } - } } diff --git a/conf/cfc.config b/conf/cfc.config index 65f5c8fba..b5a86c0c2 100644 --- a/conf/cfc.config +++ b/conf/cfc.config @@ -1,24 +1,24 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'QBiC Core Facility cluster profile provided by nf-core/configs.' - config_profile_contact = 'Sabrina Krakau (@skrakau)' - config_profile_url = 'http://qbic.uni-tuebingen.de/' + config_profile_description = 'QBiC Core Facility cluster profile provided by nf-core/configs.' + config_profile_contact = 'Sabrina Krakau (@skrakau)' + config_profile_url = 'http://qbic.uni-tuebingen.de/' } singularity { - enabled = true - cacheDir = '/nfsmounts/container' + enabled = true + cacheDir = '/nfsmounts/container' } process { - executor = 'slurm' - queue = 'qbic' - scratch = 'true' + executor = 'slurm' + queue = 'qbic' + scratch = 'true' } params { - igenomes_base = '/nfsmounts/igenomes' - max_memory = 1992.GB - max_cpus = 128 - max_time = 168.h + igenomes_base = '/nfsmounts/igenomes' + max_memory = 1992.GB + max_cpus = 128 + max_time = 168.h } diff --git a/conf/cfc_dev.config b/conf/cfc_dev.config index e85a02b1d..a78f89c6b 100644 --- a/conf/cfc_dev.config +++ b/conf/cfc_dev.config @@ -1,23 +1,23 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'QBiC Core Facility cluster dev profile without container cache provided by nf-core/configs.' - config_profile_contact = 'Sabrina Krakau (@skrakau)' - config_profile_url = 'http://qbic.uni-tuebingen.de/' + config_profile_description = 'QBiC Core Facility cluster dev profile without container cache provided by nf-core/configs.' 
+ config_profile_contact = 'Sabrina Krakau (@skrakau)' + config_profile_url = 'http://qbic.uni-tuebingen.de/' } singularity { - enabled = true + enabled = true } process { - executor = 'slurm' - queue = 'qbic' - scratch = 'true' + executor = 'slurm' + queue = 'qbic' + scratch = 'true' } params { - igenomes_base = '/nfsmounts/igenomes' - max_memory = 1992.GB - max_cpus = 128 - max_time = 168.h + igenomes_base = '/nfsmounts/igenomes' + max_memory = 1992.GB + max_cpus = 128 + max_time = 168.h } diff --git a/conf/computerome.config b/conf/computerome.config index b5e7fa7dc..b42cbf353 100644 --- a/conf/computerome.config +++ b/conf/computerome.config @@ -1,30 +1,30 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Computerome 2.0 cluster profile provided by nf-core/configs.' - config_profile_contact = 'Marc Trunjer Kusk Nielsen (@marcmtk)' - config_profile_url = 'https://www.computerome.dk/' - project = null - cache_dir = "/home/projects/$params.project/scratch" - schema_ignore_params = "project,cache_dir,genomes,modules" - validationSchemaIgnoreParams = "project,cache_dir,genomes,modules,schema_ignore_params" - - //Thin nodes with 192GB and Fat nodes with ~1500GB. Torque should be allowed to handle this - max_memory = 1500.GB - max_cpus = 40 - - //There is no max walltime on the cluster, but a week seems sensible if not directly specified - max_time = 168.h + config_profile_description = 'Computerome 2.0 cluster profile provided by nf-core/configs.' + config_profile_contact = 'Marc Trunjer Kusk Nielsen (@marcmtk)' + config_profile_url = 'https://www.computerome.dk/' + project = null + cache_dir = "/home/projects/$params.project/scratch" + schema_ignore_params = "project,cache_dir,genomes,modules" + validationSchemaIgnoreParams = "project,cache_dir,genomes,modules,schema_ignore_params" + + //Thin nodes with 192GB and Fat nodes with ~1500GB. 
Torque should be allowed to handle this + max_memory = 1500.GB + max_cpus = 40 + + //There is no max walltime on the cluster, but a week seems sensible if not directly specified + max_time = 168.h } singularity { - enabled = true - autoMounts = true - cacheDir = params.cache_dir + enabled = true + autoMounts = true + cacheDir = params.cache_dir } process { - beforeScript = "module load tools singularity/3.8.0; export _JAVA_OPTIONS=-Djava.io.tmpdir=$params.cache_dir" - executor = 'pbs' - queueSize = 2000 - clusterOptions = "-A $params.project -W group_list=$params.project" -} + beforeScript = "module load tools singularity/3.8.0; export _JAVA_OPTIONS=-Djava.io.tmpdir=$params.cache_dir" + executor = 'pbs' + queueSize = 2000 + clusterOptions = "-A $params.project -W group_list=$params.project" +} diff --git a/conf/create.config b/conf/create.config index a2e9ec24e..0ca41249d 100644 --- a/conf/create.config +++ b/conf/create.config @@ -1,6 +1,6 @@ params { config_profile_description = "e-Research King's College London CREATE HPC" - config_profile_contact = "e-Research (e-research@kcl.ac.uk)" + config_profile_contact = "e-Research (support@er.kcl.ac.uk)" config_profile_url = "https://docs.er.kcl.ac.uk/" max_memory = 1024.GB max_cpus = 128 @@ -14,4 +14,4 @@ singularity { process { executor = 'slurm' -} \ No newline at end of file +} diff --git a/conf/crg.config b/conf/crg.config index 16f473c3e..27176b7f6 100755 --- a/conf/crg.config +++ b/conf/crg.config @@ -1,14 +1,15 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Centre for Genomic Regulation (CRG) cluster profile provided by nf-core/configs' - config_profile_contact = 'Jose Espinosa-Carrasco (@joseespinosa)' - config_profile_url = 'http://www.linux.crg.es/index.php/Main_Page' + config_profile_description = 'Centre for Genomic Regulation (CRG) cluster profile provided by nf-core/configs' + config_profile_contact = 'Jose Espinosa-Carrasco (@joseespinosa)' + config_profile_url = 'http://www.linux.crg.es/index.php/Main_Page' } process { - executor = 'crg' + executor = 'crg' + queue = 'short-centos79,long-centos79' } singularity { - enabled = true + enabled = true } diff --git a/conf/crick.config b/conf/crick.config index 6d58dd4cd..6e64f9ec7 100755 --- a/conf/crick.config +++ b/conf/crick.config @@ -1,24 +1,24 @@ -//Profile config names for nf-core/configs -params { - config_profile_description = 'The Francis Crick Institute CAMP HPC cluster profile provided by nf-core/configs.' - config_profile_contact = 'Chris Cheshire (@chris-cheshire)' - config_profile_url = 'https://www.crick.ac.uk/research/platforms-and-facilities/scientific-computing/technologies' -} - -singularity { - enabled = true - autoMounts = true - runOptions = '--bind /nemo --bind /flask' -} - -process { - executor = 'slurm' -} - -params { - max_memory = 224.GB - max_cpus = 32 - max_time = '72.h' - - igenomes_base = '/flask/reference/Genomics/aws-igenomes' -} +//Profile config names for nf-core/configs +params { + config_profile_description = 'The Francis Crick Institute CAMP HPC cluster profile provided by nf-core/configs.' 
+ config_profile_contact = 'Chris Cheshire (@chris-cheshire)' + config_profile_url = 'https://www.crick.ac.uk/research/platforms-and-facilities/scientific-computing/technologies' +} + +singularity { + enabled = true + autoMounts = true + runOptions = '--bind /nemo --bind /flask' +} + +process { + executor = 'slurm' +} + +params { + max_memory = 224.GB + max_cpus = 32 + max_time = '72.h' + + igenomes_base = '/flask/reference/Genomics/aws-igenomes' +} diff --git a/conf/crukmi.config b/conf/crukmi.config index e20003457..16ff22d29 100644 --- a/conf/crukmi.config +++ b/conf/crukmi.config @@ -1,44 +1,44 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Cancer Research UK Manchester Institute HPC cluster profile provided by nf-core/configs' - config_profile_contact = 'Stephen Kitcatt, Simon Pearce (@skitcattCRUKMI, @sppearce)' - config_profile_url = 'http://scicom.picr.man.ac.uk/projects/user-support/wiki' + config_profile_description = 'Cancer Research UK Manchester Institute HPC cluster profile provided by nf-core/configs' + config_profile_contact = 'Stephen Kitcatt, Simon Pearce (@skitcattCRUKMI, @sppearce)' + config_profile_url = 'http://scicom.picr.man.ac.uk/projects/user-support/wiki' } singularity { - cacheDir = '/lmod/nextflow_software' - enabled = true - autoMounts = true + cacheDir = '/lmod/nextflow_software' + enabled = true + autoMounts = true } process { - beforeScript = 'module load apps/apptainer' - executor = 'slurm' - queue = { task.memory <= 240.GB ? 'compute' : 'hmem' } + beforeScript = 'module load apps/apptainer' + executor = 'slurm' + queue = { task.memory <= 240.GB ? 'compute' : 'hmem' } - errorStrategy = {task.exitStatus in [143,137,104,134,139,140] ? 'retry' : 'finish'} - maxErrors = '-1' - maxRetries = 3 + errorStrategy = {task.exitStatus in [143,137,104,134,139,140] ? 
'retry' : 'finish'} + maxErrors = '-1' + maxRetries = 3 - withLabel:process_single { + withLabel:process_single { cpus = { check_max( 1 * task.attempt, 'cpus' ) } memory = { check_max( 5.GB * task.attempt, 'memory' ) } - } + } - withLabel:process_low { + withLabel:process_low { cpus = { check_max( 1 * task.attempt, 'cpus' ) } memory = { check_max( 5.GB * task.attempt, 'memory' ) } - } + } - withLabel:process_medium { + withLabel:process_medium { cpus = { check_max( 4 * task.attempt, 'cpus' ) } memory = { check_max( 20.GB * task.attempt, 'memory' ) } - } + } - withLabel:process_high { + withLabel:process_high { cpus = { check_max( 48 * task.attempt, 'cpus' ) } memory = { check_max( 240.GB * task.attempt, 'memory' ) } - } + } } @@ -49,7 +49,7 @@ executor { } params { - max_memory = 4000.GB - max_cpus = 96 - max_time = 72.h + max_memory = 4000.GB + max_cpus = 96 + max_time = 72.h } diff --git a/conf/csiro_petrichor.config b/conf/csiro_petrichor.config index 6095f12f9..7ca54c98f 100644 --- a/conf/csiro_petrichor.config +++ b/conf/csiro_petrichor.config @@ -1,28 +1,28 @@ // CSIRO Petrichor nf-core configuration profile params { - config_profile_description = 'CSIRO Petrichor HPC profile provided by nf-core/configs' - config_profile_contact = 'Mitchell OBrien (@mitchob)' - config_profile_url = 'https://confluence.csiro.au/display/SC/CSIRO+SC+Shared+Cluster+-+Petrichor' + config_profile_description = 'CSIRO Petrichor HPC profile provided by nf-core/configs' + config_profile_contact = 'Mitchell OBrien (@mitchob)' + config_profile_url = 'https://confluence.csiro.au/display/SC/CSIRO+SC+Shared+Cluster+-+Petrichor' } // Enable use of Singularity to run containers singularity { - enabled = true - autoMounts = true - autoCleanUp = true + enabled = true + autoMounts = true + autoCleanUp = true } -// Submit up to XX concurrent jobs +// Submit up to XX concurrent jobs //executor { // queueSize = XX //} // Define process resource limits process { - executor = 'slurm' - clusterOptions = "--account=${System.getenv('SBATCH_ACCOUNT')}" - module = 'singularity/3.8.7' - cache = 'lenient' - stageInMode = 'symlink' - queue = 'defq' + executor = 'slurm' + clusterOptions = "--account=${System.getenv('SBATCH_ACCOUNT')}" + module = 'singularity/3.8.7' + cache = 'lenient' + stageInMode = 'symlink' + queue = 'defq' } diff --git a/conf/czbiohub_aws.config b/conf/czbiohub_aws.config index 7132352e1..736747148 100644 --- a/conf/czbiohub_aws.config +++ b/conf/czbiohub_aws.config @@ -7,21 +7,21 @@ * profile in nextflow.config */ - //Profile config names for nf-core/configs - params { - config_profile_description = 'Chan Zuckerberg Biohub AWS Batch profile provided by nf-core/configs.' - config_profile_contact = 'Olga Botvinnik (@olgabot)' - config_profile_url = 'https://www.czbiohub.org/' - } +//Profile config names for nf-core/configs +params { + config_profile_description = 'Chan Zuckerberg Biohub AWS Batch profile provided by nf-core/configs.' 
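// Illustrative sketch, not part of the diff above: several profiles in this changeset (CRUK-MI
// here, eddie further down) route each task to a queue based on its memory request and retry on
// the exit codes a scheduler uses when it kills a job. The values below are copied from the
// CRUK-MI profile and are site-specific, not a general recommendation.
process {
    executor      = 'slurm'
    queue         = { task.memory <= 240.GB ? 'compute' : 'hmem' }
    // 137/140 and friends are the usual out-of-memory / walltime kill codes
    errorStrategy = { task.exitStatus in [143,137,104,134,139,140] ? 'retry' : 'finish' }
    maxRetries    = 3
}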
+ config_profile_contact = 'Olga Botvinnik (@olgabot)' + config_profile_url = 'https://www.czbiohub.org/' +} docker { - enabled = true + enabled = true } process { - executor = 'awsbatch' - queue = 'default-971039e0-830c-11e9-9e0b-02c5b84a8036' - errorStrategy = 'ignore' + executor = 'awsbatch' + queue = 'default-971039e0-830c-11e9-9e0b-02c5b84a8036' + errorStrategy = 'ignore' } workDir = "s3://czb-nextflow/intermediates/" @@ -31,116 +31,116 @@ aws.batch.cliPath = '/home/ec2-user/miniconda/bin/aws' params.tracedir = './' params { - saveReference = true - - // Largest SPOT instances available on AWS: https://ec2instances.info/ - max_memory = 1952.GB - max_cpus = 96 - max_time = 240.h - - // Compatible with multiple versions of rnaseq pipeline - seq_center = "czbiohub" - seqCenter = "czbiohub" - - // illumina iGenomes reference file paths on CZ Biohub reference s3 bucket - // No final slash because it's added later - igenomes_base = "s3://czbiohub-reference/igenomes" - - // GENCODE (human + mouse) reference file paths on CZ Biohub reference s3 bucket - // No final slash because it's added later - gencode_base = "s3://czbiohub-reference/gencode" - transgenes_base = "s3://czbiohub-reference/transgenes" - refseq_base = "s3://czbiohub-reference/ncbi/genomes/refseq/" - - // AWS configurations - awsregion = "us-west-2" - awsqueue = 'default-971039e0-830c-11e9-9e0b-02c5b84a8036' - - igenomes_ignore = true - igenomesIgnore = true //deprecated - - fc_extra_attributes = 'gene_name' - fc_group_features = 'gene_id' - fc_group_features_type = 'gene_type' - - trim_pattern = '_+S\\d+' - - // GENCODE GTF and fasta files - genomes { - 'GRCh38' { - fasta = "${params.gencode_base}/human/v30/GRCh38.p12.genome.ERCC92.fa" - gtf = "${params.gencode_base}/human/v30/gencode.v30.annotation.ERCC92.gtf" - transcript_fasta = "${params.gencode_base}/human/v30/gencode.v30.transcripts.ERCC92.fa" - star = "${params.gencode_base}/human/v30/STARIndex/" - salmon_index = "${params.gencode_base}/human/v30/salmon_index/" - } - 'GRCm38' { - fasta = "${params.gencode_base}/mouse/vM21/GRCm38.p6.genome.ERCC92.fa" - gtf = "${params.gencode_base}/mouse/vM21/gencode.vM21.annotation.ERCC92.gtf" - transcript_fasta = "${params.gencode_base}/mouse/vM21/gencode.vM21.transcripts.ERCC92.fa" - star = "${params.gencode_base}/mouse/vM21/STARIndex/" - } - 'AaegL5.0' { - fasta = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/GCF_002204515.2_AaegL5.0_genomic.fna" - gtf = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/GCF_002204515.2_AaegL5.0_genomic.gtf" - bed = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/GCF_002204515.2_AaegL5.0_genomic.bed" - star = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/star/" + saveReference = true + + // Largest SPOT instances available on AWS: https://ec2instances.info/ + max_memory = 1952.GB + max_cpus = 96 + max_time = 240.h + + // Compatible with multiple versions of rnaseq pipeline + seq_center = "czbiohub" + seqCenter = "czbiohub" + + // illumina iGenomes reference file paths on CZ Biohub reference s3 bucket + // No final slash because it's added later + igenomes_base = "s3://czbiohub-reference/igenomes" + + // GENCODE (human + mouse) reference file paths on CZ Biohub reference s3 bucket + // No final slash because it's added later + gencode_base = 
"s3://czbiohub-reference/gencode" + transgenes_base = "s3://czbiohub-reference/transgenes" + refseq_base = "s3://czbiohub-reference/ncbi/genomes/refseq/" + + // AWS configurations + awsregion = "us-west-2" + awsqueue = 'default-971039e0-830c-11e9-9e0b-02c5b84a8036' + + igenomes_ignore = true + igenomesIgnore = true //deprecated + + fc_extra_attributes = 'gene_name' + fc_group_features = 'gene_id' + fc_group_features_type = 'gene_type' + + trim_pattern = '_+S\\d+' + + // GENCODE GTF and fasta files + genomes { + 'GRCh38' { + fasta = "${params.gencode_base}/human/v30/GRCh38.p12.genome.ERCC92.fa" + gtf = "${params.gencode_base}/human/v30/gencode.v30.annotation.ERCC92.gtf" + transcript_fasta = "${params.gencode_base}/human/v30/gencode.v30.transcripts.ERCC92.fa" + star = "${params.gencode_base}/human/v30/STARIndex/" + salmon_index = "${params.gencode_base}/human/v30/salmon_index/" + } + 'GRCm38' { + fasta = "${params.gencode_base}/mouse/vM21/GRCm38.p6.genome.ERCC92.fa" + gtf = "${params.gencode_base}/mouse/vM21/gencode.vM21.annotation.ERCC92.gtf" + transcript_fasta = "${params.gencode_base}/mouse/vM21/gencode.vM21.transcripts.ERCC92.fa" + star = "${params.gencode_base}/mouse/vM21/STARIndex/" + } + 'AaegL5.0' { + fasta = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/GCF_002204515.2_AaegL5.0_genomic.fna" + gtf = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/GCF_002204515.2_AaegL5.0_genomic.gtf" + bed = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/GCF_002204515.2_AaegL5.0_genomic.bed" + star = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/star/" + } } - } - transgenes { - 'ChR2' { - fasta = "${params.transgenes_base}/ChR2/ChR2.fa" - gtf = "${params.transgenes_base}/ChR2/ChR2.gtf" - } - 'Cre' { - fasta = "${params.transgenes_base}/Cre/Cre.fa" - gtf = "${params.transgenes_base}/Cre/Cre.gtf" - } - 'ERCC' { - fasta = "${params.transgenes_base}/ERCC92/ERCC92.fa" - gtf = "${params.transgenes_base}/ERCC92/ERCC92.gtf" - } - 'GCaMP6m' { - fasta = "${params.transgenes_base}/GCaMP6m/GCaMP6m.fa" - gtf = "${params.transgenes_base}/GCaMP6m/GCaMP6m.gtf" - } - 'GFP' { - fasta = "${params.transgenes_base}/Gfp/Gfp.fa" - gtf = "${params.transgenes_base}/Gfp/Gfp.gtf" - } - 'NpHR' { - fasta = "${params.transgenes_base}/NpHR/NpHR.fa" - gtf = "${params.transgenes_base}/NpHR/NpHR.gtf" - } - 'RCaMP' { - fasta = "${params.transgenes_base}/RCaMP/RCaMP.fa" - gtf = "${params.transgenes_base}/RCaMP/RCaMP.gtf" - } - 'RGECO' { - fasta = "${params.transgenes_base}/RGECO/RGECO.fa" - gtf = "${params.transgenes_base}/RGECO/RGECO.gtf" - } - 'Tdtom' { - fasta = "${params.transgenes_base}/Tdtom/Tdtom.fa" - gtf = "${params.transgenes_base}/Tdtom/Tdtom.gtf" - } - 'Car-T' { - fasta = "${params.transgenes_base}/car-t/car-t.fa" - gtf = "${params.transgenes_base}/car-t/car-t.gtf" - } - 'zsGreen' { - fasta = "${params.transgenes_base}/zsGreen/zsGreen.fa" - gtf = "${params.transgenes_base}/zsGreen/zsGreen.gtf" + transgenes { + 'ChR2' { + fasta = "${params.transgenes_base}/ChR2/ChR2.fa" + gtf = "${params.transgenes_base}/ChR2/ChR2.gtf" + } + 'Cre' { + fasta = "${params.transgenes_base}/Cre/Cre.fa" + gtf = "${params.transgenes_base}/Cre/Cre.gtf" + } + 'ERCC' { + fasta = "${params.transgenes_base}/ERCC92/ERCC92.fa" + gtf = "${params.transgenes_base}/ERCC92/ERCC92.gtf" + } + 'GCaMP6m' { + fasta = 
"${params.transgenes_base}/GCaMP6m/GCaMP6m.fa" + gtf = "${params.transgenes_base}/GCaMP6m/GCaMP6m.gtf" + } + 'GFP' { + fasta = "${params.transgenes_base}/Gfp/Gfp.fa" + gtf = "${params.transgenes_base}/Gfp/Gfp.gtf" + } + 'NpHR' { + fasta = "${params.transgenes_base}/NpHR/NpHR.fa" + gtf = "${params.transgenes_base}/NpHR/NpHR.gtf" + } + 'RCaMP' { + fasta = "${params.transgenes_base}/RCaMP/RCaMP.fa" + gtf = "${params.transgenes_base}/RCaMP/RCaMP.gtf" + } + 'RGECO' { + fasta = "${params.transgenes_base}/RGECO/RGECO.fa" + gtf = "${params.transgenes_base}/RGECO/RGECO.gtf" + } + 'Tdtom' { + fasta = "${params.transgenes_base}/Tdtom/Tdtom.fa" + gtf = "${params.transgenes_base}/Tdtom/Tdtom.gtf" + } + 'Car-T' { + fasta = "${params.transgenes_base}/car-t/car-t.fa" + gtf = "${params.transgenes_base}/car-t/car-t.gtf" + } + 'zsGreen' { + fasta = "${params.transgenes_base}/zsGreen/zsGreen.fa" + gtf = "${params.transgenes_base}/zsGreen/zsGreen.gtf" + } } - } } profiles { - highpriority { - process { - queue = 'highpriority-971039e0-830c-11e9-9e0b-02c5b84a8036' + highpriority { + process { + queue = 'highpriority-971039e0-830c-11e9-9e0b-02c5b84a8036' + } } - } } diff --git a/conf/daisybio.config b/conf/daisybio.config index 2ca1cab8a..99dc27d75 100644 --- a/conf/daisybio.config +++ b/conf/daisybio.config @@ -1,30 +1,30 @@ params { - config_profile_description = 'DaiSyBio cluster profile provided by nf-core/configs.' - config_profile_contact = 'Johannes Kersting (Johannes Kersting)' - config_profile_url = 'https://biomedical-big-data.de/' - max_memory = 1.TB - max_cpus = 120 - max_time = 96.h - igenomes_base = '/nfs/data/references/igenomes' + config_profile_description = 'DaiSyBio cluster profile provided by nf-core/configs.' + config_profile_contact = 'Johannes Kersting (Johannes Kersting)' + config_profile_url = 'https://biomedical-big-data.de/' + max_memory = 1.TB + max_cpus = 120 + max_time = 96.h + igenomes_base = '/nfs/data/references/igenomes' } process { - executor = 'slurm' - queue = 'shared-cpu' - maxRetries = 2 + executor = 'slurm' + queue = 'shared-cpu' + maxRetries = 2 } executor { - queueSize = 30 - submitRateLimit = '10 sec' + queueSize = 30 + submitRateLimit = '10 sec' } singularity { - cacheDir = '/nfs/scratch/singularity_cache' + cacheDir = '/nfs/scratch/singularity_cache' } apptainer { - cacheDir = '/nfs/scratch/apptainer_cache' + cacheDir = '/nfs/scratch/apptainer_cache' } diff --git a/conf/denbi_qbic.config b/conf/denbi_qbic.config index 0d73aae23..3cdbbfd6b 100644 --- a/conf/denbi_qbic.config +++ b/conf/denbi_qbic.config @@ -1,26 +1,26 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'de.NBI cluster profile provided by nf-core/configs.' - config_profile_contact = 'Alexander Peltzer (@apeltzer)' - config_profile_url = 'https://cloud.denbi.de/' + config_profile_description = 'de.NBI cluster profile provided by nf-core/configs.' + config_profile_contact = 'Alexander Peltzer (@apeltzer)' + config_profile_url = 'https://cloud.denbi.de/' } singularity { - enabled = true + enabled = true } process { - executor = 'pbs' - queue = { task.memory > 64.GB ? 'highmem': 'batch'} + executor = 'pbs' + queue = { task.memory > 64.GB ? 
'highmem': 'batch'} } params { - max_memory = 512.GB - max_cpus = 28 - max_time = 960.h + max_memory = 512.GB + max_cpus = 28 + max_time = 960.h } weblog{ - enabled = true - url = 'https://services.qbic.uni-tuebingen.de/flowstore/workflows' + enabled = true + url = 'https://services.qbic.uni-tuebingen.de/flowstore/workflows' } diff --git a/conf/dkfz.config b/conf/dkfz.config index 65d6be889..be0ff1adb 100644 --- a/conf/dkfz.config +++ b/conf/dkfz.config @@ -12,19 +12,19 @@ params { singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } process { - executor = 'lsf' - scratch = '$SCRATCHDIR/$LSB_JOBID' + executor = 'lsf' + scratch = '$SCRATCHDIR/$LSB_JOBID' } executor { - name = 'lsf' - perTaskReserve = false - perJobMemLimit = true - queueSize = 10 - submitRateLimit = '3 sec' + name = 'lsf' + perTaskReserve = false + perJobMemLimit = true + queueSize = 10 + submitRateLimit = '3 sec' } diff --git a/conf/ebc.config b/conf/ebc.config index 8f007ed67..474f7e766 100644 --- a/conf/ebc.config +++ b/conf/ebc.config @@ -1,25 +1,25 @@ - //Profile config names for nf-core/configs - params { - config_profile_description = 'Generic Estonian Biocentre profile provided by nf-core/configs.' - config_profile_contact = 'Marcel Keller (@marcel-keller)' - config_profile_url = 'https://genomics.ut.ee/en/about-us/estonian-biocentre' - } +//Profile config names for nf-core/configs +params { + config_profile_description = 'Generic Estonian Biocentre profile provided by nf-core/configs.' + config_profile_contact = 'Marcel Keller (@marcel-keller)' + config_profile_url = 'https://genomics.ut.ee/en/about-us/estonian-biocentre' +} - cleanup = true +cleanup = true - conda { - cacheDir = '/gpfs/space/GI/ebc_data/software/nf-core/conda' - } - process { - executor = 'slurm' - conda = "$baseDir/environment.yml" - beforeScript = 'module load nextflow' - } - executor { - queueSize = 64 - } - params { - max_memory = 12.GB - max_cpus = 20 - max_time = 120.h - } +conda { + cacheDir = '/gpfs/space/GI/ebc_data/software/nf-core/conda' +} +process { + executor = 'slurm' + conda = "$baseDir/environment.yml" + beforeScript = 'module load nextflow' +} +executor { + queueSize = 64 +} +params { + max_memory = 12.GB + max_cpus = 20 + max_time = 120.h +} diff --git a/conf/ebi_codon.config b/conf/ebi_codon.config index 1b4aecdd7..e580816c0 100644 --- a/conf/ebi_codon.config +++ b/conf/ebi_codon.config @@ -8,7 +8,7 @@ Mail: saul@ebi.ac.uk */ params { - config_profile_contact = "Saul Pierotti (@saulpierotti-ebi)" + config_profile_contact = "Saul Pierotti (@saulpierotti)" config_profile_description = "The European Bioinformatics Institute HPC cluster (codon) profile" config_profile_url = "https://www.ebi.ac.uk/" } diff --git a/conf/ebi_codon_slurm.config b/conf/ebi_codon_slurm.config index f2732c34e..fbf91fdb4 100644 --- a/conf/ebi_codon_slurm.config +++ b/conf/ebi_codon_slurm.config @@ -8,7 +8,7 @@ Mail: saul@ebi.ac.uk */ params { - config_profile_contact = "Saul Pierotti (@saulpierotti-ebi)" + config_profile_contact = "Saul Pierotti (@saulpierotti)" config_profile_description = "The European Bioinformatics Institute HPC cluster (codon) profile for the SLURM login nodes" config_profile_url = "https://www.ebi.ac.uk/" } diff --git a/conf/eddie.config b/conf/eddie.config index ad75af9e1..910f8c296 100644 --- a/conf/eddie.config +++ b/conf/eddie.config @@ -1,50 +1,50 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'University of Edinburgh (eddie) cluster profile provided by 
nf-core/configs.' - config_profile_contact = 'Alison Meynert (@ameynert)' - config_profile_url = 'https://www.ed.ac.uk/information-services/research-support/research-computing/ecdf/high-performance-computing' + config_profile_description = 'University of Edinburgh (eddie) cluster profile provided by nf-core/configs.' + config_profile_contact = 'Alison Meynert (@ameynert)' + config_profile_url = 'https://www.ed.ac.uk/information-services/research-support/research-computing/ecdf/high-performance-computing' } executor { - name = "sge" - queueSize = "100" + name = "sge" + queueSize = "100" } process { - clusterOptions = { task.memory ? "-l h_vmem=${task.memory.bytes/task.cpus}" : null } - stageInMode = 'symlink' - scratch = 'false' - penv = { task.cpus > 1 ? "sharedmem" : null } + clusterOptions = { task.memory ? "-l h_vmem=${task.memory.bytes/task.cpus}" : null } + stageInMode = 'symlink' + scratch = 'false' + penv = { task.cpus > 1 ? "sharedmem" : null } - // common SGE error statuses - errorStrategy = {task.exitStatus in [143,137,104,134,139,140] ? 'retry' : 'finish'} - maxErrors = '-1' - maxRetries = 3 + // common SGE error statuses + errorStrategy = {task.exitStatus in [143,137,104,134,139,140] ? 'retry' : 'finish'} + maxErrors = '-1' + maxRetries = 3 - beforeScript = - """ - . /etc/profile.d/modules.sh - module load 'roslin/singularity/3.5.3' - export SINGULARITY_TMPDIR="\$TMPDIR" - """ + beforeScript = + """ + . /etc/profile.d/modules.sh + module load 'roslin/singularity/3.5.3' + export SINGULARITY_TMPDIR="\$TMPDIR" + """ } params { - // iGenomes reference base - igenomes_base = '/exports/igmm/eddie/BioinformaticsResources/igenomes' - max_memory = 384.GB - max_cpus = 32 - max_time = 240.h + // iGenomes reference base + igenomes_base = '/exports/igmm/eddie/BioinformaticsResources/igenomes' + max_memory = 384.GB + max_cpus = 32 + max_time = 240.h } env { - MALLOC_ARENA_MAX=1 + MALLOC_ARENA_MAX=1 } singularity { - envWhitelist = "SINGULARITY_TMPDIR,TMPDIR" - runOptions = '-p -B "$TMPDIR"' - enabled = true - autoMounts = true - cacheDir = "/exports/igmm/eddie/BioinformaticsResources/nfcore/singularity-images" + envWhitelist = "SINGULARITY_TMPDIR,TMPDIR" + runOptions = '-p -B "$TMPDIR"' + enabled = true + autoMounts = true + cacheDir = "/exports/igmm/eddie/BioinformaticsResources/nfcore/singularity-images" } diff --git a/conf/engaging.config b/conf/engaging.config index 0719cb28c..6325ae518 100644 --- a/conf/engaging.config +++ b/conf/engaging.config @@ -19,4 +19,4 @@ params { max_memory = 64.GB max_cpus = 16 max_time = 12.h -} \ No newline at end of file +} diff --git a/conf/ethz_euler.config b/conf/ethz_euler.config index 5db6ce7db..746dcb0bb 100644 --- a/conf/ethz_euler.config +++ b/conf/ethz_euler.config @@ -13,8 +13,8 @@ params { max_memory = 4.TB max_cpus = 128 max_time = 120.h - - igenomes_base = '/cluster/project/igenomes' + + igenomes_base = '/cluster/project/igenomes' igenomes_ignore = false } @@ -45,5 +45,5 @@ cleanup = true // Allows to override the default cleanup = true behaviour for debugging debug { - cleanup = false -} \ No newline at end of file + cleanup = false +} diff --git a/conf/eva.config b/conf/eva.config index b9383fe84..e5a3072de 100644 --- a/conf/eva.config +++ b/conf/eva.config @@ -1,8 +1,8 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Generic MPI-EVA cluster(s) profile provided by nf-core/configs.' 
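// Illustrative note, not part of the diff above: SGE-style schedulers such as eddie's interpret
// h_vmem per slot rather than per job, which is why the eddie profile divides the task's total
// memory by its CPU count before requesting it:
process.clusterOptions = { task.memory ? "-l h_vmem=${task.memory.bytes/task.cpus}" : null }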
- config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_url = 'https://eva.mpg.de' + config_profile_description = 'Generic MPI-EVA cluster(s) profile provided by nf-core/configs.' + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_url = 'https://eva.mpg.de' } // Preform work directory cleanup after a successful run @@ -22,27 +22,27 @@ process { profiles { archgen { - params { - igenomes_base = "/mnt/archgen/public_data/igenomes" - config_profile_description = 'MPI-EVA archgen profile, provided by nf-core/configs.' - max_memory = 256.GB - max_cpus = 32 - max_time = 365.d - //Illumina iGenomes reference file path - } + params { + igenomes_base = "/mnt/archgen/public_data/igenomes" + config_profile_description = 'MPI-EVA archgen profile, provided by nf-core/configs.' + max_memory = 256.GB + max_cpus = 32 + max_time = 365.d + //Illumina iGenomes reference file path + } - process { - queue = { task.memory > 700.GB ? 'bigmem.q' : 'archgen.q' } - clusterOptions = { "-S /bin/bash -V -j y -o output.sge -l h_vmem=${task.memory.toGiga()}G" } - } + process { + queue = { task.memory > 700.GB ? 'bigmem.q' : 'archgen.q' } + clusterOptions = { "-S /bin/bash -V -j y -o output.sge -l h_vmem=${task.memory.toGiga()}G" } + } - singularity { - cacheDir = "/mnt/archgen/tools/singularity/containers/" - } + singularity { + cacheDir = "/mnt/archgen/tools/singularity/containers/" + } } - // Profile to deactivate automatic cleanup of work directory after a successful run. Overwrites cleanup option. + // Profile to deactivate automatic cleanup of work directory after a successful run. Overwrites cleanup option. debug { - cleanup = false + cleanup = false } } diff --git a/conf/fgcz.config b/conf/fgcz.config index 1e15ff08c..d66f9d3e7 100644 --- a/conf/fgcz.config +++ b/conf/fgcz.config @@ -1,24 +1,24 @@ params { - config_profile_description = "FGCZ ETH/UZH" - config_profile_contact = "natalia.zajac@fgcz.ethz.ch" - max_memory = 500.GB - max_cpus = 64 - max_time = 240.h + config_profile_description = "FGCZ ETH/UZH" + config_profile_contact = "natalia.zajac@fgcz.ethz.ch" + max_memory = 500.GB + max_cpus = 64 + max_time = 240.h } process { - executor = "slurm" - maxRetries = 2 + executor = "slurm" + maxRetries = 2 } executor { - queueSize = 30 + queueSize = 30 } singularity { - enabled = true - autoMounts = true - cacheDir = "/srv/GT/nextflow/singularity/" + enabled = true + autoMounts = true + cacheDir = "/srv/GT/nextflow/singularity/" } diff --git a/conf/fub_curta.config b/conf/fub_curta.config index 42bd4e91c..eecb0c3fe 100644 --- a/conf/fub_curta.config +++ b/conf/fub_curta.config @@ -1,6 +1,8 @@ // Config profile metadata params { config_profile_contact = 'Wassim Salam (@wassimsalam01)' + config_profile_contact_github = '@wassimsalam01' + config_profile_contact_email = 'TODO' config_profile_name = 'FUB Curta' config_profile_description = 'Freie Universität Berlin HPC (Curta) profile' config_profile_url = 'https://www.fu-berlin.de/en/sites/high-performance-computing/index.html' diff --git a/conf/genotoul.config b/conf/genotoul.config index 7a6cde27d..27dd09ee3 100644 --- a/conf/genotoul.config +++ b/conf/genotoul.config @@ -1,27 +1,27 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'The Genotoul cluster profile' - config_profile_contact = 'support.bioinfo.genotoul@inra.fr' - config_profile_url = 'http://bioinfo.genotoul.fr/' + config_profile_description = 'The Genotoul cluster profile' + config_profile_contact = 
'support.bioinfo.genotoul@inra.fr' + config_profile_url = 'http://bioinfo.genotoul.fr/' } singularity { - // need one image per execution - enabled = true - runOptions = '-B /bank -B /work -B /save -B /home' - + // need one image per execution + enabled = true + runOptions = '-B /bank -B /work -B /save -B /home' + } process { - executor = 'slurm' + executor = 'slurm' } params { - save_reference = true - igenomes_ignore = true - igenomesIgnore = true //deprecated - // Max resources requested by a normal node on genotoul. - max_memory = 120.GB - max_cpus = 48 - max_time = 96.h + save_reference = true + igenomes_ignore = true + igenomesIgnore = true //deprecated + // Max resources requested by a normal node on genotoul. + max_memory = 120.GB + max_cpus = 48 + max_time = 96.h } diff --git a/conf/genouest.config b/conf/genouest.config index 6a16056cb..3f8d104fb 100644 --- a/conf/genouest.config +++ b/conf/genouest.config @@ -1,24 +1,24 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'The GenOuest cluster profile' - config_profile_contact = 'Anthony Bretaudeau (@abretaud)' - config_profile_url = 'https://www.genouest.org' + config_profile_description = 'The GenOuest cluster profile' + config_profile_contact = 'Anthony Bretaudeau (@abretaud)' + config_profile_url = 'https://www.genouest.org' } singularity { - enabled = true - autoMounts = true - runOptions = '-B /scratch:/scratch -B /local:/local -B /db:/db -B /groups:/groups' + enabled = true + autoMounts = true + runOptions = '-B /scratch:/scratch -B /local:/local -B /db:/db -B /groups:/groups' } process { - executor = 'slurm' + executor = 'slurm' } params { - igenomes_ignore = true - igenomesIgnore = true //deprecated - max_memory = 3000.GB - max_cpus = 160 - max_time = 336.h + igenomes_ignore = true + igenomesIgnore = true //deprecated + max_memory = 3000.GB + max_cpus = 160 + max_time = 336.h } diff --git a/conf/gis.config b/conf/gis.config index e1645e597..979fceaf0 100644 --- a/conf/gis.config +++ b/conf/gis.config @@ -1,20 +1,20 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Genome Institute of Singapore (Aquila) cluster profile provided by nf-core/configs.' - config_profile_contact = 'Andreas Wilm (@andreas-wilm)' - config_profile_url = 'https://www.a-star.edu.sg/gis/' + config_profile_description = 'Genome Institute of Singapore (Aquila) cluster profile provided by nf-core/configs.' + config_profile_contact = 'Andreas Wilm (@andreas-wilm)' + config_profile_url = 'https://www.a-star.edu.sg/gis/' } process { - executor = 'sge' - clusterOptions = { "-l mem_free=" + task.memory.toString().replaceAll(/[\sB]/,'') } - penv = 'OpenMP' - errorStrategy = { task.attempt < 2 ? 'retry' : 'finish' } - // auto translate container name into conda environment name - beforeScript = { 'source /mnt/projects/rpd/rc/init.2017-04; module load miniconda3; set +u; source activate nfcore-rnaseq-1.0dev; set -u;' } + executor = 'sge' + clusterOptions = { "-l mem_free=" + task.memory.toString().replaceAll(/[\sB]/,'') } + penv = 'OpenMP' + errorStrategy = { task.attempt < 2 ? 
'retry' : 'finish' } + // auto translate container name into conda environment name + beforeScript = { 'source /mnt/projects/rpd/rc/init.2017-04; module load miniconda3; set +u; source activate nfcore-rnaseq-1.0dev; set -u;' } } params { - saveReference = true - // illumina iGenomes reference file paths on GIS Aquila - igenomes_base = '/mnt/projects/rpd/genomes.testing/S3_igenomes/' + saveReference = true + // illumina iGenomes reference file paths on GIS Aquila + igenomes_base = '/mnt/projects/rpd/genomes.testing/S3_igenomes/' } diff --git a/conf/google.config b/conf/google.config index 6e8a45a91..382a52151 100644 --- a/conf/google.config +++ b/conf/google.config @@ -1,24 +1,24 @@ -// Nextflow config file for running on Google Cloud Life Sciences -params { - config_profile_description = 'Google Cloud Life Sciences Profile' - config_profile_contact = 'Evan Floden, Seqera Labs (@evanfloden)' - config_profile_url = 'https://cloud.google.com/life-sciences' - - google_zone = 'europe-west2-c' - google_bucket = false - google_debug = false - google_preemptible = true -} - -process.executor = 'google-lifesciences' -google.zone = params.google_zone -google.lifeSciences.debug = params.google_debug -workDir = params.google_bucket -google.lifeSciences.preemptible = params.google_preemptible - -if (google.lifeSciences.preemptible) { - process.errorStrategy = { task.exitStatus in [8,10,14] ? 'retry' : 'terminate' } - process.maxRetries = 5 -} - -process.machineType = { task.memory > task.cpus * 6.GB ? ['custom', task.cpus, task.cpus * 6656].join('-') : null } +// Nextflow config file for running on Google Cloud Life Sciences +params { + config_profile_description = 'Google Cloud Life Sciences Profile' + config_profile_contact = 'Evan Floden, Seqera Labs (@evanfloden)' + config_profile_url = 'https://cloud.google.com/life-sciences' + + google_zone = 'europe-west2-c' + google_bucket = false + google_debug = false + google_preemptible = true +} + +process.executor = 'google-lifesciences' +google.zone = params.google_zone +google.lifeSciences.debug = params.google_debug +workDir = params.google_bucket +google.lifeSciences.preemptible = params.google_preemptible + +if (google.lifeSciences.preemptible) { + process.errorStrategy = { task.exitStatus in [8,10,14] ? 'retry' : 'terminate' } + process.maxRetries = 5 +} + +process.machineType = { task.memory > task.cpus * 6.GB ? ['custom', task.cpus, task.cpus * 6656].join('-') : null } diff --git a/conf/googlels.config b/conf/googlels.config index 779f2261e..2ab4fe29d 100644 --- a/conf/googlels.config +++ b/conf/googlels.config @@ -33,10 +33,10 @@ google { project = params.project_id lifeSciences.network = params.custom_vpc lifeSciences.subnetwork = params.custom_subnet - lifeSciences.usePrivateAddress = params.use_spot - lifeSciences.preemptible = params.use_private_ip + lifeSciences.usePrivateAddress = params.use_private_ip + lifeSciences.preemptible = params.use_spot lifeSciences.serviceAccountEmail = params.workers_service_account lifeSciences.bootDiskSize = '20 GB' - } + } diff --git a/conf/hasta.config b/conf/hasta.config index 845fcca60..e8124ee9a 100644 --- a/conf/hasta.config +++ b/conf/hasta.config @@ -1,17 +1,17 @@ // Profile config names for nf-core/configs params { - config_profile_description = 'Hasta, a local cluster setup at Clinical Genomics, Stockholm.' 
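// Illustrative sketch, not part of the diff above: when the Google Cloud profiles in this
// changeset run on preemptible/spot VMs, they pair the preemptible flag with a retry on the exit
// codes reported for preempted instances; the values below mirror conf/google.config.
google.lifeSciences.preemptible = true
process.errorStrategy = { task.exitStatus in [8,10,14] ? 'retry' : 'terminate' }
process.maxRetries    = 5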
- config_profile_contact = 'Clinical Genomics, Stockholm' - config_profile_url = 'https://github.com/Clinical-Genomics' - priority = null - clusterOptions = null - schema_ignore_params = "priority,clusterOptions" - validationSchemaIgnoreParams = "priority,clusterOptions,schema_ignore_params" + config_profile_description = 'Hasta, a local cluster setup at Clinical Genomics, Stockholm.' + config_profile_contact = 'Clinical Genomics, Stockholm' + config_profile_url = 'https://github.com/Clinical-Genomics' + priority = null + clusterOptions = null + schema_ignore_params = "priority,clusterOptions" + validationSchemaIgnoreParams = "priority,clusterOptions,schema_ignore_params" } singularity { - enabled = true - envWhitelist = ['_JAVA_OPTIONS'] + enabled = true + envWhitelist = ['_JAVA_OPTIONS'] } params { @@ -28,11 +28,11 @@ process { profiles { stub_prio { params { - priority = 'development' - clusterOptions = "--qos=low" - max_memory = 6.GB - max_cpus = 1 - max_time = 1.h + priority = 'development' + clusterOptions = "--qos=low" + max_memory = 6.GB + max_cpus = 1 + max_time = 1.h } } diff --git a/conf/hki.config b/conf/hki.config index 63718bf99..d83b17a91 100644 --- a/conf/hki.config +++ b/conf/hki.config @@ -1,7 +1,7 @@ params { - config_profile_description = 'HKI clusters profile provided by nf-core/configs.' - config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_url = 'https://leibniz-hki.de' + config_profile_description = 'HKI clusters profile provided by nf-core/configs.' + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_url = 'https://leibniz-hki.de' } profiles { diff --git a/conf/hypatia.config b/conf/hypatia.config index 26323b0e8..e4d2bc94c 100644 --- a/conf/hypatia.config +++ b/conf/hypatia.config @@ -1,24 +1,24 @@ //Profile config names for Hypatia cluster in Universidad de los Andes params { - config_profile_description = 'Universidad de los Andes cluster profile provided by nf-core/configs.' - config_profile_contact = 'Luisa Sacristan (@lusacristan)' - config_profile_url = 'https://exacore.uniandes.edu.co/es/' + config_profile_description = 'Universidad de los Andes cluster profile provided by nf-core/configs.' + config_profile_contact = 'Luisa Sacristan (@lusacristan)' + config_profile_url = 'https://exacore.uniandes.edu.co/es/' } singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } process { - executor = 'slurm' - queue = 'medium' - scratch = 'true' + executor = 'slurm' + queue = 'medium' + scratch = 'true' } params { - max_memory = 550.GB - max_cpus = 40 - max_time = 168.h + max_memory = 550.GB + max_cpus = 40 + max_time = 168.h } diff --git a/conf/icr_davros.config b/conf/icr_davros.config index 841dc2e20..c9b16fd75 100644 --- a/conf/icr_davros.config +++ b/conf/icr_davros.config @@ -1,39 +1,39 @@ /* - * ------------------------------------------------- - * Nextflow nf-core config file for ICR davros HPC - * ------------------------------------------------- - * Defines LSF process executor and singularity - * settings. - * - */ + * ------------------------------------------------- + * Nextflow nf-core config file for ICR davros HPC + * ------------------------------------------------- + * Defines LSF process executor and singularity + * settings. 
+ * + */ params { - config_profile_description = "Nextflow nf-core profile for ICR davros HPC" - config_profile_contact = "Adrian Larkeryd (@adrlar)" + config_profile_description = "Nextflow nf-core profile for ICR davros HPC" + config_profile_contact = "Adrian Larkeryd (@adrlar)" } singularity { - enabled = true - runOptions = "--bind /mnt:/mnt --bind /data:/data" - // autoMounts = true // autoMounts sometimes causes a rare bug with the installed version of singularity + enabled = true + runOptions = "--bind /mnt:/mnt --bind /data:/data" + // autoMounts = true // autoMounts sometimes causes a rare bug with the installed version of singularity } executor { - // This is set because of an issue with too many - // singularity containers launching at once, they - // cause an singularity error with exit code 255. - submitRateLimit = "2 sec" + // This is set because of an issue with too many + // singularity containers launching at once, they + // cause an singularity error with exit code 255. + submitRateLimit = "2 sec" } process { - executor = "LSF" + executor = "LSF" } params { - // LSF cluster set up with memory tied to cores, - // it can't be requested. Locked at 12G per core. - cpus = 10 - max_cpus = 20 - max_memory = 12.GB - max_time = 168.h - igenomes_base = "/mnt/scratch/readonly/igenomes" + // LSF cluster set up with memory tied to cores, + // it can't be requested. Locked at 12G per core. + cpus = 10 + max_cpus = 20 + max_memory = 12.GB + max_time = 168.h + igenomes_base = "/mnt/scratch/readonly/igenomes" } diff --git a/conf/ifb_core.config b/conf/ifb_core.config index 6331278a0..3987fa248 100644 --- a/conf/ifb_core.config +++ b/conf/ifb_core.config @@ -1,23 +1,23 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'The IFB core cluster profile' - config_profile_contact = 'https://community.france-bioinformatique.fr' - config_profile_url = 'https://ifb-elixirfr.gitlab.io/cluster/doc/cluster-desc/' + config_profile_description = 'The IFB core cluster profile' + config_profile_contact = 'https://community.france-bioinformatique.fr' + config_profile_url = 'https://ifb-elixirfr.gitlab.io/cluster/doc/cluster-desc/' } singularity { - enabled = true - runOptions = '-B /shared' + enabled = true + runOptions = '-B /shared' } process { - executor = 'slurm' - queue = { task.time <= 24.h ? 'fast' : 'long' } + executor = 'slurm' + queue = { task.time <= 24.h ? 'fast' : 'long' } } params { - igenomes_ignore = true - max_memory = 252.GB - max_cpus = 56 - max_time = 720.h + igenomes_ignore = true + max_memory = 252.GB + max_cpus = 56 + max_time = 720.h } diff --git a/conf/ilifu.config b/conf/ilifu.config index f6dfc9941..b3ed80e6e 100644 --- a/conf/ilifu.config +++ b/conf/ilifu.config @@ -1,7 +1,7 @@ params { config_profile_description = """ - Ilifu (https://ilifu.ac.za) slurm cluster profile provided by nf-core/configs. - """.stripIndent() + Ilifu (https://ilifu.ac.za) slurm cluster profile provided by nf-core/configs. + """.stripIndent() config_profile_contact = 'Peter van Heusden (@pvanheus)' config_profile_url = 'https://github.com/nf-core/configs/blob/master/docs/ilifu.md' max_memory = 1500.GB @@ -16,9 +16,8 @@ singularity { process { beforeScript = """ - module load singularity - """ - .stripIndent() + module load singularity + """.stripIndent() executor = 'slurm' queue = { task.accelerator != null && task.accelerator.contains('nvidia') ? 'GPU' : (task.memory >= 250.GB ? 
'HighMem' : 'Main' ) } diff --git a/conf/imb.config b/conf/imb.config index 665b8af8d..66563fa64 100644 --- a/conf/imb.config +++ b/conf/imb.config @@ -11,8 +11,8 @@ params { } singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } executor { diff --git a/conf/imperial.config b/conf/imperial.config index 354d197f2..d5bf58362 100644 --- a/conf/imperial.config +++ b/conf/imperial.config @@ -16,7 +16,7 @@ profiles { imperial { process { executor = 'pbspro' - + // Update amount of max retries and set "retry" as the error strategy for all error codes errorStrategy = 'retry' maxRetries = 5 @@ -26,8 +26,8 @@ profiles { // General resource requirements queue = { 4 * task.attempt > 8 ? 'v1_throughput72' : 'v1_short8' } cpus = { 1 * task.attempt } - memory = { 6.GB * task.attempt } - time = { 4.h * task.attempt } + memory = { 6.GB * task.attempt } + time = { 4.h * task.attempt } // Process-specific resource requirements withLabel:process_single { @@ -37,50 +37,58 @@ profiles { } withLabel:process_low { - queue = 'v1_short8' + queue = 'v1_throughput72' cpus = { 2 * task.attempt } - memory = { 12.GB * task.attempt } - time = { 2 * task.attempt > 8 ? 8.h : 2.h * task.attempt } + memory = { 48.GB * task.attempt } + time = { 8 * task.attempt } } - withLabel:process_medium { - // TARGET QUEUE: medium - queue = 'v1_medium72' - cpus = { 9 * task.attempt } - memory = { 36.GB * task.attempt } - time = { 9.h * task.attempt } + withLabel:process_medium { + // TARGET QUEUE: throughput + queue = 'v1_throughput72' + cpus = { 8 * task.attempt } + memory = { 64.GB * task.attempt } + time = { 12.h * task.attempt } } withLabel:process_high { // TARGET QUEUE: medium queue = 'v1_medium72' cpus = { 12 * task.attempt } - memory = { 72.GB * task.attempt } - time = { 14.h * task.attempt } + memory = { 120.GB * task.attempt } + time = { 12.h * task.attempt } } withLabel:process_long { // TARGET QUEUE: medium queue = 'v1_medium72' cpus = 9 - memory = 96.GB - time = { 14.h * task.attempt } + memory = 100.GB + time = { 24.h * task.attempt } } withLabel:process_high_memory { // TARGET QUEUE: medium or largemem based on memory - queue = { 200 * task.attempt < 921 ? 'v1_medium72' : 'v1_largemem72' } + queue = { 200 * task.attempt < 921 ? 'v1_medium72' : 'v1_largemem72' } cpus = { 10 * task.attempt } memory = { 200.GB * task.attempt } - time = { 12.h * task.attempt } + time = { 24.h * task.attempt } + } + withLabel: with_gpus { + queue = 'gpu72' + time = 24.h + clusterOptions = '-l select=1:ncpus=4:mem=24gb:ngpus=1:gpu_type=RTX6000' + maxForks = 1 + containerOptions = { workflow.containerEngine == "singularity" ? '--nv --env CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES': + ( workflow.containerEngine == "docker" ? '--gpus all': null ) } + beforeScript = 'module load tools/prod' } } } medbio { process { executor = 'pbspro' - queue = 'pqmedbio-tput' - //queue = 'med-bio' //!! 
this is an alias and shouldn't be used + //queue = 'pqmedbio-tput' } } } @@ -89,6 +97,7 @@ executor { $pbspro { queueSize = 49 submitRateLimit = '10 sec' + maxForks = 49 } $local { diff --git a/conf/incliva.config b/conf/incliva.config index 6e8cc8d08..2053fe983 100644 --- a/conf/incliva.config +++ b/conf/incliva.config @@ -13,31 +13,28 @@ def getHostname() { } // Function to set singularity path according to which host nextflow is running on -def setHostConfig(String hostname) { - if (hostname == 'vlinuxcervantes3srv') { - System.out.println("\nINFO: working on ${hostname}\n") - - // Resources details - params.max_memory = 60.GB - params.max_cpus = 15 - singularity.cacheDir = "/nfs/home/software/singularity/nf_cacheDir" - - } else if (hostname == 'vlinuxcervantes4srv') { - System.out.println("\nINFO: working on ${hostname}.\n") - - // Resources details - params.max_memory = 120.GB - params.max_cpus = 19 - singularity.cacheDir = "/nfs/home/software/singularity/nf_cacheDir" - - } else { - System.err.println("\nERROR: unknown machine. Update incliva.config on nf-core/configs if you are working on another host.\n") - } -} -def hostname = getHostname() +def hostname = { getHostname() } + +if (hostname == 'vlinuxcervantes3srv') { + System.out.println("\nINFO: working on ${hostname}\n") + + // Resources details + params.max_memory = 60.GB + params.max_cpus = 15 + singularity.cacheDir = "/nfs/home/software/singularity/nf_cacheDir" -setHostConfig(hostname) +} else if (hostname == 'vlinuxcervantes4srv') { + System.out.println("\nINFO: working on ${hostname}.\n") + + // Resources details + params.max_memory = 120.GB + params.max_cpus = 19 + singularity.cacheDir = "/nfs/home/software/singularity/nf_cacheDir" + +} else { + System.err.println("\nERROR: unknown machine. Update incliva.config on nf-core/configs if you are working on another host.\n") +} // Singularity details singularity { diff --git a/conf/ipop_up.config b/conf/ipop_up.config index 53992a258..f439d5b02 100644 --- a/conf/ipop_up.config +++ b/conf/ipop_up.config @@ -1,25 +1,25 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'iPOP-UP cluster profile, Université paris Cité' - config_profile_contact = 'Magali Hennion, bibs@parisepigenetics.com' - config_profile_url = 'https://discourse.rpbs.univ-paris-diderot.fr/c/ipop-up' + config_profile_description = 'iPOP-UP cluster profile, Université paris Cité' + config_profile_contact = 'Magali Hennion, bibs@parisepigenetics.com' + config_profile_url = 'https://discourse.rpbs.univ-paris-diderot.fr/c/ipop-up' } singularity { - // need one image per execution - enabled = true - runOptions = '-B /shared' + // need one image per execution + enabled = true + runOptions = '-B /shared' } process { - executor = 'slurm' - queue = 'ipop-up' + executor = 'slurm' + queue = 'ipop-up' } params { - igenomes_ignore = true - // Max resources requested by a normal node on iPOP-UP cluster. - max_memory = 100.GB - max_cpus = 28 - max_time = 96.h + igenomes_ignore = true + // Max resources requested by a normal node on iPOP-UP cluster. 
+ max_memory = 100.GB + max_cpus = 28 + max_time = 96.h } diff --git a/conf/janelia.config b/conf/janelia.config index bcddfb2a1..09af0d8f9 100644 --- a/conf/janelia.config +++ b/conf/janelia.config @@ -14,18 +14,18 @@ params { } singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } process { - executor = 'lsf' - scratch = '/scratch/$USER' - clusterOptions = params.lsf_opts + executor = 'lsf' + scratch = '/scratch/$USER' + clusterOptions = params.lsf_opts } executor { - perTaskReserve = false - perJobMemLimit = true + perTaskReserve = false + perJobMemLimit = true } diff --git a/conf/jax.config b/conf/jax.config index a56df71ae..e64f433b8 100644 --- a/conf/jax.config +++ b/conf/jax.config @@ -1,9 +1,9 @@ params { - config_profile_description = 'The Jackson Laboratory Sumner HPC profile provided by nf-core/configs.' - config_profile_contact = 'Bill Flynn (@flynnb)' - config_profile_url = 'https://jacksonlaboratory.sharepoint.com/sites/ResearchIT/SitePages/Welcome-to-Sumner.aspx' - singularity_cache_dir = '/fastscratch/singularity_cache_nfcore' - } + config_profile_description = 'The Jackson Laboratory Sumner HPC profile provided by nf-core/configs.' + config_profile_contact = 'Bill Flynn (@flynnb)' + config_profile_url = 'https://jacksonlaboratory.sharepoint.com/sites/ResearchIT/SitePages/Welcome-to-Sumner.aspx' + singularity_cache_dir = '/fastscratch/singularity_cache_nfcore' +} executor.$slurm.queueSize = 250 process { @@ -19,7 +19,7 @@ singularity{ cacheDir = params.singularity_cache_dir } params { - max_memory = 320.GB - max_cpus = 32 - max_time = 336.h - } + max_memory = 320.GB + max_cpus = 32 + max_time = 336.h +} diff --git a/conf/jex.config b/conf/jex.config index c06925b48..e81bdf50f 100644 --- a/conf/jex.config +++ b/conf/jex.config @@ -1,32 +1,32 @@ params { - config_profile_name = 'Jex' - config_profile_description = 'Nextflow config file for the MRC LMS Jex cluster' - config_profile_contact = 'George Young (@A-N-Other)' - config_profile_url = 'https://lms.mrc.ac.uk/research-facility/bioinformatics-facility/' + config_profile_name = 'Jex' + config_profile_description = 'Nextflow config file for the MRC LMS Jex cluster' + config_profile_contact = 'George Young (@A-N-Other)' + config_profile_url = 'https://lms.mrc.ac.uk/research-facility/bioinformatics-facility/' } process { - executor = 'slurm' - queue = { - if ( task.time <= 6.h && task.cpus <= 8 && task.memory <= 64.GB ) { - 'nice' - } else if ( task.memory > 256.GB ) { - 'hmem' - } else { - 'cpu' + executor = 'slurm' + queue = { + if ( task.time <= 6.h && task.cpus <= 8 && task.memory <= 64.GB ) { + 'nice' + } else if ( task.memory > 256.GB ) { + 'hmem' + } else { + 'cpu' + } } - } - clusterOptions = '--qos qos_batch' + clusterOptions = '--qos qos_batch' } singularity { - enabled = true - autoMounts = true - cacheDir = '/opt/resources/apps/singularity/cache' + enabled = true + autoMounts = true + cacheDir = '/opt/resources/apps/singularity/cache' } params { - max_memory = 4000.GB - max_cpus = 16 - max_time = 3.d + max_memory = 4000.GB + max_cpus = 16 + max_time = 3.d } diff --git a/conf/ku_sund_dangpu.config b/conf/ku_sund_dangpu.config index 8c82a746e..933b8ac71 100644 --- a/conf/ku_sund_dangpu.config +++ b/conf/ku_sund_dangpu.config @@ -1,25 +1,25 @@ params { - config_profile_contact = 'Adrija Kalvisa ' - config_profile_description = 'dangpufl01 configuration' - config_profile_url = '' - - // General cpus/memory/time requirements - max_cpus = 8 - max_memory = 64.GB - max_time = 72.h + 
config_profile_contact = 'Adrija Kalvisa ' + config_profile_description = 'dangpufl01 configuration' + config_profile_url = '' + + // General cpus/memory/time requirements + max_cpus = 8 + max_memory = 64.GB + max_time = 72.h } process { - executor = 'slurm' - + executor = 'slurm' + } executor { - queueSize = 5 + queueSize = 5 } singularity { - enabled = true - autoMounts = true - runOptions = '--bind /projects:/projects' + enabled = true + autoMounts = true + runOptions = '--bind /projects:/projects' } diff --git a/conf/leicester.config b/conf/leicester.config index edc45ff9b..f3bd04850 100644 --- a/conf/leicester.config +++ b/conf/leicester.config @@ -1,33 +1,33 @@ params { - config_profile_description = 'ALICE and SPECTRE cluster profile provided by nf-core/configs.' - config_profile_contact = 'Matiss Ozols - mo246@leichester.ac.uk | mo11@sanger.ac.uk | matiss.ozols@manchester.ac.uk | mo513@cam.ac.uk' - ACCOUNT = "cellfunc" // users, please set the bash variable DEFAULT_ACCOUNT or provide account to be used in analysis - max_cpus = 24 - max_memory = 240.GB - max_time = 168.h + config_profile_description = 'ALICE and SPECTRE cluster profile provided by nf-core/configs.' + config_profile_contact = 'Matiss Ozols - mo246@leichester.ac.uk | mo11@sanger.ac.uk | matiss.ozols@manchester.ac.uk | mo513@cam.ac.uk' + ACCOUNT = "cellfunc" // users, please set the bash variable DEFAULT_ACCOUNT or provide account to be used in analysis + max_cpus = 24 + max_memory = 240.GB + max_time = 168.h } singularity { - enabled = true - envWhitelist = 'TZ' + enabled = true + envWhitelist = 'TZ' } process { - executor = 'slurm' - cpus = 1 - pollInterval = '1 min' - queueStatInterval = '2 min' - memory = 24.GB - time = 12.h + executor = 'slurm' + cpus = 1 + pollInterval = '1 min' + queueStatInterval = '2 min' + memory = 24.GB + time = 12.h - withLabel: gpu { - beforeScript = 'module load gcc/12.3.0 && module load cuda12.1/toolkit && module load cudnn8.9-cuda12.1' - clusterOptions = { "--gres=gpu:ampere:1 --account="+params.ACCOUNT } - containerOptions = { + withLabel: gpu { + beforeScript = 'module load gcc/12.3.0 && module load cuda12.1/toolkit && module load cudnn8.9-cuda12.1' + clusterOptions = { "--gres=gpu:ampere:1 --account="+params.ACCOUNT } + containerOptions = { workflow.containerEngine == "singularity" ? '--containall --cleanenv --nv': ( workflow.containerEngine == "docker" ? 
'--gpus all': null ) } - } -} \ No newline at end of file + } +} diff --git a/conf/lugh.config b/conf/lugh.config index ef809315d..5af04ba8c 100644 --- a/conf/lugh.config +++ b/conf/lugh.config @@ -1,32 +1,31 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'National University of Ireland, Galway Lugh cluster profile provided by nf-core/configs' - config_profile_contact = 'Barry Digby (@BarryDigby)' - config_profile_url = 'https://github.com/nf-core/configs/blob/master/docs/lugh.md' + config_profile_description = 'National University of Ireland, Galway Lugh cluster profile provided by nf-core/configs' + config_profile_contact = 'Barry Digby (@BarryDigby)' + config_profile_url = 'https://github.com/nf-core/configs/blob/master/docs/lugh.md' } singularity { - enabled = true - autoMounts = true - cacheDir = '/data/containers' + enabled = true + autoMounts = true + cacheDir = '/data/containers' } process { - beforeScript = """ - module load EasyBuild/3.4.1 - module load Java/1.8.0_144 - module load singularity/3.4.1 - ulimit -s unlimited - """ - .stripIndent() - containerOptions = '-B /data/' - executor = 'slurm' - queue = { task.memory >= 64.GB || task.cpus > 16 ? 'highmem' : 'normal' } + beforeScript = """ + module load EasyBuild/3.4.1 + module load Java/1.8.0_144 + module load singularity/3.4.1 + ulimit -s unlimited + """.stripIndent() + containerOptions = '-B /data/' + executor = 'slurm' + queue = { task.memory >= 64.GB || task.cpus > 16 ? 'highmem' : 'normal' } } params { - max_time = 120.h - max_memory = 128.GB - max_cpus = 32 + max_time = 120.h + max_memory = 128.GB + max_cpus = 32 } diff --git a/conf/m3c.config b/conf/m3c.config new file mode 100644 index 000000000..71feafa08 --- /dev/null +++ b/conf/m3c.config @@ -0,0 +1,24 @@ +//Profile config names for nf-core/configs +params { + config_profile_description = 'The M3 Research Center HPC cluster profile provided by nf-core/configs' + config_profile_contact = 'Sabrina Krakau (@skrakau)' + config_profile_url = 'https://www.medizin.uni-tuebingen.de/de/das-klinikum/einrichtungen/zentren/m3' +} + +singularity { + enabled = true + // cacheDir has to be set up group-wise +} + +process { + executor = 'slurm' + queue = {task.time > 23.h ? 'cpu3-long' : (task.memory > 460.GB || task.cpus > 64 ? 
'cpu2-hm' : 'cpu1')} + scratch = 'true' + containerOptions = '--bind $TMPDIR' +} + +params { + max_memory = 1843.GB + max_cpus = 128 + max_time = 14.d +} diff --git a/conf/maestro.config b/conf/maestro.config index 881593235..8002b53c1 100644 --- a/conf/maestro.config +++ b/conf/maestro.config @@ -1,29 +1,30 @@ params { - config_profile_description = 'Institut Pasteur Maestro cluster profile' - config_profile_url = 'https://research.pasteur.fr/en/equipment/maestro-compute-cluster/' - config_profile_contact = 'Pierre Luisi (@pierrespc)' + config_profile_description = 'Institut Pasteur Maestro cluster profile' + config_profile_url = 'https://research.pasteur.fr/en/equipment/maestro-compute-cluster/' + config_profile_contact = 'Pierre Luisi (@pierrespc)' } singularity { - enabled = true - autoMounts = true - runOptions = '--home $HOME:/home/$USER --bind /pasteur' + enabled = true + autoMounts = true + runOptions = '--home $HOME:/home/$USER --bind /pasteur' } profiles { - + normal { process { - executor = 'slurm' - scratch = false - queue = 'common' - clusterOptions = '--qos=normal' + executor = 'slurm' + scratch = false + queue = 'common' + queueSize = 20 + clusterOptions = '--qos=normal' } params { igenomes_ignore = true igenomesIgnore = true - max_memory = 400.GB + max_memory = 500.GB max_cpus = 96 max_time = 24.h } @@ -31,16 +32,16 @@ profiles { long { process { - executor = 'slurm' - scratch = false - queue = 'common' - clusterOptions = '--qos=long' + executor = 'slurm' + scratch = false + queue = 'long' + clusterOptions = '--qos=long -p long' } params { igenomes_ignore = true igenomesIgnore = true - max_memory = 400.GB + max_memory = 500.GB max_cpus = 5 max_time = 8760.h } diff --git a/conf/mana.config b/conf/mana.config index 93d674c53..9ac12a93f 100644 --- a/conf/mana.config +++ b/conf/mana.config @@ -1,21 +1,21 @@ params { - config_profile_description = 'University of Hawaii at Manoa' - config_profile_url = 'http://www.hawaii.edu/its/ci/' - config_profile_contact = 'Cedric Arisdakessian' + config_profile_description = 'University of Hawaii at Manoa' + config_profile_url = 'http://www.hawaii.edu/its/ci/' + config_profile_contact = 'Cedric Arisdakessian' - max_memory = 400.GB - max_cpus = 96 - max_time = 72.h + max_memory = 400.GB + max_cpus = 96 + max_time = 72.h } process { - executor = 'slurm' - queue = 'shared,exclusive,kill-shared,kill-exclusive' - module = 'tools/Singularity' + executor = 'slurm' + queue = 'shared,exclusive,kill-shared,kill-exclusive' + module = 'tools/Singularity' } singularity { - enabled = true - cacheDir = "$HOME/.singularity_images_cache" - autoMounts = true + enabled = true + cacheDir = "$HOME/.singularity_images_cache" + autoMounts = true } diff --git a/conf/marvin.config b/conf/marvin.config index f7a4b2698..4df4ee7f9 100644 --- a/conf/marvin.config +++ b/conf/marvin.config @@ -1,11 +1,11 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Config file for Marvin Cluster (UPF-CSIC), based on nf-core/configs' - config_profile_contact = 'pc.quilis@gmail.com (Pablo Carrion)' - config_profile_url = 'https://www.ibe.upf-csic.es' - max_memory = 256.GB - max_cpus = 32 - max_time = 960.h + config_profile_description = 'Config file for Marvin Cluster (UPF-CSIC), based on nf-core/configs' + config_profile_contact = 'pc.quilis@gmail.com (Pablo Carrion)' + config_profile_url = 'https://www.ibe.upf-csic.es' + max_memory = 256.GB + max_cpus = 32 + max_time = 960.h } cleanup = false diff --git a/conf/medair.config b/conf/medair.config 
index d14764354..cb0f691e0 100644 --- a/conf/medair.config +++ b/conf/medair.config @@ -1,8 +1,8 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Cluster profile for medair (local cluster of Clinical Genomics Gothenburg)' - config_profile_contact = 'Clinical Genomics, Gothenburg (cgg-rd@gu.se, cgg-it@gu.se)' - config_profile_url = 'https://www.scilifelab.se/units/clinical-genomics-goteborg/' + config_profile_description = 'Cluster profile for medair (local cluster of Clinical Genomics Gothenburg)' + config_profile_contact = 'Clinical Genomics, Gothenburg (cgg-rd@gu.se, cgg-it@gu.se)' + config_profile_url = 'https://www.scilifelab.se/units/clinical-genomics-goteborg/' } //Nextflow parameters @@ -12,35 +12,35 @@ singularity { } profiles { - - wgs { - process { - queue = 'wgs.q' - executor = 'sge' - penv = 'mpi' - process.clusterOptions = '-l excl=1' - params.max_cpus = 40 - params.max_time = 48.h - params.max_memory = 128.GB + + wgs { + process { + queue = 'wgs.q' + executor = 'sge' + penv = 'mpi' + process.clusterOptions = '-l excl=1' + params.max_cpus = 40 + params.max_time = 48.h + params.max_memory = 128.GB + } } - } - production { - process { - queue = 'production.q' - executor = 'sge' - penv = 'mpi' - process.clusterOptions = '-l excl=1' - params.max_cpus = 40 - params.max_time = 480.h - params.max_memory = 128.GB + production { + process { + queue = 'production.q' + executor = 'sge' + penv = 'mpi' + process.clusterOptions = '-l excl=1' + params.max_cpus = 40 + params.max_time = 480.h + params.max_memory = 128.GB + } } - } } //Specific parameter for pipelines that can use Sentieon (e.g. nf-core/sarek, nf-core/raredisease) process { - withLabel:'sentieon' { - container = "/apps/bio/singularities/sentieon-211204-peta.simg" - } + withLabel:'sentieon' { + container = "/apps/bio/singularities/sentieon-211204-peta.simg" + } } diff --git a/conf/mjolnir_globe.config b/conf/mjolnir_globe.config index 0f3031323..9e3f5789f 100644 --- a/conf/mjolnir_globe.config +++ b/conf/mjolnir_globe.config @@ -1,11 +1,11 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Section for Hologenomics and Section for Molecular Ecology and Evolution @ Globe Institute, University of Copenhagen - mjolnir_globe profile provided by nf-core/configs.' - config_profile_contact = 'Aashild Vaagene (@ashildv)' - config_profile_url = 'https://globe.ku.dk/research/' - max_memory = 500.GB - max_cpus = 50 - max_time = 720.h + config_profile_description = 'Section for Hologenomics and Section for Molecular Ecology and Evolution @ Globe Institute, University of Copenhagen - mjolnir_globe profile provided by nf-core/configs.' + config_profile_contact = 'Aashild Vaagene (@ashildv)' + config_profile_url = 'https://globe.ku.dk/research/' + max_memory = 500.GB + max_cpus = 50 + max_time = 720.h } singularity { @@ -13,13 +13,13 @@ singularity { autoMounts = true cacheDir = '/maps/projects/mjolnir1/data/cache/nf-core/singularity' } - + process { - executor = 'slurm' + executor = 'slurm' } - + cleanup = true - + executor { - queueSize = 10 + queueSize = 10 } diff --git a/conf/mpcdf.config b/conf/mpcdf.config index 93e292436..6c3bac44e 100644 --- a/conf/mpcdf.config +++ b/conf/mpcdf.config @@ -1,34 +1,34 @@ params { - config_profile_description = 'MPCDF HPC profiles (unoffically) provided by nf-core/configs.' 
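// Illustrative usage sketch, not part of the diff above: configs such as medair (wgs/production),
// eva (archgen) and mpcdf (cobra/raven) nest site-specific sub-profiles inside the institutional
// profile; on the command line the two are typically combined with a comma, e.g. (pipeline name
// and release are placeholders):
//     nextflow run nf-core/sarek -r 3.x -profile medair,wgs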
- config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_url = 'https://www.mpcdf.mpg.de/services/supercomputing' + config_profile_description = 'MPCDF HPC profiles (unoffically) provided by nf-core/configs.' + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_url = 'https://www.mpcdf.mpg.de/services/supercomputing' } profiles { - + cobra { - + cleanup = true - + process { beforeScript = 'module load singularity' executor = 'slurm' } - + executor { queueSize = 8 pollInterval = '1 min' queueStatInterval = '5 min' } - - // Set $NXF_SINGULARITY_CACHEDIR in your ~/.bash_profile + + // Set $NXF_SINGULARITY_CACHEDIR in your ~/.bash_profile // to stop downloading the same image for every run singularity { enabled = true autoMounts = true } - + params { config_profile_description = 'MPCDF cobra profile (unofficially) provided by nf-core/configs.' max_memory = 725.GB @@ -36,37 +36,37 @@ profiles { max_time = 24.h } } - + raven { - + cleanup = true - + process { beforeScript = 'module load singularity' executor = 'slurm' } - + executor { queueSize = 30 pollInterval = '1 min' queueStatInterval = '5 min' } - - // Set $NXF_SINGULARITY_CACHEDIR in your ~/.bash_profile + + // Set $NXF_SINGULARITY_CACHEDIR in your ~/.bash_profile // to stop downloading the same image for every run singularity { enabled = true autoMounts = true } - + params { config_profile_description = 'MPCDF raven profile (unofficially) provided by nf-core/configs.' max_memory = 2000000.MB max_cpus = 72 max_time = 24.h } - } - + } + debug { cleanup = false } diff --git a/conf/munin.config b/conf/munin.config index 0fca21407..93809f133 100644 --- a/conf/munin.config +++ b/conf/munin.config @@ -1,44 +1,44 @@ // Profile config names for nf-core/configs params { - // Specific nf-core/configs params - config_profile_contact = 'Maxime Garcia (@maxulysse)' - config_profile_description = 'MUNIN profile provided by nf-core/configs' - config_profile_url = 'https://ki.se/forskning/barntumorbanken' + // Specific nf-core/configs params + config_profile_contact = 'Maxime Garcia (@maxulysse)' + config_profile_description = 'MUNIN profile provided by nf-core/configs' + config_profile_url = 'https://ki.se/forskning/barntumorbanken' - // Local AWS iGenomes reference file paths on munin - igenomes_base = '/data1/references/igenomes/' + // Local AWS iGenomes reference file paths on munin + igenomes_base = '/data1/references/igenomes/' - // General cpus/memory/time requirements - max_cpus = 46 - max_memory = 752.GB - max_time = 72.h + // General cpus/memory/time requirements + max_cpus = 46 + max_memory = 752.GB + max_time = 72.h } process { - executor = 'local' - maxForks = 46 + executor = 'local' + maxForks = 46 -// Limit cpus for Mutect2 - withName:'Mutect2|Mutect2Single|PileupSummariesForMutect2' { - time = {48.h * task.attempt} - maxForks = 12 - } + // Limit cpus for Mutect2 + withName:'Mutect2|Mutect2Single|PileupSummariesForMutect2' { + time = {48.h * task.attempt} + maxForks = 12 + } } singularity { - cacheDir = '/data1/containers/' - enabled = true - //runOptions = "--bind /media/BTB_2021_01" + cacheDir = '/data1/containers/' + enabled = true + //runOptions = "--bind /media/BTB_2021_01" } // To use docker, use nextflow run -profile munin,docker profiles { - docker { docker { - enabled = false - mountFlags = 'z' - fixOwnership = true + docker { + enabled = false + mountFlags = 'z' + fixOwnership = true + } } - } } diff --git a/conf/nu_genomics.config b/conf/nu_genomics.config index a8e9e022b..2435293e3 
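The MUNIN hunk above throttles the Mutect2-related processes with a single withName pattern and lets walltime grow with each retry. A self-contained sketch of that retry-scaling pattern (selector and values copied from the hunk; purely illustrative):

process {
    executor = 'local'
    maxForks = 46

    // One selector matches several process names; the time closure is re-evaluated
    // per attempt, so a first retry gets 96 h, a second retry 144 h, and so on.
    withName: 'Mutect2|Mutect2Single|PileupSummariesForMutect2' {
        time     = { 48.h * task.attempt }
        maxForks = 12
    }
}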
100644 --- a/conf/nu_genomics.config +++ b/conf/nu_genomics.config @@ -1,31 +1,31 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Northwestern University Quest HPC (Genomics Nodes) config provided by nf-core/configs' - config_profile_contact = 'Rogan Grant / Janna Nugent (@RoganGrant, @NUjon)' - config_profile_url = 'https://www.it.northwestern.edu/research/user-services/quest/' - max_memory = 190.GB - max_cpus = 40 - max_time = 240.h - igenomes_base = "/projects/genomicsshare/AWS_iGenomes/references" + config_profile_description = 'Northwestern University Quest HPC (Genomics Nodes) config provided by nf-core/configs' + config_profile_contact = 'Rogan Grant / Janna Nugent (@RoganGrant, @NUjon)' + config_profile_url = 'https://www.it.northwestern.edu/research/user-services/quest/' + max_memory = 190.GB + max_cpus = 40 + max_time = 240.h + igenomes_base = "/projects/genomicsshare/AWS_iGenomes/references" } singularity { - enabled = true - autoMounts = true - cacheDir = "/projects/b1042/singularity_cache" + enabled = true + autoMounts = true + cacheDir = "/projects/b1042/singularity_cache" } process { - beforeScript = 'module purge; module load singularity/latest; module load graphviz/2.40.1' - executor = 'slurm' - queue = {task.memory >= 190.GB ? 'genomics-himem' : task.time >= 48.h ? 'genomicslong' : 'genomics'} - clusterOptions = '-A b1042' + beforeScript = 'module purge; module load singularity/latest; module load graphviz/2.40.1; module load java/jdk11.0.10' + executor = 'slurm' + queue = {task.memory >= 190.GB ? 'genomics-himem' : task.time >= 48.h ? 'genomicslong' : 'genomics'} + clusterOptions = '-A b1042' } executor { - submitRateLimit = '1sec' + submitRateLimit = '1sec' } diff --git a/conf/nygc.config b/conf/nygc.config index 6075eb27f..819e0d341 100644 --- a/conf/nygc.config +++ b/conf/nygc.config @@ -2,23 +2,23 @@ singularityDir = "${HOME}/.singularity/singularity_images_nextflow" singularity { - enabled = true - autoMounts = true - cacheDir = singularityDir + enabled = true + autoMounts = true + cacheDir = singularityDir } process { - executor = 'slurm' - queue = { task.memory <= 100.GB ? 'pe2' : 'bigmem' } + executor = 'slurm' + queue = { task.memory <= 100.GB ? 'pe2' : 'bigmem' } } params { - config_profile_contact = 'John Zinno (@jzinno)' - config_profile_description = 'New York Genome Center (NYGC) cluster profile provided by nf-core/configs.' - config_profile_url = 'https://www.nygenome.org/' + config_profile_contact = 'John Zinno (@jzinno)' + config_profile_description = 'New York Genome Center (NYGC) cluster profile provided by nf-core/configs.' + config_profile_url = 'https://www.nygenome.org/' } executor { - queueSize = 196 - submitRateLimit = '5 sec' -} \ No newline at end of file + queueSize = 196 + submitRateLimit = '5 sec' +} diff --git a/conf/nyu_hpc.config b/conf/nyu_hpc.config index 32158a055..def88daa6 100644 --- a/conf/nyu_hpc.config +++ b/conf/nyu_hpc.config @@ -1,21 +1,21 @@ params { - config_profile_description = 'New York University HPC profile provided by nf-core/configs.' - config_profile_contact = 'HPC@nyu.edu' - config_profile_url = 'https://hpc.nyu.edu' - max_memory = 3000.GB - max_cpus = 96 - max_time = 7.d + config_profile_description = 'New York University HPC profile provided by nf-core/configs.' 
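Both the Quest (nu_genomics) and NYGC hunks above route jobs with a queue closure that is evaluated per task, so the partition follows the task's actual resource request. A minimal sketch of that routing pattern (partition names and thresholds copied from the Quest hunk; they would need adjusting for any other cluster):

process {
    executor = 'slurm'
    // High-memory jobs go to the himem partition, long jobs to the long partition,
    // everything else to the default genomics partition.
    queue = { task.memory >= 190.GB ? 'genomics-himem' : task.time >= 48.h ? 'genomicslong' : 'genomics' }
}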
+ config_profile_contact = 'HPC@nyu.edu' + config_profile_url = 'https://hpc.nyu.edu' + max_memory = 3000.GB + max_cpus = 96 + max_time = 7.d } singularity.enabled = true process { - executor = 'slurm' - clusterOptions = '--export=NONE' - maxRetries = 2 + executor = 'slurm' + clusterOptions = '--export=NONE' + maxRetries = 2 } executor { - queueSize = 1900 - submitRateLimit = '20 sec' + queueSize = 1900 + submitRateLimit = '20 sec' } diff --git a/conf/oist.config b/conf/oist.config index 8815ed4a1..b026c1a70 100644 --- a/conf/oist.config +++ b/conf/oist.config @@ -1,22 +1,22 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'The Okinawa Institute of Science and Technology Graduate University (OIST) HPC cluster profile provided by nf-core/configs.' - config_profile_contact = 'OISTs Bioinformatics User Group ' - config_profile_url = 'https://github.com/nf-core/configs/blob/master/docs/oist.md' + config_profile_description = 'The Okinawa Institute of Science and Technology Graduate University (OIST) HPC cluster profile provided by nf-core/configs.' + config_profile_contact = 'OISTs Bioinformatics User Group ' + config_profile_url = 'https://github.com/nf-core/configs/blob/master/docs/oist.md' } singularity { - enabled = true + enabled = true } process { - executor = 'slurm' - queue = 'compute' - clusterOptions = '-C zen2' + executor = 'slurm' + queue = 'compute' + clusterOptions = '-C zen2' } params { - max_memory = 500.GB - max_cpus = 128 - max_time = 90.h + max_memory = 500.GB + max_cpus = 128 + max_time = 90.h } diff --git a/conf/pasteur.config b/conf/pasteur.config index 01d522710..5b1aa5ec3 100644 --- a/conf/pasteur.config +++ b/conf/pasteur.config @@ -1,24 +1,24 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'The Institut Pasteur HPC cluster profile' - config_profile_contact = 'Remi Planel (@rplanel)' - config_profile_url = 'https://research.pasteur.fr/en/service/tars-cluster' + config_profile_description = 'The Institut Pasteur HPC cluster profile' + config_profile_contact = 'Remi Planel (@rplanel)' + config_profile_url = 'https://research.pasteur.fr/en/service/tars-cluster' } singularity { - enabled = true - autoMounts = true - runOptions = '-B /local/scratch:/tmp' + enabled = true + autoMounts = true + runOptions = '-B /local/scratch:/tmp' } process { - executor = 'slurm' + executor = 'slurm' } params { - igenomes_ignore = true - igenomesIgnore = true //deprecated - max_memory = 256.GB - max_cpus = 28 - max_time = 24.h + igenomes_ignore = true + igenomesIgnore = true //deprecated + max_memory = 256.GB + max_cpus = 28 + max_time = 24.h } diff --git a/conf/pawsey_nimbus.config b/conf/pawsey_nimbus.config index 4805358d4..5523d71fc 100644 --- a/conf/pawsey_nimbus.config +++ b/conf/pawsey_nimbus.config @@ -12,46 +12,46 @@ process { profiles { -// To use singularity, use nextflow run -profile pawsey_nimbus,singularity - singularity { + // To use singularity, use nextflow run -profile pawsey_nimbus,singularity singularity { - enabled = true - autoMounts = true + singularity { + enabled = true + autoMounts = true + } } - } -// To use docker, use nextflow run -profile pawsey_nimbus,docker - docker { + // To use docker, use nextflow run -profile pawsey_nimbus,docker docker { - enabled = true + docker { + enabled = true + } } - } - c2r8 { - params { - max_cpus = 2 - max_memory = '6.GB' + c2r8 { + params { + max_cpus = 2 + max_memory = '6.GB' + } } - } - c4r16 { - params { - max_cpus = 4 - max_memory = '14.GB' + c4r16 { 
+ params { + max_cpus = 4 + max_memory = '14.GB' + } } - } - c8r32 { - params { - max_cpus = 8 - max_memory = '30.GB' + c8r32 { + params { + max_cpus = 8 + max_memory = '30.GB' + } } - } - c16r64 { - params { - max_cpus = 16 - max_memory = '62.GB' + c16r64 { + params { + max_cpus = 16 + max_memory = '62.GB' + } } - } } diff --git a/conf/pawsey_setonix.config b/conf/pawsey_setonix.config index ef7e720a8..00f546647 100644 --- a/conf/pawsey_setonix.config +++ b/conf/pawsey_setonix.config @@ -1,31 +1,31 @@ // Pawsey Setonix nf-core configuration profile params { - config_profile_description = 'Pawsey Setonix HPC profile provided by nf-core/configs' - config_profile_contact = 'Sarah Beecroft (@SarahBeecroft), Georgie Samaha (@georgiesamaha)' - config_profile_url = 'https://support.pawsey.org.au/documentation/display/US/Setonix+User+Guide' - max_cpus = 64 - max_memory = 230.Gb + config_profile_description = 'Pawsey Setonix HPC profile provided by nf-core/configs' + config_profile_contact = 'Sarah Beecroft (@SarahBeecroft), Georgie Samaha (@georgiesamaha)' + config_profile_url = 'https://support.pawsey.org.au/documentation/display/US/Setonix+User+Guide' + max_cpus = 64 + max_memory = 230.Gb } // Enable use of Singularity to run containers singularity { - enabled = true - autoMounts = true - autoCleanUp = true + enabled = true + autoMounts = true + autoCleanUp = true } // Submit up to 256 concurrent jobs (Setonix work partition max) executor { - queueSize = 1024 + queueSize = 1024 } // Define process resource limits // See: https://support.pawsey.org.au/documentation/pages/viewpage.action?pageId=121479736#RunningJobsonSetonix-Overview process { - executor = 'slurm' - clusterOptions = "--account=${System.getenv('PAWSEY_PROJECT')}" - module = 'singularity/3.11.4-slurm' - cache = 'lenient' - stageInMode = 'symlink' - queue = { task.memory < 230.GB ? 'work' : (task.memory > 230.GB && task.memory <= 980.GB ? 'highmem' : '') } + executor = 'slurm' + clusterOptions = "--account=${System.getenv('PAWSEY_PROJECT')}" + module = 'singularity/3.11.4-slurm' + cache = 'lenient' + stageInMode = 'symlink' + queue = { task.memory < 230.GB ? 'work' : (task.memory > 230.GB && task.memory <= 980.GB ? 'highmem' : '') } } diff --git a/conf/pdc_kth.config b/conf/pdc_kth.config index ddc89d3e0..b98e008d1 100644 --- a/conf/pdc_kth.config +++ b/conf/pdc_kth.config @@ -9,81 +9,81 @@ try { } params { - config_profile_description = 'PDC profile.' - config_profile_contact = 'Pontus Freyhult (@pontus)' - config_profile_url = "https://www.pdc.kth.se/" + config_profile_description = 'PDC profile.' 
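The Setonix hunk above derives the Slurm account from the submitting user's PAWSEY_PROJECT environment variable and picks a partition from the task's memory request. A minimal sketch of the same idea (values copied from the hunk; the environment variable is assumed to be set before launch, and the >980 GB edge case is omitted here):

process {
    executor = 'slurm'
    // Resolved once, when the config is parsed, from the launch environment.
    clusterOptions = "--account=${System.getenv('PAWSEY_PROJECT')}"
    // Requests above the threshold are sent to the high-memory partition.
    queue = { task.memory < 230.GB ? 'work' : 'highmem' }
}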
+ config_profile_contact = 'Pontus Freyhult (@pontus)' + config_profile_url = "https://www.pdc.kth.se/" - max_memory = 1790.GB - max_cpus = 256 - max_time = 7.d + max_memory = 1790.GB + max_cpus = 256 + max_time = 7.d - schema_ignore_params = "genomes,input_paths,cluster-options,clusterOptions,project,validationSchemaIgnoreParams" - validationSchemaIgnoreParams = "genomes,input_paths,cluster-options,clusterOptions,project,schema_ignore_params" + schema_ignore_params = "genomes,input_paths,cluster-options,clusterOptions,project,validationSchemaIgnoreParams" + validationSchemaIgnoreParams = "genomes,input_paths,cluster-options,clusterOptions,project,schema_ignore_params" } def containerOptionsCreator = { - switch(cluster) { - case "dardel": - return '-B /cfs/klemming/' - } + switch(cluster) { + case "dardel": + return '-B /cfs/klemming/' + } - return '' + return '' } def clusterOptionsCreator = { mem, time, cpus -> - String base = "-A $params.project ${params.clusterOptions ?: ''}" - - switch(cluster) { - case "dardel": - String extra = '' - - if (time < 1.d && mem <= 222.GB && cpus < 256) { - extra += ' -p shared ' - } - else if (time < 1.d) { - // Shortish - if (mem > 222.GB) { - extra += ' -p memory,main ' - } else { - extra += ' -p main ' - } - } else { - // Not shortish - if (mem > 222.GB) { - extra += ' -p memory ' - } else { - extra += ' -p long ' - } - } - - if (!mem || mem < 6.GB) { - // Impose minimum memory if request is below - extra += ' --mem=6G ' - } - - return base+extra - } - - return base + String base = "-A $params.project ${params.clusterOptions ?: ''}" + + switch(cluster) { + case "dardel": + String extra = '' + + if (time < 1.d && mem <= 222.GB && cpus < 256) { + extra += ' -p shared ' + } + else if (time < 1.d) { + // Shortish + if (mem > 222.GB) { + extra += ' -p memory,main ' + } else { + extra += ' -p main ' + } + } else { + // Not shortish + if (mem > 222.GB) { + extra += ' -p memory ' + } else { + extra += ' -p long ' + } + } + + if (!mem || mem < 6.GB) { + // Impose minimum memory if request is below + extra += ' --mem=6G ' + } + + return base+extra + } + + return base } singularity { - enabled = true - containerOptions = containerOptionsCreator + enabled = true + runOptions = containerOptionsCreator } process { - // Should we lock these to specific versions? - beforeScript = 'module load PDC singularity' + // Should we lock these to specific versions? 
+ beforeScript = 'module load PDC singularity' - executor = 'slurm' - clusterOptions = { clusterOptionsCreator(task.memory, task.time, task.cpus) } + executor = 'slurm' + clusterOptions = { clusterOptionsCreator(task.memory, task.time, task.cpus) } } env { - // Handle java logging on stdout when discovering duplicated cgroups when - // running in singularity with Lustre mount - JAVA_TOOL_OPTIONS = "-Xlog:disable" + // Handle java logging on stdout when discovering duplicated cgroups when + // running in singularity with Lustre mount + JAVA_TOOL_OPTIONS = "-Xlog:disable" } diff --git a/conf/phoenix.config b/conf/phoenix.config index b4577630f..d658d38e5 100644 --- a/conf/phoenix.config +++ b/conf/phoenix.config @@ -1,23 +1,23 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'University of Adelaide Phoenix HPC cluster profile provided by nf-core/configs' - config_profile_contact = 'Yassine Souilmi / Alexander Peltzer (@yassineS, @apeltzer)' - config_profile_url = 'https://www.adelaide.edu.au/phoenix/' + config_profile_description = 'University of Adelaide Phoenix HPC cluster profile provided by nf-core/configs' + config_profile_contact = 'Yassine Souilmi / Alexander Peltzer (@yassineS, @apeltzer)' + config_profile_url = 'https://www.adelaide.edu.au/phoenix/' } singularity { - enabled = true - envWhitelist='SINGULARITY_BINDPATH' - autoMounts = true + enabled = true + envWhitelist='SINGULARITY_BINDPATH' + autoMounts = true } process { - beforeScript = 'module load Singularity/2.5.2-GCC-5.4.0-2.26' - executor = 'slurm' + beforeScript = 'module load Singularity/2.5.2-GCC-5.4.0-2.26' + executor = 'slurm' } params { - max_memory = 128.GB - max_cpus = 32 - max_time = 48.h + max_memory = 128.GB + max_cpus = 32 + max_time = 48.h } diff --git a/conf/pipeline/ampliseq/binac.config b/conf/pipeline/ampliseq/binac.config index 13629cfbc..f027ca0cd 100644 --- a/conf/pipeline/ampliseq/binac.config +++ b/conf/pipeline/ampliseq/binac.config @@ -1,11 +1,11 @@ // Profile config names for nf-core/configs params { - // Specific nf-core/configs params - config_profile_contact = 'Alexander Peltzer (@apeltzer)' - config_profile_description = 'nf-core/ampliseq BINAC profile provided by nf-core/configs' + // Specific nf-core/configs params + config_profile_contact = 'Alexander Peltzer (@apeltzer)' + config_profile_description = 'nf-core/ampliseq BINAC profile provided by nf-core/configs' } env { - TZ='Europe/Berlin' -} \ No newline at end of file + TZ='Europe/Berlin' +} diff --git a/conf/pipeline/ampliseq/uppmax.config b/conf/pipeline/ampliseq/uppmax.config index 2a8bc3469..862f6ec3a 100644 --- a/conf/pipeline/ampliseq/uppmax.config +++ b/conf/pipeline/ampliseq/uppmax.config @@ -1,20 +1,20 @@ // Profile config names for nf-core/configs params { - // Specific nf-core/configs params - config_profile_contact = 'Daniel Lundin (daniel.lundin@lnu.se)' - config_profile_description = 'nf-core/ampliseq UPPMAX profile provided by nf-core/configs' + // Specific nf-core/configs params + config_profile_contact = 'Daniel Lundin (daniel.lundin@lnu.se)' + config_profile_description = 'nf-core/ampliseq UPPMAX profile provided by nf-core/configs' } process { - withName: classifier_extract_seq { + withName: classifier_extract_seq { clusterOptions = { "-A $params.project -p core -n 1 -t 7-00:00:00 ${params.clusterOptions ?: ''}" } - } + } - withName: classifier_train { + withName: classifier_train { clusterOptions = { "-A $params.project -C fat -p node -N 1 -t 24:00:00 ${params.clusterOptions ?: ''}" 
} - } + } - withName: classifier { + withName: classifier { clusterOptions = { "-A $params.project -C fat -p node -N 1 ${params.clusterOptions ?: ''}" } - } + } } diff --git a/conf/pipeline/demultiplex/aws_tower.config b/conf/pipeline/demultiplex/aws_tower.config index 520487f68..eece6efd9 100644 --- a/conf/pipeline/demultiplex/aws_tower.config +++ b/conf/pipeline/demultiplex/aws_tower.config @@ -1,24 +1,24 @@ // Profile config names for nf-core/configs params { - // Specific nf-core/configs params - config_profile_contact = 'Edmund Miller(@emiller88)' - config_profile_description = 'nf-core/demultiplex AWS Tower profile provided by nf-core/configs' + // Specific nf-core/configs params + config_profile_contact = 'Edmund Miller(@edmundmiller)' + config_profile_description = 'nf-core/demultiplex AWS Tower profile provided by nf-core/configs' } aws { - batch { - maxParallelTransfers = 24 - maxTransferAttempts = 3 - } - client { - maxConnections = 24 - uploadMaxThreads = 24 - maxErrorRetry = 3 - socketTimeout = 3600000 - uploadRetrySleep = 1000 - uploadChunkSize = 32.MB - } + batch { + maxParallelTransfers = 24 + maxTransferAttempts = 3 + } + client { + maxConnections = 24 + uploadMaxThreads = 24 + maxErrorRetry = 3 + socketTimeout = 3600000 + uploadRetrySleep = 1000 + uploadChunkSize = 32.MB + } } process { diff --git a/conf/pipeline/eager/crick.config b/conf/pipeline/eager/crick.config index 88fe7244d..b9cd4d422 100644 --- a/conf/pipeline/eager/crick.config +++ b/conf/pipeline/eager/crick.config @@ -1,52 +1,52 @@ - params { - config_profile_contact = "Christopher Barrington (@ChristopherBarrington)" - config_profile_description = "nf-core/eager Crick profile provided by nf-core/configs" +params { + config_profile_contact = "Christopher Barrington (@ChristopherBarrington)" + config_profile_description = "nf-core/eager Crick profile provided by nf-core/configs" } profiles { - screening { - process { - withName:bwa { - cpus = 12 - memory = '15 GB' - time = '6h' - } + screening { + process { + withName:bwa { + cpus = 12 + memory = '15 GB' + time = '6h' + } + } } - } - production { - process { - withName:adapter_removal { - time = '3d' - } - withName:fastp { - time = '8h' - } - withName:bwa { - cpus = 8 - memory = '56 GB' - time = '3d' - } - withName:samtools_filter { - time = '3d' - } - withName:dedup { - cpus = 6 - memory = '20 GB' - time = '3d' - } - withName:damageprofiler { - memory = '64 GB' - time = '3d' - errorStrategy = { task.exitStatus in [1,143,137,104,134,139] ? 'retry' : 'finish' } - } - } - } - external { - process { - withName:samtools_filter { - cpus = 12 - memory = '72 GB' - } + production { + process { + withName:adapter_removal { + time = '3d' + } + withName:fastp { + time = '8h' + } + withName:bwa { + cpus = 8 + memory = '56 GB' + time = '3d' + } + withName:samtools_filter { + time = '3d' + } + withName:dedup { + cpus = 6 + memory = '20 GB' + time = '3d' + } + withName:damageprofiler { + memory = '64 GB' + time = '3d' + errorStrategy = { task.exitStatus in [1,143,137,104,134,139] ? 
'retry' : 'finish' } + } + } + } + external { + process { + withName:samtools_filter { + cpus = 12 + memory = '72 GB' + } + } } - } } diff --git a/conf/pipeline/eager/eva.config b/conf/pipeline/eager/eva.config index 02ee6e26d..94017baf9 100644 --- a/conf/pipeline/eager/eva.config +++ b/conf/pipeline/eager/eva.config @@ -1,9 +1,9 @@ // Profile config names for nf-core/configs params { - // Specific nf-core/configs params - config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_description = 'nf-core/eager EVA profile provided by nf-core/configs' + // Specific nf-core/configs params + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_description = 'nf-core/eager EVA profile provided by nf-core/configs' } env { @@ -15,583 +15,583 @@ env { // Specific nf-core/eager process configuration process { - maxRetries = 2 - - // Solution for clusterOptions comes from here: https://github.com/nextflow-io/nextflow/issues/332 + personal toMega conversion - clusterOptions = { "-S /bin/bash -V -j y -o output.log -l h_vmem=${task.memory.toGiga()}G" } - - withLabel:'sc_tiny'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 1.GB * task.attempt, 'memory' ) } - time = '365.d' - } - - withLabel:'sc_small'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 4.GB * task.attempt, 'memory' ) } - time = '365.d' - } - - withLabel:'sc_medium'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = '365.d' - } - - withLabel:'mc_small'{ - cpus = { check_max( 2, 'cpus' ) } - memory = { check_max( 4.GB * task.attempt, 'memory' ) } - time = '365.d' - } - - withLabel:'mc_medium' { - cpus = { check_max( 4, 'cpus' ) } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = '365.d' - } - - withLabel:'mc_large'{ - cpus = { check_max( 8, 'cpus' ) } - memory = { check_max( 16.GB * task.attempt, 'memory' ) } - time = '365.d' - } - - withLabel:'mc_huge'{ - cpus = { check_max( 32, 'cpus' ) } - memory = { check_max( 256.GB * task.attempt, 'memory' ) } - time = '365.d' - } - - // Fixes for SGE and Java incompatibility due to Java using more memory than you tell it to use - - withName: makeSeqDict { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: fastqc { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: fastqc_after_clipping { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: adapter_removal { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 
'retry' : 'finish' } - } - - withName: bwa { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga())}G,h=!(bionode01|bionode02|bionode03|bionode04|bionode05|bionode06)" } - } - - withName: bwamem { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga())}G,h=!(bionode01|bionode02|bionode03|bionode04|bionode05|bionode06)" } - } - - withName: circularmapper { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga())}G,h=!(bionode01|bionode02|bionode03|bionode04|bionode05|bionode06)" } - } - - withName: bowtie2 { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga())}G,h=!(bionode01|bionode02|bionode03|bionode04|bionode05|bionode06)" } - } - - withName: samtools_flagstat { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - } - - withName: samtools_flagstat_after_filter { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - } - - withName: dedup { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: markduplicates { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - memory = { check_max( 20.GB * task.attempt, 'memory' ) } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: library_merge { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - } - - withName: seqtype_merge { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - } - - withName: additional_library_merge { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - memory = { check_max( 4.GB * task.attempt, 'memory' ) } - } - - withName: metagenomic_complexity_filter { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - } - - withName: malt { - clusterOptions = { "-S /bin/bash -V -l h_vmem=1000G" } - cpus = { check_max( 32, 'cpus' ) } - memory = { check_max( 955.GB * task.attempt, 'memory' ) } - } - - withName: maltextract { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: multivcfanalyzer { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: mtnucratio { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: vcf2genome { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: qualimap { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : task.exitStatus in [255] ? 'ignore' : 'finish' } - } - - withName: damageprofiler { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 
'retry' : 'finish' } - } - - withName: circularmapper { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: circulargenerator { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: preseq { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 'retry' : 'ignore' } - } - - withName: picard_addorreplacereadgroups { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - } - - withName: genotyping_ug { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName:get_software_versions { - cache = false - clusterOptions = { "-S /bin/bash -V -l h=!(bionode06)" } - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toMega() * 8)}M" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName:multiqc { - clusterOptions = { "-S /bin/bash -V -j y -o output.log -l h_vmem=${task.memory.toGiga() * 2}G" } - } + maxRetries = 2 -} - -profiles { - - medium_data { + // Solution for clusterOptions comes from here: https://github.com/nextflow-io/nextflow/issues/332 + personal toMega conversion + clusterOptions = { "-S /bin/bash -V -j y -o output.log -l h_vmem=${task.memory.toGiga()}G" } - params { - // Specific nf-core/configs params - config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_description = 'nf-core/eager medium-data EVA profile provided by nf-core/configs' + withLabel:'sc_tiny'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 1.GB * task.attempt, 'memory' ) } + time = '365.d' } - executor { - queueSize = 8 + withLabel:'sc_small'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 4.GB * task.attempt, 'memory' ) } + time = '365.d' } - process { - - maxRetries = 2 - - // Solution for clusterOptions comes from here: https://github.com/nextflow-io/nextflow/issues/332 + personal toMega conversion - clusterOptions = { "-S /bin/bash -V -j y -o output.log -l h_vmem=${task.memory.toGiga()}G" } - - withLabel:'sc_tiny'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 1.5.GB * task.attempt, 'memory' ) } - } - - withLabel:'sc_small'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 6.GB * task.attempt, 'memory' ) } - } - - withLabel:'sc_medium'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 12.GB * task.attempt, 'memory' ) } - } - - withLabel:'mc_small'{ - cpus = { check_max( 2, 'cpus' ) } - memory = { check_max( 6.GB * task.attempt, 'memory' ) } - } - - withLabel:'mc_medium' { - cpus = { check_max( 4, 'cpus' ) } - memory = { check_max( 12.GB * task.attempt, 'memory' ) } - } - - withLabel:'mc_large'{ - cpus = { check_max( 8, 'cpus' ) } - memory = { check_max( 24.GB * task.attempt, 'memory' ) } - } - - withLabel:'mc_huge'{ - cpus = { check_max( 32, 'cpus' ) } - memory = { check_max( 256.GB * task.attempt, 'memory' ) } - } - - // Fixes for SGE and Java incompatibility due to (and also some samtools?!) 
using more memory than you tell it to use - - withName: makeSeqDict { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: fastqc { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: fastqc_after_clipping { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } - - withName: adapter_removal { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withLabel:'sc_medium'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = '365.d' + } - withName: samtools_flagstat { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } + withLabel:'mc_small'{ + cpus = { check_max( 2, 'cpus' ) } + memory = { check_max( 4.GB * task.attempt, 'memory' ) } + time = '365.d' + } - withName: samtools_flagstat_after_filter { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } + withLabel:'mc_medium' { + cpus = { check_max( 4, 'cpus' ) } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = '365.d' + } - withName: dedup { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withLabel:'mc_large'{ + cpus = { check_max( 8, 'cpus' ) } + memory = { check_max( 16.GB * task.attempt, 'memory' ) } + time = '365.d' + } - withName: markduplicates { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - memory = { check_max( 32.GB * task.attempt, 'memory' ) } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withLabel:'mc_huge'{ + cpus = { check_max( 32, 'cpus' ) } + memory = { check_max( 256.GB * task.attempt, 'memory' ) } + time = '365.d' + } - withName: library_merge { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } + // Fixes for SGE and Java incompatibility due to Java using more memory than you tell it to use - withName: seqtype_merge { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } + withName: makeSeqDict { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } - withName: additional_library_merge { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - memory = { check_max( 4.GB * task.attempt, 'memory' ) } - } + withName: fastqc { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } - withName: metagenomic_complexity_filter { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } + withName: fastqc_after_clipping { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 
'retry' : 'finish' } + } - withName: malt { - clusterOptions = { "-S /bin/bash -V -l h_vmem=1000G" } - cpus = { check_max( 32, 'cpus' ) } - memory = { check_max( 955.GB * task.attempt, 'memory' ) } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withName: adapter_removal { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } - withName:hostremoval_input_fastq { - memory = { check_max( 32.GB * task.attempt, 'memory' ) } - } + withName: bwa { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga())}G,h=!(bionode01|bionode02|bionode03|bionode04|bionode05|bionode06)" } + } - withName: maltextract { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withName: bwamem { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga())}G,h=!(bionode01|bionode02|bionode03|bionode04|bionode05|bionode06)" } + } - withName: multivcfanalyzer { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withName: circularmapper { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga())}G,h=!(bionode01|bionode02|bionode03|bionode04|bionode05|bionode06)" } + } - withName: mtnucratio { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withName: bowtie2 { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga())}G,h=!(bionode01|bionode02|bionode03|bionode04|bionode05|bionode06)" } + } - withName: vcf2genome { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withName: samtools_flagstat { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + } - withName: qualimap { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : task.exitStatus in [255] ? 'ignore' : 'finish' } - } + withName: samtools_flagstat_after_filter { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + } - withName: damageprofiler { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - memory = { check_max( 16.GB * task.attempt, 'memory' ) } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withName: dedup { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } - withName: circularmapper { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withName: markduplicates { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + memory = { check_max( 20.GB * task.attempt, 'memory' ) } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 
'retry' : 'finish' } + } - withName: circulargenerator { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withName: library_merge { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + } - withName: picard_addorreplacereadgroups { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } + withName: seqtype_merge { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + } - withName: genotyping_ug { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withName: additional_library_merge { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + memory = { check_max( 4.GB * task.attempt, 'memory' ) } + } - withName: preseq { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 'retry' : 'ignore' } - } - } - } + withName: metagenomic_complexity_filter { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + } - big_data { + withName: malt { + clusterOptions = { "-S /bin/bash -V -l h_vmem=1000G" } + cpus = { check_max( 32, 'cpus' ) } + memory = { check_max( 955.GB * task.attempt, 'memory' ) } + } - params { - // Specific nf-core/configs params - config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_description = 'nf-core/eager big-data EVA profile provided by nf-core/configs' + withName: maltextract { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } } - executor { - queueSize = 6 + withName: multivcfanalyzer { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } } - process { + withName: mtnucratio { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } - maxRetries = 2 + withName: vcf2genome { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } - // Solution for clusterOptions comes from here: https://github.com/nextflow-io/nextflow/issues/332 + personal toMega conversion - clusterOptions = { "-S /bin/bash -V -j y -o output.log -l h_vmem=${task.memory.toGiga()}G" } + withName: qualimap { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : task.exitStatus in [255] ? 'ignore' : 'finish' } + } - withLabel:'sc_tiny'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 2.GB * task.attempt, 'memory' ) } - } + withName: damageprofiler { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 
'retry' : 'finish' } + } - withLabel:'sc_small'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - } + withName: circularmapper { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } - withLabel:'sc_medium'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 16.GB * task.attempt, 'memory' ) } - } + withName: circulargenerator { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } - withLabel:'mc_small'{ - cpus = { check_max( 2, 'cpus' ) } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - } + withName: preseq { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 'retry' : 'ignore' } + } - withLabel:'mc_medium' { - cpus = { check_max( 4, 'cpus' ) } - memory = { check_max( 16.GB * task.attempt, 'memory' ) } - } + withName: picard_addorreplacereadgroups { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + } - withLabel:'mc_large'{ - cpus = { check_max( 8, 'cpus' ) } - memory = { check_max( 32.GB * task.attempt, 'memory' ) } - } + withName: genotyping_ug { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 2)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } - withLabel:'mc_huge'{ - cpus = { check_max( 32, 'cpus' ) } - memory = { check_max( 512.GB * task.attempt, 'memory' ) } - } + withName:get_software_versions { + cache = false + clusterOptions = { "-S /bin/bash -V -l h=!(bionode06)" } + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toMega() * 8)}M" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } - // Fixes for SGE and Java incompatibility due to Java using more memory than you tell it to use + withName:multiqc { + clusterOptions = { "-S /bin/bash -V -j y -o output.log -l h_vmem=${task.memory.toGiga() * 2}G" } + } - withName: makeSeqDict { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } +} - withName: fastqc { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } +profiles { - withName: fastqc_after_clipping { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + medium_data { - withName: adapter_removal { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 
'retry' : 'finish' } + params { + // Specific nf-core/configs params + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_description = 'nf-core/eager medium-data EVA profile provided by nf-core/configs' } - withName: samtools_flagstat { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + executor { + queueSize = 8 } - withName: samtools_flagstat_after_filter { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - } + process { - withName: dedup { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + maxRetries = 2 - withName: markduplicates { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - memory = { check_max( 48.GB * task.attempt, 'memory' ) } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + // Solution for clusterOptions comes from here: https://github.com/nextflow-io/nextflow/issues/332 + personal toMega conversion + clusterOptions = { "-S /bin/bash -V -j y -o output.log -l h_vmem=${task.memory.toGiga()}G" } - withName: library_merge { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - memory = { check_max( 6.GB * task.attempt, 'memory' ) } - } + withLabel:'sc_tiny'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 1.5.GB * task.attempt, 'memory' ) } + } - withName: seqtype_merge { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - } + withLabel:'sc_small'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 6.GB * task.attempt, 'memory' ) } + } - withName: additional_library_merge { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - memory = { check_max( 6.GB * task.attempt, 'memory' ) } - } + withLabel:'sc_medium'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 12.GB * task.attempt, 'memory' ) } + } - withName:hostremoval_input_fastq { - memory = { check_max( 32.GB * task.attempt, 'memory' ) } + withLabel:'mc_small'{ + cpus = { check_max( 2, 'cpus' ) } + memory = { check_max( 6.GB * task.attempt, 'memory' ) } + } + + withLabel:'mc_medium' { + cpus = { check_max( 4, 'cpus' ) } + memory = { check_max( 12.GB * task.attempt, 'memory' ) } + } + + withLabel:'mc_large'{ + cpus = { check_max( 8, 'cpus' ) } + memory = { check_max( 24.GB * task.attempt, 'memory' ) } + } + + withLabel:'mc_huge'{ + cpus = { check_max( 32, 'cpus' ) } + memory = { check_max( 256.GB * task.attempt, 'memory' ) } + } + + // Fixes for SGE and Java incompatibility due to (and also some samtools?!) using more memory than you tell it to use + + withName: makeSeqDict { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: fastqc { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: fastqc_after_clipping { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 
'retry' : 'finish' } + } + + withName: adapter_removal { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: samtools_flagstat { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + } + + withName: samtools_flagstat_after_filter { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + } + + withName: dedup { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: markduplicates { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + memory = { check_max( 32.GB * task.attempt, 'memory' ) } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: library_merge { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + } + + withName: seqtype_merge { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + } + + withName: additional_library_merge { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + memory = { check_max( 4.GB * task.attempt, 'memory' ) } + } + + withName: metagenomic_complexity_filter { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + } + + withName: malt { + clusterOptions = { "-S /bin/bash -V -l h_vmem=1000G" } + cpus = { check_max( 32, 'cpus' ) } + memory = { check_max( 955.GB * task.attempt, 'memory' ) } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName:hostremoval_input_fastq { + memory = { check_max( 32.GB * task.attempt, 'memory' ) } + } + + withName: maltextract { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: multivcfanalyzer { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: mtnucratio { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: vcf2genome { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: qualimap { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : task.exitStatus in [255] ? 'ignore' : 'finish' } + } + + withName: damageprofiler { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + memory = { check_max( 16.GB * task.attempt, 'memory' ) } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: circularmapper { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 
'retry' : 'finish' } + } + + withName: circulargenerator { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: picard_addorreplacereadgroups { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + } + + withName: genotyping_ug { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: preseq { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } + errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 'retry' : 'ignore' } + } } + } - withName: metagenomic_complexity_filter { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - } + big_data { - withName: malt { - clusterOptions = { "-S /bin/bash -V -l h_vmem=1000G" } - cpus = { check_max( 32, 'cpus' ) } - memory = { check_max( 955.GB * task.attempt, 'memory' ) } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + params { + // Specific nf-core/configs params + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_description = 'nf-core/eager big-data EVA profile provided by nf-core/configs' } - withName: maltextract { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + executor { + queueSize = 6 } - withName: multivcfanalyzer { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + process { - withName: mtnucratio { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + maxRetries = 2 - withName: vcf2genome { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + // Solution for clusterOptions comes from here: https://github.com/nextflow-io/nextflow/issues/332 + personal toMega conversion + clusterOptions = { "-S /bin/bash -V -j y -o output.log -l h_vmem=${task.memory.toGiga()}G" } - withName: qualimap { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : task.exitStatus in [255] ? 'ignore' : 'finish' } - } + withLabel:'sc_tiny'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 2.GB * task.attempt, 'memory' ) } + } - withName: damageprofiler { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - memory = { check_max( 32.GB * task.attempt, 'memory' ) } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } - } + withLabel:'sc_small'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + } - withName: circularmapper { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 
'retry' : 'finish' } - } + withLabel:'sc_medium'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 16.GB * task.attempt, 'memory' ) } + } - withName: circulargenerator { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + withLabel:'mc_small'{ + cpus = { check_max( 2, 'cpus' ) } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + } + + withLabel:'mc_medium' { + cpus = { check_max( 4, 'cpus' ) } + memory = { check_max( 16.GB * task.attempt, 'memory' ) } + } + + withLabel:'mc_large'{ + cpus = { check_max( 8, 'cpus' ) } + memory = { check_max( 32.GB * task.attempt, 'memory' ) } + } + + withLabel:'mc_huge'{ + cpus = { check_max( 32, 'cpus' ) } + memory = { check_max( 512.GB * task.attempt, 'memory' ) } + } + + // Fixes for SGE and Java incompatibility due to Java using more memory than you tell it to use + + withName: makeSeqDict { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: fastqc { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: fastqc_after_clipping { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: adapter_removal { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: samtools_flagstat { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + } + + withName: samtools_flagstat_after_filter { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + } + + withName: dedup { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: markduplicates { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + memory = { check_max( 48.GB * task.attempt, 'memory' ) } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: library_merge { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + memory = { check_max( 6.GB * task.attempt, 'memory' ) } + } + + withName: seqtype_merge { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + } + + withName: additional_library_merge { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + memory = { check_max( 6.GB * task.attempt, 'memory' ) } + } + + withName:hostremoval_input_fastq { + memory = { check_max( 32.GB * task.attempt, 'memory' ) } + } + + withName: metagenomic_complexity_filter { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + } + + withName: malt { + clusterOptions = { "-S /bin/bash -V -l h_vmem=1000G" } + cpus = { check_max( 32, 'cpus' ) } + memory = { check_max( 955.GB * task.attempt, 'memory' ) } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 
'retry' : 'finish' } + } + + withName: maltextract { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: multivcfanalyzer { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: mtnucratio { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: vcf2genome { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: qualimap { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : task.exitStatus in [255] ? 'ignore' : 'finish' } + } + + withName: damageprofiler { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + memory = { check_max( 32.GB * task.attempt, 'memory' ) } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: circularmapper { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: circulargenerator { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: picard_addorreplacereadgroups { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + } + + withName: genotyping_ug { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + + withName: preseq { + clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 'retry' : 'ignore' } + } } + } - withName: picard_addorreplacereadgroups { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } + pathogen_loose { + params { + config_profile_description = 'Pathogen (loose) MPI-EVA profile, provided by nf-core/configs.' + bwaalnn = 0.01 + bwaalnl = 16 } - - withName: genotyping_ug { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [1,143,137,104,134,139,140] ? 'retry' : 'finish' } + } + pathogen_strict { + params { + config_profile_description = 'Pathogen (strict) MPI-EVA SDAG profile, provided by nf-core/configs.' + bwaalnn = 0.1 + bwaalnl = 32 } - - withName: preseq { - clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 6)}G" } - errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 'retry' : 'ignore' } + } + human { + params { + config_profile_description = 'Human MPI-EVA SDAG profile, provided by nf-core/configs.' + bwaalnn = 0.01 + bwaalnl = 16500 } - } - } - - pathogen_loose { - params { - config_profile_description = 'Pathogen (loose) MPI-EVA profile, provided by nf-core/configs.' 
- bwaalnn = 0.01 - bwaalnl = 16 - } - } - pathogen_strict { - params { - config_profile_description = 'Pathogen (strict) MPI-EVA SDAG profile, provided by nf-core/configs.' - bwaalnn = 0.1 - bwaalnl = 32 - } - } - human { - params { - config_profile_description = 'Human MPI-EVA SDAG profile, provided by nf-core/configs.' - bwaalnn = 0.01 - bwaalnl = 16500 - } - } + } } diff --git a/conf/pipeline/eager/maestro.config b/conf/pipeline/eager/maestro.config index 4a6a18506..fb1be0bdb 100644 --- a/conf/pipeline/eager/maestro.config +++ b/conf/pipeline/eager/maestro.config @@ -1,116 +1,116 @@ /* * ------------------------------------------------- - * Nextflow config file for running nf-core/eager on whole genome data or mitogenomes + * Nextflow config file for running nf-core/eager on whole genome data or mitogenomes * ------------------------------------------------- * nextflow run nf-core/eager -profile maestro,<qos>,maestro,<profile> (where <qos> is long or normal and <profile> is nuclear, mitocondrial or unlimitedtime) */ params { - config_profile_name = 'nf-core/eager nuclear/mitocondrial - human profiles' + config_profile_name = 'nf-core/eager nuclear/mitocondrial - human profiles' - config_profile_description = "Simple profiles for assessing the computational resources needed for human nuclear DNA and human mitogenome processing. unlimitedtime is also available." + config_profile_description = "Simple profiles for assessing the computational resources needed for human nuclear DNA and human mitogenome processing. unlimitedtime is also available." } profiles { - nuclear { - process { - errorStrategy = 'retry' - maxRetries = 2 + nuclear { + process { + errorStrategy = 'retry' + maxRetries = 2 - withName:'makeBWAIndex'{ - cpus = { check_max( 8 * task.attempt, 'cpus' ) } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = { check_max( 12.h * task.attempt, 'time' ) } - } - withName:'adapter_removal'{ - cpus = { check_max( 8 * task.attempt, 'cpus' ) } - memory = { check_max( 16.GB * task.attempt, 'memory' ) } - time = { check_max( 12.h * task.attempt, 'time' ) } - } - withName:'bwa'{ - cpus = { check_max( 40 * task.attempt, 'cpus' ) } - memory = { check_max( 40.GB * task.attempt, 'memory' ) } - time = 24.h - cache = 'deep' - } - withName:'markduplicates'{ - errorStrategy = { task.exitStatus in [143,137,104,134,139] ?
'retry' : 'finish' } - cpus = { check_max( 16 * task.attempt, 'cpus' ) } - memory = { check_max( 16.GB * task.attempt, 'memory' ) } - time = { check_max( 12.h * task.attempt, 'time' ) } - } - withName:'damageprofiler'{ - cpus = 1 - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = { check_max( 6.h * task.attempt, 'time' ) } - } - withName:'fastp'{ - cpus = 8 - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = { check_max( 6.h * task.attempt, 'time' ) } - } - withName:'fastqc'{ - cpus = 2 - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = { check_max( 6.h * task.attempt, 'time' ) } - } - } - } + withName:'makeBWAIndex'{ + cpus = { check_max( 8 * task.attempt, 'cpus' ) } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = { check_max( 12.h * task.attempt, 'time' ) } + } + withName:'adapter_removal'{ + cpus = { check_max( 8 * task.attempt, 'cpus' ) } + memory = { check_max( 16.GB * task.attempt, 'memory' ) } + time = { check_max( 12.h * task.attempt, 'time' ) } + } + withName:'bwa'{ + cpus = { check_max( 40 * task.attempt, 'cpus' ) } + memory = { check_max( 40.GB * task.attempt, 'memory' ) } + time = 24.h + cache = 'deep' + } + withName:'markduplicates'{ + errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 'retry' : 'finish' } + cpus = { check_max( 16 * task.attempt, 'cpus' ) } + memory = { check_max( 16.GB * task.attempt, 'memory' ) } + time = { check_max( 12.h * task.attempt, 'time' ) } + } + withName:'damageprofiler'{ + cpus = 1 + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = { check_max( 6.h * task.attempt, 'time' ) } + } + withName:'fastp'{ + cpus = 8 + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = { check_max( 6.h * task.attempt, 'time' ) } + } + withName:'fastqc'{ + cpus = 2 + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = { check_max( 6.h * task.attempt, 'time' ) } + } + } + } - mitocondrial { - process { - errorStrategy = 'retry' - maxRetries = 2 + mitocondrial { + process { + errorStrategy = 'retry' + maxRetries = 2 - withName:'makeBWAIndex'{ - cpus = { check_max( 8 * task.attempt, 'cpus' ) } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = { check_max( 12.h * task.attempt, 'time' ) } - } - withName:'adapter_removal'{ - cpus = { check_max( 8 * task.attempt, 'cpus' ) } - memory = { check_max( 16.GB * task.attempt, 'memory' ) } - time = { check_max( 12.h * task.attempt, 'time' ) } - } - withName:'bwa'{ - cpus = { check_max( 5 * task.attempt, 'cpus' ) } - memory = { check_max( 5.GB * task.attempt, 'memory' ) } - time = 24.h - } - withName:'markduplicates'{ - errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 
'retry' : 'finish' } - cpus = { check_max( 5 * task.attempt, 'cpus' ) } - memory = { check_max( 5.GB * task.attempt, 'memory' ) } - time = { check_max( 6.h * task.attempt, 'time' ) } - } - withName:'damageprofiler'{ - cpus = 1 - memory = { check_max( 5.GB * task.attempt, 'memory' ) } - time = { check_max( 3.h * task.attempt, 'time' ) } - } - withName:'fastp'{ - cpus = 8 - memory = { check_max( 5.GB * task.attempt, 'memory' ) } - time = { check_max( 3.h * task.attempt, 'time' ) } - } - withName:'fastqc'{ - cpus = 2 - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = { check_max( 6.h * task.attempt, 'time' ) } - } - } - } - unlimitedtime { - process { - errorStrategy = 'finish' + withName:'makeBWAIndex'{ + cpus = { check_max( 8 * task.attempt, 'cpus' ) } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = { check_max( 12.h * task.attempt, 'time' ) } + } + withName:'adapter_removal'{ + cpus = { check_max( 8 * task.attempt, 'cpus' ) } + memory = { check_max( 16.GB * task.attempt, 'memory' ) } + time = { check_max( 12.h * task.attempt, 'time' ) } + } + withName:'bwa'{ + cpus = { check_max( 5 * task.attempt, 'cpus' ) } + memory = { check_max( 5.GB * task.attempt, 'memory' ) } + time = 24.h + } + withName:'markduplicates'{ + errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 'retry' : 'finish' } + cpus = { check_max( 5 * task.attempt, 'cpus' ) } + memory = { check_max( 5.GB * task.attempt, 'memory' ) } + time = { check_max( 6.h * task.attempt, 'time' ) } + } + withName:'damageprofiler'{ + cpus = 1 + memory = { check_max( 5.GB * task.attempt, 'memory' ) } + time = { check_max( 3.h * task.attempt, 'time' ) } + } + withName:'fastp'{ + cpus = 8 + memory = { check_max( 5.GB * task.attempt, 'memory' ) } + time = { check_max( 3.h * task.attempt, 'time' ) } + } + withName:'fastqc'{ + cpus = 2 + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = { check_max( 6.h * task.attempt, 'time' ) } + } + } + } + unlimitedtime { + process { + errorStrategy = 'finish' - cpus = 5 - memory = 200.GB - time = 8760.h + cpus = 5 + memory = 200.GB + time = 8760.h + } } -} diff --git a/conf/pipeline/eager/mpcdf.config b/conf/pipeline/eager/mpcdf.config index 3b979de25..aec5c4337 100644 --- a/conf/pipeline/eager/mpcdf.config +++ b/conf/pipeline/eager/mpcdf.config @@ -1,121 +1,121 @@ // Profile config names for nf-core/configs profile { - cobra { - params { - // Specific nf-core/configs params - config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_description = 'nf-core/eager MPCDF cobra profile provided by nf-core/configs' - } - process { - - withName: malt { - maxRetries = 1 - memory = 725.GB - cpus = 40 - time = 24.h - } - - withLabel:'sc_tiny'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 1.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'sc_small'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 4.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'sc_medium'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'mc_small'{ - cpus = { check_max( 2 * task.attempt, 'cpus' ) } - memory = { check_max( 4.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'mc_medium' { - cpus = { check_max( 4 * task.attempt, 'cpus' ) } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'mc_large'{ - cpus = { check_max( 8 * task.attempt, 'cpus' ) } - memory = { check_max( 16.GB * task.attempt, 
'memory' ) } - time = 24.h - } - - withLabel:'mc_huge'{ - cpus = { check_max( 32 * task.attempt, 'cpus' ) } - memory = { check_max( 256.GB * task.attempt, 'memory' ) } - time = 24.h - } + cobra { + params { + // Specific nf-core/configs params + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_description = 'nf-core/eager MPCDF cobra profile provided by nf-core/configs' + } + process { + + withName: malt { + maxRetries = 1 + memory = 725.GB + cpus = 40 + time = 24.h + } + + withLabel:'sc_tiny'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 1.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'sc_small'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 4.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'sc_medium'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'mc_small'{ + cpus = { check_max( 2 * task.attempt, 'cpus' ) } + memory = { check_max( 4.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'mc_medium' { + cpus = { check_max( 4 * task.attempt, 'cpus' ) } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'mc_large'{ + cpus = { check_max( 8 * task.attempt, 'cpus' ) } + memory = { check_max( 16.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'mc_huge'{ + cpus = { check_max( 32 * task.attempt, 'cpus' ) } + memory = { check_max( 256.GB * task.attempt, 'memory' ) } + time = 24.h + } + } } - } - raven { - // Specific nf-core/eager process configuration - params { - // Specific nf-core/configs params - config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_description = 'nf-core/eager MPCDF raven profile provided by nf-core/configs' - } - process { - - withName: malt { - maxRetries = 1 - memory = 2000000.MB - cpus = 72 - time = 24.h - } - - withLabel:'sc_tiny'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 1.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'sc_small'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 4.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'sc_medium'{ - cpus = { check_max( 1, 'cpus' ) } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'mc_small'{ - cpus = { check_max( 2 * task.attempt, 'cpus' ) } - memory = { check_max( 4.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'mc_medium' { - cpus = { check_max( 4 * task.attempt, 'cpus' ) } - memory = { check_max( 8.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'mc_large'{ - cpus = { check_max( 8 * task.attempt, 'cpus' ) } - memory = { check_max( 16.GB * task.attempt, 'memory' ) } - time = 24.h - } - - withLabel:'mc_huge'{ - cpus = { check_max( 72, 'cpus' ) } - memory = { check_max( 240.GB * task.attempt, 'memory' ) } - time = 24.h - } + raven { + // Specific nf-core/eager process configuration + params { + // Specific nf-core/configs params + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_description = 'nf-core/eager MPCDF raven profile provided by nf-core/configs' + } + process { + + withName: malt { + maxRetries = 1 + memory = 2000000.MB + cpus = 72 + time = 24.h + } + + withLabel:'sc_tiny'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 1.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'sc_small'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 4.GB * task.attempt, 'memory' ) } + 
time = 24.h + } + + withLabel:'sc_medium'{ + cpus = { check_max( 1, 'cpus' ) } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'mc_small'{ + cpus = { check_max( 2 * task.attempt, 'cpus' ) } + memory = { check_max( 4.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'mc_medium' { + cpus = { check_max( 4 * task.attempt, 'cpus' ) } + memory = { check_max( 8.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'mc_large'{ + cpus = { check_max( 8 * task.attempt, 'cpus' ) } + memory = { check_max( 16.GB * task.attempt, 'memory' ) } + time = 24.h + } + + withLabel:'mc_huge'{ + cpus = { check_max( 72, 'cpus' ) } + memory = { check_max( 240.GB * task.attempt, 'memory' ) } + time = 24.h + } + } } - } } diff --git a/conf/pipeline/funcscan/hki.config b/conf/pipeline/funcscan/hki.config index 37fa3a316..b9b4304e6 100644 --- a/conf/pipeline/funcscan/hki.config +++ b/conf/pipeline/funcscan/hki.config @@ -1,5 +1,5 @@ params { - config_profile_description = 'nf-core/funcscan profile for HKI clusters provided by nf-core/configs.' - config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_url = 'https://leibniz-hki.de' + config_profile_description = 'nf-core/funcscan profile for HKI clusters provided by nf-core/configs.' + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_url = 'https://leibniz-hki.de' } diff --git a/conf/pipeline/mag/eva.config b/conf/pipeline/mag/eva.config index 9c80812ea..7922669b9 100644 --- a/conf/pipeline/mag/eva.config +++ b/conf/pipeline/mag/eva.config @@ -1,7 +1,7 @@ params { - // Specific nf-core/configs params - config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_description = 'nf-core/mag EVA profile provided by nf-core/configs' + // Specific nf-core/configs params + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_description = 'nf-core/mag EVA profile provided by nf-core/configs' } env { diff --git a/conf/pipeline/methylseq/ku_sund_dangpu.config b/conf/pipeline/methylseq/ku_sund_dangpu.config new file mode 100644 index 000000000..fa1bca494 --- /dev/null +++ b/conf/pipeline/methylseq/ku_sund_dangpu.config @@ -0,0 +1,16 @@ +params { + config_profile_contact = 'Adrija Kalvisa (@adrijak)' + config_profile_description = 'nf-core/methylseq ku_sund_dangpu profile provided by nf-core/configs' + + // methylseq usually runs extremely long hours, use 2x the normal max_time allowance for this pipeline + max_time = 144.h +} + +process { + withName: 'NFCORE_METHYLSEQ:METHYLSEQ:PREPARE_GENOME:BISMARK_GENOMEPREPARATION' { + stageInMode = 'copy' + } + withName: 'NFCORE_METHYLSEQ:METHYLSEQ:BISMARK:BISMARK_ALIGN' { + multicore = 1 + } +} diff --git a/conf/pipeline/rnafusion/munin.config b/conf/pipeline/rnafusion/munin.config index b18f6ad61..f71183a69 100644 --- a/conf/pipeline/rnafusion/munin.config +++ b/conf/pipeline/rnafusion/munin.config @@ -1,10 +1,10 @@ // rnafusion/munin specific profile config params { - max_cpus = 24 - max_memory = 256.GB - max_time = 72.h + max_cpus = 24 + max_memory = 256.GB + max_time = 72.h - // Paths - genomes_base = '/data1/references/rnafusion/dev/' + // Paths + genomes_base = '/data1/references/rnafusion/dev/' } diff --git a/conf/pipeline/rnaseq/azurebatch_pools_Edv4.config b/conf/pipeline/rnaseq/azurebatch_pools_Edv4.config index 183165db3..de9b4b60c 100644 --- a/conf/pipeline/rnaseq/azurebatch_pools_Edv4.config +++ b/conf/pipeline/rnaseq/azurebatch_pools_Edv4.config @@ -2,13 +2,13 @@ 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ nf-core/rnaseq Nextflow config file for Azure Batch pools ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Defines several Azure Batch pools with virtual machines from Edv4-series, + Defines several Azure Batch pools with virtual machines from Edv4-series, assigns pools to labels according to the requirements defined in base.config. You might need to adjust vmCount and maxVmCount depending on your Batch account quotas. Use as follows: nextflow run nf-core/rnaseq -profile azurebatch - --input 'az://' --outdir 'az://' + --input 'az://' --outdir 'az://' -w 'az://' [] ---------------------------------------------------------------------------------------- */ diff --git a/conf/pipeline/rnaseq/ku_sund_dangpu.config b/conf/pipeline/rnaseq/ku_sund_dangpu.config new file mode 100644 index 000000000..703c56759 --- /dev/null +++ b/conf/pipeline/rnaseq/ku_sund_dangpu.config @@ -0,0 +1,7 @@ +process { + // Use more memory with processes labeled as 'process_high' to enable sufficient memory access to STAR_GENOMEGENERATE + // and other memory-intensive processes + withLabel: 'process_high' { + memory = 128.GB + } +} diff --git a/conf/pipeline/rnaseq/mpcdf.config b/conf/pipeline/rnaseq/mpcdf.config index f26de1fd3..762ce628e 100644 --- a/conf/pipeline/rnaseq/mpcdf.config +++ b/conf/pipeline/rnaseq/mpcdf.config @@ -3,9 +3,9 @@ profiles { cobra { params { - // Specific nf-core/configs params - config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_description = 'nf-core/rnaseq MPCDF cobra profile provided by nf-core/configs' + // Specific nf-core/configs params + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_description = 'nf-core/rnaseq MPCDF cobra profile provided by nf-core/configs' } process { cpus = { check_max( 1 * task.attempt, 'cpus' ) } diff --git a/conf/pipeline/rnaseq/utd_sysbio.config b/conf/pipeline/rnaseq/utd_sysbio.config index 0c9dd7d9c..94891ae4d 100644 --- a/conf/pipeline/rnaseq/utd_sysbio.config +++ b/conf/pipeline/rnaseq/utd_sysbio.config @@ -1,19 +1,19 @@ params { - config_profile_description = 'University of Texas at Dallas HPC cluster profile provided by nf-core/configs' - config_profile_contact = 'Edmund Miller(@emiller88)' - config_profile_url = 'http://docs.oithpc.utdallas.edu/' + config_profile_description = 'University of Texas at Dallas HPC cluster profile provided by nf-core/configs' + config_profile_contact = 'Edmund Miller(@edmundmiller)' + config_profile_url = 'http://docs.oithpc.utdallas.edu/' } process { - withName : "STAR_ALIGN" { + withName : "STAR_ALIGN" { memory = 36.GB - } + } - withLabel:process_high { + withLabel:process_high { cpus = { check_max( 16 * task.attempt, 'cpus' ) } memory = { check_max( 60.GB * task.attempt, 'memory' ) } time = { check_max( 16.h * task.attempt, 'time' ) } - } + } } diff --git a/conf/pipeline/rnavar/munin.config b/conf/pipeline/rnavar/munin.config index c5737819e..881369365 100644 --- a/conf/pipeline/rnavar/munin.config +++ b/conf/pipeline/rnavar/munin.config @@ -1,44 +1,44 @@ // rnavar/munin specific profile config params { - // Specific nf-core/configs params - config_profile_contact = 'Praveen Raj (@praveenraj2018)' - config_profile_description = 'nf-core/rnavar MUNIN profile provided by nf-core/configs' - config_profile_url = 'https://ki.se/forskning/barntumorbanken' - - // Specific nf-core/rnavar params - - igenomes_ignore = true - - // Genome references - 
genome = 'GRCh38' - fasta = '/data1/references/CTAT_GenomeLib_v37_Mar012021/GRCh38_gencode_v37_CTAT_lib_Mar012021.plug-n-play/ctat_genome_lib_build_dir/ref_genome.fa' - fasta_fai = '/data1/references/CTAT_GenomeLib_v37_Mar012021/GRCh38_gencode_v37_CTAT_lib_Mar012021.plug-n-play/ctat_genome_lib_build_dir/ref_genome.fa.fai' - gtf = '/data1/references/CTAT_GenomeLib_v37_Mar012021/GRCh38_gencode_v37_CTAT_lib_Mar012021.plug-n-play/ctat_genome_lib_build_dir/ref_annot.gtf' - gene_bed = '/data1/references/CTAT_GenomeLib_v37_Mar012021/GRCh38_gencode_v37_CTAT_lib_Mar012021.plug-n-play/ctat_genome_lib_build_dir/ref_annot.bed' - - // Known genome resources - dbsnp = '/data1/references/annotations/GATK_bundle/dbsnp_146.hg38.vcf.gz' - dbsnp_tbi = '/data1/references/annotations/GATK_bundle/dbsnp_146.hg38.vcf.gz.tbi' - known_indels = '/data1/references/annotations/GATK_bundle/Mills_and_1000G_gold_standard.indels.hg38.vcf.gz' - known_indels_tbi = '/data1/references/annotations/GATK_bundle/Mills_and_1000G_gold_standard.indels.hg38.vcf.gz.tbi' - - // STAR index - star_index = '/data1/references/CTAT_GenomeLib_v37_Mar012021/GRCh38_gencode_v37_CTAT_lib_Mar012021.plug-n-play/ctat_genome_lib_build_dir/STAR.2.7.9a_2x151bp/' - read_length = 151 - - // Annotation settings - annotation_cache = true - cadd_cache = true - cadd_indels = '/data1/cache/CADD/v1.4/InDels.tsv.gz' - cadd_indels_tbi = '/data1/cache/CADD/v1.4/InDels.tsv.gz.tbi' - cadd_wg_snvs = '/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz' - cadd_wg_snvs_tbi = '/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz.tbi' - snpeff_cache = '/data1/cache/snpEff/' - snpeff_db = 'GRCh38.99' - vep_cache = '/data1/cache/VEP/' - vep_genome = 'GRCh38' - vep_species = 'homo_sapiens' - vep_cache_version = '99' + // Specific nf-core/configs params + config_profile_contact = 'Praveen Raj (@praveenraj2018)' + config_profile_description = 'nf-core/rnavar MUNIN profile provided by nf-core/configs' + config_profile_url = 'https://ki.se/forskning/barntumorbanken' + + // Specific nf-core/rnavar params + + igenomes_ignore = true + + // Genome references + genome = 'GRCh38' + fasta = '/data1/references/CTAT_GenomeLib_v37_Mar012021/GRCh38_gencode_v37_CTAT_lib_Mar012021.plug-n-play/ctat_genome_lib_build_dir/ref_genome.fa' + fasta_fai = '/data1/references/CTAT_GenomeLib_v37_Mar012021/GRCh38_gencode_v37_CTAT_lib_Mar012021.plug-n-play/ctat_genome_lib_build_dir/ref_genome.fa.fai' + gtf = '/data1/references/CTAT_GenomeLib_v37_Mar012021/GRCh38_gencode_v37_CTAT_lib_Mar012021.plug-n-play/ctat_genome_lib_build_dir/ref_annot.gtf' + gene_bed = '/data1/references/CTAT_GenomeLib_v37_Mar012021/GRCh38_gencode_v37_CTAT_lib_Mar012021.plug-n-play/ctat_genome_lib_build_dir/ref_annot.bed' + + // Known genome resources + dbsnp = '/data1/references/annotations/GATK_bundle/dbsnp_146.hg38.vcf.gz' + dbsnp_tbi = '/data1/references/annotations/GATK_bundle/dbsnp_146.hg38.vcf.gz.tbi' + known_indels = '/data1/references/annotations/GATK_bundle/Mills_and_1000G_gold_standard.indels.hg38.vcf.gz' + known_indels_tbi = '/data1/references/annotations/GATK_bundle/Mills_and_1000G_gold_standard.indels.hg38.vcf.gz.tbi' + + // STAR index + star_index = '/data1/references/CTAT_GenomeLib_v37_Mar012021/GRCh38_gencode_v37_CTAT_lib_Mar012021.plug-n-play/ctat_genome_lib_build_dir/STAR.2.7.9a_2x151bp/' + read_length = 151 + + // Annotation settings + annotation_cache = true + cadd_cache = true + cadd_indels = '/data1/cache/CADD/v1.4/InDels.tsv.gz' + cadd_indels_tbi = '/data1/cache/CADD/v1.4/InDels.tsv.gz.tbi' + cadd_wg_snvs = 
'/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz' + cadd_wg_snvs_tbi = '/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz.tbi' + snpeff_cache = '/data1/cache/snpEff/' + snpeff_db = 'GRCh38.99' + vep_cache = '/data1/cache/VEP/' + vep_genome = 'GRCh38' + vep_species = 'homo_sapiens' + vep_cache_version = '99' } diff --git a/conf/pipeline/sarek/cfc.config b/conf/pipeline/sarek/cfc.config index 792b40163..e43b81c2a 100644 --- a/conf/pipeline/sarek/cfc.config +++ b/conf/pipeline/sarek/cfc.config @@ -1,41 +1,41 @@ // Profile config names for nf-core/configs params { - // Specific nf-core/configs params - config_profile_contact = 'Friederike Hanssen (@FriederikeHanssen)' - config_profile_description = 'nf-core/sarek CFC profile provided by nf-core/configs' + // Specific nf-core/configs params + config_profile_contact = 'Friederike Hanssen (@FriederikeHanssen)' + config_profile_description = 'nf-core/sarek CFC profile provided by nf-core/configs' } // Specific nf-core/sarek process configuration process { - withName:'StrelkaSingle|Strelka|StrelkaBP|MantaSingle|Manta' { - cpus = { check_resource( 20 * task.attempt) } - memory = { check_resource( 59.GB * task.attempt) } - } - withName:'MSIsensor_scan|MSIsensor_msi' { - memory = { check_resource( 55.GB * task.attempt ) } + withName:'StrelkaSingle|Strelka|StrelkaBP|MantaSingle|Manta' { + cpus = { check_resource( 20 * task.attempt) } + memory = { check_resource( 59.GB * task.attempt) } + } + withName:'MSIsensor_scan|MSIsensor_msi' { + memory = { check_resource( 55.GB * task.attempt ) } - } - withName:BamQC { - memory = { check_resource( 372.GB * task.attempt) } - } + } + withName:BamQC { + memory = { check_resource( 372.GB * task.attempt) } + } - withName:MapReads{ - cpus = { check_resource( 20 * task.attempt ) } - memory = { check_resource( 59.GB * task.attempt) } - } + withName:MapReads{ + cpus = { check_resource( 20 * task.attempt ) } + memory = { check_resource( 59.GB * task.attempt) } + } } def check_resource(obj) { try { - if (obj.getClass() == nextflow.util.MemoryUnit && obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1) - return params.max_memory as nextflow.util.MemoryUnit - else if (obj.getClass() == nextflow.util.Duration && obj.compareTo(params.max_time as nextflow.util.Duration) == 1) - return params.max_time as nextflow.util.Duration - else if (obj.getClass() == java.lang.Integer) - return Math.min(obj, params.max_cpus as int) - else - return obj + if (obj.getClass() == nextflow.util.MemoryUnit && obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1) + return params.max_memory as nextflow.util.MemoryUnit + else if (obj.getClass() == nextflow.util.Duration && obj.compareTo(params.max_time as nextflow.util.Duration) == 1) + return params.max_time as nextflow.util.Duration + else if (obj.getClass() == java.lang.Integer) + return Math.min(obj, params.max_cpus as int) + else + return obj } catch (all) { println " ### ERROR ### Max params max_memory:'${params.max_memory}', max_time:'${params.max_time}' or max_cpus:'${params.max_cpus}' is not valid! 
Using default value: $obj" } diff --git a/conf/pipeline/sarek/eddie.config b/conf/pipeline/sarek/eddie.config index 7fd6bac1d..25baa1a16 100644 --- a/conf/pipeline/sarek/eddie.config +++ b/conf/pipeline/sarek/eddie.config @@ -1,46 +1,46 @@ process { - withLabel: "process_single" { + withLabel: "process_single" { cpus = 1 memory = 4.GB - } - withName: ".*GATK4.*" { + } + withName: ".*GATK4.*" { memory = 16.GB clusterOptions = {"-l h_vmem=${(task.memory + 4.GB).bytes/task.cpus}"} - } - withName: "GETPILEUPSUMMARIES.*" { + } + withName: "GETPILEUPSUMMARIES.*" { memory = 16.GB clusterOptions = {"-l h_vmem=${(task.memory + 4.GB).bytes/task.cpus}"} - } - withName: "UNZIP.*|UNTAR.*|TABIX.*|BUILD_INTERVALS|CREATE_INTERVALS_BED|CUSTOM_DUMPSOFTWAREVERSIONS|VCFTOOLS|BCFTOOLS.*|SAMTOOLS_INDEX|MOSDEPTH" { + } + withName: "UNZIP.*|UNTAR.*|TABIX.*|BUILD_INTERVALS|CREATE_INTERVALS_BED|CUSTOM_DUMPSOFTWAREVERSIONS|VCFTOOLS|BCFTOOLS.*|SAMTOOLS_INDEX|MOSDEPTH" { cpus = 1 memory = 4.GB - } - withName: "BCFTOOLS_SORT" { + } + withName: "BCFTOOLS_SORT" { cpus=4 memory=24.GB - } - withName: "STRELKA_SINGLE" { + } + withName: "STRELKA_SINGLE" { memory=12.GB - } - withName: "FREEBAYES" { + } + withName: "FREEBAYES" { cpus = 1 memory = 16.GB time = 24.h - } - withName: "MULTIQC" { + } + withName: "MULTIQC" { cpus = 1 memory = 12.GB - } - withName: "BCFTOOLS_SORT" { + } + withName: "BCFTOOLS_SORT" { cpus = 1 memory = 8.GB - } - withName: "MUTECT2_PAIRED" { + } + withName: "MUTECT2_PAIRED" { cpus = 1 memory = 16.GB time = 24.h - } - withName: "SAMTOOLS_MPILEUP" { + } + withName: "SAMTOOLS_MPILEUP" { time = 24.h - } + } } diff --git a/conf/pipeline/sarek/eva.config b/conf/pipeline/sarek/eva.config index fd29ae025..661c69ace 100644 --- a/conf/pipeline/sarek/eva.config +++ b/conf/pipeline/sarek/eva.config @@ -2,9 +2,9 @@ // Profile config names for nf-core/configs params { - // Specific nf-core/configs params - config_profile_contact = 'James A. Fellows Yates (@jfy133)' - config_profile_description = 'nf-core/sarek EVA profile provided by nf-core/configs' + // Specific nf-core/configs params + config_profile_contact = 'James A. 
Fellows Yates (@jfy133)' + config_profile_description = 'nf-core/sarek EVA profile provided by nf-core/configs' } env { @@ -14,79 +14,79 @@ env { } process { - withName:GATK4_APPLYBQSR { + withName:GATK4_APPLYBQSR { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_APPLYBQSR_SPARK { + } + withName:GATK4_APPLYBQSR_SPARK { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_APPLYVQSR { + } + withName:GATK4_APPLYVQSR { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_BASERECALIBRATOR { + } + withName:GATK4_BASERECALIBRATOR { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_BASERECALIBRATOR_SPARK { + } + withName:GATK4_BASERECALIBRATOR_SPARK { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_CALCULATECONTAMINATION { + } + withName:GATK4_CALCULATECONTAMINATION { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_CNNSCOREVARIANTS { + } + withName:GATK4_CNNSCOREVARIANTS { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_CREATESEQUENCEDICTIONARY { + } + withName:GATK4_CREATESEQUENCEDICTIONARY { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_ESTIMATELIBRARYCOMPLEXITY { + } + withName:GATK4_ESTIMATELIBRARYCOMPLEXITY { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_FILTERMUTECTCALLS { + } + withName:GATK4_FILTERMUTECTCALLS { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_FILTERVARIANTTRANCHES { + } + withName:GATK4_FILTERVARIANTTRANCHES { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_GATHERBQSRREPORTS { + } + withName:GATK4_GATHERBQSRREPORTS { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_GATHERPILEUPSUMMARIES { + } + withName:GATK4_GATHERPILEUPSUMMARIES { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_GENOMICSDBIMPORT { + } + withName:GATK4_GENOMICSDBIMPORT { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_GENOTYPEGVCFS { + } + withName:GATK4_GENOTYPEGVCFS { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_GETPILEUPSUMMARIES { + } + withName:GATK4_GETPILEUPSUMMARIES { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_HAPLOTYPECALLER { + } + withName:GATK4_HAPLOTYPECALLER { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_INTERVALLISTTOBED { + } + withName:GATK4_INTERVALLISTTOBED { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_LEARNREADORIENTATIONMODEL { + } + withName:GATK4_LEARNREADORIENTATIONMODEL { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_MARKDUPLICATES { + } + withName:GATK4_MARKDUPLICATES { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_MARKDUPLICATES_SPARK { + } + withName:GATK4_MARKDUPLICATES_SPARK { clusterOptions = { "-S /bin/bash -V -l 
h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_MERGEMUTECTSTATS { + } + withName:GATK4_MERGEMUTECTSTATS { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_MERGEVCFS { + } + withName:GATK4_MERGEVCFS { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_MUTECT2 { + } + withName:GATK4_MUTECT2 { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } - withName:GATK4_VARIANTRECALIBRATOR { + } + withName:GATK4_VARIANTRECALIBRATOR { clusterOptions = { "-S /bin/bash -V -l h_vmem=${(task.memory.toGiga() * 3)}G" } - } + } } diff --git a/conf/pipeline/sarek/icr_davros.config b/conf/pipeline/sarek/icr_davros.config index 8051337bc..5290a51f3 100644 --- a/conf/pipeline/sarek/icr_davros.config +++ b/conf/pipeline/sarek/icr_davros.config @@ -1,13 +1,13 @@ /* - * ------------------------------------------------- - * Nextflow nf-core config file for ICR davros HPC - * ------------------------------------------------- - */ + * ------------------------------------------------- + * Nextflow nf-core config file for ICR davros HPC + * ------------------------------------------------- + */ process { - errorStrategy = {task.exitStatus in [104,134,137,139,141,143,255] ? 'retry' : 'finish'} - maxRetries = 5 - withName:MapReads { + errorStrategy = {task.exitStatus in [104,134,137,139,141,143,255] ? 'retry' : 'finish'} + maxRetries = 5 + withName:MapReads { memory = {check_resource(12.GB)} time = {check_resource(48.h * task.attempt)} - } -} \ No newline at end of file + } +} diff --git a/conf/pipeline/sarek/munin.config b/conf/pipeline/sarek/munin.config index 77f76f0a4..f61bfdfd3 100644 --- a/conf/pipeline/sarek/munin.config +++ b/conf/pipeline/sarek/munin.config @@ -1,29 +1,29 @@ // sarek/munin specific profile config params { - // Specific nf-core/configs params - config_profile_contact = 'Maxime Garcia (@maxulysse)' - config_profile_description = 'nf-core/sarek MUNIN profile provided by nf-core/configs' - config_profile_url = 'https://ki.se/forskning/barntumorbanken' + // Specific nf-core/configs params + config_profile_contact = 'Maxime Garcia (@maxulysse)' + config_profile_description = 'nf-core/sarek MUNIN profile provided by nf-core/configs' + config_profile_url = 'https://ki.se/forskning/barntumorbanken' - // Specific nf-core/sarek params - annotation_cache = true - cadd_cache = true - cadd_indels = '/data1/cache/CADD/v1.4/InDels.tsv.gz' - cadd_indels_tbi = '/data1/cache/CADD/v1.4/InDels.tsv.gz.tbi' - cadd_wg_snvs = '/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz' - cadd_wg_snvs_tbi = '/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz.tbi' - pon = '/data1/PON/vcfs/BTB.PON.vcf.gz' - pon_index = '/data1/PON/vcfs/BTB.PON.vcf.gz.tbi' - snpeff_cache = '/data1/cache/snpEff/' - vep_cache = '/data1/cache/VEP/' - vep_cache_version = '95' + // Specific nf-core/sarek params + annotation_cache = true + cadd_cache = true + cadd_indels = '/data1/cache/CADD/v1.4/InDels.tsv.gz' + cadd_indels_tbi = '/data1/cache/CADD/v1.4/InDels.tsv.gz.tbi' + cadd_wg_snvs = '/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz' + cadd_wg_snvs_tbi = '/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz.tbi' + pon = '/data1/PON/vcfs/BTB.PON.vcf.gz' + pon_index = '/data1/PON/vcfs/BTB.PON.vcf.gz.tbi' + snpeff_cache = '/data1/cache/snpEff/' + vep_cache = '/data1/cache/VEP/' + vep_cache_version = '95' } // Specific nf-core/sarek process configuration process { - withLabel:sentieon { + withLabel:sentieon { module = 
{params.sentieon ? 'sentieon/202112.02' : null} container = {params.sentieon ? null : container} - } + } } diff --git a/conf/pipeline/sarek/uppmax.config b/conf/pipeline/sarek/uppmax.config index b52c4ffc3..bcc752ed0 100644 --- a/conf/pipeline/sarek/uppmax.config +++ b/conf/pipeline/sarek/uppmax.config @@ -1,33 +1,33 @@ // sarek/uppmax specific profile config params { - config_profile_contact = 'Maxime Garcia (@MaxUlysse)' - config_profile_description = 'nf-core/sarek uppmax profile provided by nf-core/configs' + config_profile_contact = 'Maxime Garcia (@MaxUlysse)' + config_profile_description = 'nf-core/sarek uppmax profile provided by nf-core/configs' - single_cpu_mem = 7000.MB -// Just useful until iGenomes is updated on UPPMAX - igenomes_ignore = true - genomes_base = params.genome == 'GRCh37' ? '/sw/data/uppnex/ToolBox/ReferenceAssemblies/hg38make/bundle/2.8/b37' : '/sw/data/uppnex/ToolBox/hg38bundle' + single_cpu_mem = 7000.MB + // Just useful until iGenomes is updated on UPPMAX + igenomes_ignore = true + genomes_base = params.genome == 'GRCh37' ? '/sw/data/uppnex/ToolBox/ReferenceAssemblies/hg38make/bundle/2.8/b37' : '/sw/data/uppnex/ToolBox/hg38bundle' } def hostname = "hostname".execute().text.trim() if (hostname ==~ "r.*") { - params.single_cpu_mem = 6400.MB + params.single_cpu_mem = 6400.MB - process { - withName:BamQC { - cpus = {params.max_cpus} - memory = {params.max_memory} + process { + withName:BamQC { + cpus = {params.max_cpus} + memory = {params.max_memory} + } } - } } if (hostname ==~ "i.*") { - params.single_cpu_mem = 15.GB + params.single_cpu_mem = 15.GB } // Miarka-specific config if (hostname ==~ "m.*") { - params.single_cpu_mem = 7.GB + params.single_cpu_mem = 7.GB } diff --git a/conf/pipeline/scflow/imperial.config b/conf/pipeline/scflow/imperial.config index 78a4a8b3f..6380486f2 100644 --- a/conf/pipeline/scflow/imperial.config +++ b/conf/pipeline/scflow/imperial.config @@ -1,18 +1,18 @@ // scflow/imperial specific profile config params { - // Config Params - config_profile_description = 'Imperial College London - HPC - nf-core/scFlow Profile -- provided by nf-core/configs.' - config_profile_contact = 'NA' + // Config Params + config_profile_description = 'Imperial College London - HPC - nf-core/scFlow Profile -- provided by nf-core/configs.' 
+ config_profile_contact = 'NA' - // Analysis Resource Params - ctd_folder = "/rds/general/user/$USER/projects/ukdrmultiomicsproject/live/Analyses/scFlowResources/refs/ctd" - ensembl_mappings = "/rds/general/user/$USER/projects/ukdrmultiomicsproject/live/Analyses/scFlowResources/src/ensembl-ids/ensembl_mappings.tsv" + // Analysis Resource Params + ctd_folder = "/rds/general/user/$USER/projects/ukdrmultiomicsproject/live/Analyses/scFlowResources/refs/ctd" + ensembl_mappings = "/rds/general/user/$USER/projects/ukdrmultiomicsproject/live/Analyses/scFlowResources/src/ensembl-ids/ensembl_mappings.tsv" } singularity { - enabled = true - autoMounts = true - cacheDir = "/rds/general/user/$USER/projects/ukdrmultiomicsproject/live/.singularity-cache" - runOptions = "-B /rds/,/rdsgpfs/,/rds/general/user/$USER/ephemeral/tmp/:/tmp,/rds/general/user/$USER/ephemeral/tmp/:/var/tmp" + enabled = true + autoMounts = true + cacheDir = "/rds/general/user/$USER/projects/ukdrmultiomicsproject/live/.singularity-cache" + runOptions = "-B /rds/,/rdsgpfs/,/rds/general/user/$USER/ephemeral/tmp/:/tmp,/rds/general/user/$USER/ephemeral/tmp/:/var/tmp" } diff --git a/conf/pipeline/taxprofiler/eva.config b/conf/pipeline/taxprofiler/eva.config index 826662543..2fe617a64 100644 --- a/conf/pipeline/taxprofiler/eva.config +++ b/conf/pipeline/taxprofiler/eva.config @@ -1,7 +1,7 @@ params { - // Specific nf-core/configs params - config_profile_contact = 'James Fellows Yates (@jfy133)' - config_profile_description = 'nf-core/taxprofiler EVA profile provided by nf-core/configs' + // Specific nf-core/configs params + config_profile_contact = 'James Fellows Yates (@jfy133)' + config_profile_description = 'nf-core/taxprofiler EVA profile provided by nf-core/configs' } env { diff --git a/conf/pipeline/taxprofiler/hasta.config b/conf/pipeline/taxprofiler/hasta.config index ae43fc519..6e46e259d 100644 --- a/conf/pipeline/taxprofiler/hasta.config +++ b/conf/pipeline/taxprofiler/hasta.config @@ -1,8 +1,8 @@ params { - // Specific nf-core/configs params - config_profile_contact = 'Sofia Stamouli (@sofstam)' - config_profile_description = 'nf-core/taxprofiler HASTA profile provided by nf-core/configs' - validationSchemaIgnoreParams = "priority,clusterOptions,schema_ignore_params,genomes,fasta" + // Specific nf-core/configs params + config_profile_contact = 'Sofia Stamouli (@sofstam)' + config_profile_description = 'nf-core/taxprofiler HASTA profile provided by nf-core/configs' + validationSchemaIgnoreParams = "priority,clusterOptions,schema_ignore_params,genomes,fasta" } process { diff --git a/conf/pipeline/viralrecon/eddie.config b/conf/pipeline/viralrecon/eddie.config index 8f0463d17..6b691a616 100644 --- a/conf/pipeline/viralrecon/eddie.config +++ b/conf/pipeline/viralrecon/eddie.config @@ -1,13 +1,13 @@ env { - BLASTDB_LMDB_MAP_SIZE=100000000 + BLASTDB_LMDB_MAP_SIZE=100000000 } process { - withName : '.*PICARD.*' { + withName : '.*PICARD.*' { clusterOptions = {"-l h_vmem=${(task.memory + 4.GB).bytes/task.cpus}"} - } + } - withName : '.*SNPEFF.*' { + withName : '.*SNPEFF.*' { clusterOptions = {"-l h_vmem=${(task.memory + 4.GB).bytes/task.cpus}"} - } + } } diff --git a/conf/pipeline/viralrecon/genomes.config b/conf/pipeline/viralrecon/genomes.config index 16bacefb3..b23b70974 100644 --- a/conf/pipeline/viralrecon/genomes.config +++ b/conf/pipeline/viralrecon/genomes.config @@ -6,120 +6,120 @@ */ params { - // Genome reference file paths - genomes { + // Genome reference file paths + genomes { - // SARS-CoV-2 - 'NC_045512.2' { - // This 
version of the reference has been kept here for backwards compatibility. - // Please use 'MN908947.3' if possible because all primer sets are available / have been pre-prepared relative to that assembly - fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_045512.2/GCF_009858895.2_ASM985889v3_genomic.200409.fna.gz' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_045512.2/GCF_009858895.2_ASM985889v3_genomic.200409.gff.gz' - nextclade_dataset = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/nextclade_sars-cov-2_MN908947_2022-06-14T12_00_00Z.tar.gz' - nextclade_dataset_name = 'sars-cov-2' - nextclade_dataset_reference = 'MN908947' - nextclade_dataset_tag = '2022-06-14T12:00:00Z' - } - - // SARS-CoV-2 - 'MN908947.3' { - fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.fna.gz' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' - nextclade_dataset = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/nextclade_sars-cov-2_MN908947_2022-06-14T12_00_00Z.tar.gz' - nextclade_dataset_name = 'sars-cov-2' - nextclade_dataset_reference = 'MN908947' - nextclade_dataset_tag = '2022-06-14T12:00:00Z' - primer_sets { - artic { - '1' { - fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V1/nCoV-2019.reference.fasta' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' - primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V1/nCoV-2019.primer.bed' - scheme = 'nCoV-2019' - } - '2' { - fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V2/nCoV-2019.reference.fasta' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' - primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V2/nCoV-2019.primer.bed' - scheme = 'nCoV-2019' - } - '3' { - fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V3/nCoV-2019.reference.fasta' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' - primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V3/nCoV-2019.primer.bed' - scheme = 'nCoV-2019' - } - '4' { - fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V4/SARS-CoV-2.reference.fasta' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' - primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V4/SARS-CoV-2.scheme.bed' - scheme = 'SARS-CoV-2' - } - '4.1' { - fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V4.1/SARS-CoV-2.reference.fasta' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' - primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V4.1/SARS-CoV-2.scheme.bed' - scheme = 'SARS-CoV-2' - } - '5.3.2' { - fasta = 
'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V5.3.2/SARS-CoV-2.reference.fasta' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' - primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V5.3.2/SARS-CoV-2.scheme.bed' - scheme = 'SARS-CoV-2' - } - '1200' { - fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/artic/nCoV-2019/V1200/nCoV-2019.reference.fasta' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' - primer_bed = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/artic/nCoV-2019/V1200/nCoV-2019.bed' - scheme = 'nCoV-2019' - } + // SARS-CoV-2 + 'NC_045512.2' { + // This version of the reference has been kept here for backwards compatibility. + // Please use 'MN908947.3' if possible because all primer sets are available / have been pre-prepared relative to that assembly + fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_045512.2/GCF_009858895.2_ASM985889v3_genomic.200409.fna.gz' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_045512.2/GCF_009858895.2_ASM985889v3_genomic.200409.gff.gz' + nextclade_dataset = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/nextclade_sars-cov-2_MN908947_2022-06-14T12_00_00Z.tar.gz' + nextclade_dataset_name = 'sars-cov-2' + nextclade_dataset_reference = 'MN908947' + nextclade_dataset_tag = '2022-06-14T12:00:00Z' } - 'NEB' { - // VarSkip short primers - 'vss1' { - fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/artic/nCoV-2019/V1200/nCoV-2019.reference.fasta' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' - primer_bed = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/NEB/nCov-2019/vss1/neb_vss1.primer.bed' - scheme = 'nCoV-2019' - } - // VarSkip long primers - 'vsl1' { - fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/artic/nCoV-2019/V1200/nCoV-2019.reference.fasta' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' - primer_bed = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/NEB/nCov-2019/vsl1/neb_vsl1.primer.bed' - scheme = 'nCoV-2019' - } + + // SARS-CoV-2 + 'MN908947.3' { + fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.fna.gz' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' + nextclade_dataset = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/nextclade_sars-cov-2_MN908947_2022-06-14T12_00_00Z.tar.gz' + nextclade_dataset_name = 'sars-cov-2' + nextclade_dataset_reference = 'MN908947' + nextclade_dataset_tag = '2022-06-14T12:00:00Z' + primer_sets { + artic { + '1' { + fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V1/nCoV-2019.reference.fasta' + gff = 
'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' + primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V1/nCoV-2019.primer.bed' + scheme = 'nCoV-2019' + } + '2' { + fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V2/nCoV-2019.reference.fasta' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' + primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V2/nCoV-2019.primer.bed' + scheme = 'nCoV-2019' + } + '3' { + fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V3/nCoV-2019.reference.fasta' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' + primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V3/nCoV-2019.primer.bed' + scheme = 'nCoV-2019' + } + '4' { + fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V4/SARS-CoV-2.reference.fasta' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' + primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V4/SARS-CoV-2.scheme.bed' + scheme = 'SARS-CoV-2' + } + '4.1' { + fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V4.1/SARS-CoV-2.reference.fasta' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' + primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V4.1/SARS-CoV-2.scheme.bed' + scheme = 'SARS-CoV-2' + } + '5.3.2' { + fasta = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V5.3.2/SARS-CoV-2.reference.fasta' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' + primer_bed = 'https://github.com/artic-network/artic-ncov2019/raw/master/primer_schemes/nCoV-2019/V5.3.2/SARS-CoV-2.scheme.bed' + scheme = 'SARS-CoV-2' + } + '1200' { + fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/artic/nCoV-2019/V1200/nCoV-2019.reference.fasta' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' + primer_bed = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/artic/nCoV-2019/V1200/nCoV-2019.bed' + scheme = 'nCoV-2019' + } + } + 'NEB' { + // VarSkip short primers + 'vss1' { + fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/artic/nCoV-2019/V1200/nCoV-2019.reference.fasta' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' + primer_bed = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/NEB/nCov-2019/vss1/neb_vss1.primer.bed' + scheme = 'nCoV-2019' + } + // VarSkip long primers + 'vsl1' { + fasta = 
'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/artic/nCoV-2019/V1200/nCoV-2019.reference.fasta' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz' + primer_bed = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/NEB/nCov-2019/vsl1/neb_vsl1.primer.bed' + scheme = 'nCoV-2019' + } + } + 'atoplex' { + fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/artic/nCoV-2019/V1200/nCoV-2019.reference.fasta' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_045512.2/GCF_009858895.2_ASM985889v3_genomic.200409.gff.gz' + primer_bed = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_045512.2/amplicon/nCoV-2019.atoplex.V1.bed' + scheme = 'nCoV-2019' + } + } } - 'atoplex' { - fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MN908947.3/primer_schemes/artic/nCoV-2019/V1200/nCoV-2019.reference.fasta' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_045512.2/GCF_009858895.2_ASM985889v3_genomic.200409.gff.gz' - primer_bed = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_045512.2/amplicon/nCoV-2019.atoplex.V1.bed' - scheme = 'nCoV-2019' + + // Monkeypox + 'NC_063383.1' { + fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_063383.1/GCF_014621545.1_ASM1462154v1_genomic.220824.fna.gz' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_063383.1/GCF_014621545.1_ASM1462154v1_genomic.220824.gff.gz' + nextclade_dataset = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_063383.1/nextclade_hMPXV_NC_063383.1_2022-08-19T12_00_00Z.tar.gz' + nextclade_dataset_name = 'hMPXV' + nextclade_dataset_reference = 'NC_063383.1' + nextclade_dataset_tag = '2022-08-19T12:00:00Z' } - } - } - // Monkeypox - 'NC_063383.1' { - fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_063383.1/GCF_014621545.1_ASM1462154v1_genomic.220824.fna.gz' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_063383.1/GCF_014621545.1_ASM1462154v1_genomic.220824.gff.gz' - nextclade_dataset = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/NC_063383.1/nextclade_hMPXV_NC_063383.1_2022-08-19T12_00_00Z.tar.gz' - nextclade_dataset_name = 'hMPXV' - nextclade_dataset_reference = 'NC_063383.1' - nextclade_dataset_tag = '2022-08-19T12:00:00Z' - } + // Monkeypox + 'ON563414.3' { + fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/ON563414.3/GCA_023516015.3_ASM2351601v1_genomic.220824.fna.gz' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/ON563414.3/GCA_023516015.3_ASM2351601v1_genomic.220824.gff.gz' + } - // Monkeypox - 'ON563414.3' { - fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/ON563414.3/GCA_023516015.3_ASM2351601v1_genomic.220824.fna.gz' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/ON563414.3/GCA_023516015.3_ASM2351601v1_genomic.220824.gff.gz' - } + // Monkeypox + 'MT903344.1' { + fasta = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MT903344.1/GCA_014621585.1_ASM1462158v1_genomic.220824.fna.gz' + gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MT903344.1/GCA_014621585.1_ASM1462158v1_genomic.220824.gff.gz' + } - // Monkeypox - 'MT903344.1' { - fasta = 
'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MT903344.1/GCA_014621585.1_ASM1462158v1_genomic.220824.fna.gz' - gff = 'https://github.com/nf-core/test-datasets/raw/viralrecon/genome/MT903344.1/GCA_014621585.1_ASM1462158v1_genomic.220824.gff.gz' } - - } } diff --git a/conf/psmn.config b/conf/psmn.config index 01c0ab0d9..2f0c49e61 100644 --- a/conf/psmn.config +++ b/conf/psmn.config @@ -1,24 +1,65 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'PSMN (Pôle Scientifique de Modélisation Numérique) HPC cluster profile' - config_profile_contact = 'Laurent Modolo (@l-modolo)' - config_profile_url = 'http://www.ens-lyon.fr/PSMN/doku.php?id=en:accueil' + config_profile_description = 'PSMN (Pôle Scientifique de Modélisation Numérique) HPC cluster profile' + config_profile_contact = 'Laurent Modolo (@l-modolo)' + config_profile_url = 'http://www.ens-lyon.fr/PSMN/doku.php?id=en:accueil' } charliecloud { - enabled = true - cacheDir = "/Xnfs/abc/charliecloud" - runOptions = "--bind /scratch:/scratch --bind /Xnfs:/Xnfs --bind /home:/home" - readOnlyInputs = true + enabled = true + cacheDir = "/Xnfs/abc/charliecloud" + runOptions = "--bind /scratch:/scratch --bind /Xnfs:/Xnfs --bind /home:/home" + readOnlyInputs = true } process { - executor = 'slurm' - clusterOptions = "--partition=Lake" + executor = 'slurm' + clusterOptions = "--partition=E5,Lake,Lake-flix" + + cpus = 1 + memory = 32.GB + time = 24.h + + withLabel: 'process_single|process_single_thread|sc_tiny|sc_small|sc_medium' { + clusterOptions = "--partition=E5,Lake,Lake-flix" + cpus = 1 + memory = 114.GB + time = 24.h + } + withLabel:'process_low|mc_small|process_very_low' { + clusterOptions = "--partition=E5,Lake,Lake-flix" + cpus = 16 + memory = 90.GB + time = 24.h + } + withLabel:'process_medium|mc_medium' { + clusterOptions = "--partition=Lake,Lake-flix" + cpus = 32 + memory = 180.GB + time = 48.h + } + withLabel:'process_high|mc_large|mc_huge|process_high_cpus|cpus_max' { + clusterOptions = "--partition=Lake,Lake-flix" + cpus = 32 + memory = 370.GB + time = 48.h + } + withLabel: 'process_long|process_maximum_time|process_long_parallelized' { + clusterOptions = "--partition=Lake" + time = 96.h + } + withLabel: 'process_high_memory|memory_max' { + clusterOptions = "--partition=Lake,Lake-flix" + memory = 370.GB + } + withLabel: gpu { + clusterOptions = "--partition=E5-GPU" + } } params { - max_memory = 512.GB - max_cpus = 32 - max_time = 24.h + max_memory = 370.GB + max_cpus = 32 + max_time = 96.h } + diff --git a/conf/qmul_apocrita.config b/conf/qmul_apocrita.config new file mode 100644 index 000000000..f113da63d --- /dev/null +++ b/conf/qmul_apocrita.config @@ -0,0 +1,32 @@ +params { + + config_profile_description = 'Queen Mary Universtiy of London' + config_profile_contact = 'Simon Murray (simon . murray AT ucl . ac . 
uk)' + config_profile_url = 'https://docs.hpc.qmul.ac.uk/' + +} + +executor { + name = 'sge' +} + +apptainer.runOptions = "-B ${HOME},${PWD}" + +process { + + //NEED TO SET PARALLEL ENVIRONMENT TO SMP SO MULTIPLE CPUS CAN BE SUBMITTED + penv = 'smp' + + //ADD MEMORY TO CLUSTEROPTIONS + clusterOptions = { "-S /bin/bash -l h_vmem=${(task.memory.mega/task.cpus)}M" } + + withLabel:process_high { + clusterOptions = { "-S /bin/bash -l h_vmem=${(task.memory.mega/task.cpus)}M -l highmem" } + } + withLabel:process_long { + clusterOptions = { "-S /bin/bash -l h_vmem=${(task.memory.mega/task.cpus)}M -l highmem" } + } + withLabel:process_high_memory { + clusterOptions = { "-S /bin/bash -l h_vmem=${(task.memory.mega/task.cpus)}M -l highmem" } + } +} diff --git a/conf/rosalind.config b/conf/rosalind.config index 3e3bbf4c4..ae9f3f1ed 100644 --- a/conf/rosalind.config +++ b/conf/rosalind.config @@ -1,30 +1,30 @@ params { - config_profile_description = 'Kings College London Rosalind HPC' - config_profile_contact = 'Theo Portlock' - config_profile_url = 'https://www.rosalind.kcl.ac.uk/' + config_profile_description = 'Kings College London Rosalind HPC' + config_profile_contact = 'Theo Portlock' + config_profile_url = 'https://www.rosalind.kcl.ac.uk/' } singularity { - enabled = true - autoMounts = true - docker.enabled = false + enabled = true + autoMounts = true + docker.enabled = false } params { - max_memory = 64.GB - max_cpus = 16 - max_time = 24.h - partition = 'shared' - schema_ignore_params = 'partition,genomes,modules' - validationSchemaIgnoreParams = "partition,genomes,modules,schema_ignore_params" + max_memory = 64.GB + max_cpus = 16 + max_time = 24.h + partition = 'shared' + schema_ignore_params = 'partition,genomes,modules' + validationSchemaIgnoreParams = "partition,genomes,modules,schema_ignore_params" } process { - executor = 'slurm' - maxRetries = 3 - clusterOptions = { "--partition=$params.partition" } + executor = 'slurm' + maxRetries = 3 + clusterOptions = { "--partition=$params.partition" } } executor { - submitRateLimit = '1 sec' + submitRateLimit = '1 sec' } diff --git a/conf/rosalind_uge.config b/conf/rosalind_uge.config index 7e87ed698..7974406c0 100644 --- a/conf/rosalind_uge.config +++ b/conf/rosalind_uge.config @@ -28,7 +28,7 @@ process { // Error and retry handling errorStrategy = { task.exitStatus in [143,137,104,134,139,71,255] ? 
'retry' : 'finish' } maxRetries = 3 - + // Executor and queue information executor = 'sge' penv = 'smp' diff --git a/conf/sage.config b/conf/sage.config index 2aa572981..8efe51b23 100644 --- a/conf/sage.config +++ b/conf/sage.config @@ -1,25 +1,25 @@ // Config profile metadata params { - config_profile_description = 'The Sage Bionetworks Nextflow Config Profile' - config_profile_contact = 'Bruno Grande (@BrunoGrandePhD)' - config_profile_url = 'https://github.com/Sage-Bionetworks-Workflows' + config_profile_description = 'The Sage Bionetworks Nextflow Config Profile' + config_profile_contact = 'Bruno Grande (@BrunoGrandePhD)' + config_profile_url = 'https://github.com/Sage-Bionetworks-Workflows' } // Leverage us-east-1 mirror of select human and mouse genomes params { - igenomes_base = 's3://sage-igenomes/igenomes' - cpus = 4 - max_cpus = 32 - max_memory = 128.GB - max_time = 240.h - single_cpu_mem = 6.GB + igenomes_base = 's3://sage-igenomes/igenomes' + cpus = 4 + max_cpus = 32 + max_memory = 128.GB + max_time = 240.h + single_cpu_mem = 6.GB } // Enable retries globally for certain exit codes process { - maxErrors = '-1' - maxRetries = 5 - errorStrategy = { task.attempt <= 5 ? 'retry' : 'finish' } + maxErrors = '-1' + maxRetries = 5 + errorStrategy = { task.attempt <= 5 ? 'retry' : 'finish' } } // Increase time limit to allow file transfers to finish @@ -28,55 +28,55 @@ threadPool.FileTransfer.maxAwait = '24 hour' // Configure Nextflow to be more reliable on AWS aws { - region = "us-east-1" - client { + region = "us-east-1" + client { uploadMaxThreads = 4 - } - batch { + } + batch { retryMode = 'built-in' maxParallelTransfers = 1 maxTransferAttempts = 10 delayBetweenAttempts = '60 sec' - } + } } // Adjust default resource allocations (see `../docs/sage.md`) process { - cpus = { check_max( 1 * factor(task, 2), 'cpus' ) } - memory = { check_max( 6.GB * factor(task, 1), 'memory' ) } - time = { check_max( 24.h * factor(task, 1), 'time' ) } + cpus = { check_max( 1 * factor(task, 2), 'cpus' ) } + memory = { check_max( 6.GB * factor(task, 1), 'memory' ) } + time = { check_max( 24.h * factor(task, 1), 'time' ) } - // Process-specific resource requirements - withLabel: 'process_single' { + // Process-specific resource requirements + withLabel: 'process_single' { cpus = { check_max( 1 * factor(task, 2), 'cpus' ) } memory = { check_max( 6.GB * factor(task, 1), 'memory' ) } time = { check_max( 24.h * factor(task, 1), 'time' ) } - } - withLabel: 'process_low' { + } + withLabel: 'process_low' { cpus = { check_max( 2 * factor(task, 2), 'cpus' ) } memory = { check_max( 12.GB * factor(task, 1), 'memory' ) } time = { check_max( 24.h * factor(task, 1), 'time' ) } - } - withLabel: 'process_medium' { + } + withLabel: 'process_medium' { cpus = { check_max( 8 * factor(task, 2), 'cpus' ) } memory = { check_max( 32.GB * factor(task, 1), 'memory' ) } time = { check_max( 48.h * factor(task, 1), 'time' ) } - } - withLabel: 'process_high' { + } + withLabel: 'process_high' { cpus = { check_max( 16 * factor(task, 2), 'cpus' ) } memory = { check_max( 64.GB * factor(task, 1), 'memory' ) } time = { check_max( 96.h * factor(task, 1), 'time' ) } - } - withLabel: 'process_long' { + } + withLabel: 'process_long' { time = { check_max( 96.h * factor(task, 1), 'time' ) } - } - withLabel: 'process_high_memory|memory_max' { + } + withLabel: 'process_high_memory|memory_max' { memory = { check_max( 128.GB * factor(task, 1), 'memory' ) } - } - withLabel: 'cpus_max' { + } + withLabel: 'cpus_max' { cpus = { check_max( 32 * factor(task, 
2), 'cpus' ) } - } + } } diff --git a/conf/sahmri.config b/conf/sahmri.config index b47db52bd..f346c63b0 100644 --- a/conf/sahmri.config +++ b/conf/sahmri.config @@ -1,34 +1,34 @@ params { - config_profile_description = 'South Australian Health and Medical Research Institute (SAHMRI) HPC cluster profile.' - config_profile_contact = 'Nathan Watson-Haigh (nathan.watson-haigh@sahmri.com)' - config_profile_url = "https://sahmri.org.au" - max_memory = 375.GB - max_cpus = 32 - max_time = 14.d - igenomes_base = '/cancer/storage/shared/igenomes/references/' + config_profile_description = 'South Australian Health and Medical Research Institute (SAHMRI) HPC cluster profile.' + config_profile_contact = 'Nathan Watson-Haigh (nathan.watson-haigh@sahmri.com)' + config_profile_url = "https://sahmri.org.au" + max_memory = 375.GB + max_cpus = 32 + max_time = 14.d + igenomes_base = '/cancer/storage/shared/igenomes/references/' } process { - executor = 'slurm' - queue = 'sahmri_prod_hpc,sahmri_cancer_hpc' - maxRetries = 2 + executor = 'slurm' + queue = 'sahmri_prod_hpc,sahmri_cancer_hpc' + maxRetries = 2 - cpus = { check_max( 2 * task.attempt, 'cpus') } - memory = { check_max( 1.GB * task.attempt, 'memory') } - time = { check_max( 10.m * task.attempt, 'time') } + cpus = { check_max( 2 * task.attempt, 'cpus') } + memory = { check_max( 1.GB * task.attempt, 'memory') } + time = { check_max( 10.m * task.attempt, 'time') } } executor { - queueSize = 50 - submitRateLimit = '10 sec' + queueSize = 50 + submitRateLimit = '10 sec' } singularity { - enabled = true - autoMounts = true - beforeScript = 'export PATH=/apps/opt/singularity/latest/bin:${PATH}' - cacheDir = '/cancer/storage/shared/simg' + enabled = true + autoMounts = true + beforeScript = 'export PATH=/apps/opt/singularity/latest/bin:${PATH}' + cacheDir = '/cancer/storage/shared/simg' } cleanup = true profiles { - debug { + debug { cleanup = false - } + } } diff --git a/conf/sanger.config b/conf/sanger.config index adaece982..f1b089b28 100644 --- a/conf/sanger.config +++ b/conf/sanger.config @@ -1,6 +1,6 @@ // Extract the name of the cluster to tune the parameters below -def clustername = "farm5" +def clustername = "farm22" try { clustername = ['/bin/bash', '-c', 'lsid | awk \'$0 ~ /^My cluster name is/ {print $5}\''].execute().text.trim() } catch (java.io.IOException e) { diff --git a/conf/scw.config b/conf/scw.config index 272fbea14..827f6ab7a 100644 --- a/conf/scw.config +++ b/conf/scw.config @@ -1,22 +1,22 @@ params { - config_profile_description = 'Super Computing Wales' - config_profile_contact = 'j.downie@bangor.ac.uk' - config_profile_url = 'https://supercomputing.wales/' + config_profile_description = 'Super Computing Wales' + config_profile_contact = 'j.downie@bangor.ac.uk' + config_profile_url = 'https://supercomputing.wales/' } singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } executor { - name = 'slurm' - queueSize = 10 - queue = 'htc' + name = 'slurm' + queueSize = 10 + queue = 'htc' } params { - max_memory = 384.GB - max_cpus = 20 - max_time = 72.h + max_memory = 384.GB + max_cpus = 20 + max_time = 72.h } process { - beforeScript = 'module load singularity-ce/3.11.4' + beforeScript = 'module load singularity-ce/3.11.4' } diff --git a/conf/seattlechildrens.config b/conf/seattlechildrens.config new file mode 100644 index 000000000..d736fc812 --- /dev/null +++ b/conf/seattlechildrens.config @@ -0,0 +1,29 @@ +//Create profiles to easily switch between the different process executors and platforms. 
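// Illustrative usage note, not part of the committed config: the params and
// profiles defined below appear intended to be combined at run time, e.g.
//
//   nextflow run nf-core/rnaseq -profile seattlechildrens,PBS_singularity --project <project_code>
//
// (pipeline name and project code are placeholders). With the PBS_singularity
// profile, tasks then go to PBS Pro on the default 'paidq' queue and
// process.clusterOptions evaluates to "-P <project_code>"; the self-referencing
// default `project = "${params.project}"` suggests the project code is expected
// to come from the command line rather than this file.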
+ +//global parameters +params { + config_profile_description = 'The SCRI (seattle childrens research institute) cluster profile' + config_profile_contact = 'Research Scientific Computing (@RSC-RP)' + config_profile_url = 'https://github.com/RSC-RP' + + // SCRI HPC project params + queue = "paidq" // freeq + project = "${params.project}" +} + + +profiles { + //For running on an interactive session on cybertron with singularity module loaded + local_singularity { + process.executor = 'local' + singularity.enabled = true + } + //For executing the jobs on the HPC cluster with singularity containers + PBS_singularity { + process.executor = 'pbspro' + process.queue = "${params.queue}" + process.clusterOptions = "-P ${params.project}" + process.beforeScript = 'module load singularity' + singularity.enabled = true + } +} diff --git a/conf/seawulf.config b/conf/seawulf.config index f4d2e99bb..e9848c5cd 100644 --- a/conf/seawulf.config +++ b/conf/seawulf.config @@ -1,23 +1,23 @@ singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } process { - executor = 'slurm' - maxRetries = 4 - queue = { task.cpus <= 40 ? 'long-40core' : 'long-96core' } + executor = 'slurm' + maxRetries = 4 + queue = { task.cpus <= 40 ? 'long-40core' : 'long-96core' } } params { - config_profile_contact = 'David Carlson (@davidecarlson)' - config_profile_url = 'https://it.stonybrook.edu/services/high-performance-computing' - config_profile_description = 'Stony Brook Universitys seaWulf cluster profile provided by nf-core/configs.' - max_time = 48.h - max_memory = 251.GB - max_cpus = 96 + config_profile_contact = 'David Carlson (@davidecarlson)' + config_profile_url = 'https://it.stonybrook.edu/services/high-performance-computing' + config_profile_description = 'Stony Brook Universitys seaWulf cluster profile provided by nf-core/configs.' + max_time = 48.h + max_memory = 251.GB + max_cpus = 96 } executor { - queueSize = 25 - submitRateLimit = '5 sec' + queueSize = 25 + submitRateLimit = '5 sec' } diff --git a/conf/seg_globe.config b/conf/seg_globe.config index 41a3d6e27..b5d1658aa 100644 --- a/conf/seg_globe.config +++ b/conf/seg_globe.config @@ -1,27 +1,27 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'Section for Evolutionary Genomics @ GLOBE, University of Copenhagen - seg_globe profile provided by nf-core/configs.' - config_profile_contact = 'Aashild Vaagene (@ashildv)' - config_profile_url = 'https://globe.ku.dk/research/evogenomics/' - max_memory = 250.GB - max_cpus = 35 - max_time = 720.h + config_profile_description = 'Section for Evolutionary Genomics @ GLOBE, University of Copenhagen - seg_globe profile provided by nf-core/configs.' + config_profile_contact = 'Aashild Vaagene (@ashildv)' + config_profile_url = 'https://globe.ku.dk/research/evogenomics/' + max_memory = 250.GB + max_cpus = 35 + max_time = 720.h } singularity { - enabled = true - autoMounts = true - cacheDir = '/shared/volume/hologenomics/data/cache/nf-eager/singularity' + enabled = true + autoMounts = true + cacheDir = '/shared/volume/hologenomics/data/cache/nf-eager/singularity' } process { - executor = 'slurm' - queue = { task.time < 24.h ? 'hologenomics-short' : task.time < 168.h ? 'hologenomics' : 'hologenomics-long' } + executor = 'slurm' + queue = { task.time < 24.h ? 'hologenomics-short' : task.time < 168.h ? 
'hologenomics' : 'hologenomics-long' } } - + cleanup = true - + executor { - queueSize = 8 + queueSize = 8 } diff --git a/conf/tigem.config b/conf/tigem.config index e06253885..bbe2dffcf 100644 --- a/conf/tigem.config +++ b/conf/tigem.config @@ -1,7 +1,7 @@ params { - config_profile_description = 'Telethon Institute of Genetic and Medicine (TIGEM) provided by nf-core/configs.' - config_profile_contact = 'Giuseppe Martone (@giusmar)' - config_profile_url = 'https://github.com/giusmar' + config_profile_description = 'Telethon Institute of Genetic and Medicine (TIGEM) provided by nf-core/configs.' + config_profile_contact = 'Giuseppe Martone (@giusmar)' + config_profile_url = 'https://github.com/giusmar' } process.executor = 'slurm' diff --git a/conf/tubingen_apg.config b/conf/tubingen_apg.config index 56ae8a6c5..22afa90bd 100644 --- a/conf/tubingen_apg.config +++ b/conf/tubingen_apg.config @@ -27,7 +27,7 @@ singularity { } profiles { - // Profile to deactivate automatic cleanup of work directory after a successful run. Overwrites cleanup option. + // Profile to deactivate automatic cleanup of work directory after a successful run. Overwrites cleanup option. debug { cleanup = false } diff --git a/conf/tufts.config b/conf/tufts.config new file mode 100644 index 000000000..322a60f46 --- /dev/null +++ b/conf/tufts.config @@ -0,0 +1,35 @@ +//Profile config names for nf-core/configs +params { + config_profile_description = 'The Tufts University HPC cluster profile provided by nf-core/configs.' + config_profile_contact = 'Yucheng Zhang' + config_profile_contact_github = '@zhan4429' + config_profile_contact_email = 'Yucheng.Zhang@tufts.edu' + config_profile_url = 'https://it.tufts.edu/high-performance-computing' +} + +params { + max_memory = 120.GB + max_cpus = 72 + max_time = 168.h + partition = 'batch' + igenomes_base = '/cluster/tufts/biocontainers/datasets/igenomes/' +} + +process { + executor = 'slurm' + clusterOptions = "-N 1 -n 1 -p $params.partition" + } + +executor { + queueSize = 16 + pollInterval = '1 min' + queueStatInterval = '5 min' + submitRateLimit = '10 sec' +} + +// Set $NXF_SINGULARITY_CACHEDIR in your ~/.bashrc +// to stop downloading the same image for every run +singularity { + enabled = true + autoMounts = true +} diff --git a/conf/tuos_stanage.config b/conf/tuos_stanage.config index 46c158195..bcb277fbc 100644 --- a/conf/tuos_stanage.config +++ b/conf/tuos_stanage.config @@ -6,9 +6,9 @@ params { - config_profile_description = 'Sheffield Bioinformatics Core - Stanage' - config_profile_contact = 'Sheffield Bioinformatics Core (bioinformatics-core@sheffield.ac.uk)' - config_profile_url = 'https://docs.hpc.shef.ac.uk/en/latest/stanage/index.html#stanage' + config_profile_description = 'Sheffield Bioinformatics Core - Stanage' + config_profile_contact = 'Sheffield Bioinformatics Core (bioinformatics-core@sheffield.ac.uk)' + config_profile_url = 'https://docs.hpc.shef.ac.uk/en/latest/stanage/index.html#stanage' } @@ -16,10 +16,10 @@ params { // hpc resource limits params { - - max_cpus = 64 - max_memory = 251.GB - max_time = 96.h + + max_cpus = 64 + max_memory = 251.GB + max_time = 96.h } @@ -28,9 +28,9 @@ params { process { - // scheduler + // scheduler - executor = 'slurm' + executor = 'slurm' } @@ -39,8 +39,8 @@ process { executor { - queueSize = 50 - submitRateLimit = '1 sec' + queueSize = 50 + submitRateLimit = '1 sec' } @@ -49,8 +49,8 @@ executor { singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } diff --git a/conf/ucd_sonic.config 
b/conf/ucd_sonic.config index 9eb519446..37999fb80 100644 --- a/conf/ucd_sonic.config +++ b/conf/ucd_sonic.config @@ -1,22 +1,22 @@ params { - config_profile_name = 'UCD_SONIC' - config_profile_description = 'University College Dublin Sonic HPC profile provided by nf-core/configs.' - config_profile_contact = 'Bruce Moran (@brucemoran)' - config_profile_url = 'https://www.ucd.ie/itservices/ourservices/researchit/researchcomputing/sonichpc/' - max_cpus = 40 - max_time = 12.h + config_profile_name = 'UCD_SONIC' + config_profile_description = 'University College Dublin Sonic HPC profile provided by nf-core/configs.' + config_profile_contact = 'Bruce Moran (@brucemoran)' + config_profile_url = 'https://www.ucd.ie/itservices/ourservices/researchit/researchcomputing/sonichpc/' + max_cpus = 40 + max_time = 12.h } process { - executor = 'slurm' - queue = 'shared' - queueSize = 50 - submitRateLimit = '10 sec' - maxRetries = 2 - beforeScript = 'export NXF_OPTS="-Xms2G -Xmx40G"; module load nextflow/22.04.5.5708 singularity/3.5.2' - clusterOptions = { "--mem 1M" } - cache = 'lenient' - memory = 1.MB + executor = 'slurm' + queue = 'shared' + queueSize = 50 + submitRateLimit = '10 sec' + maxRetries = 2 + beforeScript = 'export NXF_OPTS="-Xms2G -Xmx40G"; module load nextflow/22.04.5.5708 singularity/3.5.2' + clusterOptions = { "--mem 1M" } + cache = 'lenient' + memory = 1.MB } cleanup = true diff --git a/conf/ucl_cscluster.config b/conf/ucl_cscluster.config new file mode 100644 index 000000000..c1015ddde --- /dev/null +++ b/conf/ucl_cscluster.config @@ -0,0 +1,22 @@ +params { + + config_profile_description = 'University College London CS cluster' + config_profile_contact = 'Simon Murray (simon . murray AT ucl . ac . uk)' + config_profile_url = 'https://hpc.cs.ucl.ac.uk/' + +} + +executor { + name = 'sge' +} + +singularity.runOptions = "-B ${HOME},${PWD}" + +process { + + //NEED TO SET PARALLEL ENVIRONMENT TO SMP SO MULTIPLE CPUS CAN BE SUBMITTED + penv = 'smp' + + //ADD MEMORY TO CLUSTEROPTIONS + clusterOptions = { "-S /bin/bash -l tmem=${task.memory.mega}M,h_vmem=${task.memory.mega}M" } +} diff --git a/conf/ucl_myriad.config b/conf/ucl_myriad.config index 3f9425c98..2efbb5c3d 100644 --- a/conf/ucl_myriad.config +++ b/conf/ucl_myriad.config @@ -1,34 +1,22 @@ params { - config_profile_description = 'University College London Myriad cluster' - config_profile_contact = 'Chris Wyatt (ucbtcdr@ucl.ac.uk)' - config_profile_url = 'https://www.rc.ucl.ac.uk/docs/Clusters/Myriad/' + config_profile_description = 'University College London Myriad cluster' + config_profile_contact = 'Chris Wyatt (ucbtcdr@ucl.ac.uk)' + config_profile_url = 'https://www.rc.ucl.ac.uk/docs/Clusters/Myriad/' } -process { - executor='sge' - penv = 'smp' -} - -params { - // Defaults only, expecting to be overwritten - max_memory = 128.GB - max_cpus = 36 - max_time = 72.h - // igenomes_base = 's3://ngi-igenomes/igenomes/' +executor { + name = 'sge' } -// optional executor settings +apptainer.runOptions = "-B ${HOME},${PWD}" -executor { +process { - queueSize = 10 - submitRateLimit = '1 sec' + //NEED TO SET PARALLEL ENVIRONMENT TO SMP SO MULTIPLE CPUS CAN BE SUBMITTED + penv = 'smp' + //PROVIDE EXTRA PARAMETERS AS CLUSTER OPTIONS + clusterOptions = "-S /bin/bash" } - -singularity { - enabled = true - autoMounts = true -} \ No newline at end of file diff --git a/conf/uct_hpc.config b/conf/uct_hpc.config index e7218ba20..0141ea8ca 100644 --- a/conf/uct_hpc.config +++ b/conf/uct_hpc.config @@ -1,41 +1,41 @@ /* - * 
------------------------------------------------- - * HPC cluster config file - * ------------------------------------------------- - * http://www.hpc.uct.ac.za/ - */ + * ------------------------------------------------- + * HPC cluster config file + * ------------------------------------------------- + * http://www.hpc.uct.ac.za/ + */ params { - config_profile_description = 'University of Cape Town High Performance Cluster config file provided by nf-core/configs.' - config_profile_contact = 'Katie Lennard (@kviljoen)' - config_profile_url = 'http://hpc.uct.ac.za/index.php/hpc-cluster/' + config_profile_description = 'University of Cape Town High Performance Cluster config file provided by nf-core/configs.' + config_profile_contact = 'Katie Lennard (@kviljoen)' + config_profile_url = 'http://hpc.uct.ac.za/index.php/hpc-cluster/' - singularity_cache_dir = "/bb/DB/bio/singularity-containers/" - igenomes_base = '/bb/DB/bio/rna-seq/references' - max_memory = 384.GB - max_cpus = 40 - max_time = 1000.h - hpc_queue = 'ada' - hpc_account = '--account cbio' - genome = 'GRCh37' + singularity_cache_dir = "/bb/DB/bio/singularity-containers/" + igenomes_base = '/bb/DB/bio/rna-seq/references' + max_memory = 384.GB + max_cpus = 40 + max_time = 1000.h + hpc_queue = 'ada' + hpc_account = '--account cbio' + genome = 'GRCh37' } singularity { - enabled = true - cacheDir = params.singularity_cache_dir - autoMounts = true + enabled = true + cacheDir = params.singularity_cache_dir + autoMounts = true } process { - executor = 'slurm' - queue = params.hpc_queue - // Increasing maxRetries, this will overwrite what we have in base.config - maxRetries = 4 - clusterOptions = params.hpc_account - stageInMode = 'symlink' - stageOutMode = 'rsync' + executor = 'slurm' + queue = params.hpc_queue + // Increasing maxRetries, this will overwrite what we have in base.config + maxRetries = 4 + clusterOptions = params.hpc_account + stageInMode = 'symlink' + stageOutMode = 'rsync' } executor { - queueSize = 15 + queueSize = 15 } diff --git a/conf/uge.config b/conf/uge.config index fd03df982..2b82da42a 100644 --- a/conf/uge.config +++ b/conf/uge.config @@ -28,7 +28,7 @@ process { // Error and retry handling errorStrategy = { task.exitStatus in [143,137,104,134,139,71,255] ? 'retry' : 'finish' } maxRetries = 3 - + // Executor and queue information executor = 'sge' penv = 'smp' diff --git a/conf/unc_longleaf.config b/conf/unc_longleaf.config index 557033d49..2ec50b18c 100644 --- a/conf/unc_longleaf.config +++ b/conf/unc_longleaf.config @@ -1,20 +1,21 @@ params { - config_profile_description = "BARC nf-core profile for UNC's Longleaf HPC." - config_profile_contact = "Austin Hepperla (hepperla@unc.edu)" - config_profile_url = "https://help.rc.unc.edu/longleaf-cluster/" + config_profile_description = "BARC nf-core profile for UNC's Longleaf HPC." 
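// Illustrative worked example, not part of the diff, for the SGE-based profiles
// added above (qmul_apocrita, ucl_cscluster): assuming a task that declares
// 4 cpus and 16.GB memory, task.memory.mega is 16384, so the clusterOptions
// closures expand to
//
//   qmul_apocrita:  -S /bin/bash -l h_vmem=4096M             // 16384 / 4 cpus
//   ucl_cscluster:  -S /bin/bash -l tmem=16384M,h_vmem=16384M
//
// The division by task.cpus in the Apocrita profile reflects h_vmem apparently
// being applied per slot of the smp parallel environment there, so the per-slot
// limits multiply back up to the task's total memory request.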
+ config_profile_contact = 'Austin Hepperla' + config_profile_contact_github = '@ahepperla' + config_profile_contact_email = 'hepperla@unc.edu' + config_profile_url = "https://help.rc.unc.edu/longleaf-cluster/" } singularity { - enabled = true - autoMounts = true - cacheDir = "/work/appscr/singularity/nf-core/singularity_images_cache" - registry = 'quay.io' + enabled = true + autoMounts = true + cacheDir = "/work/appscr/singularity/nf-core/singularity_images_cache" + registry = 'quay.io' } process { - executor = 'slurm' - queue = 'general' - clusterOptions = '--exclude=b1024' + executor = 'slurm' + queue = 'general' } executor { @@ -22,8 +23,8 @@ executor { } params { - max_memory = 3041.GB - max_cpus = 256 - max_time = 10.h + max_memory = 3041.GB + max_cpus = 256 + max_time = 10.h } diff --git a/conf/unibe_ibu.config b/conf/unibe_ibu.config index 6ebce5a5c..19ec4cc33 100644 --- a/conf/unibe_ibu.config +++ b/conf/unibe_ibu.config @@ -1,23 +1,23 @@ params { - config_profile_description = "University of Bern, Interfaculty Bioinformatics Unit cluster profile" - config_profile_contact = "irene.keller@dbmr.unibe.ch; info@bioinformatics.unibe.ch" - config_profile_url = "https://www.bioinformatics.unibe.ch/" - max_memory = 500.GB - max_cpus = 128 - max_time = 240.h + config_profile_description = "University of Bern, Interfaculty Bioinformatics Unit cluster profile" + config_profile_contact = "irene.keller@dbmr.unibe.ch; info@bioinformatics.unibe.ch" + config_profile_url = "https://www.bioinformatics.unibe.ch/" + max_memory = 500.GB + max_cpus = 128 + max_time = 240.h } process { - executor = "slurm" - maxRetries = 2 - beforeScript = 'mkdir -p ./tmp/ && export TMPDIR=./tmp/' + executor = "slurm" + maxRetries = 2 + beforeScript = 'mkdir -p ./tmp/ && export TMPDIR=./tmp/' } executor { - queueSize = 30 + queueSize = 30 } singularity { - enabled = true - autoMounts = true -} \ No newline at end of file + enabled = true + autoMounts = true +} diff --git a/conf/uod_hpc.config b/conf/uod_hpc.config index 9e7bf3413..65f59521c 100644 --- a/conf/uod_hpc.config +++ b/conf/uod_hpc.config @@ -1,23 +1,23 @@ params { - config_profile_description = 'University of Dundee Compute Cluster' - config_profile_contact = 'Dominic Sloan-Murphy (dsloanmurphy001@dundee.ac.uk)' - config_profile_url = 'https://uod-hpc.readthedocs.io/en/latest/software/nextflow/' + config_profile_description = 'University of Dundee Compute Cluster' + config_profile_contact = 'Dominic Sloan-Murphy (dsloanmurphy001@dundee.ac.uk)' + config_profile_url = 'https://uod-hpc.readthedocs.io/en/latest/software/nextflow/' } process { - executor = 'sge' - penv = 'smp' - queue = 'all.q' - clusterOptions = '-jc nextflow' + executor = 'sge' + penv = 'smp' + queue = 'all.q' + clusterOptions = '-jc nextflow' } executor { - max_memory = 128.GB - max_cpus = 24 - max_time = 72.h + max_memory = 128.GB + max_cpus = 24 + max_time = 72.h } singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } diff --git a/conf/uppmax.config b/conf/uppmax.config index 5f7f59c43..2ef68e8a5 100644 --- a/conf/uppmax.config +++ b/conf/uppmax.config @@ -1,25 +1,25 @@ // UPPMAX Config Profile params { - // Description is overwritten for other clusters below - config_profile_description = 'UPPMAX (Bianca) cluster profile provided by nf-core/configs.' 
- config_profile_contact = 'Phil Ewels (@ewels)' - config_profile_url = 'https://www.uppmax.uu.se/' - project = null - clusterOptions = null - schema_ignore_params = "genomes,input_paths,cluster-options,clusterOptions,project" - validationSchemaIgnoreParams = "genomes,input_paths,cluster-options,clusterOptions,project,schema_ignore_params" - save_reference = true - // Defaults set for Bianca - other clusters set below - max_memory = 500.GB - max_cpus = 16 - max_time = 240.h - // illumina iGenomes reference file paths on UPPMAX - igenomes_base = '/sw/data/igenomes/' + // Description is overwritten for other clusters below + config_profile_description = 'UPPMAX (Bianca) cluster profile provided by nf-core/configs.' + config_profile_contact = 'Phil Ewels (@ewels)' + config_profile_url = 'https://www.uppmax.uu.se/' + project = null + clusterOptions = null + schema_ignore_params = "genomes,input_paths,cluster-options,clusterOptions,project" + validationSchemaIgnoreParams = "genomes,input_paths,cluster-options,clusterOptions,project,schema_ignore_params" + save_reference = true + // Defaults set for Bianca - other clusters set below + max_memory = 500.GB + max_cpus = 16 + max_time = 240.h + // illumina iGenomes reference file paths on UPPMAX + igenomes_base = '/sw/data/igenomes/' } singularity { - enabled = true - envWhitelist = 'SNIC_TMP' + enabled = true + envWhitelist = 'SNIC_TMP' } def hostname = "r1" @@ -62,8 +62,8 @@ def clusterOptionsCreator = { m -> } if (m > 500.GB) { - // Special case for snowy very fat node (only remaining case that's above 500 GB) - return base + " -p veryfat " + // Special case for snowy very fat node (only remaining case that's above 500 GB) + return base + " -p veryfat " } // Should only be cases for mem512GB left (snowy and bianca) @@ -71,40 +71,40 @@ def clusterOptionsCreator = { m -> } process { - executor = 'slurm' - clusterOptions = { clusterOptionsCreator(task.memory) } - // Use node local storage for execution. - scratch = '$SNIC_TMP' + executor = 'slurm' + clusterOptions = { clusterOptionsCreator(task.memory) } + // Use node local storage for execution. + scratch = '$SNIC_TMP' } // Cluster: Snowy // Caution: Bianca nodes will be project name-nodenumber, e.g. sens2021500-001 // so cannot rely on just starting with 's' if (hostname.matches("^s[0-9][0-9]*")) { - params.max_time = 700.h - params.max_memory = 3880.GB - params.config_profile_description = 'UPPMAX (Snowy) cluster profile provided by nf-core/configs.' + params.max_time = 700.h + params.max_memory = 3880.GB + params.config_profile_description = 'UPPMAX (Snowy) cluster profile provided by nf-core/configs.' } // Cluster: Irma if (hostname.startsWith("i")) { - params.max_memory = 250.GB - params.config_profile_description = 'UPPMAX (Irma) cluster profile provided by nf-core/configs.' + params.max_memory = 250.GB + params.config_profile_description = 'UPPMAX (Irma) cluster profile provided by nf-core/configs.' } // Cluster: Miarka if (hostname.startsWith("m")) { - params.max_memory = 357.GB - params.max_cpus = 48 - params.max_time = 480.h - params.config_profile_description = 'UPPMAX (Miarka) cluster profile provided by nf-core/configs.' + params.max_memory = 357.GB + params.max_cpus = 48 + params.max_time = 480.h + params.config_profile_description = 'UPPMAX (Miarka) cluster profile provided by nf-core/configs.' 
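// Illustrative note, not part of the diff: because
//   clusterOptions = { clusterOptionsCreator(task.memory) }
// is a closure, it is re-evaluated for every task at submission time. A task
// declaring e.g. `memory 600.GB` (possible on Snowy, where max_memory is raised
// to 3880.GB) therefore hits the `m > 500.GB` branch shown above and gets
// " -p veryfat" appended to its base SLURM options, while the hostname-based
// if-blocks in this file only adjust the per-cluster max_* ceilings and the
// profile description.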
} // Cluster: Rackham if (hostname.startsWith("r")) { - params.max_cpus = 20 - params.max_memory = 970.GB - params.config_profile_description = 'UPPMAX (Rackham) cluster profile provided by nf-core/configs.' + params.max_cpus = 20 + params.max_memory = 970.GB + params.config_profile_description = 'UPPMAX (Rackham) cluster profile provided by nf-core/configs.' } // Cluster: Bianca - set in initial params block above @@ -112,14 +112,14 @@ if (hostname.startsWith("r")) { // Additional devel profile for running in devel queue // Run with `-profile upppmax,devel` profiles { - devel { - params { - config_profile_description = 'Testing & development profile for UPPMAX, provided by nf-core/configs.' - // Max resources to be requested by a devel job - max_memory = 120.GB - max_time = 1.h + devel { + params { + config_profile_description = 'Testing & development profile for UPPMAX, provided by nf-core/configs.' + // Max resources to be requested by a devel job + max_memory = 120.GB + max_time = 1.h + } + executor.queueSize = 1 + process.queue = 'devel' } - executor.queueSize = 1 - process.queue = 'devel' - } } diff --git a/conf/utd_europa.config b/conf/utd_europa.config new file mode 100644 index 000000000..8e8360131 --- /dev/null +++ b/conf/utd_europa.config @@ -0,0 +1,44 @@ +//Profile config names for nf-core/configs +params { + config_profile_description = 'University of Texas at Dallas HTC cluster profile provided by nf-core/configs' + config_profile_contact = 'Edmund Miller' + config_profile_contact_github = '@edmundmiller' + config_profile_contact_email = 'edmund.miller@utdallas.edu' + config_profile_url = 'https://docs.circ.utdallas.edu/user-guide/systems/europa.html' +} + +env { + TMPDIR = "/home/$USER/scratch/tmp" + APPTAINER_CACHEDIR="/home/$USER/scratch/apptainer" +} + +apptainer { + enabled = true + autoMounts = true + cacheDir = "/home/$USER/scratch/apptainer" +} + +// Submit up to 100 concurrent jobs +// pollInterval and queueStatInterval of every 5 minutes +// submitRateLimit of 20 per minute +executor { + queueSize = 100 + pollInterval = '5 min' + queueStatInterval = '5 min' + submitRateLimit = '20 min' + jobName = { "${task.process.split(':').last()}" } +} + +process { + beforeScript = 'module load apptainer' + executor = 'slurm' + queue = 'normal' + memory = 30.GB + cpus = 16 +} + +params { + max_memory = 30.GB + max_cpus = 16 + max_time = 48.h +} diff --git a/conf/utd_ganymede.config b/conf/utd_ganymede.config index c882a12b8..c11575f9d 100644 --- a/conf/utd_ganymede.config +++ b/conf/utd_ganymede.config @@ -1,8 +1,10 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'University of Texas at Dallas HPC cluster profile provided by nf-core/configs' - config_profile_contact = 'Edmund Miller(@emiller88)' - config_profile_url = 'http://docs.oithpc.utdallas.edu/' + config_profile_description = 'University of Texas at Dallas HPC cluster profile provided by nf-core/configs' + config_profile_contact = 'Edmund Miller' + config_profile_contact_github = '@edmundmiller' + config_profile_contact_email = 'edmund.miller@utdallas.edu' + config_profile_url = 'https://docs.circ.utdallas.edu/user-guide/systems/ganymede.html' } env { @@ -11,10 +13,10 @@ env { } singularity { - enabled = true - envWhitelist='SINGULARITY_BINDPATH,LD_LIBRARY_PATH' - autoMounts = true - cacheDir = "/home/$USER/scratch/singularity" + enabled = true + envWhitelist='SINGULARITY_BINDPATH,LD_LIBRARY_PATH' + autoMounts = true + cacheDir = "/home/$USER/scratch/singularity" } def membership = 
"groups".execute().text @@ -42,16 +44,22 @@ executor { pollInterval = '5 min' queueStatInterval = '5 min' submitRateLimit = '20 min' + jobName = { "${task.process.split(':').last()}" } } process { - beforeScript = 'module load singularity/3.2.1' - executor = 'slurm' - queue = { select_queue(task.memory, task.cpu) } + beforeScript = 'module load singularity/3.2.1' + executor = 'slurm' + queue = { select_queue(task.memory, task.cpu) } + + withLabel:process_medium { + cpus = { check_max( 16 * task.attempt, 'cpus' ) } + memory = { check_max( 30.GB * task.attempt, 'memory' ) } + } } params { - max_memory = 250.GB - max_cpus = 28 - max_time = 96.h + max_memory = 250.GB + max_cpus = 28 + max_time = 96.h } diff --git a/conf/utd_sysbio.config b/conf/utd_sysbio.config index 28460a83e..27f728f77 100644 --- a/conf/utd_sysbio.config +++ b/conf/utd_sysbio.config @@ -1,9 +1,9 @@ //Profile config names for nf-core/configs params { - config_profile_description = 'University of Texas at Dallas HPC cluster profile provided by nf-core/configs' - config_profile_contact = 'Edmund Miller(@emiller88)' - config_profile_url = 'http://docs.oithpc.utdallas.edu/' - singularity_cache_dir = '/scratch/applied-genomics/singularity' + config_profile_description = 'University of Texas at Dallas HPC cluster profile provided by nf-core/configs' + config_profile_contact = 'Edmund Miller(@edmundmiller)' + config_profile_url = 'http://docs.oithpc.utdallas.edu/' + singularity_cache_dir = '/scratch/applied-genomics/singularity' } env { @@ -11,25 +11,25 @@ env { } singularity { - enabled = true - envWhitelist='SINGULARITY_BINDPATH' - autoMounts = true - cacheDir = params.singularity_cache_dir + enabled = true + envWhitelist='SINGULARITY_BINDPATH' + autoMounts = true + cacheDir = params.singularity_cache_dir } process { - beforeScript = 'module load singularity/3.4.1' - executor = 'slurm' - queue = { task.memory >= 30.GB && task.cpu <= 16 ? 'normal': 'smallmem' } + beforeScript = 'module load singularity/3.4.1' + executor = 'slurm' + queue = { task.memory >= 30.GB && task.cpu <= 16 ? 'normal': 'smallmem' } } // Preform work directory cleanup after a successful run cleanup = true params { - // TODO Need to initialize this - // igenomes_base = '/scratch/applied-genomics/references/iGenomes/references/' - max_memory = 90.GB - max_cpus = 16 - max_time = 96.h -} \ No newline at end of file + // TODO Need to initialize this + // igenomes_base = '/scratch/applied-genomics/references/iGenomes/references/' + max_memory = 90.GB + max_cpus = 16 + max_time = 96.h +} diff --git a/conf/uw_hyak_pedslabs.config b/conf/uw_hyak_pedslabs.config index 59db7bf67..e5740216d 100644 --- a/conf/uw_hyak_pedslabs.config +++ b/conf/uw_hyak_pedslabs.config @@ -1,10 +1,10 @@ params { - config_profile_description = 'UW Hyak Pedslabs cluster profile provided by nf-core/configs.' - config_profile_contact = 'Carson J. Miller (@CarsonJM)' - config_profile_url = 'https://www.peds.uw.edu/' - max_memory = 742.GB - max_cpus = 40 - max_time = 72.h + config_profile_description = 'UW Hyak Pedslabs cluster profile provided by nf-core/configs.' + config_profile_contact = 'Carson J. 
Miller (@CarsonJM)' + config_profile_url = 'https://www.peds.uw.edu/' + max_memory = 742.GB + max_cpus = 40 + max_time = 72.h } process { @@ -21,11 +21,11 @@ executor { } singularity { - enabled = true - autoMounts = true - cacheDir = '/gscratch/scrubbed/pedslabs/.apptainer' + enabled = true + autoMounts = true + cacheDir = '/gscratch/scrubbed/pedslabs/.apptainer' } debug { - cleanup = false + cleanup = false } diff --git a/conf/uzl_omics.config b/conf/uzl_omics.config index 37abd33b3..ca345d419 100644 --- a/conf/uzl_omics.config +++ b/conf/uzl_omics.config @@ -5,9 +5,9 @@ params { } params { - max_memory = 760.GB + max_memory = 760.GB max_cpus = 48 - max_time = 72.h + max_time = 72.h } process { diff --git a/conf/vai.config b/conf/vai.config index 18fd32cd9..d303a0758 100644 --- a/conf/vai.config +++ b/conf/vai.config @@ -1,18 +1,18 @@ params { - config_profile_description = 'Van Andel Institute HPC profile provided by nf-core/configs.' - config_profile_contact = 'Nathan Spix (@njspix)' - config_profile_url = 'https://vanandelinstitute.sharepoint.com/sites/SC/SitePages/Nodes-and-Partitions.aspx' - max_memory = 250.GB - max_cpus = 40 - max_time = 336.h + config_profile_description = 'Van Andel Institute HPC profile provided by nf-core/configs.' + config_profile_contact = 'Nathan Spix (@njspix)' + config_profile_url = 'https://vanandelinstitute.sharepoint.com/sites/SC/SitePages/Nodes-and-Partitions.aspx' + max_memory = 250.GB + max_cpus = 40 + max_time = 336.h } singularity { - enabled = true - autoMounts = true + enabled = true + autoMounts = true } process { - executor = 'slurm' - queue = 'long' + executor = 'slurm' + queue = 'long' } diff --git a/conf/vsc_calcua.config b/conf/vsc_calcua.config new file mode 100644 index 000000000..bf11b6410 --- /dev/null +++ b/conf/vsc_calcua.config @@ -0,0 +1,335 @@ +// Define the scratch directory, which will be used for storing the nextflow +// work directory and for caching apptainer/singularity files. +// Default to /tmp directory if $VSC_SCRATCH scratch env is not available, +// see: https://github.com/nf-core/configs?tab=readme-ov-file#adding-a-new-config +def scratch_dir = System.getenv("VSC_SCRATCH") ?: "/tmp" + +// Specify the work directory. +workDir = "$scratch_dir/work" + +// Perform work directory cleanup when the run has succesfully completed. +cleanup = true + +def host = System.getenv("VSC_INSTITUTE") + +// Check if APPTAINER_TMPDIR/SINGULARITY_TMPDIR environment variables are set. +// If they are available, try to create the tmp directory at the specified location. +// Skip if host is not CalcUA to avoid hindering github actions. +if ( host == "antwerpen" ) { + def apptainer_tmpdir = System.getenv("APPTAINER_TMPDIR") ?: System.getenv("SINGULARITY_TMPDIR") ?: null + if (! apptainer_tmpdir ) { + def tmp_dir = System.getenv("TMPDIR") ?: "/tmp" + System.err.println("\nWARNING: APPTAINER_TMPDIR/SINGULARITY_TMPDIR environment variable was not found.\nPlease add the line 'export APPTAINER_TMPDIR=\"\${VSC_SCRATCH}/apptainer/tmp\"' to your ~/.bashrc file (or set it with sbatch or in your job script).\nDefaulting to local $tmp_dir on the execution node of the Nextflow head process.\n") + // TODO: check if images stored there can be accessed by slurm jobs on other nodes + } else { + apptainer_tmpdir = new File(apptainer_tmpdir) + if (! 
apptainer_tmpdir.exists() ) { + try { + dir_created = apptainer_tmpdir.mkdirs() + } catch (java.io.IOException e) { + System.err.println("\nWARNING: Could not create directory at the location specified by APPTAINER_TMPDIR/SINGULARITY_TMPDIR: $apptainer_tmpdir\nPlease check if this is a valid path to which you have write permission. Exiting...\n") + } + } + } +} + +// Function to check if the selected partition profile matches the partition on which the master +// nextflow job was launched (either implicitly or via `sbatch --partition=`). +// If the profile type is `*_local` and the partitions do not match, stop the execution and +// warn the user. +def partition_checker(String profile) { + // Skip check if host machine is not CalcUA, in order to not hinder github actions. + if ( host != "antwerpen" ) { + // System.err.println("\nWARNING: Skipping comparison of current partition and requested profile because the current machine is not VSC CalcUA.") + return + } + + def current_partition = System.getenv("SLURM_JOB_PARTITION") + + try { + current_partition + } catch (java.io.IOException e) { + System.err.println("\nWARNING: Current partition could not be found in the expected \$SLURM_JOB_PARTITION environment variable. Please make sure that you submit your pipeline via a Slurm job instead of running the nextflow command directly on a login node.\nExiting...\n") + } + + try { + current_partition = profile + } catch (java.io.IOException e) { + System.err.println("\nWARNING: Slurm job was launched on the \'$current_partition\' partition, but the selected nextflow profile points to the $profile partition instead ('${profile}_local'). When using one of the local node execution profiles, please launch the job on the corresponding partition in Slurm.\nE.g., Slurm job submission command:\n sbatch --account --partition=broadwell script.slurm\nand job script containing a nextflow command with matching profile section:\n nextflow run -profile vsc_calcua,broadwell_local\nExiting...\n") + } +} + +// Reduce the job submit rate to about 30 per minute, this way the server +// won't be bombarded with jobs. +// Limit queueSize to keep job rate under control and avoid timeouts. +// Set read timeout to the maximum wall time. +// See: https://www.nextflow.io/docs/latest/config.html#scope-executor +executor { + submitRateLimit = '30/1min' + queueSize = 10 + exitReadTimeout = 7.day +} + +// Add backoff strategy to catch cluster timeouts and proper symlinks of files in scratch +// to the work directory. +// See: https://www.nextflow.io/docs/latest/config.html#scope-process +process { + stageInMode = "symlink" + stageOutMode = "rsync" + errorStrategy = { sleep(Math.pow(2, task.attempt ?: 1) * 200 as long); return 'retry' } + maxRetries = 3 +} + +// Specify that apptainer/singularity should be used and where the cache dir will be for the images. +// The singularity directive is used in favour of the apptainer one, because currently the apptainer +// variant will pull in (and convert) docker images, instead of using pre-built singularity ones. +// To use the pre-built singularity containers instead, the singularity options should be selected +// with apptainer installed on the system, which defines singularity as an alias (as is the case +// on CalcUA). 
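// Illustrative note, not part of the config: on a CalcUA node where VSC_SCRATCH
// is set (e.g. a hypothetical /scratch/antwerpen/200/vsc20000), the definitions
// at the top of this file resolve to
//   workDir             = /scratch/antwerpen/200/vsc20000/work
//   singularity.cacheDir = /scratch/antwerpen/200/vsc20000/apptainer/nextflow_cache
// whereas on a machine without VSC_SCRATCH both fall back under /tmp.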
+// See https://nf-co.re/docs/usage/installation#pipeline-software +// and https://nf-co.re/tools#how-the-singularity-image-downloads-work +// See https://www.nextflow.io/docs/latest/config.html#scope-singularity +singularity { + enabled = true + autoMounts = true + // See https://www.nextflow.io/docs/latest/singularity.html#singularity-docker-hub + cacheDir = "$scratch_dir/apptainer/nextflow_cache" // Equivalent to setting NXF_APPTAINER_CACHEDIR/NXF_SINGULARITY_CACHEDIR environment variable +} + +// Define profiles for the following partitions: +// - zen2, zen3, zen3_512 (Vaughan) +// - broadwell, broadwell_256 (Leibniz) +// - skylake (Breniac, formerly Hopper) +// For each partition, there is a "*_slurm" profile and a "*_local" profile. +// The former uses the slurm executor to submit each nextflow task as a separate job, +// whereas the latter runs all tasks on the individual node on which the nextflow +// master process was launched. +// See: https://www.nextflow.io/docs/latest/config.html#config-profiles +profiles { + // Automatic slurm partition selection based on task requirements + slurm { + params { + config_profile_description = 'Slurm profile with automatic partition selection for use on the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware.html' + max_memory = 496.GB // = max memory of high memory nodes + max_cpus = 64 // = cpu count of largest nodes + max_time = 7.day // wall time of longest running nodes + } + process { + executor = 'slurm' + queue = { + // long running + if ( task.time > 3.day ) { + 'skylake' + // high memory + } else if ( task.memory > 240.GB ) { + 'zen3_512' + // medium memory and high cpu + } else if ( task.memory > 112.GB && task.cpus > 28 ) { + 'zen2,zen3' + // medium memory and lower cpu + } else if ( task.memory > 112.GB && task.cpus < 28 ) { + 'broadwell_256,zen2,zen3' + // lower memory and high cpu + } else if ( task.cpus > 28 ) { + 'zen2,zen3' + // lower memory and lower cpu + } else { + 'broadwell,skylake,zen2,zen3' + } + } + } + } + // Vaughan partitions + zen2_slurm { + params { + config_profile_description = 'Zen2 Slurm profile for use on the Vaughan cluster of the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware/vaughan_hardware.html' + max_memory = 240.GB // 256 GB (total) - 16 GB (buffer) + max_cpus = 64 + max_time = 3.day + } + process { + executor = 'slurm' + queue = 'zen2' + } + } + zen2_local { + params { + config_profile_description = 'Zen2 local profile for use on a single node of the Vaughan cluster of the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware/vaughan_hardware.html' + max_memory = get_allocated_mem(240) // 256 GB (total) - 16 GB (buffer) + max_cpus = get_allocated_cpus(64) + max_time = 3.day + } + process { + executor = 'local' + } + partition_checker("zen2") + } + zen3_slurm { + params { + config_profile_description = 'Zen3 Slurm profile for use on the Vaughan cluster of the CalcUA VSC HPC.' 
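// Illustrative walk-through of the automatic `slurm` profile above, not part of
// the config: the queue closure is evaluated per task, so for example
//   time = 4.day                   -> 'skylake'                    (long-running)
//   memory = 300.GB                -> 'zen3_512'                   (high memory)
//   memory = 150.GB, cpus = 32     -> 'zen2,zen3'
//   memory = 8.GB,   cpus = 4      -> 'broadwell,skylake,zen2,zen3'
// so pipelines do not need to pick a CalcUA partition explicitly when using it.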
+ config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware/vaughan_hardware.html' + max_memory = 240.GB // 256 GB (total) - 16 GB (buffer) + max_cpus = 64 + max_time = 3.day + } + process { + executor = 'slurm' + queue = 'zen3' + } + } + zen3_local { + params { + config_profile_description = 'Zen3 local profile for use on a single node of the Vaughan cluster of the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware/vaughan_hardware.html' + max_memory = get_allocated_mem(240) // 256 GB (total) - 16 GB (buffer) + max_cpus = get_allocated_cpus(64) + max_time = 3.day + } + process { + executor = 'local' + } + partition_checker("zen3") + } + zen3_512_slurm { + params { + config_profile_description = 'Zen3_512 Slurm profile for use on the Vaughan cluster of the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware/vaughan_hardware.html' + max_memory = 496.GB // 512 GB (total) - 16 GB (buffer) + max_cpus = 64 + max_time = 3.day + } + process { + executor = 'slurm' + queue = 'zen3_512' + } + } + zen3_512_local { + params { + config_profile_description = 'Zen3_512 local profile for use on a single node of the Vaughan cluster of the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware/vaughan_hardware.html' + max_memory = get_allocated_mem(496) // 512 GB (total) - 16 GB (buffer) + max_cpus = get_allocated_cpus(64) + max_time = 3.day + } + process { + executor = 'local' + } + partition_checker("zen3_512") + } + // Leibniz partitions + broadwell_slurm { + params { + config_profile_description = 'Broadwell Slurm profile for use on the Leibniz cluster of the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware/leibniz_hardware.html' + max_memory = 112.GB // 128 GB (total) - 16 GB (buffer) + max_cpus = 28 + max_time = 3.day + } + process { + executor = 'slurm' + queue = 'broadwell' + } + } + broadwell_local { + params { + config_profile_description = 'Broadwell local profile for use on a single node of the Leibniz cluster of the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware/leibniz_hardware.html' + max_memory = get_allocated_mem(112) // 128 GB (total) - 16 GB (buffer) + max_cpus = get_allocated_cpus(28) + max_time = 3.day + } + process { + executor = 'local' + } + partition_checker("broadwell") + } + broadwell_256_slurm { + params { + config_profile_description = 'Broadwell_256 Slurm profile for use on the Leibniz cluster of the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware/leibniz_hardware.html' + max_memory = 240.GB // 256 (total) - 16 GB (buffer) + max_cpus = 28 + max_time = 3.day + } + process { + executor = 'slurm' + queue = 'broadwell_256' + } + } + broadwell_256_local { + params { + config_profile_description = 'Broadwell_256 local profile for use on a single node of the Leibniz cluster of the CalcUA VSC HPC.' 
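// Illustrative worked example for the *_local profiles, not part of the config,
// using the get_allocated_cpus()/get_allocated_mem() helpers defined at the end
// of this file: inside a job submitted with e.g.
//   sbatch --partition=zen2 --cpus-per-task=32 --mem-per-cpu=4000 <job_script>
// SLURM_CPUS_PER_TASK=32 and SLURM_MEM_PER_CPU=4000, so max_cpus resolves to 32
// and max_memory to 128 GB (the helper returns the string "128.GB", i.e.
// 4000 / 1000 * 32); outside such an allocation the defaults passed in
// (e.g. 64 cpus / 240 GB for zen2_local) are used instead.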
+ config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://docs.vscentrum.be/antwerp/tier2_hardware/leibniz_hardware.html' + max_memory = get_allocated_mem(240) // 256 (total) - 16 GB (buffer) + max_cpus = get_allocated_cpus(28) + max_time = 3.day + } + process { + executor = 'local' + } + partition_checker("broadwell_256") + } + // Breniac (previously Hopper) partitions + skylake_slurm { + params { + config_profile_description = 'Skylake Slurm profile for use on the Breniac (former Hopper) cluster of the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://www.uantwerpen.be/en/research-facilities/calcua/infrastructure/' + max_memory = 176.GB // 192 GB (total) - 16 GB (buffer) + max_cpus = 28 + max_time = 7.day + } + process { + executor = 'slurm' + queue = 'skylake' + } + } + skylake_local { + params { + config_profile_description = 'Skylake local profile for use on a single node of the Breniac (former Hopper) cluster of the CalcUA VSC HPC.' + config_profile_contact = 'pmoris@itg.be (GitHub: @pmoris)' + config_profile_url = 'https://www.uantwerpen.be/en/research-facilities/calcua/infrastructure/' + max_memory = get_allocated_mem(176) // 192 GB (total) - 16 GB (buffer) + max_cpus = get_allocated_cpus(28) + max_time = 7.day + } + process { + executor = 'local' + } + partition_checker("skylake") + } +} + +// Define functions to fetch the available CPUs and memory of the current execution node. +// Only used when running one of the *_local partition profiles and allows the cpu +// and memory thresholds to be set dynamic based on the available hardware as reported +// by Slurm. Can be supplied with a default return value, which should be set to the +// recommended thresholds for the particular partition's node types. +def get_allocated_cpus(int node_max_cpu) { + max_cpus = System.getenv("SLURM_CPUS_PER_TASK") ?: System.getenv("SLURM_JOB_CPUS_PER_NODE") ?: node_max_cpu + return max_cpus.toInteger() +} +def get_allocated_mem(int node_max_mem) { + def mem_per_cpu = System.getenv("SLURM_MEM_PER_CPU") + def cpus_per_task = System.getenv("SLURM_CPUS_PER_TASK") ?: System.getenv("SLURM_JOB_CPUS_PER_NODE") + + if ( mem_per_cpu && cpus_per_task ) { + node_max_mem = mem_per_cpu.toInteger() / 1000 * cpus_per_task.toInteger() + } + + return "${node_max_mem}.GB" +} diff --git a/conf/vsc_ugent.config b/conf/vsc_ugent.config index 4aca5fb96..88043aaf6 100644 --- a/conf/vsc_ugent.config +++ b/conf/vsc_ugent.config @@ -1,9 +1,3 @@ -// Set up the Tier 1 parameter -params.validationSchemaIgnoreParams = params.validationSchemaIgnoreParams.toString() + ",tier1_project" -if (!params.tier1_project) { - params.tier1_project = null -} - // Get the hostname and check some values for tier1 def hostname = "doduo" try { @@ -12,13 +6,16 @@ try { System.err.println("WARNING: Could not run sinfo to determine current cluster, defaulting to doduo") } -if(!params.tier1_project && hostname.contains("dodrio")){ - System.err.println("Please specify your project with --tier1_project in your Nextflow command or with params.tier1_project in your config file.") +def tier1_project = System.getenv("SBATCH_ACCOUNT") ?: System.getenv("SLURM_ACCOUNT") + +if (! 
tier1_project && hostname.contains("dodrio")) { + // Hard-code that Tier 1 cluster dodrio requires a project account + System.err.println("Please specify your VSC project account with environment variable SBATCH_ACCOUNT or SLURM_ACCOUNT.") System.exit(1) } // Define the Scratch directory -def scratch_dir = System.getenv("VSC_SCRATCH_PROJECTS_BASE") ? "${System.getenv("VSC_SCRATCH_PROJECTS_BASE")}/${params.tier1_project}" : // Tier 1 scratch +def scratch_dir = System.getenv("VSC_SCRATCH_PROJECTS_BASE") ? "${System.getenv("VSC_SCRATCH_PROJECTS_BASE")}/$tier1_project" : // Tier 1 scratch System.getenv("VSC_SCRATCH_VO_USER") ?: // VO scratch System.getenv("VSC_SCRATCH") // user scratch @@ -43,9 +40,30 @@ process { stageOutMode = "rsync" errorStrategy = { sleep(Math.pow(2, task.attempt ?: 1) * 200 as long); return 'retry' } maxRetries = 5 + // add GPU support with GPU label + // Adapted from https://github.com/nf-core/configs/blob/76970da5d4d7eadd8354ef5c5af2758ce187d6bc/conf/leicester.config#L26 + // More info on GPU SLURM options: https://hpc.vub.be/docs/job-submission/gpu-job-types/#gpu-job-types + withLabel: use_gpu { + // works on all GPU clusters of Tier 1 and Tier 2 + beforeScript = 'module load cuDNN/8.4.1.50-CUDA-11.7.0' + // TODO: Support multi-GPU configuations with e.g. ${task.ext.gpus} + // only add account if present + clusterOptions = {"--gpus=1" + (tier1_project ? " --account=$tier1_project" : "")} + containerOptions = { + // Ensure that the container has access to the GPU + workflow.containerEngine == "singularity" ? '--nv': + ( workflow.containerEngine == "docker" ? '--gpus all': null ) + } + } } // Specify that singularity should be used and where the cache dir will be for the images +// containerOptions --containall or --no-home can break e.g. downloading big models to ~/.cache +// solutions to error 'no disk space left': +// 1. remove --no-home using NXF_APPTAINER_HOME_MOUNT=true +// 2. increase the memory of the job. +// 3. change the script so the tool does not use the home folder. +// 4. increasing the Singularity memory limit using --memory. 
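// Illustrative sketch, not part of this config: a (hypothetical) pipeline
// process opts into the GPU settings above simply by declaring the matching
// label, e.g.
//
//   process BASECALL {
//       label 'use_gpu'
//       ...
//   }
//
// which loads the cuDNN/CUDA module via beforeScript, adds "--gpus=1" (plus the
// project account, when set) to the Slurm request, and passes '--nv' or
// '--gpus all' to the container engine depending on whether singularity or
// docker is in use.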
singularity { enabled = true autoMounts = true @@ -53,8 +71,8 @@ singularity { } env { - SINGULARITY_TMPDIR="$scratch_dir/.singularity" - APPTAINER_TMPDIR="$scratch_dir/.apptainer" + APPTAINER_TMPDIR="$scratch_dir/.apptainer/tmp" + APPTAINER_CACHEDIR="$scratch_dir/.apptainer/cache" } // AWS maximum retries for errors (This way the pipeline doesn't fail if the download fails one time) @@ -157,7 +175,7 @@ profiles { process { executor = 'slurm' queue = 'dodrio/cpu_rome' - clusterOptions = "-A ${params.tier1_project}" + clusterOptions = "-A ${tier1_project}" } } @@ -174,7 +192,7 @@ profiles { process { executor = 'slurm' queue = 'dodrio/cpu_rome_512' - clusterOptions = "-A ${params.tier1_project}" + clusterOptions = "-A ${tier1_project}" } } @@ -191,7 +209,7 @@ profiles { process { executor = 'slurm' queue = 'dodrio/cpu_milan' - clusterOptions = "-A ${params.tier1_project}" + clusterOptions = "-A ${tier1_project}" } } @@ -208,7 +226,7 @@ profiles { process { executor = 'slurm' queue = 'dodrio/gpu_rome_a100_40' - clusterOptions = "-A ${params.tier1_project}" + clusterOptions = "-A ${tier1_project}" } } @@ -225,7 +243,7 @@ profiles { process { executor = 'slurm' queue = 'dodrio/gpu_rome_a100_80' - clusterOptions = "-A ${params.tier1_project}" + clusterOptions = "-A ${tier1_project}" } } @@ -242,7 +260,7 @@ profiles { process { executor = 'slurm' queue = 'dodrio/debug_rome' - clusterOptions = "-A ${params.tier1_project}" + clusterOptions = "-A ${tier1_project}" } } @@ -259,7 +277,7 @@ profiles { process { executor = 'slurm' queue = 'dodrio/cpu_rome_all' - clusterOptions = "-A ${params.tier1_project}" + clusterOptions = "-A ${tier1_project}" } } @@ -276,7 +294,7 @@ profiles { process { executor = 'slurm' queue = 'dodrio/gpu_rome_a100' - clusterOptions = "-A ${params.tier1_project}" + clusterOptions = "-A ${tier1_project}" } } diff --git a/conf/wcm.config b/conf/wcm.config index 38cd3d1e2..c82d76c49 100644 --- a/conf/wcm.config +++ b/conf/wcm.config @@ -1,28 +1,28 @@ singularityDir = "/athena/elementolab/scratch/reference/.singularity/singularity_images_nextflow" params { - config_profile_description = 'Weill Cornell Medicine, Scientific Computing Unit Slurm cluster profile provided by nf-core/configs' - config_profile_contact = 'Ashley Stephen Doane, PhD (@DoaneAS)' - igenomes_base = '/athena/elementolab/scratch/reference/igenomes' + config_profile_description = 'Weill Cornell Medicine, Scientific Computing Unit Slurm cluster profile provided by nf-core/configs' + config_profile_contact = 'Ashley Stephen Doane, PhD (@DoaneAS)' + igenomes_base = '/athena/elementolab/scratch/reference/igenomes' } singularity { - enabled = true - envWhitelist='SINGULARITY_BINDPATH' - cacheDir = "/athena/elementolab/scratch/reference/.singularity/singularity_images_nextflow" - autoMounts = true + enabled = true + envWhitelist='SINGULARITY_BINDPATH' + cacheDir = "/athena/elementolab/scratch/reference/.singularity/singularity_images_nextflow" + autoMounts = true } process { - executor = 'slurm' - queue = 'panda_physbio' - scratch = true - scratch = '/scratchLocal/`whoami`_${SLURM_JOBID}' + executor = 'slurm' + queue = 'panda_physbio' + scratch = true + scratch = '/scratchLocal/`whoami`_${SLURM_JOBID}' } params { - max_memory = 32.GB - max_cpus = 8 - max_time = 24.h + max_memory = 32.GB + max_cpus = 8 + max_time = 24.h } diff --git a/conf/wustl_htcf.config b/conf/wustl_htcf.config index 6a1f505a7..ca9cf9f40 100644 --- a/conf/wustl_htcf.config +++ b/conf/wustl_htcf.config @@ -1,43 +1,41 @@ // Forked from 
https://github.com/nf-core/configs/blob/master/conf/prince.config -def singularityDir = set_singularity_path() +def labEnvVar = System.getenv("LAB") + +if (labEnvVar) { + System.out.println("Lab: " + labEnvVar) + singularityDir = "/ref/$LAB/data/singularity_images_nextflow" // If $LAB is set, use that +} else { + def id = "id -nG".execute().text + def labAutodetect = id.split(" ").last() + System.out.println("Lab: " + labAutodetect) + singularityDir = "/ref/" + labAutodetect + "/data/singularity_images_nextflow" +} params { - config_profile_description = """ - WUSTL High Throughput Computing Facility cluster profile provided by nf-core/configs. - Run from your scratch directory, the output files may be large! - Please consider running the pipeline on a compute node the first time, as it will be pulling the docker image, which will be converted into a singularity image, which is heavy on the login node. Subsequent runs can be done on the login node, as the docker image will only be pulled and converted once. By default, the images will be stored in $singularityDir - """.stripIndent() - config_profile_contact = "Gavin John " - config_profile_url = "https://github.com/nf-core/configs/blob/master/docs/wustl_htcf.md" + config_profile_description = """ + WUSTL High Throughput Computing Facility cluster profile provided by nf-core/configs. + Run from your scratch directory, the output files may be large! + Please consider running the pipeline on a compute node the first time, as it will be pulling the docker image, which will be converted into a singularity image, which is heavy on the login node. Subsequent runs can be done on the login node, as the docker image will only be pulled and converted once. By default, the images will be stored in $singularityDir + """.stripIndent() + config_profile_contact = "Gavin John " + config_profile_url = "https://github.com/nf-core/configs/blob/master/docs/wustl_htcf.md" - max_cpus = 24 - max_memory = 750.GB - max_time = 168.h + max_cpus = 24 + max_memory = 750.GB + max_time = 168.h } spack { - enabled = true + enabled = true } singularity { - enabled = true - cacheDir = singularityDir + enabled = true + cacheDir = singularityDir } process { - beforeScript = "exec \$( spack load --sh singularity )" - executor = "slurm" -} - -def set_singularity_path() { - def labEnvVar = System.getenv("LAB") - if (labEnvVar) { - System.out.println("Lab: " + labEnvVar) - return "/ref/$LAB/data/singularity_images_nextflow" // If $LAB is set, use that - } - def id = "id -nG".execute().text - def labAutodetect = id.split(" ").last() - System.out.println("Lab: " + labAutodetect) - return "/ref/" + labAutodetect + "/data/singularity_images_nextflow" + beforeScript = "exec \$( spack load --sh singularity )" + executor = "slurm" } diff --git a/conf/xanadu.config b/conf/xanadu.config index 3af56f8a5..fc03655db 100644 --- a/conf/xanadu.config +++ b/conf/xanadu.config @@ -1,38 +1,38 @@ -params { - config_profile_description = 'The UConn HPC profile' - config_profile_contact = 'noah.reid@uconn.edu' - config_profile_url = 'https://bioinformatics.uconn.edu/' - - // max resources - max_memory = 2.TB - max_cpus = 64 - max_time = 21.d - - // Path to shared singularity images - singularity_cache_dir = '/isg/shared/databases/nfx_singularity_cache' - -} - -process { - executor = 'slurm' - queue = { task.memory <= 245.GB ? 'general' : ( task.memory <= 512.GB ? 'himem' : 'himem2' ) } - - clusterOptions = { [ - task.memory <= 245.GB ? 
'--qos=general' : '--qos=himem', - // provide hardware constraints for particular processes - //"${task.process.tokenize(':')[-1]}" ==~ /[BWAbwa]{3}[-_][MEme]{3}2.*/ ? '--constraint="AVX|AVX2|AVX512|SSE41|SSE42"' : '' - ].join(' ').trim() } -} - -executor { - name = 'slurm' - submitRateLimit = '2 sec' - queueSize = 100 -} - -singularity { - enabled = true - cacheDir = params.singularity_cache_dir - autoMounts = true - conda.enabled = false -} \ No newline at end of file +params { + config_profile_description = 'The UConn HPC profile' + config_profile_contact = 'noah.reid@uconn.edu' + config_profile_url = 'https://bioinformatics.uconn.edu/' + + // max resources + max_memory = 2.TB + max_cpus = 64 + max_time = 21.d + + // Path to shared singularity images + singularity_cache_dir = '/isg/shared/databases/nfx_singularity_cache' + +} + +process { + executor = 'slurm' + queue = { task.memory <= 245.GB ? 'general' : ( task.memory <= 512.GB ? 'himem' : 'himem2' ) } + + clusterOptions = { [ + task.memory <= 245.GB ? '--qos=general' : '--qos=himem', + // provide hardware constraints for particular processes + //"${task.process.tokenize(':')[-1]}" ==~ /[BWAbwa]{3}[-_][MEme]{3}2.*/ ? '--constraint="AVX|AVX2|AVX512|SSE41|SSE42"' : '' + ].join(' ').trim() } +} + +executor { + name = 'slurm' + submitRateLimit = '2 sec' + queueSize = 100 +} + +singularity { + enabled = true + cacheDir = params.singularity_cache_dir + autoMounts = true + conda.enabled = false +} diff --git a/conf/york_viking.config b/conf/york_viking.config new file mode 100644 index 000000000..936ece225 --- /dev/null +++ b/conf/york_viking.config @@ -0,0 +1,48 @@ +/* +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Nextflow config file for York Viking Cluster for the SLURM login nodes +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Author: Matthew Care +Mail: matthew.care@york.ac.uk +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +*/ + +params { + config_profile_contact = "Matthew Care" + config_profile_description = "The University of York Viking profile " + config_profile_url = "https://vikingdocs.york.ac.uk/" +} + +apptainer { + enabled = true + autoMounts = true + // the default is 20 minutes and fails with large images + pullTimeout = "3 hours" +} + +process{ + maxRetries = 3 + clusterOptions = "--get-user-env --account=${System.getenv('USER_ACCOUNT')}" // Get user environment and assign account + cache = "lenient" + afterScript = "sleep 60" + stageInMode = "symlink" + stageOutMode = "rsync" + scratch = 'false' + afterScript = "sleep 60" +} + +executor { + name = "slurm" + queueSize = 200 + submitRateLimit = "10/1sec" + exitReadTimeout = "30 min" + jobName = { + task.name // [] and " " not allowed in job names + .replace("[", + "(") + .replace("]", + ")") + .replace(" ", + "_") + } +} diff --git a/configtest.nf b/configtest.nf index 39badecf9..35f98f716 100644 --- a/configtest.nf +++ b/configtest.nf @@ -7,4 +7,4 @@ print("$separator\n") params.each { assert it print("\t$it\n") -} \ No newline at end of file +} diff --git a/docs/arcc.md b/docs/arcc.md new file mode 100644 index 000000000..15213a25c --- /dev/null +++ b/docs/arcc.md @@ -0,0 +1,151 @@ +# nf-core/configs: ARCC Configuration + +The [Advanced Research Computing Center (ARCC)](http://www.uwyo.edu/arcc/) for the University +of Wyoming (UW) has been set up to allow its users to utilize Nextflow with Singularity. 
+
+## Getting Started
+
+First, you will need an account on ARCC. If you already have an account, skip ahead; otherwise,
+please continue. To get an account, you will need to be a Principal Investigator (PI) or student
+at UW, or be sponsored by a UW PI. To learn more, please visit [ARCC - HPC Account Requests](https://arccwiki.atlassian.net/wiki/spaces/DOCUMENTAT/pages/1913684148/Accounts+Access+and+Security).
+
+With an account in hand, you are ready to proceed.
+
+## Running Nextflow
+
+Please consider making use of [screen or tmux](https://arccwiki.atlassian.net/wiki/spaces/DOCUMENTAT/pages/1617494076/Screen+and+Tmux+Commands)
+before launching your Interactive Job. This will allow you to resume it later.
+
+When using Nextflow on ARCC, it is recommended that you launch Nextflow as an Interactive Job on one of the
+compute nodes, instead of the login nodes. To do this, you will use the `salloc` command to launch an
+[Interactive Job](https://arccwiki.atlassian.net/wiki/spaces/DOCUMENTAT/pages/1599078403/Start+Processing#Interactive-Jobs).
+
+Once you are on a compute node, you can then use the `module` command to load Conda and/or
+Singularity.
+
+### Creating a Nextflow environment
+
+As an ARCC user, you may have noticed there is already a module for Nextflow. However, it
+may be out of date or limited to a single version. All nf-core pipelines have minimum Nextflow
+version requirements, so it's easier to create a Nextflow environment, as this ensures you
+have the latest available Nextflow version.
+
+```{bash}
+module load miniforge
+conda create -n nextflow -c conda-forge -c bioconda nextflow
+```
+
+### Environment Variables
+
+When using Nextflow on ARCC, you will need to set a few environment variables.
+
+#### `NXF_SINGULARITY_CACHEDIR`
+
+This is a Nextflow-specific environment variable that lets Nextflow know where existing
+Singularity images are stored and where newly downloaded images should be saved.
+
+```{bash}
+export NXF_SINGULARITY_CACHEDIR="/path/to/your/singularity/image/cache"
+
+# Example for 'healthdatasci'
+export NXF_SINGULARITY_CACHEDIR="/project/healthdatasci/singularity"
+```
+
+#### `SBATCH_ACCOUNT`
+
+The `SBATCH_ACCOUNT` environment variable will be used by Nextflow to inform SLURM which
+account the job should be submitted under.
+
+```{bash}
+export SBATCH_ACCOUNT=
+
+# Example for 'healthdatasci'
+export SBATCH_ACCOUNT=healthdatasci
+```
+
+### Available Partitions
+
+At the moment, only the CPU-based partitions are available from this config. In the event
+a GPU partition is needed, please reach out. The GPU partitions require additional arguments
+that will need to be added.
+
+The available partitions include:
+
+- `beartooth`
+- `beartooth-bigmem`
+- `beartooth-hugemem`
+- `moran`
+- `moran-bigmem`
+- `moran-hugemem`
+- `teton`
+- `teton-cascade`
+- `teton-hugemem`
+- `teton-massmem`
+- `teton-knl`
+
+Please see [Beartooth Hardware Summary Table](https://arccwiki.atlassian.net/wiki/spaces/DOCUMENTAT/pages/1721139201/Beartooth+Hardware+Summary+Table)
+for the full list of partitions.
+
+#### Specifying a Partition
+
+Each partition is provided as a separate Nextflow profile, so you will need to pick a
+specific partition to submit jobs to. To get the profile name, take the partition name and replace
+the `-` (dash) with an underscore.
+
+For example, to use `beartooth`, you would provide the following:
+
+```{bash}
+-profile arcc,beartooth
+```
+
+To use `beartooth-bigmem`, you would provide:
+
+```{bash}
+-profile arcc,beartooth_bigmem
+```
+
+## Example: Running nf-core/fetchngs
+
+```{bash}
+# Start a screen
+screen -S test-fetchngs
+
+# Start an interactive job
+salloc --account=healthdatasci --time=12:00:00 --mem=32G
+
+# Load modules
+module load singularity
+module load miniforge
+conda activate nextflow
+
+# Export NXF_SINGULARITY_CACHEDIR (consider adding to your .bashrc)
+export NXF_SINGULARITY_CACHEDIR=/gscratch/rpetit/singularity
+
+# Export SBATCH_ACCOUNT to specify which account to use
+export SBATCH_ACCOUNT="healthdatasci"
+
+# Run the fetchngs test profile with Singularity
+nextflow run nf-core/fetchngs \
+    -profile test,arcc \
+    --outdir test-fetchngs
+```
+
+If everything is successful, you will be met with:
+
+```{bash}
+-[nf-core/fetchngs] Pipeline completed successfully-
+WARN: =============================================================================
+  Please double-check the samplesheet that has been auto-created by the pipeline.
+
+  Public databases don't reliably hold information such as strandedness
+  information, controls etc
+
+  All of the sample metadata obtained from the ENA has been appended
+  as additional columns to help you manually curate the samplesheet before
+  running nf-core/other pipelines.
+===================================================================================
+Completed at: 30-Jan-2024 14:36:24
+Duration : 1m 28s
+CPU hours : (a few seconds)
+Succeeded : 18
+```
diff --git a/docs/cambridge.md b/docs/cambridge.md
index 55935e37b..2c26cb044 100644
--- a/docs/cambridge.md
+++ b/docs/cambridge.md
@@ -1,6 +1,6 @@
 # nf-core/configs: Cambridge HPC Configuration
 
-All nf-core pipelines have been successfully configured for use on the Cambridge HPC cluster at the [The University of Cambridge](https://www.cam.ac.uk/). 
+All nf-core pipelines have been successfully configured for use on the Cambridge HPC cluster at [The University of Cambridge](https://www.cam.ac.uk/).
 
 To use, run the pipeline with `-profile cambridge`. This will download and launch the [`cambridge.config`](../conf/cambridge.config) which has been pre-configured with a setup suitable for the Cambridge HPC cluster. Using this profile, either a docker image containing all of the required software will be downloaded, and converted to a Singularity image or a Singularity image downloaded directly before execution of the pipeline.
diff --git a/docs/crg.md b/docs/crg.md
index 4c8113b2b..fe41d3951 100644
--- a/docs/crg.md
+++ b/docs/crg.md
@@ -2,16 +2,66 @@
 
 All nf-core pipelines have been successfully configured for use on the CRG HPC cluster at the [Centre for Genomic Regulation](https://www.crg.eu/).
 
-To use, run the pipeline with `-profile crg`. This will download and launch the [`crg.config`](../conf/crg.config) which has been pre-configured with a setup suitable for the CRG HPC cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.
+## Using the CRG config profile
 
-Before running the pipeline you will need to download Nextflow and load Singularity using the environment module system on CRG cluster. Please check the main README of the pipeline to make sure that the version of Nextflow is compatible with that required to run the pipeline.
You can do this by issuing the commands below: +In order to avoid overloading the CRG login nodes, a specific VM to run Nextflow pipelines is provided. This VM can submit jobs to the HPC scheduler, thus using the computing nodes in the cluster. You just need to connect to it via SSH (replacing with your username): ```bash -## Download Nextflow and load Singularity environment modules +## Log in to the node +ssh @nextflow.linux.crg.es +``` + +Before running the pipeline you will need to download Nextflow and load Singularity using the environment module system on CRG cluster. Please check the main README of the pipeline to make sure that the version of Nextflow is compatible with that required to run the pipeline. At the time of writing, the VM has an old version of Java installed. Thus, you need to make sure you load the Java 11 module for running Nextflow. You can do all this by issuing the commands below: + +```bash +## Download Nextflow wget -qO- https://get.nextflow.io | bash +``` + +For your convenience, you can move the `nextflow` launcher to a directory included in your `PATH` environment variable. + +```bash +## Load Singularity environment modules module use /software/as/el7.2/EasyBuild/CRG/modules/all module load Singularity/3.7.0 +module load Java/11.0.2 ``` +Adding the previous lines to your `.bash_profile` or `.bashrc` file is an option to avoid having to load the modules each time you start a session. + +To use, run the pipeline with `-profile crg`. This will download and launch the [`crg.config`](../conf/crg.config) which has been pre-configured with a setup suitable for the CRG HPC cluster, with the queues definitions. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline. + +```bash +# Launch a nf-core pipeline with the crg profile +$ nextflow run nf-core/ -profile crg [...] +``` + +Remember to use `-bg` to launch `Nextflow` in the background, so that the pipeline doesn't exit if you leave your terminal session. +Alternatively, you can also launch `Nextflow` in a `screen` or a `tmux` session. + > NB: You will need an account to use the HPC cluster on CRG in order to run the pipeline. If in doubt contact IT. > NB: Nextflow will need to submit the jobs via SGE to the HPC cluster and as such the commands above will have to be executed on one of the login nodes. If in doubt contact IT. + +## Redirecting the `work` directory + +It is highly recommended to place the `work` directory within the `scratch` volume. + +> If your group has no space on the scratch volume, please open a ticket to SIT for receiving support. + +You might create a work folder in the CRG scratch volume and run the nextflow pipeline specifying that folder as the work directory using the parameter `-w` + +```bash +# Launch a nf-core pipeline with the crg profile redirecting the work dir to the scratch volume +$ nextflow run nf-core/ -profile crg -w /nfs/scratch01// +``` + +Alternatively, you can set the `NXF_WORK` environmental variable to set the Nextflow work directory to the scratch volume permanently. + +## Reducing the amout of RAM + +In case of big pipelines Nextflow can use a non trivial amount of RAM. 
You can reduce it by setting a special nextflow environmental variable that define the Java VM heap memory allocation limits: + +```bash +# Reduce the amount of RAM before launching a pipeline +$ export NXF_OPTS="-Xms250m -Xmx2000m" +``` diff --git a/docs/images/google-cloud-logo.svg b/docs/images/google-cloud-logo.svg index 18b0e4836..6c05d5265 100644 --- a/docs/images/google-cloud-logo.svg +++ b/docs/images/google-cloud-logo.svg @@ -1,96 +1,96 @@ - - - - - - - + + + + + + + + ]> + xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="0 0 8986.8 1407.9" + style="enable-background:new 0 0 8986.8 1407.9;" xml:space="preserve"> - - - - + + + + Cloud_Logo_Nav - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + diff --git a/docs/ku_sund_dangpu.md b/docs/ku_sund_dangpu.md index 591f1e76d..1488ce3e4 100644 --- a/docs/ku_sund_dangpu.md +++ b/docs/ku_sund_dangpu.md @@ -15,19 +15,27 @@ tmux new-session -s Before running the pipeline you will need to load Nextflow and Singularity using the environment module system on DANGPU. Within the created session load Nextflow and Singularity and set up the environment by issuing the commands below: +first clear the environment and load Nextflow environment modules: + ```bash -## Load Nextflow and Singularity environment modules module purge -module load dangpu_libs java/11.0.15 nextflow/22.10.6 singularity/3.8.0 python/3.7.13 nf-core/2.7.2 +module load dangpu_libs openjdk/20.0.0 nextflow/23.04.1.5866 +module load singularity/3.8.0 python/3.7.13 nf-core/2.7.2 +``` + +for loading the older module nextflow/22.10.6 you can use `module load dangpu_libs java/11.0.15 nextflow/22.10.6` instead of `module load dangpu_libs openjdk/20.0.0 nextflow/23.04.1.5866`. -# set up bash environment variables for memory +Next, set up bash environment variables for memory. (You can avoid repeatedly writing this every time by placing this code chunk into ${HOME}/.bash_profile and ${HOME}/.bashrc) + +```bash export NXF_OPTS='-Xms1g -Xmx4g' export NXF_HOME=/projects/dan1/people/${USER}/cache/nxf-home export NXF_TEMP=/scratch/temp/${USER} +export NXF_WORK=/scratch/temp/${USER} export NXF_SINGULARITY_CACHEDIR=/projects/dan1/people/${USER}/cache/singularity-images ``` -Create the user-specific nextflow directories if they don't exist yet: +Create the user-specific nextflow directories if they don't exist yet. You have to do this only first time you run a nf-core pipeline. ``` mkdir -p $NXF_SINGULARITY_CACHEDIR @@ -42,13 +50,13 @@ To download and test a pipeline for the first time, use the `-profile test` and For example to run rnaseq: ``` -nextflow run nf-core/rnaseq -r 3.10.1 -profile test,ku_sund_dangpu --outdir +nextflow run nf-core/rnaseq -r 3.14.0 -profile test,ku_sund_dangpu --outdir ``` To run a pipeline: ``` -nextflow run nf-core/rnaseq -r 3.10.1 -profile ku_sund_dangpu --outdir --input +nextflow run nf-core/rnaseq -r 3.14.0 -profile ku_sund_dangpu --outdir --input ``` ## Notes diff --git a/docs/m3c.md b/docs/m3c.md new file mode 100644 index 000000000..950303681 --- /dev/null +++ b/docs/m3c.md @@ -0,0 +1,14 @@ +# nf-core/configs: M3C Configuration + +All nf-core pipelines have been successfully configured for use on the M3 cluster at the [M3 Research Center](https://www.medizin.uni-tuebingen.de/de/das-klinikum/einrichtungen/zentren/m3) here. + +To use, run the pipeline with `-profile m3c`. 
This will download and launch the [`m3c.config`](../conf/m3c.config) which has been pre-configured with a setup suitable for the M3 cluster. Using this profile, for DSL1 pipelines a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline. For pipelines in DSL2, the individual Singularity images will be downloaded. + +Before running the pipeline you will need to install Nextflow on the M3 cluster. You can do this by following the instructions [here](https://www.nextflow.io/). + +> [!Note] +> You will need an account to use the M3 HPC cluster in order to run the pipeline. If in doubt contact IT. +> [!Note] +> Nextflow will need to submit the jobs via the job scheduler to the HPC cluster and as such the commands above will have to be executed on one of the login nodes. If in doubt contact IT. +> [!Note] +> Each group needs to configure their singularity cache directory. diff --git a/docs/nyu_hpc.md b/docs/nyu_hpc.md index 8648a3c8a..c3214657b 100644 --- a/docs/nyu_hpc.md +++ b/docs/nyu_hpc.md @@ -2,11 +2,9 @@ All nf-core pipelines have been successfully configured for use on the HPC Cluster at New York University. -To use, run the pipeline with `-profile nyu_hpc`. This will download and launch the [`profile.config`](../conf/profile.config) which has been pre-configured with a setup suitable for the NYU HPC cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline. +To use, run the pipeline with `-profile nyu_hpc`. This will download and launch the [`nyu_hpc.config`](../conf/nyu_hpc.config) which has been pre-configured with a setup suitable for the NYU HPC cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline. -## Below are non-mandatory information e.g. on modules to load etc - -Before running the pipeline you will need to load Nextflow and Singularity using the environment module system on PROFILE CLUSTER. You can do this by issuing the commands below: +Before running the pipeline you will need to load Nextflow using the environment module system on NYU HPC. You can do this by issuing the commands below: ```bash ## Load Nextflow modules diff --git a/docs/pasteur.md b/docs/pasteur.md index 775a9b361..b3a9af1df 100644 --- a/docs/pasteur.md +++ b/docs/pasteur.md @@ -18,15 +18,15 @@ To do that: 1. Create a virtualenv to install nf-core - ```bash - module purge - module load Python/3.6.0 - module load java - module load singularity - cd /path/to/nf-core/workflows - virtualenv .venv -p python3 - . .venv/bin/activate - ``` +```bash +module purge +module load Python/3.6.0 +module load java +module load singularity +cd /path/to/nf-core/workflows +virtualenv .venv -p python3 +. .venv/bin/activate +``` 2. Install nf-core: [here](https://nf-co.re/tools#installation) 3. Get nf-core pipeline and container: [here](https://nf-co.re/tools#downloading-pipelines-for-offline-use) diff --git a/docs/pipeline/eager/maestro.md b/docs/pipeline/eager/maestro.md index 8853eefa1..4b2b6e823 100644 --- a/docs/pipeline/eager/maestro.md +++ b/docs/pipeline/eager/maestro.md @@ -26,5 +26,5 @@ More limited computational resources ## unlimitedtime -Every process has one year time limit. To be used only when some processes can not be completed for time reasons when using mitochondrial or nuclear profiles. 
+Every process has a one-year time limit. To be used only when some processes cannot be completed for time reasons when using mitochondrial or nuclear profiles. Expect slow processes when using this profile because only 5 CPUs are available at a time.
diff --git a/docs/psmn.md b/docs/psmn.md
index bc5bd5475..d79980ad7 100644
--- a/docs/psmn.md
+++ b/docs/psmn.md
@@ -2,14 +2,17 @@
 
 All nf-core pipelines have been successfully configured for use on the tars cluster at the Institut Pasteur.
 
-To use, run the pipeline with `-profile pasteur`. This will download and launch the [`psmn.config`](../conf/psmn.config) which has been pre-configured with a setup suitable for the PSMN cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.
+To use, run the pipeline with `-profile psmn`. This will download and launch the [`psmn.config`](../conf/psmn.config) which has been pre-configured with a setup suitable for the PSMN cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.
+
+You can find more information on the cluster partitions here:
+[www.ens-lyon.fr/PSMN/Documentation/clusters_usage/computing_resources.html](https://www.ens-lyon.fr/PSMN/Documentation/clusters_usage/computing_resources.html)
 
 ## Running the workflow on the PSMN cluster
 
 ### Install [Nextflow](https://www.nextflow.io/docs/latest/getstarted.html#) and [Charliecloud](https://hpc.github.io/charliecloud/index.html)
 
 The Nextflow binary is available in the folder `/Xnfs/abc/nextflow_bin/`.
-All the Charliecloud binaries are available in the folder `Xnfs/abc/charliecloud_bin/`.
+All the Charliecloud binaries are available in the folder `/Xnfs/abc/charliecloud_bin/`.
 
 You can update your `$PATH` variable with the following command to have access to Nextflow and Charliecloud:
 
@@ -30,7 +33,8 @@ ch-run -b /scratch:/scratch /Xnfs/abc/charliecloud/img/nfcore%tools+2.6 -- nf-co
 
 For exemple to download the `nf-core/rnaseq` pipeline you can use the command:
 ```sh
-ch-run -b /scratch:/scratch \
+cd /Xnfs/abc/nf_scratch//
+ch-run -b /scratch:/scratch -b /Xnfs:"" \
   /Xnfs/abc/charliecloud/img/nfcore%tools+2.6 -- nf-core \
   download rnaseq -r 3.9 --outdir nf-core-rnaseq
 ```
@@ -39,16 +43,28 @@ ch-run -b /scratch:/scratch \
 
 You can use the `nf-core download` command to download an nf-core pipeline and the configuration files for the PSMN:
 
-```
-cd 
+```sh
+cd /Xnfs/abc/nf_scratch//
 ch-run -b /scratch:/scratch \
   /Xnfs/abc/charliecloud/img/nfcore%tools+2.6 -- nf-core \
   download rnaseq -r 3.9 --outdir /nf-core-rnaseq -x none -c none
 ```
 
-The you can launch this pipeline with the PSMN profile
+### Download all the necessary images
+
+By default, the `psmn` profile will look up Charliecloud images in the `/Xnfs/abc/charliecloud/` folder.
+To download all the images that are not already present in this folder you can use the following script +```sh +cd nf-core-rnaseq +pull_ch_images_locally.sh ``` + +### Launch the pipeline + +Then you can launch this pipeline with the PSMN profile + +```sh tmux cd nf-core-rnaseq nextflow run workflow -profile test,psmn --outdir results/ diff --git a/docs/qmul_apocrita.md b/docs/qmul_apocrita.md new file mode 100644 index 000000000..2eaf814f8 --- /dev/null +++ b/docs/qmul_apocrita.md @@ -0,0 +1,40 @@ +# nf-core/configs: Apocrita Configuration + +All nf-core pipelines have been successfully configured for use on QMUL's Apocrita cluster [Queen Mary University of London](https://docs.hpc.qmul.ac.uk/). + +To use, run the pipeline with `-profile qmul_apocrita`. This will download and launch the [`qmul_apocrita.config`](../conf/qmul_apocrita.config) which has been pre-configured with a setup suitable for Apocrita. + +## Using Nextflow on Apocrita + +Before running the pipeline you will need to configure Apptainer and install+configure Nextflow. + +### Singularity + +Set the correct configuration of the cache directories, where is replaced with you credentials which you can find by entering `whoami` into the terminal once you are logged into Apocrita. Once you have added your credentials save these lines into your `.bash_profile` file in your home directory (e.g. `/data/home//.bash_profile`): + +```bash +# Set all the Apptainer environment variables +export APPTAINER_CACHEDIR=/data/scratch//.apptainer/ +export APPTAINER_TMPDIR=/data/scratch//.apptainer/tmp +export APPTAINER_LOCALCACHEDIR=/data/scratch//.apptainer/localcache +export APPTAINER_PULLFOLDER=/data/scratch//.apptainer/pull +``` + +### Nextflow + +Download the latest release of nextflow. _Warning:_ the `self-update` line should update to the latest version, but sometimes not, so please check which is the latest release (https://github.com/nextflow-io/nextflow/releases), you can then manually set this by entering (`NXF_VER=XX.XX.X`). + +```bash +## Download Nextflow-all +curl -s https://get.nextflow.io | bash +nextflow -self-update +NXF_VER=XX.XX.X +chmod a+x nextflow +mv nextflow ~/bin/nextflow +``` + +Then make sure that your bin PATH is executable, by placing the following line in your `.bash_profile`: + +```bash +export PATH=$PATH:/data/home//bin +``` diff --git a/docs/sahmri.md b/docs/sahmri.md index bb00a368a..827de74f2 100644 --- a/docs/sahmri.md +++ b/docs/sahmri.md @@ -1,6 +1,6 @@ # nf-core/configs: SAHMRI HPC Configuration -All nf-core pipelines have been successfully configured for use on the HPC cluster at [SAHMRI](https://sahmri.org.au/). +All nf-core pipelines have been successfully configured for use on the HPC cluster at [SAHMRI](https://sahmri.org.au/). To use, run the pipeline with `-profile sahmri`. This will download and launch the [`sahmri.config`](../conf/sahmri.config) which has been pre-configured with a setup suitable for the SAHMRI HPC cluster. Using this profile, either a docker image containing all of the required software will be downloaded, and converted to a Singularity image or a Singularity image downloaded directly before execution of the pipeline. 
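
To make the sentence above concrete, a launch on the SAHMRI cluster could look like the sketch below. The pipeline name, release and output directory are placeholders and are not taken from the official SAHMRI documentation; only the `-profile sahmri` part comes from this config.

```bash
# Hypothetical example: run an nf-core pipeline with the SAHMRI profile.
# Replace <pipeline>, <release> and <outdir> with your own values.
nextflow run nf-core/<pipeline> -r <release> -profile sahmri --outdir <outdir>
```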
diff --git a/docs/seattlechildrens.md b/docs/seattlechildrens.md
new file mode 100644
index 000000000..8e5c56a6b
--- /dev/null
+++ b/docs/seattlechildrens.md
@@ -0,0 +1,67 @@
+# nf-core/configs: Seattle Children's Research Institute (Cybertron) Configuration
+
+All nf-core pipelines have been successfully configured for use on the Cybertron HPC at Seattle Children's Research Institute (SCRI), Seattle, WA.
+
+To use, run the pipeline with `-profile PROFILENAME`. This will download and launch the pipeline using [`seattlechildrens.config`](../conf/seattlechildrens.config) which has been pre-configured with a setup suitable for the Cybertron cluster at SCRI. Using this profile, a container with all of the required software will be downloaded.
+
+# Project info
+
+This config file is created for use on the Cybertron HPC at Seattle Children's Research Institute (SCRI), Seattle, WA. Using this config will pre-configure a setup suitable for the Cybertron HPC. The Singularity images will be downloaded to run on the cluster. The nextflow pipeline should be executed inside of the Cybertron system.
+
+# Mandatory information for SCRI
+
+Before running the pipeline you will need to create a Nextflow environment with `mamba`. You can load _Singularity_ using the environment module system on **Cybertron**.
+
+## Create a Nextflow `mamba` environment
+
+1. Create a _nextflow.yml_ file containing the following content. This YAML file can be utilized to set up a mamba environment, specifying both the version of Nextflow and the environment name.
+
+```yaml
+name: nextflow
+channels:
+  - bioconda
+  - conda-forge
+dependencies:
+  - python>=3.9
+  - nextflow==23.10.0
+  - nf-core==2.10
+  - graphviz
+```
+
+2. Setting channel priority
+
+Make sure that channel priority is set to flexible using the following commands:
+
+```bash
+# print your current conda settings
+mamba config --describe channel_priority
+# set to flexible if not already done
+mamba config --set channel_priority flexible
+```
+
+3. Create the _Nextflow_ `mamba` environment
+
+```bash
+mamba env create -f nextflow.yml
+```
+
+4. Running on the HPC (Cybertron)
+
+Please look into [RSC-RP/nextflow_scri_config](https://github.com/RSC-RP/nextflow_scri_config) for details.
+
+```bash
+# activate environment
+mamba activate nextflow
+module load singularity
+
+# list all the projects and project codes you are authorized to use on the HPC
+project info
+
+# example to run a nextflow pipeline (please replace with your own project code and module)
+nextflow run -c 'conf/seattlechildrens.config' \
+    [nf-core/module_name] \
+    -profile test,PBS_singularity \
+    --project ["your_project_code"]
+```
+
+You can find more information about computational resources [here](https://child.seattlechildrens.org/research/center_support_services/research_informatics/research_scientific_computing/high_performance_computing_core/). You have to be an employee of SCRI to access the link.
diff --git a/docs/tufts.md b/docs/tufts.md
new file mode 100644
index 000000000..6f5adcbc4
--- /dev/null
+++ b/docs/tufts.md
@@ -0,0 +1,22 @@
+# nf-core/configs: Tufts HPC Configuration
+
+nf-core pipelines have been configured for use on the Tufts HPC clusters operated by Research Technology at Tufts University.
+
+To use the Tufts profile, run the pipeline with `-profile tufts`.
+
+Example: `nextflow run -profile tufts`
+
+Users can also put the `nextflow ...` command into a batch script and submit the job to computing nodes by `sbatch` or launch interactive jobs on computing nodes by `srun`.
Using this way, both nextflow manager processes and tasks will run on the allocated compute nodes using the `local` executor. It is recommended to use `-profile singularity` + +Example: `nextflow run -profile singularity` + +By default, the `batch` partition is used for job submission. Other partitions can be specified using the `--partition ` argument to the run. + +## Environment module + +Before running the pipeline, you will need to load the Nextflow module by: + +```bash +module purge ## Optional but recommended +module load nextflow singularity +``` diff --git a/docs/ucl_cscluster.md b/docs/ucl_cscluster.md new file mode 100644 index 000000000..74bb205d0 --- /dev/null +++ b/docs/ucl_cscluster.md @@ -0,0 +1,40 @@ +# nf-core/configs: CS cluster Configuration + +All nf-core pipelines have been successfully configured for use on UCL's CS cluster [University College London](https://hpc.cs.ucl.ac.uk/). + +To use, run the pipeline with `-profile ucl_cscluster`. This will download and launch the [`ucl_cscluster.config`](../conf/ucl_cscluster.config) which has been pre-configured with a setup suitable for the CS cluster. + +## Using Nextflow on CS cluster + +Before running the pipeline you will need to configure Singularity and install+configure Nextflow. + +### Singularity + +Set the correct configuration of the cache directories, where is replaced with you credentials which you can find by entering `whoami` into the terminal once you are logged into CS cluster. Once you have added your credentials save these lines into your `.bash_profile` file in your home directory (e.g. `/home//.bash_profile`): + +```bash +# Set all the Singularity environment variables +export SINGULARITY_CACHEDIR=/home//.singularity/ +export SINGULARITY_TMPDIR=/home//.singularity/tmp +export SINGULARITY_LOCALCACHEDIR=/home//.singularity/localcache +export SINGULARITY_PULLFOLDER=/home//.singularity/pull +``` + +### Nextflow + +Download the latest release of nextflow. _Warning:_ the `self-update` line should update to the latest version, but sometimes not, so please check which is the latest release (https://github.com/nextflow-io/nextflow/releases), you can then manually set this by entering (`NXF_VER=XX.XX.X`). + +```bash +## Download Nextflow-all +curl -s https://get.nextflow.io | bash +nextflow -self-update +NXF_VER=XX.XX.X +chmod a+x nextflow +mv nextflow ~/bin/nextflow +``` + +Then make sure that your bin PATH is executable, by placing the following line in your `.bash_profile`: + +```bash +export PATH=$PATH:/home//bin +``` diff --git a/docs/ucl_myriad.md b/docs/ucl_myriad.md index 1884a481f..45ccbb035 100644 --- a/docs/ucl_myriad.md +++ b/docs/ucl_myriad.md @@ -2,33 +2,24 @@ All nf-core pipelines have been successfully configured for use on UCL's myriad cluster [University College London](https://www.rc.ucl.ac.uk/docs/Clusters/Myriad/). -To use, run the pipeline with `-profile ucl_myriad`. This will download and launch the [`ucl_myriad.config`](../conf/ucl_myriad.config) which has been pre-configured with a setup suitable for the myriad cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline. +To use, run the pipeline with `-profile ucl_myriad`. This will download and launch the [`ucl_myriad.config`](../conf/ucl_myriad.config) which has been pre-configured with a setup suitable for the myriad cluster. 
## Using Nextflow on Myriad -Before running the pipeline you will need to install and configure Nextflow and Singularity. +Before running the pipeline you will need to configure Apptainer and install+configure Nextflow. -### Singularity +### Apptainer This can be done with the following commands: -```bash -## Load Singularity environment modules - these commands can be placed in your ~/.bashrc also -module add java/openjdk-11/11.0.1 -module add singularity-env/1.0.0 -``` - -Then set the correct configuration of the cache directories, where is replaced with you credentials which you can find by entering `whoami` into the terminal once you are logged into myriad. Once you have added your credentials save these lines into your .bashrc file in the base directory (e.g. /home//.bashrc): +Set the correct configuration of the cache directories, where is replaced with you credentials which you can find by entering `whoami` into the terminal once you are logged into Myriad. Once you have added your credentials save these lines into your `.bash_profile` file in your home directory (e.g. `/home//.bash_profile`): ```bash -# Set all the Singularity cache dirs to Scratch -export SINGULARITY_CACHEDIR=/home//Scratch/.singularity/ -export SINGULARITY_TMPDIR=/home//Scratch/.singularity/tmp -export SINGULARITY_LOCALCACHEDIR=/home//Scratch/.singularity/localcache -export SINGULARITY_PULLFOLDER=/home//Scratch/.singularity/pull - -# Bind your Scratch directory so it is accessible from inside the container -export SINGULARITY_BINDPATH=/scratch/scratch/ +# Set all the Apptainer environment variables +export APPTAINER_CACHEDIR=/home//Scratch/.apptainer/ +export APPTAINER_TMPDIR=/home//Scratch/.apptainer/tmp +export APPTAINER_LOCALCACHEDIR=/home//Scratch/.apptainer/localcache +export APPTAINER_PULLFOLDER=/home//Scratch/.apptainer/pull ``` ### Nextflow @@ -38,13 +29,13 @@ Download the latest release of nextflow. Warning: the self-update line should up ```bash ## Download Nextflow-all curl -s https://get.nextflow.io | bash -NXF_VER=22.10.0 +NXF_VER=XX.XX.X nextflow -self-update chmod a+x nextflow mv nextflow ~/bin/nextflow ``` -Then make sure that your bin PATH is executable, by placing the following line in your .bashrc: +Then make sure that your bin PATH is executable, by placing the following line in your `.bash_profile`: ```bash export PATH=$PATH:/home//bin diff --git a/docs/unc_longleaf.md b/docs/unc_longleaf.md index f171ca6cb..3ac334f46 100644 --- a/docs/unc_longleaf.md +++ b/docs/unc_longleaf.md @@ -2,16 +2,15 @@ > **NB:** You will need an [account](https://help.rc.unc.edu/getting-started-on-longleaf/) to use the HPC cluster to run the pipeline. -We have configured the compute clusters to Apptainer (Singularity) loaded by default. Do not load the Singularity module or it will fail, as it is an older version than what's on the compute nodes. - -Before running the pipeline you will need to load Nextflow. You can do this by including the commands below in your SLURM/sbatch script: +Before running the pipeline you will need to load Nextflow and Apptainer. You can do this by including the commands below in your SLURM/sbatch script: ```bash ## Load Nextflow environment modules -module load nextflow/23.04.2 +module load nextflow/23.04.2; +module load apptainer/1.2.2-1; ``` -All of the intermediate files required to run the pipeline will be stored in the `work/` directory, which will be generated inside the location you ran the nf-core pipeline. 
It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large, and all of the main output files will be saved in the `results/` directory anyway. +All of the intermediate files required to run the pipeline will be stored in the `work/` directory, which will be generated inside the location you ran the nf-core pipeline. It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large, and all of the main output files will be saved in the `results/` directory anyway. You can also specify the working directory using the Nextflow [`-w` or `-work-dir` option](https://www.nextflow.io/docs/latest/cli.html#run). This configuration will automatically submit jobs to the `general` SLURM queue, where it may automatically be shuffled to different partitions depending on the time required by each process. diff --git a/docs/utd_europa.md b/docs/utd_europa.md new file mode 100644 index 000000000..dbc4af054 --- /dev/null +++ b/docs/utd_europa.md @@ -0,0 +1,21 @@ +# nf-core/configs: UTD Europa Configuration + +All nf-core pipelines have been successfully configured for use on the [Europa HTC cluster](https://docs.circ.utdallas.edu/user-guide/systems/europa.html) at [The Univeristy of Texas at Dallas](https://www.utdallas.edu/). + +To use, run the pipeline with `-profile utd_europa`. This will download and launch the [`utd_europa.config`](../conf/utd_europa.config) which has been pre-configured with a setup suitable for Europa. + +Before running the pipeline you will need to load Apptainer using the environment module system on Europa. You can do this by issuing the commands below: + +```bash +## Singularity environment modules +module purge +module load apptainer +``` + +All of the intermediate files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large, and all of the main output files will be saved in the `results/` directory anyway. + +> [!NOTE] +> You will need an account to use Europa. +> To join, fill out our short survey at https://utd.link/trecis-lcars-signup. +> Nextflow will need to submit the jobs via SLURM to the HTC cluster and as such the commands above will have to be executed on the login node. +> If in doubt contact CIRC. diff --git a/docs/utd_ganymede.md b/docs/utd_ganymede.md index 6c5e3ea05..356830ac8 100644 --- a/docs/utd_ganymede.md +++ b/docs/utd_ganymede.md @@ -1,8 +1,8 @@ # nf-core/configs: UTD Ganymede Configuration -All nf-core pipelines have been successfully configured for use on the Ganymede HPC cluster at the [The Univeristy of Texas at Dallas](https://www.utdallas.edu/). +All nf-core pipelines have been successfully configured for use on the [Ganymede HPC cluster](https://docs.circ.utdallas.edu/user-guide/systems/ganymede.html) at [The Univeristy of Texas at Dallas](https://www.utdallas.edu/). -To use, run the pipeline with `-profile utd_ganymede`. This will download and launch the [`utd_ganymede.config`](../conf/utd_ganymede.config) which has been pre-configured with a setup suitable for the Ganymede HPC cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline. +To use, run the pipeline with `-profile utd_ganymede`. 
This will download and launch the [`utd_ganymede.config`](../conf/utd_ganymede.config) which has been pre-configured with a setup suitable for Ganymede. Before running the pipeline you will need to load Singularity using the environment module system on Ganymede. You can do this by issuing the commands below: @@ -14,5 +14,8 @@ module load singularity All of the intermediate files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large, and all of the main output files will be saved in the `results/` directory anyway. -> NB: You will need an account to use the HPC cluster on Ganymede in order to run the pipeline. If in doubt contact Ganymedeadmins. -> NB: Nextflow will need to submit the jobs via SLURM to the HPC cluster and as such the commands above will have to be executed on one of the login nodes. If in doubt contact GanymedeAdmins. +> [!NOTE] +> You will need an account to use the HPC cluster on Ganymede in order to run the pipeline. +> https://docs.circ.utdallas.edu/user-guide/accounts/index.html +> Nextflow will need to submit the jobs via SLURM to the HPC cluster and as such the commands above will have to be executed on the login node. +> If in doubt contact CIRC. diff --git a/docs/vsc_calcua.md b/docs/vsc_calcua.md new file mode 100644 index 000000000..49a91608b --- /dev/null +++ b/docs/vsc_calcua.md @@ -0,0 +1,273 @@ +# nf-core/configs: CalcUA - UAntwerp Tier-2 High Performance Computing Infrastructure (VSC) + +> **NB:** You will need an [account](https://docs.vscentrum.be/access/vsc_account.html) to use the CalcUA VSC HPC cluster to run the pipeline. + + + + + +- [Quickstart](#quickstart) + - [Slurm-scheduled pipeline](#slurm-scheduled-pipeline) + - [Running pipeline in a single Slurm job](#running-pipeline-in-a-single-slurm-job) +- [Step-by-step instructions](#step-by-step-instructions) +- [Location of output and work directory](#location-of-output-and-work-directory) + - [Debug mode](#debug-mode) +- [Availability of Nextflow](#availability-of-nextflow) +- [Overview of partition profiles and resources](#overview-of-partition-profiles-and-resources) +- [Schedule Nextflow pipeline using Slurm](#schedule-nextflow-pipeline-using-slurm) +- [Local Nextflow run on a single (interactive) node](#local-nextflow-run-on-a-single-interactive-node) +- [Apptainer / Singularity environment variables for cache and tmp directories](#apptainer--singularity-environment-variables-for-cache-and-tmp-directories) +- [Troubleshooting](#troubleshooting) + - [Failed to pull singularity image](#failed-to-pull-singularity-image) + + + +## Quickstart + +To get started with running nf-core pipelines on CalcUA, you can use one of the example templates below. For more detailed info, see the extended explanations further below. + +### Slurm-scheduled pipeline + +Example `job_script.slurm` to run the pipeline using the Slurm job scheduler to queue the individual tasks making up the pipeline. Note that the head nextflow process used to launch the pipeline does not need to request many resources. 
+ +```bash +#!/bin/bash -l +#SBATCH --partition=broadwell # choose partition to run the nextflow head process on +#SBATCH --job-name=nextflow # create a short name for your job +#SBATCH --nodes=1 # node count +#SBATCH --cpus-per-task=1 # only 1 cpu cores is needed to run the nextflow head process +#SBATCH --mem-per-cpu=4G # memory per cpu (4G is default for most partitions) +#SBATCH --time=00:02:00 # total run time limit (HH:MM:SS) +#SBATCH --account= # set project account + +# Load the available Nextflow module. +module load Nextflow + +# Or, if using a locally installed version of Nextflow, make Java available. +# module load Java + +# Set Apptainer/Singularity environment variables to define caching and tmp +# directories. These are used during the conversion of Docker images to +# Apptainer/Singularity ones. +# These lines can be omitted if the variables are already set in your `~/.bashrc` file. +export APPTAINER_CACHEDIR="${VSC_SCRATCH}/apptainer/cache" +export APPTAINER_TMPDIR="${VSC_SCRATCH}/apptainer/tmp" + +# Launch Nextflow head process. +# Provide a partition profile name to choose a particular partition queue, which +# will determine the available resources for each individual task in the pipeline. +# Note that the profile name ends with a `*_slurm` suffix, which indicates +# that this pipeline will submit each task to the Slurm job scheduler. +nextflow run nf-core/rnaseq \ + -profile test,vsc_calcua,broadwell_slurm \ + -with-report report.html \ + --outdir test_output + +# Alternatively, use the generic slurm profile to let Nextflow submit tasks +# to different partitions, depending on their requirements. +nextflow run nf-core/rnaseq \ + -profile test,vsc_calcua,slurm \ + -with-report report.html \ + --outdir test_output +``` + +### Running pipeline in a single Slurm job + +Example `job_script.slurm` to run the pipeline on a single node in local execution mode, only making use of the resources allocated by `sbatch`. + +```bash +#!/bin/bash -l +#SBATCH --partition=broadwell # choose partition to run the nextflow head process on +#SBATCH --job-name=nextflow # create a short name for your job +#SBATCH --nodes=1 # node count +#SBATCH --cpus-per-task=28 # request a full node for local execution (broadwell nodes have 28 cpus) +#SBATCH --mem=112G # total memory (e.g., 112G max for broadwell) - can be omitted to use default (= max / # cores) +#SBATCH --time=00:02:00 # total run time limit (HH:MM:SS) +#SBATCH --account= # set project account + +# Load the available Nextflow module. +module load Nextflow + +# Or, if using a locally installed version of Nextflow, make Java available. +# module load Java + +# Set Apptainer/Singularity environment variables to define caching and tmp +# directories. These are used during the conversion of Docker images to +# Apptainer/Singularity ones. +# These lines can be omitted if the variables are already set in your `~/.bashrc` file. +export APPTAINER_CACHEDIR="${VSC_SCRATCH}/apptainer/cache" +export APPTAINER_TMPDIR="${VSC_SCRATCH}/apptainer/tmp" + +# Launch Nextflow head process. +# Provide a partition profile name to choose a particular partition queue, which +# will determine the available resources for each individual task in the pipeline. +# Note that the profile name ends with a `*_local` suffix, which indicates +# that this pipeline will run in local execution mode on the submitted node. 
+nextflow run nf-core/rnaseq \ + -profile test,vsc_calcua,broadwell_local \ + -with-report report.html \ + --outdir test_output +``` + +## Step-by-step instructions + +1. Set the `APPTAINER_CACHEDIR` and `APPTAINER_TMPDIR` environment variables by adding the following lines to your `.bashrc` file (or simply add them to your Slurm job script): + + ``` + export APPTAINER_CACHEDIR="${VSC_SCRATCH}/apptainer/cache" + export APPTAINER_TMPDIR="${VSC_SCRATCH}/apptainer/tmp" + ``` + + When using the `~/.bashrc` method, you can ensure that the environment variables are available in your jobs by starting your scripts with the line `#! /bin/bash -l`, although this does not seem to be required (see [below](#apptainer--singularity-environment-variables-for-cache-and-tmp-directories) for more info). + +2. Load Nextflow in your job script via the command: `module load Nextflow/23.04.2`. Alternatively, when using [your own version of Nextflow](#availability-of-nextflow), use `module load Java`. + +3. Choose whether you want to run in [local execution mode on a single node](#local-nextflow-run-on-a-single-interactive-node) or make use of the [Slurm job scheduler to queue individual pipeline tasks](#schedule-nextflow-pipeline-using-slurm). + + - For Slurm scheduling, choose a partition profile ending in `*_slurm`. E.g., `nextflow run pipeline -profile vsc_calcua,broadwell_slurm`. + - For local execution mode on a single node, choose a partition profile ending in `*_local`. E.g., `nextflow run pipeline -profile vsc_calcua,broadwell_local`. + + Note that the `-profile` option can take multiple values, the first one always being `vsc_calcua` and the second one a partition plus execution mode. + +4. Specify the _partition_ that you want to run the pipeline on using the [`sbatch` command's `--partition=` option](https://docs.vscentrum.be/jobs/job_submission.html#specifying-a-partition) and how many _resources_ should be allocated. See the [overview of partitions and their resources](#overview-of-partition-profiles-and-resources) below, or refer to [the CalcUA documentation](https://docs.vscentrum.be/antwerp/tier2_hardware.html) for more info. + + - For Slurm scheduling, the partition on which the head process runs has no effect on the resources allocated to the actual pipeline tasks. The head process only requires minimal resources (e.g., 1 CPU and 4 GB RAM). + - For local execution mode on a single node, the partition selected via `sbatch` must match the one selected with nextflow's `-profile` option, otherwise the pipeline will not launch. It is probably convenient to simply request a full node (e.g., `--cpus-per-task=28` and `--mem=112G` for broadwell). Omitting `--mem-per-cpu` or `--mem` will [allocate the default memory value](https://docs.vscentrum.be/jobs/job_submission.html#requesting-memory), which is the total available memory divided by the number of cores, e.g., `28 * 4 GB = 112 GB` for broadwell (`128 GB - 16 GB buffer`). + +5. Submit the job script containing your full `nextflow run` command via `sbatch` or from an an interactive `srun` session launched via `screen` or `tmux` (to avoid the process from stopping when you disconnect your SSH session). + +--- + +## Location of output and work directory + +By default, Nextflow stores all of the intermediate files required to run the pipeline in the `work` directory. 
It is generally recommended to delete this directory after the pipeline has finished successfully, because it can get quite large and all of the main output files are saved in the `results/` directory anyway. For this reason, this config contains a `cleanup` setting that removes the `work` directory automatically once the pipeline has completed successfully.
+
+If the run does not complete successfully, the `work` directory should be removed manually to save storage space. In this configuration, the default work directory is set to `$VSC_SCRATCH/work`. You can also use the [`nextflow clean` command](https://www.nextflow.io/docs/latest/cli.html#clean) to clean up all files related to a specific run (including not just the `work` directory, but also log files and the `.nextflow` cache directory).
+
+> **NB:** The Nextflow `work` directory for any pipeline is located in `$VSC_SCRATCH` by default and is cleaned automatically after a successful pipeline run, unless the `debug` profile is provided.
+
+### Debug mode
+
+Debug mode can be enabled to always retain the `work` directory instead of cleaning it. To use it, pass `debug` as an additional value to the `-profile` option:
+
+`nextflow run -profile vsc_calcua,broadwell_local,debug`
+
+Note that this is a core config provided by nf-core pipelines, not something built into the VSC CalcUA config.
+
+## Availability of Nextflow
+
+Nextflow is available on CalcUA as a module. You can find out which versions are available by running `module av nextflow`.
+
+If you need a specific version of Nextflow that is not available as a module, you can manually install it in your home directory and add the executable to your `PATH`:
+
+```bash
+curl -s https://get.nextflow.io | bash
+mkdir -p ~/.local/bin/ && mv nextflow ~/.local/bin/
+```
+
+Before it can be used, you will still need to load the Java module in your job scripts: `module load Java`.
+
+## Overview of partition profiles and resources
+
+> **NB:** Aside from the profiles defined in the table below, one additional profile is available, named `slurm`. It lets Nextflow automatically choose the most appropriate Slurm partition to submit each pipeline task to, based on the task's requirements (CPU, memory and run time).
+> Example usage: `nextflow run -profile vsc_calcua,slurm`.
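+
+If you want to check the current partition limits yourself, the `sinfo` command referenced further below can be run directly on a login node. A minimal example:
+
+```bash
+# List the Slurm partitions with their allocated/idle node counts, time limit,
+# number of nodes, CPUs per node and memory per node (in MB).
+sinfo -o "%12P %.10A %.11l %D %c %m"
+```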
+
+The CalcUA config defines two types of profiles for each of the following partitions:
+
+| Partition     | Cluster                   | Profile name        | Type                 | Max memory               | Max CPU              | Max wall time | Example usage                                          |
+| ------------- | ------------------------- | ------------------- | -------------------- | ------------------------ | -------------------- | ------------- | ------------------------------------------------------ |
+| zen2          | Vaughan                   | zen2_slurm          | Slurm scheduler      | 240 GB (per task)        | 64 (per task)        | 3 days        | `nextflow run -profile vsc_calcua,zen2_slurm`          |
+| zen2          | Vaughan                   | zen2_local          | Local node execution | 240 GB (or as requested) | 64 (or as requested) | 3 days        | `nextflow run -profile vsc_calcua,zen2_local`          |
+| zen3          | Vaughan                   | zen3_slurm          | Slurm scheduler      | 240 GB (per task)        | 64 (per task)        | 3 days        | `nextflow run -profile vsc_calcua,zen3_slurm`          |
+| zen3          | Vaughan                   | zen3_local          | Local node execution | 240 GB (or as requested) | 64 (or as requested) | 3 days        | `nextflow run -profile vsc_calcua,zen3_local`          |
+| zen3_512      | Vaughan                   | zen3_512_slurm      | Slurm scheduler      | 496 GB (per task)        | 64 (per task)        | 3 days        | `nextflow run -profile vsc_calcua,zen3_512_slurm`      |
+| zen3_512      | Vaughan                   | zen3_512_local      | Local node execution | 496 GB (or as requested) | 64 (or as requested) | 3 days        | `nextflow run -profile vsc_calcua,zen3_512_local`      |
+| broadwell     | Leibniz                   | broadwell_slurm     | Slurm scheduler      | 112 GB (per task)        | 28 (per task)        | 3 days        | `nextflow run -profile vsc_calcua,broadwell_slurm`     |
+| broadwell     | Leibniz                   | broadwell_local     | Local node execution | 112 GB (or as requested) | 28 (or as requested) | 3 days        | `nextflow run -profile vsc_calcua,broadwell_local`     |
+| broadwell_256 | Leibniz                   | broadwell_256_slurm | Slurm scheduler      | 240 GB (per task)        | 28 (per task)        | 3 days        | `nextflow run -profile vsc_calcua,broadwell_256_slurm` |
+| broadwell_256 | Leibniz                   | broadwell_256_local | Local node execution | 240 GB (or as requested) | 28 (or as requested) | 3 days        | `nextflow run -profile vsc_calcua,broadwell_256_local` |
+| skylake       | Breniac (formerly Hopper) | skylake_slurm       | Slurm scheduler      | 176 GB (per task)        | 28 (per task)        | 7 days        | `nextflow run -profile vsc_calcua,skylake_slurm`       |
+| skylake       | Breniac (formerly Hopper) | skylake_local       | Local node execution | 176 GB (or as requested) | 28 (or as requested) | 7 days        | `nextflow run -profile vsc_calcua,skylake_local`       |
+| all           | /                         | slurm               | Slurm scheduler      | /                        | /                    | /             | `nextflow run -profile vsc_calcua,slurm`               |
+
+For more information on the difference between the [\*\_slurm-type](#schedule-nextflow-pipeline-using-slurm) and [\*\_local-type](#local-nextflow-run-on-a-single-interactive-node) profiles, see below. Briefly:
+
+- Slurm profiles submit each pipeline task to the Slurm job scheduler using a particular partition.
+  - The generic `slurm` profile also submits jobs to the Slurm job scheduler, but it can spread them across different partitions simultaneously, depending on the tasks' requirements.
+- Local profiles run pipeline tasks on the local node, using only the resources that were requested via `sbatch` (or `srun` in interactive mode).
+
+The max memory for the Slurm partitions is set to the [available amount of memory for each partition](https://docs.vscentrum.be/antwerp/tier2_hardware.html) minus 16 GB (the amount reserved for the OS and file system buffers, [see slide 63 of this CalcUA introduction course](https://calcua.uantwerpen.be/courses/hpc-intro/IntroductionHPC-20240226.pdf)).
For the local profiles, the resources are set dynamically based on those requested via `sbatch`.
+
+More information on the hardware differences between the partitions can be found on [the CalcUA website](https://www.uantwerpen.be/en/research-facilities/calcua/infrastructure/) and in the [VSC documentation](https://docs.vscentrum.be/antwerp/tier2_hardware.html). You can also use the `sinfo -o "%12P %.10A %.11l %D %c %m"` command to see the available partitions yourself.
+
+> **NB:** Do not launch Nextflow jobs directly from a login node. Not only will this occupy considerable resources on the login nodes (the Nextflow head process can still use a significant amount of RAM, see [https://nextflow.io/blog/2024/optimizing-nextflow-for-hpc-and-cloud-at-scale.html](https://nextflow.io/blog/2024/optimizing-nextflow-for-hpc-and-cloud-at-scale.html)), but the command might also get cancelled, since the login nodes enforce a wall time limit too.
+
+## Schedule Nextflow pipeline using Slurm
+
+The `*_slurm` (and `slurm`) profiles allow Nextflow to use the Slurm job scheduler to queue each pipeline task as a separate job. The main job that you manually submit using `sbatch` runs the head Nextflow process (`nextflow run ...`), which acts as a governing and monitoring job and spawns new Slurm jobs for the different tasks in the pipeline. Each task requests the appropriate amount of resources defined by the pipeline (up to a threshold set in the given partition's profile) and runs as an individual Slurm job. This means that each task is placed in the scheduling queue individually and [all the standard priority rules](https://docs.vscentrum.be/jobs/why_doesn_t_my_job_start.html#why-doesn-t-my-job-start) apply to each of them.
+
+The `nextflow run ...` command that launches the head process can be invoked either via `sbatch` or from an interactive `srun` session launched inside `screen` or `tmux` (to prevent the process from stopping when you disconnect your SSH session), but it **does NOT need to request the total amount of resources that would be required for the full pipeline!**
+
+> **NB:** When using the slurm-type profiles, the initial job that launches the master Nextflow process does not need many resources to run. Therefore, use the `#SBATCH` options to limit its requested resources to a small, sensible amount (e.g., 2 CPUs and 4 GB RAM), regardless of how computationally intensive the actual pipeline is.
+
+> **NB:** The wall time of the Nextflow head process will ultimately determine how long the pipeline can run for.
+
+## Local Nextflow run on a single (interactive) node
+
+In contrast to the `*_slurm` profiles, the `*_local` profiles run in Nextflow's _local execution mode_, which means that they do not make use of the Slurm job scheduler. Instead, the head Nextflow process (`nextflow run ...`) runs on the allocated compute node and spawns all of the sub-processes for the individual tasks in the pipeline on that same node (i.e., similar to running a pipeline on your own machine). The available resources are determined by the [`#SBATCH` options passed to Slurm](https://docs.vscentrum.be/jobs/job_submission.html#requesting-compute-resources) as usual and are shared among all tasks.
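+
+As a rough sketch (assuming the broadwell values from the table above, with the account and wall time as placeholders, and `tmux` available on the login node), a local-mode run could be started from an interactive session as follows:
+
+```bash
+# Start a tmux session so the run survives SSH disconnects.
+tmux new -s nextflow
+
+# Request a full broadwell node interactively (fill in your own project account).
+srun --account=<your_project_account> --partition=broadwell \
+    --nodes=1 --cpus-per-task=28 --mem=112G --time=24:00:00 \
+    --pty bash -l
+
+# On the allocated node: load Nextflow, set the Apptainer variables and launch
+# the pipeline in local execution mode.
+module load Nextflow
+export APPTAINER_CACHEDIR="${VSC_SCRATCH}/apptainer/cache"
+export APPTAINER_TMPDIR="${VSC_SCRATCH}/apptainer/tmp"
+nextflow run nf-core/rnaseq -profile test,vsc_calcua,broadwell_local --outdir test_output
+```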
+
+The `nextflow run ...` command that launches the head process can be invoked either via `sbatch` or from an interactive `srun` session launched inside `screen` or `tmux` (to prevent the process from stopping when you disconnect your SSH session), and it **DOES need to request the total amount of resources that are required by the full pipeline!**
+
+> **NB:** When using one of the single node profiles, make sure that you launch the job on the same partition as the one specified by the `-profile vsc_calcua,` option of your `nextflow run` command, either by launching it from the matching login node or by using the `sbatch` option `--partition=`. E.g., a job script containing the following Nextflow command:
+> `nextflow run -profile vsc_calcua,broadwell_local`
+> should be launched from a [Leibniz login node](https://docs.vscentrum.be/antwerp/tier2_hardware/leibniz_hardware.html#login-infrastructure) or via the following `sbatch` command:
+> `sbatch --account --partition broadwell script.slurm`
+
+> **NB:** The single node profiles **do not** automatically set the pipeline's CPU/RAM resource limits to those of a full node, but instead dynamically set them based on those allocated by Slurm, i.e., those requested via `sbatch`. However, in many cases it is likely a good idea to simply request a full node.
+
+## Apptainer / Singularity environment variables for cache and tmp directories
+
+> **NB:** The default directory where Nextflow will cache container images is `$VSC_SCRATCH/apptainer/nextflow_cache`.
+
+> **NB:** The recommended locations for Apptainer/Singularity's cache and tmp directories are `$VSC_SCRATCH/apptainer/cache` (cache directory for image layers) and `$VSC_SCRATCH/apptainer/tmp` (temporary directory used during builds or Docker conversions) respectively, to avoid filling up your home storage and/or the job node's SSDs (the default locations when unset are `$HOME/.apptainer/cache` and `/tmp` respectively).
+
+[Apptainer](https://apptainer.org/) is an open-source fork of [Singularity](https://sylabs.io/singularity/), which is an alternative container runtime to Docker. It is better suited for use on HPC systems because it can be run without root privileges and does not use a dedicated daemon process. More info on the usage of Apptainer/Singularity on the VSC HPC can be found [here](https://docs.vscentrum.be/software/singularity.html).
+
+When executing Nextflow pipelines with Apptainer/Singularity, the container image files are by default cached inside the pipeline work directory. The CalcUA config profile instead sets the [singularity.cacheDir setting](https://www.nextflow.io/docs/latest/singularity.html#singularity-docker-hub) to a central location on your scratch space (`$VSC_SCRATCH/apptainer/nextflow_cache`), so that images can be reused between different pipelines. This is equivalent to setting the `NXF_APPTAINER_CACHEDIR`/`NXF_SINGULARITY_CACHEDIR` environment variables manually (but note that the `cacheDir` defined in the config file takes precedence and cannot be overridden by setting the environment variable).
+
+Apptainer/Singularity makes use of two additional environment variables, `APPTAINER_CACHEDIR`/`SINGULARITY_CACHEDIR` and `APPTAINER_TMPDIR`/`SINGULARITY_TMPDIR`. As recommended by the [VSC documentation on containers](https://docs.vscentrum.be/software/singularity.html#building-on-vsc-infrastructure), these should be set to a location on the scratch system, to avoid exceeding the quota on your home directory file system.
+
+> **NB:** The cache and tmp directories are only used when new images are built or converted from existing Docker images. For most nf-core pipelines this does not happen, since they will instead try to directly pull pre-built Singularity images from the [Galaxy Depot](https://depot.galaxyproject.org/singularity/).
+
+- The [cache directory](https://apptainer.org/docs/user/main/build_env.html#cache-folders) `APPTAINER_CACHEDIR`/`SINGULARITY_CACHEDIR` is used to store files and layers used during image creation (or conversion of Docker/OCI images). Its default location is `$HOME/.apptainer/cache`, but it is recommended to change this to `$VSC_SCRATCH/apptainer/cache` on the CalcUA HPC instead.
+- The [temporary directory](https://apptainer.org/docs/user/main/build_env.html#temporary-folders) `APPTAINER_TMPDIR`/`SINGULARITY_TMPDIR` is used to store temporary files when building an image (or converting a Docker/OCI source). The directory must have enough free space to hold the entire uncompressed image during all steps of the build process. Its default location is `/tmp`, but it is recommended to change this to `$VSC_SCRATCH/apptainer/tmp` on the CalcUA HPC instead. This is because the default `/tmp` refers to the local disk of the compute node running the master Nextflow process, and these local disks are [small SSDs on CalcUA](https://docs.vscentrum.be/antwerp/tier2_hardware/uantwerp_storage.html).
+
+  > **NB:** The tmp directory needs to be created manually beforehand, otherwise pipelines that need to pull and convert Docker images, as well as manual image builds, will fail.
+
+Currently, Apptainer respects environment variables with either an `APPTAINER` or `SINGULARITY` prefix, but because [support for the latter might be dropped in the future](https://apptainer.org/docs/user/main/singularity_compatibility.html#singularity-prefixed-environment-variable-support), the former variant is recommended.
+
+These two variables can be set in several different ways:
+
+- Specified in your `~/.bashrc` file (e.g., `echo "export APPTAINER_CACHEDIR=${VSC_SCRATCH}/apptainer/cache APPTAINER_TMPDIR=${VSC_SCRATCH}/apptainer/tmp" >> ~/.bashrc`) (recommended).
+- Passed to `sbatch` as a parameter or on a `#SBATCH` line in the job script (e.g., `--export=APPTAINER_CACHEDIR=${VSC_SCRATCH}/apptainer/cache,APPTAINER_TMPDIR=${VSC_SCRATCH}/apptainer/tmp`).
+- Directly in your job script (e.g., `export APPTAINER_CACHEDIR=${VSC_SCRATCH}/apptainer/cache APPTAINER_TMPDIR=${VSC_SCRATCH}/apptainer/tmp`).
+
+However, note that for the `~/.bashrc` option to work, the environment needs to be passed on to the Slurm jobs. Currently, this seems to happen by default (i.e., variables defined in `~/.bashrc` are propagated), but there are ways to enforce this more strictly. E.g., job scripts that start with `#!/bin/bash -l` will ensure that jobs [launch using your login environment](https://docs.vscentrum.be/leuven/slurm_specifics.html#job-shell). Similarly, the `sbatch` options [`--get-user-env`](https://slurm.schedmd.com/sbatch.html#OPT_get-user-env) or [`--export=`](https://slurm.schedmd.com/sbatch.html#OPT_export) can be used. Also [see the CalcUA-specific](https://docs.vscentrum.be/jobs/slurm_pbs_comparison.html#main-differences-between-slurm-and-torque) and the [general VSC documentation](https://docs.vscentrum.be/jobs/job_submission.html#the-job-environment) for more info.
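+
+Putting the above together, a minimal one-time setup could look like the sketch below (paths taken from the recommendations above; adjust them if you prefer a different layout):
+
+```bash
+# Create the recommended cache and tmp directories on the scratch file system
+# (the tmp directory in particular must exist before images can be built or converted).
+mkdir -p "${VSC_SCRATCH}/apptainer/cache" "${VSC_SCRATCH}/apptainer/tmp"
+
+# Persist the variables for future shells and, by default, for Slurm jobs.
+# Single quotes keep the literal ${VSC_SCRATCH} reference in ~/.bashrc.
+echo 'export APPTAINER_CACHEDIR="${VSC_SCRATCH}/apptainer/cache"' >> ~/.bashrc
+echo 'export APPTAINER_TMPDIR="${VSC_SCRATCH}/apptainer/tmp"' >> ~/.bashrc
+```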
+
+Lastly, note that this config file currently uses the Singularity engine rather than the Apptainer one (see the [`singularity` directive: `enabled = true`](https://www.nextflow.io/docs/latest/config.html#scope-singularity)). The reason is that, for the time being, using the Apptainer engine in nf-core pipelines will result in Docker images being pulled and converted to Apptainer ones, rather than making use of pre-built Singularity images (see the [nf-core documentation](https://nf-co.re/docs/usage/installation#pipeline-software)). Conversely, when making use of the Singularity engine, pre-built images are downloaded, and Apptainer will still be used in the background for running them, since the installation of `apptainer` by default creates an alias for `singularity` (and this is also the case on CalcUA).
+
+## Troubleshooting
+
+For general errors when pulling images, try clearing out the existing caches located in `$VSC_SCRATCH/apptainer`.
+
+### Failed to pull Singularity image
+
+```
+FATAL: While making image from oci registry: error fetching image to cache: while building SIF from
+layers: conveyor failed to get: while getting config: no descriptor found for reference
+"139610e0c1955f333b61f10e6681e6c70c94357105e2ec6f486659dc61152a21"
+```
+
+Errors similar to the one above can be avoided by downloading all required container images manually before running the pipeline. They seem to be caused by parallel downloads overwhelming the image repository (see [this issue](https://github.com/apptainer/singularity/issues/5020)).
+
+To download a pipeline's required images, use `nf-core download --container-system singularity`. See the [nf-core docs](https://nf-co.re/tools#downloading-pipelines-for-offline-use) for more info.
diff --git a/docs/vsc_ugent.md b/docs/vsc_ugent.md
index b1d4fd9b5..6deb6ecf7 100644
--- a/docs/vsc_ugent.md
+++ b/docs/vsc_ugent.md
@@ -2,6 +2,20 @@
> **NB:** You will need an [account](https://www.ugent.be/hpc/en/access/faq/access) to use the HPC cluster to run the pipeline.
+Regarding environment variables in `~/.bashrc`, make sure you have a setup similar to the one below. If you're not already part of a VO, ask for one or use `VSC_DATA_USER` instead of `VSC_DATA_VO_USER`.
+
+```bash
+# Needed for Tier1 accounts, not for Tier2
+export SLURM_ACCOUNT={FILL_IN_NAME_OF_YOUR_ACCOUNT}
+export SALLOC_ACCOUNT=$SLURM_ACCOUNT
+export SBATCH_ACCOUNT=$SLURM_ACCOUNT
+# Needed for running Nextflow jobs
+export NXF_HOME=$VSC_DATA_VO_USER/.nextflow
+# Needed for running Apptainer containers
+export APPTAINER_CACHEDIR=$VSC_DATA_VO_USER/.apptainer/cache
+export APPTAINER_TMPDIR=$VSC_DATA_VO_USER/.apptainer/tmp
+```
+
First you should go to the cluster you want to run the pipeline on. You can check what clusters have the most free space on this [link](https://shieldon.ugent.be:8083/pbsmon-web-users/). Use the following commands to easily switch between clusters:
```shell
@@ -30,6 +44,77 @@
To submit your job to the cluster by using the following command:
qsub