Initial commit
MathiasEskildsen authored Jan 30, 2024
0 parents commit b135164
Showing 29 changed files with 356 additions and 0 deletions.
18 changes: 18 additions & 0 deletions .github/workflows/conventional-prs.yml
@@ -0,0 +1,18 @@
name: PR
on:
  pull_request_target:
    types:
      - opened
      - reopened
      - edited
      - synchronize

jobs:
  title-format:
    runs-on: ubuntu-latest
    steps:
      - uses: amannn/[email protected]
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          validateSingleCommit: true
54 changes: 54 additions & 0 deletions .github/workflows/main.yml
@@ -0,0 +1,54 @@
name: Tests

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]


jobs:
  Formatting:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Formatting
        uses: github/super-linter@v4
        env:
          VALIDATE_ALL_CODEBASE: false
          DEFAULT_BRANCH: main
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          VALIDATE_SNAKEMAKE_SNAKEFMT: true

  Linting:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Lint workflow
        uses: snakemake/[email protected]
        with:
          directory: .
          snakefile: workflow/Snakefile
          args: "--lint"

  Testing:
    runs-on: ubuntu-latest
    needs:
      - Linting
      - Formatting
    steps:
      - uses: actions/checkout@v2

      - name: Test workflow
        uses: snakemake/[email protected]
        with:
          directory: .test
          snakefile: workflow/Snakefile
          args: "--use-conda --show-failed-logs --cores 3 --conda-cleanup-pkgs cache --all-temp"

      - name: Test report
        uses: snakemake/[email protected]
        with:
          directory: .test
          snakefile: workflow/Snakefile
          args: "--report report.zip"
17 changes: 17 additions & 0 deletions .github/workflows/release-please.yml
@@ -0,0 +1,17 @@
on:
  push:
    branches:
      - main

name: release-please

jobs:
  release-please:
    runs-on: ubuntu-latest
    steps:
      - uses: GoogleCloudPlatform/release-please-action@v2
        id: release
        with:
          release-type: go # just keep a changelog, no version anywhere outside of git tags
          package-name: <repo>
5 changes: 5 additions & 0 deletions .gitignore
@@ -0,0 +1,5 @@
results/**
resources/**
logs/**
.snakemake
.snakemake/**
11 changes: 11 additions & 0 deletions .snakemake-workflow-catalog.yml
@@ -0,0 +1,11 @@
# configuration of display in snakemake workflow catalog: https://snakemake.github.io/snakemake-workflow-catalog

usage:
  mandatory-flags: # optional definition of additional flags
    desc: # describe your flags here in a few sentences (they will be inserted below the example commands)
    flags: # put your flags here
  software-stack-deployment: # definition of software deployment method (at least one of conda, singularity, or singularity+conda)
    conda: true # whether pipeline works with --use-conda
    singularity: false # whether pipeline works with --use-singularity
    singularity+conda: false # whether pipeline works with --use-singularity --use-conda
  report: true # add this to confirm that the workflow allows to use 'snakemake --report report.zip' to generate a report containing all results and explanations
1 change: 1 addition & 0 deletions .template/config/config.yaml.tmpl.tmpl
@@ -0,0 +1 @@
# add template configuration file (may use variables from copier.yml)
10 changes: 10 additions & 0 deletions .template/workflow/Snakefile.tmpl.tmpl
@@ -0,0 +1,10 @@
configfile: "config/config.yaml"

module [[ module_name ]]:
    snakefile:
        # TODO replace <release> with desired release
        "https://github.com/[[ owner ]]/[[ repo ]]/raw/<release>/Snakefile"
    config:
        config

use rule * from [[ module_name ]]
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2021, AUTHORS

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
22 changes: 22 additions & 0 deletions README.md
@@ -0,0 +1,22 @@
# Snakemake workflow: `<snakemake_template>`

[![Snakemake](https://img.shields.io/badge/snakemake-≥7.18.2-brightgreen.svg)](https://snakemake.github.io)
[![GitHub actions status](https://github.com/<owner>/<repo>/workflows/Tests/badge.svg?branch=main)](https://github.com/<owner>/<repo>/actions?query=branch%3Amain+workflow%3ATests)

This is a (working) template repository designed for scientific projects where data is processed using [Snakemake](https://snakemake.readthedocs.io/).

## Requirements
All required tools are automatically installed by Snakemake using conda environments or singularity/apptainer containers; Snakemake itself, however, must be installed first. Load a software module that provides Snakemake, use a native install, or create a conda environment for this particular project from the `environment.yml` file, e.g. `mamba env create -n <snakemake_template> -f environment.yml`.

## Usage
Adjust the `config.yaml` files under both `config/` and `profiles/` accordingly, then simply run `snakemake --profile profiles/<subfolder>` or submit a SLURM job using the `slurm_submit.sbatch` example script.
The usage of this workflow is also described in the [Snakemake Workflow Catalog](https://snakemake.github.io/snakemake-workflow-catalog/?usage=<owner>%2F<repo>).

# TODO
* Replace `<owner>` and `<repo>` with the correct values in this `README.md` as well as in files under `.github/workflows/`.
* Replace `<snakemake_template>` with the workflow/project name (can be the same as `<repo>`) here as well as in the `environment.yml` and `slurm_submit.sbatch` files.
* Add more requirements to the `environment.yml` file if needed; tools for each Snakemake rule, however, should **NOT** go here. Configure them per rule instead, in `yaml` files under `envs/`.
* Fill in fields in this `README.md` file, in particular provide a proper description of what the workflow does with any relevant details and configuration.
* The workflow will appear in the public Snakemake workflow catalog once the repository has been made public and the provided GitHub actions finish successfully. The link under "Usage" will then point to the usage instructions, provided `<owner>` and `<repo>` were set correctly. If you don't want to publish the workflow, just delete the `.github/workflows/` and `.template/` folders and the `.snakemake-workflow-catalog.yml` file.
* Consider the license - [Choose a license](https://choosealicense.com/)
* DELETE this **TODO** section when finished with all of the above, and then start developing your workflow!
Empty file added analysis/.gitkeep
Empty file.
2 changes: 2 additions & 0 deletions config/README.md
@@ -0,0 +1,2 @@
Describe how to configure the workflow (using `config.yaml` and possibly additional files).
All of these files must be present in the config folder, with example entries.
17 changes: 17 additions & 0 deletions config/config.yaml
@@ -0,0 +1,17 @@
# This file is for various variables used throughout the workflow.
# Everything else regarding how things are run should go in the profile config

output_dir: "results"
# The input folder is expected to contain a subfolder for each sample ID/barcode;
# all fastq files in each subfolder are then concatenated, and the folder name
# is used as the sample ID downstream
input_dir: "data/samples/"
tmp_dir: "tmp"
db_path: "/databases/midas/MiDAS5.2_20231221/output/FLASVs.fa"
log_dir: "logs"

# Number of threads to use for individual rules.
# Not ideal, but threads can be set in multiple places, so the best approach is
# to set this to a large number and instead adjust max-threads (per rule)
# in the profile config.yaml to suit your particular computing setup.
max_threads: 128
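How these config values are consumed can be sketched as follows. This is a hypothetical helper, not part of the workflow: the Snakefile treats each subfolder of `input_dir` as one sample and expects one `.sam` file per sample under `output_dir`.

```python
import os
import tempfile

# Example values mirroring the config file above.
config = {"output_dir": "results", "input_dir": "data/samples/"}

def sample_targets(config, input_dir=None):
    """Derive sample IDs from subfolder names and build the target file list."""
    input_dir = input_dir if input_dir is not None else config["input_dir"]
    samples = sorted(
        d for d in os.listdir(input_dir)
        if os.path.isdir(os.path.join(input_dir, d))  # ignore stray files
    )
    return [os.path.join(config["output_dir"], s + ".sam") for s in samples]

# Throwaway directory layout for illustration:
with tempfile.TemporaryDirectory() as tmp:
    for barcode in ("barcode01", "barcode02"):
        os.makedirs(os.path.join(tmp, barcode))
    print(sample_targets(config, input_dir=tmp))
```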
8 changes: 8 additions & 0 deletions environment.yml
@@ -0,0 +1,8 @@
name: <snakemake_template>
channels:
  - bioconda
  - conda-forge
dependencies:
  - snakemake=7.18.2
  - snakefmt # snakemake code formatter
  - graphviz=9.0.0 # to visualize the DAG
24 changes: 24 additions & 0 deletions extras/slurm-status.sh
@@ -0,0 +1,24 @@
#!/usr/bin/env bash

# Check status of Slurm job

jobid="$1"

if [[ "$jobid" == Submitted ]]
then
    echo "smk-simple-slurm: Invalid job ID: $jobid" >&2
    echo "smk-simple-slurm: Did you remember to add the flag --parsable to your sbatch call?" >&2
    exit 1
fi

output=$(sacct -j "$jobid" --format State --noheader | head -n 1 | awk '{print $1}')

if [[ $output =~ ^(COMPLETED).* ]]
then
    echo success
elif [[ $output =~ ^(RUNNING|PENDING|COMPLETING|CONFIGURING|SUSPENDED).* ]]
then
    echo running
else
    echo failed
fi
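The same state mapping, sketched in Python for clarity (the state names are taken from the script above; `job_status` is a hypothetical name): the first word of the `sacct --format State` output is collapsed into the three statuses Snakemake understands.

```python
import re

def job_status(state: str) -> str:
    """Map a Slurm job state (first word of sacct output) to a Snakemake status."""
    if re.match(r"^COMPLETED", state):
        return "success"
    if re.match(r"^(RUNNING|PENDING|COMPLETING|CONFIGURING|SUSPENDED)", state):
        return "running"
    # anything else (FAILED, TIMEOUT, CANCELLED, OUT_OF_MEMORY, ...) counts as failed
    return "failed"

print(job_status("COMPLETED"))  # → success
print(job_status("PENDING"))    # → running
print(job_status("TIMEOUT"))    # → failed
```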
38 changes: 38 additions & 0 deletions profiles/biocloud/config.yaml
@@ -0,0 +1,38 @@
#command with which to submit tasks as SLURM jobs
cluster:
  mkdir -p logs/{rule}/ &&
  sbatch
    --parsable
    --partition={resources.partition}
    --qos={resources.qos}
    --cpus-per-task={threads}
    --mem={resources.mem_mb}
    --job-name=smk-{rule}-{wildcards}
    --output=logs/{rule}/{rule}-{wildcards}-%j.out
#if rules don't have resources set, use these default values.
#Note that "mem" will be converted to "mem_mb" under the hood, so mem_mb is preferred
default-resources:
  - partition="general"
  - qos="normal"
  - threads=128
  - mem_mb=1024
  - gpu=0
#max threads per job/rule. Will take precedence over anything else. Adjust this
#before submitting to SLURM and leave thread settings elsewhere untouched
max-threads: 32
use-conda: True
use-singularity: False
conda-frontend: mamba
printshellcmds: False
jobs: 50
local-cores: 1
latency-wait: 120
restart-times: 1 #restart failed tasks this many times
max-jobs-per-second: 10 #don't touch
keep-going: True
rerun-incomplete: True
scheduler: greedy
max-status-checks-per-second: 5
cluster-cancel: scancel
#script to get job status for snakemake, unfortunately necessary
cluster-status: extras/slurm-status.sh
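A rough illustration of how the placeholders in the `cluster` command get filled per job (this is a simplified sketch, not Snakemake's actual implementation): each submitted job supplies its rule name, thread count, wildcards, and resources, and the braces are expanded with plain `str.format`.

```python
from types import SimpleNamespace

# A trimmed-down version of the sbatch command template from the profile above.
cluster_cmd = (
    "sbatch --parsable"
    " --partition={resources.partition}"
    " --qos={resources.qos}"
    " --cpus-per-task={threads}"
    " --mem={resources.mem_mb}"
    " --job-name=smk-{rule}-{wildcards}"
)

# Hypothetical job attributes for one map2db task.
job = {
    "rule": "map2db",
    "threads": 32,
    "wildcards": "sample=barcode01",
    "resources": SimpleNamespace(partition="general", qos="normal", mem_mb=10240),
}

print(cluster_cmd.format(**job))
# → sbatch --parsable --partition=general --qos=normal --cpus-per-task=32 --mem=10240 --job-name=smk-map2db-sample=barcode01
```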
29 changes: 29 additions & 0 deletions slurm_submit.sbatch
@@ -0,0 +1,29 @@
#!/usr/bin/bash -l
#SBATCH --job-name=<snakemake_template>
#SBATCH --output=job_%j_%x.out
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH --partition=general
#SBATCH --cpus-per-task=1
#SBATCH --mem=1G
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=[email protected]

# Exit on first error and if any variables are unset
set -eu

# Activate conda environment with only snakemake
conda activate <snakemake_template>

# Start workflow using resources defined in the profile. Snakemake itself
# requires almost nothing; 1 CPU + 1G mem is enough

# Render a DAG to visualize the workflow (optional)
snakemake --dag | dot -Tsvg > results/dag.svg

# Main workflow
snakemake --profile profiles/biocloud

# Generate a report once finished (optional)
snakemake --report results/report.html
20 changes: 20 additions & 0 deletions workflow/Snakefile
@@ -0,0 +1,20 @@
# Main entrypoint of the workflow.
# Please follow the best practices:
# https://snakemake.readthedocs.io/en/stable/snakefiles/best_practices.html,
# in particular regarding the standardized folder structure mentioned there.
import os
from snakemake.utils import min_version

min_version("7.18.2")

configfile: "config/config.yaml"

# list all subfolders in input_dir (stray files are ignored), one per sample
sample_dirs = [
    d
    for d in os.listdir(config['input_dir'])
    if os.path.isdir(os.path.join(config['input_dir'], d))
]

include: "rules/concatenate_fastq.smk"
include: "rules/map2db.smk"

rule all:
    input:
        expand(os.path.join(config['output_dir'], "{sample}.sam"), sample=sample_dirs)
6 changes: 6 additions & 0 deletions workflow/envs/map2db.yml
@@ -0,0 +1,6 @@
name: map2db
channels:
  - bioconda
dependencies:
  - minimap2=2.26
  - samtools=1.18
Empty file added workflow/notebooks/.gitkeep
Empty file.
Empty file added workflow/report/.gitkeep
Empty file.
20 changes: 20 additions & 0 deletions workflow/rules/concatenate_fastq.smk
@@ -0,0 +1,20 @@
import glob
import os

# helper function to list all fastq files per wildcard (subfolder/sample)
def listFastq(wildcards):
    fastqs = glob.glob(os.path.join(config['input_dir'], wildcards.sample, "*.fastq.gz"))
    return fastqs

rule concatenate_fastq:
    input:
        listFastq
    output:
        temp(os.path.join(config['tmp_dir'], "samples", "{sample}_concat.fastq.gz"))
    resources:
        mem_mb = 600
    threads: 1
    log:
        os.path.join(config["log_dir"], "concatenate_fastq", "{sample}.log")
    shell:
        "cat {input} > {output}"
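Plain `cat` is a valid way to combine `.fastq.gz` files because the gzip format allows a file to consist of several concatenated members: decompressing the combined file yields the concatenation of the original contents. A small sketch with made-up reads:

```python
import gzip

# Two tiny single-read fastq files, compressed independently.
part_a = gzip.compress(b"@read1\nACGT\n+\nIIII\n")
part_b = gzip.compress(b"@read2\nTTTT\n+\nIIII\n")

# Byte-level concatenation, i.e. what `cat a.fastq.gz b.fastq.gz` produces.
combined = part_a + part_b

# Decompressing the multi-member stream recovers both reads in order.
print(gzip.decompress(combined).decode())
```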
33 changes: 33 additions & 0 deletions workflow/rules/map2db.smk
@@ -0,0 +1,33 @@
rule map2db:
    input:
        os.path.join(config['tmp_dir'], "samples", "{sample}_concat.fastq.gz")
    output:
        os.path.join(config['output_dir'], "{sample}.sam")
    resources:
        #depending on the tool, memory usage usually scales with threads
        #and/or input/database file size. Can be calculated dynamically
        mem_mb = 10240
    threads: config['max_threads']
    params:
        db_path = config['db_path']
    conda:
        "../envs/map2db.yml"
    log:
        os.path.join(config["log_dir"], "map2db", "{sample}.log")
    shell:
        """
        minimap2 \
            -ax map-ont \
            -K20M \
            -t {threads} \
            --secondary=no \
            {params.db_path} \
            {input} \
        | samtools \
            view \
            -F 4 \
            -F 256 \
            -F 2048 \
            --threads {threads} \
            -o {output}
        """
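The `samtools view` filter above drops reads carrying any of the SAM FLAG bits 4 (unmapped), 256 (secondary alignment), or 2048 (supplementary alignment) — equivalent to a single `-F 2308`. The bit test can be sketched as (`keep_read` is a hypothetical name):

```python
# FLAG bits excluded by the samtools filter, per the SAM specification.
EXCLUDE = 4 | 256 | 2048  # == 2308

def keep_read(flag: int) -> bool:
    """Keep only primary alignments of mapped reads."""
    return flag & EXCLUDE == 0

print(keep_read(0))     # → True  (mapped, forward strand, primary)
print(keep_read(16))    # → True  (mapped, reverse strand)
print(keep_read(4))     # → False (unmapped)
print(keep_read(2048))  # → False (supplementary alignment)
```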
Empty file added workflow/scripts/.gitkeep
Empty file.
