Add in possibility for netMHCpan via path

apeltzer committed Nov 15, 2019
1 parent da14bda commit 269940d
Showing 5 changed files with 38 additions and 12 deletions.
1 change: 0 additions & 1 deletion README.md
@@ -38,7 +38,6 @@ nextflow run nf-core/epitopeprediction -profile test,<docker/singularity/conda/institute>
iv. Start running your own analysis!

<!-- TODO nf-core: Update the default command above used to run the pipeline -->
```bash
nextflow run nf-core/epitopeprediction -profile <docker/singularity/conda/institute> --reads '*_R{1,2}.fastq.gz' --genome GRCh37
```
1 change: 0 additions & 1 deletion conf/base.config
@@ -23,7 +23,6 @@ process {
// NOTE - Only one of the labels below is used in the fastqc process in the main script.
// If possible, it would be nice to keep the same label naming convention when
// adding in your processes.
// TODO nf-core: Customise requirements for specific processes.
// See https://www.nextflow.io/docs/latest/config.html#config-process-selectors
withLabel:process_low {
cpus = { check_max( 2 * task.attempt, 'cpus' ) }
5 changes: 5 additions & 0 deletions docs/usage.md
@@ -15,6 +15,7 @@
* [Reference genomes](#reference-genomes)
* [`--genome` (using iGenomes)](#genome-using-igenomes)
* [`--fasta`](#fasta)
* [`--netmhcpan`](#netmhcpan)
* [`--igenomes_ignore`](#igenomesignore)
* [Job resources](#job-resources)
* [Automatic resubmission](#automatic-resubmission)
@@ -178,6 +179,10 @@ If you prefer, you can specify the full path to your reference genome when you run the pipeline:
--fasta '[path to Fasta reference]'
```

### `--netmhcpan`

Specifies the path to an exported [netMHCpan](https://services.healthtech.dtu.dk/service.php?NetMHCIIpan) directory. The directory must already contain the required netMHCpan data files, which have to be obtained and installed separately by the user.
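
In keeping with the other path options above, the value is simply passed on the command line; a minimal sketch, using a bracketed placeholder in the same style as `--fasta`:

```bash
--netmhcpan '[path to exported netMHCpan directory]'
```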

### `--igenomes_ignore`

Do not load `igenomes.config` when running the pipeline. You may choose this option if you observe clashes between custom parameters and those supplied in `igenomes.config`.
8 changes: 4 additions & 4 deletions environment.yml
@@ -7,8 +7,8 @@ dependencies:
- python=2.7.15 #don't upgrade, FRED2 needs Python 2.7 for now
- multiqc=1.6 #can't upgrade due to Python 2 fix
- snpsift=4.3.1t
- csvtk=0.17.0
- fred2=2.0.3
- mhcflurry=1.2.4
- csvtk=0.19.1
- fred2=2.0.4
- mhcflurry=1.4.3
- mhcnuggets=2.2
- conda-forge::r-markdown=0.9
- conda-forge::r-markdown=1.1
35 changes: 29 additions & 6 deletions main.nf
@@ -31,11 +31,12 @@ def helpMessage() {
Alternative inputs:
--peptides Path to TSV file containing peptide sequences (minimum required: id and sequence column)
Options:
Pipeline options:
--filter_self Specifies that peptides should be filtered against the specified human proteome references Default: false
--wild_type Specifies that wild-type sequences of mutated peptides should be predicted as well Default: false
--mhc_class Specifies whether the predictions should be done for MHC class I or class II. Default: 1
--peptide_length Specifies the maximum peptide length Default: MHC class I: 8 to 11 AA, MHC class II: 15 to 16 AA
--netmhcpan Specifies the path to the exported netMHCpan installation directory
References If not specified in the configuration file or you wish to overwrite any of the references
--reference_genome Specifies the ensembl reference genome version (GRCh37, GRCh38) Default: GRCh37
@@ -90,6 +91,9 @@ params.reference_proteome = false
multiqc_config = file(params.multiqc_config)
output_docs = file("$baseDir/docs/output.md")

// Check if a netMHCpan installation path was supplied
if (params.netmhcpan) netmhcpan_path = file(params.netmhcpan)

ch_split_peptides = Channel.empty()
ch_split_variants = Channel.empty()

@@ -130,11 +134,6 @@ if ( params.filter_self & !params.reference_proteome ){
params.reference_proteome = file("$baseDir/assets/")
}

// AWSBatch sanity checking
if(workflow.profile == 'awsbatch'){
if (!params.awsqueue || !params.awsregion) exit 1, "Specify correct --awsqueue and --awsregion parameters on AWSBatch!"
if (!workflow.workDir.startsWith('s3') || !params.outdir.startsWith('s3')) exit 1, "Specify S3 URLs for workDir and outdir parameters on AWSBatch!"
}
//
// NOTE - THIS IS NOT USED IN THIS PIPELINE, EXAMPLE ONLY
// If you want to use the channel below in a process, define the following:
@@ -250,6 +249,30 @@ process get_software_versions {
"""
}


/*
* Prepare the netMHCpan installation so that it runs properly on HPC clusters
*/

process prepare_netmhcpan {

    input:
    file netmhcpan_path from netmhcpan_path

    output:
    file(netmhcpan_path) into netmhcpan_path_for_fred2

    when: params.netmhcpan

    script:
    """
    # Point NMHOME in the netMHCpan wrapper script at the staged directory
    sed -i "s#setenv NMHOME.*#setenv NMHOME ${netmhcpan_path}#" ${netmhcpan_path}/netMHCpan
    # Force TMPDIR to /tmp so the wrapper also works on cluster nodes
    sed -i "s#setenv TMPDIR.*#setenv TMPDIR /tmp#" ${netmhcpan_path}/netMHCpan
    """
}
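
/*
 * For reference: the two sed calls above rewrite the header of the tcsh wrapper script
 * that ships with netMHCpan. Assuming the wrapper's usual layout (illustrative values,
 * not taken from this repository), the patched lines end up roughly as:
 *
 *     setenv NMHOME /path/to/staged/netMHCpan-4.0    # the staged input directory
 *     setenv TMPDIR /tmp                             # writable on cluster compute nodes
 */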

/*
* STEP 1 - Split variant data
*/
