# nf-core/configs: EBI Codon Cluster SLURM Configuration

All nf-core pipelines have been successfully configured for use on the SLURM login nodes of the codon cluster at the European Bioinformatics Institute.

To use, run the pipeline with `-profile ebi_codon_slurm`. This will download and launch the `ebi_codon_slurm.config` which has been pre-configured with a setup suitable for the codon cluster.

You should not run Nextflow on the login nodes; instead, submit a batch job that executes Nextflow, as in the sketch below.
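As a minimal sketch (the job name, wall time, and memory values are illustrative assumptions rather than cluster requirements), a submission script might look like this and be submitted with `sbatch run_nextflow.sh`:

```bash
#!/bin/bash
#SBATCH --job-name=nextflow-master
#SBATCH --time=48:00:00
#SBATCH --mem=4G

## Load Nextflow and Singularity (see the next section)
module purge
module load nextflow/22.10.1
module load singularityce/3.10.3

## Launch the pipeline with the codon SLURM profile;
## replace <pipeline> with the nf-core pipeline you want to run
nextflow run nf-core/<pipeline> -profile ebi_codon_slurm
```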

## Loading the required modules

Before running the pipeline you will need to load Nextflow and Singularity using the environment module system on the codon cluster. You can do this by issuing the commands below:

```bash
## Load Nextflow and Singularity environment modules
module purge
module load nextflow/22.10.1
module load singularityce/3.10.3
```

You may want to add those module load commands to your shell configuration file if you use them often.
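For example, assuming a bash login shell, the module load lines could be appended to `~/.bashrc`:

```bash
## Append the module load commands to ~/.bashrc (assumes a bash login shell)
cat >> ~/.bashrc <<'EOF'
module load nextflow/22.10.1
module load singularityce/3.10.3
EOF
```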

## Installing mamba

Run the following:

```bash
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-$(uname)-$(uname -m).sh"
bash Mambaforge-$(uname)-$(uname -m).sh
```

Follow the instructions in the [miniforge documentation](https://github.com/conda-forge/miniforge) for more details.
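Assuming you accepted the installer's defaults (including shell initialisation), a quick check that mamba is available could be:

```bash
## Open a new shell (or re-source your shell configuration), then verify the installation
source ~/.bashrc
mamba --version
```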

## Setting up a suitable path for the Nextflow software cache

It is recommended to install conda environments and Singularity containers in your `/hps/software` directory. To achieve this, add the following lines to your `~/.nextflow/config` file:

```groovy
singularity.cacheDir = "/hps/software/users/<group>/<user_id>/nextflow_software_cache/singularity"
conda.cacheDir = "/hps/software/users/<group>/<user_id>/nextflow_software_cache/conda"
```
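A sketch of setting this up from the shell, using `mygroup` and `jdoe` as hypothetical placeholders for your group and user ID:

```bash
## mygroup and jdoe are placeholders; substitute your own group and user ID
CACHE_ROOT="/hps/software/users/mygroup/jdoe/nextflow_software_cache"
mkdir -p "$CACHE_ROOT/singularity" "$CACHE_ROOT/conda"

## Append the cache paths to ~/.nextflow/config
mkdir -p ~/.nextflow
cat >> ~/.nextflow/config <<EOF
singularity.cacheDir = "$CACHE_ROOT/singularity"
conda.cacheDir = "$CACHE_ROOT/conda"
EOF
```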