Updated user documentation and user guide links for SHU BMRC #786

Merged · 4 commits · Nov 1, 2024
2 changes: 1 addition & 1 deletion conf/shu_bmrc.config
@@ -14,7 +14,7 @@ params {

config_profile_description = 'Sheffield Hallam University - BMRC HPC'
config_profile_contact = 'Dr Lewis A Quayle (l.quayle@shu.ac.uk)'
-    config_profile_url = 'https://openflighthpc.org/latest/docs/'
+    config_profile_url = 'https://bmrc-hpc-documentation.readthedocs.io/en/latest/'

}

87 changes: 81 additions & 6 deletions docs/shu_bmrc.md
@@ -1,13 +1,88 @@
# nf-core/configs: Sheffield Hallam University BMRC Cluster Configuration

This document provides guidelines for using nf-core pipelines on Sheffield Hallam University's BMRC High-Performance Computing (HPC) cluster. The custom configuration file for this cluster enables optimised resource usage and workflow compatibility within the BMRC HPC environment, facilitating efficient execution of nf-core workflows.

---

## Table of Contents

1. [Introduction](#introduction)
2. [Requirements](#requirements)
3. [Configuration Details](#configuration-details)
4. [Usage](#usage)
5. [Troubleshooting](#troubleshooting)
6. [Support and Contact](#support-and-contact)

---

## Introduction

This configuration file is specifically designed for running nf-core workflows on the BMRC HPC cluster at **Sheffield Hallam University**. The configuration integrates optimal resource parameters and scheduling policies to ensure efficient job execution on the cluster, aligning with internal HPC policies and specifications.

The cluster configuration:

- **Location**: BMRC HPC Cluster at Sheffield Hallam University
- **Contact**: Dr Lewis A Quayle ([l.quayle@shu.ac.uk](mailto:l.quayle@shu.ac.uk))
- **Documentation**: [BMRC HPC Documentation](https://bmrc-hpc-documentation.readthedocs.io/en/latest/)

## Requirements

To use this configuration, you must have:

- **Access to BMRC HPC**: Ensure your user account is enabled for HPC access at Sheffield Hallam University. The **GlobalProtect VPN** is required for remote access. For setup instructions, refer to [SHU VPN Guide](https://www.shu.ac.uk/digital-skills/programs-and-applications/virtual-private-network-vpn).
- **Nextflow**: Version 20.10.0 or later is recommended for optimal compatibility.

For a detailed guide to setting up Nextflow and running nf-core pipelines on the BMRC cluster, refer to [Running nf-core Pipelines on SHU BMRC Cluster](https://bmrc-hpc-documentation.readthedocs.io/en/latest/nfcore/index.html).
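
Before launching a pipeline, it is worth confirming that a suitable Nextflow is available on your path. A quick check (assuming Nextflow and its Java runtime are already installed in your environment):

```bash
# Print the installed Nextflow version and build details;
# confirm it reports 20.10.0 or later
nextflow -version
```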

## Configuration Details

The configuration has been tailored for the BMRC HPC, providing preset values for CPUs, memory, and scheduling to align with HPC policies.

### Core Configuration

- **Cluster Scheduler**: `slurm`
- **Max Retries**: 2 (automatically reattempts failed jobs)
- **Queue Size**: 50 jobs
- **Submit Rate Limit**: 1 job per second
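
In Nextflow configuration syntax, the settings above correspond to something like the following. This is an illustrative sketch, not a verbatim copy of [`shu_bmrc.config`](../conf/shu_bmrc.config); the actual file may express the same values differently:

```groovy
// Scheduler and retry settings mirroring the list above (sketch only)
process {
    executor   = 'slurm'
    maxRetries = 2
}

executor {
    queueSize       = 50      // at most 50 jobs queued/running at once
    submitRateLimit = '1sec'  // submit at most one job per second
}
```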

### Resource Allocation

Each nf-core workflow will automatically receive the following default resource maxima:

| Resource | Setting |
| -------- | --------- |
| CPUs | 64 |
| Memory | 1007 GB |
| Time | 999 hours |
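
Following the usual nf-core institutional-config convention, these caps would be expressed as pipeline parameters along the lines of the sketch below (again illustrative, not necessarily the literal contents of the config file):

```groovy
// Upper bounds applied to per-process resource requests (sketch only)
params {
    max_cpus   = 64
    max_memory = '1007.GB'
    max_time   = '999.h'
}
```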

### Container Support

The configuration supports Apptainer for containerised workflows, with automatic mounting enabled, allowing seamless access to necessary filesystems within containers.
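
In Nextflow configuration terms, this behaviour maps onto the `apptainer` scope, roughly as follows (sketch only):

```groovy
// Enable Apptainer with automatic bind mounts of host paths (sketch only)
apptainer {
    enabled    = true
    autoMounts = true
}
```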

### Cleanup

Intermediate files from successful runs will be automatically deleted to free up storage.
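
Nextflow exposes this behaviour through a single top-level option, which the profile likely sets along these lines (sketch only):

```groovy
// Delete work-directory intermediates after a successful run (sketch only)
cleanup = true
```

Note that with cleanup enabled, a failed or interrupted run can still be resumed, but a completed run's intermediates are gone, so `-resume` will not help if you later change a parameter and rerun.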

## Usage

To launch an nf-core pipeline on the BMRC cluster using the `shu_bmrc` profile:

```bash
nextflow run nf-core/<pipeline_name> -profile shu_bmrc
```
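
A slightly fuller invocation might pin the pipeline version, direct intermediate files to scratch storage, and resume a previous run. The pipeline name, version, and work directory below are placeholders to substitute, not values supplied by the profile:

```bash
# Placeholder pipeline, version, and path — substitute your own
nextflow run nf-core/<pipeline_name> \
    -r <version> \
    -profile shu_bmrc \
    -work-dir /path/to/scratch/work \
    -resume
```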

## Troubleshooting

If you encounter issues, ensure you have:

- Followed the user guide on the BMRC HPC documentation site (see below).
- Specified the correct profile (`shu_bmrc`) for the cluster.
- Checked for sufficient permissions on the BMRC HPC cluster.
- Verified that Apptainer is enabled and accessible within your environment.

## Support and Contact

For support or questions, contact:

- **Primary Contact**: Dr Lewis A Quayle ([l.quayle@shu.ac.uk](mailto:l.quayle@shu.ac.uk))
- **BMRC HPC Documentation**: [Link](https://bmrc-hpc-documentation.readthedocs.io/en/latest/)