Unable to change time limit with custom config #1533
Comments
Could I just check: are you using the `slurm` executor? https://www.nextflow.io/docs/latest/executor.html#slurm

Yes, I am using Slurm to submit jobs on my server.
Hi Peter, can you try with this configuration:

process {
    withName: 'NFCORE_RNASEQ:PREPARE_GENOME:STAR_GENOMEGENERATE.*' {
        time = 24.h
    }
    withName: 'NFCORE_RNASEQ:RNASEQ:FASTQ_QC_TRIM_FILTER_SETSTRANDEDNESS:BBMAP_BBSPLIT.*' {
        time = 24.h
    }
}

Note that the selectors use regular expressions rather than glob patterns, so you'll need `.*` rather than a bare `*` to match any trailing characters.
Thank you. I did understand you were using Slurm to submit jobs. However, sometimes people submit Nextflow jobs to Slurm clusters without setting the `slurm` executor in their configuration, with the result that all jobs end up running local to the main Nextflow job, which bypasses the mechanisms Nextflow uses to apply time limits to requests (as well as playing havoc with resource usage on that machine). I just wanted to check you were applying that configuration.
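For reference, that setting is a one-liner in a Nextflow config file. A minimal sketch, assuming a standard Slurm setup (the commented queue name is a hypothetical placeholder, not from this thread):

process {
    // submit each task to Slurm rather than running it locally on the head node
    executor = 'slurm'
    // queue = 'compute'   // optional; substitute your cluster's partition name
}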
Hi, thanks for pointing that out. I added the executor setting and tried to run the command with it, but the jobs would not submit from the node I was running on.
OK, sounds like your nodes are not submission hosts. I would talk to your cluster admins and explain this use case: using a workflow manager to distribute jobs to worker nodes. They may recommend using the head node, or a specific node, for this. I would not recommend 'local' operation; you have a cluster at your disposal, and these workflows are really optimised for distributed operation, where you send jobs out to nodes.
Description of the bug
I'm running the pipeline with Singularity, but for some reason NFCORE_RNASEQ:PREPARE_GENOME:STAR_GENOMEGENERATE ({fasta file}) reports an error caused by `Process exceeded running time limit (1h)`. The same thing happened to NFCORE_RNASEQ:RNASEQ:FASTQ_QC_TRIM_FILTER_SETSTRANDEDNESS:BBMAP_BBSPLIT ({sample name}).

I've looked into the main.nf in nf-core/rnaseq/modules/nf-core/star/genomegenerate and nf-core/rnaseq/modules/nf-core/bbmap/bbsplit/, and both showed that the tasks were labeled with 'process_high', which should put the job running time at 10+ hours. I've also added a custom config file to target these two tasks specifically, but it did not fix the problem. Any help is appreciated.
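For context, the `process_high` label is defined in the pipeline's conf/base.config. A sketch of what such a definition typically looks like in nf-core pipelines follows; the exact figures vary by pipeline and release, so the values below are illustrative assumptions, not quoted from this rnaseq version:

process {
    withLabel: process_high {
        // illustrative defaults; the actual values live in conf/base.config
        cpus   = { 12    * task.attempt }
        memory = { 72.GB * task.attempt }
        time   = { 16.h  * task.attempt }
    }
}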
@marchoeppner @lbeltrame @robsyme @tucano
Command used and terminal output
Relevant files
params file nextflow_params.yaml:

input: "/home/bow012/nextflow_RNAseq_test.csv"
outdir: "/expanse/lustre/scratch/bow012/temp_project/RNAseq_PCSD13_nextflow_results"
gtf: "/home/bow012/nextflow_reference/Homo_sapiens.GRCh38.113.gtf.gz"
fasta: "/home/bow012/nextflow_reference/Homo_sapiens.GRCh38.dna_sm.primary_assembly.fa.gz"
custom config file RNAseq.config:

process {
    withName: 'NFCORE_RNASEQ:PREPARE_GENOME:STAR_GENOMEGENERATE*' {
        time = 24.h
    }
    withName: 'NFCORE_RNASEQ:RNASEQ:FASTQ_QC_TRIM_FILTER_SETSTRANDEDNESS:BBMAP_BBSPLIT*' {
        time = 24.h
    }
}
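As an aside (not part of the original report), one way to check whether a time override actually took effect is Nextflow's trace report: its `time` field records the limit each task was granted, alongside the observed runtime. A minimal sketch that could be appended to the same config file:

trace {
    enabled = true
    // 'time' = requested limit; 'duration'/'realtime' = observed runtimes
    fields  = 'name,status,exit,time,duration,realtime'
}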
System information
Nextflow version: 24.10.5 (build 5935)
Slurm is used for job submission.
Submission format:
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --time=48:00:00
#SBATCH --mem=200GB