This repository has been archived by the owner on Jan 27, 2020. It is now read-only.

RunHaplotypecaller fails on scratch #471

Closed
szilvajuhos opened this issue Oct 4, 2017 · 10 comments

@szilvajuhos (Collaborator) commented Oct 4, 2017

When running on /scratch, the .bed files are missing. It works when NXF_WORK points to a local directory:

export NXF_TEMP=/scratch
export NXF_LAUNCHBASE=/scratch
export NXF_WORK=/scratch
export NXF_HOME=/castor/project/proj_nobackup/nextflow
nextflow run /castor/project/proj_nobackup/CAW/default/main.nf --step skipPreprocessing --tools HaplotypeCaller --sample Preprocessing/Recalibrated/recalibrated.tsv
...
  ##### ERROR
  ##### ERROR MESSAGE: Could not read file chr14_16133336-16140527.bed because The interval file does not exist.
  ##### ERROR ------------------------------------------------------------------------------------------

Changing NXF_WORK to `pwd`/work fixes the issue.
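
A minimal sketch of the workaround, assuming a bash shell (the nextflow command is the one quoted above):

    # Point NXF_WORK at a directory on the shared filesystem instead of /scratch
    export NXF_WORK=$(pwd)/work
    nextflow run /castor/project/proj_nobackup/CAW/default/main.nf \
        --step skipPreprocessing --tools HaplotypeCaller \
        --sample Preprocessing/Recalibrated/recalibrated.tsv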

@szilvajuhos szilvajuhos added the bug label Oct 4, 2017
@szilvajuhos szilvajuhos added this to the Version 1.2 milestone Oct 4, 2017
@marcelm (Member) commented Oct 4, 2017

Please give me a way to reproduce this. I run with NXF_WORK on /scratch and it works for me.

@szilvajuhos (Collaborator, Author) commented Oct 5, 2017

Hi, I created a directory at /proj/b2015110/nobackup/szilva/CAW471 with test data. There is a script, which is called like this:

szilva@m2 /proj/b2015110/nobackup/szilva/CAW471 $ ./runTools.sh chr21_test.tsv HaplotypeCaller

When NXF_WORK is set to a local directory (`pwd`/work), it works fine. When it is set to /scratch, it fails;
see HC.log.

@szilvajuhos (Collaborator, Author)

It also seems that the returnMin() part always returns totalMemory from the configuration: compareTo() expects integers, but the values being compared are of different types. As a result, only a single HaplotypeCaller is launched, albeit with a lot of memory.
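
A hypothetical Groovy sketch of the suspected bug; returnMin() here and the variable names are assumptions for illustration, not the actual CAW code:

    // If the operands have different types (e.g. String vs Integer),
    // compareTo() throws a cast error, so the "minimum" effectively
    // always resolves to totalMemory.
    def returnMin(a, b) {
        a.compareTo(b) <= 0 ? a : b
    }

    def totalMemory = '110 GB'   // memory setting from the configuration, a String
    def taskMemory  = 16         // a plain Integer
    // returnMin(totalMemory, taskMemory)  // fails: incomparable types

    // Normalizing both sides to the same numeric type makes the comparison valid:
    long totalGB = 110L
    long taskGB  = 16L
    assert returnMin(totalGB, taskGB) == taskGB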

@marcelm (Member) commented Oct 5, 2017

It also seems that the returnMin() part always returns totalMemory from the configuration.

Please open a new issue for that. This change was not made by me.

@marcelm (Member) commented Oct 5, 2017

I believe the problem is that the directory /scratch/ is missing in the Singularity images.
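
For context, a minimal sketch of how the mount point could be added at image build time; this is a hypothetical fragment of a Singularity definition file, not the actual CAW recipe:

    %post
        # Create the mount point so the host's /scratch can be bound
        # into the container at runtime.
        mkdir -p /scratch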

@szilvajuhos (Collaborator, Author)

Indeed! @maxulysse is to blame, see #473.

@marcelm (Member) commented Oct 5, 2017

:-)

@marcelm (Member) commented Oct 5, 2017

Note that the error message that you gave above:

ERROR MESSAGE: Could not read file chr14_16133336-16140527.bed because The interval file does not exist.

... is different from the one in the HC.log file:

Command error:
  /bin/bash: line 0: cd: /scratch/99/7fe4ac946d6830811079006f7b723a: No such file or directory
  /bin/bash: .command.run.1: No such file or directory

The one in HC.log appears to be caused by the missing /scratch/ directory. The first one is printed by HaplotypeCaller directly. Perhaps they are different issues. Or did you upgrade Nextflow in between?

@maxulysse (Member)

I'm fixing the /scratch/ issue.
In my opinion the problem is mainly that UPPMAX did not ask us to create the right directory.
But I am to blame for the other issue...

@maxulysse (Member)

So creating /scratch in the containers is not enough.
I think this should finally be resolved by adding runOptions = "--bind /scratch" to the singularity scope in the singularity-path.config file.
My guess is that /pica, /proj and /sw are automatically bound on UPPMAX systems, but /scratch is not, so creating the directory in the container was not enough to do the trick.
I have asked support to bind /scratch as well; while waiting for that, runOptions = "--bind /scratch" will do the trick.
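
A minimal sketch of the proposed change; only the runOptions line comes from this thread, and the enabled line is an assumed part of the surrounding scope:

    // singularity-path.config: bind the host's /scratch into the container
    singularity {
        enabled    = true              // assumed to be set already in this file
        runOptions = "--bind /scratch" // the fix proposed above
    }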
