
Error: no such file or directory during spatial normalisation #384

Closed
jokedurnez opened this issue Feb 10, 2017 · 5 comments

@jokedurnez
Contributor

Spatial normalization seems to have failed for two subjects. I got the following error trace after a few minutes:

170210-02:48:30,602 workflow ERROR:
	 [u'Node MNIApplyMask failed to run on host sh-1-22.local.']
170210-02:48:30,603 workflow INFO:
	 Saving crash info to /scratch/PI/russpold/data/psychosis/05_mriqc/out/logs/crash-20170210-024830-jdurnez-MNIApplyMask-c23067f9-b734-49c8-9c90-b4a1e566cc43.pklz
170210-02:48:30,603 workflow INFO:
	 Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python2.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 52, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.py", line 370, in run
    shutil.move(hashfile_unfinished, hashfile)
  File "/usr/local/miniconda/lib/python2.7/shutil.py", line 302, in move
    copy2(src, real_dst)
  File "/usr/local/miniconda/lib/python2.7/shutil.py", line 130, in copy2
    copyfile(src, dst)
  File "/usr/local/miniconda/lib/python2.7/shutil.py", line 82, in copyfile
    with open(src, 'rb') as fsrc:
IOError: [Errno 2] No such file or directory: u'/scratch/PI/russpold/data/psychosis/05_mriqc/work/workflow_enumerator/funcMRIQC/SpatialNormalization/MNIApplyMask/_0x8ca483cd43f00defaa8ca19a9e4c0610_unfinished.json'
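The failing call is Nipype's `shutil.move` of the node hashfile. A minimal illustration (assumed scenario, not actual MRIQC/Nipype code) of why the losing worker sees `Errno 2` when two processes share a working directory and race to promote the same `*_unfinished.json` hashfile:

```python
# Illustration only: "worker 1" and "worker 2" both try to move the same
# unfinished hashfile; the second move finds the source already gone.
import errno
import os
import shutil
import tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "hash_unfinished.json")
dst = os.path.join(tmp, "hash.json")
open(src, "w").close()

shutil.move(src, dst)            # "worker 1" wins the race
caught = None
try:
    shutil.move(src, dst)        # "worker 2": source no longer exists
except OSError as err:           # IOError on Python 2, OSError on Python 3
    caught = err.errno

print(caught == errno.ENOENT)    # → True: the [Errno 2] seen in the trace
```

This is why the workaround below (never sharing a working directory between concurrent runs) removes the failure mode entirely.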

The job continued for 3.5 more hours, but was then stopped with this:

170210-06:17:18,104 workflow ERROR:
	 could not run node: workflow_enumerator.funcMRIQC.SpatialNormalization.MNIApplyMask
170210-06:17:18,105 workflow INFO:
	 crashfile: /scratch/PI/russpold/data/psychosis/05_mriqc/out/logs/crash-20170210-024830-jdurnez-MNIApplyMask-c23067f9-b734-49c8-9c90-b4a1e566cc43.pklz
170210-06:17:18,105 workflow INFO:

Full logs: mriqc_out.txt and stderr.txt

@oesteban oesteban added the bug label Feb 10, 2017
@oesteban oesteban self-assigned this Feb 10, 2017
@oesteban oesteban added this to the MRIQC 1.0.0 milestone Feb 10, 2017
@chrisgorgo
Collaborator

Potential fix proposed in nipy/nipype#1750

@oesteban
Member

oesteban commented Mar 7, 2017

Hi @jokedurnez, could you confirm whether this was happening with --n_procs 1?

@oesteban
Member

oesteban commented Mar 7, 2017

I think we got this: nipy/nipype#1750 (comment)

@chrisgorgo
Collaborator

The current workaround for @jokedurnez and @IanEisenberg is to specify a different working directory (-w) each time you call mriqc, especially when running multiple subjects in parallel on a cluster.
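A sketch of that workaround (paths, subject labels, and loop structure are hypothetical; only the `-w` and `--participant_label` flags are taken from the discussion above):

```shell
#!/bin/sh
# Give each mriqc invocation its own working directory so concurrent
# runs never share Nipype's node cache.
set -eu
BIDS_DIR=/data/bids          # hypothetical BIDS dataset
OUT_DIR=/tmp/mriqc_demo_out  # hypothetical output directory
for SUB in 01 02; do
  WORKDIR="${OUT_DIR}/work_sub-${SUB}"   # unique -w per subject/run
  mkdir -p "$WORKDIR"
  # Echoed rather than executed, so the sketch stands alone without mriqc:
  echo mriqc "$BIDS_DIR" "$OUT_DIR" participant \
       --participant_label "$SUB" -w "$WORKDIR"
done
```

On a cluster, appending a job ID or timestamp to `WORKDIR` gives the same guarantee when the same subject is submitted more than once.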

@jokedurnez
Contributor Author

I hadn't specified --n_procs.

Thanks for the workaround; I will try that.

oesteban added a commit to oesteban/mriqc that referenced this issue Mar 7, 2017
Do not mask T2 template always, cache the result so we remove
the conflicting node.

Fixes nipreps#384