
inception mode: running singularity containers in singularity containers #20

Closed
stebo85 opened this issue Nov 29, 2020 · 7 comments
stebo85 commented Nov 29, 2020

I know it's possible, but it's not working yet. This is how far I got:
https://colab.research.google.com/drive/1WmXStJpD0JTWePCYzK6nLQJ9Kq8Z40dY?usp=sharing

stebo85 commented Jan 7, 2021

Works now :) The trick was to use Singularity 3.7.0 and a sandbox directory:
```
singularity build --sandbox neurodesk_20210106 neurodesk_20210106.sif
```
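For context, a sketch of how the sandbox can then be used for the nesting itself (the `--userns` flag and the inner image name are assumptions, not from the thread):

```shell
# Enter the unpacked sandbox built above; unprivileged user namespaces
# avoid relying on the setuid helper inside the outer container
singularity shell --userns --writable neurodesk_20210106

# From a shell inside the outer container, start the inner container
# ("tool.sif" is a placeholder for an image available inside the sandbox)
singularity shell tool.sif
```

This is a command walkthrough requiring a Singularity installation, so treat it as a sketch rather than a tested recipe.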

civier commented Jan 7, 2021

So cool!
Do you know which Linux and Singularity versions Colab runs?
The problem with HPCs is that RHEL has only recently started supporting user namespaces, and even when supported, they are often not enabled due to security concerns.
Can you test it on the UQ HPC?

stebo85 commented Jan 7, 2021

Correct, this doesn't work on most HPCs yet; it's a future feature :) But it works nicely in our VNM Docker container.

You can test if your HPC supports it:

```
curl -X GET https://swift.rc.nectar.org.au:8888/v1/AUTH_d6165cc7b52841659ce8644df1884d5e/singularityImages/neurodesk_20210107.sif -O
curl -X GET https://swift.rc.nectar.org.au:8888/v1/AUTH_d6165cc7b52841659ce8644df1884d5e/singularityImages/itksnap_3.8.0_20200811.sif -O

singularity shell neurodesk_20210107.sif
singularity shell itksnap_3.8.0_20200811.sif
```

Cheers
Steffen
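Whether nesting works on a given HPC mostly comes down to unprivileged user namespaces, as discussed above. A quick Singularity-independent probe (assumes the util-linux `unshare` tool is installed, which it is on most Linux distributions):

```shell
# Probe for unprivileged user namespace support, the kernel feature
# that rootless/nested Singularity relies on
if unshare --user --map-root-user true 2>/dev/null; then
    echo "user namespaces: OK"
else
    echo "user namespaces: unavailable"
fi
```

If this prints "unavailable", the kernel either lacks user namespace support or has it disabled, and the nested-container approach will not work for unprivileged users on that node.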

stebo85 commented Feb 3, 2021

This could be interesting: https://github.com/cvmfs/cvmfsexec

Discussion from the Singularity mailing list:

> It turns out that mounting SIF files in an HPC environment is a huge performance benefit over sandboxed containers unpacked on a high-speed file server. That's because it moves the metadata operations to the worker node instead of sending them all to the file server. I think this is a major reason why most HPC administrators are willing to live with the theoretical risk.
>
> Another way to move metadata operations to the worker node is the CernVM Filesystem (https://cernvm.cern.ch/fs). It also works most efficiently with sandboxed Singularity containers, so it's the best of both, plus it gives instantaneous and reliable world-wide distribution. The High Energy Physics community uses it extensively along with completely unprivileged Singularity. It can even be run as an unprivileged user when unprivileged user namespaces are enabled: https://github.com/cvmfs/cvmfsexec
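The cvmfsexec repository linked above follows roughly this workflow (commands paraphrased from its README; the "default" distribution type and the repository name are assumptions, check the repo before use):

```shell
# Fetch cvmfsexec and build a self-contained CVMFS client distribution
git clone https://github.com/cvmfs/cvmfsexec
cd cvmfsexec
./makedist default

# Mount repositories as an unprivileged user and run a command against them
./cvmfsexec cvmfs-config.cern.ch -- ls /cvmfs/cvmfs-config.cern.ch
```

This requires network access and a kernel with unprivileged user namespaces (or fuse access), so it is a sketch of the intended usage rather than a verified recipe.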

civier commented Feb 3, 2021

Thanks Steffen. I considered CVMFS for providing a remote standard conda environment in VNM, so let's discuss it as part of AEDAPT.

stebo85 commented Feb 3, 2021

Yes, it would be good to play with this idea and see if it's viable for distributing our images.

@stebo85 stebo85 self-assigned this Mar 31, 2021
@stebo85 stebo85 transferred this issue from NeuroDesk/neurocommand Sep 8, 2021
stebo85 commented Jan 21, 2022

I got the inception mode to work :) Could everyone test it and let me know how well it works?

Neurodesktop Container version to test: 20220120

Launch: Programming -> code -> code 220114
[screenshot]

Now you can load modules inside this container which call subcontainers:
[screenshot]
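The nesting is transparent because the tools are exposed through environment modules whose wrapper scripts invoke the per-tool containers. A hypothetical session inside the Neurodesktop container (module names and versions are placeholders, not taken from the thread):

```shell
# List the modules the container publishes
module avail

# Load a tool; its wrapper scripts call the tool's subcontainer behind
# the scenes, so the command works as if installed natively
module load itksnap
itksnap --help
```

From the user's point of view, there is no visible difference between a natively installed tool and one running in a subcontainer.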

@stebo85 stebo85 closed this as completed Feb 7, 2022
@stebo85 stebo85 moved this to Done - Needs testing in NeuroDesk Oct 23, 2022