Use a daemonset with rshared mounts to mount FUSE #190

Closed
yuvipanda opened this issue Apr 2, 2018 · 16 comments

@yuvipanda (Member)

Currently, each user mounts FUSE themselves. This has negative security consequences, since it requires privileged user containers.

Long term, the solution is to implement a Container Storage Interface (CSI) driver for GCS FUSE. The CSI standard has wide adoption across multiple projects (Mesos can also use it, for example), while FlexVolumes are Kubernetes-specific. FlexVolumes are also deprecated in Kubernetes now and will be removed in a (far-future) release. CSI is more flexible.

For the near term, it would be great to do something that lets us stop mounting GCS FUSE from inside privileged user containers.

I'm assuming the following conditions are true for the FUSE usage:

  1. Everyone has the same access to the entire FUSE space (read/write)
  2. We can upgrade to Kubernetes 1.10 (which should be on GKE in a few weeks)

We can use the new support for rshared mounts in Kubernetes 1.10 to do the following:

  1. Make a container image that has all the software for doing the GCS mount.
  2. Run this container as a privileged DaemonSet, so it runs on every node.
  3. Mount GCS FUSE as /data/gcsfuse on the host machine, via rshared mounts.
  4. In each user pod, mount /data/gcsfuse with a hostPath volume. Users can then access GCS FUSE without needing privileged access.
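As a rough illustration, the steps above might look something like the manifests below. This is only a sketch: the image names, bucket name, labels, and gcsfuse flags are all assumptions for illustration, not anything decided in this issue.

```yaml
# Hypothetical DaemonSet: runs a privileged gcsfuse mounter on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gcsfuse-mounter
spec:
  selector:
    matchLabels:
      app: gcsfuse-mounter
  template:
    metadata:
      labels:
        app: gcsfuse-mounter
    spec:
      containers:
        - name: mounter
          image: example/gcsfuse-mounter:latest   # hypothetical image with gcsfuse installed
          command: ["gcsfuse", "--foreground", "example-bucket", "/data/gcsfuse"]
          securityContext:
            privileged: true            # needed to use /dev/fuse
          volumeMounts:
            - name: data
              mountPath: /data
              # Bidirectional corresponds to rshared: the FUSE mount created
              # inside this container propagates back to the host
              # (requires Kubernetes 1.10+).
              mountPropagation: Bidirectional
      volumes:
        - name: data
          hostPath:
            path: /data
---
# User pods then pick the mount up with a plain hostPath volume, no privileges:
apiVersion: v1
kind: Pod
metadata:
  name: user-pod-example
spec:
  containers:
    - name: notebook
      image: example/user-notebook:latest   # hypothetical user image
      volumeMounts:
        - name: data
          mountPath: /data
          # HostToContainer corresponds to rslave: mounts that appear on the
          # host under /data (e.g. after the DaemonSet starts) become visible
          # inside this pod, even if the pod started first.
          mountPropagation: HostToContainer
  volumes:
    - name: data
      hostPath:
        path: /data
```

The user pod mounts /data (not /data/gcsfuse directly) with HostToContainer propagation so that it still sees the FUSE mount even when the mounter pod comes up after the user pod.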

How does this sound?

@yuvipanda (Member, Author)

An alternative if we want to do this earlier is:

  1. Switch the node type to Ubuntu in GKE
  2. Run something like https://github.com/berkeley-dsep-infra/data8xhub/tree/master/images/mounter in a DaemonSet. In that example, we run this script on the host: https://github.com/berkeley-dsep-infra/data8xhub/blob/master/images/mounter/mounter.py. We could instead run something that mounts GCS FUSE.

This can happen today if needed.
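For concreteness, a DaemonSet in this style could use nsenter to run the mount directly in the host's mount namespace, so no mount-propagation support is needed. This sketch only shows the general shape of the approach (the linked mounter.py does more, and the image, bucket, and paths here are hypothetical); it assumes gcsfuse is installed on the host, which is why the Ubuntu node image matters.

```yaml
# Hypothetical pre-1.10 variant: mount on the host itself via nsenter.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gcsfuse-host-mounter
spec:
  selector:
    matchLabels:
      app: gcsfuse-host-mounter
  template:
    metadata:
      labels:
        app: gcsfuse-host-mounter
    spec:
      hostPID: true                 # lets us target the host's PID 1 with nsenter
      containers:
        - name: mounter
          image: example/host-mounter:latest   # hypothetical; must ship nsenter
          securityContext:
            privileged: true
          command:
            - nsenter
            - --target=1            # the host's init process
            - --mount               # enter its mount namespace
            - --
            - sh
            - -c
            # gcsfuse must already exist on the host; sleep keeps the pod
            # alive so the DaemonSet doesn't restart-loop.
            - "mkdir -p /data/gcsfuse && gcsfuse example-bucket /data/gcsfuse && sleep infinity"
```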

@mrocklin (Member) commented Apr 2, 2018

I agree that we probably don't need specialized permissions on a per-user basis. I would suggest that these be read-only. cc'ing @jhamman to verify

@jacobtomlinson (Member)

This sounds great. We are currently using a FlexVolume solution for S3 access, but as you say that is deprecated.

I'm sure there will be very wide interest in a CSI driver for all object stores. Not sure if this is something we want to tackle.

Otherwise I like your suggestion; the only issue from my side is that it depends on 1.10. We are using kops to build and manage our cluster right now, which doesn't support 1.9 yet.

@yuvipanda (Member, Author) commented Apr 3, 2018 via email

@jacobtomlinson (Member)

Nice, thanks!

stale bot commented Jun 25, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Jun 25, 2018
stale bot commented Jul 2, 2018

This issue has been automatically closed because it had not seen recent activity. The issue can always be reopened at a later date.

stale bot closed this as completed Jul 2, 2018
@yuvipanda reopened this Aug 7, 2022
@yuvipanda (Member, Author)

I built https://github.com/yuvipanda/jupyterhub-roothooks/ to solve this!

github-actions bot removed the stale label Aug 8, 2022
github-actions bot commented Oct 7, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions bot added the stale label Oct 7, 2022
@yuvipanda (Member, Author)

Please, let this be, @github-actions bot.

github-actions bot removed the stale label Oct 8, 2022
github-actions bot commented Dec 8, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions bot added the stale label Dec 8, 2022
@yuvipanda (Member, Author)

Can we turn off stale-bot? :(

github-actions bot removed the stale label Dec 9, 2022
@jhamman added the pinned label Dec 9, 2022
@jhamman (Member) commented Dec 15, 2023

@yuvipanda - can this be closed out now? I wonder if we can transition this to a z2jh or 2i2c issue.

@yuvipanda (Member, Author)

Yeah, I think we can probably turn this into a 'how to enable FUSE safely' issue on z2jh. Think I can convince you to do that, @jhamman? :D

@jhamman (Member) commented Dec 16, 2023

@jhamman closed this as completed Dec 16, 2023
@yuvipanda (Member, Author)

yay, thanks @jhamman!
