
Inject a jwtproxy container into each Pod instead of creating a separate Pod for it #10404

Closed
sleshchenko opened this issue Jul 12, 2018 · 7 comments
Assignees
Labels
kind/task Internal things, technical debt, and to-do tasks to be performed. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. severity/P2 Has a minor but important impact to the usage or development of the system.

Comments

@sleshchenko
Member

sleshchenko commented Jul 12, 2018

Description

The initial support for secure servers on the Kubernetes/OpenShift infrastructures works in the following way:

  • a separate JWTProxy Pod is injected into each workspace environment;
  • the corresponding container and Service ports are exposed for secure servers;
  • the JWTProxy container is configured to proxy requests to the corresponding Service port.
    So, a secure server is exposed with a Kubernetes Service within the namespace but does not have an externally accessible endpoint.
    An Ingress/Route is created that externally exposes the corresponding JWTProxy container port; traffic coming through it is proxied to the original Service port only if a correct machine token is present (see the sketch below).
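To make the current wiring concrete, here is a minimal sketch using the fabric8 Kubernetes client (which the Kubernetes infrastructure is built on); the names and port numbers are hypothetical and used only for illustration:

```java
import io.fabric8.kubernetes.api.model.IntOrString;
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.api.model.ServiceBuilder;

class CurrentSchemeSketch {
  // The secure server is reachable only through a namespace-local Service (no external endpoint).
  static Service secureServerService() {
    return new ServiceBuilder()
        .withNewMetadata().withName("workspace-server").endMetadata()
        .withNewSpec()
          .addToSelector("app", "workspace-pod")
          .addNewPort()
            .withName("secure-server")
            .withPort(4401)                          // hypothetical secure server port
            .withTargetPort(new IntOrString(4401))
          .endPort()
        .endSpec()
        .build();
  }
  // A separate jwtproxy Pod proxies to workspace-server:4401 after verifying the machine token,
  // and the Ingress/Route exposes the jwtproxy listen port (e.g. 4400), never the Service above.
}
```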

During a discussion with @l0rd @ibuziuk @skabashnyuk it was mentioned that this may have the following security issue: when Che is configured to use the same namespace for workspaces of different users, a user is able to request another user's secure servers without specifying a machine token. It would be more secure to inject JWTProxy into each Pod as a container. Then JWTProxy can be configured to proxy requests directly to localhost:{PORT} over the Pod's internal network. In this case, a bare secure server won't be exposed by a Service, and it won't be possible to reach the secure server from outside the Pod without a correct machine token.
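A rough sketch of what the proposed injection could look like with the fabric8 Kubernetes model; the container name, image, ports, and config path are assumptions for illustration only, not the final design:

```java
import io.fabric8.kubernetes.api.model.Container;
import io.fabric8.kubernetes.api.model.ContainerBuilder;
import io.fabric8.kubernetes.api.model.Pod;

class JwtProxySidecarSketch {
  // Inject a jwtproxy sidecar into the workspace Pod instead of creating a separate Pod.
  static void injectJwtProxy(Pod workspacePod) {
    Container jwtProxy = new ContainerBuilder()
        .withName("che-jwtproxy")                        // hypothetical container name
        .withImage("eclipse/che-jwtproxy:latest")        // hypothetical image
        .addNewPort().withContainerPort(4400).endPort()  // proxy listen port, exposed via Ingress/Route
        .withArgs("-config", "/config/jwtproxy.yml")     // hypothetical config proxying to localhost:4401
        .build();

    // The proxy forwards verified requests to the secure server over the Pod-local
    // loopback interface, so the bare server port no longer needs a Service at all.
    workspacePod.getSpec().getContainers().add(jwtProxy);
  }
}
```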

The only possible issue here is a conflict of ports, which may be addressed as a separate issue. It can happen because different containers in the same Pod share a common pool of ports: JWTProxy needs to occupy some ports, and a conflict may occur if a server (or just a process) is configured to use the same port that JWTProxy has chosen.
It looks like we are not able to predict all ports that may be occupied by processes.
The simple solution here is the same one we used for installer conflicts: use a port from a configured range that is not used by any server, see https://github.com/eclipse/che/blob/242f56a8fd88a9ce1e57a40654d608330fafb0b4/infrastructures/kubernetes/src/main/java/org/eclipse/che/workspace/infrastructure/kubernetes/provision/InstallerServersPortProvisioner.java#L63-L64
So, within this issue it is also needed to decide whether this solution is good enough, and then either implement the described port-choosing algorithm, or create a separate issue for finding a better solution while implementing the simplest algorithm for choosing JWTProxy ports.
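For reference, a minimal sketch of such a port-choosing algorithm (the range bounds and the set of occupied ports are assumptions; the real provisioner would have to collect the ports already declared by servers and installers, and it still cannot see ports grabbed by arbitrary processes at runtime):

```java
import java.util.Set;

class JwtProxyPortSketch {
  // Pick the first free port for jwtproxy from a configured range, skipping ports
  // already declared by servers/installers in the workspace environment.
  static int findFreePort(Set<Integer> occupiedPorts, int rangeStart, int rangeEnd) {
    for (int port = rangeStart; port <= rangeEnd; port++) {
      if (!occupiedPorts.contains(port)) {
        occupiedPorts.add(port); // reserve it so the next proxy in the same Pod gets a different port
        return port;
      }
    }
    throw new IllegalStateException(
        "No free port for jwtproxy in the range " + rangeStart + "-" + rangeEnd);
  }
}
```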

@sleshchenko sleshchenko added status/open-for-dev An issue has had its specification reviewed and confirmed. Waiting for an engineer to take it. kind/task Internal things, technical debt, and to-do tasks to be performed. team/platform labels Jul 12, 2018
@sleshchenko
Member Author

Also, this may be an issue for Che internal servers, which are supposed to be exposed by Services for internal use within a workspace and are therefore not covered by authentication, since they should not be accessible outside of the workspace. As far as I know, internal servers are used for language servers, and if exposing them by Services is not secure enough, then language servers should be covered by authentication, if possible, or internal servers should be reworked somehow. Otherwise, language servers that belong to a foreign workspace may be requested by a user when Che is configured to use the same namespace for different projects.
@l0rd @garagatyi WDYT?

@gazarenkov
Contributor

I like this approach.
@sleshchenko AFAIK internal servers for LSes mostly use a bare socket (at the moment most of them socat stdin/out). Do you see any security issue with this scheme, or do you mean in general?

@sleshchenko
Member Author

@gazarenkov

socat-ed stdin/out

Is it implemented by using a PVC? If yes, I think there is no issue with that.

I mean the way of communicating with an LS that I investigated some time ago: #8171 (comment),
when an LS is exposed with a Service as an internal server to process requests via TCP.
This way is described in the Che Docs here.

@sleshchenko
Member Author

I have created an issue for reworking internal servers so that they are not exposed via Services.
The discussion about internal servers and language servers can be continued there: #10420.

@garagatyi

It's always nice to improve security, but in this particular case I think the flow with a single namespace for several users can make the system design more error-prone for the sake of a benefit that is unclear (to me personally). I added my thoughts about it below.
I would like to explore more the need for having that feature in non-development workflows.
@l0rd @sleshchenko @ibuziuk @skabashnyuk would you be willing to help me with that?
My thoughts about security and usage of a single namespace for multiple users:

  1. Now we allow a user to specify k8s Services in k8s/OS recipes. With a namespace shared between different users, and the requirement to secure Services from alien users, we would have to forbid a user to use Service objects in the recipe. And the goal that @gorkem stated for Workspace.Next was exactly the opposite: to provide better adoption of recipes in Che. I can hardly imagine a real k8s app that doesn't need Services.
  2. One of the things that OS brings to provide better user isolation and overall security is isolating users in their own namespaces (projects). So why do we think that we would be able to create a better solution in Che in a meaningful time without affecting the development of other parts of the project?
  3. I can imagine some tooling sidecars that expose sensitive user data over APIs but don't need the projects PV, which means that they don't have to be in the same pod as Theia or an LS. Then we can handle their lifecycle better (adding such a sidecar is possible at runtime; the death of any sidecar doesn't make the whole pod with all the sidecars restart). It also improves scalability (a pod with all the sidecars might be very RAM-heavy, which means it might be hard to find a suitable node for it). But these servers would also need service discovery, and the easiest way to provide it is to use a k8s Service. Otherwise, we would have to reinvent the service discovery system for Che workspaces. Examples of such sidecars are the terminal and exec agents: they expose sensitive information but don't need the project PV.

My general point is that it might make sense to go in this direction to secure APIs in some cases, but I'm not sure we know the exact important use cases for that right now, or that those use cases would compensate for our effort and the complexity we would bring to the system.

@sleshchenko
Member Author

sleshchenko commented Jul 16, 2018

@garagatyi

I would like to explore more the need for having that feature in non-development workflows.

I would like to explore that too. Generally, I agree with you, and I think it would be OK to document that for non-development workflows different namespaces must be used for different users.

The only thing that I want to note: the same security issue exists when each user has a unique namespace (like the OSIO configuration) and shares their running workspace while there are other running workspaces (it's not the OSIO use case, since there is a limitation of only one running workspace at a time).
Then the user who is invited to use the running workspace may request the services of the other running workspaces. I think it is not such a critical issue, but we should keep it in mind and document such cases well so as not to surprise users.

@sleshchenko sleshchenko self-assigned this Aug 7, 2018
@sleshchenko sleshchenko added status/in-progress This issue has been taken by an engineer and is under active development. and removed status/open-for-dev An issue has had its specification reviewed and confirmed. Waiting for an engineer to take it. labels Aug 7, 2018
@sleshchenko sleshchenko removed the status/in-progress This issue has been taken by an engineer and is under active development. label Sep 11, 2018
@skabashnyuk skabashnyuk added the severity/P2 Has a minor but important impact to the usage or development of the system. label Mar 6, 2019
@che-bot
Contributor

che-bot commented Sep 7, 2019

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

@che-bot che-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 7, 2019
@che-bot che-bot closed this as completed Sep 17, 2019