
Service that sends traffic to a pod is not available from the inside of the same pod #10510

Closed
garagatyi opened this issue Jul 23, 2018 · 13 comments

Comments

@garagatyi

Description

This issue prevents us from using service discovery in Workspace.Next. In Workspace.Next we have all the tooling inside a single pod, at least for now, and because of the hairpin NAT issue we can't access a service from inside the pod that backs it. This makes Workspace.Next development on a laptop very tricky.
The issue is also described in openshift/origin#14031.
We have this issue on minikube too.
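
For illustration, a minimal setup that reproduces the symptom could look like this (the image names, port, and pod/service names are hypothetical, not taken from the real workspace pod):

```sh
# Two containers in one pod (mirroring the sidecar layout) plus a Service that
# selects the pod; all names here are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hairpin-test
  labels:
    app: hairpin-test
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: Service
metadata:
  name: hairpin-test
spec:
  selector:
    app: hairpin-test
  ports:
  - port: 80
    targetPort: 80
EOF

# From another pod the Service answers, but from inside the same pod the
# hairpinned connection hangs or is refused when hairpin NAT is not set up.
kubectl exec hairpin-test -c sidecar -- wget -qO- -T 5 http://hairpin-test
```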

Reproduction Steps

OS and version:

Diagnostics:

@l0rd mentioned this issue Jul 23, 2018
@ghost

ghost commented Jul 24, 2018

Yes, same issue with accessing services from within containers in the same pod. Should work with multi pod workspace though.

Not sure I know how to label this one though :)

@garagatyi
Author

@eivantsov I'm not sure I understand what you mean by "Should work with multi pod workspace though." AFAIK even when there are several pods in a workspace, a container that tries to reach another container in the same pod through the service would still fail.

@ghost

ghost commented Jul 24, 2018

I mean a multi-pod workspace, where there's 1 container per pod. I test LS as sidecars on OpenShift/K8S this way

@garagatyi
Author

Oh, I see. But that doesn't cover sharing sources between sidecars. They would have to be in a single pod to have a shared volume

@ghost

ghost commented Jul 24, 2018

Sure. Pod collocation might be an option. But it's not safe to rely on it I think.

@garagatyi
Author

Yes, for now it looks like something that is not reliable enough and is supported only in a limited set of installations. And we may not want a solution that would force businesses to change their environment just to host Che.

@l0rd
Contributor

l0rd commented Jul 25, 2018

Isn't this a duplicate of #8134?

@ghost

ghost commented Jul 25, 2018

@l0rd it is. Just affects a different task.

In fact, this issue only affects local Origin/MiniShift - the issue's gone with a properly installed OCP/OSD.

I did try to use the kubelet network plugin, but after modifying the config YAML, Origin failed to start. I think if we figure out some workaround for Origin/MiniShift this isn't a big deal for us.
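
For reference, the kubelet knob involved here is its hairpin mode; on minikube something along these lines has been used as a workaround (a sketch only - the exact flag plumbing depends on the minikube/kubelet version):

```sh
# Ask the kubelet to put veth interfaces into hairpin mode so a pod can reach
# itself through its own Service IP (verify against your versions).
minikube start --extra-config=kubelet.hairpin-mode=hairpin-veth
```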

@garagatyi
Author

@l0rd yes, it kinda duplicates the mentioned issue, but

  • the mentioned issue talks about OpenShift only, whereas we have the same issue on minikube
  • I created this issue in regard to the walking skeleton, since it blocks us from using the service discovery mechanism and thus must be addressed to accomplish the walking skeleton goals

@ghost

ghost commented Jul 25, 2018

There's a workaround - use the pod name rather than the service name. Otherwise, I'm not sure we can do anything on the Che side, except for using routes, not services.
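
A related detail: containers in one pod share a network namespace, so a sidecar can also be reached on localhost, bypassing the Service entirely (the port below is illustrative):

```sh
# From any container in the pod, the sidecar's port is reachable on localhost
# without going through the hairpinned Service IP.
curl http://localhost:4000/
```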

@garagatyi
Author

Unfortunately, that might not be obvious to a user. Our service discovery story says that a user can reach a plugin by the endpoint name from their ChePlugin description.
Since the plugin author is not the pod author, they don't know the pod name, so the link to their service is unpredictable.
What we can do is say that Che is responsible for injecting env vars with the correct service URLs, and all the sidecars would be responsible for reading those env vars and using them to connect to the ChePlugin sidecars.
But this would make using Che a bit more complex for plugin authors.
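
A minimal sketch of that env-var approach, assuming Che resolves and injects the URL when it creates the workspace pod (the variable name, image, and address below are hypothetical):

```sh
# Che would set the endpoint URL at pod creation time; the sidecar reads the
# env var instead of relying on in-pod service discovery (names are made up).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workspace
spec:
  containers:
  - name: plugin-sidecar
    image: busybox
    command: ["sh", "-c", "wget -qO- \"$CHE_PLUGIN_ENDPOINT_URL\"; sleep 3600"]
    env:
    - name: CHE_PLUGIN_ENDPOINT_URL
      value: "http://172.17.0.5:4000"  # concrete address injected by Che
EOF
```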

@l0rd @slemeur WDYT? Should we try to find a solution to provide easy service discovery or work on adding env vars with sidecar endpoints URLs?

@ghost

ghost commented Jul 29, 2018

@garagatyi @l0rd Please read my comment in a different issue - btw, we need to have just one :) I had a hard time looking for the right one.

While using external routes would be a more reliable solution, I still think that for a local scenario - oc cluster up - configuring the docker daemon isn't a big price to pay.

@garagatyi
Author

Closing in favor of #8134
