
Cannot access container port via service on OCP #8134

Closed
sleshchenko opened this issue Jan 3, 2018 · 21 comments
Assignees
Labels
kind/bug Outline of a bug - must adhere to the bug report template. severity/P1 Has a major impact to usage or development of the system.

Comments

sleshchenko (Member) commented Jan 3, 2018

Description

Cannot access a container's port via a service on local OCP.

This feature is needed for running language servers in parallel containers.

Reproduction Steps

  1. Run OCP with the following script: https://github.com/eclipse/che/blob/master/deploy/openshift/ocp.sh
  2. Create the following objects via the OCP console:
  • Create a project
  • Click Add to Project
  • Click Import YAML/JSON
  • Paste the following YAML and click Create
Objects to import:

```yaml
---
kind: List
items:
-
  apiVersion: v1
  kind: Pod
  metadata:
    name: tomcat-pod
    labels:
      app: tomcat
  spec:
    containers:
      -
        image: sleshchenko/webapp
        name: tomcat-container
        ports:
          -
            containerPort: 8080
            protocol: TCP
      -
        image: eclipse/ubuntu_jdk8
        name: requester
-
  apiVersion: v1
  kind: Service
  metadata:
    name: tomcat
  spec:
    ports:
      - name: tomcat
        port: 8080
        protocol: TCP
        targetPort: 8080
    selector:
      app: tomcat
```
  3. Open a terminal in the requester container of the newly created pod.
  4. Try to access Tomcat via the service: execute curl tomcat:8080
     Expected: response HELLO
     Actual: the request hangs.
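For reference, the check in steps 3–4 can also be run non-interactively with oc. This is a sketch; the pod and container names come from the YAML above, and the timeout value is an arbitrary choice:

```shell
# Exec into the "requester" sidecar of tomcat-pod and hit the service.
# On a healthy cluster this prints the response; on the affected setup
# the request hangs until --max-time fires.
oc exec tomcat-pod -c requester -- curl -s --max-time 10 http://tomcat:8080
```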

Note that it works fine on https://console.codenvy.openshift.com/ and http://console.starter-us-east-2.openshift.com/.
Note that it also works fine if the containers are in separate pods.

Containers in separate pods:

```yaml
---
kind: List
items:
-
  apiVersion: v1
  kind: Pod
  metadata:
    name: tomcat-pod
    labels:
      app: tomcat
  spec:
    containers:
      -
        image: sleshchenko/webapp
        name: tomcat-container
        ports:
          -
            containerPort: 8080
            protocol: TCP
-
  apiVersion: v1
  kind: Pod
  metadata:
    name: requester
  spec:
    containers:
      -
        image: eclipse/ubuntu_jdk8
        name: requester
-
  apiVersion: v1
  kind: Service
  metadata:
    name: tomcat
  spec:
    ports:
      - name: tomcat
        port: 8080
        protocol: TCP
        targetPort: 8080
    selector:
      app: tomcat
```

OCP version:
3.7.0 and 3.9.0

Diagnostics:

@sleshchenko sleshchenko added kind/bug Outline of a bug - must adhere to the bug report template. team/platform status/analyzing An issue has been proposed and it is currently being analyzed for effort and implementation approach labels Jan 3, 2018
@gazarenkov gazarenkov added the severity/P1 Has a major impact to usage or development of the system. label Jan 3, 2018
@garagatyi

@eivantsov created an issue in the upstream openshift/origin#17981

@garagatyi garagatyi self-assigned this Jan 11, 2018
@garagatyi garagatyi added status/in-progress This issue has been taken by an engineer and is under active development. and removed status/analyzing An issue has been proposed and it is currently being analyzed for effort and implementation approach labels Jan 11, 2018
@skabashnyuk skabashnyuk added team/osio and removed status/in-progress This issue has been taken by an engineer and is under active development. team/platform labels Feb 12, 2018
@slemeur slemeur added severity/P1 Has a major impact to usage or development of the system. and removed severity/P1 Has a major impact to usage or development of the system. labels Feb 15, 2018
@skabashnyuk (Contributor)

@riuvshin can you take a look at the OpenShift team's response?
(screenshot: OpenShift team response, 2018-02-27)

ghost commented Feb 27, 2018

I still have a hard time finding docs on how to apply it with oc cluster up.

@riuvshin (Contributor)

@skabashnyuk same here, I have no idea and am unable to find any docs on how to configure kubenet with oc cluster up.

@skabashnyuk (Contributor)

@riuvshin can you comment on that in openshift/origin#17981 or on the OpenShift mailing list?

l0rd (Contributor) commented Apr 30, 2018

@sleshchenko @eivantsov I have tried to reproduce this: curl tomcat:8080 doesn't work, but curl tomcat-pod:8080 works fine.

So if the pod name is used (instead of the service name), the container-to-sidecar connection works fine. Wouldn't that be enough?

ghost commented Apr 30, 2018

@l0rd good to know there's a workaround. By the way, how did you find this solution? Any docs or clues that made you try the pod name instead of the service name?

Unfortunately, it is not something that solves the LSP-in-a-sidecar issue. Our client reads the workspace config and grabs all servers of a special kind from the runtime. In the runtime, servers acquire URLs, and those are generated depending on the infrastructure. In the case of internal URLs (servers can have internal URLs), they are returned as service names. So we would need changes to the OpenShift infrastructure so that podName:port is returned instead of serviceName:port.

l0rd (Contributor) commented Apr 30, 2018

I just ran cat /etc/hosts from within the container, so I have no idea how reliable it is. But it's still better than nothing.
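For context, Kubernetes writes the pod's own hostname and IP into /etc/hosts of every container in the pod, which is why the pod name resolves from a sidecar. An illustrative (not verbatim) example of what cat /etc/hosts might show inside the requester container, with a made-up pod IP:

```text
127.0.0.1    localhost
::1          localhost ip6-localhost ip6-loopback
172.17.0.5   tomcat-pod    # pod IP mapped to the pod name (illustrative address)
```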

@artaleks9 (Contributor)

Still reproducible in version 6.5.0.

ghost commented Jul 25, 2018

Should we perhaps close the issue, as it cannot be fixed on the Che side? It has been open for a while, and the issue in Origin does not seem to be getting any attention either.

@riuvshin (Contributor)

@artaleks9 what do you mean by that? Do you understand the problem? The issue is not in Che, so it will not work with any Che version.

@garagatyi

@eivantsov we still need to find a workaround, so maybe we can leave this issue open and wait until it gets an appropriate priority.
I do not know how to fix it or how hard it is to find a solution, but the issue is still important from my point of view.

ghost commented Jul 29, 2018

@garagatyi what if we just say: if your cluster has been started with oc cluster up (no Ansible installer with advanced networking stuff), you will need to make sure that userland-proxy=false is set in the Docker daemon settings.

What I did is:

(screenshot: Docker daemon settings)
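A sketch of the Docker daemon setting referred to above, assuming the daemon reads its configuration from /etc/docker/daemon.json (restart the daemon after editing):

```json
{
  "userland-proxy": false
}
```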

Of course, to make sure it really works, I created a Che workspace with one pod containing two containers, one of them running a language server. You can find the workspace JSON here: https://gist.github.com/eivantsov/374a30d5a5108bf24e00397d3cb6a3de

Che's language server client was able to connect to a language server running in a different container in the same pod:

(screenshot: language server client connected)

Changing your Docker daemon settings is a relatively easy operation. IMHO it solves the local scenario, while a proper OpenShift cluster does not have this problem (I was able to verify that it works fine on OSD, when we had it, and on OSO Pro; see the screenshot below).

(screenshot: services working on OSO Pro)

I may spin up an OpenShift cluster using Ansible, following the official docs, and confirm that services are fine there too.

l0rd (Contributor) commented Jul 29, 2018 via email

ghost commented Jul 29, 2018

Yes, this is the next thing to try. I think Docker daemon args are supported as minishift/minikube args.

ghost commented Jul 29, 2018

@l0rd there's no need to change any default configs (no urgent need, I mean), since you can pass the Docker daemon config as an argument:

```shell
minishift start --vm-driver=kvm --memory=4096 --docker-opt userland-proxy=false
```

Works for me on Fedora:

(screenshot)

Since this arg is almost certainly inherited from minikube, I see no reason why it should fail there. I will try it later to confirm, of course.
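The presumed minikube equivalent would look like this (a sketch, assuming minikube forwards --docker-opt to the Docker daemon the same way minishift does; memory size is arbitrary):

```shell
minikube start --memory=4096 --docker-opt userland-proxy=false
```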

ghost commented Jul 30, 2018

@l0rd @garagatyi do you think we can get away with just updating the docs? It does look like configuring Docker daemon options solves the local use-case scenario.

@garagatyi

@eivantsov great investigation! Yes, I would love to have the docs updated (for both Kubernetes and OpenShift installations) and think that would be enough to close this issue.
Again, thank you for the workaround; I'll try it a bit later.

l0rd (Contributor) commented Jul 31, 2018

@eivantsov 👍. I have tested it and it works on minikube too. Let's update the documentation and close this one.

@gazarenkov (Contributor)

That's good, thanks a lot!
But mind #10420 as well; that one looks more straightforward and clean from the Kubernetes point of view (internal servers are bound to localhost within the pod).

@garagatyi

Closing, since we found a solution with the userland proxy.


8 participants