Processes unavailable by service: port #17981
What I can see is that access to the pod from inside this pod via the service IP doesn't work for some reason. It works neither from the container that exposes the port nor from other containers of the same pod.
@garagatyi This could be caused by improperly configured DNS. I'd start with checking DNS resolution; this instruction could also be helpful: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#debugging-dns-resolution Hope this helps!
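One quick way to apply the debugging instruction above is to resolve the service name from inside a pod and compare the answer against the service's cluster IP. The helper below is only an illustrative sketch: it parses standard `nslookup` output, and the service name `tomcat` in the usage note is taken from the description later in this issue.

```shell
# Sketch: extract the resolved address from nslookup output. Assumes the
# standard "Address: x.x.x.x" lines; the last Address line is the answer,
# earlier ones belong to the DNS server itself.
resolved_ip() {
  awk '/^Address/ { ip = $2 } END { print ip }'
}

# Usage inside a pod (service name "tomcat" is from this issue's description):
#   nslookup tomcat | resolved_ip
```

If the printed address matches the service's cluster IP, DNS itself is fine and the problem lies in reaching that IP, which is what the next comments conclude.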
@garagatyi -- Can you grab the iptables output from the node running OpenShift? And give me the service IP for the service you are looking at? Thanks.
@php-coder this doesn't look like a DNS resolution issue, since DNS gets resolved to the IP of a service. But this IP is not responding. And it is not responding from the inside of the pod, but it is responding from other pods and from my host. @knobunc sure. BTW it is an OCP running inside of Docker containers on my dev Fedora 25. We have the same situation on other dev Fedoras and also on a CI (containerized OCP too).

(Collapsed in the original: `iptables --list` output.)
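When sharing iptables data for a single service, it is common to narrow the dump to the rules mentioning that service's cluster IP. The snippet below is only an illustrative sketch; the IP 172.30.1.2 is a placeholder, not a value taken from this report.

```shell
# Sketch: filter iptables-save output for rules that mention a given
# service cluster IP ($1); the rules arrive on stdin.
service_rules() {
  grep -F -- "$1"
}

# Usage on the node (placeholder IP, run as root):
#   iptables-save -t nat | service_rules 172.30.1.2
```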
Yep. It is because of hairpin mode. I am able to access the pod by its service after I exec'd into the openshift container and put [command elided in the original].
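The hairpin fix mentioned above usually amounts to writing 1 to hairpin_mode for each port of the bridge, so traffic can be sent back out the port it arrived on. This is a minimal sketch assuming the standard Linux bridge sysfs layout and the docker0 bridge named later in the thread; the exact command the reporter used was not preserved.

```shell
# Sketch: enable hairpin mode on every port of a bridge, assuming the
# standard sysfs layout /sys/class/net/<bridge>/brif/<port>/hairpin_mode.
enable_hairpin() {
  brif_dir="$1"                      # e.g. /sys/class/net/docker0/brif
  for port in "$brif_dir"/*; do
    [ -e "$port/hairpin_mode" ] || continue
    echo 1 > "$port/hairpin_mode"    # requires root on the node
  done
}

# Usage on the node (assumption, run as root):
#   enable_hairpin /sys/class/net/docker0/brif
```

Note this is a per-boot workaround, not a persistent fix; the kubelet's hairpin-mode setting is the supported knob.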
Looks like this issue is a duplicate of #14031
Can you see if your bridge is in promiscuous mode:

```
ip link show docker0
```

And can you see if your node logs mention "Hairpin mode" please?

```
docker logs origin 2>&1 | grep -i hair
```
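For reference, the bridge is in promiscuous mode when the flags field of the `ip link show` header line contains PROMISC. The helper below is just an illustrative check over that first output line, assuming the standard iproute2 output format.

```shell
# Sketch: report whether an `ip link show` header line carries the
# PROMISC flag (standard iproute2 output format assumed).
is_promisc() {
  case "$1" in
    *PROMISC*) echo yes ;;
    *)         echo no  ;;
  esac
}

# Usage (assumption):
#   is_promisc "$(ip link show docker0 | head -n 1)"
```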
@knobunc This ticket seems to have gone stale and is in the process of being escalated. Are the logs that @garagatyi provided sufficient? Thanks for looking back at this. The priority has been raised on it.
Comment from Ben Bennett 2018-03-01 13:19:04 EST
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now, please do so with /close.
/remove-lifecycle stale |
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now, please do so with /close.
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now, please do so with /close.
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen.
@openshift-merge-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Description
Cannot access a container port via its service on local OCP.
Reproduction Steps
1. Add to Project → Import YAML/JSON
2. Paste the objects to import (YAML collapsed in the original) and press Create
3. From the requester container, run: curl tomcat:8080
Expected: response `HELLO`.
Actual: the request hangs.
Note that it works fine on our OSD and OSO.
Note that it also works fine if the containers are in separate pods (YAML for that variant collapsed in the original).
OCP version:
3.7.0
Fedora 27. Firewalld is off.
The process is unavailable even when there's just one pod. Say I run Tomcat on port 8080; I can curl it from inside a container as localhost:8080, but not as tomcat:8080.