Unable to Access Pod Logs in Source Cluster #205

Open
takahashi-jo opened this issue Feb 21, 2024 · 2 comments

takahashi-jo commented Feb 21, 2024

Scenario

  • Admiralty 0.16.0
  • Cert Manager 1.12.7
  • AWS EKS 1.25

Problem Description

I've successfully managed to deploy pods from a source cluster to a target cluster using the following Job configuration:

apiVersion: batch/v1
kind: Job
metadata:
  name: admiralty-test
  namespace: xxxxxx
spec:
  template:
    metadata:
      annotations:
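        # Admiralty election annotation: opts this pod in to multi-cluster scheduling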
        multicluster.admiralty.io/elect: ""
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo Processing item && sleep 5"]
        resources:
          requests:
            cpu: 10m
      restartPolicy: Never

Source Cluster:

NAME                                             READY   STATUS      RESTARTS   AGE     IP              NODE                                             NOMINATED NODE   READINESS GATES
admiralty-test-m4w4b                             0/1     Completed   0          73s     10.254.16.169   admiralty-target-cluster-0f2eac64cb     <none>           <none>

Target Cluster:

NAME                         READY   STATUS      RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
admiralty-test-m4w4b-wcxbv   0/1     Completed   0          50s   10.254.16.169   ip-10-254-16-237.ap-northeast-1.compute.internal   <none>           <none>

While I can successfully access the pod logs in the target cluster, attempting to do so in the source cluster results in a connection refused error.

Source Cluster:

Error from server: Get "https://10.254.1.189:10250/containerLogs/xxxxxx/admiralty-test-m4w4b/c": dial tcp 10.254.1.189:10250: connect: connection refused

Target Cluster:

Processing item

I'm able to access the logs in the target cluster without any issues, which suggests that the deployment and pod execution were successful. However, accessing the logs from the source cluster does not work as expected.

Could this issue be related to a specific Admiralty configuration or perhaps a networking restriction within my Kubernetes setup? Any advice or guidance on troubleshooting this issue would be greatly appreciated.
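For reference, the error suggests the source cluster's API server is dialing port 10250 on the virtual node's advertised address, which is where Admiralty's controller manager should be serving pod logs. A quick cross-check (a sketch; the node name comes from my output above, and the admiralty namespace is an assumption based on a default install):

kubectl get node admiralty-target-cluster-0f2eac64cb -o jsonpath='{.status.addresses}'
kubectl -n admiralty get pods -o wide

If the InternalIP reported for the virtual node does not match the IP of a running controller-manager pod, the API server's log request has nowhere to connect, which would explain the connection refused.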


flowinh2o commented Jun 6, 2024

  • AWS EKS 1.29
  • Admiralty 0.16.0
  • cert-manager 1.13.1

I am also running into the same issue: pods are spun up and torn down correctly, but no logs are streamed. I am using EKS clusters in different VPCs. The source can obviously reach the target if it can spin up pods there, but maybe some other communication path is needed? I have port 443 to the target cluster API open from the source via the security group, and the source cluster is in a private network with no public access.

I was also able to run my own pod on the source cluster using the target cluster secret, and it is able to get the logs off the target system without problems.
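Concretely, that test looked roughly like this (a sketch; the secret name is a placeholder, and I'm assuming the kubeconfig is stored under a config key, so adjust to however your target secret is laid out):

# extract the target cluster kubeconfig from the secret the Target object references
kubectl -n xxxxxx get secret <target-kubeconfig-secret> -o jsonpath='{.data.config}' | base64 -d > /tmp/target.kubeconfig
# fetch logs directly from the target cluster, bypassing the virtual node
kubectl --kubeconfig /tmp/target.kubeconfig -n xxxxxx logs admiralty-test-m4w4b-wcxbv

Since this works, the network path and credentials to the target are fine; only the virtual-kubelet log proxy on the source side is failing.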

I noticed the following message in the controller-manager pod, which looks to be the cause:

main.go:329] timed out waiting for virtual kubelet serving certificate to be signed, pod logs/exec won't be supported
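If the serving certificate never gets signed, one thing worth checking is whether the certificate request is stuck pending in the source cluster (a sketch; I'm not certain which signer Admiralty requests from in this version, and the CSR name below is a placeholder):

kubectl get csr
kubectl describe csr <pending-csr-name>

A Pending request with no approval condition would match the timeout above; depending on the signer, manually approving it with kubectl certificate approve <pending-csr-name> may unblock logs/exec.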

@flowinh2o

@takahashi-jo this might be related to #120
