Issue with Registering the Member Cluster #4963
Comments
Hello @RainbowMango, can you help me with this?
That is because you are trying to join the cluster to
@RainbowMango Thank you for helping me!
That is because the service of ... In case of running this command outside the cluster, ...

@chaosi-zju, is there any document for how to expose
I tried changing it to NodePort and updated the config file with https://node-ip:port
Hi @amacharya, it seems that you are trying to install Karmada by ... For an efficient installation, I recommend you follow these steps.

Prerequisite: assuming you have installed the host k8s cluster and the member k8s clusters, and the clusters have no network problems pulling images.

Step 1: get the host network IP (node IP) from kube-apiserver, then add this IP to the chart's values.yaml:

```shell
export KUBECONFIG=~/.kube/karmada-host.config
HOST_IP=$(kubectl get ep kubernetes -o jsonpath='{.subsets[0].addresses[0].ip}')
sed -i'' -e "/localhost/{n;s/ \"127.0.0.1/ \"${HOST_IP}\",\n&/g}" ./charts/karmada/values.yaml
```
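For illustration, here is a hedged sketch of what that sed one-liner does, run against a simplified stand-in for the certificate-SAN list (the snippet below is hypothetical, not the real contents of charts/karmada/values.yaml):

```shell
# Hypothetical stand-in for the SAN list in charts/karmada/values.yaml;
# the real file has more surrounding context.
cat > /tmp/values-demo.yaml <<'EOF'
  hosts: [
    "kubernetes.default.svc",
    "localhost",
    "127.0.0.1"
  ]
EOF

# Example address; in practice this comes from `kubectl get ep kubernetes`.
HOST_IP=192.0.2.10

# On the line after "localhost", insert the host IP before the "127.0.0.1" entry.
sed -i'' -e "/localhost/{n;s/ \"127.0.0.1/ \"${HOST_IP}\",\n&/g}" /tmp/values-demo.yaml

cat /tmp/values-demo.yaml
```

After running it, the list contains both the original 127.0.0.1 entry and the new host IP, which is what lets the generated apiserver certificate cover the node address.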
Step 2: install Karmada in the host cluster by helm:

```shell
helm install karmada -n karmada-system \
  --kubeconfig ~/.kube/karmada-host.config \
  --create-namespace \
  --dependency-update \
  --set apiServer.hostNetwork=true \
  ./charts/karmada
```
Step 3: export the kubeconfig of karmada-apiserver to a local path:

```shell
kubectl get secret -n karmada-system karmada-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d >~/.kube/karmada-apiserver.config
KARMADA_APISERVER_ADDR=$(kubectl get ep karmada-apiserver -n karmada-system | tail -n 1 | awk '{print $2}')
sed -i'' -e "s/karmada-apiserver.karmada-system.svc.cluster.local:5443/${KARMADA_APISERVER_ADDR}/g" ~/.kube/karmada-apiserver.config
```
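As a hedged illustration of what the Step 3 rewrite does, here is the same substitution applied to a minimal stand-in kubeconfig (both the file contents and the endpoint address below are made up for the demo):

```shell
# Minimal stand-in for ~/.kube/karmada-apiserver.config (hypothetical contents).
cat > /tmp/karmada-apiserver-demo.config <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://karmada-apiserver.karmada-system.svc.cluster.local:5443
  name: karmada-apiserver
EOF

# Example endpoint; in practice this comes from `kubectl get ep karmada-apiserver`.
KARMADA_APISERVER_ADDR=192.0.2.10:5443

# Swap the in-cluster service DNS name for an address reachable from outside.
sed -i'' -e "s/karmada-apiserver.karmada-system.svc.cluster.local:5443/${KARMADA_APISERVER_ADDR}/g" /tmp/karmada-apiserver-demo.config

grep server /tmp/karmada-apiserver-demo.config
# -> server: https://192.0.2.10:5443
```

The point of the step is simply that the exported kubeconfig must point at an address your client can actually reach, not at the in-cluster service name.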
Step 4: join member clusters:

```shell
# download karmadactl if it does not exist
if ! which karmadactl >/dev/null 2>&1; then
    curl -s https://raw.githubusercontent.com/karmada-io/karmada/master/hack/install-cli.sh | sudo bash
fi

# join member1 and member2 to karmada in push mode
karmadactl join member1 --kubeconfig ~/.kube/karmada-apiserver.config --karmada-context karmada-apiserver --cluster-kubeconfig ~/.kube/members.config --cluster-context member1
karmadactl join member2 --kubeconfig ~/.kube/karmada-apiserver.config --karmada-context karmada-apiserver --cluster-kubeconfig ~/.kube/members.config --cluster-context member2
```
Step 5: check whether the clusters are ready:

```shell
kubectl --kubeconfig ~/.kube/karmada-apiserver.config get cluster -o wide
```

This installation method can help you avoid many minor problems. I strongly recommend that you try the above. If you still have any questions, please continue to ask.
@chaosi-zju On a fresh AWS EKS environment, I redeployed and followed your steps. Karmada deployed correctly on the main cluster, but I still have an issue:
If I ssh into my main-cluster or member-cluster node and test the connection from there, it works.

I changed the karmada-apiserver svc to NodePort, which also did not work. At last I tried port forwarding, and that worked: in ~/.kube/karmada-apiserver.config I changed the server to https://localhost:5443.

It is a temporary solution, but nothing else is working. Is this a potential bug? Or are there any specific steps we need to follow to deploy Karmada on an AWS EKS cluster, join members, and perform workload testing?
Hi @amacharya, before executing

```shell
karmadactl join karmada-member-1 --kubeconfig ~/.kube/karmada-apiserver.config --karmada-context karmada-apiserver --cluster-kubeconfig ~/.kube/members.config --cluster-context arn:aws:eks:eu-central-1:613829453723:cluster/karmada-member-1
```

can you check whether you can successfully connect to karmada-apiserver, like:

```shell
# check whether you can connect to karmada-apiserver
kubectl --kubeconfig ~/.kube/karmada-apiserver.config --context karmada-apiserver cluster-info

# check whether you can connect to the member cluster apiserver
kubectl --kubeconfig ~/.kube/members.config --context arn:aws:eks:eu-central-1:613829453723:cluster/karmada-member-1 cluster-info
```
Hello @chaosi-zju
So, it seems that your ... Let's go review:

```shell
kubectl get secret -n karmada-system karmada-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d >~/.kube/karmada-apiserver.config
KARMADA_APISERVER_ADDR=$(kubectl get ep karmada-apiserver -n karmada-system | tail -n 1 | awk '{print $2}')
sed -i'' -e "s/karmada-apiserver.karmada-system.svc.cluster.local:5443/${KARMADA_APISERVER_ADDR}/g" ~/.kube/karmada-apiserver.config
```

Maybe we made a mistake in this step. Since we installed the ...

and check:
I re-ran it, but I am not seeing any error; I am getting the same value compared to the previous
Yes, the same value which we added in karmada-apiserver.config: https://:5443
No, I tried with telnet too (ref: #4963 (comment)).
I don't know much about your network situation; maybe it's a network problem? Then, maybe you should copy ...

Take my test environment as an example:

```shell
$ kubectl --context karmada-host get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
karmada-system etcd-0 1/1 Running 0 12h 10.230.0.65 karmada-host-control-plane <none> <none>
karmada-system karmada-9a89z2j4bu-aggregated-apiserver-6b45b5ddcf-bm6r7 1/1 Running 2 (12h ago) 12h 10.230.0.63 karmada-host-control-plane <none> <none>
karmada-system karmada-9a89z2j4bu-apiserver-54bb5b7c95-frt7w 1/1 Running 0 12h 172.18.0.2 karmada-host-control-plane <none> <none>
karmada-system karmada-9a89z2j4bu-controller-manager-74bfdb44c8-pnk8h 1/1 Running 3 (12h ago) 12h 10.230.0.62 karmada-host-control-plane <none> <none>
karmada-system karmada-9a89z2j4bu-kube-controller-manager-684f8f7949-ggsv9 1/1 Running 2 (12h ago) 12h 10.230.0.64 karmada-host-control-plane <none> <none>
karmada-system karmada-9a89z2j4bu-scheduler-698ff8bdf7-rlsdg 1/1 Running 0 12h 10.230.0.61 karmada-host-control-plane <none> <none>
karmada-system karmada-9a89z2j4bu-webhook-6d8cd98fbf-wlgps 1/1 Running 0 12h 10.230.0.60 karmada-host-control-plane <none> <none>
kube-system coredns-5d78c9869d-74224 1/1 Running 0 17h 10.230.0.2 karmada-host-control-plane <none> <none>
kube-system coredns-5d78c9869d-q5scn 1/1 Running 0 17h 10.230.0.4 karmada-host-control-plane <none> <none>
kube-system etcd-karmada-host-control-plane 1/1 Running 0 17h 172.18.0.2 karmada-host-control-plane <none> <none>
kube-system kindnet-5m2fv 1/1 Running 0 17h 172.18.0.2 karmada-host-control-plane <none> <none>
kube-system kube-apiserver-karmada-host-control-plane 1/1 Running 0 17h 172.18.0.2 karmada-host-control-plane <none> <none>
kube-system kube-controller-manager-karmada-host-control-plane 1/1 Running 0 17h 172.18.0.2 karmada-host-control-plane <none> <none>
kube-system kube-proxy-pl4vt 1/1 Running 0 17h 172.18.0.2 karmada-host-control-plane <none> <none>
kube-system kube-scheduler-karmada-host-control-plane 1/1 Running 0 17h 172.18.0.2 karmada-host-control-plane <none> <none>
local-path-storage local-path-provisioner-6bc4bddd6b-2gg5r 1/1 Running 0 17h 10.230.0.3 karmada-host-control-plane <none> <none>
```

My karmada-host cluster has only one single node; its node IP is 172.18.0.2.

```shell
$ cat ~/.kube/karmada-apiserver.config
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUd...
    server: https://172.18.0.2:5443
  name: karmada-9a89z2j4bu-apiserver
...
```

You can see the server is https://172.18.0.2:5443, and I can connect to it:

```shell
$ ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.049 ms
^C
--- 172.18.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1006ms
rtt min/avg/max/mdev = 0.039/0.044/0.049/0.005 ms

$ curl -k https://172.18.0.2:5443/version
{
  "major": "1",
  "minor": "27",
  "gitVersion": "v1.27.11",
  "gitCommit": "b9e2ad67ad146db566be5a6db140d47e52c8adb2",
  ...
}
```

You can see my IP is reachable.
Regarding the network, everything I have selected as public for the initial trial, and my worker nodes (in both clusters) are also running on a public subnet. Have you created the CRDs as well?

Have you set up an AWS Client VPN or a bastion host to be able to ping the private IP of the node where karmada-apiserver is running? Would it be possible for me to give you the exact steps of how I deploy the AWS EKS clusters (main and member, including VPC and subnets)? It would be helpful for checking what exactly is missing.

Btw, would it be possible to schedule a call? It would be really helpful! TIA
No. Attention please: do not manually apply CRDs, and do not change ... Those actions are done automatically by karmada/charts/karmada/templates/post-install-job.yaml (lines 47 to 54 in 8eb4322).
No, my environment is local; the cluster is on a local network. I installed Karmada on my local PC by this method. What makes me curious is why you can use ...

Can you please do the following check again:

```shell
# check whether you can connect to karmada-host
kubectl --kubeconfig ~/.kube/karmada-host.config --context karmada-host cluster-info

# check whether you can connect to karmada-apiserver
kubectl --kubeconfig ~/.kube/karmada-apiserver.config --context karmada-apiserver cluster-info
```

Then, we need to check the following things:
My AWS EKS cluster has been deployed on a public subnet with no restrictions on traffic (inbound or outbound security groups). I am able to ping the public IPv4 address of my instance, but I am not able to ping the private IPv4 address of the instance where karmada-apiserver is running. Since you mentioned you deployed on a local machine, that is what I suspected. Do you consider this a potential bug when deploying on the AWS EKS environment?
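One way to narrow down a "cannot ping" situation like this is to separate raw TCP reachability from the HTTPS/API layer, since security groups often block ICMP while allowing TCP. A hedged sketch (the endpoint below is a placeholder; substitute the private address and port of your karmada-apiserver):

```shell
# Placeholder endpoint; replace with your karmada-apiserver address and port.
HOST=127.0.0.1
PORT=5443

# Layer 1: can we open a TCP connection at all? (bash /dev/tcp, 5s timeout)
if timeout 5 bash -c "exec 3<>/dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
  echo "TCP connect to ${HOST}:${PORT} OK"
else
  echo "TCP connect to ${HOST}:${PORT} FAILED"
fi

# Layer 2: does the apiserver answer HTTPS? (-k skips cert verification, probe only)
curl -sk --max-time 5 "https://${HOST}:${PORT}/version" || echo "HTTPS probe to ${HOST}:${PORT} FAILED"
```

If the TCP probe succeeds while ping fails, the issue is ICMP filtering rather than routing; if both fail, it is a routing or security-group problem.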
Regarding CRDs: on a fresh new deployment, the karmada-controller-manager pod is crashing.
Hi, some other friends have encountered the same problem; you can refer to #4927.
I am already using v1.9.1.
Also regarding this issue:
Hi, as I said in #4927 (comment), the workload-rebalancer CRD was introduced in v1.10; it shouldn't appear in v1.9.1 controller-manager images.
Maybe, since we may not have tested on an AWS EKS environment. So we attach great importance to your case and hope to find our shortcomings through it.
As you said above:

and

So, I wonder: if you ssh into your main-cluster node, right there you can connect to the private IPv4 address where karmada-apiserver is running, and you can also connect to the public IPv4 address where the member cluster apiserver is running. So, if you copy ...
@chaosi-zju
Many thanks @chaosi-zju, it would be a really great help!
@amacharya Maybe we can have a chat at the community meeting. Please add an item to the meeting slot. |
@chaosi-zju Did you find anything to improve during this discussion? @amacharya Any update from you?

I don't have more detailed info for now; I'm waiting for it and will keep you updated.
I have created AWS EKS K8s clusters (K8s v1.29):

Cluster 1: karmada-main (created CRDs)
Cluster 2: karmada-member-1

```shell
kubectl config use-context arn:aws:eks:us-east-1:xxxxx:cluster/karmada-main
karmadactl join karmada-member-1 --cluster-kubeconfig=/xxxx/xxxx/.kube/config
```

This fails with:

```
Error from server (NotFound): the server could not find the requested resource (get clusters.cluster.karmada.io)
```

Any help would be greatly appreciated.
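For readers hitting this NotFound error: it usually means the join command was sent to a plain Kubernetes apiserver (which does not serve the Cluster CRD) rather than to the karmada-apiserver. A hedged pre-flight check, assuming kubectl is installed and the path below is adjusted to wherever you exported the karmada-apiserver kubeconfig:

```shell
# Example path; adjust to your exported karmada-apiserver kubeconfig.
KARMADA_KUBECONFIG=~/.kube/karmada-apiserver.config

# `karmadactl join` creates a Cluster object, so the target apiserver must
# serve the cluster.karmada.io API group; a plain EKS apiserver will not.
if kubectl --kubeconfig "$KARMADA_KUBECONFIG" api-resources --api-group=cluster.karmada.io 2>/dev/null | grep -q '^clusters'; then
  echo "cluster.karmada.io API found: safe to run karmadactl join"
else
  echo "cluster.karmada.io API not found: this kubeconfig is not pointing at karmada-apiserver"
fi
```

If the check fails, pass the karmada-apiserver kubeconfig to karmadactl explicitly with --kubeconfig and --karmada-context, as shown in the steps earlier in this thread.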