Temporarily decom MIT Brain hubs (bican, dandi, linc) #4097
Conversation
Partnerships is working on reconfiguring this relationship, but it isn't going to look like 3 separate hubs on 3 separate clusters. Decom this to save our cloud costs, as nobody is currently using these.
I've terraform destroyed everything for now.
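For anyone following along, a minimal sketch of what that teardown can look like, assuming one terraform workspace per cluster and a projects/*.tfvars variable-file layout (the paths and workspace names below are illustrative assumptions, not copied from this PR):

cd terraform/aws
for CLUSTER_NAME in bican dandi linc; do
    # select the per-cluster terraform workspace
    terraform workspace select $CLUSTER_NAME
    # destroy all terraform-managed resources for that cluster
    terraform destroy -var-file=projects/$CLUSTER_NAME.tfvars
done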
These clusters are still incurring cloud costs from the k8s clusters (72 USD / month) and a core node (~180 USD / month), so roughly 3 × (72 + 180) ≈ 756 USD / month across the three clusters, as this part was controlled via eksctl rather than terraform. @yuvipanda if you think it's OK to proceed I'll just delete the eksctl resources as well.
@consideRatio ah, I missed that. Yes, please do when you get a minute.
Done, bican, dandi, linc deleted.
# to help drain operation from another terminal without availability of deployer creds
eksctl utils write-kubeconfig --config-file=$CLUSTER_NAME.eksctl.yaml --auto-kubeconfig
export KUBECONFIG=/home/erik/.kube/eksctl/clusters/$CLUSTER_NAME
# eksctl reports: 6 pods are unevictable from node ip-192-168-8-78.us-east-2.compute.internal
# I think four of these were the two coredns pods and two ebs-csi-controller
# pods, which are protected by PDB resources.
# I think another pod was from the cryptnono daemonset; as a daemonset pod it
# shouldn't block draining, but maybe it had an emptyDir volume or similar
# that requires an explicit flag confirming it's OK to delete such pods too?
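For anyone hitting the same unevictable-pods situation, a sketch of how one might inspect and work around it. The kubectl and eksctl flags below exist upstream, but the exact invocation is an assumption on my part rather than what was actually run here:

# list PodDisruptionBudgets that can block eviction (coredns and
# ebs-csi-controller typically ship with one on EKS)
kubectl get pdb --all-namespaces

# drain the node manually: skip daemonset-managed pods and allow deleting
# pods that use emptyDir volumes
kubectl drain ip-192-168-8-78.us-east-2.compute.internal \
    --ignore-daemonsets --delete-emptydir-data

# then delete the cluster; --disable-nodegroup-eviction skips the drain
# step entirely if eviction keeps getting blocked
eksctl delete cluster --config-file=$CLUSTER_NAME.eksctl.yaml --disable-nodegroup-eviction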
Due to this, I'll go for a merge - then the PagerDuty health checks are deleted, I think via automation scanning our cluster config files.
🎉🎉🎉🎉 Monitor the deployment of the hubs here 👉 https://github.com/2i2c-org/infrastructure/actions/runs/9185897951
This can happen if the branch is not up-to-date with main, see #2766.
> Partnerships is working on reconfiguring this relationship, but it isn't going to look like 3 separate hubs on 3 separate clusters. Decom this to save our cloud costs, as nobody is currently using these.
Based on a meeting I just had with them (along with @colliand). Will have an associated issue soon.
Ref https://github.com/2i2c-org/leads/issues/344