`autoneg` provides simple custom integration between GKE and GCLB. `autoneg` is a GKE controller which works in conjunction with the GKE NEG controller to manage integration between your GKE service endpoints and GCLB backend services.

GKE users may wish to register NEG backends from multiple clusters into the same backend service, or may wish to orchestrate advanced deployment strategies in a custom fashion. `autoneg` can enable those use cases.
`autoneg` depends on the GKE NEG controller to manage the lifecycle of NEGs corresponding to your GKE services. `autoneg` will associate those NEGs as backends to the GCLB backend service named in the `autoneg` configuration.
Since `autoneg` depends explicitly on the GKE NEG controller, it also inherits the same scope. `autoneg` only takes action based on a GKE service, and does not make any changes corresponding to pods or deployments. Only changes to the service will cause any action by `autoneg`.
On deleting the GKE service, `autoneg` will deregister NEGs from the specified backend service, and the GKE NEG controller will then delete the NEGs.
In your GKE service, two annotations are required in your service definition:

* `cloud.google.com/neg` enables the GKE NEG controller; specify as standalone NEGs
* `anthos.cft.dev/autoneg` specifies the backend service name and other configuration
```yaml
metadata:
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{}}}'
    anthos.cft.dev/autoneg: '{"name":"autoneg_test", "max_rate_per_endpoint":1000}'
```
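For context, here is a minimal sketch of a complete Service manifest carrying both annotations; the service name, namespace, selector, and ports are illustrative placeholders, not values `autoneg` requires.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app              # illustrative name
  namespace: default        # illustrative namespace
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{}}}'
    anthos.cft.dev/autoneg: '{"name":"autoneg_test", "max_rate_per_endpoint":1000}'
spec:
  type: ClusterIP
  selector:
    app: my-app             # illustrative selector
  ports:
  - port: 80                # must match a key in exposed_ports
    targetPort: 8080        # illustrative container port
```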
`autoneg` will detect the NEGs that are created by the GKE NEG controller, and register them with the backend service specified in the `autoneg` configuration annotation. Only the NEGs created by the GKE NEG controller will be added to or removed from your backend service, so this mechanism should be safe to use across multiple clusters.
Note: `autoneg` will initialize the `capacityScaler` variable to 1 on new registrations. On any subsequent changes, `autoneg` will leave whatever value is already set there. The `capacityScaler` mechanism can be used orthogonally by interactive tooling to manage traffic shifting in use cases such as deployment or failover.
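As a sketch of that pattern, external tooling could shift traffic by adjusting `capacityScaler` on a registered backend via `gcloud`; the NEG name and zone below are hypothetical, and the backend service name is taken from the annotation example above.

```sh
# Drain half the traffic from one cluster's NEG backend (names and zone are illustrative).
gcloud compute backend-services update-backend autoneg_test \
  --global \
  --network-endpoint-group=k8s1-example-neg \
  --network-endpoint-group-zone=us-central1-a \
  --capacity-scaler=0.5
```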
Specify options to configure the backends representing the NEGs that will be associated with the backend service. These options correspond to fields in the `backends` section of the backend service REST resource definition; an illustrative excerpt follows the option list below. Only the options listed here are available in `autoneg`.
* `name`: optional. The name of the backend service to register backends with. Defaults to the GKE service name.
* `max_rate_per_endpoint`: required. Integer representing the maximum rate (requests per second) a pod can handle.
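To illustrate how these settings surface on the GCP side, a registered NEG might appear roughly as follows in the `backends` section of the backend service resource; the project, zone, and NEG name are hypothetical.

```yaml
backends:
- group: https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/networkEndpointGroups/k8s1-example-neg
  balancingMode: RATE
  maxRatePerEndpoint: 1000   # from max_rate_per_endpoint in the autoneg annotation
  capacityScaler: 1.0        # initialized to 1 by autoneg on new registrations
```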
As `autoneg` accesses GCP APIs, you must ensure that the controller has authorization to call those APIs. To follow the principle of least privilege, it is recommended that you configure your cluster with Workload Identity to limit permissions to a GCP service account that `autoneg` operates under. If you choose not to use Workload Identity, you will need to create your GKE cluster with the "cloud-platform" scope.
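For orientation, here is a hedged sketch of the two cluster configurations mentioned above; the cluster name, zone, and project are placeholders, and exact flags may vary with your GKE version.

```sh
# Cluster with Workload Identity enabled (recommended); all names are placeholders.
gcloud container clusters create my-cluster \
  --zone=us-central1-a \
  --workload-pool=myproject.svc.id.goog

# Alternative: grant the node pool the "cloud-platform" OAuth scope instead.
gcloud container clusters create my-cluster \
  --zone=us-central1-a \
  --scopes=cloud-platform
```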
First, to set up the GCP resources necessary to support Workload Identity, run the script:

```sh
PROJECT=myproject deploy/workload_identity.sh
```
Then, on each cluster in your project where you'd like to install `autoneg`, run these two commands:

```sh
kubectl apply -f deploy/autoneg.yaml
kubectl annotate sa -n autoneg-system default \
  iam.gke.io/gcp-service-account=autoneg-system@${PROJECT}.iam.gserviceaccount.com
```
This will create all the Kubernetes resources required to support `autoneg` and annotate the default service account in the `autoneg-system` namespace to associate a GCP service account using Workload Identity.
`autoneg` is based on Kubebuilder, and as such, you can customize and deploy `autoneg` according to the Kubebuilder "Run It On the Cluster" section of the Quick Start. `autoneg` does not define a CRD, so you can skip any Kubebuilder steps involving CRDs.
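Here is a hedged sketch of that Kubebuilder flow, assuming the standard scaffolded Makefile targets; the registry path and tag are placeholders.

```sh
# Build and push a custom controller image, then deploy it to the current cluster.
make docker-build docker-push IMG=gcr.io/myproject/autoneg:custom
make deploy IMG=gcr.io/myproject/autoneg:custom
```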
The included `deploy/autoneg.yaml` is the default output of Kubebuilder's `make deploy` step, coupled with a public image. Do keep in mind the additional configuration to enable Workload Identity.