Create grpc deployment when starting operator #332

Open

wants to merge 9 commits into base: main
60 changes: 60 additions & 0 deletions config/rbac/role.yaml
@@ -16,6 +16,42 @@ rules:
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
@@ -158,6 +194,30 @@ rules:
  - get
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - getporter.org
  resources:
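
These rules are the kind controller-gen generates from kubebuilder RBAC markers; a sketch of the markers that would produce the new configmaps/deployments/services permissions is below (where the project keeps its markers, typically on the reconciler, is an assumption):

//+kubebuilder:rbac:groups="",resources=configmaps,verbs=create;delete;get;list;patch;update;watch
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=create;delete;get;list;patch;update;watch
//+kubebuilder:rbac:groups="",resources=services,verbs=create;delete;get;list;patch;update;watch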
126 changes: 126 additions & 0 deletions controllers/types.go
@@ -5,6 +5,17 @@ import (

installationv1 "get.porter.sh/porter/gen/proto/go/porterapis/installation/v1alpha1"
"google.golang.org/grpc"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/utils/ptr"
)

const (
    PorterNamespace = "porter-operator-system"

Member Author:
This is already defined in controllers/installation_controller.go

    PorterGRPCName  = "porter-grpc-service"
)

type PorterClient interface {
@@ -15,3 +26,118 @@ type PorterClient interface {
type ClientConn interface {
    Close() error
}

var GrpcDeployment = &appsv1.Deployment{
    ObjectMeta: metav1.ObjectMeta{
        Name:      PorterGRPCName,
        Namespace: PorterNamespace,
        Labels: map[string]string{
            "app": "porter-grpc-service",
        },
    },
    Spec: appsv1.DeploymentSpec{
        Replicas: ptr.To(int32(1)),
        Selector: &metav1.LabelSelector{
            MatchLabels: map[string]string{
                "app": "porter-grpc-service",
            },
        },
        Template: corev1.PodTemplateSpec{
            ObjectMeta: metav1.ObjectMeta{
                Labels: map[string]string{
                    "app": "porter-grpc-service",
                },
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {
                        Name:  "porter-grpc-service",
                        Image: "ghcr.io/getporter/server:v1.1.0",

Collaborator:
Can we make the image version a const? I think it's fine to have the server version compiled into the operator controller to ensure compatibility, but it should be easy to upgrade when we want to release a new version.

Member Author:
I can update this; one way to hoist it into a const is sketched after the GrpcDeployment definition below.

                        Ports: []corev1.ContainerPort{
                            {
                                Name:          "grpc",
                                ContainerPort: 3001,
                            },
                        },
                        Args: []string{"api-server", "run"},
                        VolumeMounts: []corev1.VolumeMount{
                            {
                                MountPath: "/porter-config",
                                Name:      "porter-grpc-service-config-volume",
                            },
                        },
                        Resources: corev1.ResourceRequirements{
                            Limits: corev1.ResourceList{
                                corev1.ResourceCPU:    resource.MustParse("2000m"),
                                corev1.ResourceMemory: resource.MustParse("512Mi"),
                            },
                            Requests: corev1.ResourceList{
                                corev1.ResourceCPU:    resource.MustParse("100m"),
                                corev1.ResourceMemory: resource.MustParse("32Mi"),
                            },
                        },
                    },
                },
                Volumes: []corev1.Volume{
                    {
                        Name: "porter-grpc-service-config-volume",
                        VolumeSource: corev1.VolumeSource{
                            ConfigMap: &corev1.ConfigMapVolumeSource{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "porter-grpc-service-config",
                                },
                                Items: []corev1.KeyToPath{
                                    {
                                        Key:  "config",
                                        Path: "config.yaml",
                                    },
                                },
                            },
                        },
                    },
                },
            },
        },
    },
}
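
A minimal sketch of the reviewer's suggestion above: hoist the image tag into a const so the pinned server version is easy to bump at release time (the const name here is a placeholder, not something this PR defines):

const (
    // porterServerImage pins the Porter server version the operator deploys;
    // bump this when releasing against a newer server.
    porterServerImage = "ghcr.io/getporter/server:v1.1.0"
)

The container spec would then set Image: porterServerImage instead of the inline string.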

var GrpcService = &corev1.Service{
    ObjectMeta: metav1.ObjectMeta{
        Name:      PorterGRPCName,
        Namespace: PorterNamespace,
        Labels: map[string]string{
            "app": "porter-grpc-service",
        },
    },
    Spec: corev1.ServiceSpec{
        Ports: []corev1.ServicePort{
            {
                Protocol:   corev1.ProtocolTCP,
                TargetPort: intstr.FromInt(3001),
                Port:       int32(3001),
            },
        },
        Selector: map[string]string{"app": "porter-grpc-service"},
        Type:     corev1.ServiceTypeClusterIP,
    },
}
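
For context, an in-cluster client reaches this Service through cluster DNS on port 3001; a minimal dial sketch, assuming the server listens for plaintext gRPC (TLS is not configured anywhere in this PR):

import (
    "context"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

// dialPorterGRPC opens a client connection to the Service defined above.
// porter-grpc-service.porter-operator-system resolves to the ClusterIP,
// and 3001 matches both Port and TargetPort in GrpcService.
func dialPorterGRPC(ctx context.Context) (*grpc.ClientConn, error) {
    return grpc.DialContext(ctx, "porter-grpc-service.porter-operator-system:3001",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
}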

var GrpcConfigMap = &corev1.ConfigMap{

Collaborator:
This is where I think the complexity is for managing the grpc server as part of the controller. The porter config that this uses HAS to be the porter config that the installations used when running, so that means it needs to use the porter config for the namespace. We put the restriction in to only allow a single porter config per namespace, instead of a porter config being defined for every installation. The operator sets up a default config if one doesn't exist. This service should be using the porter config that's applied to the namespace, either the default one or the user-provided one. It also has to track if that PorterConfig ever changes and then reload with the new config so that it can hit the backing stores where the installations actually live.

Btw @schristoff we should add support for moving porter storage data from one backend to another. This will "just work" for secrets as long as the secrets are managed externally for the new store like they should be but for storage backend AFAIK there's no way to move from one DB to another if that changes in the porter config and we should definitely support that!

Member Author:
> The porter config that this uses HAS to be the porter config that the installations used when running, so that means it needs to use the porter config for the namespace. We put the restriction in to only allow a single porter config per namespace, instead of a porter config being defined for every installation. The operator sets up a default config if one doesn't exist. This service should be using the porter config that's applied to the namespace, either the default one or the user-provided one. It also has to track if that PorterConfig ever changes

That means we can't do this implementation, because this is trying to install the grpc server before we even do an installation. If the installation needs to succeed first in order to create a deployment/configmap/service that gets the Porter config into the namespace (that resource isn't created until something makes it get created), then we will have to do that after the first installation. Relying on the default Porter config at runtime seems a little tricky, as that default makes assumptions about the installation resource process.

What I can do is move this to be done once at setup, during the first installation, and dynamically create the configmap that maps to the porter config used by the installation in the namespace once that installation is complete.

Collaborator:
This is really interesting because of how the Porter config gets resolved for an AgentAction:

func (r *AgentActionReconciler) resolvePorterConfig(ctx context.Context, log logr.Logger, action *porterv1.AgentAction) (porterv1.PorterConfigSpec, error) {

We probably need to create a "default" grpc server in the operator namespace, then have namespace-specific servers IF a PorterConfig is specified for that namespace BUT not for the system... This just feels gross... But resolvePorterConfig should be able to handle checking whether a grpc server exists that can handle the AgentAction, based on the PorterConfig selected to run that AgentAction (a rough sketch of such a check follows the types.go changes below).

    ObjectMeta: metav1.ObjectMeta{
        Name:      "porter-grpc-service-config",
        Namespace: PorterNamespace,
    },
    Data: map[string]string{
        "config": ConfigmMapConfig,
    },
}

var ConfigmMapConfig = `
default-secrets-plugin: "kubernetes.secrets"
default-storage: "mongodb"
storage:
  - name: "mongodb"
    plugin: "mongodb"
    config:
      url: "mongodb://root:demopasswd@porter-operator-mongodb.demo.svc.cluster.local"
`
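
A rough sketch of the check discussed above: a hypothetical helper (not part of this PR) that a reconciler could call to see whether a grpc server Deployment already exists in the namespace whose PorterConfig was resolved for an AgentAction:

import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// grpcServerExists reports whether a porter-grpc-service Deployment is already
// present in the given namespace; a caller could then decide to create one that
// uses that namespace's PorterConfig instead of the operator-wide default.
func grpcServerExists(ctx context.Context, c client.Client, namespace string) (bool, error) {
    var dep appsv1.Deployment
    err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: PorterGRPCName}, &dep)
    if apierrors.IsNotFound(err) {
        return false, nil
    }
    if err != nil {
        return false, err
    }
    return true, nil
}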
54 changes: 54 additions & 0 deletions main.go
@@ -1,6 +1,7 @@
package main

import (
"context"
"flag"
"os"

@@ -9,10 +10,13 @@ import (

_ "k8s.io/client-go/plugin/pkg/client/auth"

"golang.org/x/sync/errgroup"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"sigs.k8s.io/controller-runtime/pkg/metrics/server"
@@ -34,14 +38,17 @@ func init() {
}

func main() {
    ctx := context.Background()
    var metricsAddr string
    var enableLeaderElection bool
    var probeAddr string
    var createGrpc bool
    flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
    flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
    flag.BoolVar(&enableLeaderElection, "leader-elect", false,
        "Enable leader election for controller manager. "+
            "Enabling this will ensure there is only one active controller manager.")
    flag.BoolVar(&createGrpc, "create-grpc", true, "create grpc deployment for use in operator")
    opts := zap.Options{
        Development: true,
    }
@@ -116,6 +123,53 @@ func main() {
        os.Exit(1)
    }

    if createGrpc {
        g, ctx := errgroup.WithContext(ctx)
        g.Go(func() error {
            k8sClient := mgr.GetClient()
            err := k8sClient.Create(ctx, controllers.GrpcConfigMap, &client.CreateOptions{})
            if err != nil {
                if apierrors.IsAlreadyExists(err) {
                    setupLog.Info("configmap already exists, not creating")
                    return nil
                }
                setupLog.Info("error creating configmap", "error", err.Error())
            }

            err = k8sClient.Create(ctx, controllers.GrpcDeployment, &client.CreateOptions{})
            if err != nil {
                if apierrors.IsAlreadyExists(err) {
                    setupLog.Info("deployment already exists, not creating")
                    return nil
                }
                setupLog.Info("error creating deployment", "error", err.Error())
            }
            // NOTE: Don't crash; just skip deploying if Create fails for any reason other than already exists.
            return nil
        })

        g.Go(func() error {
            k8sClient := mgr.GetClient()
            err := k8sClient.Create(ctx, controllers.GrpcService, &client.CreateOptions{})
            if err != nil {
                if apierrors.IsAlreadyExists(err) {
                    setupLog.Info("service already exists, not creating")
                    return nil
                }
                setupLog.Info("error creating service", "error", err.Error())
            }
            return nil
        })

        go func() {
            if err := g.Wait(); err != nil {
                setupLog.Error(err, "error with async creation of the grpc deployment")
                os.Exit(1)
            }
            setupLog.Info("grpc server has been created")
        }()
    }

setupLog.Info("starting manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
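
One possible factoring of the bootstrap above: loop over the three objects with shared already-exists handling, so other errors are returned to the caller rather than only logged. This is an illustration of the controller-runtime calls involved, not what the PR implements; the controllers import path is assumed from the repository layout.

import (
    "context"

    "github.com/go-logr/logr"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "sigs.k8s.io/controller-runtime/pkg/client"

    "get.porter.sh/operator/controllers"
)

// createGrpcResources creates the ConfigMap, Deployment, and Service for the
// grpc server if they do not already exist, returning any other error.
func createGrpcResources(ctx context.Context, c client.Client, log logr.Logger) error {
    objs := []client.Object{
        controllers.GrpcConfigMap,
        controllers.GrpcDeployment,
        controllers.GrpcService,
    }
    for _, obj := range objs {
        if err := c.Create(ctx, obj); err != nil {
            if apierrors.IsAlreadyExists(err) {
                log.Info("grpc resource already exists, not creating", "name", obj.GetName())
                continue
            }
            return err
        }
    }
    return nil
}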