
NETOBSERV-736: Added downstream deployment configuration for monitoring collection #282

Merged · 1 commit · Mar 7, 2023
9 changes: 8 additions & 1 deletion .mk/development.mk
Original file line number Diff line number Diff line change
@@ -64,7 +64,7 @@ deploy-kafka-tls:
kubectl create namespace $(NAMESPACE) --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -f "https://strimzi.io/install/latest?namespace="$(NAMESPACE) -n $(NAMESPACE)
kubectl apply -f "https://raw.githubusercontent.com/netobserv/documents/main/examples/kafka/metrics-config.yaml" -n $(NAMESPACE)
curl -s -L "https://raw.githubusercontent.com/netobserv/documents/main/examples/kafka/tls.yaml" | envsubst | kubectl apply -n $(NAMESPACE) -f -
curl -s -L "https://raw.githubusercontent.com/netobserv/documents/main/examples/kafka/tls.yaml" | envsubst | kubectl apply -n $(NAMESPACE) -f -
@echo -e "\n==>Using storage class ${DEFAULT_SC}"
kubectl apply -f "https://raw.githubusercontent.com/netobserv/documents/main/examples/kafka/topic.yaml" -n $(NAMESPACE)
kubectl apply -f "https://raw.githubusercontent.com/netobserv/documents/main/examples/kafka/user.yaml" -n $(NAMESPACE)
@@ -164,3 +164,10 @@ set-plugin-image:
kubectl wait -n $(NAMESPACE) --timeout=60s --for condition=Available=True deployment netobserv-controller-manager
kubectl rollout status -n $(NAMESPACE) --timeout=60s deployment netobserv-plugin
kubectl wait -n $(NAMESPACE) --timeout=60s --for condition=Available=True deployment netobserv-plugin

.PHONY: set-release-kind-downstream
set-release-kind-downstream:
kubectl -n $(NAMESPACE) set env deployment netobserv-controller-manager -c "manager" DOWNSTREAM_DEPLOYMENT=true
@echo -e "\n==> Redeploying..."
kubectl rollout status -n $(NAMESPACE) --timeout=60s deployment netobserv-controller-manager
kubectl wait -n $(NAMESPACE) --timeout=60s --for condition=Available=True deployment netobserv-controller-manager
10 changes: 10 additions & 0 deletions DEVELOPMENT.md
@@ -176,3 +176,13 @@ cd hack

The metrics will be visible in the OpenShift console under `Observe -> Metrics`.
Look for metrics that begin with `netobserv_`.

## Simulating a downstream deployment

To configure the operator to run as a downstream deployment, run this command:

```
make set-release-kind-downstream
```

The most notable change concerns monitoring, which will use the platform monitoring stack instead of the user workload monitoring stack.
14 changes: 10 additions & 4 deletions bundle/manifests/netobserv-operator.clusterserviceversion.yaml
@@ -509,17 +509,18 @@ spec:
- apiGroups:
- ""
resources:
- nodes
- pods
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
- nodes
- pods
- services
verbs:
- get
- list
@@ -576,6 +577,7 @@ spec:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
- rolebindings
verbs:
- create
- delete
@@ -587,6 +589,7 @@ spec:
- rbac.authorization.k8s.io
resources:
- clusterroles
- roles
verbs:
- create
- delete
@@ -649,6 +652,7 @@ spec:
- --ebpf-agent-image=$(RELATED_IMAGE_EBPF_AGENT)
- --flowlogs-pipeline-image=$(RELATED_IMAGE_FLOWLOGS_PIPELINE)
- --console-plugin-image=$(RELATED_IMAGE_CONSOLE_PLUGIN)
- --downstream-deployment=$(DOWNSTREAM_DEPLOYMENT)
command:
- /manager
env:
@@ -658,6 +662,8 @@ spec:
value: quay.io/netobserv/flowlogs-pipeline:v0.1.8
- name: RELATED_IMAGE_CONSOLE_PLUGIN
value: quay.io/netobserv/network-observability-console-plugin:v0.1.9
- name: DOWNSTREAM_DEPLOYMENT
Contributor:

Would DOWNSTREAM_DEPLOYMENT be set to true in CPAAS pipeline stages?

Contributor (author):

Yes, the last step for this task is to update the downstream build to change DOWNSTREAM_DEPLOYMENT.
We already make this kind of change for container images such as RELATED_IMAGE_CONSOLE_PLUGIN.

Contributor:

Got it, thanks! Could you open a PR for downstream setting this env to true?

We'd have to test this PR post-merge. cc @jotak

value: "false"
image: quay.io/netobserv/network-observability-operator:1.0.2
imagePullPolicy: Always
livenessProbe:
3 changes: 3 additions & 0 deletions config/manager/manager.yaml
@@ -27,13 +27,16 @@ spec:
- --ebpf-agent-image=$(RELATED_IMAGE_EBPF_AGENT)
- --flowlogs-pipeline-image=$(RELATED_IMAGE_FLOWLOGS_PIPELINE)
- --console-plugin-image=$(RELATED_IMAGE_CONSOLE_PLUGIN)
- --downstream-deployment=$(DOWNSTREAM_DEPLOYMENT)
env:
- name: RELATED_IMAGE_EBPF_AGENT
value: quay.io/netobserv/netobserv-ebpf-agent:v0.3.0
- name: RELATED_IMAGE_FLOWLOGS_PIPELINE
value: quay.io/netobserv/flowlogs-pipeline:v0.1.8
- name: RELATED_IMAGE_CONSOLE_PLUGIN
value: quay.io/netobserv/network-observability-console-plugin:v0.1.9
- name: DOWNSTREAM_DEPLOYMENT
value: "false"
image: controller:latest
name: manager
imagePullPolicy: Always
11 changes: 7 additions & 4 deletions config/rbac/role.yaml
@@ -83,17 +83,18 @@ rules:
- apiGroups:
- ""
resources:
- nodes
- pods
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
- nodes
- pods
- services
verbs:
- get
- list
@@ -150,6 +151,7 @@ rules:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
- rolebindings
verbs:
- create
- delete
@@ -161,6 +163,7 @@ rules:
- rbac.authorization.k8s.io
resources:
- clusterroles
- roles
verbs:
- create
- delete
65 changes: 46 additions & 19 deletions controllers/flowcollector_controller.go
@@ -32,6 +32,7 @@ import (
"github.com/netobserv/network-observability-operator/controllers/reconcilers"
"github.com/netobserv/network-observability-operator/pkg/conditions"
"github.com/netobserv/network-observability-operator/pkg/discover"
"github.com/netobserv/network-observability-operator/pkg/helper"
"github.com/netobserv/network-observability-operator/pkg/watchers"
)

@@ -62,9 +63,9 @@ func NewFlowCollectorReconciler(client client.Client, scheme *runtime.Scheme, co

//+kubebuilder:rbac:groups=apps,resources=deployments;daemonsets,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=namespaces;services;serviceaccounts;configmaps,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=secrets,verbs=get;list;watch
//+kubebuilder:rbac:groups=rbac.authorization.k8s.io,resources=clusterroles,verbs=get;create;delete;watch;list
//+kubebuilder:rbac:groups=rbac.authorization.k8s.io,resources=clusterrolebindings,verbs=get;list;create;delete;update;watch
//+kubebuilder:rbac:groups=core,resources=secrets;endpoints,verbs=get;list;watch
//+kubebuilder:rbac:groups=rbac.authorization.k8s.io,resources=clusterroles;roles,verbs=get;create;delete;watch;list
//+kubebuilder:rbac:groups=rbac.authorization.k8s.io,resources=clusterrolebindings;rolebindings,verbs=get;list;create;delete;update;watch
//+kubebuilder:rbac:groups=console.openshift.io,resources=consoleplugins,verbs=get;create;delete;update;patch;list;watch
//+kubebuilder:rbac:groups=operator.openshift.io,resources=consoles,verbs=get;update;list;update;watch
//+kubebuilder:rbac:groups=flows.netobserv.io,resources=flowcollectors,verbs=get;list;watch;create;update;patch;delete
@@ -96,21 +97,15 @@ func (r *FlowCollectorReconciler) Reconcile(ctx context.Context, req ctrl.Reques

ns := getNamespaceName(desired)
r.certWatcher.Reset(ns)
// If namespace does not exist, we create it
nsExist, err := r.namespaceExist(ctx, ns)
if err != nil {
return ctrl.Result{}, err
}
if !nsExist {
err = r.Create(ctx, buildNamespace(ns))
if err != nil {
return ctrl.Result{}, r.failure(ctx, conditions.CannotCreateNamespace(err), desired)
}
}

clientHelper := r.newClientHelper(desired)
previousNamespace := desired.Status.Namespace

err = r.reconcileOperator(ctx, clientHelper, ns, desired)
if err != nil {
return ctrl.Result{}, err
}

// Create reconcilers
flpReconciler := flowlogspipeline.NewReconciler(ctx, clientHelper, ns, previousNamespace, r.config.FlowlogsPipelineImage, &r.permissions, r.availableAPIs)
var cpReconciler consoleplugin.CPReconciler
@@ -297,16 +292,48 @@ func getNamespaceName(desired *flowslatest.FlowCollector) string {
return constants.DefaultOperatorNamespace
}

func (r *FlowCollectorReconciler) namespaceExist(ctx context.Context, nsName string) (bool, error) {
err := r.Get(ctx, types.NamespacedName{Name: nsName}, &corev1.Namespace{})
func (r *FlowCollectorReconciler) namespaceExist(ctx context.Context, nsName string) (*corev1.Namespace, error) {
ns := &corev1.Namespace{}
err := r.Get(ctx, types.NamespacedName{Name: nsName}, ns)
if err != nil {
if errors.IsNotFound(err) {
return false, nil
return nil, nil
}
log.FromContext(ctx).Error(err, "Failed to get namespace")
return false, err
return nil, err
}
return ns, nil
}

func (r *FlowCollectorReconciler) reconcileOperator(ctx context.Context, clientHelper reconcilers.ClientHelper, ns string, desired *flowslatest.FlowCollector) error {
// If namespace does not exist, we create it
nsExist, err := r.namespaceExist(ctx, ns)
if err != nil {
return err
}
return true, nil
desiredNs := buildNamespace(ns, r.config.DownstreamDeployment)
if nsExist == nil {
err = r.Create(ctx, desiredNs)
if err != nil {
return r.failure(ctx, conditions.CannotCreateNamespace(err), desired)
}
} else if !helper.IsSubSet(nsExist.ObjectMeta.Labels, desiredNs.ObjectMeta.Labels) {
err = r.Update(ctx, desiredNs)
if err != nil {
return err
}
}
if r.config.DownstreamDeployment {
desiredRole := buildRoleMonitoringReader(ns)
if err := clientHelper.ReconcileRole(ctx, desiredRole); err != nil {
return err
}
desiredBinding := buildRoleBindingMonitoringReader(ns)
if err := clientHelper.ReconcileRoleBinding(ctx, desiredBinding); err != nil {
return err
}
}
return nil
}

// checkFinalizer returns true (and/or error) if the calling function needs to return
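The new `reconcileOperator` flow above — create the namespace if it is missing, update it when the downstream monitoring label is absent — can be sketched as a small pure-Go decision function. The map-based model and the `reconcileNamespace` name below are illustrative only; the real code works against the Kubernetes API via the controller-runtime client.

```go
package main

import "fmt"

// reconcileNamespace models the namespace decision in reconcileOperator.
// existing is nil when the namespace does not exist yet; downstream drives
// whether the openshift.io/cluster-monitoring label is desired.
func reconcileNamespace(existing map[string]string, downstream bool) string {
	desired := map[string]string{}
	if downstream {
		desired["openshift.io/cluster-monitoring"] = "true"
	}
	if existing == nil {
		return "create"
	}
	// Mirrors the helper.IsSubSet check: update only if a desired label is missing.
	for k, v := range desired {
		if existing[k] != v {
			return "update"
		}
	}
	return "noop"
}

func main() {
	fmt.Println(reconcileNamespace(nil, true))                 // create
	fmt.Println(reconcileNamespace(map[string]string{}, true)) // update
	fmt.Println(reconcileNamespace(map[string]string{"openshift.io/cluster-monitoring": "true"}, true)) // noop
}
```

Note that an upstream deployment (downstream=false) desires no labels, so an existing namespace is always left untouched.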
53 changes: 51 additions & 2 deletions controllers/flowcollector_objects.go
Original file line number Diff line number Diff line change
@@ -1,14 +1,63 @@
package controllers

import (
"github.com/netobserv/network-observability-operator/controllers/constants"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func buildNamespace(ns string) *corev1.Namespace {
const (
downstreamLabelKey = "openshift.io/cluster-monitoring"
downstreamLabelValue = "true"
roleSuffix = "-metrics-reader"
monitoringServiceAccount = "prometheus-k8s"
monitoringNamespace = "openshift-monitoring"
)

func buildNamespace(ns string, isDownstream bool) *corev1.Namespace {
labels := map[string]string{}
if isDownstream {
labels[downstreamLabelKey] = downstreamLabelValue
}
return &corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: ns,
Name: ns,
Labels: labels,
},
}
}

func buildRoleMonitoringReader(ns string) *rbacv1.Role {
cr := rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: constants.OperatorName + roleSuffix,
Namespace: ns,
},
Rules: []rbacv1.PolicyRule{{APIGroups: []string{""},
Verbs: []string{"get", "list", "watch"},
Resources: []string{"pods", "services", "endpoints"},
},
},
}
return &cr
}

func buildRoleBindingMonitoringReader(ns string) *rbacv1.RoleBinding {
return &rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: constants.OperatorName + roleSuffix,
Namespace: ns,
},
RoleRef: rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Kind: "Role",
Name: constants.OperatorName + roleSuffix,
},
Subjects: []rbacv1.Subject{{
Kind: "ServiceAccount",
Name: monitoringServiceAccount,
Namespace: monitoringNamespace,
}},
}
}
2 changes: 2 additions & 0 deletions controllers/operator/config.go
@@ -10,6 +10,8 @@ type Config struct {
FlowlogsPipelineImage string
// ConsolePluginImage is the image of the Console Plugin that is managed by the operator
ConsolePluginImage string
// DownstreamDeployment is true when the release kind is downstream, false for upstream
DownstreamDeployment bool
}

func (cfg *Config) Validate() error {
45 changes: 45 additions & 0 deletions controllers/reconcilers/client_helper.go
@@ -112,6 +112,33 @@ func (c *ClientHelper) ReconcileClusterRoleBinding(ctx context.Context, desired
return c.UpdateOwned(ctx, &actual, desired)
}

func (c *ClientHelper) ReconcileRoleBinding(ctx context.Context, desired *rbacv1.RoleBinding) error {
actual := rbacv1.RoleBinding{}
if err := c.Get(ctx, types.NamespacedName{Name: desired.Name, Namespace: desired.Namespace}, &actual); err != nil {
if errors.IsNotFound(err) {
return c.CreateOwned(ctx, desired)
}
return fmt.Errorf("can't reconcile RoleBinding %s: %w", desired.Name, err)
}
if actual.RoleRef != desired.RoleRef {
// RoleRef cannot be updated: delete the old RoleBinding and create a new one
log := log.FromContext(ctx)
log.Info("Deleting old RoleBinding", "Namespace", actual.GetNamespace(), "Name", actual.GetName())
if err := c.Delete(ctx, &actual); err != nil {
log.Error(err, "error deleting old RoleBinding", "Namespace", actual.GetNamespace(), "Name", actual.GetName())
}
return c.CreateOwned(ctx, desired)
}
if helper.IsSubSet(actual.Labels, desired.Labels) &&
reflect.DeepEqual(actual.Subjects, desired.Subjects) {
// RoleBinding already reconciled. Exiting
return nil
}
return c.UpdateOwned(ctx, &actual, desired)
}


func (c *ClientHelper) ReconcileClusterRole(ctx context.Context, desired *rbacv1.ClusterRole) error {
actual := rbacv1.ClusterRole{}
if err := c.Get(ctx, types.NamespacedName{Name: desired.Name}, &actual); err != nil {
@@ -129,3 +156,21 @@ func (c *ClientHelper) ReconcileClusterRole(ctx context.Context, desired *rbacv1

return c.UpdateOwned(ctx, &actual, desired)
}

func (c *ClientHelper) ReconcileRole(ctx context.Context, desired *rbacv1.Role) error {
actual := rbacv1.Role{}
if err := c.Get(ctx, types.NamespacedName{Name: desired.Name, Namespace: desired.Namespace}, &actual); err != nil {
if errors.IsNotFound(err) {
return c.CreateOwned(ctx, desired)
}
return fmt.Errorf("can't reconcile Role %s: %w", desired.Name, err)
}

if helper.IsSubSet(actual.Labels, desired.Labels) &&
reflect.DeepEqual(actual.Rules, desired.Rules) {
// role already reconciled. Exiting
return nil
}

return c.UpdateOwned(ctx, &actual, desired)
}
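The branching in `ReconcileRoleBinding` amounts to a small decision table: missing object means create; a changed RoleRef forces delete + recreate because Kubernetes treats `roleRef` as immutable; otherwise labels and subjects decide between no-op and update. The sketch below models that table in plain Go — the `action` helper is hypothetical, not part of the PR.

```go
package main

import "fmt"

// action models the decision table of ReconcileRoleBinding. RoleRef is
// immutable on a RoleBinding, so a RoleRef change maps to delete+create
// rather than an in-place update.
func action(exists, roleRefEqual, labelsAndSubjectsEqual bool) string {
	if !exists {
		return "create"
	}
	if !roleRefEqual {
		return "delete+create" // RoleRef cannot be updated in place
	}
	if labelsAndSubjectsEqual {
		return "noop"
	}
	return "update"
}

func main() {
	fmt.Println(action(false, true, true)) // create
	fmt.Println(action(true, false, true)) // delete+create
	fmt.Println(action(true, true, true))  // noop
	fmt.Println(action(true, true, false)) // update
}
```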
1 change: 1 addition & 0 deletions controllers/suite_test.go
@@ -167,6 +167,7 @@ func NewTestFlowCollectorReconciler(client client.Client, scheme *runtime.Scheme
EBPFAgentImage: "registry-proxy.engineering.redhat.com/rh-osbs/network-observability-ebpf-agent@sha256:6481481ba23375107233f8d0a4f839436e34e50c2ec550ead0a16c361ae6654e",
FlowlogsPipelineImage: "registry-proxy.engineering.redhat.com/rh-osbs/network-observability-flowlogs-pipeline@sha256:6481481ba23375107233f8d0a4f839436e34e50c2ec550ead0a16c361ae6654e",
ConsolePluginImage: "registry-proxy.engineering.redhat.com/rh-osbs/network-observability-console-plugin@sha256:6481481ba23375107233f8d0a4f839436e34e50c2ec550ead0a16c361ae6654e",
DownstreamDeployment: false,
},
}
}
1 change: 1 addition & 0 deletions main.go
@@ -91,6 +91,7 @@ func main() {
flag.StringVar(&config.EBPFAgentImage, "ebpf-agent-image", "quay.io/netobserv/netobserv-ebpf-agent:main", "The image of the eBPF agent")
flag.StringVar(&config.FlowlogsPipelineImage, "flowlogs-pipeline-image", "quay.io/netobserv/flowlogs-pipeline:main", "The image of Flowlogs Pipeline")
flag.StringVar(&config.ConsolePluginImage, "console-plugin-image", "quay.io/netobserv/network-observability-console-plugin:main", "The image of the Console Plugin")
flag.BoolVar(&config.DownstreamDeployment, "downstream-deployment", false, "Whether this deployment is a downstream deployment or not")
flag.BoolVar(&versionFlag, "v", false, "print version")
opts := zap.Options{
Development: true,
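The flag wiring in `main.go` can be exercised in isolation. The sketch below mirrors the PR's `-downstream-deployment` boolean flag against a stand-in `Config` struct (the real struct lives in `controllers/operator`, and the real code parses the process-wide flag set rather than a local one).

```go
package main

import (
	"flag"
	"fmt"
)

// Config stands in for the operator's config struct; only the field added
// by this PR is modeled here.
type Config struct {
	DownstreamDeployment bool
}

// parse wires the -downstream-deployment flag into a Config, defaulting to
// false, the same default used in config/manager/manager.yaml.
func parse(args []string) Config {
	var cfg Config
	fs := flag.NewFlagSet("manager", flag.ContinueOnError)
	fs.BoolVar(&cfg.DownstreamDeployment, "downstream-deployment", false,
		"Whether this deployment is a downstream deployment or not")
	_ = fs.Parse(args)
	return cfg
}

func main() {
	fmt.Println(parse([]string{"--downstream-deployment=true"}).DownstreamDeployment) // true
	fmt.Println(parse(nil).DownstreamDeployment)                                      // false
}
```

The downstream build would then flip this to true via the DOWNSTREAM_DEPLOYMENT env var substituted into the CSV, as discussed in the review comments.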