
Add tooling to customize the all-in-one manifests #2406

Closed
sebgl opened this issue Jan 10, 2020 · 11 comments · Fixed by #3124
Assignees
Labels
>feature Adds or discusses adding a feature to the product

Comments

@sebgl
Contributor

sebgl commented Jan 10, 2020

Some users would like to run the operator in a particular namespace, with restricted RBAC access:

Issue #374 and this design doc mention a CLI tool to help generate the correct manifest according to one's needs:

  • operator deployed cluster-wide vs. in a single namespace
  • reduced RBAC privileges according to the above choice
  • list of namespaces for the operator to monitor

Detailing every manual patch to be made in a documentation page seems quite complicated, so a CLI/script/helm/kustomize approach seems more appropriate to me. We could also rely on it to customize the E2E test manifests.

@sebgl sebgl added the >feature label Jan 10, 2020
@barkbay
Contributor

barkbay commented Jan 30, 2020

The tool would need to take into account the cluster role needed to create SubjectAccessReviews in the context of #2468; see #2482 (comment).

@pebrc
Collaborator

pebrc commented Feb 6, 2020

We discussed offline:

  • let's explore how far we can get with kustomize
  • it has the benefit of being integrated into recent versions of kubectl
  • it would allow us to keep an installation experience similar to the one we have now
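The kustomize route sketched above could look roughly like this. This is a hedged illustration only: the manifest file name, resource names, and the --namespaces operator flag are assumptions, not the confirmed ECK layout.

```shell
# Hypothetical kustomize overlay restricting the operator to one namespace.
# All names and the patched flag below are illustrative assumptions.
mkdir -p eck-overlay
cat > eck-overlay/kustomization.yaml <<'EOF'
resources:
  - all-in-one.yaml            # the downloaded all-in-one manifest
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: StatefulSet
      name: elastic-operator
      namespace: elastic-system
    path: operator-args-patch.yaml
EOF
cat > eck-overlay/operator-args-patch.yaml <<'EOF'
- op: add
  path: /spec/template/spec/containers/0/args/-
  value: --namespaces=my-ns
EOF
# Recent kubectl versions can build and apply the overlay directly:
# kubectl apply -k eck-overlay/
```

The appeal of this shape is that the upstream all-in-one manifest stays untouched; only the small JSON patch is user-maintained.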

@sebgl
Contributor Author

sebgl commented Apr 2, 2020

A good first step could be to document how to tweak the existing manifests. It came as a question again: https://discuss.elastic.co/t/difficulty-installing-eck-in-a-single-namespace/225891.

@pebrc
Collaborator

pebrc commented Apr 2, 2020

Regarding this Discuss forum comment: if the configuration that the Makefile in config spits out when you run make generate-namespace is invalid, we should maybe just fix that?

@sebgl
Contributor Author

sebgl commented Apr 2, 2020

Regarding this Discuss forum comment: if the configuration that the Makefile in config spits out when you run make generate-namespace is invalid, we should maybe just fix that?

That's also an option, yes.

I would be in favor of removing all those files that we have a hard time keeping in sync with each other. I actually thought we had already done that when removing the namespace/global operators concept.

@pebrc
Collaborator

pebrc commented Apr 2, 2020

Just tested on a fresh k8s cluster, and our single-namespace make target seems to work fine:

env OPERATOR_NAME=eck-restricted \
    OPERATOR_IMAGE=docker.elastic.co/eck-snapshots/eck-operator:1.1.0-2020-03-30-5ae99238 \
    NAMESPACE=my-ns MANAGED_NAMESPACES=my-ns \
    make generate-namespace | kubectl apply -f -

This gives you a restricted single-namespace deployment of ECK. I have not double-checked whether all the RBAC rules are as minimal as possible, but I think this is a start.

@davelosert

I just saw this issue, and maybe my perspective as a user can help (and I also want to +1 the necessity of this 😄):

This is a much-needed feature for us, as we are running multiple Elasticsearch clusters within our Kubernetes cluster. We also have several clusters where we need this, each with different namespaces.

So our requirement is to be very flexible in terms of which namespaces the operator needs to watch.

Deploying the all-in-one solution, however, gives the operator a lot of permissions (if I understand it correctly, it can do almost anything in any namespace in the cluster). This is far from a least-privilege approach and does not feel right from a security perspective, so it is not the right option for us.

So in order to use the namespace version, we currently have to check out this repository and run the make command for every combination we need (or tweak its output), then check the results into our repository. This is very cumbersome and also hard to automate.

A Helm chart would fit best into our tool stack, as we could set a version for the operator once and then configure the namespaces to be watched in separate values.yaml files for the different clusters. Kustomize would also get the job done.

I think the bottom-line requirement here is basically: download a version of the operator once, then be able to configure it for many environments, all within the project's repository.
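A hedged sketch of that "configure once per environment" workflow with Helm: the chart name and the managedNamespaces value key below are invented for illustration, since no official ECK chart exists at this point.

```shell
# Hypothetical per-cluster values files for an (assumed) ECK Helm chart.
# The managedNamespaces key is an invented example, not a real chart value.
cat > values-cluster-a.yaml <<'EOF'
managedNamespaces:
  - team-a
  - team-b
EOF
cat > values-cluster-b.yaml <<'EOF'
managedNamespaces:
  - logging
EOF
# One pinned chart version, rendered per cluster:
# helm template eck-operator ./eck-chart -f values-cluster-a.yaml | kubectl apply -f -
```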

@charith-elastic charith-elastic self-assigned this Apr 6, 2020
@charith-elastic
Contributor

Based on the upcoming 1.1.0 release, possible ways of customizing the operator installation are as follows:

  • Give operator access to all namespaces or restrict to a set of namespaces
  • Install the CRDs or skip them (as CRDs are cluster resources, a previous installation may have already created them)
  • Enable or disable the webhook
  • Provide custom webhook certificates
  • Change the operator namespace
  • Set operator requests and limits
  • Change log level
  • Enable metrics
  • Enable tracing
  • Set default container registry
  • Set max concurrent reconciles
  • Set the validity and rotation parameters of certificates
  • Enable RBAC on references

The available tools for generating customized manifests using the above parameters are:

  • Hand-rolled templating (Makefiles, shell scripts, or even Go code)
  • Helm
  • Kustomize
  • Other less widely adopted tools such as Kpt, Pulumi, Ksonnet

Hand-rolled templating

Pros

  • It's the status quo
  • Relies on tools already available in the environment

Cons

  • Anything other than simple substitutions becomes very complicated to express
  • The templating system is neither well-known nor documented, making it hard to maintain
  • Environmental differences need to be accounted for (Linux vs. macOS, GNU vs. POSIX tooling)

Helm

Pros

  • Widely adopted
  • Well documented
  • Includes a lot of useful utility functions and features to cater for many different use cases
  • Fairly easy to express complicated logic

Cons

  • Security concerns regarding Tiller
  • Helm V2 vs. V3
  • Template expressions can get harder to read

Kustomize

Pros

  • Widely available due to being integrated into kubectl
  • Easier to understand

Cons

  • Tailored more towards enforcing conventions or ensuring required configuration is present
  • Expressing conditionals is difficult and requires extra development work in some cases
  • Documentation is sparse

Other tools

Most of these tools are fairly new (therefore, not widely in use yet) and require extra tooling to be installed.

How other popular projects handle installation

Istio

  • The istioctl binary provides a command for generating installation manifests
    • Pre-defined profiles provided for common use cases
    • Each profile can be customized by setting configuration values in the command line (very similar to Helm)
  • Helm chart is available as well but is being deprecated in favour of the istioctl manifest command described above

cert-manager

  • Helm chart
  • Alternatively, different all-in-one manifests are available for download depending on the Kubernetes version

Knative

  • Users have to apply a series of manifests depending on the components they want
  • Alternatively, an operator is available to manage installation

KubeDB

  • Helm chart
  • Shell script that customizes the manifest based on flags

CockroachDB

  • Helm chart
  • Users can also apply a series of manifests according to the configuration they need

@sebgl
Contributor Author

sebgl commented Apr 8, 2020

I wanted to highlight one big blocker for Helm: Helm does not support upgrading our CRDs' API version (see also this issue in our repo). Hence, if you have a Helm-managed ECK setup, you cannot upgrade ECK if the upgrade includes a change in CRD versioning; you would have to delete all resources first, which is pretty bad :(

According to the issue, it looks like this has been resolved recently, so it is probably worth double-checking.

@charith-elastic
Contributor

charith-elastic commented Apr 8, 2020

Indeed, Helm does not support updating or deleting CRDs. Based on discussions about this very issue in the past, as I understand it, the reasons we have been reluctant to use Helm as an official distribution method are:

  1. Missing support for updating CRDs
  2. Concerns about the security of Tiller (not all users will want to use Helm because of this)
  3. Not enough bandwidth in the team to maintain and support an official chart
  4. It's fairly trivial for users to create their own Helm charts to match their requirements

1 and 2 are probably the reasons why most other projects in the Kubernetes ecosystem provide the option to use raw manifests in place of Helm as well. Our main issue right now is the difficulty and complexity of generating manifests for the different use cases that users have. I think a good way of addressing this problem is to have a tool similar to istioctl manifest or KubeDB installer to make that process easier.

The workflow would be:

  • Users invoke the tool to generate a manifest for their use case (global operator, single namespace operator etc.)
  • Users run their own customization process on the generated manifest with Kustomize, Helm, or any other tool of their choice to add environment-specific details like labels, annotations etc.
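The second step of the proposed workflow could be sketched as follows. This is an illustrative assumption only: the generator tool does not exist yet, and the file names are placeholders standing in for its output.

```shell
# Hedged sketch: layering environment-specific labels onto a generated
# manifest with the user's own kustomize overlay. eck-generated.yaml
# stands in for the output of the (hypothetical) generator step.
mkdir -p my-overlay
touch my-overlay/eck-generated.yaml
cat > my-overlay/kustomization.yaml <<'EOF'
resources:
  - eck-generated.yaml
commonLabels:
  env: production
EOF
# kustomize build my-overlay/ | kubectl apply -f -
```

This split keeps the generator focused on the hard part (RBAC scope, webhook, CRDs) while environment-specific decoration stays entirely in the user's hands.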

@Bessonov

Bessonov commented Apr 9, 2020

I would like to add that Tiller isn't needed with Helm 3. But because we follow the helm template | kubectl apply practice anyway, it wasn't needed even with Helm 2. Since you can render raw manifests from charts, I don't see any benefit in raw manifests, except that no helm binary is needed.

Not enough bandwidth in the team to maintain and support an official chart
It's fairly trivial for users to create their own Helm charts to match their requirements

Well, if it's fairly trivial... I would prefer to have a central place for enhancements.

I think a good way of addressing this problem is to have a tool similar to istioctl

istioctl is another beast entirely. See also Istio Installer:

It is based on a fork of the Istio helm templates, refactored to increase modularity and isolation.
