
Create new lagoon type "worker" for long operating processes #2525

Closed
tobybellwood opened this issue Feb 22, 2021 · 5 comments · Fixed by #3221

Comments

@tobybellwood (Member)

Currently, Lagoon projects that have long-running (non-web) processes run them in the cli Lagoon type. This is not optimal, as the cli type should be reserved for short-running, instant spin-up uses.

We are proposing to add an additional worker type and corresponding worker-persistent type to Lagoon. The main aim of these types is to accommodate projects that have long-running, persistent, non-web-facing processes. Where a project contains a worker type and needs cronjobs, those cronjobs would run inside the worker pod instead of a separate cron pod.

The main benefit is that cli pods can be idled when they are not in use, reducing load on the cluster, and can be treated appropriately during scaling or node-shuffling events. It also means that projects can keep their web-facing containers as lean as possible, offloading backend processing or interactive tasks to an additional pod with specialist tooling.
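As a sketch of how such a service might be declared, the proposed type would slot into the existing `lagoon.type` label convention in `docker-compose.yml`. Note this is illustrative only: the service name, Dockerfile, and command are made up, and `worker` was only a proposal at the time of this comment, not a shipped type.

```yaml
# docker-compose.yml (sketch) - a long-running backend process
# declared with the PROPOSED worker type instead of abusing cli
services:
  queue-runner:
    build:
      context: .
      dockerfile: worker.dockerfile   # hypothetical Dockerfile name
    labels:
      # proposed type, analogous to the existing cli / cli-persistent pair
      lagoon.type: worker
```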

@tobybellwood tobybellwood added this to the v2.1.0 milestone Feb 22, 2021
@shreddedbacon (Member)

It would be cool to also make this extendable with a custom port:

```yaml
services:
  worker:
    labels:
      lagoon.port.enabled: true
      lagoon.port.number: 8080
      lagoon.probe.liveness: 30
      lagoon.probe.readiness: 30
```

These would be injected into the template, conditionally for the port and directly for the probes. That would give the worker type the flexibility to be a little more generic.

For the probe overrides, it would probably be good to have some sane maximums.
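For illustration, the labels above might render into the worker deployment template along these lines. The probe field names are standard Kubernetes probe fields, but the mapping from labels and the 300-second cap are assumptions, with the cap standing in for the "sane maximums" suggested above:

```yaml
# Sketch of what the generated worker deployment's probes could look like
livenessProbe:
  tcpSocket:
    port: 8080             # from lagoon.port.number
  initialDelaySeconds: 30  # from lagoon.probe.liveness, capped at e.g. 300
readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 30  # from lagoon.probe.readiness, capped likewise
```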

This could also then possibly be used for #2528

@tobybellwood (Member, Author)

I agree: a configurable internal port (still no ingress) would ultimately make it more flexible.

@Schnitzel (Contributor)

Hmm, how about we make it completely customizable and allow developers to inject Helm variables via Lagoon?
For me this has the following advantages:

  • It solves all possible cases where we need to change something in the charts, not just the port number and liveness probes.
  • Engineers don't need to learn a new YAML structure; they already know the Helm/Kubernetes structure and can use it.
  • We're not adding anything more to the docker-compose.yml structure, which is already quite cluttered and doesn't allow much flexibility (e.g. we need to separate the labels with dots).
  • Future-proofing: we don't need to add new docker-compose.yml labels if we want to change something else in the Helm variables.

one thing though:

  • As this could be used by developers to shoot themselves in the foot, I would suggest having a flag in the Lagoon project API, customHelmValues, and only allowing custom values to be injected when that flag is set to true. If it is not set, we fail the build. (This allows higher security.)
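One hypothetical shape this could take is a pass-through block in `.lagoon.yml`, honoured only when the project's customHelmValues API flag is true. Nothing below is an implemented Lagoon feature: the file location, the `helmValues` key, and the nesting are all assumptions illustrating the idea from this comment.

```yaml
# .lagoon.yml (hypothetical) - raw Helm values forwarded to the service's
# chart; the build would fail if the project's customHelmValues flag is off
environments:
  main:
    services:
      worker:
        helmValues:
          service:
            port: 8080
          livenessProbe:
            initialDelaySeconds: 30
```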

@smlx (Member) commented Feb 23, 2021

@Schnitzel if the variables didn't live in docker-compose.yml, where would developers set them?

@tobybellwood (Member, Author)

> Hmm, how about we make it completely customizable and allow developers to inject Helm variables via Lagoon?

Could this be applied to all helmcharts and not just this specific one?

The purpose of this chart is to define an always-running, non-ingress type instead of abusing the cli type; making this chart too flexible would leave it open to misuse as well!
