
.Values.replicas should be independently set for each service #341

Open
Deaddy opened this issue Jun 30, 2023 · 3 comments
Labels
Category:Enhancement Add new functionality Priority:p3-medium Normal priority

Comments

@Deaddy

Deaddy commented Jun 30, 2023

The number of replicas should be configurable independently for each service, i.e. drop .Values.replicas and instead have .Values.services.$service.replicas for each $service we deploy.

Random thoughts:

  • I think it is fine to make this a breaking change and not add another layer of defaults/ifs in the templates to keep .Values.replicas working when it is set
  • this would also be more consistent with most other major Helm charts
  • the progress of scaling all services (#15) would then also be reflected in the values file, as a non-scalable service would not have a replicas field, making it a little more self-documenting
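As a rough sketch, a per-service-only layout in values.yaml could look something like this (the service names and structure here are illustrative assumptions, not the chart's actual layout):

```yaml
# Hypothetical values.yaml excerpt: replicas live under each service,
# and a non-scalable service simply has no replicas field.
services:
  proxy:
    replicas: 3
  frontend:
    replicas: 2
  idm: {}        # not (yet) scalable, see #15: no replicas field at all
```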
@wkloucek
Contributor

wkloucek commented Jun 30, 2023

We could also treat replicas like we do resources: a global default setting plus a per-service setting, where the per-service one wins over the global one. Or do you vote for a per-service option only?
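For comparison, a global-default-plus-override scheme, mirroring how the resources setting works, might look roughly like this (a sketch; the names and fallback helper are assumptions):

```yaml
# Hypothetical values.yaml sketch: global default plus per-service override.
replicas: 1            # global default, analogous to the global resources block
services:
  proxy:
    replicas: 3        # wins over the global default
  webdav: {}           # falls back to the global replicas: 1

# In a template, the per-service value could fall back to the global one, e.g.:
#   replicas: {{ .Values.services.proxy.replicas | default .Values.replicas }}
```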

@wkloucek wkloucek added Category:Enhancement Add new functionality Priority:p3-medium Normal priority labels Jun 30, 2023
@Deaddy
Author

Deaddy commented Jul 3, 2023

Well, that is what I meant by the first point: I do not really think a global setting is useful enough to warrant the added complexity in the Helm chart.

I also expect some components like nats to require a quorum, so a single large replicas setting might also invite configuration accidents. And I guess most components do not need to scale beyond HA, whereas proxy and frontend probably need quite a few more replicas in most setups.

I think I would even argue that a global setting is kinda useless for resources as well, but having good defaults there might be more tricky than replicas: 1 for each service.

@wkloucek
Contributor

wkloucek commented Sep 7, 2023

I also expect some components like nats to require quorum

The builtin NATS does not support scaling / clustering. Therefore we have an example with an external NATS cluster. Currently you also need to ensure that replicas are set for the NATS streams; that's why we recently added NACK to the example to achieve this. See: https://github.com/owncloud/ocis-charts/tree/master/deployments/ocis-nats

There are no other components that have something like a quorum.
But there are other components that cannot be scaled beyond one replica and should be replaced by external scalable / HA components if needed (IDM = LDAP, IDP = OIDC provider -> see: https://github.com/owncloud/ocis-charts/tree/master/deployments/external-user-management). Other components are not yet scalable and are tracked here: #15

so a single large replicas setting might also invite configuration accidents. And I guess most components do not need to scale beyond HA, whereas proxy and frontend probably need quite a few more replicas in most setups.

I totally get your point and agree that we should offer a replica setting per service.

I guess we should have the following logic when it comes to replicas / HPA settings:

  • apply the global replicas setting to a service if neither an HPA (global or service-specific) nor service-specific replicas are set
  • apply the service-specific replicas setting to a service if no service-specific HPA is set
  • apply the global HPA setting to a service if neither service-specific replicas nor a service-specific HPA are set
  • apply the service-specific HPA setting to a service if one is set

In short: the service-specific setting wins. And HPA wins over the replicas setting when it has the same specificity (global / service-specific).
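The precedence above (service-specific HPA > service-specific replicas > global HPA > global replicas) could be sketched in a Helm template roughly like this; the value names and structure are assumptions, not the chart's actual layout:

```yaml
# Hypothetical Helm template sketch of the precedence order.
# $svc stands for the values block of the service being rendered.
{{- $svc := .Values.services.proxy }}
{{- if $svc.autoscaling }}
  {{- /* render a service-specific HPA; leave spec.replicas unset */ -}}
{{- else if $svc.replicas }}
replicas: {{ $svc.replicas }}
{{- else if .Values.autoscaling }}
  {{- /* render an HPA from the global autoscaling settings */ -}}
{{- else }}
replicas: {{ .Values.replicas | default 1 }}
{{- end }}
```

Note that whenever an HPA is rendered, the Deployment's spec.replicas has to stay unset so the autoscaler controls the replica count.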

I think I would even argue that a global setting is kinda useless for resources as well, but having good defaults there might be more tricky than replicas: 1 for each service.

We have this because many services have a similarly low need for resources and can be configured together this way. But it's still true that in the end you need to check whether the resources are set correctly.
