Add the possibility for the controller to be in a shared namespace #4558

Closed

panzouh opened this issue Oct 24, 2023 · 14 comments

Labels: backlog candidate, proposal

@panzouh
Contributor

panzouh commented Oct 24, 2023

Is your feature request related to a problem? Please describe.
I would like to use nginx with a sidecar whose purpose is to pull a configuration from an API and then push it to Nginx via a shared volume. My problem is that when the configuration is updated, I would like to send a SIGHUP signal to the nginx process so it gracefully shuts down its workers and reloads the configuration.

Describe the solution you'd like
The solution is simple: add this key to values.yaml:

controller:
  ## The name of the Ingress Controller daemonset or deployment.
  name: controller

  ## The kind of the Ingress Controller installation - deployment or daemonset.
  kind: deployment

  ## Shared process namespace between containers in the Ingress Controller pod.
  sharedProcessNamespace: false

And this in ./templates/deployment.yaml & ./templates/daemonset.yaml:

apiVersion: apps/v1
kind: Deployment # Same on daemonset
metadata:
  name: {{ include "nginx-ingress.controller.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "nginx-ingress.labels" . | nindent 4 }}
{{- if .Values.controller.annotations }}
  annotations: {{ toYaml .Values.controller.annotations | nindent 4 }}
{{- end }}
spec:
  {{- if not .Values.controller.autoscaling.enabled }}
  replicas: {{ .Values.controller.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "nginx-ingress.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "nginx-ingress.selectorLabels" . | nindent 8 }}
{{- if .Values.nginxServiceMesh.enable }}
        nsm.nginx.com/enable-ingress: "true"
        nsm.nginx.com/enable-egress: "{{ .Values.nginxServiceMesh.enableEgress }}"
        nsm.nginx.com/deployment: {{ include "nginx-ingress.controller.fullname" . }}
{{- end }}
{{- if .Values.controller.pod.extraLabels }}
{{ toYaml .Values.controller.pod.extraLabels | indent 8 }}
{{- end }}
{{- if or .Values.prometheus.create .Values.controller.pod.annotations }}
      annotations:
{{- if .Values.prometheus.create }}
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ .Values.prometheus.port }}"
        prometheus.io/scheme: "{{ .Values.prometheus.scheme }}"
{{- end }}
{{- if .Values.controller.pod.annotations }}
{{ toYaml .Values.controller.pod.annotations | indent 8 }}
{{- end }}
{{- end }}
    spec:
      {{- if .Values.controller.sharedProcessNamespace }}
      shareProcessNamespace: true
      {{- end }}
      # [...]
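
For illustration, once shareProcessNamespace is set, the sidecar can see the nginx master process and signal it directly. A minimal sketch of such a sidecar container follows; the container name, image, and fetch step are hypothetical placeholders:

      containers:
        - name: config-puller                       # hypothetical sidecar
          image: example.com/config-puller:latest   # hypothetical image
          volumeMounts:
            - name: shared-config
              mountPath: /etc/nginx/shared
          command: ["/bin/sh", "-c"]
          args:
            - |
              while true; do
                # Placeholder for the API pull that writes into the shared volume.
                fetch-config /etc/nginx/shared
                # pgrep -o picks the oldest matching process, i.e. the nginx master.
                kill -HUP "$(pgrep -o nginx)"
                sleep 60
              done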

Describe alternatives you've considered
For a moment I considered adding a CronJob to my cluster that fetches the configuration and runs kubectl exec into Nginx to reload it, but to me that is not the proper way to do it, because it does not take into account whether the configuration changed or not.

Edit: I also considered supporting multiple configmaps, since the flag nginxConfigMaps is named in the plural but supports only one configuration.

@panzouh added the proposal label Oct 24, 2023
@github-actions

Hi @panzouh thanks for reporting!

Be sure to check out the docs and the Contributing Guidelines while you wait for a human to take a look at this 🙂

Cheers!

@vepatel
Contributor

vepatel commented Oct 24, 2023

Hey @panzouh, reading the description it seems you're interested in directly communicating with nginx from outside the IC, so is the Ingress Controller the right project for this purpose?

@panzouh
Contributor Author

panzouh commented Oct 24, 2023

Hello, my problem does indeed concern communicating with the nginx process, but that is currently made impossible by the values in the chart and in the templates. Although nginx could expose a route or a socket to force a reload, I think it already has everything built in to do what I need: https://nginx.org/en/docs/control.html. I also think it is simpler to patch a few lines in a values file than to change nginx's behavior.
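
For reference, the control page linked above documents the signals the master process accepts, so a container sharing the PID namespace can drive a reload with a one-liner:

# HUP  - reload configuration, start new workers, gracefully shut down old ones
# QUIT - graceful shutdown
# TERM - fast shutdown
# USR1 - reopen log files
kill -HUP "$(pgrep -o nginx)"   # pgrep -o selects the oldest process, i.e. the master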

@brianehlert
Collaborator

I have been watching this and keep going back to this comment:

I also considered supporting multiple configmaps, since the flag nginxConfigMaps is named in the plural but supports only one configuration.

In the end I think everyone would be best served by supporting multiple config maps.
The NGINX Ingress Controller has a number of reload optimizations built into the controller process itself, and the system really is not designed to have some external actor also attempting to configure and reload NGINX. The risk of race conditions or source-of-truth issues becomes real when there is more than one actor trying to manage the configuration and reload nginx.

In the end we want to deliver something that stays true to a native K8s integration, keeps the K8s API as the source of truth, and thus supports any CI/CD tooling that drives configuration through the K8s API, rather than providing strong conduits to work around the way the system is implemented.

Curious what others think.

@panzouh
Contributor Author

panzouh commented Oct 25, 2023

In the end we want to deliver something that stays true to a native K8s integration, keeps the K8s API as the source of truth, and thus supports any CI/CD tooling that drives configuration through the K8s API, rather than providing strong conduits to work around the way the system is implemented.

I do agree, it is indeed a better direction considering the K8s integration, although I have not looked much at the code to evaluate the cost of this feature compared with adding a couple of keys in the values file and templates.

@panzouh
Contributor Author

panzouh commented Oct 26, 2023

I looked a little at the code; could these snippets do the trick?

// main.go
// processConfigMaps fetches each configmap named in the comma-separated
// nginx-configmaps flag, parses it, and merges the results in order.
func processConfigMaps(kubeClient *kubernetes.Clientset, cfgParams *configs.ConfigParams, nginxManager nginx.Manager, templateExecutor *version1.TemplateExecutor) *configs.ConfigParams {
	var aggregatedParams *configs.ConfigParams
	if *nginxConfigMaps != "" {
		nginxConfigMapArr := strings.Split(*nginxConfigMaps, ",")
		for _, nginxConfigMap := range nginxConfigMapArr {
			ns, name, err := k8s.ParseNamespaceName(nginxConfigMap)
			if err != nil {
				glog.Fatalf("Error parsing the nginx-configmaps argument: %v", err)
			}
			cfm, err := kubeClient.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, meta_v1.GetOptions{})
			if err != nil {
				glog.Fatalf("Error when getting %v: %v", *cfm, err)
			}
			cfgParams = configs.ParseConfigMap(cfm, *nginxPlus, *appProtect, *appProtectDos, *enableTLSPassthrough)
			if cfgParams.MainServerSSLDHParamFileContent != nil {
				fileName, err := nginxManager.CreateDHParam(*cfgParams.MainServerSSLDHParamFileContent)
				if err != nil {
					glog.Fatalf("Configmap %s/%s: Could not update dhparams: %v", ns, name, err)
				} else {
					cfgParams.MainServerSSLDHParam = fileName
				}
			}
			if cfgParams.MainTemplate != nil {
				err = templateExecutor.UpdateMainTemplate(cfgParams.MainTemplate)
				if err != nil {
					glog.Fatalf("Error updating NGINX main template: %v", err)
				}
			}
			if cfgParams.IngressTemplate != nil {
				err = templateExecutor.UpdateIngressTemplate(cfgParams.IngressTemplate)
				if err != nil {
					glog.Fatalf("Error updating ingress template: %v", err)
				}
			}
			// Merge the config params.
			if aggregatedParams == nil {
				aggregatedParams = cfgParams
			} else {
				aggregatedParams = configs.MergeConfigParams(aggregatedParams, cfgParams)
			}
		}
	}
	return aggregatedParams
}
//configs/configmaps.go
// MergeConfigParams overlays the fields set on source onto target and returns target.
func MergeConfigParams(target, source *ConfigParams) *ConfigParams {
	if target == nil {
		// If the target is nil, create a new instance to avoid modifying the input
		target = &ConfigParams{}
	}

	// Merge individual fields
	if source.HTTP2 {
		target.HTTP2 = source.HTTP2
	}
	
	// [...]
	return target
}
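
With a merge like this, the existing -nginx-configmaps flag could, hypothetically, accept a comma-separated list of namespace/name pairs, with later entries overriding earlier ones (both configmap names below are illustrative):

-nginx-configmaps=nginx-ingress/nginx-config,nginx-ingress/extra-http-config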

@danielnginx added the backlog candidate label and removed the backlog label Oct 26, 2023
@vepatel
Contributor

vepatel commented Oct 26, 2023

@panzouh you can push custom config by means of a configmap as well; see https://github.com/nginxinc/kubernetes-ingress/tree/v3.3.1/examples/shared-examples/custom-templates#example.
You can just redeploy the configmap using kubectl and it will reload nginx. I'm not really sure what the requirement for having multiple configmaps is here.
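
For example, assuming the ConfigMap manifest is saved locally as nginx-config.yaml:

kubectl apply -f nginx-config.yaml -n nginx-ingress
# The controller watches its ConfigMap, so the update regenerates the config and reloads nginx.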

@panzouh
Contributor Author

panzouh commented Oct 27, 2023

I don't think that creating a fully customized configuration for a basic http block is the correct way of dealing with the problem; I would rather push another configmap via the configmap flag.

@vepatel
Contributor

vepatel commented Oct 31, 2023

Can you clarify a bit more here: is the goal to modify nginx.conf using another configmap?

@panzouh
Contributor Author

panzouh commented Oct 31, 2023

Sure. I have a configuration which is dynamically pulled from an API and built into an http block saved in another Nginx configuration file, and in the Nginx default configmap I include the folder where that file lives.
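
For illustration, that can look roughly like this in the controller's main ConfigMap, using the http-snippets key to inject the include directive (the path is hypothetical):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  http-snippets: |
    # Pull in whatever the sidecar wrote to the shared volume.
    include /etc/nginx/shared/*.conf;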

@shaun-nx self-assigned this Nov 14, 2023
@panzouh
Contributor Author

panzouh commented Nov 16, 2023

👋 @shaun-nx When will my change be released through the helm repository?

@brianehlert
Collaborator

What is in main will go out with the next release.
The project releases quarterly.

@shaun-nx
Contributor

shaun-nx commented Nov 20, 2023

Hi @panzouh let us know if the change worked for you. If so we can close this issue 😄

@panzouh
Contributor Author

panzouh commented Nov 21, 2023

Hi, I had to clone the repo to try it out, but yes, adding sharedProcessNamespace solved my issue. I can't wait to see the next release! 😉

@panzouh closed this as completed Nov 21, 2023