Template for a Helm chart
When creating these templates, I was inspired by the chart-writing style of Kong.
In the my-project directory, you can find a template for creating a Helm chart.
Don't forget to replace myproject and myservice with your project name and service name.
Templates for services and ingresses are used to ensure consistency during deployment. These templates are stored in the _helpers.tpl file.
When deploying, you can specify the chart name and set the namespace where the chart will be deployed. The namespace can be overridden in values.yaml by modifying the namespace variable.
To support multiple deployments of the application and ensure unique object names, all objects generated by Helm from the chart are prefixed with the release name. This value can be overridden by setting fullnameOverride in values.yaml.
If needed, additional labels can be applied to all objects in the cluster by adding them to commonLabels in values.yaml.
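As a minimal sketch, the overrides mentioned above could look like this in values.yaml (key names follow the variables named in the text; the exact structure may differ in your chart):

```yaml
# Hypothetical values.yaml fragment
namespace: my-namespace         # overrides the deployment namespace
fullnameOverride: my-myservice  # overrides the release-name prefix
commonLabels:                   # extra labels applied to all objects
  team: platform
```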
_helpers.tpl contains templates for different types of services:
- myproject.loadBalancer (myservice-loadbalancer.yaml)
- myproject.nodePort (myservice-nodeport.yaml)
- myproject.ClusterIP (myservice-service.yaml)
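As a sketch, a service manifest such as myservice-service.yaml can render one of these helpers with Helm's include function (the helper name is taken from the list above):

```yaml
# myservice-service.yaml (illustrative invocation)
{{ include "myproject.ClusterIP" . }}
```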
Due to the usage of Go templates in Helm, port generation is not a trivial task. Below is an example of port generation:
```yaml
# Declare a list for ports
{{- $ports := list -}}
# If necessary, we can add a default port that will always be created
{{- $httpPort := dict -}}
{{- $_ := set $httpPort "name" "http" -}}
{{- $_ := set $httpPort "port" .Values.application.service.port -}}
{{- $_ := set $httpPort "targetPort" .Values.application.service.targetPort -}}
{{- $_ := set $httpPort "protocol" "TCP" -}}
# Add the default port to the list
{{- $ports = append $ports $httpPort -}}
# Check if port generation mode is enabled
{{- if or (eq .Values.application.service.mode "range") (eq .Values.application.service.mode "random") }}
# The most complex part: subtracting start from end to get
# the number of ports to create. `add1` is used to increase
# the count by 1, so the upper boundary is included. For
# some reason, Go templates do not treat this value as an
# integer, so we explicitly cast it to an integer. After
# that, we use `until` to generate a sequence from 0 to
# our value, and finally, we loop with `range`. In our case,
# `i` and `p` are equal, so it doesn't matter which value we use.
{{- range $i, $p := until (add1 (sub .Values.application.service.range.end .Values.application.service.range.start) | int) }}
# Create a local dictionary for a new port and fill it
# with values
{{- $udpPort := dict -}}
# Generate the name
{{- $_ := set $udpPort "name" ($i | printf "udp-%d") -}}
# Add our value to the start to get the new `port` and
# `targetPort`
{{- $_ := set $udpPort "port" (add $.Values.application.service.range.start $i) -}}
{{- $_ := set $udpPort "targetPort" (add $.Values.application.service.range.start $i) -}}
{{- $_ := set $udpPort "protocol" "UDP" -}}
# Finally, add the resulting dictionary to the list
{{- $ports = append $ports $udpPort -}}
{{- end }}
{{- end }}
```
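The collected $ports list can then be rendered inside the Service spec, for example with toYaml (a sketch; the nindent value depends on where the list is emitted):

```yaml
spec:
  ports:
    {{- toYaml $ports | nindent 4 }}
```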
The updateStrategy and the number of replicas are set according to the parameters specified in values.yaml.
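A sketch of the corresponding values.yaml parameters (key names are illustrative, not taken from the chart):

```yaml
replicaCount: 3
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0
```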
Labels and annotations are composed of the global chart-level labels and annotations plus additional labels and annotations specific to the current deployment (from values.yaml).
The probe and resources sections are inserted from values.yaml.
If a port range needs to be used, this section can be generated similarly to the method described in the "Port Range Generation" section.
Besides the usual variables, the NODE_NAME variable is passed to the container using the spec.nodeName field from the Downward API.
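The Downward API exposes spec.nodeName through a fieldRef, so the corresponding env entry in the container spec looks like this:

```yaml
env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```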
The Deployment template contains an affinity section. By default, this section uses podAntiAffinity, but you can change it to podAffinity if needed.
The choice between preferredDuringSchedulingIgnoredDuringExecution and requiredDuringSchedulingIgnoredDuringExecution is based on the value of .Values.myservice.affinity.type.
If preferredDuringSchedulingIgnoredDuringExecution is set, you can add additional rules, for example:
```yaml
preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 100
    podAffinityTerm:
      labelSelector:
        matchLabels:
          app: myservice
      topologyKey: kubernetes.io/hostname
  - weight: 80
    podAffinityTerm:
      labelSelector:
        matchLabels:
          app: anotherservice
      topologyKey: kubernetes.io/hostname
```
Similarly, for requiredDuringSchedulingIgnoredDuringExecution:
```yaml
requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchLabels:
        app: myservice
    topologyKey: kubernetes.io/hostname
  - labelSelector:
      matchLabels:
        app: anotherservice
    topologyKey: kubernetes.io/hostname
```
The template also includes topologySpreadConstraints.
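A minimal topologySpreadConstraints sketch for spreading pods across nodes (the values shown are illustrative, not taken from the chart):

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: myservice
```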
For generating ingress resources, the template myproject.ingress in _helpers.tpl is used and called in the myservice-ingress.yaml file. Additionally, the myservice-ingress.yaml file contains logic to generate the hosts section and override the default value if a relevant parameter is added in values.yaml.
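A hypothetical values.yaml fragment overriding the hosts section (the exact keys depend on how myservice-ingress.yaml reads them):

```yaml
ingress:
  hosts:
    - host: myservice.example.com
      paths:
        - path: /
          pathType: Prefix
```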
The files cluster-role-binding-myservice.yaml and role-binding.yaml contain examples of role bindings.
The files role.yaml and cluster-role-myservice.yaml contain examples of roles.
To generate a ServiceMonitor or VMServiceScrape, the template myproject.ServiceMonitoring in _helpers.tpl is used and called in the myservice-servicemonitoring.yaml file. The configuration for ServiceMonitor and VMServiceScrape is located in the metrics section of each application in values.yaml.
The template allows you to choose between Prometheus and VictoriaMetrics using the provider variable from the metrics section.
⚠️ Warning: For the creation of this entity, the necessary CRDs must be installed alongside Prometheus/VictoriaMetrics.
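A hypothetical metrics section in values.yaml (only the provider key is named in the text; the rest are illustrative):

```yaml
metrics:
  provider: prometheus  # or "victoriametrics" to render a VMServiceScrape
  port: metrics
  path: /metrics
  interval: 30s
```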
_helpers.tpl stores shared templates used across our charts. It contains:
- myproject.namespace: Template to define the namespace for the project.
- myproject.name: Template for the project name.
- myproject.fullname: Template for the full name of the project, typically combining the name with other identifiers.
- myproject.chart: Template with information about the chart, such as its name and version.
- myproject.selectorLabels: Used to add labels to selectors.
- myproject.metaLabels: Applied to all resources in the cluster, containing labels with the chart's name, version, and any additional labels declared in values.yaml.
- myservice.imageTemplates: Template for generating image URLs based on the default registry, application name, and tag. If these values are overridden in values.yaml, the overridden values are used.
- myproject.probes: Template for configuring health probes in deployments.
- myproject.resources: Template for setting resource requests and limits in deployments.
- myproject.ingress: Template for Ingress resources.
- myproject.udpIngress: Template specifically for UDP ingress (part of APIGateway).
- myproject.loadBalancer: Template for LoadBalancer services.
- myproject.nodePort: Template for NodePort services.
- myproject.ClusterIP: Template for ClusterIP services.
- myproject.serviceMonitor: Template for creating a ServiceMonitor for monitoring purposes.