This repository has been archived by the owner on Apr 12, 2023. It is now read-only.

Small changes for installing the chart on Kubernetes 1.11 #113

Open

wants to merge 4 commits into master

Changes from 1 commit

2 changes: 1 addition & 1 deletion helm/prometheus-chart/Chart.yaml
@@ -12,4 +12,4 @@ sources:
 - https://github.com/prometheus/alertmanager
 - https://github.com/prometheus/prometheus
 tillerVersion: ">=2.8.0"
-version: 0.1.0-[[ .SHA ]]
+version: 0.1.0
👍
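
For context: Helm requires the version field in Chart.yaml to be valid SemVer 2, so the literal [[ .SHA ]] placeholder (presumably substituted by CI before packaging) breaks a plain helm install straight from the repo. A minimal sketch of the file after this change, with the surrounding fields assumed:

    apiVersion: v1
    name: prometheus-chart      # assumed; taken from the chart directory name
    tillerVersion: ">=2.8.0"
    version: 0.1.0              # must be valid SemVer 2; "0.1.0-[[ .SHA ]]" only becomes valid once CI replaces the placeholder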

4 changes: 4 additions & 0 deletions helm/prometheus-chart/templates/grafana-deployment.yaml
@@ -75,9 +75,11 @@ spec:
         - name: config
           mountPath: "/etc/grafana/grafana.ini"
           subPath: grafana.ini
+        {{- if .Values.grafana.auth }}
         - name: ldap
           mountPath: "/etc/grafana/ldap.toml"
           subPath: ldap.toml
+        {{- end }}
         # Data sources to provision on startup
         - name: datasources
           mountPath: /etc/grafana/provisioning/datasources
@@ -127,12 +129,14 @@ spec:
       - name: config
         configMap:
           name: {{ template "prometheus.grafana.fullname" . }}
+      {{- if .Values.grafana.auth }}
       - name: ldap
         secret:
           secretName: {{ template "prometheus.grafana.fullname" . }}
           items:
             - key: ldap-toml
               path: ldap.toml
+      {{- end }}
       - name: datasources
         configMap:
           name: grafana-datasources
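
Together the two guards make the LDAP volumeMount and its backing Secret volume optional. The template only checks that .Values.grafana.auth is truthy; the key's real shape is not shown in this PR, so the values sketch below is an assumption:

    grafana:
      # any non-empty value here causes ldap.toml to be mounted;
      # the LDAP settings themselves (the ldap-toml Secret content) are managed separately
      auth: true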
@@ -1,4 +1,4 @@
-piVersion: v1
+apiVersion: v1
👍

Contributor: good catch

 kind: ServiceAccount
 metadata:
   labels:
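
The typo matters because a manifest without a valid apiVersion is rejected by the API server, so this ServiceAccount (and with it the release) could not be applied at all. For reference, a minimal corrected manifest with hypothetical metadata (the chart templates its own name and labels):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: prometheus        # hypothetical; the chart fills this in via a template
      labels:
        app: prometheus       # hypothetical label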
10 changes: 5 additions & 5 deletions helm/prometheus-chart/values.yaml
@@ -2,7 +2,7 @@
 alertmanager:
   ## If false, alertmanager will not be installed
   ##
-  enabled: true
+  enabled: false

   ## alertmanager container name
   ##
@@ -104,7 +104,7 @@ alertmanager:
     ## If true, alertmanager will create/use a Persistent Volume Claim
     ## If false, use emptyDir
     ##
-    enabled: true
+    enabled: false

     ## alertmanager data Persistent Volume access modes
     ## Must match those of existing PV or dynamic provisioner
@@ -360,7 +360,7 @@ server:
     ## If true, Prometheus server will create/use a Persistent Volume Claim
     ## If false, use emptyDir
     ##
-    enabled: true
+    enabled: false

     ## Prometheus server data Persistent Volume access modes
     ## Must match those of existing PV or dynamic provisioner
@@ -449,7 +449,7 @@ initChownData:
   ## If false, data ownership will not be reset at startup
   ## This allows the prometheus-server to be run with an arbitrary user
   ##
-  enabled: true
+  enabled: false
Shouldn't all of these be true?

Contributor @pipo02mix, Jan 31, 2019:

Yeah, I would like to default everything to enabled; you can always customize your values and use them in the installation command:

helm install -f myvalues.yaml
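
As the thread suggests, the flipped defaults can be restored per installation without touching the chart. A hedged myvalues.yaml sketch (the persistence key name is an assumption; this PR only shows the enabled lines inside the PVC blocks of the @@ -104 and @@ -360 hunks):

    alertmanager:
      enabled: true
      persistence:        # assumed key wrapping the "create/use a Persistent Volume Claim" block
        enabled: true
    server:
      persistence:        # assumed key, as above
        enabled: true
    initChownData:
      enabled: true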


   ## initChownData container name
   ##
@@ -868,7 +868,7 @@ grafana:
   affinity: {}

   adminUser: admin
-  # adminPassword: strongpassword
+  adminPassword: Ericom123$
ozlevka-work marked this conversation as resolved.

   ## Use an alternate scheduler, e.g. "stork".
   ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
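
A caution on the change above: committing Ericom123$ to values.yaml puts a real credential in git and in every default install. A safer sketch keeps the commented placeholder and supplies the password at install time (key path grafana.adminPassword as in this chart):

    grafana:
      adminUser: admin
      # adminPassword: leave unset here and pass it per environment, e.g.
      #   helm install -f myvalues.yaml --set grafana.adminPassword=$GRAFANA_ADMIN_PW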