Fix markdownlint. Also change 'bash' to appropriate linguist codeblock option

Signed-off-by: Scott Rigby <scott@r6by.com>
scottrigby committed Oct 9, 2020
1 parent 7dd4680 commit ec488cf
Showing 1 changed file with 20 additions and 20 deletions.
charts/kube-prometheus-stack/README.md

The Prometheus Operator does not support annotation-based discovery of services; it uses the `PodMonitor` and `ServiceMonitor` CRDs instead, as they provide far more configuration options.
For information on how to use PodMonitors/ServiceMonitors, please see the `prometheus-operator/prometheus-operator` documentation:

- [ServiceMonitors](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#include-servicemonitors)
- [PodMonitors](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#include-podmonitors)
- [Running Exporters](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/running-exporters.md)
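
For illustration, a minimal `ServiceMonitor` might look like the sketch below. The names, namespace, and port are hypothetical, and the `release` label only matters if your Prometheus `serviceMonitorSelector` filters on it; adapt everything to your own application.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app                    # hypothetical name
  namespace: monitoring
  labels:
    release: kube-prometheus-stack     # match your Prometheus serviceMonitorSelector, if it filters by label
spec:
  selector:
    matchLabels:
      app: example-app                 # selects Services carrying this label
  namespaceSelector:
    matchNames:
      - default                        # namespace where the target Services live
  endpoints:
    - port: http-metrics               # named Service port exposing /metrics
      interval: 30s
```
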
If the **prometheus-operator** values are compatible with the new **kube-prometheus-stack** chart, follow the steps below to migrate:

1. Patch the PersistentVolume created/used by the prometheus-operator chart so that its reclaim policy is `Retain`:

```console
kubectl patch pv/<PersistentVolume name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

**Note:** To execute the above command, the user must have cluster-wide permissions. Please refer to [Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/).

2. Uninstall the **prometheus-operator** release, delete the existing PersistentVolumeClaim, and verify that the PV becomes Released.

```console
helm uninstall prometheus-operator -n monitoring
kubectl delete pvc/<PersistentVolumeClaim name> -n monitoring
```
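
To check that the volume has actually reached the Released state, you can inspect it directly (same placeholder as in step 1); the STATUS column should show `Released` once the claim is gone:

```console
kubectl get pv/<PersistentVolume name>
```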

Additionally, you have to manually remove the remaining `prometheus-operator-kubelet` service.

```console
kubectl delete service/prometheus-operator-kubelet -n kube-system
```

You can choose to remove all your existing CRDs (ServiceMonitors, PodMonitors, etc.) if you want to. If you would like to keep them, you can set `prometheusOperator.createCustomResource` to `false` to disable CRD provisioning during the fresh installation.
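
As a sketch of how that value could be passed during the fresh install (the `prometheus-community` repository alias and the `monitoring` namespace are assumptions; use whatever matches your setup):

```console
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --set prometheusOperator.createCustomResource=false
```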

3. Remove the current `spec.claimRef` values to change the PV's status from Released to Available.

```console
kubectl patch pv/<PersistentVolume name> --type json -p='[{"op": "remove", "path": "/spec/claimRef"}]' -n monitoring
```

**Note:** To execute the above command, the user must have cluster-wide permissions. Please refer to [Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/).

After these steps, proceed with a fresh **kube-prometheus-stack** installation and make sure the `volumeClaimTemplate` values in the new release's `values.yaml` match the existing PV.

The binding is done by matching the specific amount of storage requested and the access modes.

For example, if you had storage specified like this with **prometheus-operator**:

```yaml
volumeClaimTemplate:
  spec:
    storageClassName: gp2
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 50Gi
```

You have to specify a matching `volumeClaimTemplate` with 50Gi of storage and the `ReadWriteOnce` access mode.
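
In the new chart's `values.yaml`, that template typically sits under `prometheus.prometheusSpec.storageSpec` (a sketch assuming the default values layout of **kube-prometheus-stack**):

```yaml
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
```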

Additionally, you should check the current AZ of your legacy installation's PV and configure the fresh release to use the same AZ. If the pods run in a different AZ than the PV, the release will fail to bind to the existing volume and will create a new PV instead.
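
One way to inspect the PV's zone is via its labels (the exact label key depends on your Kubernetes version and cloud provider; older clusters use `failure-domain.beta.kubernetes.io/zone`, newer ones `topology.kubernetes.io/zone`):

```console
kubectl get pv/<PersistentVolume name> --show-labels
```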

This can be achieved by specifying the zone label through `values.yaml`, e.g. by setting `prometheus.prometheusSpec.nodeSelector` to:

```yaml
nodeSelector:
  failure-domain.beta.kubernetes.io/zone: east-west-1a
```
