This repository has been archived by the owner on Aug 20, 2024. It is now read-only.

Commit 28fc74e: rook.io part cleaned

ruzickap committed Feb 7, 2019 (parent: f26c078)
Showing 1 changed file: README.md (0 additions, 140 deletions)
@@ -440,43 +440,6 @@ Output:
+----+--------------------------------+-------+-------+--------+---------+--------+---------+-----------+
```

Check the health of the Ceph cluster:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph health detail
```

Output:

```shell
HEALTH_OK
```
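Every check in this section resolves the `rook-ceph-tools` pod with the same lookup. A small helper function can factor that out (a sketch only; the `ceph_tools` name is illustrative and not part of the Rook toolbox):

```bash
# Sketch: resolve the rook-ceph-tools pod, then run an arbitrary
# command inside it. The function name is illustrative.
ceph_tools() {
  local pod
  pod=$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" \
    -o jsonpath="{.items[0].metadata.name}")
  kubectl -n rook-ceph exec "${pod}" -- "$@"
}
```

With this in place, the commands below shorten to, for example, `ceph_tools ceph mon dump`.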

Check the Ceph monitor quorum status:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph quorum_status --format json-pretty
```

Dump the monitor map from Ceph:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph mon dump
```

Output:

```shell
dumped monmap epoch 3
epoch 3
fsid 1f4458a6-f574-4e6c-8a25-5a5eef6eb0a7
last_changed 2019-02-04 09:41:39.772112
created 2019-02-04 09:40:08.865074
0: 10.96.25.143:6790/0 mon.c
1: 10.102.39.160:6790/0 mon.a
2: 10.102.49.137:6790/0 mon.b
```

Check the cluster usage status:

```bash
@@ -494,109 +457,6 @@ POOLS:
replicapool 1 0 B 0 40 GiB 0
```

Check the Ceph OSD usage:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph osd df
```

Output:

```shell
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
2 hdd 0.01880 1.00000 19 GiB 4.8 GiB 14 GiB 25.15 1.08 36
1 hdd 0.01880 1.00000 19 GiB 4.4 GiB 15 GiB 22.65 0.98 32
0 hdd 0.01880 1.00000 19 GiB 4.2 GiB 15 GiB 21.87 0.94 32
TOTAL 58 GiB 13 GiB 44 GiB 23.22
MIN/MAX VAR: 0.94/1.08 STDDEV: 1.40
```
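When only the aggregate utilisation is of interest, the `TOTAL` summary line of output like the above can be filtered with awk (a sketch; it assumes the column layout shown, where %USE is the last field of the `TOTAL` line):

```bash
# Sketch: print the total %USE column from `ceph osd df` output,
# assuming the TOTAL summary line layout shown above.
total_use() {
  awk '$1 == "TOTAL" { print $NF }'
}
```

Pipe the `ceph osd df` command above into `total_use` to get just the percentage.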

Check the Ceph monitor status:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph mon stat
```

Output:

```shell
e3: 3 mons at {a=10.102.39.160:6790/0,b=10.102.49.137:6790/0,c=10.96.25.143:6790/0}, election epoch 14, leader 0 c, quorum 0,1,2 c,a,b
```

Check OSD stats:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph osd stat
```

Output:

```shell
3 osds: 3 up, 3 in; epoch: e20
```

Check pool stats:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph osd pool stats
```

Output:

```shell
pool replicapool id 1
nothing is going on
```

Check placement group (PG) stats:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph pg stat
```

Output:

```shell
100 pgs: 100 active+clean; 0 B data, 13 GiB used, 44 GiB / 58 GiB avail
```

List the Ceph pools in detail:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph osd pool ls detail
```

Output:

```shell
pool 1 'replicapool' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 100 pgp_num 100 last_change 20 flags hashpspool stripe_width 0 application rbd
```

Check the CRUSH map view of OSDs:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph osd tree
```

Output:

```shell
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.05640 root default
-4 0.01880 host pruzicka-k8s-istio-demo-node01
2 hdd 0.01880 osd.2 up 1.00000 1.00000
-3 0.01880 host pruzicka-k8s-istio-demo-node02
1 hdd 0.01880 osd.1 up 1.00000 1.00000
-2 0.01880 host pruzicka-k8s-istio-demo-node03
0 hdd 0.01880 osd.0 up 1.00000 1.00000
```

List the cluster authentication keys:

```bash
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph auth list
```

## Install ElasticSearch, Kibana, Fluentbit

Add [ElasticSearch operator](https://github.com/upmc-enterprises/elasticsearch-operator) to Helm:
