Add metric for persistent volume size #555

Closed
EmilyM1 opened this issue Oct 10, 2018 · 13 comments · Fixed by #674

Comments

@EmilyM1

EmilyM1 commented Oct 10, 2018

Is this a BUG REPORT or FEATURE REQUEST?:
Feature

/kind feature

What happened:

What you expected to happen:
It would be helpful to see the size of persistent volumes.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • Kube-state-metrics image version
@EmilyM1 EmilyM1 changed the title from "Add metric for persisent volume size" to "Add metric for persistent volume size" Oct 10, 2018
@EmilyM1 EmilyM1 closed this as completed Oct 10, 2018
@EmilyM1 EmilyM1 reopened this Oct 11, 2018
@brancz
Member

brancz commented Oct 16, 2018

PersistentVolume sizes are specific to the provider. The kubelet exposes metrics about PersistentVolumeClaims mounted on the particular node; maybe that's what you're looking for?

@chancez
Member

chancez commented Oct 18, 2018

Capacity is a field that isn't provider-specific. It's how Kubernetes knows whether a PV (created manually or not) is big enough to fulfill a PVC. For example:

        {
            "apiVersion": "v1",
            "kind": "PersistentVolume",
            "metadata": {
                "annotations": {
                    "kubernetes.io/createdby": "gce-pd-dynamic-provisioner",
                    "pv.kubernetes.io/bound-by-controller": "yes",
                    "pv.kubernetes.io/provisioned-by": "kubernetes.io/gce-pd"
                },
                "creationTimestamp": "2018-10-18T00:08:49Z",
                "finalizers": [
                    "kubernetes.io/pv-protection"
                ],
                "labels": {
                    "failure-domain.beta.kubernetes.io/region": "us-west1",
                    "failure-domain.beta.kubernetes.io/zone": "us-west1-a"
                },
                "name": "pvc-fbaf9e97-d269-11e8-b137-42010a8a0005",
                "resourceVersion": "680104",
                "selfLink": "/api/v1/persistentvolumes/pvc-fbaf9e97-d269-11e8-b137-42010a8a0005",
                "uid": "fd43b20d-d269-11e8-b137-42010a8a0005"
            },
            "spec": {
                "accessModes": [
                    "ReadWriteOnce"
                ],
                "capacity": {
                    "storage": "5Gi"
                },
                "claimRef": {
                    "apiVersion": "v1",
                    "kind": "PersistentVolumeClaim",
                    "name": "hdfs-namenode-data-hdfs-namenode-0",
                    "namespace": "metering-ci2-integration-master",
                    "resourceVersion": "680010",
                    "uid": "fbaf9e97-d269-11e8-b137-42010a8a0005"
                },
                "gcePersistentDisk": {
                    "fsType": "ext4",
                    "pdName": "kubernetes-dynamic-pvc-fbaf9e97-d269-11e8-b137-42010a8a0005"
                },
                "persistentVolumeReclaimPolicy": "Delete",
                "storageClassName": "standard"
            },
            "status": {
                "phase": "Bound"
            }
        }

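For reference, here is a minimal client-go sketch of where that value lives programmatically (assuming a recent client-go release; the printed name kube_persistentvolume_capacity_bytes is illustrative, not necessarily the metric name the eventual fix used). It lists PVs and converts spec.capacity["storage"] into bytes:

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the local kubeconfig (assumes out-of-cluster use).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // List every PersistentVolume and print its declared capacity in bytes.
        pvs, err := client.CoreV1().PersistentVolumes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, pv := range pvs.Items {
            if qty, ok := pv.Spec.Capacity[v1.ResourceStorage]; ok {
                // Quantity.Value() turns "5Gi" and friends into an int64 byte count.
                fmt.Printf("kube_persistentvolume_capacity_bytes{persistentvolume=%q} %d\n", pv.Name, qty.Value())
            }
        }
    }
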
@brancz
Member

brancz commented Oct 18, 2018

Sorry you’re absolutely right. I was thinking about the usage of the volume. This seems reasonable.

@yeplaa

yeplaa commented Nov 6, 2018

Kubernetes exposes metrics regarding volume size:

kubelet_volume_stats_available_bytes
kubelet_volume_stats_capacity_bytes
kubelet_volume_stats_inodes
kubelet_volume_stats_inodes_free
kubelet_volume_stats_inodes_used
kubelet_volume_stats_used_bytes

@brancz Would it be possible for kube-state-metrics to retrieve these and expose them?

@brancz
Member

brancz commented Nov 6, 2018

kube-state-metrics strictly converts API objects into metrics; as long as this is not part of a Kubernetes API object, kube-state-metrics will not expose it.

@chancez
Member

chancez commented Nov 6, 2018

@brancz This is exposed by the kube API. PVs have a storage capacity, just like PVCs have a requested capacity. This isn't about usage.

@brancz
Member

brancz commented Nov 6, 2018

@chancez I was only referring to @yeplaa's comment. Anything contained in an API object is perfectly valid to be added.

@chancez
Member

chancez commented Nov 6, 2018

@brancz Ah, I see. I agree. I'll end up using the above metrics in the future, but that's another story. (I don't think those are collected yet.)

@yeplaa

yeplaa commented Nov 7, 2018

@brancz @chancez Just to understand: do you intend to integrate these metrics into kube-state-metrics? Is there interest on your side?

@brancz
Member

brancz commented Nov 7, 2018

Capacity and request are valid, as they come directly from API objects (see the sketch after this list for where the request field lives), but none of these are:

kubelet_volume_stats_available_bytes
kubelet_volume_stats_inodes
kubelet_volume_stats_inodes_free
kubelet_volume_stats_inodes_used
kubelet_volume_stats_used_bytes
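
To make the distinction concrete, a minimal client-go sketch (same assumptions as the earlier sketch) that reads spec.resources.requests["storage"] from each PVC, i.e. the API-object field behind a request metric, as opposed to the kubelet's runtime stats listed above:

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the local kubeconfig (assumes out-of-cluster use).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // An empty namespace argument lists PVCs across all namespaces.
        pvcs, err := client.CoreV1().PersistentVolumeClaims("").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, pvc := range pvcs.Items {
            // spec.resources.requests["storage"] is the requested capacity,
            // a declared value in the API object, not a usage measurement.
            if qty, ok := pvc.Spec.Resources.Requests[v1.ResourceStorage]; ok {
                fmt.Printf("%s/%s requests %d bytes\n", pvc.Namespace, pvc.Name, qty.Value())
            }
        }
    }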

@yeplaa

yeplaa commented Nov 7, 2018

OK, thanks for your reply @brancz.
Actually, with kube-state-metrics, the kube_persistentvolumeclaim_resource_requests_storage_bytes metric already indicates the requested storage capacity. The follow-up would be the possibility of having the stats_used or stats_available of a PVC.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label (denotes an issue or PR that has remained open with no activity and has become stale) Feb 5, 2019
@chancez
Member

chancez commented Feb 5, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Feb 5, 2019