Remove usage of deprecated v1beta API endpoints (#6543)
rawkode authored and danielnelson committed Oct 23, 2019
1 parent 988e036 commit 47a708e
Showing 11 changed files with 107 additions and 96 deletions.
6 changes: 2 additions & 4 deletions Gopkg.lock

Some generated files are not rendered by default.

99 changes: 53 additions & 46 deletions plugins/inputs/kube_inventory/README.md
@@ -1,17 +1,25 @@
# Kube_Inventory Plugin

This plugin generates metrics derived from the state of the following Kubernetes resources:
-  - daemonsets
-  - deployments
-  - nodes
-  - persistentvolumes
-  - persistentvolumeclaims
-  - pods (containers)
-  - statefulsets
+
+- daemonsets
+- deployments
+- nodes
+- persistentvolumes
+- persistentvolumeclaims
+- pods (containers)
+- statefulsets

+Kubernetes is a fast-moving project, with a new minor release every 3 months. As
+such, we will aim to maintain support only for versions that are supported by
+the major cloud providers; this is roughly 4 releases / 2 years.
+
+**This plugin supports Kubernetes 1.11 and later.**

#### Series Cardinality Warning

This plugin may produce a high number of series which, when not controlled
for, will cause high load on your database. Use the following techniques to
avoid cardinality issues:

- Use [metric filtering][] options to exclude unneeded measurements and tags.
@@ -61,6 +69,7 @@ avoid cardinality issues:
#### Kubernetes Permissions

If using [RBAC authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), you will need to create a cluster role to list "persistentvolumes" and "nodes". You will then need to make an [aggregated ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) that will eventually be bound to a user or group.

```yaml
---
kind: ClusterRole
@@ -70,9 +79,9 @@ metadata:
  labels:
    rbac.authorization.k8s.io/aggregate-view-telegraf: "true"
rules:
-- apiGroups: [""]
-  resources: ["persistentvolumes","nodes"]
-  verbs: ["get","list"]
+  - apiGroups: [""]
+    resources: ["persistentvolumes", "nodes"]
+    verbs: ["get", "list"]

---
kind: ClusterRole
@@ -81,14 +90,15 @@ metadata:
  name: influx:telegraf
aggregationRule:
  clusterRoleSelectors:
-  - matchLabels:
-      rbac.authorization.k8s.io/aggregate-view-telegraf: "true"
-  - matchLabels:
-      rbac.authorization.k8s.io/aggregate-to-view: "true"
+    - matchLabels:
+        rbac.authorization.k8s.io/aggregate-view-telegraf: "true"
+    - matchLabels:
+        rbac.authorization.k8s.io/aggregate-to-view: "true"
rules: [] # Rules are automatically filled in by the controller manager.
```
Bind the newly created aggregated ClusterRole with the following config file, updating the subjects as needed.
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
@@ -100,15 +110,14 @@ roleRef:
  kind: ClusterRole
  name: influx:telegraf
subjects:
-- kind: ServiceAccount
-  name: telegraf
-  namespace: default
+  - kind: ServiceAccount
+    name: telegraf
+    namespace: default
```
### Metrics:
-+ kubernetes_daemonset
+- kubernetes_daemonset
  - tags:
    - daemonset_name
    - namespace
@@ -122,7 +131,7 @@ subjects:
    - number_unavailable
    - updated_number_scheduled
-- kubernetes_deployment
+* kubernetes_deployment
  - tags:
    - deployment_name
    - namespace
@@ -131,22 +140,22 @@ subjects:
    - replicas_unavailable
    - created
-+ kubernetes_endpoints
+- kubernetes_endpoints
  - tags:
    - endpoint_name
    - namespace
    - hostname
    - node_name
    - port_name
    - port_protocol
-    - kind (*varies)
+    - kind (\*varies)
  - fields:
    - created
    - generation
    - ready
    - port
-- kubernetes_ingress
+* kubernetes_ingress
  - tags:
    - ingress_name
    - namespace
@@ -161,7 +170,7 @@ subjects:
    - backend_service_port
    - tls
-+ kubernetes_node
+- kubernetes_node
  - tags:
    - node_name
  - fields:
@@ -172,15 +181,15 @@ subjects:
    - allocatable_memory_bytes
    - allocatable_pods
-- kubernetes_persistentvolume
+* kubernetes_persistentvolume
  - tags:
    - pv_name
    - phase
    - storageclass
  - fields:
    - phase_type (int, [see below](#pv-phase_type))
-+ kubernetes_persistentvolumeclaim
+- kubernetes_persistentvolumeclaim
  - tags:
    - pvc_name
    - namespace
@@ -189,7 +198,7 @@ subjects:
  - fields:
    - phase_type (int, [see below](#pvc-phase_type))
-- kubernetes_pod_container
+* kubernetes_pod_container
  - tags:
    - container_name
    - namespace
@@ -204,7 +213,7 @@ subjects:
    - resource_limits_cpu_units
    - resource_limits_memory_bytes
-+ kubernetes_service
+- kubernetes_service
  - tags:
    - service_name
    - namespace
@@ -218,7 +227,7 @@ subjects:
    - port
    - target_port
-- kubernetes_statefulset
+* kubernetes_statefulset
  - tags:
    - statefulset_name
    - namespace
@@ -236,26 +245,25 @@ subjects:

The persistentvolume "phase" is saved in the `phase` tag, with a correlated numeric field called `phase_type` whose value corresponds to that tag.

-|Tag value |Corresponding field value|
------------|-------------------------|
-|bound     | 0                       |
-|failed    | 1                       |
-|pending   | 2                       |
-|released  | 3                       |
-|available | 4                       |
-|unknown   | 5                       |
+| Tag value | Corresponding field value |
+| --------- | ------------------------- |
+| bound     | 0                         |
+| failed    | 1                         |
+| pending   | 2                         |
+| released  | 3                         |
+| available | 4                         |
+| unknown   | 5                         |

#### pvc `phase_type`

The persistentvolumeclaim "phase" is saved in the `phase` tag, with a correlated numeric field called `phase_type` whose value corresponds to that tag.

-|Tag value |Corresponding field value|
------------|-------------------------|
-|bound     | 0                       |
-|lost      | 1                       |
-|pending   | 2                       |
-|unknown   | 3                       |
+| Tag value | Corresponding field value |
+| --------- | ------------------------- |
+| bound     | 0                         |
+| lost      | 1                         |
+| pending   | 2                         |
+| unknown   | 3                         |
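
Both tables describe a fixed string-to-integer mapping. A hypothetical Go sketch of the lookup for the persistentvolume case (the plugin's real code may differ; `pvPhaseType` is an illustrative name):

```go
package kube_inventory

import "strings"

// pvPhaseType mirrors the persistentvolume table above: each phase string
// maps to a stable integer so it can be stored as a numeric field.
// Unrecognized phases fall through to the "unknown" value.
func pvPhaseType(phase string) int {
	switch strings.ToLower(phase) {
	case "bound":
		return 0
	case "failed":
		return 1
	case "pending":
		return 2
	case "released":
		return 3
	case "available":
		return 4
	default:
		return 5
	}
}
```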

### Example Output:

@@ -271,7 +279,6 @@ kubernetes_pod_container,container_name=telegraf,namespace=default,node_name=ip-
kubernetes_statefulset,namespace=default,statefulset_name=etcd replicas_updated=3i,spec_replicas=3i,observed_generation=1i,created=1544101669000000000i,generation=1i,replicas=3i,replicas_current=3i,replicas_ready=3i 1547597616000000000
```
[metric filtering]: https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#metric-filtering
[retention policy]: https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/
[max-series-per-database]: https://docs.influxdata.com/influxdb/latest/administration/config/#max-series-per-database-1000000
17 changes: 8 additions & 9 deletions plugins/inputs/kube_inventory/client.go
@@ -5,9 +5,8 @@ import (
"time"

"github.com/ericchiang/k8s"
"github.com/ericchiang/k8s/apis/apps/v1beta1"
"github.com/ericchiang/k8s/apis/apps/v1beta2"
"github.com/ericchiang/k8s/apis/core/v1"
v1APPS "github.com/ericchiang/k8s/apis/apps/v1"
v1 "github.com/ericchiang/k8s/apis/core/v1"
v1beta1EXT "github.com/ericchiang/k8s/apis/extensions/v1beta1"

"github.com/influxdata/telegraf/internal/tls"
@@ -48,15 +47,15 @@ func newClient(baseURL, namespace, bearerToken string, timeout time.Duration, tl
	}, nil
}

-func (c *client) getDaemonSets(ctx context.Context) (*v1beta2.DaemonSetList, error) {
-	list := new(v1beta2.DaemonSetList)
+func (c *client) getDaemonSets(ctx context.Context) (*v1APPS.DaemonSetList, error) {
+	list := new(v1APPS.DaemonSetList)
	ctx, cancel := context.WithTimeout(ctx, c.timeout)
	defer cancel()
	return list, c.List(ctx, c.namespace, list)
}

-func (c *client) getDeployments(ctx context.Context) (*v1beta1.DeploymentList, error) {
-	list := &v1beta1.DeploymentList{}
+func (c *client) getDeployments(ctx context.Context) (*v1APPS.DeploymentList, error) {
+	list := &v1APPS.DeploymentList{}
	ctx, cancel := context.WithTimeout(ctx, c.timeout)
	defer cancel()
	return list, c.List(ctx, c.namespace, list)
@@ -111,8 +110,8 @@ func (c *client) getServices(ctx context.Context) (*v1.ServiceList, error) {
	return list, c.List(ctx, c.namespace, list)
}

-func (c *client) getStatefulSets(ctx context.Context) (*v1beta1.StatefulSetList, error) {
-	list := new(v1beta1.StatefulSetList)
+func (c *client) getStatefulSets(ctx context.Context) (*v1APPS.StatefulSetList, error) {
+	list := new(v1APPS.StatefulSetList)
	ctx, cancel := context.WithTimeout(ctx, c.timeout)
	defer cancel()
	return list, c.List(ctx, c.namespace, list)
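
For reference, a minimal, self-contained sketch of the list pattern these methods use, assuming an in-cluster service account with the ClusterRole from the README diff above (the `main` program and the `"default"` namespace are illustrative, not part of the commit):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ericchiang/k8s"
	appsv1 "github.com/ericchiang/k8s/apis/apps/v1"
)

func main() {
	// NewInClusterClient reads the service account token mounted into the pod.
	client, err := k8s.NewInClusterClient()
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// DaemonSets are now read via apps/v1, matching the updated getDaemonSets.
	var daemonSets appsv1.DaemonSetList
	if err := client.List(ctx, "default", &daemonSets); err != nil {
		log.Fatal(err)
	}
	for _, ds := range daemonSets.Items {
		fmt.Println(ds.Metadata.GetNamespace(), ds.Metadata.GetName())
	}
}
```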
4 changes: 2 additions & 2 deletions plugins/inputs/kube_inventory/daemonset.go
@@ -4,7 +4,7 @@ import (
"context"
"time"

"github.com/ericchiang/k8s/apis/apps/v1beta2"
"github.com/ericchiang/k8s/apis/apps/v1"

"github.com/influxdata/telegraf"
)
@@ -23,7 +23,7 @@ func collectDaemonSets(ctx context.Context, acc telegraf.Accumulator, ki *Kubern
	}
}

-func (ki *KubernetesInventory) gatherDaemonSet(d v1beta2.DaemonSet, acc telegraf.Accumulator) error {
+func (ki *KubernetesInventory) gatherDaemonSet(d v1.DaemonSet, acc telegraf.Accumulator) error {
	fields := map[string]interface{}{
		"generation":               d.Metadata.GetGeneration(),
		"current_number_scheduled": d.Status.GetCurrentNumberScheduled(),
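
For context, fields maps like the one being built above are handed to Telegraf's accumulator together with identifying tags. A sketch under that assumption (`emitDaemonSet` is a hypothetical helper, not the commit's code; the tag names come from the README's metrics list):

```go
package kube_inventory

import (
	v1 "github.com/ericchiang/k8s/apis/apps/v1"

	"github.com/influxdata/telegraf"
)

// emitDaemonSet shows how a gathered fields map reaches Telegraf: AddFields
// attaches the measurement name and the identifying tags to the fields.
func emitDaemonSet(d v1.DaemonSet, fields map[string]interface{}, acc telegraf.Accumulator) {
	tags := map[string]string{
		"daemonset_name": d.Metadata.GetName(),
		"namespace":      d.Metadata.GetNamespace(),
	}
	acc.AddFields("kubernetes_daemonset", fields, tags)
}
```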
12 changes: 6 additions & 6 deletions plugins/inputs/kube_inventory/daemonset_test.go
@@ -4,7 +4,7 @@ import (
"testing"
"time"

"github.com/ericchiang/k8s/apis/apps/v1beta2"
"github.com/ericchiang/k8s/apis/apps/v1"
metav1 "github.com/ericchiang/k8s/apis/meta/v1"

"github.com/influxdata/telegraf/testutil"
@@ -24,7 +24,7 @@ func TestDaemonSet(t *testing.T) {
name: "no daemon set",
handler: &mockHandler{
responseMap: map[string]interface{}{
"/daemonsets/": &v1beta2.DaemonSetList{},
"/daemonsets/": &v1.DaemonSetList{},
},
},
hasError: false,
@@ -33,10 +33,10 @@ func TestDaemonSet(t *testing.T) {
name: "collect daemonsets",
handler: &mockHandler{
responseMap: map[string]interface{}{
"/daemonsets/": &v1beta2.DaemonSetList{
Items: []*v1beta2.DaemonSet{
"/daemonsets/": &v1.DaemonSetList{
Items: []*v1.DaemonSet{
{
Status: &v1beta2.DaemonSetStatus{
Status: &v1.DaemonSetStatus{
CurrentNumberScheduled: toInt32Ptr(3),
DesiredNumberScheduled: toInt32Ptr(5),
NumberAvailable: toInt32Ptr(2),
@@ -90,7 +90,7 @@ func TestDaemonSet(t *testing.T) {
			client: cli,
		}
		acc := new(testutil.Accumulator)
-		for _, dset := range ((v.handler.responseMap["/daemonsets/"]).(*v1beta2.DaemonSetList)).Items {
+		for _, dset := range ((v.handler.responseMap["/daemonsets/"]).(*v1.DaemonSetList)).Items {
			err := ks.gatherDaemonSet(*dset, acc)
			if err != nil {
				t.Errorf("Failed to gather daemonset - %s", err.Error())
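
The test relies on a small `toInt32Ptr` helper defined elsewhere in the package; its plausible shape (not shown in this diff):

```go
package kube_inventory

// toInt32Ptr converts a literal to a pointer, since the generated k8s
// types use *int32 fields rather than plain int32 values.
func toInt32Ptr(i int32) *int32 {
	return &i
}
```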
5 changes: 2 additions & 3 deletions plugins/inputs/kube_inventory/deployment.go
@@ -4,8 +4,7 @@ import (
"context"
"time"

"github.com/ericchiang/k8s/apis/apps/v1beta1"

v1 "github.com/ericchiang/k8s/apis/apps/v1"
"github.com/influxdata/telegraf"
)

@@ -23,7 +22,7 @@ func collectDeployments(ctx context.Context, acc telegraf.Accumulator, ki *Kuber
	}
}

-func (ki *KubernetesInventory) gatherDeployment(d v1beta1.Deployment, acc telegraf.Accumulator) error {
+func (ki *KubernetesInventory) gatherDeployment(d v1.Deployment, acc telegraf.Accumulator) error {
	fields := map[string]interface{}{
		"replicas_available":   d.Status.GetAvailableReplicas(),
		"replicas_unavailable": d.Status.GetUnavailableReplicas(),
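
One reason the gather code survives the type switch with only a signature change: the generated types in `ericchiang/k8s` expose nil-safe getters, so calls like `d.Status.GetAvailableReplicas()` need no guards. An illustrative reconstruction of that getter shape (not taken from the commit):

```go
package apps

// Illustrative protobuf-style getter as generated for these types: fields
// are pointers, and GetX returns the zero value when the receiver or the
// field is nil, so calling code needs no nil checks.
type DeploymentStatus struct {
	AvailableReplicas *int32
}

func (m *DeploymentStatus) GetAvailableReplicas() int32 {
	if m != nil && m.AvailableReplicas != nil {
		return *m.AvailableReplicas
	}
	return 0
}
```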