Add agent fleet enrolment k8s manifest #26566

Merged
Changes from 3 commits

deploy/kubernetes/Makefile (2 changes: 1 addition & 1 deletion)
@@ -1,4 +1,4 @@
-ALL=filebeat metricbeat auditbeat heartbeat elastic-agent-standalone
+ALL=filebeat metricbeat auditbeat heartbeat elastic-agent-standalone elastic-agent-managed

Member:
@ruflin do you think naming is ok here? Should we call it just elastic-agent maybe? Or elastic-agent-fleet? elastic-agent-managed is fine for me too.

Member:
No strong preference. managed looks fine as we also have standalone.

BEAT_VERSION=$(shell head -n 1 ../../libbeat/docs/version.asciidoc | cut -c 17- )

.PHONY: all $(ALL)

deploy/kubernetes/elastic-agent-managed-kubernetes.yaml (203 changes: 203 additions & 0 deletions)
@@ -0,0 +1,203 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elastic-agent
  namespace: kube-system
  labels:
    app: elastic-agent
spec:
  selector:
    matchLabels:
      app: elastic-agent
  template:
    metadata:
      labels:
        app: elastic-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      serviceAccountName: elastic-agent
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: elastic-agent
        image: docker.elastic.co/beats/elastic-agent:8.0.0
        env:
        - name: FLEET_ENROLL
          value: "1"
        - name: FLEET_INSECURE

Member:
I'm worried about this FLEET_INSECURE option. Do we have a way to go with a secure approach? @mtojek any ideas?

Contributor Author:
If Fleet Server is deployed in Elastic Cloud, or in a cloud with a trusted CA, then FLEET_INSECURE can be false, which is the default value.
If it is deployed in another cluster with TLS enabled and self-signed certificates, then FLEET_CA has to be used and the CA needs to be mounted.
If TLS is not enabled (a fine option for a private cloud), then FLEET_INSECURE has to be set to 1.
In our case, as this is a proposed manifest, I believe we should leave the default value and add a comment on top of the variable explaining its purpose.
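A rough sketch of the self-signed-certificate case described above, assuming a hypothetical Secret named fleet-server-ca and an arbitrary mount path (neither is part of this PR):

        # Hypothetical alternative to FLEET_INSECURE when Fleet Server uses a
        # self-signed certificate: point FLEET_CA at a CA bundle mounted from a Secret.
        env:
        - name: FLEET_CA
          value: "/mnt/fleet-ca/ca.crt"    # path where the CA certificate is mounted (illustrative)
        volumeMounts:
        - name: fleet-ca
          mountPath: /mnt/fleet-ca
          readOnly: true
      volumes:
      - name: fleet-ca
        secret:
          secretName: fleet-server-ca      # assumed Secret holding the CA certificate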

Member:
Thanks for clarifying, I think we are ok for now and we can iterate on it later if we see requests for that.

          value: "1"
        - name: FLEET_URL

Member:
Can we add a description of this var in a comment, and maybe a sample value (in the comment)? I expect users will have difficulty recognising what value they should put here.

Contributor Author:
Description added. The odd part about the URL is that the port is not implied by the protocol and must be appended at the end.
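As an illustration of that point (the host name below is made up; 8220 is the default Fleet Server port, so adjust both to the actual deployment):

        - name: FLEET_URL
          value: "https://fleet-server.example.com:8220"   # the scheme alone does not imply the port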

          value: "fleet_server_ip:port"
        - name: KIBANA_HOST
          value: "http://kibana:5601"
        - name: KIBANA_FLEET_USERNAME
          value: "elastic"
        - name: KIBANA_FLEET_PASSWORD
          value: ""
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: kube-system
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: kube-system
roleRef:
  kind: Role
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: elastic-agent-kubeadm-config
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: kube-system
roleRef:
  kind: Role
  name: elastic-agent-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
  labels:
    k8s-app: elastic-agent
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - events
  - pods
  - services
  - configmaps
  verbs: ["get", "list", "watch"]
# Enable this rule only if planning to use the kubernetes_secrets provider
#- apiGroups: [""]
#  resources:
#  - secrets
#  verbs: ["get"]
- apiGroups: ["extensions"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  - deployments
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
# required for apiserver
- nonResourceURLs:
  - "/metrics"
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elastic-agent
  # should be the namespace where elastic-agent is running
  namespace: kube-system
  labels:
    k8s-app: elastic-agent
rules:
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elastic-agent-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: elastic-agent
rules:
- apiGroups: [""]
  resources:
  - configmaps
  resourceNames:
  - kubeadm-config
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: kube-system
  labels:
    k8s-app: elastic-agent
---
@@ -0,0 +1,80 @@
apiVersion: apps/v1

Contributor:
  1. This might be my ignorance, but why do we keep them in two directories (deploy/kubernetes/elastic-agent-managed vs deploy/kubernetes)?
    I'm asking because I'm most likely interested in replacing https://github.com/elastic/elastic-package/blob/master/internal/install/static_kubernetes_elastic_agent_yml.go with something more sophisticated and as small as possible.

  2. Is it only an example, or is it tested by CI somehow?

Contributor Author (@MichaelKatsoulis, Jun 29, 2021):
  1. This is a good question. In deploy/kubernetes/elastic-agent-managed the manifests are split by kind, while in deploy/kubernetes it is just one manifest for everything. I followed the pattern of the rest. Maybe this gives users more flexibility in how to deploy the manifests and which ones to deploy. @ChrsMark do you know if there is another reason?

  2. I have tested it locally with the snapshot image version and the Elastic Stack in Elastic Cloud.

Member:
  1. Under deploy/kubernetes/elastic-agent-managed the manifests are split per kind, as @MichaelKatsoulis explained. Then deploy/kubernetes/elastic-agent-managed.yml is the final complete manifest based on them, generated when you run make all (see the Makefile in the same directory). To be honest I don't really know the reason for this, but we have kept doing it like this so far.
  2. CI only tests that these manifests are valid, as far as I know. See the lint stage at

kind: DaemonSet
metadata:
  name: elastic-agent
  namespace: kube-system
  labels:
    app: elastic-agent
spec:
  selector:
    matchLabels:
      app: elastic-agent
  template:
    metadata:
      labels:
        app: elastic-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      serviceAccountName: elastic-agent
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: elastic-agent
        image: docker.elastic.co/beats/elastic-agent:8.0.0
        env:
        - name: FLEET_ENROLL
          value: "1"
        - name: FLEET_INSECURE
          value: "1"
        - name: FLEET_URL
          value: "fleet_server_ip:port"
        - name: KIBANA_HOST
          value: "http://kibana:5601"
        - name: KIBANA_FLEET_USERNAME
          value: "elastic"
        - name: KIBANA_FLEET_PASSWORD
          value: ""
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
@@ -0,0 +1,40 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: kube-system
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: kube-system
roleRef:
  kind: Role
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: elastic-agent-kubeadm-config
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: kube-system
roleRef:
  kind: Role
  name: elastic-agent-kubeadm-config
  apiGroup: rbac.authorization.k8s.io