Commit

Merge pull request #109 from kanisterio/sync

Start consuming kando; Enhance kanctl
Ilya Kislenko authored Jul 20, 2018
2 parents 4f7f1d1 + b986983 commit c15d722
Showing 32 changed files with 2,252 additions and 236 deletions.
6 changes: 6 additions & 0 deletions .goreleaser.yaml
@@ -3,6 +3,9 @@ project_name: kanister
builds:
- main: cmd/kanctl/main.go
binary: kanctl
ldflags:
- -s -w -X github.com/kanisterio/kanister/pkg/version.VERSION={{.Version}} -X github.com/kanisterio/kanister/pkg/version.GIT_COMMIT={{.Commit}} -X github.com/kanisterio/kanister/pkg/version.BUILD_DATE={{.Date}}
- ./usemsan=-msan
goos:
- windows
- darwin
@@ -11,6 +14,9 @@ builds:
- amd64
- main: cmd/kando/main.go
binary: kando
ldflags:
- -s -w -X github.com/kanisterio/kanister/pkg/version.VERSION={{.Version}} -X github.com/kanisterio/kanister/pkg/version.GIT_COMMIT={{.Commit}} -X github.com/kanisterio/kanister/pkg/version.BUILD_DATE={{.Date}}
- ./usemsan=-msan
goos:
- windows
- darwin
2 changes: 1 addition & 1 deletion build/build.sh
@@ -37,5 +37,5 @@ export GOARCH="${ARCH}"

go install -v \
-installsuffix "static" \
- -ldflags "-X ${PKG}/pkg.VERSION=${VERSION}" \
+ -ldflags "-X ${PKG}/pkg/version.VERSION=${VERSION}" \
./...
22 changes: 22 additions & 0 deletions docs/templates.rst
@@ -14,6 +14,7 @@ The TemplateParam struct is defined as:
type TemplateParams struct {
StatefulSet StatefulSetParams
Deployment DeploymentParams
PVC PVCParams
ArtifactsIn map[string]crv1alpha1.Artifact // A Kanister Artifact
ArtifactsOut map[string]crv1alpha1.Artifact
Profile *Profile
@@ -108,6 +109,27 @@ For example, to access the Name of a Deployment use:
"{{ index .Deployment.Name }}"
PVC
---

PVCParams includes the name and namespace of the persistent volume claim
that is being acted on.

.. code-block:: go
  :linenos:

  // PVCParams are params for a PVC
  type PVCParams struct {
    Name      string
    Namespace string
  }

For example, to access the Name of a persistent volume claim, use:

.. code-block:: go

  "{{ .PVC.Name }}"

Artifacts
=========

21 changes: 21 additions & 0 deletions examples/helm/kanister/kanister-elasticsearch/Chart.yaml
@@ -0,0 +1,21 @@
name: elasticsearch
home: https://www.elastic.co/products/elasticsearch
version: 1.3.0
appVersion: 6.3.1
description: Flexible and powerful open source, distributed real-time search and analytics engine.
icon: https://static-www.elastic.co/assets/blteb1c97719574938d/logo-elastic-elasticsearch-lt.svg
sources:
- https://www.elastic.co/products/elasticsearch
- https://github.com/jetstack/elasticsearch-pet
- https://github.com/giantswarm/kubernetes-elastic-stack
- https://github.com/GoogleCloudPlatform/elasticsearch-docker
- https://github.com/clockworksoul/helm-elasticsearch
- https://github.com/pires/kubernetes-elasticsearch-cluster
maintainers:
- name: Tom Manville
email: tom@kasten.io
- name: Ilya Kislenko
email: ilya@kasten.io
- name: Supriya Kharade
email: supriya@kasten.io
8 changes: 8 additions & 0 deletions examples/helm/kanister/kanister-elasticsearch/OWNERS
@@ -0,0 +1,8 @@
approvers:
- tdmanv
- depohmel
- SupriyaKasten
reviewers:
- tdmanv
- depohmel
- SupriyaKasten
188 changes: 188 additions & 0 deletions examples/helm/kanister/kanister-elasticsearch/README.md
@@ -0,0 +1,188 @@
# Elasticsearch Helm Chart

This chart uses a standard Docker image of Elasticsearch (docker.elastic.co/elasticsearch/elasticsearch-oss) and uses a service pointing to the master's transport port for service discovery.
Elasticsearch does not communicate with the Kubernetes API, hence no need for RBAC permissions.

## Warning for previous users
If you are currently using an earlier version of this chart, you will need to redeploy your Elasticsearch clusters. The discovery method used here is incompatible with using RBAC.
If you are upgrading to Elasticsearch 6 from the 5.5 version previously used in this chart, please note that your cluster needs a full cluster restart.
The simplest way to do that is to delete the installation (keep the PVs) and install this chart again with the new version.
If you want to avoid that, upgrade to Elasticsearch 5.6 first before moving on to Elasticsearch 6.0.
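
A rough sketch of that delete-and-reinstall path, assuming Helm 2, a release named `my-release`, and that the retained PVCs are re-bound because the new release reuses the same release name:

```bash
# Remove the release; the PVCs created by the StatefulSets are not cascaded.
helm delete --purge my-release

# The data volumes should still be listed here.
kubectl get pvc -l release=my-release

# Reinstall the chart at the new Elasticsearch version under the same release name.
helm install --name my-release incubator/elasticsearch
```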

## Prerequisites Details

* Kubernetes 1.6+
* PV dynamic provisioning support on the underlying infrastructure

## StatefulSets Details
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

## StatefulSets Caveats
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations

## Todo

* Implement TLS/Auth/Security
* Smarter upscaling/downscaling
* Solution for memory locking

## Chart Details
This chart will do the following:

* Implement a dynamically scalable Elasticsearch cluster using Kubernetes StatefulSets/Deployments
* Multi-role deployment: master, client (coordinating) and data nodes
* StatefulSet supports scaling down without degrading the cluster

## Installing the Chart

To install the chart with the release name `my-release`:

```bash
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install --name my-release incubator/elasticsearch
```

## Deleting the Charts

Delete the Helm deployment as normal:

```bash
$ helm delete my-release
```

Deletion of the StatefulSet doesn't cascade to deleting associated PVCs. To delete them:

```bash
$ kubectl delete pvc -l release=my-release,component=data
```

## Configuration

The following table lists the configurable parameters of the elasticsearch chart and their default values.

| Parameter | Description | Default |
| ------------------------------------ | ------------------------------------------------------------------- | ------------------------------------ |
| `appVersion` | Application Version (Elasticsearch) | `6.3.1` |
| `image.repository` | Container image name | `docker.elastic.co/elasticsearch/elasticsearch-oss` |
| `image.tag` | Container image tag | `6.3.1` |
| `image.pullPolicy` | Container pull policy | `Always` |
| `cluster.name` | Cluster name | `elasticsearch` |
| `cluster.kubernetesDomain` | Kubernetes cluster domain name | `cluster.local` |
| `cluster.xpackEnable` | Writes the X-Pack configuration options to the configuration file | `false` |
| `cluster.config` | Additional cluster config appended | `{}` |
| `cluster.env` | Cluster environment variables | `{}` |
| `client.name` | Client component name | `client` |
| `client.replicas` | Client node replicas (deployment) | `2` |
| `client.resources` | Client node resources requests & limits | `{} - cpu limit must be an integer` |
| `client.priorityClassName` | Client priorityClass | `nil` |
| `client.heapSize` | Client node heap size | `512m` |
| `client.podAnnotations` | Client Deployment annotations | `{}` |
| `client.nodeSelector` | Node labels for client pod assignment | `{}` |
| `client.tolerations` | Client tolerations | `{}` |
| `client.serviceAnnotations` | Client Service annotations | `{}` |
| `client.serviceType` | Client service type | `ClusterIP` |
| `master.exposeHttp` | Expose http port 9200 on master Pods for monitoring, etc | `false` |
| `master.name` | Master component name | `master` |
| `master.replicas` | Master node replicas (deployment) | `2` |
| `master.resources` | Master node resources requests & limits | `{} - cpu limit must be an integer` |
| `master.priorityClassName` | Master priorityClass | `nil` |
| `master.podAnnotations` | Master Deployment annotations | `{}` |
| `master.nodeSelector` | Node labels for master pod assignment | `{}` |
| `master.tolerations` | Master tolerations | `{}` |
| `master.heapSize` | Master node heap size | `512m` |
| `master.name` | Master component name | `master` |
| `master.persistence.enabled` | Master persistent enabled/disabled | `true` |
| `master.persistence.name` | Master statefulset PVC template name | `data` |
| `master.persistence.size` | Master persistent volume size | `4Gi` |
| `master.persistence.storageClass` | Master persistent volume Class | `nil` |
| `master.persistence.accessMode` | Master persistent Access Mode | `ReadWriteOnce` |
| `data.exposeHttp` | Expose http port 9200 on data Pods for monitoring, etc | `false` |
| `data.replicas` | Data node replicas (statefulset) | `3` |
| `data.resources` | Data node resources requests & limits | `{} - cpu limit must be an integer` |
| `data.priorityClassName` | Data priorityClass | `nil` |
| `data.heapSize` | Data node heap size | `1536m` |
| `data.persistence.enabled` | Data persistent enabled/disabled | `true` |
| `data.persistence.name` | Data statefulset PVC template name | `data` |
| `data.persistence.size` | Data persistent volume size | `30Gi` |
| `data.persistence.storageClass` | Data persistent volume Class | `nil` |
| `data.persistence.accessMode` | Data persistent Access Mode | `ReadWriteOnce` |
| `data.podAnnotations` | Data StatefulSet annotations | `{}` |
| `data.nodeSelector` | Node labels for data pod assignment | `{}` |
| `data.tolerations` | Data tolerations | `{}` |
| `data.terminationGracePeriodSeconds` | Data termination grace period (seconds) | `3600` |
| `data.antiAffinity` | Data anti-affinity policy | `soft` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
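
For example, a minimal sketch overriding a few of the parameters from the table above (the release name `my-release` is just a placeholder):

```bash
# Override the image tag, the data-node count and the data-node heap size at install time.
helm install --name my-release \
  --set image.tag=6.3.1,data.replicas=3,data.heapSize=1536m \
  incubator/elasticsearch
```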

In terms of memory resources, you should make sure that you follow this equation:

- `${role}HeapSize < ${role}MemoryRequests < ${role}MemoryLimits`
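
As a concrete illustration for the data role (the numbers are illustrative only; the values file name is arbitrary):

```bash
# data.heapSize (1536m) < data memory request (2Gi) < data memory limit (3Gi)
cat > es-memory-values.yaml <<EOF
data:
  heapSize: "1536m"
  resources:
    requests:
      memory: "2Gi"
    limits:
      memory: "3Gi"
EOF
helm install --name my-release -f es-memory-values.yaml incubator/elasticsearch
```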

The YAML value of `cluster.config` is appended to the `elasticsearch.yml` file for additional customization (for example, `script.inline: on` to allow inline scripting).
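
A minimal sketch of that mechanism via a values file (the file name is arbitrary):

```bash
# Everything under cluster.config is appended to elasticsearch.yml.
cat > es-config-values.yaml <<EOF
cluster:
  config:
    script.inline: on
EOF
helm install --name my-release -f es-config-values.yaml incubator/elasticsearch
```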

# Deep dive

## Application Version

This chart aims to support Elasticsearch v2 and v5 deployments by specifying the `values.yaml` parameter `appVersion`.

### Version Specific Features

* Memory Locking *(variable renamed)*
* Ingest Node *(v5)*
* X-Pack Plugin *(v5)*

Upgrade paths & more info: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html

## Mlocking

This is a current limitation in Kubernetes: there is no way to raise the
limits on lockable memory, so these memory areas can be swapped out, which
degrades performance heavily. The issue is tracked in
[kubernetes/#3595](https://github.com/kubernetes/kubernetes/issues/3595).

```
[WARN ][bootstrap] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[WARN ][bootstrap] This can result in part of the JVM being swapped out.
[WARN ][bootstrap] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
```

## Minimum Master Nodes
> The minimum_master_nodes setting is extremely important to the stability of your cluster. This setting helps prevent split brains, the existence of two masters in a single cluster.
> When you have a split brain, your cluster is in danger of losing data. Because the master is considered the supreme ruler of the cluster, it decides when new indices can be created, how shards are moved, and so forth. If you have two masters, data integrity becomes perilous, since you have two nodes that think they are in charge.
> This setting tells Elasticsearch to not elect a master unless there are enough master-eligible nodes available. Only then will an election take place.
> This setting should always be configured to a quorum (majority) of your master-eligible nodes. A quorum is (number of master-eligible nodes / 2) + 1. For example, with three master-eligible nodes the quorum is (3 / 2) + 1 = 2.

More info: https://www.elastic.co/guide/en/elasticsearch/guide/1.x/_important_configuration_changes.html#_minimum_master_nodes
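
Reusing the `cluster.config` mechanism from the Configuration section, one way to pin the quorum explicitly is sketched below (the chart may already derive this value for you, so treat it as an assumption to verify):

```bash
# 3 master-eligible nodes => quorum = (3 / 2) + 1 = 2
cat > es-master-values.yaml <<EOF
master:
  replicas: 3
cluster:
  config:
    discovery.zen.minimum_master_nodes: 2
EOF
helm install --name my-release -f es-master-values.yaml incubator/elasticsearch
```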

# Client and Coordinating Nodes

Elasticsearch v5 updated its terminology and now refers to a `Client Node` as a `Coordinating Node`.

More info: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/modules-node.html#coordinating-node

## Select the right storage class for SSD volumes

### GCE + Kubernetes 1.5

Create a StorageClass for SSD persistent disks (SSD-PD):

```bash
$ kubectl create -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF
```
Create the cluster with storage class `ssd` on Kubernetes 1.5+:

```bash
$ helm install incubator/elasticsearch --name my-release --set data.persistence.storageClass=ssd,data.persistence.size=100Gi
```
31 changes: 31 additions & 0 deletions examples/helm/kanister/kanister-elasticsearch/templates/NOTES.txt
@@ -0,0 +1,31 @@
The elasticsearch cluster has been installed.

Elasticsearch can be accessed:

* Within your cluster, at the following DNS name at port 9200:

{{ template "elasticsearch.client.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local

* From outside the cluster, run these commands in the same shell:
{{- if contains "NodePort" .Values.client.serviceType }}

export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "elasticsearch.client.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.client.serviceType }}

WARNING: You have likely exposed your Elasticsearch cluster directly to the internet.
Elasticsearch does not implement any security for public-facing clusters by default.
As a minimum level of security, switch to ClusterIP/NodePort and place an Nginx gateway in front of the cluster to lock down access to dangerous HTTP endpoints and verbs.

NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get svc -w {{ template "elasticsearch.client.fullname" . }}'

export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "elasticsearch.client.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:9200
{{- else if contains "ClusterIP" .Values.client.serviceType }}

export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "elasticsearch.name" . }},component={{ .Values.client.name }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 9200:9200
{{- end }}
@@ -0,0 +1,48 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "elasticsearch.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create a default fully qualified client name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.client.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.client.name }}
{{- end -}}

{{/*
Create a default fully qualified data name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.data.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.data.name }}
{{- end -}}

{{/*
Create a default fully qualified master name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.master.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.master.name }}
{{- end -}}