diff --git a/examples/elasticsearch/README.md b/examples/elasticsearch/README.md
index a7c7f98e7f..e4e732b48b 100644
--- a/examples/elasticsearch/README.md
+++ b/examples/elasticsearch/README.md
@@ -1,19 +1,27 @@
# Elasticsearch Helm Chart

-This chart uses a standard Docker image of Elasticsearch (docker.elastic.co/elasticsearch/elasticsearch-oss) version 6.3.1 and uses a service pointing to the master's transport port for service discovery.
-Elasticsearch does not communicate with the Kubernetes API, hence no need for RBAC permissions.
+This chart uses a standard Docker image of Elasticsearch
+(`docker.elastic.co/elasticsearch/elasticsearch`) version 8.5.1 and uses a
+service pointing to the master's transport port for service discovery.
+Elasticsearch does not communicate with the Kubernetes API, hence no need for
+RBAC permissions.

## Warning for previous users

-If you are currently using an earlier version of this Chart you will need to redeploy your Elasticsearch clusters. The discovery method used here is incompatible with using RBAC.
-If you are upgrading to Elasticsearch 6 from the 5.5 version used in this chart before, please note that your cluster needs to do a full cluster restart.
-The simplest way to do that is to delete the installation (keep the PVs) and install this chart again with the new version.
-If you want to avoid doing that upgrade to Elasticsearch 5.6 first before moving on to Elasticsearch 6.0.
+If you are currently using an earlier version of this chart, you will need to
+redeploy your Elasticsearch clusters. The discovery method used here is
+incompatible with using RBAC. If you are upgrading to Elasticsearch 6 from the
+5.5 version used in this chart before, please note that your cluster needs to
+do a full cluster restart. The simplest way to do that is to delete the
+installation (keep the PVs) and install this chart again with the new version.
+If you want to avoid doing that, upgrade to Elasticsearch 5.6 first before
+moving on to Elasticsearch 6.0.

## Prerequisites Details

-* Kubernetes 1.9+ with Beta APIs enabled.
-* PV support on the underlying infrastructure.
-* Kanister version 0.85.0 with `profiles.cr.kanister.io` CRD installed
+* Kubernetes 1.20+
+* PV provisioner support in the underlying infrastructure
+* Kanister controller version 0.85.0 installed in your cluster
+* Kanctl CLI installed (https://docs.kanister.io/tooling.html#install-the-tools)

## StatefulSets Details
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

@@ -24,66 +32,120 @@ If you want to avoid doing that upgrade to Elasticsearch 5.6 first before moving
## Chart Details
This chart will do the following:

-* Implement a dynamically scalable elasticsearch cluster using Kubernetes StatefulSets/Deployments and also add the kanister blueprint to be used with it.
+* Implement a dynamically scalable Elasticsearch cluster using Kubernetes
+StatefulSets/Deployments and also add the Kanister blueprint to be used with it.
* Multi-role deployment: master, client (coordinating) and data nodes
* Statefulset Supports scaling down without degrading the cluster

## Installing the Chart

-For basic installation, you can install using the provided Helm chart that will install an instance of Elasticsearch as well as a Kanister blueprint to be used with it.
+For basic installation, you can use the provided Helm chart, which installs an
+instance of Elasticsearch as well as a Kanister blueprint to be used with it.
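+
+Before installing, you may want to confirm that the prerequisites listed above
+are in place. The following is only a quick sanity check, assuming Kanister is
+(or will be) installed into the `kanister` namespace used throughout this
+example:
+
+```bash
+# Check the Kubernetes server version (1.20+ is expected)
+$ kubectl version
+
+# Confirm the Kanister controller is running
+$ kubectl get pods --namespace kanister -l app=kanister-operator
+
+# Confirm the kanctl CLI is available on your PATH
+$ kanctl help
+```
+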
-Prior to install you will need to have the Kanister Helm repository added to your local setup.
+Prior to installing, you will need to have the Elastic Helm repository added
+to your local setup.

```bash
$ helm repo add elastic https://helm.elastic.co
+$ helm repo update
```

-Then install the sample Elasticsearch application with the release name `my-release` in its own namespace
-`es-test` using the command below. Make sure you have the kanister controller running in namespace `kasten-io` which is the default setting in Elasticsearch charts. Otherwise, you will also have to set the `kanister.controller_namespace` parameter value to the respective kanister controller namespace in the following command:
+Then install the sample Elasticsearch application with the release name
+`elasticsearch` in its own namespace `es-test` using the command below.
+Make sure you have the Kanister controller running in namespace `kanister`,
+which is the default setting in this example. Otherwise, you will also
+have to set the `kanister.controller_namespace` parameter value to the
+respective Kanister controller namespace in the following command:

```bash
-$ kubectl create namespace es-test
-$ helm install --namespace es-test elasticsearch elastic/elasticsearch --set antiAffinity=soft
+$ helm install --namespace es-test elasticsearch elastic/elasticsearch \
+    --set antiAffinity=soft --create-namespace
```

-The command deploys Elasticsearch on the Kubernetes cluster in the default
+The command deploys Elasticsearch on the Kubernetes cluster with the default
configuration.

+## Integrating with Kanister
+
+If you don't have Kanister installed already, you can use the following
+commands to do that.
+Add the Kanister Helm repository and install the Kanister operator:
+
+```bash
+$ helm repo add kanister https://charts.kanister.io
+$ helm install kanister --namespace kanister --create-namespace \
+    kanister/kanister-operator --set image.tag=0.85.0
+```
+
+### Create Profile
+
+Create a Profile CR if one has not been created already:
+
```bash
-kanctl --namespace es-test create profile --bucket <bucket-name> --region ap-south-1 s3compliant --access-key <aws-access-key> --secret-key <aws-secret-key>
+$ kanctl create profile s3compliant --access-key <aws-access-key> \
+    --secret-key <aws-secret-key> --namespace es-test \
+    --bucket <bucket-name> --region <region-name>
```
-This command creates a profile which we will use later.
+
+**NOTE:**
The command will configure a location where artifacts resulting from Kanister
-data operations such as backup should go. This is stored as a `profiles.cr.kanister.io`
-*CustomResource (CR)* which is then referenced in Kanister ActionSets. Every ActionSet
-requires a Profile reference to complete the action. This CR (`profiles.cr.kanister.io`)
-can be shared between Kanister-enabled application instances.
+data operations such as backup should go. This is stored as a
+`profiles.cr.kanister.io` *CustomResource (CR)*, which is then referenced in
+Kanister ActionSets. Every ActionSet requires a Profile reference to complete
+the action. This CR (`profiles.cr.kanister.io`) can be shared between
+Kanister-enabled application instances.
+
+### Create Blueprint
-Once Elasticsearch is running, you can populate it with some data. Follow the instructions that get displayed by running command `helm status my-release` to connect to the application.
+
+In order to perform backup, restore, and delete operations on the running
+Elasticsearch instance, we need to create a blueprint:

```bash
+$ kubectl create -f ./elasticsearch-blueprint.yaml -n kanister
+```
+
+Once Elasticsearch is running, you can populate it with some data.
+Follow the instructions that get displayed by running the command
+`helm status elasticsearch -n es-test` to connect to the application.
+
+```bash
+# Log in to the Elasticsearch container and get shell access
+$ kubectl exec -it elasticsearch-master-0 -n es-test -c elasticsearch -- bash
+
# Create index called customer
-$ curl -X PUT "localhost:9200/customer?pretty"
+$ curl -X PUT "https://elastic:${ELASTIC_PASSWORD}@localhost:9200/customer?pretty" -k
+{
+  "acknowledged" : true,
+  "shards_acknowledged" : true,
+  "index" : "customer"
+}

# Add a customer named John Smith
-$ curl -X PUT "localhost:9200/customer/_doc/1?pretty" -H 'Content-Type: application/json' -d'
+$ curl -X PUT "https://elastic:${ELASTIC_PASSWORD}@localhost:9200/customer/_doc/1?pretty" \
+    -H 'Content-Type: application/json' -d '{"name": "John Smith"}' -k
{
-  "name": "John Smith"
+  "_index" : "customer",
+  "_id" : "1",
+  "_version" : 1,
+  "result" : "created",
+  "_shards" : {
+    "total" : 2,
+    "successful" : 2,
+    "failed" : 0
+  },
+  "_seq_no" : 0,
+  "_primary_term" : 1
}
-'

# View the data
-$ curl -X GET "localhost:9200/_cat/indices?v"
+$ curl -X GET "https://elastic:${ELASTIC_PASSWORD}@localhost:9200/_cat/indices?v" -k
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
-green open customer xbwj34pTSZOdDI7xVR0qIA 5 1 1 0 8.9kb 4.4kb
-
-
-
-$ curl 'localhost:9200/customer/_search?q=*&pretty'
+green open customer YmIH-p0DRD--KzIA6i4Ayg 1 1 0 0 450b 225b
+
+$ curl -X GET "https://elastic:${ELASTIC_PASSWORD}@localhost:9200/customer/_search?q=*&pretty" -k
{
-  "took" : 9,
+  "took" : 867,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
@@ -100,7 +162,6 @@ $ curl 'localhost:9200/customer/_search?q=*&pretty'
    "hits" : [
      {
        "_index" : "customer",
-        "_type" : "_doc",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
@@ -110,108 +171,178 @@ $ curl 'localhost:9200/customer/_search?q=*&pretty'
    ]
  }
}
-
-```
-
-## Create the Blueprint
-
-In order to perform backup, restore, and delete operations on the running elasticsearch, we need to create a blueprint. You can create that using the command below from the root of kanister repo.
-
-```bash
-kubectl create -f ./examples/elasticsearch/elasticsearch-blueprint.yaml -n kasten-io
```

## Protect the Application

-You can now take a backup of the Elasticsearch data using an ActionSet defining backup for this application. Create an ActionSet in the same namespace as the controller using `kanctl`, a command-line tool that helps create ActionSets as shown below:
+You can now take a backup of the Elasticsearch data using an ActionSet defining
+backup for this application. Create an ActionSet in the same namespace as the
+controller using `kanctl`, a command-line tool that helps create ActionSets as
+shown below:

```bash
-$ kanctl create actionset --action backup --namespace kasten-io --blueprint elasticsearch-blueprint --statefulset es-test/elasticsearch-master --profile es-test/<PROFILE_NAME>
+$ kubectl get profile -n es-test
+NAME               AGE
+s3-profile-4dxn8   7m25s

-$ kubectl --namespace kasten-io get actionsets.cr.kanister.io
-NAME           AGE
-backup-lphk7   2h
+$ kanctl create actionset --action backup --namespace kanister \
+    --blueprint elasticsearch-blueprint \
+    --statefulset es-test/elasticsearch-master \
+    --profile es-test/s3-profile-4dxn8
+actionset backup-kmms4 created

# View the status of the actionset
-$ kubectl --namespace kasten-io describe actionset backup-lphk7
-```
-
-The PROFILE_NAME is the name of the profile generated from earlier kanctl create profile command. It can be retrieved using,
-
-```bash
-kubectl get profiles.cr.kanister.io -n es-test
+$ kubectl --namespace kanister get actionsets.cr.kanister.io
+NAME           PROGRESS   LAST TRANSITION TIME   STATE
+backup-kmms4   100.00     2023-01-04T09:45:22Z   complete
```

## Disaster strikes!

-Let's say someone with fat fingers accidentally deleted the customer index using the following command:
+Let's say someone with fat fingers accidentally deleted the customer index
+using the following command:

```bash
-$ curl -X DELETE "localhost:9200/customer?pretty"
+# Log in to the Elasticsearch container and get shell access
+$ kubectl exec -it elasticsearch-master-0 -n es-test -c elasticsearch -- bash
+
+# Delete the index
+$ curl -X DELETE "https://elastic:${ELASTIC_PASSWORD}@localhost:9200/customer?pretty" -k
{
  "acknowledged" : true
}
-```

If you try to access this data in the database, you should see that it is no longer there:
-
-```bash
-$ curl -X GET "localhost:9200/_cat/indices?v"
+# Get the index
+$ curl -X GET "https://elastic:${ELASTIC_PASSWORD}@localhost:9200/_cat/indices?v" -k
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
```

## Restore the Application

-To restore the missing data, we want to use the backup created earlier in the steps above. An easy way to do this is to leverage `kanctl`, a command-line tool that helps create ActionSets that depend on other ActionSets:
+To restore the missing data, we want to use the backup created earlier in the
+steps above. An easy way to do this is to leverage `kanctl`, a command-line tool
+that helps create ActionSets that depend on other ActionSets:

```bash
-$ kanctl --namespace kasten-io create actionset --action restore --from "backup-lphk7"
-actionset restore-backup-lphk7-hndm6 created
+$ kanctl create actionset --action restore --namespace kanister --from backup-kmms4
+actionset restore-backup-kmms4-rp89l created

# View the status of the ActionSet
-kubectl --namespace kasten-io describe actionset restore-backup-lphk7-hndm6
+$ kubectl --namespace kanister get actionsets.cr.kanister.io restore-backup-kmms4-rp89l
+NAME                         PROGRESS   LAST TRANSITION TIME   STATE
+restore-backup-kmms4-rp89l   100.00     2023-01-04T09:54:11Z   complete
```

You should now see that the data has been successfully restored to Elasticsearch!
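+
+If you prefer to verify from outside the pod, a temporary port-forward also
+works. The snippet below is only a sketch; it assumes the chart's default
+`elasticsearch-master` service and the `elasticsearch-master-credentials`
+secret that the chart creates for the `elastic` user.
+
+```bash
+# In one terminal, forward the Elasticsearch HTTP port to your workstation
+$ kubectl port-forward svc/elasticsearch-master 9200:9200 -n es-test
+
+# In a second terminal, read the elastic user's password from the chart's secret
+$ ES_PASSWORD=$(kubectl get secret elasticsearch-master-credentials -n es-test \
+    -o jsonpath='{.data.password}' | base64 --decode)
+
+# Query the restored index through the forwarded port
+$ curl -k "https://elastic:${ES_PASSWORD}@localhost:9200/_cat/indices?v"
+```
+
+Alternatively, exec into one of the Elasticsearch pods as before:
+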
```bash
-$ curl -X GET "localhost:9200/_cat/indices?v"
+# Log in to the Elasticsearch container and get shell access
+$ kubectl exec -it elasticsearch-master-0 -n es-test -c elasticsearch -- bash
+
+$ curl -X GET "https://elastic:${ELASTIC_PASSWORD}@localhost:9200/_cat/indices?v" -k
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
-green open customer xbwj34pTSZOdDI7xVR0qIA 5 1 1 0 8.9kb 4.4kb
+green open customer VtP3QddrTdq69mvq3NwCuQ 1 1 1 0 9.2kb 4.5kb
+
+$ curl -X GET "https://elastic:${ELASTIC_PASSWORD}@localhost:9200/customer/_search?q=*&pretty" -k
+{
+  "took" : 34,
+  "timed_out" : false,
+  "_shards" : {
+    "total" : 1,
+    "successful" : 1,
+    "skipped" : 0,
+    "failed" : 0
+  },
+  "hits" : {
+    "total" : {
+      "value" : 1,
+      "relation" : "eq"
+    },
+    "max_score" : 1.0,
+    "hits" : [
+      {
+        "_index" : "customer",
+        "_id" : "1",
+        "_score" : 1.0,
+        "_source" : {
+          "name" : "John Smith"
+        }
+      }
+    ]
+  }
+}
```

## Delete the Artifacts

-The artifacts created by the backup action can be cleaned up using the following command:
+The artifacts created by the backup action can be cleaned up using the
+following command:

```bash
-$ kanctl --namespace kasten-io create actionset --action delete --from "backup-lphk7"
-actionset "delete-backup-lphk7-5n8nz" created
+$ kanctl create actionset --action delete --namespace kanister \
+    --from backup-kmms4 --namespacetargets kanister
+actionset delete-backup-kmms4-sd6tj created

# View the status of the ActionSet
-$ kubectl --namespace kasten-io describe actionset delete-backup-lphk7-5n8nz
+$ kubectl --namespace kanister get actionsets.cr.kanister.io delete-backup-kmms4-sd6tj
+NAME                        PROGRESS   LAST TRANSITION TIME   STATE
+delete-backup-kmms4-sd6tj   100.00     2023-01-04T09:59:53Z   complete
```

### Troubleshooting

-If you run into any issues with the above commands, you can check the logs of the controller using:
+If you run into any issues with the above commands, you can check the logs of
+the controller using:

```bash
-$ kubectl --namespace kasten-io logs -l app=kanister-operator
+$ kubectl --namespace kanister logs -l app=kanister-operator
```

-## Delete the Helm deployment as normal
+## Cleanup

-```
-$ helm delete my-release
+### Uninstalling the chart
+
+```bash
+$ helm delete elasticsearch -n es-test
+release "elasticsearch" uninstalled
```

-Deletion of the StatefulSet doesn't cascade to deleting associated PVCs. To delete them:
+Deletion of the StatefulSet doesn't cascade to deleting associated PVCs.
+To delete them:
+
+```bash
+$ kubectl delete pvc -l app=elasticsearch-master -n es-test
+persistentvolumeclaim "elasticsearch-master-elasticsearch-master-0" deleted
+persistentvolumeclaim "elasticsearch-master-elasticsearch-master-1" deleted
+persistentvolumeclaim "elasticsearch-master-elasticsearch-master-2" deleted
```
-
-$ kubectl delete pvc -l release=my-release,component=data
+
+### Delete CRs
+
+Remove the Blueprint, Profile CR, and ActionSets:
+
+```bash
+$ kubectl delete blueprints.cr.kanister.io elasticsearch-blueprint -n kanister
+blueprint.cr.kanister.io "elasticsearch-blueprint" deleted
+
+$ kubectl get profiles.cr.kanister.io -n es-test
+NAME               AGE
+s3-profile-4dxn8   93m
+
+$ kubectl delete profiles.cr.kanister.io s3-profile-4dxn8 -n es-test
+profile.cr.kanister.io "s3-profile-4dxn8" deleted
+
+$ kubectl delete actionset backup-kmms4 delete-backup-kmms4-sd6tj \
+    restore-backup-kmms4-rp89l -n kanister
+actionset.cr.kanister.io "backup-kmms4" deleted
+actionset.cr.kanister.io "delete-backup-kmms4-sd6tj" deleted
+actionset.cr.kanister.io "restore-backup-kmms4-rp89l" deleted
```

## Configuration

-If you're on a single node cluster, you'd need to set the antiAffinity to soft while installing the helm chart by running `--set antiAffinity=soft` so that pods are not stuck in the pending state. For other configurations of elasticsearch helm chart, please refer https://github.com/elastic/helm-charts/blob/master/elasticsearch/README.md
+If you're on a single-node cluster, you need to set antiAffinity to soft while
+installing the Helm chart (by passing `--set antiAffinity=soft`) so that pods
+do not get stuck in the Pending state. For other configuration options of the
+Elasticsearch Helm chart, please refer to
+https://github.com/elastic/helm-charts/blob/master/elasticsearch/README.md
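+
+If you find yourself changing several options, it can be cleaner to keep them
+in a values file instead of repeating `--set` flags. The sketch below assumes
+the `antiAffinity` and `replicas` values documented in the upstream chart's
+README; double-check them against the chart version you install.
+
+```bash
+# Write a small values file for the elastic/elasticsearch chart
+$ cat > es-values.yaml <<'EOF'
+antiAffinity: "soft"
+replicas: 3
+EOF
+
+# Apply it on install, or upgrade an existing release with the same flags
+$ helm upgrade --install elasticsearch elastic/elasticsearch \
+    --namespace es-test --create-namespace -f es-values.yaml
+```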