From 6ee1950ba82c04797dbab680799b0c64612f7302 Mon Sep 17 00:00:00 2001
From: fanmin shi
Date: Thu, 9 Nov 2017 13:21:57 -0800
Subject: [PATCH] doc: remove backup_config.md, backup_service.md, and download_backup.md

---
 doc/user/backup_config.md   | 147 ------------------------------------
 doc/user/backup_service.md  |  57 --------------
 doc/user/download_backup.md |  93 -----------------------
 3 files changed, 297 deletions(-)
 delete mode 100644 doc/user/backup_config.md
 delete mode 100644 doc/user/backup_service.md
 delete mode 100644 doc/user/download_backup.md

diff --git a/doc/user/backup_config.md b/doc/user/backup_config.md
deleted file mode 100644
index e67ddcdd0..000000000
--- a/doc/user/backup_config.md
+++ /dev/null
@@ -1,147 +0,0 @@
# Backup Options Config Guide

The etcd operator provides the following options for saving cluster backups:
- Persistent Volume (PV) on GCE or AWS
- Persistent Volume (PV) with custom StorageClasses
- S3 bucket on AWS
- Azure Blob Storage (ABS) container

This doc describes how to configure the etcd operator to use each of these backup options.

## PV with custom StorageClass

If your Kubernetes cluster supports the [StorageClass](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses) resource, you can use it to back up your etcd cluster. To do this, specify a `storageClass` value in the cluster's backup spec, like so:

```yaml
spec:
  ...
  backup:
    ...
    storageType: "PersistentVolume"
    pv:
      volumeSizeInMB: 512
      storageClass: foo
```

This spec field gives more granular control over how etcd data is persisted to PersistentVolumes: backups are saved to a PersistentVolume provisioned with the given StorageClass.

## PV on GCE

This essentially saves backups to a GCE PD instance when running Kubernetes on GCE.

Create your own storage class with the provisioner for GCE PD:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-gce-pd
provisioner: kubernetes.io/gce-pd
```
Then specify the name of the above storage class in the cluster backup spec field `spec.backup.pv.storageClass`.

## PV on AWS

This essentially saves backups to an AWS EBS volume when running Kubernetes on AWS.

Create your own storage class with the provisioner for AWS EBS:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-aws-ebs
provisioner: kubernetes.io/aws-ebs
```
Then specify the name of the above storage class in the cluster backup spec field `spec.backup.pv.storageClass`.

## S3 on AWS

The S3 backup policy is configured in a cluster's spec.

See the [S3 backup with cluster specific configuration](spec_examples.md#s3-backup-and-cluster-specific-s3-configuration) spec for how the cluster's `spec.backup` field should be configured for a cluster-specific S3 backup. The following additional fields need to be set under the cluster spec's `spec.backup.s3` field:
- `s3Bucket`: The name of the S3 bucket to store backups in.
- `awsSecret`: The name of the secret object, which should contain two files named `credentials` and `config` (see the sketch below).
- `prefix`: (Optional) The S3 [prefix](http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html).
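The secret referenced by `awsSecret` is an ordinary Kubernetes Secret whose two data keys are those files. As an illustrative sketch only, roughly equivalent to the `kubectl create secret --from-file` command shown further below (the credential values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws
type: Opaque
stringData:
  # Same content as ~/.aws/credentials
  credentials: |
    [default]
    aws_access_key_id = XXX
    aws_secret_access_key = XXX
  # Same content as ~/.aws/config
  config: |
    [default]
    region = us-west-1
```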
The profile to use in both the `credentials` and `config` files is `default`:
```
$ cat ~/.aws/credentials
[default]
aws_access_key_id = XXX
aws_secret_access_key = XXX

$ cat ~/.aws/config
[default]
region = us-west-1
```

We can then create the secret named "aws" from the two files:
```bash
$ kubectl -n <namespace> create secret generic aws --from-file=$AWS_DIR/credentials --from-file=$AWS_DIR/config
```

Once the secret is created, it can be used to configure a new cluster or update an existing one with cluster-specific S3 settings:
```
spec:
  backup:
    s3:
      s3Bucket: example-s3-bucket
      awsSecret: aws
      prefix: example-prefix
```

For AWS Kubernetes users: if the `credentials` file is not given, the operator and backup sidecar pods will use the AWS IAM roles of the nodes they are deployed on.

## ABS on Azure

The ABS backup policy is configured in a cluster's spec. See [spec_examples.md](spec_examples.md#three-member-cluster-with-abs-backup) for an example.

### Prerequisites

 * An ABS container needs to be created in Azure. Here we name the container `etcd-backups`:

   ```
   $ export AZURE_STORAGE_ACCOUNT=<storage-account> AZURE_STORAGE_KEY=<storage-key>
   $ az storage container create -n etcd-backups
   ```

 * A Kubernetes secret needs to be created. An example secret manifest looks like:

   ```
   apiVersion: v1
   kind: Secret
   metadata:
     name: abs-credentials
   type: Opaque
   stringData:
     storage-account: <storage-account>
     storage-key: <storage-key>
   ```

   To create the secret from the secret manifest:
   ```
   $ kubectl -n <namespace> create -f secret-abs-credentials.yaml
   ```

What we have:
- A secret "abs-credentials"
- An ABS container "etcd-backups"

### Cluster configuration

The following fields need to be set under the cluster spec's `spec.backup.abs` field:
- `absContainer`: The name of the ABS container to store backups in.
- `absSecret`: The secret object name (as created above).

An example cluster with specific ABS configuration then looks like:
```
spec:
  backup:
    storageType: "ABS"
    abs:
      absContainer: etcd-backups
      absSecret: abs-credentials
```

diff --git a/doc/user/backup_service.md b/doc/user/backup_service.md
deleted file mode 100644
index 4f42d6604..000000000
--- a/doc/user/backup_service.md
+++ /dev/null
@@ -1,57 +0,0 @@
## Backup service

A backup service will be created if the etcd cluster has backup enabled.
The backup service saves backups for the etcd cluster according to the requirements of the [backup spec](https://github.com/coreos/etcd-operator/blob/master/example/example-etcd-cluster-with-backup.yaml#L8-L12).

The backup service skips creating a new snapshot if the etcd cluster revision has not changed since the last snapshot, i.e. the etcd cluster data has not been modified (e.g. by `Put`, `Delete`, or `Txn`).

It also exposes an HTTP API for requesting a new backup and retrieving existing backups. The HTTP API can be accessed from inside the Kubernetes cluster as:
```bash
$ curl "http://<cluster-name>-backup-sidecar:19999/v1/<endpoint>"
```

## HTTP API v1

#### GET /v1/backupnow

The backup service requests a backup from the etcd cluster immediately when it receives the `GET` request.

Response Body

JSON format of the backup status when the backup is successful:

```go
type BackupStatus struct {
    // Creation time of the backup.
    CreationTime string `json:"creationTime"`

    // Size is the size of the backup in MB.
    Size float64 `json:"size"`

    // Version is the version of the backup cluster.
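    // Illustrative note (not part of the original type): a successful response
    // serializes these fields under their JSON tags, for example
    //   {"creationTime": "...", "size": 1.5, "version": "3.1.0", "timeTookInSecond": 1}
    // where the values shown here are made up.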
    Version string `json:"version"`

    // TimeTookInSecond is the total time it took to create the backup.
    TimeTookInSecond int `json:"timeTookInSecond"`
}
```

#### GET /v1/backup

The backup service returns the most recent backup in the body of the HTTP response when it receives the `GET` request.

Request Parameters

- etcdVersion (optional): the backup service checks compatibility between its latest backup and an etcd server running the given `etcdVersion`.
For example, to get a backup for an etcd 3.1.0 server, set etcdVersion to 3.1.0. The backup service then checks whether its latest backup can be used to restore a 3.1.0 etcd cluster.

- etcdRevision (optional): when both etcdVersion and etcdRevision are provided, a backup with revision equal to etcdRevision, taken from an etcd cluster with version equal to etcdVersion, is returned.

Response Headers

- X-etcd-Version: the etcd cluster version that the backup was made from
- X-Revision: the etcd store revision when the backup was made

#### GET /v1/status

The backup service returns the service status in JSON format. The JSON payload is defined in the `backupapi.ServiceStatus` type.

diff --git a/doc/user/download_backup.md b/doc/user/download_backup.md
deleted file mode 100644
index e00b4f3d7..000000000
--- a/doc/user/download_backup.md
+++ /dev/null
@@ -1,93 +0,0 @@
# Download Backup

The backups that the etcd operator saves are [etcd snapshots](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md).
This document describes the ways to download these backup/snapshot files.

## Backup Service

If an etcd cluster with backup enabled is still running, there is a backup service.
It is named `${CLUSTER_NAME}-backup-sidecar`. For example:
```
$ kubectl get svc
etcd-cluster-backup-sidecar   10.39.243.229   19999/TCP   4m
```

Given the etcd version of the cluster, you can download a backup via the service endpoint
`http://${CLUSTER_NAME}-backup-sidecar:19999/v1/backup?etcdVersion=${ETCD_VERSION}`. For example:
```
$ curl "http://etcd-cluster-backup-sidecar:19999/v1/backup?etcdVersion=3.1.0" -o 3.1.0_etcd.backup
```
The `etcdVersion` parameter is the version of the etcd cluster you intend to restore and run.
On success, the returned backup's etcd version is compatible with the given version;
otherwise, the service returns a non-OK response.

If sending the request from a pod in a different namespace, use the DNS name `${CLUSTER_NAME}-backup-sidecar.${NAMESPACE}.svc`.

From outside the Kubernetes cluster, an additional step is needed to access the backup service,
e.g. using an [ingress](https://kubernetes.io/docs/user-guide/ingress/).

## Get backup from S3

If the backup service is healthy, we suggest retrieving backups through it.

However, in disaster scenarios such as Kubernetes being down, the backup service is not running,
and we cannot retrieve a backup from it.

If the backup storage type is "S3", users have to get the backup directly from S3.

First of all, set up the AWS CLI: https://aws.amazon.com/cli/ .

Given the S3 bucket name that you passed to the etcd operator when starting it and the cluster name,
backups are saved under a prefix of the form `<s3-bucket>/<s3-prefix>/v1/<namespace>/<cluster-name>/`.

If [`s3_prefix`](./backup_config.md#cluster-level-configuration) is specified, list all backup files with:

```
$ aws s3 ls <s3-bucket>/<s3-prefix>/v1/<namespace>/<cluster-name>/
2017-01-24 02:13:30      24608 3.1.0_0000000000000002_etcd.backup
...
                               3.1.0_000000000000000f_etcd.backup
```

Otherwise:

```
$ aws s3 ls <s3-bucket>/v1/<namespace>/<cluster-name>/
2017-01-24 02:13:30      24608 3.1.0_0000000000000002_etcd.backup
...
                               3.1.0_000000000000000f_etcd.backup
```

The backup file name format is `<etcd-version>_<cluster-revision>_etcd.backup`. The revision is hexadecimal.

Unless you intentionally want an older backup, pick the backup with the highest revision.
E.g. in the examples above, we would pick the file "3.1.0_000000000000000f_etcd.backup".

Download the backup:
```
$ aws s3 cp "s3://<s3-bucket>/<s3-prefix>/v1/<namespace>/<cluster-name>/<backup-file>" $target_local_file
```

## Get backup from PV

If the backup service is healthy, we suggest retrieving backups through it.

However, in disaster scenarios such as Kubernetes being down, the backup service is not running,
and we cannot retrieve a backup from it.

If the backup storage type is "PV", users need to get the backup directly from the PV.

TODO: document how to find and mount the disk.

Backups are stored to (and restored from) the PVC named `${CLUSTER_NAME}-pvc`,
under the directory `/var/etcd-backup/v1/${CLUSTER_NAME}/`.

List all backup files:
```
$ ls /var/etcd-backup/v1/${CLUSTER_NAME}/
3.1.0_0000000000000002_etcd.backup  3.1.0_000000000000000f_etcd.backup ...
```

The backup file name format is `${ETCD_VERSION}_${CLUSTER_REVISION}_etcd.backup`. The revision is hexadecimal.

Unless you intentionally want an older backup, pick the backup with the highest revision.
E.g. in the examples above, we would pick the file "3.1.0_000000000000000f_etcd.backup".

TODO: document how to download.
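The original doc leaves the download step as a TODO. One possible approach, a sketch only using standard Kubernetes tooling rather than anything etcd-operator-specific, is to mount the PVC into a temporary pod and copy the file out with `kubectl cp`. The pod name `backup-reader`, the `busybox` image, and the `<namespace>` placeholder are illustrative assumptions:

```yaml
# backup-reader.yaml -- a temporary pod (name and image are illustrative) that
# mounts the cluster's backup PVC read-only at the documented backup path.
apiVersion: v1
kind: Pod
metadata:
  name: backup-reader
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: backup
      mountPath: /var/etcd-backup
      readOnly: true
  volumes:
  - name: backup
    persistentVolumeClaim:
      claimName: ${CLUSTER_NAME}-pvc
```

```
# Create the pod, wait for it to be Running, copy the chosen backup out, then clean up.
$ kubectl -n <namespace> create -f backup-reader.yaml
$ kubectl cp <namespace>/backup-reader:/var/etcd-backup/v1/${CLUSTER_NAME}/3.1.0_000000000000000f_etcd.backup ./3.1.0_etcd.backup
$ kubectl -n <namespace> delete pod backup-reader
```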