docs: add design doc for Ceph COSI driver
First draft of the design doc for the Ceph COSI driver with Rook.

Resolves rook#7843

Signed-off-by: Jiffin Tony Thottan <thottanjiffin@gmail.com>
thotz committed Jun 8, 2023
1 parent 372f0a6 commit 39ad782
design/ceph/object/ceph-cosi-driver.md: 140 additions, 0 deletions
# Ceph COSI Driver Support

## Targeted for v1.12

## Background

Container Object Storage Interface (COSI) is a specification for container orchestration frameworks to manage object storage. Even though there is no standard protocol defined for object stores, the specification is flexible enough to accommodate any of them. The COSI spec abstracts common storage operations such as creating/deleting buckets, granting/revoking access to buckets, attaching/detaching buckets, and more. COSI released v1alpha1 with Kubernetes 1.25. COSI is a new project and is not yet fully integrated into Kubernetes.
More details about COSI can be found [here](https://kubernetes.io/blog/2022/09/02/cosi-kubernetes-object-storage-management/).
It is projected that COSI will become the only supported object storage driver interface in the near future.

## COSI Driver Deployment

The [COSI controller](https://github.com/kubernetes-sigs/container-object-storage-interface-controller) is deployed as a container in the default namespace. The Ceph COSI driver is deployed as a statefulset with a single replica, along with the [COSI sidecar container](https://github.com/kubernetes-sigs/container-object-storage-interface-provisioner-sidecar). The Ceph COSI driver can be deployed in any namespace and does not need to run alongside the COSI controller. The Ceph COSI driver is deployed with a service account that has the RBAC permissions defined in the upstream `sa.yaml` and `rbac.yaml` files referenced below.
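
As a rough illustration, the resulting workload could look like the following minimal sketch. The image names, socket path, and service account name here are assumptions, not final choices; the driver and sidecar conventionally share a gRPC socket over an `emptyDir` volume:

```yaml
# Minimal sketch of the Ceph COSI driver statefulset; images and names are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ceph-cosi-driver
  namespace: rook-ceph
spec:
  replicas: 1
  serviceName: ceph-cosi-driver
  selector:
    matchLabels:
      app: ceph-cosi-driver
  template:
    metadata:
      labels:
        app: ceph-cosi-driver
    spec:
      serviceAccountName: objectstorage-provisioner-sa  # from the upstream sa.yaml referenced below
      containers:
        - name: ceph-cosi-driver
          image: quay.io/ceph/cosi:latest  # assumed image
          volumeMounts:
            - name: socket
              mountPath: /var/lib/cosi  # driver and sidecar share a gRPC socket here
        - name: objectstorage-provisioner-sidecar
          image: gcr.io/k8s-staging-sig-storage/objectstorage-sidecar:latest  # assumed image
          volumeMounts:
            - name: socket
              mountPath: /var/lib/cosi
      volumes:
        - name: socket
          emptyDir: {}
```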

## Integration plan with Rook

The aim is to support the v1alpha1 version of COSI in Rook v1.12. Support will be extended to beta and release versions as appropriate. There should be an option in `operator.yaml` to validate whether the COSI controller exists, and another option at the object store level to bring up the Ceph COSI driver. Each Ceph object store should have its own COSI driver. A hypothetical sketch of the operator-level option follows.
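
For instance, the operator-level switch might be expressed as a setting in the operator ConfigMap. The setting name below is purely a placeholder, not part of the design:

```yaml
# Hypothetical sketch only; the setting name is a placeholder, not final.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  ROOK_CEPH_COSI_CONTROLLER_CHECK: "true"  # placeholder: verify the COSI controller is present
```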

### Pre-requisites

- COSI CRDs should be installed in the cluster via the following command:

```bash
kubectl apply -k github.com/kubernetes-sigs/container-object-storage-interface-api
```

- The COSI controller should be deployed in the cluster via the following command:

```bash
kubectl apply -k github.com/kubernetes-sigs/container-object-storage-interface-controller
```

### End to End Workflow

#### Changes in common.yaml

The following contents need to be appended to `common.yaml`; a condensed sketch of what they provide follows the list:

- <https://github.com/ceph/ceph-cosi/blob/master/resources/sa.yaml>
- <https://github.com/ceph/ceph-cosi/blob/master/resources/rbac.yaml>
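
The linked upstream files are authoritative; as a condensed, illustrative sketch (resource names abbreviated and assumed), they provide roughly:

```yaml
# Condensed sketch of the upstream sa.yaml and rbac.yaml; names are assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: objectstorage-provisioner-sa
  namespace: rook-ceph
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: objectstorage-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: objectstorage-provisioner-sa
    namespace: rook-ceph
roleRef:
  kind: ClusterRole
  name: objectstorage-provisioner-role  # grants access to COSI resources and secrets
  apiGroup: rbac.authorization.k8s.io
```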

#### cephcosi.ceph.rook.io CRD

The user needs to define the following custom resource to bring up the COSI driver for the Ceph object store.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCOSI
metadata:
  name: rook-ceph-cosi
  namespace: rook-ceph
spec:
  objectStoreName: <object-store-name>
```

The current COSI architecture allows the driver to interact with only one object store endpoint, a limitation of its APIs. Hence the plan is to support one object store per driver in Rook v1.12. When multiple object stores are supported in the future, the user will be able to set `spec.objectStoreName` to `all`. The Rook operator allows only one such resource in the cluster, i.e., one Ceph COSI driver per cluster; in the future this driver may support multiple object stores.

#### ceph-object-cosi-controller

The `ceph-object-cosi-controller` will watch for the `CephCOSI` custom resource in the cluster and will bring up the COSI driver for the Ceph object stores. If an object store name is specified, the Ceph COSI driver will wait until that object store is ready. The controller will also create a `CephObjectStoreUser` named `objectstorage-provisioner-<object-store-name>`, which provides the credentials for the Ceph COSI driver. Rook should also prevent deletion of the Ceph object store while it is attached to the driver. The secret needs to be updated if the Ceph object store endpoint or credentials change.
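
For example, for an object store named `my-store`, the automatically created user might look like the following sketch; the display name and capabilities shown here are assumptions, not settled design:

```yaml
# Sketch of the auto-created user; displayName and capabilities are assumptions.
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: objectstorage-provisioner-my-store
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "COSI driver user"
  capabilities:
    bucket: "*"  # the driver needs to create/delete buckets and manage access
    user: "*"
```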

#### Creating COSI Related CRDs

There are five different Kubernetes resources related to COSI: `Bucket`, `BucketAccess`, `BucketAccessClass`, `BucketClass` and `BucketClaim`. The user can create these resources using the following `kubectl` commands. All the examples will be added to the `deploy/examples/cosi` directory in the Rook repository; illustrative sketches of these manifests follow the commands.

```bash
kubectl create -f deploy/examples/cosi/bucketclass.yaml
kubectl create -f deploy/examples/cosi/bucketclaim.yaml
kubectl create -f deploy/examples/cosi/bucketaccessclass.yaml
kubectl create -f deploy/examples/cosi/bucketaccess.yaml
```
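
As a sketch of what those example manifests might contain, assuming the COSI v1alpha1 API: the `sample-*` names and the driver name are assumptions, and `ba-secret` ties into the pod example in the next section:

```yaml
# Illustrative sketches only; driver name and resource names are assumptions.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: sample-bucketclass
driverName: ceph.objectstorage.k8s.io  # assumed driver name
deletionPolicy: Delete
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: sample-bucketclaim
spec:
  bucketClassName: sample-bucketclass
  protocols: ["s3"]
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccessClass
metadata:
  name: sample-bucketaccessclass
driverName: ceph.objectstorage.k8s.io  # assumed driver name
authenticationType: KEY
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccess
metadata:
  name: sample-bucketaccess
spec:
  bucketClaimName: sample-bucketclaim
  bucketAccessClassName: sample-bucketaccessclass
  credentialsSecretName: ba-secret  # mounted by the application pod below
  protocol: s3
```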

#### Consuming COSI Bucket

The user needs to mount the secret created by `BucketAccess` as a volume in the application pod. The application can then access the bucket by parsing the mounted file.

```yaml
spec:
  containers:
    - name: app # hypothetical application container
      volumeMounts:
        - name: cosi-secrets
          mountPath: /data/cosi
  volumes:
    - name: cosi-secrets
      secret:
        secretName: ba-secret
```

```bash
cat /data/cosi/bucket_info.json
```

```json
{
  "apiVersion": "v1alpha1",
  "kind": "BucketInfo",
  "metadata": {
    "name": "ba-$uuid"
  },
  "spec": {
    "bucketName": "ba-$uuid",
    "authenticationType": "KEY",
    "endpoint": "https://rook-ceph-my-store:443",
    "accessKeyID": "AKIAIOSFODNN7EXAMPLE",
    "accessSecretKey": "wJalrXUtnFEMI/K...",
    "region": "us-east-1",
    "protocols": [
      "s3"
    ]
  }
}
```
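
As a sketch of how an application entrypoint might consume this file, assuming `jq` and the AWS CLI are available in the container image:

```bash
# Sketch only: extract the S3 endpoint and credentials from the mounted BucketInfo file.
BUCKET_INFO=/data/cosi/bucket_info.json
export AWS_ACCESS_KEY_ID="$(jq -r '.spec.accessKeyID' "$BUCKET_INFO")"
export AWS_SECRET_ACCESS_KEY="$(jq -r '.spec.accessSecretKey' "$BUCKET_INFO")"
BUCKET="$(jq -r '.spec.bucketName' "$BUCKET_INFO")"
ENDPOINT="$(jq -r '.spec.endpoint' "$BUCKET_INFO")"

# List the bucket contents to verify access.
aws --endpoint-url "$ENDPOINT" s3 ls "s3://$BUCKET"
```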

#### Coexistence of COSI and lib-bucket-provisioner

Currently buckets in the Ceph object store are provisioned via Object Bucket Claims (OBCs). The two mechanisms can coexist and can even use the same backend bucket from Ceph storage. No deployment or configuration changes are required to support both. The lib-bucket-provisioner is deprecated and will eventually be replaced by COSI as it stabilizes. The CRDs used by the two are different, so there are no conflicts between them.

#### Transition from lib-bucket-provisioner to COSI

This applies only to OBCs whose reclaim policy is `Retain`; otherwise the bucket is deleted when the OBC is deleted, so there is no point in migrating an OBC with the `Delete` reclaim policy.

- First, the user needs to create a **COSI Bucket resource** pointing to the backend bucket (see the sketch after this list).
- Then the user can create `BucketAccessClass` and `BucketAccess` resources referring to that COSI Bucket.
- Next, update the application's credentials with the `BucketAccess` secret. For an OBC this was a combination of a secret and a config map with keys like AccessKey, SecretKey, Bucket, BucketHost, etc.; here the details are in JSON format in the [secret](#consuming-cosi-bucket).
- Finally, the user needs to delete the existing OBC.
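
A statically created COSI Bucket pointing at an existing backend bucket might look like the following sketch, assuming the COSI v1alpha1 API; the driver name and the backend bucket ID are assumptions:

```yaml
# Sketch of a static COSI Bucket for migration; names and IDs are assumptions.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: migrated-bucket
spec:
  driverName: ceph.objectstorage.k8s.io  # assumed driver name
  bucketClassName: sample-bucketclass
  existingBucketID: ceph-bkt-obc-example  # the backend bucket originally created by the OBC
  deletionPolicy: Retain  # keep the backend bucket if this resource is deleted
  protocols:
    - s3
```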

### Points to remember

#### Ceph COSI Driver Requirements

- A CephObjectStore should be deployed and running in the cluster.
- The credentials/endpoint for the CephObjectStore should be made available by creating a CephObjectStoreUser with the proper permissions.
- The COSI controller should be deployed in the cluster.
- Rook should be able to manage multiple Ceph COSI drivers.
- Rook should not modify COSI resources like Bucket, BucketAccess, BucketAccessClass, or BucketClass.

#### Rook Requirements

- Rook needs to dynamically create/update the secret containing the credentials of the Ceph object store for the Ceph COSI driver.
- The user should not be required to deploy Rook differently when using COSI versus OBC for the Ceph object store, except for minimal changes in `operator.yaml` and `objectstore.yaml`.
- When provisioning the Ceph COSI driver, Rook must uniquely identify the driver name so that multiple COSI drivers or multiple Rook instances within a Kubernetes cluster will not collide.
