
feat(spc,csp): adding support for pool ReadOnly Threshold limit #1609

Merged
7 commits merged into openebs-archive:master on Feb 26, 2020

Conversation

@mynktl (Contributor) commented Feb 7, 2020

Changes:

  • Added support to set a readOnly threshold for SPC/CSP.

Sample SPC spec:

spec:
  name: cstor-pool
  type: sparse
  maxPools: 3
  poolSpec:
    poolType: striped
    roThresholdLimit: 80

Sample CSP spec:

spec:
  group:
  - blockDevice:
    - deviceID: /var/openebs/sparse/2-ndm-sparse.img
      inUseByPool: true
      name: sparse-362d689cfa30cf1ac2cd11ca6dcf9449
  poolSpec:
    cacheFile: /tmp/pool1.cache
    overProvisioning: false
    poolType: striped
    roThresholdLimit: 80
    thickProvisioning: false

Sample output:

kubectl get csp -n openebs
NAME              ALLOCATED   FREE    CAPACITY   STATUS    READONLY   TYPE      AGE
cstor-pool-1mns   117M        9.82G   9.94G      Healthy   true      striped   3h
Events:
  Type     Reason                 Age                  From       Message
  ----     ------                 ----                 ----       -------
  Normal   Synced                 12m                  CStorPool  Received Resource create event
  Normal   Synced                 12m (x2 over 12m)    CStorPool  Received Resource modify event
  Normal   Created                12m                  CStorPool  Resource created successfully
  Normal   Synced                 8m23s                CStorPool  Received Resource create event
  Normal   Imported               8m22s                CStorPool  Resource imported successfully
  Normal   Synced                 29s (x4 over 8m23s)  CStorPool  Received Resource modify event
  Warning  PoolReadOnlyThreshold  21s                  CStorPool  Pool storage limit reached to threshold. Pool expansion is required to make it's replica RW

Note:

  1. Since SPC reconciliation is not supported, the user can set roThresholdLimit on an SPC only at creation time.
  2. The CSP's roThresholdLimit can be updated manually.
  3. By default, roThresholdLimit is set to 0.
  4. If roThresholdLimit is set to 0 or 100:
    - Writes on the pool will be allowed until it runs out of space (a sketch of this check follows these notes).
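
For illustration, here is a minimal sketch in Go of the decision these notes describe. It is not the PR's handler code; the shouldSetReadOnly name and the usedPercent input are assumptions, and in the controller the actual flip is performed via pool.SetPoolRDMode.

package main

import "fmt"

// shouldSetReadOnly mirrors the behaviour described in the notes above:
// a threshold of 0 or 100 disables the read-only cut-off; otherwise the
// pool is flipped to read-only once usage crosses the threshold.
func shouldSetReadOnly(roThresholdLimit, usedPercent int, alreadyReadOnly bool) bool {
	if roThresholdLimit == 0 || roThresholdLimit == 100 {
		// Writes are allowed until the pool actually runs out of space.
		return false
	}
	return !alreadyReadOnly && usedPercent >= roThresholdLimit
}

func main() {
	fmt.Println(shouldSetReadOnly(80, 85, false)) // true: 85% used crosses the 80% limit
	fmt.Println(shouldSetReadOnly(0, 99, false))  // false: threshold disabled
}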

Signed-off-by: mayank <mayank.patel@mayadata.io>

@mynktl marked this pull request as ready for review on February 11, 2020 08:21
@mynktl (Contributor, Author) commented Feb 11, 2020

Test case: openebs/openebs#2909

@mittachaitu left a comment

Provided a few comments.

Review threads (resolved) on:
  pkg/apis/openebs.io/v1alpha1/cstor_pool.go
  pkg/install/v1alpha1/cstor_pool.go
  cmd/cstor-pool-mgmt/pool/pool.go
  cmd/cstor-pool-mgmt/controller/pool-controller/handler.go (two threads)
@mittachaitu left a comment

Once the pool is set as ReadOnly, I think we should avoid creating CVRs on that pool.

@mynktl (Contributor, Author) commented Feb 25, 2020

@mittachaitu I have addressed your comments. PTAL.
Regarding CVR creation, we should not block it. CVR creation will still happen, but IOs will not, which prevents the pool from running out of space. So we are OK with allowing CVR creation.

(cStorPool.Spec.PoolSpec.ROThresholdLimit != 0 &&
cStorPool.Spec.PoolSpec.ROThresholdLimit != 100) {
if !cStorPool.Status.ReadOnly {
if err = pool.SetPoolRDMode(cStorPool, true); err != nil {
A Contributor commented:

Did we cover the case where pool.SetPoolRDMode is done, but the update of cStorPool.Status.ReadOnly to true in etcd does not happen for some reason, such as a restart of pool-mgmt or a failed etcd update?

@@ -134,6 +135,12 @@ func (cb *CasPoolBuilder) withAnnotations(annotations map[string]string) *CasPoo
return cb
}

// WithPoolROThreshold set PoolROThreshold value
func (cb *CasPoolBuilder) WithPoolROThreshold(poolROThreshold int) *CasPoolBuilder {
@mynktl (Contributor, Author) commented:

@kmova Should we add a default value for the pool read-only threshold, like 80, so that if pool storage consumption reaches 80% the replica controller sets the pool to read-only mode and no IOs are allowed (volume creation is still allowed) on the pool unless the user expands it?
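
For context, the body of WithPoolROThreshold is not shown in the hunk above. A minimal sketch of how such a builder method typically looks, using stand-in types and an assumed field name, is:

package main

import "fmt"

// Stand-in types for illustration only; the real CasPool and CasPoolBuilder
// in the openebs codebase carry many more fields.
type CasPool struct {
	PoolROThreshold int // assumed field name
}

type CasPoolBuilder struct {
	CasPool *CasPool
}

// WithPoolROThreshold records the pool read-only threshold on the CasPool being built.
func (cb *CasPoolBuilder) WithPoolROThreshold(poolROThreshold int) *CasPoolBuilder {
	cb.CasPool.PoolROThreshold = poolROThreshold
	return cb
}

func main() {
	cb := &CasPoolBuilder{CasPool: &CasPool{}}
	cb.WithPoolROThreshold(80)
	fmt.Println(cb.CasPool.PoolROThreshold) // 80
}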

return
}

func convertToBytes(a []string) (number []int64, err error) {
A Contributor commented:

@sonasingh46, check whether this function helps you.
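
The body of convertToBytes is not part of the excerpt above. As an illustration only, a hypothetical version could parse zpool-style size strings such as "117M" or "9.82G" (as seen in the sample output earlier) into byte counts:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Hypothetical sketch; the PR's real convertToBytes is not shown above.
var unitMultiplier = map[byte]int64{
	'K': 1 << 10,
	'M': 1 << 20,
	'G': 1 << 30,
	'T': 1 << 40,
}

func convertToBytes(values []string) ([]int64, error) {
	numbers := make([]int64, 0, len(values))
	for _, v := range values {
		v = strings.TrimSpace(v)
		if v == "" {
			return nil, fmt.Errorf("empty size value")
		}
		multiplier := int64(1)
		if m, ok := unitMultiplier[v[len(v)-1]]; ok {
			multiplier = m
			v = v[:len(v)-1]
		}
		f, err := strconv.ParseFloat(v, 64)
		if err != nil {
			return nil, fmt.Errorf("failed to parse size %q: %v", v, err)
		}
		numbers = append(numbers, int64(f*float64(multiplier)))
	}
	return numbers, nil
}

func main() {
	sizes, err := convertToBytes([]string{"117M", "9.82G"})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(sizes)
}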

@vishnuitta requested a review from kmova on February 26, 2020 07:01
@vishnuitta (Contributor) left a comment

Changes are good.

mayank added 2 commits February 26, 2020 16:39
@vishnuitta merged commit 6daf5f0 into openebs-archive:master on Feb 26, 2020