
Create service_namespace via helm if namespaced_separation is false #1322

Closed
wants to merge 3 commits

Conversation


@gowrisankar22 gowrisankar22 commented May 21, 2021

This PR contains:

  • Currently, if namespaced_separation is set to false, service instances are created in the namespace specified by service_namespace, but this namespace has to be created manually. This PR creates it automatically via helm (see the sketch below).
  • Added another toggle, create_services_namespace, to control whether the namespace is created.
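
For illustration, a minimal sketch of what the conditional namespace template could look like (hypothetical file name templates/services-namespace.yaml; the value paths follow the broker.* flags used later in this thread):

{{- if and (not .Values.broker.enable_namespaced_separation) .Values.broker.create_services_namespace }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.broker.services_namespace }}
{{- end }}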

@cf-gitbot

We have created an issue in Pivotal Tracker to manage this:

https://www.pivotaltracker.com/story/show/178246396

The labels on this github issue will be updated when the story is started.

@linux-foundation-easycla

linux-foundation-easycla bot commented May 21, 2021

CLA Signed

The committers are authorized under a signed CLA.

@codecov

codecov bot commented May 21, 2021

Codecov Report

Merging #1322 (fe83c71) into master (0b67b98) will decrease coverage by 0.36%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master    #1322      +/-   ##
==========================================
- Coverage   72.16%   71.80%   -0.37%     
==========================================
  Files          44       44              
  Lines        3837     3837              
==========================================
- Hits         2769     2755      -14     
- Misses        754      765      +11     
- Partials      314      317       +3     

@coveralls

coveralls commented May 21, 2021

Pull Request Test Coverage Report for Build 5420

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 88.805%

Totals Coverage Status
Change from base Build 5407: 0.0%
Covered Lines: 9690
Relevant Lines: 10534

💛 - Coveralls

@anoopjb anoopjb requested review from jayeshjk and anoopjb May 21, 2021 15:58
@anoopjb
Contributor

anoopjb commented May 25, 2021

Looks good 👍
The only downside is that if a services namespace already exists and is not managed by helm, we cannot proceed:

➜  helm-charts git:(master) $ kubectl create namespace services
namespace/services created
➜  helm-charts git:(master) $ helm install --set cluster.host=xxxxxxxx --set broker.enable_namespaced_separation=false --set broker.services_namespace=services --namespace interoperator --wait interoperator interoperator

Error: rendered manifests contain a resource that already exists. 
Unable to continue with install: Namespace "services" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; 
label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; 
annotation validation error: missing key "meta.helm.sh/release-name": must be set to "interoperator";
annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "interoperator"

@gowrisankar22
Author

gowrisankar22 commented May 26, 2021

@anoopjb True, during helm install we have to provide a unique namespace, but this is well-known k8s/helm behavior. Helm upgrade should be fine.

@anoopjb
Contributor

anoopjb commented May 26, 2021

Yes. Helm upgrade should not have a problem. 👍

Since Interoperator is stateless, it is possible to uninstall and then reinstall Interoperator without removing the service instances or bindings; the new Interoperator will pick up the service instances and bindings from the CRDs. Some users may rely on this deployment strategy, and I think we should support it as well.

We discussed this in the team and felt we could introduce a toggle in values.yaml, something like create_services_namespace: false, and create the namespace based on that toggle (see the sketch below).
@jayeshjk
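
A sketch of how that toggle could look in values.yaml (the exact layout of the chart's values file may differ):

broker:
  enable_namespaced_separation: false
  services_namespace: services
  # opt-in: when true, the chart creates services_namespace; default false
  create_services_namespace: false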

@gowrisankar22
Author

@anoopjb Makes sense. I have updated the PR with the changes.

jayeshjk
jayeshjk previously approved these changes May 27, 2021
@jayeshjk
Contributor

Please merge after the pipeline is green.

@vinaybheri
Contributor

Validation on k8s cluster n succeeded.

@vinaybheri
Contributor

Validation on k8s cluster n-1 succeeded.

@vinaybheri
Contributor

Validation on k8s cluster n-2 succeeded.

@vinaybheri
Contributor

Validation on k8s cluster n succeeded.

anoopjb
anoopjb previously approved these changes May 29, 2021
@anoopjb
Contributor

anoopjb commented May 30, 2021

I think there is a possibility of accidental deletion of the namespace.
I took inspiration from helm/helm#6794 (comment) and did some experiments.

I installed the interoperator with the changes:

$ helm install --set cluster.host=xxxx \
     --set broker.enable_namespaced_separation=false \
     --set broker.create_services_namespace=true \
     --set broker.services_namespace=services \
     --namespace interoperator --wait interoperator interoperator

It worked perfectly. The services namespace was created successfully,
and the newly created service instances are all placed in that namespace.

bash-3.2$ kubectl get sfserviceinstances -n services
NAME                                   STATE       AGE   CLUSTERID
0ccfa7e0-d6da-4225-94df-4ce85a4f61e2   succeeded   40m   1

But we have a tricky situation during helm upgrade.
If create_services_namespace is false, the rendered template will not contain the namespace template.
Helm then deletes the namespace, thereby terminating all the resources in it.
Thus the service instances in the services namespace are lost.
I feel this can happen, mostly accidentally, since create_services_namespace is false by default.

$ helm upgrade --set cluster.host=xxxx \
      --set broker.create_services_namespace=false \
      --set broker.enable_namespaced_separation=false \
      --namespace interoperator --wait interoperator interoperator
$ kubectl get ns services
NAME       STATUS        AGE
services   Terminating   61m

Another scenario I can imagine is downgrading to an older helm chart version where namespace creation is not implemented.
It looks like playing with namespaces is a risky business.

Let's dig a little deeper into this. I am parking this PR for further discussion.
@jayeshjk @gowrisankar22
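
For reference, helm/helm#6794 discusses the helm.sh/resource-policy annotation, which tells Helm to skip deleting a resource that disappears from the rendered manifests. Applied to the hypothetical namespace template sketched above, one possible mitigation (not part of this PR) could look like:

{{- if and (not .Values.broker.enable_namespaced_separation) .Values.broker.create_services_namespace }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.broker.services_namespace }}
  annotations:
    # Helm will leave this resource in place instead of deleting it
    "helm.sh/resource-policy": keep
{{- end }}

With keep, toggling create_services_namespace back to false would orphan the services namespace rather than delete it; the instances inside would survive, at the cost of a namespace Helm no longer manages.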

@anoopjb anoopjb requested review from anoopjb and jayeshjk May 30, 2021 18:15
@jayeshjk
Contributor

Thanks @anoopjb for looking into it.
On second look, given the above observation, I also feel that management of the services namespace should be kept independent of the Interoperator deployment.
The services namespace will contain metadata related to user service instances, and it makes sense to separate it from the stateless Interoperator control plane.
@gowrisankar22 will it be possible for you to try other workarounds for your scenario?

@jayeshjk jayeshjk dismissed their stale review June 1, 2021 06:56

needs further discussion

@jayeshjk jayeshjk dismissed anoopjb’s stale review June 1, 2021 06:56

needs further discussion

@anoopjb anoopjb closed this May 19, 2022