Create service_namespace via helm if namespaced_separation is false #1322
Conversation
We have created an issue in Pivotal Tracker to manage this: https://www.pivotaltracker.com/story/show/178246396 The labels on this github issue will be updated when the story is started.
Codecov Report
```diff
@@            Coverage Diff             @@
##           master    #1322      +/-   ##
==========================================
- Coverage   72.16%   71.80%   -0.37%
==========================================
  Files          44       44
  Lines        3837     3837
==========================================
- Hits         2769     2755      -14
- Misses        754      765      +11
- Partials      314      317       +3
```
Pull Request Test Coverage Report for Build 5420
💛 - Coveralls
Looks good 👍
@anoopjb True, during the helm install we have to provide a unique namespace, but this is well-known k8s behavior. Helm upgrade should be ok.
Yes, helm upgrade should not have a problem. 👍 Since Interoperator is stateless, it is possible to uninstall and then reinstall Interoperator without removing the service instances or bindings; the new Interoperator will pick up the service instances and bindings from the CRDs. Some users may follow this deployment strategy, and I think we should support it as well. We discussed this in the team and felt we could introduce a toggle in the chart to control whether helm creates the namespace.
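A minimal sketch of how such a toggle could look in the chart (the template filename is hypothetical; the value names are assumed from the `--set` flags used later in this thread):

```yaml
# templates/services-namespace.yaml (hypothetical path)
# Render the services namespace only when the toggle is enabled
# and namespaced separation is disabled.
{{- if and .Values.broker.create_services_namespace (not .Values.broker.enable_namespaced_separation) }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.broker.services_namespace }}
{{- end }}
```

With this guard, existing users who keep `create_services_namespace=false` (or who pre-create the namespace manually) are unaffected.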
@anoopjb Makes sense. I updated the PR with the changes.
Please merge after the pipeline is green.
Validation on k8s cluster n succeeded.
Validation on k8s cluster n-1 succeeded.
Validation on k8s cluster n-2 succeeded.
Validation on k8s cluster n succeeded.
I think there is a possibility of accidental deletion of the namespace. I installed Interoperator with the changes:

```shell
$ helm install --set cluster.host=xxxx \
    --set broker.enable_namespaced_separation=false \
    --set broker.create_services_namespace=true \
    --set broker.services_namespace=services \
    --namespace interoperator --wait interoperator interoperator
```

It worked perfectly:

```shell
bash-3.2$ kubectl get sfserviceinstances -n services
NAME                                   STATE       AGE   CLUSTERID
0ccfa7e0-d6da-4225-94df-4ce85a4f61e2   succeeded   40m   1
```

But we have a tricky situation during helm upgrade. Upgrading with the toggle turned off deletes the namespace, and with it everything inside:

```shell
$ helm upgrade --set cluster.host=xxxx \
    --set broker.create_services_namespace=false \
    --set broker.enable_namespaced_separation=false \
    --namespace interoperator --wait interoperator interoperator

$ kubectl get ns services
NAME       STATUS        AGE
services   Terminating   61m
```

Another scenario I can imagine is downgrading to an older helm chart version where creating the namespace is not implemented. Let's dig a little deeper into this. I am parking this PR for further discussion.
Thanks @anoopjb for looking into it.
PR Contains
- Currently the `service_namespace` must be created manually; this PR creates it via helm automatically.
- Adds a new toggle, `create_services_namespace`, to control whether helm creates the namespace.