
change targetpath permission to allow non-root access #430

Closed
wants to merge 3 commits

Conversation

Madhu-1
Collaborator

@Madhu-1 Madhu-1 commented Jun 13, 2019

change targetpath permission to allow non-root access
Signed-off-by: Madhu Rajanna madhupr007@gmail.com

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
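The diff itself is not visible in this conversation, so below is a minimal, hypothetical Go sketch of the kind of change the title describes: once the node plugin has created the target path and mounted the volume, relax the path's mode so that non-root containers can use it. The helper name, call site, and the 0777 mode are illustrative assumptions, not taken from the actual PR.

```go
package main

import (
	"fmt"
	"os"
)

// allowNonRootAccess is a hypothetical helper: after the target path exists
// and the volume is mounted, relax its permissions so a non-root user inside
// the consuming pod can read and write it. 0777 is illustrative; the real
// change may pick a narrower mode or rely on fsGroup handling instead.
func allowNonRootAccess(targetPath string) error {
	if err := os.Chmod(targetPath, 0777); err != nil {
		return fmt.Errorf("failed to change permissions of %s: %v", targetPath, err)
	}
	return nil
}

func main() {
	// Example only; in NodePublishVolume the path would come from the CSI request.
	if err := allowNonRootAccess("/tmp/example-targetpath"); err != nil {
		fmt.Println(err)
	}
}
```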
@Madhu-1
Collaborator Author

Madhu-1 commented Jun 13, 2019

@humblec @ShyamsundarR can we get this PR in for csiv1.0?

@humblec
Collaborator

humblec commented Jun 13, 2019

@Madhu-1 can you check why the CI failed?

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
@Madhu-1
Collaborator Author

Madhu-1 commented Jun 13, 2019

@humblec it's a known issue with the CI: https://travis-ci.org/ceph/ceph-csi/jobs/545193245#L562

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
@ShyamsundarR
Contributor

@humblec @ShyamsundarR can we get this PR in for csiv1.0?

We should get it in for v1.0, but the version would be v1.0.1 when we decide to update the images in quay, right? The whole point of creating the final v1.0.0 was to keep it unchanging and provide the next .z version with the fixes, so I would state that we retain that premise.

@Madhu-1
Collaborator Author

Madhu-1 commented Jun 13, 2019

If we update it to v1.0.1 we need to update it in rook also.
If we fix this in 1.0.0, people already using it will get the update.

Planning to update rook for v1.1.0.

@ShyamsundarR
Contributor

If we update it to v1.0.1 we need to update it in rook also.
If we fix this in 1.0.0, people already using it will get the update.

Planning to update rook for v1.1.0.

I understand the rook integration position, but since we have stated we will retain the 1.0.0 image as-is, I would still push for 1.0.1 and update rook to the same as well.

@Madhu-1
Collaborator Author

Madhu-1 commented Jun 13, 2019

If we update it to v1.0.1 we need to update it in rook also.
If we fix this in 1.0.0, people already using it will get the update.
Planning to update rook for v1.1.0.

I understand the rook integration position, but since we have stated we will retain the 1.0.0 image as-is, I would still push for 1.0.1 and update rook to the same as well.

@humblec what are your thoughts on this?

@Madhu-1
Collaborator Author

Madhu-1 commented Jun 13, 2019

If we update it to v1.0.1 we need to update it in rook also.
If we fix this in 1.0.0, people already using it will get the update.
Planning to update rook for v1.1.0.

I understand the rook integration position, but since we have stated we will retain the 1.0.0 image as-is, I would still push for 1.0.1 and update rook to the same as well.

@ShyamsundarR should we push v1.0.1 images without making a release?

@humblec
Collaborator

humblec commented Jun 13, 2019

@ShyamsundarR as @Madhu-1 mentioned, this fix is important and the customer should get it without changes to the tag. So, let's put it in 1.0.0 and avoid any extra tagging; we should not complicate things.

@humblec
Collaborator

humblec commented Jun 13, 2019

@ShyamsundarR @poornimag can you please approve this PR, as this is needed urgently for some testing?

@ShyamsundarR
Contributor

@ShyamsundarR as @Madhu-1 mentioned, this fix is important and the customer should get it without changes to the tag. So, let's put it in 1.0.0 and avoid any extra tagging; we should not complicate things.

This breaks what we have committed to users with the 1.0.0 image versions, so what is so special that the one customer cannot use a new image version, or test the 1.0-canary?

If you just need the vote, then I will give it, but the reasoning is missing.

@ShyamsundarR
Contributor

@ShyamsundarR as @Madhu-1 mentioned, this fix is important and the customer should get it without changes to the tag. So, let's put it in 1.0.0 and avoid any extra tagging; we should not complicate things.

This breaks what we have committed to users with the 1.0.0 image versions, so what is so special that the one customer cannot use a new image version, or test the 1.0-canary?

If you just need the vote, then I will give it, but the reasoning is missing.

Is it because the image versions are baked into the manifests carried by Rook, and it would need a further PR for the user to use the updated image?

Contributor

@ShyamsundarR ShyamsundarR left a comment


Barring the version discussion, the code changes are fine.

@humblec
Collaborator

humblec commented Jun 13, 2019

This breaks what we have committed to users with the 1.0.0 image versions,

@ShyamsundarR can you point me to the readme or document where we mentioned this?

@ShyamsundarR
Contributor

ShyamsundarR commented Jun 13, 2019

This breaks what we have committed to users with the 1.0.0 image versions,

@ShyamsundarR can you point me to the readme or document where we mentioned this?

#345 is a start.

<< Humble's edit below>>
Well, first of all there is no release to date for v1.0, so no clear decision has been made or confirmed on the above. We tagged v1.0 for reasons like our default branch and other preparations for v1.1.0. Updating rook templates, backporting, making a release on Rook v1.0, and the user updating to that release are all required for a user to get this change. More or less, the changes in ceph-csi may not be the "one and only" thing coming in rook, for clear tracking from the user's point of view. My point is that new users get this change immediately and without any hiccups if we update the existing tag, so I would vote for bringing in the change as mentioned by @Madhu-1. The existing deployments are NOT going to be disturbed as the image pull policy is NOT PRESENT. So, it should not be a concern.
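To make the image pull policy point concrete (my own illustration, not something stated in this PR): when imagePullPolicy is omitted, Kubernetes defaults it to Always only for ":latest" or untagged images, and to IfNotPresent for anything else, so a node that already has the v1.0.0 image cached keeps using it even if the tag is re-pushed. A rough Go sketch of that defaulting rule:

```go
package main

import (
	"fmt"
	"strings"
)

// defaultPullPolicy approximates the defaulting Kubernetes applies when a
// container spec omits imagePullPolicy: ":latest" or an untagged image
// defaults to Always, anything else to IfNotPresent.
func defaultPullPolicy(image string) string {
	// Only look at the last path component so a registry port (":5000")
	// is not mistaken for a tag.
	last := image
	if i := strings.LastIndex(image, "/"); i != -1 {
		last = image[i+1:]
	}
	tagSep := strings.LastIndex(last, ":")
	if tagSep == -1 || last[tagSep+1:] == "latest" {
		return "Always"
	}
	return "IfNotPresent"
}

func main() {
	for _, img := range []string{
		"quay.io/cephcsi/cephfsplugin:v1.0.0",
		"quay.io/cephcsi/cephfsplugin:latest",
		"quay.io/cephcsi/cephfsplugin",
	} {
		fmt.Printf("%-40s -> imagePullPolicy: %s\n", img, defaultPullPolicy(img))
	}
}
```

With IfNotPresent, nodes that have already pulled v1.0.0 keep the cached image; only a node that has never pulled it (for example, one added when scaling up the cluster) would fetch the re-tagged image, which is the case discussed below.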

@ShyamsundarR
Contributor

This breaks what we have committed to users with the 1.0.0 image versions,

@ShyamsundarR can you point me to the readme or document where we mentioned this?

#345 is a start.

<< Humble's edit below>>
Well, first of all there is no release to date for v1.0, so no clear decision has been made or confirmed on the above. We tagged v1.0 for reasons like our default branch and other preparations for v1.1.0. Updating rook templates, backporting, making a release on Rook v1.0, and the user updating to that release are all required for a user to get this change. More or less, the changes in ceph-csi may not be the "one and only" thing coming in rook, for clear tracking from the user's point of view. My point is that new users get this change immediately and without any hiccups if we update the existing tag, so I would vote for bringing in the change as mentioned by @Madhu-1. The existing deployments are NOT going to be disturbed as the image pull policy is NOT PRESENT. So, it should not be a concern.

Rook has the following means of specifying which CSI image versions to use, which should come in handy rather than having to push a PR to Rook, IMO.

Also, when a pod changes nodes or is restarted on another node, it would pull the latest image if it was never pulled before, so the statement "deployments are NOT going to be disturbed" is not true.

@humblec
Collaborator

humblec commented Jun 13, 2019

Also, when a pod changes nodes or is restarted on another node, it would pull the latest image if it was never pulled before, so the statement "deployments are NOT going to be disturbed" is not true.

The node plugin is a DaemonSet. That means the image is present on EVERY NODE!! So there is no question/confusion about "pod changes nodes or is restarted on another node ....".

In short, that statement is correct :).

The only possibility is when you scale up the cluster itself, not when a pod is rescheduled.

@ShyamsundarR
Contributor

Also, when a pod changes nodes or is restarted on another node, it would pull the latest image if it was never pulled before, so the statement "deployments are NOT going to be disturbed" is not true.

The node plugin is a DaemonSet. That means the image is present on EVERY NODE!! So there is no question/confusion about "pod changes nodes or is restarted on another node ....".

In short, that statement is correct :).

Agreed, I stand corrected.

The only possibility is when you scale up the cluster itself, not when a pod is rescheduled.

But the above is a possibility, so it still needs to be tackled, no?

@Madhu-1
Collaborator Author

Madhu-1 commented Jun 19, 2019

@humblec @ShyamsundarR ping

@ShyamsundarR
Contributor

The last option I want to provide, in terms of simplicity until we close out how we release updates, is as follows:

The pod manifests, if they change, are what get us into trouble, as these are packaged into the rook image and hence need a rook image rebuild. That does not seem to be the case at present, so it gives us an opportunity to go ahead and roll out 1.0.1.

@humblec @Madhu-1 If this is still not acceptable, please roll out 1.0.0 by taking a decision as the release owners, because any further back and forth on this is not going to get us anywhere IMO.

@ekuric

ekuric commented Jun 26, 2019

@ShyamsundarR @humblec @Madhu-1 I wonder when we can expect to have the new images per @ShyamsundarR's last comment, I mean quay.io/cephcsi/cephfsplugin:v1.0.1 and quay.io/cephcsi/rbdplugin:v1.0.0, which are supposed to fix / work around this problem. I would like to update the cluster and test with the usual suspects, mongodb/postgresql.

@Madhu-1
Collaborator Author

Madhu-1 commented Jul 18, 2019

Closing this one. This has been fixed in the new release v1.1.0.

@Madhu-1 Madhu-1 closed this Jul 18, 2019