[stable9] Group shares with same source and target #25543
Conversation
@PVince81, thanks for your PR! By analyzing the annotation information on this pull request, we identified @SergioBertolinSG, @rullzer and @icewind1991 to be potential reviewers
Fixes #24575
How strange. I see that OC 9.0 had a
This 577651f is my tentative backport. It isn't finished yet as there are many unclear things; I just solved the conflicts and pushed. This is broken.
Just debugged into the
Oh, it does work! I had to manually fix it in the DB, and then it continued working. This means that there is another part of the code that creates the bogus extra share with target "/test (2)" before this mounting code is reached in the first place.
Okay, confirmed. As soon as I share with "group2" after sharing with "group1", some code automatically creates the bogus entries:
So if I could prevent that from happening, then the original grouping code could be made to work again and we wouldn't need a full backport of the MountProvider logic.
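For context, here is a minimal sketch of what that grouping code is supposed to achieve, written against plain arrays with illustrative field names (mirroring the oc_share columns) rather than the real core API: received shares that point at the same item_source and file_target should collapse into a single entry instead of producing two mounts.

```php
<?php
// Illustrative sketch only: group received shares that have the same
// item_source and file_target into one entry, OR-ing their permissions.
// Field names mirror the oc_share columns; this is not the core API.
function groupReceivedShares(array $shares): array {
    $grouped = [];
    foreach ($shares as $share) {
        $key = $share['item_source'] . '::' . $share['file_target'];
        if (isset($grouped[$key])) {
            // Same source and target: merge instead of mounting twice
            $grouped[$key]['permissions'] |= $share['permissions'];
        } else {
            $grouped[$key] = $share;
        }
    }
    return array_values($grouped);
}
```

With something of that shape in place, the shares to "group1" and "group2" above would merge into a single "/test" folder instead of "/test" plus "/test (2)".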
Shared_Updater::postShareHook is called; it calls
Okay, so that re-set up the storages for the receiver and, instead of grouping the shares, returned both separately for this specific call (we're still in the call that creates the share):
Noooooooooooo... The grouping logic is only called if the logged-in user matches: https://github.com/owncloud/core/blob/v9.0.4/lib/private/share/share.php#L1932 But in this specific call we're logged in as "admin" and creating a share that affects the user "user1". Since the logged-in user is not "user1", the grouping doesn't happen. WTF! Now I think one difference between this and OC 8.2 is that we probably did not call
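Paraphrasing the guard being described (this is not the literal code at that line, just the shape of the check): grouping only kicks in when the shares are resolved for the currently logged-in user, which is false while the sharer's request is setting up the recipient's filesystem.

```php
<?php
// Paraphrased sketch of the guard described above, not the literal code
// from lib/private/share/share.php: grouping only happens when the items
// are resolved for the currently logged-in user.
function shouldGroupReceivedShares(string $shareRecipient, string $sessionUser): bool {
    // While "admin" creates a share for "user1", the session user is still
    // "admin", so this returns false and the duplicate "/test (2)" survives.
    return $shareRecipient === $sessionUser;
}
```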
Force-pushed from 577651f to 971b3a6
And here we go. In light of #25543 (comment), here is a fix that adds a flag to force-group the received shares in the MountProvider: 2ac24cb @rullzer what do you think?
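Sketching the idea behind that flag with a hypothetical function name and signature (the real change is in 2ac24cb), reusing the groupReceivedShares() helper from the sketch earlier in this thread: grouping is applied either when the shares are resolved for the session user or whenever the caller explicitly forces it, which covers the share-creation path above.

```php
<?php
// Hypothetical sketch of the force-grouping flag (the actual change is in
// the MountProvider backport, commit 2ac24cb). Reuses the
// groupReceivedShares() helper sketched above.
function getMountsForUser(array $receivedShares, string $user, string $sessionUser, bool $forceGroup = false): array {
    if ($forceGroup || $user === $sessionUser) {
        // Collapse shares with the same source and target before mounting,
        // even when another user's request triggered the FS setup.
        $receivedShares = groupReceivedShares($receivedShares);
    }
    // ...one mount would then be created per remaining entry...
    return $receivedShares;
}
```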
Expected: received share is still called "test_renamed" (8.2 behavior)
Actual: two folders, "test" and "test_renamed"
Force-pushed from 971b3a6 to 8239ea3
Raised #25568 for the case from #25543 (comment)
Backported the tests from #25568 and part of its logic to fix #25543 (comment). Now the tests will tell you that it works, and a manual test shows the same. However, if you look at d5f2d2c, I didn't bother to sort by stime like I did in the original commit. For some weirdly twisted reason, it seems the logic that comes before that point already generates the correct order for the file_target.
Okay, I decided to improve the repair logic to pick the best target name based on all subshares, excluding the ones with duplication suffixes like "(2)". In case the user renamed all of the duplicates, the most recent one will be used. This is ready for review now.
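A minimal sketch of that selection rule, again on plain arrays with made-up names rather than the actual repair-step code: prefer the most recent subshare whose target carries no " (n)" duplication suffix; if every subshare was auto-suffixed, fall back to the group share's own target (the "least recent group share's target" in the commit message below).

```php
<?php
// Illustrative sketch of the target-picking rule described above
// (names are made up; the real logic lives in the repair-step commits).
function pickBestTarget(array $subShares, string $groupTarget): string {
    // Newest first, judged by share time (stime)
    usort($subShares, function ($a, $b) {
        return $b['stime'] - $a['stime'];
    });
    foreach ($subShares as $share) {
        // Skip auto-generated duplicates such as "/test (2)"
        if (!preg_match('/ \(\d+\)$/', $share['file_target'])) {
            return $share['file_target'];
        }
    }
    // Every subshare carries a "(n)" duplicate name: keep the group target
    return $groupTarget;
}
```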
annnd another quick fix for yet another case with direct share: deb1f08 |
Added a flag to enforce grouping of received shares even when the method is called for a user different from the current one. This can happen at sharing time, whenever the recipient's FS is being set up from the sharer's call. This fixes duplicated received folders for new shares.
The repair step was a bit overeager to skip repairing, so it missed the case where a group share exists without subshares but with an additional direct user share.
This would slow down the upgrade needlessly as there is no repair to be done.
Pick the most recent subshare whose target has no duplication suffix in parentheses, which should match whichever name the user picked last. If all subshares have parenthesized duplicate names, use the least recent group share's target instead.
Whenever a group share is created after a direct share, the stime order needs to be properly taken into account in the repair routine, because the direct user share is appended to the $subShares array, which breaks its order.
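In other words, once the direct user share has been appended, the combined list has to be re-sorted by stime before any "most recent" decision is made; a small self-contained sketch with illustrative data:

```php
<?php
// Sketch of the ordering fix described above: after the direct user share
// is appended to the subshares, re-sort by stime so "most recent" really
// means the last share created. Data is illustrative.
$subShares = [
    ['file_target' => '/test',     'stime' => 1000], // group subshare
    ['file_target' => '/test (2)', 'stime' => 3000], // group subshare
];
$directUserShare = ['file_target' => '/test_renamed', 'stime' => 2000];
$subShares[] = $directUserShare;      // appended out of chronological order
usort($subShares, function ($a, $b) {
    return $a['stime'] - $b['stime']; // restore oldest-to-newest order
});
```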
Force-pushed from deb1f08 to d2ad804
@owncloud/qa can you help test this? Would be good to have it in 9.0.5
@rullzer are you still interested in reviewing this?
👍 from my side. The code makes sense to me.
@rullzer thanks a bunch! Now waiting for @owncloud/qa to do a final test check
Forward ports of the last two commits about the file_target decision when repairing:
Works fine 👍
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Backport of #25113 to stable9
Backport of the server-side grouping fix: aa42b7b and e5af146 (I have a local squashed version to make it easier) => alternative fix. @owncloud/sharing