
Allow for persisting the /home/user/ directory #1105

Merged: 6 commits merged into devfile:main on Jun 9, 2023

Conversation

@AObuchow (Collaborator) commented May 19, 2023

What does this PR do?

Allows for persisting the /home/user/ directory in workspaces that have persistent storage enabled (i.e. they're using the per-user/common or per-workspace PVC strategies). Each workspace has its own unique /home/user/ directory (i.e. workspaces do not share /home/user/ directories).

This feature is disabled by default, but can be enabled with the config.workspace.persistUserHome.enabled: <bool> DWOC field.

When enabled, a volume named persistentHome will be added to the DevWorkspace and mounted to all container components. If a DevWorkspace already has a volume that mounts to /home/user/, the DevWorkspace spec remains unchanged.

What issues does this PR fix or reference?

Fix #1097

Is it tested? How?

Normal usage

  1. Install DWO using the changes from this PR
  2. Configure config.workspace.persistUserHome.enabled: true in the DWOC with kubectl edit dwoc -n $NAMESPACE:
apiVersion: controller.devfile.io/v1alpha1  
config:  
 routing:  
   defaultRoutingClass: basic  
 workspace:  
   imagePullPolicy: Always  
+   persistUserHome:  
+     enabled: true  
kind: DevWorkspaceOperatorConfig
  3. Apply a DevWorkspace that uses persistent storage (i.e. it uses the per-user/common or per-workspace storage strategy). The plain.yaml devworkspace will work for this case (the common/per-user PVC strategy should be used).

  4. Once the workspace starts up, verify that the devworkspace's pod spec has a volumeMount that uses the correct PVC and mounts to /home/user/:

containers:
    - resources:
        limits:
          memory: 512Mi
        requests:
          memory: 64M
      terminationMessagePath: /dev/termination-log
      name: web-terminal
      command:
        - tail
        - '-f'
        - /dev/null
(...)
      volumeMounts:
        - name: claim-devworkspace
          mountPath: /tmp/workspace-storage
          subPath: workspacecf48085b48de41fa
        - name: claim-devworkspace
          mountPath: /projects
          subPath: workspacecf48085b48de41fa/projects
+        - name: claim-devworkspace
+          mountPath: /home/user/
+          subPath: workspacecf48085b48de41fa/home/user/
        - name: workspace-metadata
          readOnly: true
          mountPath: /devworkspace-metadata
        - name: kube-api-access-7gxvp
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePolicy: File
      image: 'quay.io/wto/web-terminal-tooling:next'
  5. Exec with a shell into the workspace pod and create a file in /home/user/, e.g. echo "test" > /home/user/myfile.txt
  6. Stop the workspace by doing a kubectl edit dw plain-devworkspace -n $NAMESPACE and setting spec.started to false. Wait for the workspace deployment to scale to 0 pods.
  7. Start the workspace again by doing a kubectl edit dw plain-devworkspace -n $NAMESPACE and setting spec.started to true. Wait for the workspace to be ready again by checking its status with kubectl get dw -n $NAMESPACE -w.
  8. Ensure the file you created in step 5 has persisted by doing a cat /home/user/myfile.txt

Feature override case (/home/user/ volume already defined)

  1. Install DWO using the changes from this PR
  2. Configure config.workspace.persistUserHome.enabled: true in the DWOC
  3. Apply a DevWorkspace that already defines a volume that mounts to /home/user/, e.g.:
kind: DevWorkspace  
apiVersion: workspace.devfile.io/v1alpha2  
metadata:  
 name: plain-dw-with-home-vm  
spec:  
 started: true  
 routingClass: 'basic'  
 template:  
   components:  
     - name: pre-defined-home  
       volume: {}  
     - name: web-terminal  
       container:  
         image: quay.io/wto/web-terminal-tooling:next  
         memoryLimit: 512Mi  
         mountSources: true  
         volumeMounts:  
           - name: pre-defined-home  
             path: /home/user/  
         command:  
          - "tail"  
          - "-f"  
          - "/dev/null"
  4. Once the devworkspace is deployed, ensure the persistentHome volumeMount does not exist, and that the pre-defined-home volumeMount exists instead.

PR Checklist

  • E2E tests pass (when PR is ready, comment /test v8-devworkspace-operator-e2e, v8-che-happy-path to trigger)
    • v8-devworkspace-operator-e2e: DevWorkspace e2e test
    • v8-che-happy-path: Happy path for verification integration with Che

Part of devfile#1097

Signed-off-by: Andrew Obuchowicz <aobuchow@redhat.com>
Signed-off-by: Andrew Obuchowicz <aobuchow@redhat.com>
@AObuchow (Collaborator, Author)

/retest

codecov bot commented May 20, 2023

Codecov Report

Patch coverage: 83.33% and project coverage change: +0.30% 🎉

Comparison is base (89370ce) 51.64% compared to head (ea5f513) 51.94%.

❗ Current head ea5f513 differs from pull request most recent head df63d7b. Consider uploading reports for the commit df63d7b to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1105      +/-   ##
==========================================
+ Coverage   51.64%   51.94%   +0.30%     
==========================================
  Files          79       80       +1     
  Lines        7285     7356      +71     
==========================================
+ Hits         3762     3821      +59     
- Misses       3237     3246       +9     
- Partials      286      289       +3     
Impacted Files Coverage Δ
controllers/workspace/devworkspace_controller.go 62.42% <0.00%> (-0.30%) ⬇️
pkg/config/defaults.go 70.00% <ø> (ø)
pkg/provision/storage/shared.go 59.39% <0.00%> (ø)
pkg/config/sync.go 74.71% <100.00%> (+0.65%) ⬆️
pkg/library/home/persistentHome.go 100.00% <100.00%> (ø)
pkg/provision/workspace/tokens.go 91.57% <100.00%> (+0.18%) ⬆️

... and 3 files with indirect coverage changes


@amisevsk (Collaborator) left a comment

This is a good first shot at persistent home directories and should work, but there is some messiness I think we should clean up before merging. It feels like the added functionality is being grafted on in places where it's not a perfect fit.

A (totally separate) approach maybe worth considering:

Conceptually, we can think of the effect of persistUserHome.enabled = true on a devfile/DevWorkspace level rather than a pods/containers level. We can imagine the setting internally converting our devfiles by adding a volume component and mounting it to all container components:

 kind: DevWorkspace
 apiVersion: workspace.devfile.io/v1alpha2
 metadata:
   name: plain-devworkspace
 spec:
   template:
     components:
       - name: web-terminal
         container:
           image: quay.io/wto/web-terminal-tooling:next

           memoryLimit: 512Mi
           mountSources: true
+          volumeMounts:
+            - name: persistentHome
+              path: /home/user/
           command:
             - "tail"
             - "-f"
             - "/dev/null"
+      - name: persistentHome
+        volume: {}

This is a fairly simple operation, so why don't we just do that, out of step with (i.e. prior to) calling the storage provisioners?

  • If persistent home is enabled for the workspace,
    • Check preconditions (maybe don't do anything if user already has mounts added for home?)
    • Edit the flattened (or unflattened?) devfile by adding a volume component and updating all regular components to mount it at /home/user

With this approach, we don't need to worry about how the volume is handled, where it's mounted, etc., as the StorageProvisioner code is set up to handle mounting volume components to devfile components already. This would mean we can just plug into the reconcile loop as another (small) step and leave the provisioners code unchanged (and it also means the StorageProvisioners don't need to think about this setting).
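To make the suggested flow concrete, below is a minimal sketch of that devfile-level transformation (the function name is hypothetical and error handling is omitted; the volume name and mount path are taken from the PR description, not from the final implementation):

package home

import (
	dw "github.com/devfile/api/v2/pkg/apis/workspaces/v1alpha2"
)

// addPersistentHomeVolumeSketch returns a copy of the template with a
// persistentHome volume component appended and mounted at /home/user/ in
// every container component. The "volume already mounts to /home/user/"
// precondition check is left out of this sketch.
func addPersistentHomeVolumeSketch(template *dw.DevWorkspaceTemplateSpec) *dw.DevWorkspaceTemplateSpec {
	templateCopy := template.DeepCopy()
	templateCopy.Components = append(templateCopy.Components, dw.Component{
		Name: "persistentHome",
		ComponentUnion: dw.ComponentUnion{
			Volume: &dw.VolumeComponent{},
		},
	})
	for idx, component := range templateCopy.Components {
		if component.Container == nil {
			continue
		}
		templateCopy.Components[idx].Container.VolumeMounts = append(
			templateCopy.Components[idx].Container.VolumeMounts,
			dw.VolumeMount{Name: "persistentHome", Path: "/home/user/"},
		)
	}
	return templateCopy
}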

pkg/provision/storage/shared.go (outdated review thread, resolved)
Comment on lines 157 to 162
// When we persist the home directory, a volumeMount is created that references the PVC volume.
// This volumeMount should not be rewritten
if shouldPersistHomeDirectory(workspace) {
additionalVolumes[pvcName] = true
}

Collaborator:

Could you explain what this is doing and why it's necessary? Seems like a steep cost to have to pass in the full common.DevWorkspaceWithConfig rather than just a DevWorkspaceTemplateSpec just for this.

This reads like a misuse of the additionalVolumes struct -- it will work because we're adding a container that references the PVC volume directly, but in general we assume that containers aren't directly referring to the PVC anyways. I worry that we're breaking the general flow of steps in ProvisionStorage by passing data to rewriteContainerVolumeMounts that's already half processed and expecting the function to detect and handle that.

The purpose behind additionalVolumes is to hold volumes mounted to the workspace container that aren't controlled by DWO's storage provisioners (i.e. are not the per-user/per-workspace PVC) -- it's strange to optionally add the per-user/per-workspace PVC to this list in that context.

Collaborator:

We might also need to consider how we handle workspaces that do not normally need storage when persistUserHome is enabled -- currently, ProvisionStorage bails out early if the workspace does not have any volumes and mountSources is false for all containers (i.e. we would create no non-ephemeral PVCs for the workspace). Do we still want to add /home/user in that case?

Collaborator (Author):

Thanks to the new approach of this PR (that you suggested) this is already handled, since the persistent home devfile volume will be added to the DevWorkspace, resulting in a PVC being provisioned.

Comment on lines 300 to 302
containerName := podAdditions.Containers[cIdx].Name
volumeMountName := podAdditions.Containers[cIdx].VolumeMounts[vmIdx].Name
return fmt.Errorf("volumeMount '%s' for container '%s' already mounts to %s", volumeMountName, containerName, homeUserDirPath)
Collaborator:

Is failing the workspace the best we can do here? At an intention level, a user mounting a home directory by setting it up in their devfile just wants to duplicate the functionality we're adding here.

Is it possible for us to detect this case and include the user's manual configuration in the current functionality?

Some options:

  • If persistUserHome is enabled, ignore user-defined home mounts and use our own
  • If user's devfile has /home/user mounts already built-in, don't try to do our own solution (allowing the user to control it more closely if they so choose)
  • Somehow integrate our functionality with whatever the user provides (e.g. if a devfile volume is being used for a persistent home dir, we reuse that volume instead of creating our own?)

I'm kind of partial to no. 2 above -- if someone wants to configure it themself we shouldn't necessarily mess with that.

Returning an error here would mean any user who has in the meantime set up persistent home directories for themselves would have a potentially invalid devfile once this change rolls out.

Collaborator (Author):

I'm kind of partial to no. 2 above -- if someone wants to configure it themself we shouldn't necessarily mess with that.

Agreed, I ended up changing things so that if a user's devfile already has a volume mounting to /home/user, the persistentHome volume is not added and the feature is essentially disabled for that workspace.
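For illustration, a minimal sketch of that precondition check (the helper name and the trailing-slash normalization are assumptions for this sketch, not the PR's exact code):

package home

import (
	"strings"

	dw "github.com/devfile/api/v2/pkg/apis/workspaces/v1alpha2"
)

// hasUserDefinedHomeMount reports whether any container component already
// mounts a volume at /home/user/. When it returns true, the operator skips
// adding its own persistentHome volume and leaves the workspace spec untouched.
func hasUserDefinedHomeMount(template *dw.DevWorkspaceTemplateSpec) bool {
	for _, component := range template.Components {
		if component.Container == nil {
			continue
		}
		for _, vm := range component.Container.VolumeMounts {
			if strings.TrimSuffix(vm.Path, "/") == "/home/user" {
				return true
			}
		}
	}
	return false
}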

pkg/provision/storage/shared.go (outdated review thread, resolved)
pkg/config/sync.go (outdated review thread, resolved)
@AObuchow (Collaborator, Author)

@amisevsk +1 for your suggested approach - I agree, the way I currently have this implemented seems a bit like an abuse of the existing storage provisioner code. Currently reworking this PR, and will address your feedback further :)

@AObuchow (Collaborator, Author)

@amisevsk still doing some fixups -- will request a review when this is ready for you to check out again.

@AObuchow AObuchow force-pushed the persistent_home branch 5 times, most recently from cc35471 to 3eb33df Compare May 30, 2023 21:49
@AObuchow AObuchow requested a review from amisevsk May 31, 2023 14:11
@AObuchow (Collaborator, Author)

@amisevsk I've finally updated this PR according to your feedback and also updated the PR description.

It should be good to review, however I realized I can further improve things by adding a warning to the DevWorkspace if persistUserHome is enabled and the DevWorkspace already mounts a volume to /home/user/. Will work on adding another commit for this today.

@AObuchow AObuchow force-pushed the persistent_home branch 2 times, most recently from 23236df to f239b19 Compare May 31, 2023 17:20
@AObuchow (Collaborator, Author)

@amisevsk Added the warning condition when the DW already mounts a volume to /home/user/.

Good for review now, sorry for the last-minute changes.

@amisevsk (Collaborator) left a comment

Generally looking good 👍

Some minor organizational feedback

Comment on lines 143 to 153
sort.Slice(problemNames, func(i, j int) bool {
return problemNames[i] < problemNames[j]
})

if len(problemPaths) == 1 {
return fmt.Errorf("the following ServiceAccount tokens have the same path (%s) and mount path (%s): %s", path, mountPath, strings.Join(problemNames, ", "))
}
}
sort.Slice(problemNames, func(i, j int) bool {
return problemNames[i] < problemNames[j]
})
Collaborator:

It's not clear why this change is here (and why we sort twice, once before checking if the list has >1 items)

@AObuchow (Collaborator, Author) commented Jun 5, 2023:

It's not clear why this change is here

While working on this PR, the SA tokens test for when SA tokens have colliding mount paths failed due to the order of the SA tokens in the actual error message not matching the order in the expected output/error message.

I re-ran the CI and the test passed, which made me suspect there was some flakiness in the ordering of the SA tokens.

(and why we sort twice, once before checking if the list has >1 items)

I made a mistake. My intention was to ensure the SA tokens are always sorted before we return, not before checking if the list has >1 items.

// limitations under the License.
//

package workspace
Collaborator:

Minor nit: this function is more of a library function than a provision function -- we expect packages in pkg/provision to sync resources with the cluster, whereas pkg/library packages operate on DevWorkspaces themselves.

Collaborator:

I'd also consider not using dwerrors here out of principle, as it suggests a retryError may be returned which is not the case (we're just reusing it for warnings -- instead I'd return something like (warning string, err error)) but I'm not super firm on this one so it's up to you.

Structurally, maybe also consider taking a "return a modified copy" approach over editing the object passed in -- we do both, but IMO it's cleaner to return a copy that you then apply, as we do in the flatten package.

Collaborator (Author):

Minor nit: this function is more of a library function than a provision function -- we expect packages in pkg/provision to sync resources with the cluster, whereas pkg/library packages operate on DevWorkspaces themselves.

I ended up moving these functions into pkg/library/home/ though I'm open to other names for the package.

I considered naming the package homedir and persistentHome.
I also considered if the pkg/library/defaults package could be used to hold these functions (technically, the persistent home functions deal with adding the default (non-user-defined) home volume. Maybe a bit of a stretch?)

Collaborator (Author):

Structurally, maybe also consider taking a "return a modified copy" over editing the object passed in -- we do both but IMO it's cleaner to return a copy that you have apply as we do in the flatten package.

I reworked things to go with your suggested approach.

One thing I went back and forth on while working on this was trying to hide the complexity of the different cases that can occur from the devworkspace controller code. With the latest revision, the controller code is relatively simple, but the return values of AddPersistentHomeVolume() are relatively complex (bool, DWTemplateSpec, and error).

AddPersistentHomeVolume() could be simplified by making the controller call needsPersistentHomeDirectory() and deal with the different cases itself, but this made the controller code look a bit messy IMO. I'm open to going with this approach, however.

Comment on lines 80 to 82
return false, &dwerrors.WarningError{
Message: fmt.Sprintf("Unable to add persistentHome volume: volume '%s' already mounts to %s", volumeMount.Name, constants.HomeUserDirectory),
}
Collaborator:

I wonder if it's worth adding the warning here -- the user is doing nothing wrong and cannot really fix the issue themselves, as it's an administrator-level setting.

This suggests that folks shouldn't manually configure home directories, which I'm not sure is the case (e.g. if your devfile needs persistent home and you want it to work regardless of whether the config is set).

Perhaps a rewording to be more of an "info" warning than a "warning" warning?

Collaborator (Author):

I wonder if it's worth adding the warning here -- the user is doing nothing wrong and cannot really fix the issue themselves, as it's an administrator-level setting.

Agreed.

I ended up trying to reword the warning, but it might be better to not add a warning to the workspace and instead log this at the controller level (basically, trying to let the admin know what's going on rather than warning the user).

Let me know your thoughts.

Comment on lines 42 to 59
homeVolume := v1alpha2.Component{
Name: constants.HomeVolumeName,
ComponentUnion: v1alpha2.ComponentUnion{
Volume: &v1alpha2.VolumeComponent{},
},
}
homeVolumeMount := v1alpha2.VolumeMount{
Name: constants.HomeVolumeName,
Path: constants.HomeUserDirectory,
}

workspace.Spec.Template.Components = append(workspace.Spec.Template.Components, homeVolume)
for _, component := range workspace.Spec.Template.Components {
if component.Container == nil {
continue
}
component.Container.VolumeMounts = append(component.Container.VolumeMounts, homeVolumeMount)
}
Collaborator:

Does it make sense to do an extra-careful look before we leap here? E.g. apply the changes to a copy of components, check that the resulting workspace is still valid via devfilevalidation.ValidateComponents(components) and only return the added home if it doesn't break the workspace?

My main concern here is applying a change that takes a valid devfile and makes it invalid through some edge case (e.g. if the user has a volume named persistentHome that mounts somewhere else).

Collaborator (Author):

+1, good call. I added the use of devfilevalidation.ValidateComponents(components) and added an extra test for the case where the user already has a volume named persistentHome.
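A rough sketch of the validate-before-apply pattern described above (the wrapper name is hypothetical, addPersistentHomeVolumeSketch refers to the earlier sketch, and the devfilevalidation import path is assumed to be the devfile/api validation package):

package home

import (
	"fmt"

	dw "github.com/devfile/api/v2/pkg/apis/workspaces/v1alpha2"
	devfilevalidation "github.com/devfile/api/v2/pkg/validation" // assumed import path
)

// addHomeVolumeIfValid applies the persistentHome changes to a copy of the
// template and only returns the copy if the modified component list still
// passes devfile validation; otherwise the workspace is left unchanged.
func addHomeVolumeIfValid(template *dw.DevWorkspaceTemplateSpec) (*dw.DevWorkspaceTemplateSpec, error) {
	modified := addPersistentHomeVolumeSketch(template)
	if err := devfilevalidation.ValidateComponents(modified.Components); err != nil {
		return nil, fmt.Errorf("adding persistentHome volume would make the workspace invalid: %w", err)
	}
	return modified, nil
}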

Comment on lines 293 to 301
shouldPersistHome, workspaceWithHomeVolume, err := home.AddPersistentHomeVolume(workspace)
if err != nil {
reconcileStatus.addWarning(fmt.Sprintf("Info: default persistentHome volume is not being used: %s", err.Error()))
} else if shouldPersistHome {
workspace.Spec.Template = *workspaceWithHomeVolume
}

Collaborator:

I think it's cleaner to move the NeedsPersistentHome check out to be separate -- returning a bool to decide what you should do with the result is a bit of an antipattern. Below is the same number of lines and clearer IMO.

	if home.NeedsPersistentHomeVolume(workspace) {
		workspaceWithHomeVolume, err := home.AddPersistentHomeVolume(workspace)
		if err != nil {
			reconcileStatus.addWarning(fmt.Sprintf("Info: default persistentHome volume is not being used: %s", err.Error()))
		}
		workspace.Spec.Template = *workspaceWithHomeVolume
	}

Collaborator (Author):

Agreed, that's a lot clearer, and returning a bool to decide what to do with the result feels incorrect.

I went back and forth on this, but I decided to remove the workspace warning when the user is already mounting a volume to /home/user/, as the user isn't doing anything wrong, and it simplifies things a bit.

// Returns false if `persistUserHome` is disabled.
// Returns an error if at least one of the DevWorkspace's container components already mount a volume to `/home/user/`.
func needsPersistentHomeDirectory(workspace *common.DevWorkspaceWithConfig) (bool, error) {
if !*workspace.Config.Workspace.PersistUserHome.Enabled {
Collaborator:

Dereferencing a raw pointer reads as sketchy (even though the value is always set through the default).

Suggested change:
- if !*workspace.Config.Workspace.PersistUserHome.Enabled {
+ if !pointer.BoolDeref(workspace.Config.Workspace.PersistUserHome.Enabled, false) {

Comment on lines 142 to 149
}

if len(problemPaths) == 1 {
sort.Strings(problemNames)
return fmt.Errorf("the following ServiceAccount tokens have the same path (%s) and mount path (%s): %s", path, mountPath, strings.Join(problemNames, ", "))
}
}
sort.Strings(problemNames)
return fmt.Errorf("multiple ServiceAccount tokens share the same path and mount path: %s", strings.Join(problemNames, ", "))
Collaborator:

It still looks weird to me to have two identical commands in different spots, but it's not really an issue for this PR 🤷

Collaborator (Author):

Agreed. I've only encountered the CI tests failing once from the issue this is trying to fix.
We could just leave it out and see if it pops up in the future? 🤷‍♂️

Alternatively, the SA tokens test could be reworked a bit to prevent this flakiness (it would involve using a regex to split the error message into the SA token names, and then sorting that result), roughly as sketched below.
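A rough sketch of what that test-side rework could look like (the regex and helper are assumptions, not code from this PR):

package workspace

import (
	"regexp"
	"sort"
	"strings"
)

// Matches the comma-separated token list at the end of the collision error
// message, i.e. everything after the final ": ".
var tokenListRe = regexp.MustCompile(`: ([^:]+)$`)

// sortedTokenNames extracts the ServiceAccount token names from the error
// message and sorts them, so the test can compare them order-independently.
func sortedTokenNames(errMsg string) []string {
	match := tokenListRe.FindStringSubmatch(errMsg)
	if match == nil {
		return nil
	}
	names := strings.Split(match[1], ", ")
	sort.Strings(names)
	return names
}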

Collaborator:

It failed on another PR (updating the console dashboard) so the fix is worthwhile. I tried poking at it a bit to see if I could simplify but it's just annoying to implement so this is fine :D

Collaborator (Author):

Sounds good :)

- name: my-defined-volume
volume: {}
- name: pre-existing-home-volume
volume: {}
Collaborator:

Nit: please add trailing newlines to test files.

@AObuchow AObuchow force-pushed the persistent_home branch 2 times, most recently from d30bd16 to ea5f513 Compare June 7, 2023 21:00
@amisevsk (Collaborator) left a comment

LGTM

openshift-ci bot commented Jun 8, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: amisevsk, AObuchow

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci bot commented Jun 8, 2023

New changes are detected. LGTM label has been removed.

This commit allows the `/home/user/` directory in workspaces to persist if persistent storage is enabled
(through the use of the 'per-user' ('common') or 'per-workspace' storage strategy).

To enable this feature, modify the DWOC and set `config.workspace.persistUserHome.enabled` to `true`.

When enabled, a Devfile volume named `persistentHome` will be added to DevWorkspaces. All DevWorkspace container
components will mount the `persistentHome` volume at `/home/user/`. If a container component of a DevWorkspace already
mounts to `/home/user/`, the DWO-provisioned `persistentHome` volume will not be added to the DevWorkspace.

Fix devfile#1097

Signed-off-by: Andrew Obuchowicz <aobuchow@redhat.com>
Signed-off-by: Andrew Obuchowicz <aobuchow@redhat.com>
Signed-off-by: Andrew Obuchowicz <aobuchow@redhat.com>
The ServiceAccount token path collision tests were failing inconsistently, due to inconsistent ordering of the SA tokens.
SA tokens are now sorted by name when returning an error involving a path collision.

Signed-off-by: Andrew Obuchowicz <aobuchow@redhat.com>
@AObuchow (Collaborator, Author) commented Jun 9, 2023

Squashed the fixups and just updated the commit message for d637ab8 to remove the mention of a workspace warning being applied when a volume already mounts to /home/user/.

@AObuchow AObuchow merged commit 7f2085d into devfile:main Jun 9, 2023
@AObuchow AObuchow deleted the persistent_home branch June 9, 2023 17:39

Successfully merging this pull request may close these issues: Support for persistent $HOME directory
2 participants