Inconsistent data persistence configuration #387

Open
fmigneault opened this issue Oct 2, 2023 · 10 comments

@fmigneault
Collaborator

fmigneault commented Oct 2, 2023

Description

According to the following definition:

# Root directory under which all data persistence should be nested under
export DATA_PERSIST_ROOT="/data"

Setting this parameter seems to indicate that the server will be configured so that data persistence is applied.

However, unless something more is defined to override volumes, the following definitions are applied by default:
https://github.com/bird-house/birdhouse-deploy/blob/master/birdhouse/config/data-volume/docker-compose-extra.yml
https://github.com/bird-house/birdhouse-deploy/blob/master/birdhouse/config/wps_outputs-volume/docker-compose-extra.yml

Therefore, the data and wps_outputs volumes are not "persisted" at all. They reside in a temporary volume created by docker-compose, which could be wiped on any down operation. (Note: here it is assumed that "persistence" means the data survives server recreation following down/up. If the server crashes and DATA_PERSIST_ROOT is not itself mounted from another network/backup location, the data could still be lost.)
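
For reference, the linked defaults essentially declare plain named volumes with no driver options, roughly like this (a sketch, not the exact content of those files):

volumes:
  data: {}
  wps_outputs: {}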

One way to override the volume definitions so that data is actually "persisted" is as follows (assuming the default values are applied):

volumes:
  data:
    driver_opts:
      type: none
      device: /data
      o: bind
  wps_outputs:
    driver_opts:
      type: none
      device: /data/wps_outputs
      o: bind
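
Note that with type: none and o: bind, the device paths must already exist on the host before docker creates the volumes, otherwise the mount fails with "no such file or directory". A minimal sketch, assuming the default /data root:

# create the bind targets before running docker-compose up
mkdir -p /data /data/wps_outputs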

Questions

  • Should we help users set up these definitions by default (instead of the currently misleading configuration)?

    • We need to consider that the driver_opts can differ for different setups depending on the actual drive/mount strategy. The sample config above is therefore not necessarily applicable for all use cases.
    • We need to consider servers that already configure these overrides, which could end up with conflicting/overridden definitions (e.g., using a component that must be explicitly added to EXTRA_CONFS_DIR).
    • Should we only better guide the user in this process, by improving the comment around DATA_PERSIST_ROOT variable (and others)?
  • How do Cowbird and the WPS-outputs/user-workspace synchronization behave without these overrides? @ChaamC?

  • Other considerations for existing servers / deployment procedures? @mishaschwartz @tlvu ?

@fmigneault fmigneault self-assigned this Oct 2, 2023
@fmigneault
Collaborator Author

One inconsistency I found is as follows:

# WARNING:
# Working and output directories must be mounted as exact same path (e.g.: /some/path:/some/path)
# to avoid mismatching locations between container-side path resolution of cwltool stagedir/tmpdir/outdir
# and corresponding server-side runtime docker directories since Docker Application Packages will not be
# running inside this docker (not docker-in-docker), but next to it (sibling-dockers).
# Mapping of paths is needed because some part of the code runs inside Weaver/Worker containers, but actual
# execution of the children Application Package of CWL processes calls 'docker run' from the host machine
# and will expect to find the same directory locations.
- ${WEAVER_WPS_WORKDIR}:${WEAVER_WPS_WORKDIR}
- ${WEAVER_WPS_OUTPUTS_DIR}:${WEAVER_WPS_OUTPUTS_DIR}

Because Weaver requires the same path location on the host and inside the container, the "WPS volume" configured here gets defined (by default) as /data/wps_outputs/weaver.
However, the default wps_outputs docker-volume lives somewhere else entirely.
Therefore, the location mounted in proxy as wps_outputs will contain (amongst other things):

# /var/lib/docker/volumes/birdhouse_wps_outputs/_data/
finch/
hummingbird/
raven/
weaver/

The weaver directory in this case will be empty, because it is not the location configured for the Weaver component.
This also impacts the synchronization defined in Cowbird here to map the THREDDS/WPS-outputs directories:

@ChaamC
Collaborator

ChaamC commented Oct 2, 2023

How do Cowbird and the WPS-outputs/user-workspace synchronization behave without these overrides? @ChaamC?

I didn't use any overrides while testing those, and didn't find any issues. Note that I only tested using Magpie/Cowbird/JupyterHub and didn't validate the birds that produce wpsoutputs data; I only used fake wpsoutputs data to validate my features.
Also, symlink creation in Cowbird doesn't necessarily require an existing source path, as long as that source path is successfully mounted in JupyterHub afterwards, for the symlink to be functional.

I probably didn't have a problem since Cowbird uses a direct path for the data volume mount (here), instead of what some birds do with the generic volume mount (example here).

@fmigneault
Collaborator Author

fmigneault commented Oct 2, 2023

Other use cases:

- "${DATA_PERSIST_SHARED_ROOT}:${DATA_PERSIST_SHARED_ROOT}"

- ${JUPYTERHUB_USER_DATA_DIR}:${JUPYTERHUB_USER_DATA_DIR}
- jupyterhub_data_persistence:/persist:rw

- thredds_persistence:/usr/local/tomcat/content/thredds
- ${DATA_PERSIST_ROOT}/datasets:/pavics-data
- ${DATA_PERSIST_ROOT}/ncml:/pavics-ncml
- wps_outputs:/pavics-data/wps_outputs

This causes Cowbird (by default) to handle permission sync requests "correctly" within the /data directory, but none of them are really "effective", since the data volume, thredds_persistence and jupyterhub_data_persistence use alternate locations/volumes under /var/lib/docker/volumes/... instead of the expected directories nested under DATA_PERSIST_SHARED_ROOT and other similar variables.
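
One way to confirm where each named volume actually resides on disk is shown below (a sketch; the birdhouse_ prefix assumes the default compose project name):

# print the host directory backing a named volume
# (defaults end up under /var/lib/docker/volumes/<name>/_data)
docker volume inspect --format '{{ .Mountpoint }}' birdhouse_thredds_persistence
docker volume inspect --format '{{ .Mountpoint }}' birdhouse_jupyterhub_data_persistence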

@mishaschwartz
Collaborator

They reside in a temporary volume created by docker-compose, which could be wiped on any down operation

Note that a down operation does not delete named volumes. You have to explicitly request their removal with the --volumes flag (https://docs.docker.com/engine/reference/commandline/compose_down/)
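
For completeness, a sketch of the difference:

docker-compose down             # removes containers/networks; named volumes are kept
docker-compose down --volumes   # also removes named volumes declared in the compose file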

@mishaschwartz
Collaborator

Can we also add to the discussion: Why are we creating some named volumes outside of the docker compose lifecycle?

I'm talking specifically about:

  • grafana_persistence
  • jupyterhub_data_persistence
  • prometheus_persistence
  • thredds_persistence

I think it's also confusing that we are treating these volumes differently from named volumes in docker compose and bind-mounts.

@fmigneault
Collaborator Author

Can we also add to the discussion: Why are we creating some named volumes outside of the docker compose lifecycle?

Yes, we can add that as well.

I think it's also confusing that we are treating these volumes differently from named volumes in docker compose and bind-mounts.

In the case of jupyterhub_data_persistence, I can agree with keeping it as a named volume by itself, because its contents don't really need to be mapped elsewhere. It is JUPYTERHUB_USER_DATA_DIR that must be shared to allow Cowbird to synchronize links. I don't like the idea of passing jupyterhub_data_persistence with db/secrets to Cowbird since it needs the root DATA_PERSIST_ROOT, and using a directory under /data for jupyterhub_data_persistence would give Cowbird more access than it needs.

Not sure about the others.

@fmigneault
Collaborator Author

fmigneault commented Oct 3, 2023

@mishaschwartz
Thanks for pointing out the externally created named volumes.
I tried running an instance with a custom docker-compose-extra.yml to override them.
I encountered this error with thredds_persistence:

Error response from daemon: failed to mount local volume: mount /data/thredds:/var/lib/docker/volumes/birdhouse_thredds_persistence/_data, flags: 0x1000: no such file or directory

Although the docker-compose config seems to indicate what I expect:

volumes:
  thredds_persistence:
    driver_opts:
      device: /data/thredds
      o: bind
      type: none

... I believe the pre-creation of thredds_persistence as a named volume without the driver_opts causes a conflict that docker does not like.

Using the same strategy as data and wps_outputs, a default thredds_persistence: {} seems to work, and allows me to override it as desired afterward.
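
To confirm the conflict, the pre-created volume can be inspected; if it reports no options, it has to be removed (after migrating its contents) so that docker-compose can recreate it with the bind driver_opts. A sketch, assuming the volume name shown in the error above:

# show the options the existing volume was created with ("null" means none)
docker volume inspect --format '{{ json .Options }}' birdhouse_thredds_persistence
# back up / migrate the contents first, then remove the conflicting volume
# so the next docker-compose up recreates it with the new driver_opts
docker volume rm birdhouse_thredds_persistence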

@tlvu Any idea about this comment:

# no error if already exist
# create externally so nothing will delete these data volume automatically
docker volume create thredds_persistence # logs, cache

As @mishaschwartz pointed out, unless --volumes is added, even pavics-compose down does not remove this volume, and it also stays even when it is no longer attached to any service. Could it be an old docker behaviour, or some other cleanup manipulation, that caused the mentioned deletion of data?

reference: https://gitlab.com/crim.ca/clients/daccs/daccs-configs/-/merge_requests/28

@tlvu
Collaborator

tlvu commented Oct 3, 2023

First off, thanks @fmigneault for opening this issue. This is exactly what I was worried about in my comment #360 (comment). I felt the concern I raised, that the wps_outputs location Cowbird assumes is not the right one, was not well understood. That's why I asked whether there is a notebook or manual test that exercises Cowbird end-to-end in a real PAVICS deployment.

Now I will try to answer the various questions, ping me if I forget some.

The data and wps_outputs data-volumes predate me at Ouranos, so I am not sure why there is /data on disk and then those separate data-volumes. My hunch is that these are meant to be temporary storage: users should immediately download the wps_outputs generated by their computations, so the wps_outputs folder is not meant for long-term persistent storage. Also, as @mishaschwartz pointed out, docker-compose down won't destroy them; we need an explicit docker-compose down --volume to actually nuke them. So their persistence is still above that of anonymous data-volumes, which are gone even with a simple docker-compose down.

All the external data-volumes (grafana_persistence, jupyterhub_data_persistence, prometheus_persistence, thredds_persistence) were created by me. They are purposely external so that even a docker-compose down --volume won't destroy them, so they are persistent. I chose data-volumes to simplify their management since, with data-volumes, the permissions are managed by docker. Had the Geoserver data store been a data-volume, the fix-geoserver-data-dir-perm script would not have needed to exist.

So if data-volumes are great, why do we need /data? For very large datasets (Thredds, Geoserver, Jupyter user dir), they have to live on another disk, so by being under /data, their respective folders can be mapped to another disk. Also, for folders that are "synced" often (ex: /data/ncml), being outside of a data-volume simplifies the sync job.

So the decisions are purely opportunistic. I have no objection if we want to standardize all the external data-volumes (grafana_persistence, jupyterhub_data_persistence, prometheus_persistence, thredds_persistence) to live under /data. Note we will need a data migration script and probably some sort of fix-geoserver-data-dir-perm equivalent.

For Cowbird, maybe we need to migrate the data and wps_outputs data-volumes to under /data. The data data-volume is used by older birds built on the buildout framework (https://github.com/bird-house/birdhousebuilder.recipe.pywps), currently only Hummingbird. All newer birds (Finch, Raven, all the new birds from PCIC) use wps_outputs. So if we migrate wps_outputs, we need to coordinate with PCIC!

I think for the moment, the cheapest route is to have a full end-to-end test with Cowbird inside a real PAVICS, with a real bird actually generating a WPS output for Cowbird to hardlink. If we are lucky and can work out some magic to make this work, then no wps_outputs data-volume migration is required and there is no need to coordinate with PCIC. Otherwise, we will still have a real test to avoid Cowbird regressions when some config in a real PAVICS changes.

@fmigneault
Collaborator Author

One problem with the current default setup is that there are multiple "wps_outputs" locations: some are referenced by named volumes (finch, raven, hummingbird), while others use /data/wps_outputs (weaver). Cowbird does not take this dual-location situation into account. Therefore, we can never have proper hardlinks of all WPS outputs simultaneously in user workspaces.

For the external data-volumes, having them with explicit drive paths would persist the contents. Indeed, a docker-compose down --volume would remove the "volume definition", but once recreated, it would be remapped to the existing directory; the down operation does not wipe the mapped drive path contents. For adjusting permissions on those drives, it is possible to add more options to the o: flag of the device to apply owners/access automatically. Another option would be to have another docker service that runs only once, as a dependency of the target service using the volume, only to edit the permissions as necessary.
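
A rough sketch of that second option (a one-shot service that fixes permissions before the dependent service starts), assuming a docker-compose version that supports the service_completed_successfully condition; the service names and UID/GID are hypothetical examples:

services:
  wps_outputs-perms:
    image: alpine
    # one-shot: adjust ownership of the shared volume, then exit
    command: chown -R 1000:1000 /wps_outputs
    volumes:
      - wps_outputs:/wps_outputs
  some-bird:
    depends_on:
      wps_outputs-perms:
        condition: service_completed_successfully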

For Cowbird's data and wps_outputs, I am not too worried because their current definitions are overridable; they are not "blocking" issues. Their current default locations are probably not the best to represent persistent data (notably for the case of user workspaces), but we can leave them as is and have a strongly recommended (enabled by default?) "component/persistent-data-volumes" that sets /data and /data/wps_outputs for them. Of course, we need to synchronize how to enable all of this. Are PCIC planning to use Cowbird? If not, for their use case, leaving data: {} and wps_outputs: {} is perfectly fine.

@tlvu
Collaborator

tlvu commented Oct 12, 2023

Are PCIC planning to use Cowbird? If not, for their use case, leaving data: {} and wps_outputs: {} is perfectly fine.

@fmigneault They don't, for now. But Cowbird will eventually be in the default "minimal" list of components enabled automatically.

I guess, as you said, having Cowbird enabled but misconfigured is fine since it is unused.
