
Volumes in a non-default location break the setup #45

Open

Luxtech opened this issue May 6, 2021 · 5 comments

Comments


Luxtech commented May 6, 2021

I cloned the git repo and edited the docker-compose file so that the volumes are stored in a known location. I always do this for Docker projects so that I can back up the volumes easily. After I changed the volume locations for this PartKeepr container, http://localhost:8080/setup/ gives this error: "Not Found. The requested URL was not found on this server."
If I keep the original docker-compose file, the setup works.

Can someone help me solve this problem? Or is it not possible to change the volume locations?

My docker-compose file:

version: "3"
services:

  database:
    image: mariadb:10.0
    restart: on-failure
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
      - MYSQL_DATABASE=partkeepr
      - MYSQL_USER=partkeepr
      - MYSQL_PASSWORD=partkeepr
    volumes:
      - ./volumes/mysql-data:/var/lib/mysql

  partkeepr:
    image: mhubig/partkeepr:latest
    restart: on-failure
    environment:
      - PARTKEEPR_DATABASE_HOST
      - PARTKEEPR_DATABASE_NAME
      - PARTKEEPR_DATABASE_PORT
      - PARTKEEPR_DATABASE_USER
      - PARTKEEPR_DATABASE_PASS
      - PARTKEEPR_FR3D_LDAP_DRIVER_ACCOUNTCANONICALFORM
      - PARTKEEPR_FR3D_LDAP_DRIVER_ACCOUNTDOMAINNAME
      - PARTKEEPR_FR3D_LDAP_DRIVER_ACCOUNTDOMAINNAMESHORT
      - PARTKEEPR_FR3D_LDAP_DRIVER_ACCOUNTFILTERFORMAT
      - PARTKEEPR_FR3D_LDAP_DRIVER_BASEDN
      - PARTKEEPR_FR3D_LDAP_DRIVER_BINDREQUIRESDN
      - PARTKEEPR_FR3D_LDAP_DRIVER_HOST
      - PARTKEEPR_FR3D_LDAP_DRIVER_OPTREFERRALS
      - PARTKEEPR_FR3D_LDAP_DRIVER_PASSWORD
      - PARTKEEPR_FR3D_LDAP_DRIVER_PORT
      - PARTKEEPR_FR3D_LDAP_DRIVER_USESSL
      - PARTKEEPR_FR3D_LDAP_DRIVER_USESTARTTLS
      - PARTKEEPR_FR3D_LDAP_DRIVER_USERNAME
      - PARTKEEPR_FR3D_LDAP_USER_ATTRIBUTE_EMAIL
      - PARTKEEPR_FR3D_LDAP_USER_ATTRIBUTE_USERNAME
      - PARTKEEPR_FR3D_LDAP_USER_BASEDN
      - PARTKEEPR_FR3D_LDAP_USER_ENABLED
      - PARTKEEPR_FR3D_LDAP_USER_FILTER
      - PARTKEEPR_LOCALE
      - PARTKEEPR_MAILER_AUTH_MODE
      - PARTKEEPR_MAILER_ENCRYPTION
      - PARTKEEPR_MAILER_HOST
      - PARTKEEPR_MAILER_PASSWORD
      - PARTKEEPR_MAILER_PORT
      - PARTKEEPR_MAILER_TRANSPORT
      - PARTKEEPR_MAILER_USER
      - PARTKEEPR_AUTH_MAX_USERS
      - PARTKEEPR_CATEGORY_PATH_SEPARATOR
      - PARTKEEPR_CRONJOB_CHECK
      - PARTKEEPR_FILESYSTEM_DATA_DIRECTORY
      - PARTKEEPR_FILESYSTEM_QUOTA
      - PARTKEEPR_MAINTENANCE
      - PARTKEEPR_MAINTENANCE_MESSAGE
      - PARTKEEPR_MAINTENANCE_TITLE
      - PARTKEEPR_OCTOPART_APIKEY
      - PARTKEEPR_PARTS_INTERNALPARTNUMBERUNIQUE
      - PARTKEEPR_PARTS_LIMIT
      - PARTKEEPR_USERS_LIMIT
      - PARTKEEPR_SECRET
    ports:
      - "8080:80"
    volumes:
      - ./volumes/partkeepr-conf:/var/www/html/app/config
      - ./volumes/partkeepr-data:/var/www/html/data
      - ./volumes/partkeepr-web:/var/www/html/web
    depends_on:
      - database

  cronjob:
    image: mhubig/partkeepr:latest
    restart: on-failure
    entrypoint: []
    command: bash -c "crontab /etc/cron.d/partkeepr && cron -f"
    volumes:
      - ./volumes/partkeepr-conf:/var/www/html/app/config:ro
      - ./volumes/partkeepr-data:/var/www/html/data
      - ./volumes/partkeepr-web:/var/www/html/web
    depends_on:
      - partkeepr

volumes:
  partkeepr-conf:
  partkeepr-data:
  partkeepr-web:
  mysql-data:

m4v commented Jul 25, 2021

I believe what is happening is that the image ships files in those directories, so directories like /var/www/html/data are masked by the (initially empty) host directories when your bind mounts are created. Docker initializes named volumes with the contents of the image, which is why this problem doesn't happen with the original docker-compose file.
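A minimal sketch of the difference (the paths here mirror the compose file above; this is an illustration, not a fix): a host-path bind mount hides whatever the image ships at that path, while a named volume is seeded from the image's contents on first use.

```yaml
services:
  partkeepr:
    image: mhubig/partkeepr:latest
    volumes:
      # Bind mount: the (initially empty) host directory masks
      # the files the image ships at /var/www/html/web.
      - ./volumes/partkeepr-web:/var/www/html/web

      # Named volume: on first use Docker copies the image's
      # /var/www/html/data into the volume, so nothing is lost.
      - partkeepr-data:/var/www/html/data

volumes:
  partkeepr-data:
```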


ukoda commented Mar 9, 2022

I am trying to do the same thing for the same reason and am seeing the same issue. I tried running the original docker-compose file to create a good set of volume files under /var/lib/docker/volumes, shutting the stack down, copying them to where I really want them, updating the docker-compose file to point there, and then bringing it back up. It throws a Forbidden error, so clearly I don't understand enough about how Docker works to solve this problem at this time.


ngtech commented May 13, 2022

You can use a third, one-time "init" container to copy the correct files into the bind-mounted volumes at the non-default location. Below is an excerpt of a working docker-compose.yaml file that bind mounts /data/docker/partkeepr/{conf,data,web} as volumes and attaches them to the partkeepr, partkeepr-cronjob and partkeepr-init containers.

  1. Modify your compose file to bind mount the named volumes (see the volumes: section below).
  2. Create the three host folders.
  3. Copy the partkeepr-init service definition below (it mounts the volumes under /var/www-dest/).
  4. Run docker-compose up partkeepr-init to initialize your bind-mounted folders and wait until it has rsynced the data.
  5. Comment out or delete the partkeepr-init service.
  6. Proceed with installation / usage.

  partkeepr:
    image: mhubig/partkeepr:latest
    container_name: partkeepr
    environment:
      ...
    volumes:
      - partkeepr-conf:/var/www/html/app/config
      - partkeepr-data:/var/www/html/data
      - partkeepr-web:/var/www/html/web

  partkeepr-cronjob:
    ...
    volumes:
      - partkeepr-conf:/var/www/html/app/config
      - partkeepr-data:/var/www/html/data
      - partkeepr-web:/var/www/html/web

  partkeepr-init:
    image: mhubig/partkeepr:latest
    restart: "no"
    entrypoint: []
    command: bash -c "apt update && apt install -y rsync && rsync -avHhSp /var/www/ /var/www-dest/"
    volumes:
      - partkeepr-conf:/var/www-dest/html/app/config
      - partkeepr-data:/var/www-dest/html/data
      - partkeepr-web:/var/www-dest/html/web

volumes:
  partkeepr-conf:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /data/docker/partkeepr/conf
  partkeepr-data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /data/docker/partkeepr/data      
  partkeepr-web:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /data/docker/partkeepr/web


Rascalov commented Dec 1, 2022

ngtech's solution worked for me. I also copied the database over separately, from the standard location to the custom location, and it works as intended.
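The database copy can follow the same one-time init pattern as ngtech's partkeepr-init. A hypothetical sketch (the service and volume names here are assumptions; adapt them to whatever `docker volume ls` shows for your original stack):

```yaml
  # Hypothetical one-time helper: copies the MariaDB data from the
  # old named volume into the new bind-mounted location, preserving
  # ownership and permissions. Run once with the stack stopped, then
  # remove this service.
  mysql-init:
    image: mariadb:10.0
    restart: "no"
    entrypoint: []
    command: bash -c "cp -a /old-data/. /new-data/"
    volumes:
      - mysql-data-old:/old-data:ro   # assumed name of the original volume
      - mysql-data:/new-data          # the new bind-type named volume
```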

crazyelectron-io commented

I had a similar issue with a container running in a Kubernetes Pod, and using an init container as @ngtech suggested worked like a charm. Thanks!
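In Kubernetes the same idea maps onto an initContainer. A minimal sketch, assuming the same image and a PersistentVolumeClaim named partkeepr-data (both names are illustrative, not from this thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: partkeepr
spec:
  initContainers:
    # Seeds the (initially empty) PersistentVolume with the files
    # the image ships at /var/www/html/data before the app starts.
    - name: seed-data
      image: mhubig/partkeepr:latest
      command: ["bash", "-c", "cp -a /var/www/html/data/. /seed/"]
      volumeMounts:
        - name: data
          mountPath: /seed
  containers:
    - name: partkeepr
      image: mhubig/partkeepr:latest
      volumeMounts:
        - name: data
          mountPath: /var/www/html/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: partkeepr-data
```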
