Cronjobs

This is what we have in every VM:

In order, these jobs:

  1. clean up /tmp, removing files older than 6 months (a dry-run preview is sketched after the crontab below).
  2. remove all unused Docker images, networks, and volumes.
  3. re-build images so that new layers are cached, speeding up any later deployment.
  4. remove any instant backups (*.sqldump) that are older than a week.
  5. prune the uv cache (only if uv is installed).
0 1 * * 6 find /tmp/ -ctime +180 -exec /bin/rm -rf {} +
0 2 * * 6 docker system prune -a -f --volumes
0 3 * * * nice -n 5 docker-compose -f /root/parkour2/docker-compose.yml build
0 4 * * * make -C /root/parkour2 sweep
0 5 * * * uv cache prune  # only if you have it
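
Before enabling the first two jobs on a fresh VM, it may help to preview what they would remove; a minimal sketch, assuming the same paths and a stock Docker setup:

# preview which files the /tmp clean-up would delete (-print instead of rm)
find /tmp/ -ctime +180 -print
# show how much space Docker could reclaim, without pruning anything yet
docker system df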

These are specific to production:

Again, in order:

  1. (Recommended) To keep the django_session database table under control (a quick row-count check is sketched after the crontab below).
  2. (Optional) To remove historical records older than a given number of days (800 here).
  3. (Optional) To remove duplicated historical records (a record is created on every Model.save(), regardless of whether anything changed), so if you find a lot of them, prune them every hour.
0 5 * * 6 docker-compose exec -it parkour2-django python manage.py clearsessions
0 4 * * 6 docker-compose exec -it parkour2-django python manage.py clean_old_history --days 800 --auto
0 * * * * docker-compose exec -it parkour2-django python manage.py clean_duplicate_history --minutes 90 --auto
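
To check that the clearsessions job is keeping the table small, one quick option is to count the rows from the Django shell; a minimal sketch, assuming the same container name:

# count rows in django_session from inside the running container
docker-compose exec parkour2-django python manage.py shell -c "from django.contrib.sessions.models import Session; print(Session.objects.count())"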

Note

On production there is also the backup strategy, which has a periodicity of its own and is bundled in the rsnapshot container.


And this is what we have in one of our 'staging' VMs:

This re-deploys the app with a fresh database snapshot from production, so that staff can access a dev deployment for testing (a logging variant of the entry is sketched below).

*/30 * * * * make -C /root/parkour2 clean import-pgdb dev-migras
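
Since this runs every 30 minutes, it can be useful to keep a trace of each refresh; a minimal sketch of the same entry with logging added (the log path is just an example):

# same job, but appending stdout and stderr to a log file for troubleshooting
*/30 * * * * make -C /root/parkour2 clean import-pgdb dev-migras >> /var/log/parkour2-refresh.log 2>&1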

Note

This 'staging' VM also serves us bioinformaticians for testing our (short- and long-read) demultiplexing and (secondary) analysis pipelines, which integrate tightly with our LIMS.