1.1) Static files are collected during image build (cookiecutter-rt-django/{{cookiecutter.repostory_name}}/app/envs/prod/Dockerfile, line 34 in 72db713):

RUN ENV=prod ENV_FILL_MISSING_VALUES=1 SECRET_KEY=dummy python3 manage.py collectstatic --no-input --clear

1.2) Since env vars are not automatically passed during image build, the target folder for collectstatic is taken from settings.py - which is root('static') == /root/src/static:
https://github.com/reef-technologies/cookiecutter-rt-django/blob/master/%7B%7Bcookiecutter.repostory_name%7D%7D/app/src/%7B%7Bcookiecutter.django_project_name%7D%7D/settings.py#L184
1.3) docker-compose mounts a separate volume for static files at the same location (/root/src/static):
https://github.com/reef-technologies/cookiecutter-rt-django/blob/master/%7B%7Bcookiecutter.repostory_name%7D%7D/envs/prod/docker-compose.yml#L51
1.4) Thus whatever was collected during the build phase is overridden by the volume mount at startup.
1.5) Even if you manually log into the running container and run collectstatic, it won't work for ManifestStaticFilesStorage, which requires everything to be collected before app startup - ongoing changes won't be picked up until restart.
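The path resolution in 1.2 can be sketched like this (helper name and BASE_DIR are assumptions about the template, not its exact code); without deploy-time env vars, the build-time collectstatic falls back to this in-image path:

```python
import os

# Sketch of the root('static') pattern: a helper that joins path segments
# onto the project base directory inside the image.
BASE_DIR = "/root/src"

def root(*parts: str) -> str:
    """Join path segments onto the project base directory."""
    return os.path.join(BASE_DIR, *parts)

# Resolves to "/root/src/static" - the exact path the docker-compose volume
# is later mounted over, shadowing the files baked into the image.
STATIC_ROOT = root("static")
```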
Proposal:
2.1) If we decide to run collectstatic during docker image build, then remove the volume mount in docker-compose. This may not work for projects that collect static files to S3 or similar - it's weird to do that at image build rather than at deployment.
2.2) If we decide to run collectstatic during deployment, then we should remove it from the image build.
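Option 2.2 could be sketched as a small startup/entrypoint step (the command and helper are assumptions, not existing template code): collectstatic runs at deploy time, before the app server boots, so the manifest exists when the storage initializes.

```python
import subprocess

# The collectstatic invocation an entrypoint would run at container startup.
COLLECTSTATIC = ["python3", "manage.py", "collectstatic", "--no-input", "--clear"]

def run_collectstatic(runner=subprocess.run) -> None:
    # check=True makes a broken collectstatic fail the deployment early
    # instead of booting an app with a stale or missing manifest.
    runner(COLLECTSTATIC, check=True)
    # ...then exec the real app server (e.g. gunicorn), omitted here.
```

The injectable `runner` is only there so the sketch can be exercised without a real Django project on hand.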
Side note:
3.1) I think ManifestStaticfilesStorage is a nice thing for effective and reliable versioning of static files and should be included in cookiecutter template.
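The idea behind that versioning can be illustrated with a toy hasher (this is an illustration, not Django's code, though the 12-character digest matches its default): each file's name gains a hash of its contents, so a changed file gets a new URL and long-lived browser caches can never serve a stale version.

```python
import hashlib

def hashed_name(name: str, content: bytes) -> str:
    """Insert a content digest before the extension, e.g. app.css -> app.<digest>.css."""
    digest = hashlib.md5(content).hexdigest()[:12]
    base, dot, ext = name.rpartition(".")
    return f"{base}.{digest}{dot}{ext}" if dot else f"{name}.{digest}"
```

In Django, collectstatic writes the original-name to hashed-name mapping into staticfiles.json, and ManifestStaticFilesStorage loads that manifest once at startup - which is why files collected into a running container (point 1.5) are never picked up.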
3.1) - ManifestStaticFilesStorage makes sense when you expect a heavy load of repeat users connecting to your Django app, and I'm not sure we're aiming for that - in such cases one usually goes with an SPA, or at least serves the static files from a CDN, but I'm not sure
So how do we reconcile this between poormans and AWS? Currently, in AWS deployments, docker images contain the statics from the build step, and nginx has access to a volume with static files. What would an alternative be? Running collectstatic in cloud-init? I'm kinda reluctant to do that because it can easily fail, and it would be better to have it fail before refreshing EC2 instances. But maybe the build step could dump the statics to /dev/null?