feat(docker): add basic docker support #1433
base: master
Conversation
Thanks for the contribution.
The main obstacle is still that I do not understand how users who want Docker expect to use the image, so I have no way of evaluating whether the PR fits that. Do people just run `docker` commands manually? Or use something like docker-compose? How do they handle updates? What are the security considerations? How do people configure the software? How do they manage data backups?
.dockerignore (outdated)
I do not really like these huge blacklists with patterns that are not relevant. Maybe just copy what we need. Or, better, copy the result of `npm run dist`, which also prunes the `vendor/` directory.
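For illustration, the whitelist approach could look roughly like this in `.dockerignore` (a sketch; the re-included paths are made up and would need to match selfoss's actual layout):

```
# Hypothetical whitelist-style .dockerignore: exclude everything by default,
# then re-include only what the image build actually needs.
*
!src/
!public/
!composer.json
!composer.lock
!package.json
```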
I've reduced the blacklist. Usually we want to "copy everything relevant" (`COPY . .`) without having to worry about new or changed directories, although the opposite approach is also valid.
Right. Which is why I also suggested using the `npm run dist` command. It builds a production package, including filtering the PHP directories. It would allow us to clean up the Dockerfile even further, and we would not need the `.dockerignore`.
Of course, it would mean building the image from scratch every time, but perhaps that is fine for a production/non-dev image.
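A rough sketch of that idea as a multi-stage build (the base images, stage names and the `dist/` output path are assumptions, not what this PR actually does): the packaging step runs in a throwaway stage and only its output is copied into the final image.

```dockerfile
# Build stage: exists only to run the packaging script.
FROM node:18 AS build
WORKDIR /src
COPY . .
# Assumption: this leaves a pruned, production-ready tree in dist/.
RUN npm install && npm run dist

# Runtime stage: only the packaged output becomes part of the image layers;
# node_modules and other build inputs from the stage above are discarded.
FROM php:8-apache
COPY --from=build /src/dist/ /var/www/html/
```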
Actually, when you `COPY` something into a container image, it cannot be removed from that layer afterwards, so if we copy local data (for example, from local builds), it counts towards the final image size even if scripts inside the container build filter it out. It is considered good practice in the Docker community to filter out everything that could exist in a local development environment, to avoid such side effects.
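A minimal illustration of that point (the path is made up): a later `rm` only hides the files, it does not shrink the image, because the bytes are already baked into the earlier `COPY` layer.

```dockerfile
FROM alpine:3.19
# This layer permanently contains the (hypothetical) local development data...
COPY local-dev-data/ /tmp/local-dev-data/
# ...and this later layer only marks it as deleted; the image stays large.
RUN rm -rf /tmp/local-dev-data/
```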
IIUC, each `RUN` creates a new layer, which is suboptimal: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#minimize-the-number-of-layers
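For reference, the usual way to follow that guideline is to chain related commands into a single `RUN` (the packages here are illustrative), so that temporary files never end up in any layer:

```dockerfile
FROM debian:bookworm-slim
# One layer instead of three: install and clean up in the same instruction,
# so the apt cache removed at the end is never stored in the image.
RUN apt-get update \
    && apt-get install -y --no-install-recommends php-cli php-xml \
    && rm -rf /var/lib/apt/lists/*
```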
Indeed, but having more separate, atomic layers with fine-grained `COPY` is far better than a few non-atomic layers. Each layer only adds a few bytes of overhead (technically speaking, a layer is a tar file, and all layers of an image are mounted on top of the previous ones).
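A sketch of the caching benefit being described (file names are generic, not selfoss-specific): copying the rarely-changing dependency manifests in their own layer lets Docker reuse the expensive install layer across most rebuilds.

```dockerfile
FROM node:18
WORKDIR /app
# These layers are rebuilt only when the dependency manifests change.
COPY package.json package-lock.json ./
RUN npm ci
# Source changes invalidate only the layers from here on.
COPY . .
```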
A few months later, let me try to answer.
I understand this issue. Maybe I can just say that I'm a Kubernetes contributor (see my profile) and my production clusters manage ~15000 running containers right now, if that helps. Of course I don't want to pretend to be someone here! ;)
It really depends: some use docker-compose, some use raw docker/containerd commands, some use Kubernetes either directly or through Helm, and some use a home NAS such as Synology to manage containers for them, but all of them consume a container image definition. I'm going to update the PR a bit to bring it up to date and take your suggestions into account!
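To make the docker-compose case concrete, a typical setup might look roughly like this (the image name, port and data path are assumptions, not something defined by this PR):

```yaml
# docker-compose.yml sketch for running a hypothetical selfoss image.
services:
  selfoss:
    image: selfoss/selfoss:latest     # assumed image name
    restart: unless-stopped
    ports:
      - "8080:8080"                   # assumed HTTP port
    volumes:
      - ./data:/var/www/selfoss/data  # assumed location of config, database, favicons
```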
(force-pushed from c78629b to 2910891)
Hello @jtojnar, would you have any time to review this PR?
I would really love to use selfoss from a container so that I can automate / simplify my self-hosting, and at the same time I would hate to have a long-running fork of selfoss. What is your opinion on having an official image, @jtojnar?
Sorry about not replying sooner; I have opened the tab multiple times but then got distracted each time. I think this is valuable, but at the same time I still worry about my ability to keep it working, as I do not use Docker, especially since I have had less time for my open-source projects since I started a full-time job.

After giving it some more thought, I think I would be fine with merging this as long as the level of support is clearly communicated. For example, we could have it called … I could also grant you and @radek-sprta commit access to this repo so you could maintain this without depending on me for merging. Alternately, we could create a separate repository.
Either way is fine for me. Since there seem to be several people interested in maintaining a Docker image, with several attempts over the years, I think we should find a way towards the most maintainable solution.

Pros of having a separate repo:

- more secure

Cons of having a separate repo:
I don't think too many people with write access (btw, not push, only merge!) are needed.
package.json (outdated)
"install-dependencies:client": "npm install --production=false --prefix client/", | ||
"install-dependencies-ci:client": "npm ci --production=false --prefix client/", |
I think it would make sense to just use `npm ci` for `install-dependencies:client` – reading the docs, I do not see why one would want an unclean installation of dependencies.
Indeed, fixed!
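Presumably the resulting script looks something like this after the fix (a sketch based on the reviewer's suggestion, not the actual diff):

```json
"install-dependencies:client": "npm ci --production=false --prefix client/",
```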
> Of course, it would mean building the image from scratch every time, but perhaps that is fine for a production/non-dev image.
Not necessarily, see e.g. https://github.com/wallabag/docker. That one currently only builds tagged versions, but nightlies could be handled by GitHub Actions in the main repo triggering builds in the docker one.
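One way to wire that up (the repository name and the secret are hypothetical placeholders) would be a small workflow in the main repo that sends a `repository_dispatch` event to the docker repo on every push to master:

```yaml
# Sketch of .github/workflows/trigger-docker-nightly.yml in the main repo;
# OWNER/selfoss-docker and DOCKER_REPO_TOKEN are placeholders.
name: Trigger nightly Docker build
on:
  push:
    branches: [master]
jobs:
  dispatch:
    runs-on: ubuntu-latest
    steps:
      - name: Notify the docker repository
        run: |
          curl -sS -X POST \
            -H "Accept: application/vnd.github+json" \
            -H "Authorization: Bearer ${{ secrets.DOCKER_REPO_TOKEN }}" \
            https://api.github.com/repos/OWNER/selfoss-docker/dispatches \
            -d '{"event_type": "nightly-build"}'
```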
Yeah, I think that would be a good thing – at least until the selfoss development team contains enough people who could maintain it, so that the risk of it becoming unmaintained is reduced.
Additionally, separating would make synchronous changes more work: a contributor would need to make PRs in two different places. And the repos could get out of sync, breaking compatibility. That could be minimized by using a single entry point like …
Right, we could require going through PRs and then add a review requirement in branch protections.
Here is a first try of #1350, as a simpler alternative / first step compared to #1170.

Here are some notes:

- If you think this could easily be done, feel free to point me in the right direction, but I was not able to properly separate build from serve (due to always needing to build extensions...). Edit: https://hub.docker.com/_/composer/ recommends doing what is done in this PR (see the sketch below), so everything seems ok, and I now delete development packages.

What do you think?
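For the composer part, a common pattern with the official composer image looks roughly like this (a sketch assuming a plain PHP/Apache base; extension installation is omitted and this is not necessarily exactly what the PR does):

```dockerfile
FROM php:8-apache
# Bring the composer binary into the PHP image, then install runtime
# dependencies only (no dev packages).
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
WORKDIR /var/www/html
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader
COPY . .
```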
Potential next steps if this gets merged: