Add a Dockerfile to build an image #140
Conversation
Looks good. Before I merge this
Another possibility for configuring the DP2 engine is to use `sed`. So, for instance:

```dockerfile
# Bind engine to 0.0.0.0 instead of localhost
RUN sed -i 's/org.daisy.pipeline.ws.host=.*/org.daisy.pipeline.ws.host=0.0.0.0/' /opt/daisy-pipeline2/etc/system.properties

# Enable Calabash debugging
RUN sed -i 's/\(com.xmlcalabash.*\)INFO/\1DEBUG/' /opt/daisy-pipeline2/etc/config-logback.xml

# Enable 4 concurrent jobs
RUN sed -i 's/.*\(org.daisy.pipeline.procs\)=[0-9]*/\1=4/' /opt/daisy-pipeline2/etc/system.properties
```

For remote mode, the
I'm adding a
i.e. neither the key nor the secret is specified. I'm not sure how this should be handled.
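A minimal sketch of how the launcher script could pick these variables up. The property names `org.daisy.pipeline.ws.authentication.key`/`.secret` and the fallback to disabling authentication are assumptions for illustration, not necessarily what the PR actually does:

```shell
# Hypothetical handling of the auth environment variables in the launcher.
JAVA_OPTS=""
if [ -n "$PIPELINE2_AUTH_CLIENTKEY" ] && [ -n "$PIPELINE2_AUTH_CLIENTSECRET" ]; then
    JAVA_OPTS="$JAVA_OPTS -Dorg.daisy.pipeline.ws.authentication.key=$PIPELINE2_AUTH_CLIENTKEY"
    JAVA_OPTS="$JAVA_OPTS -Dorg.daisy.pipeline.ws.authentication.secret=$PIPELINE2_AUTH_CLIENTSECRET"
else
    # Neither key nor secret given: run without authentication (assumed fallback).
    JAVA_OPTS="$JAVA_OPTS -Dorg.daisy.pipeline.ws.authentication=false"
fi
echo "$JAVA_OPTS"
```

One open question, as noted above, is whether running without authentication is the right default when only one of the two variables is set.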
@bertfrees what kind of test did you have in mind? How can you test a Docker image, other than running it manually?
@josteinaj I'm not so sure if I like the idea of
Right, I forgot about that. If you specify

Regarding the test, I was thinking about just a shell script that starts the container and connects to it with the CLI, and possibly runs a sample file through it. Basically I was gonna copy this: https://github.com/bertfrees/benetech-docker-pipeline2/blob/test/Makefile

P.S.: read http://daisy.github.io/pipeline/Get-Help/User-Guide/Pipeline-as-Service
@egli instead of
I believe the configuration issue is now solved via environment variables. Re the test, I'll look at the example you provided on Monday.
Good job. The only thing I wasn't quite happy with at first were the new environment variables. But after talking to you this morning I see a valid use case after all.
The automatic mapping from environment variables to system properties could be (took some inspiration from https://github.com/weavejester/environ):

We should do this only for the variables that start with (*), and more specifically only the ones that are actually meant for configuration; the other ones should be removed from system.properties. See also the section labeled "do not edit" in http://daisy.github.io/pipeline/wiki/Configuration-Files.
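The mapping described above could be sketched roughly like this. The `PIPELINE2_` prefix and the `org.daisy.pipeline` target namespace are assumptions based on the variable and property names mentioned elsewhere in this thread:

```shell
# Sketch: derive a Java system property name from an environment variable
# name, e.g. PIPELINE2_WS_HOST -> org.daisy.pipeline.ws.host.
to_property() {
    # Strip the PIPELINE2_ prefix, lowercase, and turn underscores into dots.
    echo "org.daisy.pipeline.$(echo "${1#PIPELINE2_}" | tr 'A-Z_' 'a-z.')"
}

to_property PIPELINE2_WS_HOST   # -> org.daisy.pipeline.ws.host

# Collect -D options for every such variable that is set (list is illustrative).
JAVA_OPTS=""
for var in PIPELINE2_WS_HOST PIPELINE2_WS_PORT; do
    eval "value=\${$var:-}"
    if [ -n "$value" ]; then
        JAVA_OPTS="$JAVA_OPTS -D$(to_property "$var")=$value"
    fi
done
echo "$JAVA_OPTS"
```

As noted above, this should only be applied to a whitelist of configuration properties, not blindly to everything in the environment.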
The Dockerfile uses a multi-stage build to first build the artifacts using Maven. Then it copies the artifacts into a final image, which exposes the port and starts the pipeline.
If PIPELINE2_AUTH_CLIENTKEY and/or PIPELINE2_AUTH_CLIENTSECRET are defined in the environment when starting the pipeline, use those values. This simplifies dockerization of the pipeline.
that is used everywhere else, for example in the default config of the pipeline cli
so that it can be set at run time, for example when starting a Docker image, and remove it from the system.properties (otherwise setting it as an option when starting the JVM seems to have no effect)
The test starts two containers based on the same image. One for the pipeline itself and a second one for the cli. It then starts a script from the cli.
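The two-container test described above might look roughly like this. The image name, network name, and `dp2` invocation are hypothetical, not the actual script from the PR:

```shell
# Sketch of the container-based test: one container runs the pipeline,
# a second container based on the same image runs the CLI against it.
run_pipeline_test() {
    docker network create piptest
    docker run --detach --name pipeline2 --network piptest daisy/pipeline2
    # Give the web service a moment to come up, then query it with the CLI
    # from a second container on the same network.
    sleep 10
    docker run --rm --network piptest daisy/pipeline2 \
        dp2 --host http://pipeline2 scripts
    status=$?
    # Clean up.
    docker rm -f pipeline2
    docker network rm piptest
    return $status
}
# Invoke run_pipeline_test manually when Docker is available.
```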
I've added some more stuff here: daisy/pipeline@f00f389^...docker
There is something fishy with setting HOST to
For curl this does not seem to matter apparently (e.g.

Also, this issue makes me wonder whether the web API should even expose the full paths. Why not just use the relative paths? Another observation is that the CLI does not even use the

@rdeltour @josteinaj Your thoughts?
@egli I have another request. Currently, if you start the pipeline2 Docker service, you have to wait a few seconds before the web service is up. There is apparently a Docker feature called "health status" that can help you with that. I haven't tried it myself because my version of Docker is not new enough, but it would look something like this in docker-compose.yml:

```yaml
healthcheck:
  # Waiting for web service to be up...
  test: ["CMD", "curl", "http://localhost:8181/ws/alive"]
  interval: 10s
  timeout: 10s
  retries: 5
```

And in the depending service you do:

```yaml
depends_on:
  pipeline2:
    condition: service_healthy
```
@bertfrees I just added a health check to the pipeline2 docker image. As for the
Huh? Then what is the point of the health check? :/ Did you read that argument somewhere, or is it yours?
I read something along those lines deep down in a StackOverflow comment (to a solution that was still detailing the
I think Docker isn't communicating this very well. I could only find some explanation here and in this discussion. What it comes down to, I think, is that they are moving away from docker-compose and towards a new approach in which apparently some concepts like

I'm not really satisfied by the explanation they give for this choice. I'm not convinced that an application that depends on a service always knows best when the service is ready and how long to wait for it. (But to be fair, I have to say that I haven't read that much about the new approach yet.)

Anyway, a health check does not make sense if you can't use it, and as long as you are using docker-compose I think it makes perfect sense to implement the "waiting" with docker-compose. I suggest we just keep using the v2 format so that I can use the condition form of `depends_on`.
I read that too. I use docker-compose to quickly bring up a test instance, and for that use case it is quite good. ATM I do not intend to use swarm services. But then again, when you just quickly bring up a test instance you don't really care that much about the health check. A few failed attempts of the webui to connect to the pipeline are not the end of the world.
Makes sense.
I want to use it for other things too, like tests. With tests it's quite convenient when you can just run them and be sure the Pipeline server is running. I think it's annoying if you need to implement the waiting logic in every little test application that you write, while it can be centralized in the server, which knows best how long to wait etc. That's why I liked the idea of the health check. Until I'm convinced that the other approach is better, I want to try this.
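For comparison, the client-side waiting logic discussed here could be sketched as a small polling function. The `/ws/alive` endpoint is the one from the health check earlier in the thread; the default URL, retry count, and delay are assumptions:

```shell
# Poll the Pipeline web service until it answers, or give up.
# Usage: wait_for_pipeline [url] [tries] [delay_seconds]
wait_for_pipeline() {
    url=${1:-http://localhost:8181/ws/alive}
    tries=${2:-30}
    delay=${3:-2}
    i=0
    while [ "$i" -lt "$tries" ]; do
        if curl -sf "$url" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    echo "Pipeline web service did not come up in time" >&2
    return 1
}
```

This is exactly the per-client boilerplate the health-check approach is meant to avoid: every test application would have to carry a copy of it.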
Also add a Makefile
and add it to the Makefile
…m properties: PIPELINE2_HOME, PIPELINE2_BASE, PIPELINE2_DATA, PIPELINE2_WS_LOCALFS, PIPELINE2_WS_AUTHENTICATION (*nix only).
because you can now directly specify the Pipeline properties through environment variables. Note that this will only work for system properties that start with "org.daisy.pipeline" though.
Instead use the PIPELINE2_WS_LOCALFS and PIPELINE2_WS_AUTHENTICATION environment variables directly.
Force-pushed from 8c41fbe to c946f36.
I have pushed my version of the branch. Depends on daisy/pipeline-framework#126.
Should be merged into develop, not master!
LGTM (I'm not experienced in Docker, but having read the discussion the proposed changes make sense to me).
Thanks @egli (and @bertfrees and @josteinaj), looks like a very useful thing to have!
Here's a PR that adds a Dockerfile that
Build and run the image as follows (you need the newest Docker for the multi-stage build):
You should now be able to connect to the pipeline using a client. However, at the moment this probably doesn't work, as the pipeline by default is set up to work locally. This needs to be changed either in the config file in the original source or by changing the config file in the Docker image.
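The actual build-and-run commands did not survive in the description above; a plausible reconstruction, in which the image tag and the exposed port 8181 are guesses based on the rest of the discussion:

```shell
# Assumed commands for building and running the image.
# Call build_and_run when Docker (new enough for multi-stage builds) is available.
build_and_run() {
    docker build -t daisy/pipeline2 .
    docker run --detach --publish 8181:8181 daisy/pipeline2
}
```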