AlloyCI Runner can use Docker to run builds on user-provided images. This is possible with the use of the Docker executor.

The Docker executor, when used with AlloyCI, connects to the Docker Engine and runs each build in a separate, isolated container, using the predefined image that is set up in `.alloy-ci.json` and in accordance with `config.toml`.
That way you can have a simple and reproducible build environment that can also run on your workstation. The added benefit is that you can test all the commands that we will explore later from your shell, rather than having to test them on a dedicated CI server.
The Docker executor divides the build into multiple steps:
- Prepare: Create and start the services.
- Pre-build: Clone, restore cache and download artifacts from previous stages. This is run on a special Docker Image.
- Build: User build. This is run on the user-provided docker image.
- Post-build: Create cache, upload artifacts to AlloyCI. This is run on a special Docker Image.
The special Docker image is based on Alpine Linux and contains all the tools required to run the prepare step of the build: the Git binary, and the Runner binary for supporting caching and artifacts. You can find the definition of this special image in the official Runner repository.
The `image` keyword is the name of the Docker image that is present in the local Docker Engine (list all images with `docker images`) or any image that can be found at Docker Hub. For more information about images and Docker Hub please read the Docker Fundamentals documentation.

In short, with `image` we refer to the Docker image which will be used to create a container in which your build will run.
If you don't specify the namespace, Docker implies `library`, which includes all official images. That's why you'll often see the `library` part omitted in `.alloy-ci.json` and `config.toml`. For example, you can define an image like `image: ruby:2.1`, which is a shortcut for `image: library/ruby:2.1`.
Then, for each Docker image there are tags, denoting the version of the image. These are defined with a colon (`:`) after the image name. For example, for Ruby you can see the supported tags at https://hub.docker.com/_/ruby/. If you don't specify a tag (like `image: ruby`), `latest` is implied.
The `services` keyword defines just another Docker image that is run during your build and is linked to the Docker image that the `image` keyword defines. This allows you to access the service image during build time.

The service image can run any application, but the most common use case is to run a database container, e.g., `mysql`. It's easier and faster to use an existing image and run it as an additional container than to install `mysql` every time the project is built.
To better understand how the container linking works, read Linking containers together.
To summarize, if you add `mysql` as a service to your application, this image will then be used to create a container that is linked to the build container. According to the workflow, this is the first step that is performed before running the actual builds.

The service container for MySQL will be accessible under the hostname `mysql`. So, in order to access your database service you have to connect to the host named `mysql` instead of a socket or `localhost`.
You can simply define an image that will be used for all jobs and a list of services that you want to use during build time.
```json
{
  "image": "ruby:2.2",
  "services": [
    "postgres:9.3"
  ],
  "before_script": [
    "bundle install"
  ],
  "test": {
    "script": [
      "bundle exec rake spec"
    ]
  }
}
```
It is also possible to define different images and services per job:
```json
{
  "before_script": [
    "bundle install"
  ],
  "test:2.1": {
    "image": "ruby:2.1",
    "services": [
      "postgres:9.3"
    ],
    "script": [
      "bundle exec rake spec"
    ]
  },
  "test:2.2": {
    "image": "ruby:2.2",
    "services": [
      "postgres:9.4"
    ],
    "script": [
      "bundle exec rake spec"
    ]
  }
}
```
Look for the `[runners.docker]` section:
```toml
[runners.docker]
  image = "ruby:2.1"
  services = ["mysql:latest", "postgres:latest"]
```
The image and services defined this way will be added to all builds run by that Runner, so even if you don't define an `image` inside `.alloy-ci.json`, the one defined in `config.toml` will be used.
You can also define images located on private registries that could also require authentication. All you have to do is be explicit in the image definition in `.alloy-ci.json`:

```json
"image": "my.registry.tld:5000/namespace/image:tag"
```

In the example above, AlloyCI Runner will look at `my.registry.tld:5000` for the image `namespace/image:tag`.
If the repository is private you need to authenticate your AlloyCI Runner in the registry. Read more on using a private Docker registry.
Let's say that you need a WordPress instance to test some API integration with your application. You can then use, for example, the tutum/wordpress image as a service in your `.alloy-ci.json`:

```json
"services": [
  "tutum/wordpress:latest"
]
```

When the build is run, `tutum/wordpress` will be started first and you will have access to it from your build container under the hostnames `tutum__wordpress` and `tutum-wordpress`.
The AlloyCI Runner creates two alias hostnames for the service that you can use alternatively. The aliases are taken from the image name following these rules:

- Everything after `:` is stripped
- For the first alias, the slash (`/`) is replaced with double underscores (`__`)
- For the second alias, the slash (`/`) is replaced with a single dash (`-`)
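The alias rules above can be sketched in plain shell. This is only an illustration of the naming rules for the simple `namespace/image:tag` case, not code the Runner actually runs:

```shell
image="tutum/wordpress:latest"

# Rule 1: strip everything after ':'
stripped="${image%%:*}"

# Rule 2: first alias replaces '/' with double underscores
alias1=$(printf '%s' "$stripped" | sed 's|/|__|g')

# Rule 3: second alias replaces '/' with a single dash
alias2=$(printf '%s' "$stripped" | tr '/' '-')

echo "$alias1"   # tutum__wordpress
echo "$alias2"   # tutum-wordpress
```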
Using a private service image will strip any port given and apply the rules as described above. A service `registry.alloy-wp.com:4999/tutum/wordpress` will result in the hostnames `registry.alloy-wp.com__tutum__wordpress` and `registry.alloy-wp.com-tutum-wordpress`.
Many services accept environment variables which allow you to easily change database names or set account names depending on the environment.
AlloyCI Runner 1.0 and up passes all JSON-defined variables to the created service containers.
For all possible configuration variables, check the documentation of each image, provided in its corresponding Docker Hub page.
Note: All variables will be passed to all service containers. The Runner is not designed to distinguish which variable should go where. Secure variables are only passed to the build container.
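For example, the official `mysql` image reads the `MYSQL_ROOT_PASSWORD` environment variable at startup, so a `.alloy-ci.json` sketch like the following (the password value is a placeholder) would configure the service container:

```json
{
  "services": [
    "mysql:latest"
  ],
  "variables": {
    "MYSQL_ROOT_PASSWORD": "mysecretpassword"
  }
}
```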
You can mount a path in RAM using tmpfs. This can speed up the time required to test if there is a lot of I/O related work, such as with databases.
If you use the `tmpfs` and `services_tmpfs` options in the Runner configuration, you can specify multiple paths, each with its own options. See the Docker reference for details.
This is an example `config.toml` to mount the data directory for the official MySQL container in RAM.
```toml
[runners.docker]
  # For the main container
  [runners.docker.tmpfs]
    "/var/lib/mysql" = "rw,noexec"
  # For services
  [runners.docker.services_tmpfs]
    "/var/lib/mysql" = "rw,noexec"
```
AlloyCI Runner mounts a `/builds` directory to all shared services. See an issue: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/1520
See the specific documentation for using PostgreSQL as a service.
See the specific documentation for using MySQL as a service.
After the service is started, AlloyCI Runner waits some time for the service to be responsive. Currently, the Docker executor tries to open a TCP connection to the first exposed service in the service container.
The Docker executor by default stores all builds in `/builds/<namespace>/<project-name>` and all caches in `/cache` (inside the container).
You can overwrite the `/builds` and `/cache` directories by defining the `builds_dir` and `cache_dir` options under the `[[runners]]` section in `config.toml`. This will modify where the data are stored inside the container.

If you modify the `/cache` storage path, you also need to make sure to mark this directory as persistent by defining it in `volumes = ["/my/cache/"]` under the `[runners.docker]` section in `config.toml`.
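A sketch of such a `config.toml` override might look like this (the paths are examples only):

```toml
[[runners]]
  executor = "docker"
  builds_dir = "/my/builds"
  cache_dir = "/my/cache"
  [runners.docker]
    # Mark the custom cache path as persistent between builds
    volumes = ["/my/cache/"]
```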
Read the next section on persistent storage for more information.
The Docker executor can provide persistent storage when running the containers. All directories defined under `volumes =` will be persistent between builds.

The `volumes` directive supports two types of storage:

- `<path>` - the dynamic storage. The `<path>` is persistent between subsequent runs of the same concurrent job for that project. The data is attached to a custom cache container: `runner-<short-token>-project-<id>-concurrent-<job-id>-cache-<unique-id>`.
- `<host-path>:<path>[:<mode>]` - the host-bound storage. The `<path>` is bound to `<host-path>` on the host system. The optional `<mode>` can specify that this storage is read-only or read-write (the default).
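Both storage types can be combined in a single `volumes` list, for example (the paths here are illustrative):

```toml
[runners.docker]
  volumes = [
    # Dynamic storage: persisted in a cache container between runs
    "/cache",
    # Host-bound storage: bound to a host path, mounted read-only
    "/etc/ssl/certs:/etc/ssl/certs:ro"
  ]
```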
If you make `/builds` host-bound storage, your builds will be stored in `/builds/<short-token>/<concurrent-id>/<namespace>/<project-name>`, where:

- `<short-token>` is a shortened version of the Runner's token (first 8 letters)
- `<concurrent-id>` is a unique number, identifying the local job ID on the particular Runner in the context of the project
The Docker executor supports a number of options that allow fine-tuning of the build container. One of these options is the `privileged` mode.

The configured `privileged` flag is passed to the build container and all services, making it easy to use the docker-in-docker approach.
First, configure your Runner (`config.toml`) to run in `privileged` mode:
```toml
[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true
```
Then, make your build script (`.alloy-ci.json`) use the Docker-in-Docker container:
```json
{
  "image": "docker:git",
  "services": [
    "docker:dind"
  ],
  "build": {
    "script": [
      "docker build -t my-image .",
      "docker push my-image"
    ]
  }
}
```
The Docker executor doesn't overwrite the `ENTRYPOINT` of a Docker image. That means that if your image defines an `ENTRYPOINT` and doesn't allow running scripts with `CMD`, the image will not work with the Docker executor.
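For illustration, an image built from a Dockerfile like the following (the tool path is hypothetical) would not work with the Docker executor:

```dockerfile
FROM alpine
# A fixed, non-shell ENTRYPOINT: whatever the Runner passes via CMD is
# treated as arguments to my-tool instead of being executed as a script.
ENTRYPOINT ["/usr/local/bin/my-tool"]
```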
With the use of `ENTRYPOINT` it is possible to create a special Docker image that would run the build script in a custom environment, or in secure mode.

You may think of creating a Docker image that uses an `ENTRYPOINT` that doesn't execute the build script, but instead executes a predefined set of commands, for example to build the Docker image from your directory. In that case, you can run the build container in privileged mode, and keep the build environment of the Runner secure.
Consider the following example:

- Create a new `Dockerfile`:

  ```dockerfile
  FROM docker:dind
  ADD / /entrypoint.sh
  ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
  ```

- Create a bash script (`entrypoint.sh`) that will be used as the `ENTRYPOINT`:

  ```shell
  #!/bin/sh

  dind docker daemon --host=unix:///var/run/docker.sock \
      --host=tcp://0.0.0.0:2375 \
      --storage-driver=vf &

  docker build -t "$BUILD_IMAGE" .
  docker push "$BUILD_IMAGE"
  ```

- Push the image to the Docker registry.

- Run the Docker executor in `privileged` mode. In `config.toml` define:

  ```toml
  [[runners]]
    executor = "docker"
    [runners.docker]
      privileged = true
  ```

- In your project use the following `.alloy-ci.json`:

  ```json
  {
    "variables": {
      "BUILD_IMAGE": "my.image"
    },
    "build": {
      "image": "my/docker-build:image",
      "script": [
        "Dummy Script"
      ]
    }
  }
  ```
This is just one of the examples. With this approach the possibilities are limitless.
When using the `docker` or `docker+machine` executors, you can set the `pull_policy` parameter which defines how the Runner will work when pulling Docker images (for both the `image` and `services` keywords).
Note: If you don't set any value for the `pull_policy` parameter, then the Runner will use the `always` pull policy as the default value.
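The parameter is set in `config.toml` under the `[runners.docker]` section, for example:

```toml
[runners.docker]
  pull_policy = "if-not-present"
```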
Now let's see how these policies work.
The `never` pull policy disables image pulling completely. If you set the `pull_policy` parameter of a Runner to `never`, then users will be able to use only the images that have been manually pulled on the Docker host the Runner runs on.
If an image cannot be found locally, then the Runner will fail the build with an error similar to:
```
Pulling docker image local_image:latest ...
ERROR: Build failed: Error: image local_image:latest not found
```
When to use this pull policy?
This pull policy should be used if you want or need to have full control over which images are used by the Runner's users. It is a good choice for private Runners that are dedicated to a project where only specific images can be used (ones that are not publicly available on any registry).
When not to use this pull policy?
This pull policy will not work properly with most auto-scaled Docker executor use cases. Because of how auto-scaling works, the `never` pull policy may be usable only with a pre-defined cloud instance image for the chosen cloud provider. The image needs to contain an installed Docker Engine and a local copy of the used images.
When the `if-not-present` pull policy is used, the Runner will first check if the image is present locally. If it is, then the local version of the image will be used. Otherwise, the Runner will try to pull the image.
When to use this pull policy?
This pull policy is a good choice if you want to use images pulled from remote registries but want to reduce time spent on analyzing image layer differences, when using heavy and rarely updated images. In that case, you will occasionally need to manually remove the image from the local Docker Engine store to force an update of the image.
It is also a good choice if you need to use images that are built and available only locally, but on the other hand also need to allow pulling images from remote registries.
When not to use this pull policy?
This pull policy should not be used if your builds use images that are updated frequently and need to be used in their most recent versions. In such a situation, the network load reduction created by this policy may be outweighed by the need to very frequently delete local copies of images.
This pull policy should also not be used if your Runner can be used by different users who should not have access to each other's private images. Especially, do not use this pull policy for shared Runners.
To understand why the `if-not-present` pull policy creates security issues when used with private images, read the security considerations documentation.
The `always` pull policy will ensure that the image is always pulled. When `always` is used, the Runner will try to pull the image even if a local copy is available. If the image is not found, then the build will fail with an error similar to:
```
Pulling docker image registry.tld/my/image:latest ...
ERROR: Build failed: Error: image registry.tld/my/image:latest not found
```
When to use this pull policy?
This pull policy should be used if your Runner is publicly available and configured as a shared Runner in your AlloyCI instance. It is the only pull policy that can be considered secure when the Runner will be used with private images.
This is also a good choice if you want to force users to always use the newest images.
Also, this will be the best solution for an auto-scaled configuration of the Runner.
When not to use this pull policy?
This pull policy will definitely not work if you need to use locally stored images. In this case, the Runner will skip the local copy of the image and try to pull it from the remote registry. If the image was built locally and doesn't exist in any public registry (especially not in the default Docker registry), the build will fail with:
```
Pulling docker image local_image:latest ...
ERROR: Build failed: Error: image local_image:latest not found
```
Note: Starting with AlloyCI Runner 1.0, both the `docker-ssh` and `docker-ssh+machine` executors are deprecated and will be removed in one of the upcoming releases.
We provided support for a special type of Docker executor, namely Docker-SSH (and its autoscaled version, Docker-SSH+Machine). Docker-SSH uses the same logic as the Docker executor, but instead of executing the script directly, it uses an SSH client to connect to the build container.

Docker-SSH then connects to the SSH server that is running inside the container using its internal IP.

This executor is no longer maintained and will be removed in the near future.