This project creates self-hosted (ephemeral) GitHub runners based on libkrun. libkrun creates microVMs, so the project enables fully isolated runners inside your infrastructure. MicroVMs boot fast, providing an experience close to running containers. libkrun creates and starts VMs based on the multi-platform OCI images built for this project -- ubuntu (the default) or fedora. The project will create microVMs using either krunvm, or krun together with podman.
Provided you are at the root directory of this project, the following would create two runner loops (the `-n` option) that are bound to this repository (the `efrecon/gh-runner-krunvm` principal). Runners can also be registered at the organization or enterprise scope using the `-s` option. In the example below, the value of the `-T` option should be an access token. In each loop, as soon as one job has been picked up and executed, a new pristine runner will be created and registered.

```sh
./orchestrator.sh -v -n 2 -- -T ghp_XXXX -p efrecon/gh-runner-krunvm
```
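For organization- or enterprise-wide runners, the `-s` option selects the scope of the principal. The exact values accepted by `-s` are listed in the `-h` help; the invocation below is only a sketch that assumes `-s org` selects the organization scope and that `myorg` is the organization to bind the runners to.

```sh
# Hypothetical: two runner loops registered at the organization scope. The
# value given to -s (org) and the organization name are assumptions; check
# the -h help of the scripts for the accepted values.
./orchestrator.sh -v -n 2 -- -T ghp_XXXX -p myorg -s org
```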
The project tries to have good default options and behaviour. For example, neither the value of the token, nor the value of the runner registration token will be visible to the workflows using your runners. The default is to create runners that are far less capable than the GitHub-hosted ones, i.e. 1G of memory and 2 vCPUs. Unless otherwise specified, runners have random names and carry labels with the name of the base repository, e.g. `ubuntu` and `krunvm`. The GitHub runner implementation will automatically add other labels in addition to those.

In the example above, the double-dash `--` separates options given to the user-facing orchestrator from options to the loop implementation runner script. All options appearing after the `--` will be blindly passed to the runner loop and script. All scripts within the project accept short options only and can be controlled either through options or environment variables -- but CLI options have precedence. Running scripts with the `-h` option will provide help and a list of those variables. Variables starting with `ORCHESTRATOR_` will affect the behaviour of the orchestrator, while variables starting with `RUNNER_` will affect the behaviour of each runner (loop).
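As a quick sketch of both configuration paths, the snippet below first requests the built-in help, then overrides a setting through the environment instead of a CLI option. The variable name `RUNNER_CPUS` is only an assumption for illustration; the authoritative names come from the `-h` output.

```sh
# List options and the matching environment variables.
./orchestrator.sh -h

# Hypothetical: configure runners through the environment rather than options.
# RUNNER_CPUS is an assumed name; consult the -h output for the real ones.
RUNNER_CPUS=2 ./orchestrator.sh -v -n 2 -- -T ghp_XXXX -p efrecon/gh-runner-krunvm
```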
- Fully isolated GitHub runners on your infrastructure, through microVMs.
- Container-like experience: microVMs boot quickly.
- No special network configuration.
- Ephemeral runners, i.e. each run starts from a pristine "empty" state.
- Secrets isolation to avoid leaking them to workflows.
- Runs on amd64 and arm64 platforms, probably also able to run on macOS.
- Standard "medium-sized" base OS installations (node, python, dev tools, etc.)
- Run on top of any OCI image -- base "OS" separated from runner installation.
- Support for registration at the repository, organisation and enterprise level.
- Support for github.com, but also local installations of the forge.
- Ability to mount local directories to cache local runner-based requirements or critical software tools.
- Good compatibility with the regular GitHub runners: same user ID, member of the `docker` group, password-less `sudo`, etc.
- Supports both `krunvm` and the `krun` runtime under `podman`.
- In theory, the main ubuntu and fedora images should also be usable in more traditional container-based solutions -- perhaps sysbox? Reports and/or changes are welcome.
- Relaying of the container daemon logs, for improved debugging of complex workflows.
This project is coded in pure POSIX shell and has only been tested on Linux. The images are automatically built for both amd64 and arm64. However, krunvm also runs on macOS, and no "esoteric" options are used with the standard UNIX binary utilities. PRs are welcome to make the project work on macOS, if it does not already.

Apart from the standard UNIX binary utilities, you will need the following installed on the host. Installation is easiest on Fedora (see the original issue for installation on older versions); an installation sketch for Fedora follows the list.
- `curl`
- `jq`
- A compatible runtime, i.e. either:
  - `krun` and `podman`, or
  - `krunvm`, its requirements and `buildah`.

Note: when opting for `krunvm`, you do not need `podman`.
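On Fedora, the dependencies can typically be installed with `dnf`. The package names below, in particular `crun-krun` for the `krun` runtime, are assumptions to verify against your Fedora release.

```sh
# Sketch for a Fedora host. Package names, notably crun-krun and krunvm, are
# assumptions; verify them against your release before installing.
sudo dnf install -y curl jq podman buildah crun-krun krunvm
```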
The runner script requires a token to register the runners at the principal. This project has been tested with classic PATs, but should also work with repo-scoped tokens. When creating one, you should give your token the following permissions; a sketch for verifying the scopes of an existing token follows the list.
- `repo`
- `workflow`
- `read:public_key`
- `read:repo_hook`
- `admin:org_hook`
- `notifications`
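To double-check which scopes a classic PAT actually carries, the GitHub API reports them in the `X-OAuth-Scopes` response header, e.g. as below (replace `ghp_XXXX` with your token).

```sh
# Print the scopes granted to a classic PAT, as reported by the GitHub API in
# the X-OAuth-Scopes response header.
curl -sS -o /dev/null -D - -H "Authorization: Bearer ghp_XXXX" \
  https://api.github.com/user | grep -i '^x-oauth-scopes:'
```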
- Linux host; installation is easiest on Fedora.
- Inside the runners: Docker is not supported. It is replaced by `podman` in emulation mode.
- Inside the runners: no support for Docker networks; containers run in "host" (but: inside the microVM) networking mode only. This is alleviated by a docker shim.
The orchestrator creates as many loops of ephemeral runners as requested. These loops are implemented as part of the runner.sh script: the script will create a microVM based on the default image (see below) and the memory and vCPU requirements. It will then start that microVM using `krunvm` or `podman`, and the VM will start an (ephemeral) GitHub runner. As soon as a job has been executed on that runner, the microVM will end and a new one will be created.
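Conceptually, each loop boils down to the cycle below. This is only an illustrative sketch of the pattern, not the actual runner.sh logic: `create_microvm`, `start_microvm` and `destroy_microvm` are placeholders for the krunvm/podman operations that the real script performs.

```sh
# Illustrative sketch of one ephemeral-runner loop; the helper functions are
# placeholders, not part of the project.
while true; do
  vm=$(create_microvm)    # fresh microVM from the OCI image, with memory/vCPU limits
  start_microvm "$vm"     # boots, registers an ephemeral runner, runs exactly one job
  destroy_microvm "$vm"   # discard the VM so nothing persists between jobs
done
```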
The OCI image is built in two parts (sketched after the list):
- The base images -- fedora and ubuntu -- install a minimal set of binaries and packages: both the ones necessary to execute the runner and a sane minimal default for workflows. Regular GitHub runners come with a wide number of pre-installed packages; the base images have much less.
- The main image installs the runner binaries and scripts, and creates the directory structure that is used by the rest of the project.
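The relation between the two layers can be pictured as a two-stage build, where the main image is built on top of a previously built base image. The file names and the `BASE` build argument below are assumptions for illustration only, not the project's actual layout.

```sh
# Hypothetical two-stage build: Containerfile names and the BASE build
# argument are assumptions, not the project's actual file layout.
podman build -t gh-runner-base:ubuntu -f Containerfile.base .
podman build -t gh-runner:ubuntu --build-arg BASE=gh-runner-base:ubuntu \
  -f Containerfile.main .
```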
As Docker-in-Docker does not work in krunvm microVMs, the base image installs podman and associated binaries. This should be transparent to the workflows, as podman runs in the background, in compatibility mode, listening on the Docker socket at its standard location. The Docker client (and its compose and buildx plugins) are nevertheless installed on the base image, so that most workflows work without changes. The microVM is also limited to running containers with the `--network host` option. This is made transparent through a docker CLI wrapper that automatically adds this option to all (relevant) commands.
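A minimal version of such a wrapper could look like the sketch below. This is only an illustration of the technique, not the project's actual shim; it assumes the real client has been renamed to `docker.orig`.

```sh
#!/bin/sh
# Illustrative docker shim: force host networking on "docker run" and forward
# everything else untouched. docker.orig is an assumed name for the real client.
if [ "$1" = "run" ]; then
  shift
  exec docker.orig run --network host "$@"
fi
exec docker.orig "$@"
```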
When the microVM starts, the entrypoint.sh script will be started. This script picks up its options from a `.env` file, shared from the host. The file is sourced and removed at once; this ensures that secrets are not leaked to the workflows through the process table or a file.
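The source-then-delete pattern is straightforward; a sketch is below, with `/run/secrets/env` standing in as a hypothetical location for the shared file.

```sh
# Sketch of the "source and remove at once" pattern. /run/secrets/env is a
# hypothetical path standing in for the .env file shared from the host.
ENV_FILE=/run/secrets/env
if [ -f "$ENV_FILE" ]; then
  . "$ENV_FILE"       # import options and secrets into this shell only
  rm -f "$ENV_FILE"   # remove right away so workflows cannot read it later
fi
```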
Upon start, the script will request a runner token, configure the runner and then start the actions runner .NET implementation, under the `runner` user. The `runner` user shares the same ID as the one on GitHub's runners and is also a member of the `docker` group. Similarly to GitHub runners, the user is capable of `sudo` without a password.
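For reference, requesting a repository-level registration token and configuring an ephemeral runner follows the pattern below. The API endpoint and the `config.sh`/`run.sh` entry points are GitHub's; the labels and the use of `$GH_TOKEN` are illustrative.

```sh
# Request a short-lived registration token for the repository (classic PAT in
# GH_TOKEN), then configure and start an ephemeral runner. Labels are illustrative.
REG_TOKEN=$(curl -sS -X POST \
  -H "Authorization: Bearer $GH_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/efrecon/gh-runner-krunvm/actions/runners/registration-token |
  jq -r '.token')

./config.sh --unattended --ephemeral \
  --url https://github.com/efrecon/gh-runner-krunvm \
  --token "$REG_TOKEN" \
  --labels ubuntu,krunvm
./run.sh
```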
Runner tokens are written to the directory that is shared with the host. This is used during initial synchronisation, to avoid starting several runners at the same time from the main orchestrator loop. The tokens are automatically removed as soon as the runner is up; they are also protected so that the `runner` user cannot read their content.
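Protecting the token files from the unprivileged `runner` user amounts to standard ownership and permission handling, e.g. along the lines below (the path is illustrative).

```sh
# Sketch: make a token file readable by root only, so that the unprivileged
# "runner" user cannot access it. The path is illustrative.
TOKEN_FILE=/shared/runner-token
chown root:root "$TOKEN_FILE"
chmod 0600 "$TOKEN_FILE"
```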
This project was written to control my anxiety while facing my daughter's newly discovered eating disorder and to start helping her out of it. It started as a rewrite of this project, after having failed to run those images inside the microVMs generated by krunvm.