turtlefinder
is a Go module that discovers various container engines on a Linux host, including container engines that have been put into containers. If you consider such configurations to be rarer than rare, then please take a look at KinD (Kubernetes-in-Docker) as well as Docker Desktop on WSL2 (Windows Subsystem for Linux).
It supports the following container engines:
- Docker
- containerd (both the native API and the CRI Event PLEG API)
- CRI-O (CRI Event PLEG API)
- podman (via Docker-compatible API only)
The turtlefinder
package originates from
Ghostwire (part of the Edgeshark
project) and has been carved out in order to foster easy reuse in other projects
without the need to import the full Ghostwire module.
Simply create a single turtlefinder: there is no need to create individual “containerizers” (lxkns' term for things that discover the current container workload of a container engine) and then stitch them together, such as when dealing with Docker and containerd simultaneously.
```go
enginectx, cancel := context.WithCancel(context.Background())
containerizer := turtlefinder.New(
	func() context.Context { return enginectx },
	/* options... */
)
```
Whenever a turtlefinder finds a new container engine process, it tries to talk
sense to it and then discovers and tracks its container workload. In order to
shut down such engine workload background tracking (watching), a turtlefinder
expects us to supply it with a suitable “background” context, preferably one we
have control over. This is what the first parameter to `New` is for.
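For illustration only, here is a minimal sketch of how such a containerizer is then typically fed into a full lxkns discovery; the `github.com/thediveo/lxkns/discover` API used below is taken from lxkns' documentation and may differ between versions:

```go
package main

import (
	"context"
	"fmt"

	"github.com/siemens/turtlefinder"
	"github.com/thediveo/lxkns/discover"
)

func main() {
	// The context controls the lifetime of the background workload
	// watchers; cancelling it shuts down all engine tracking.
	enginectx, cancel := context.WithCancel(context.Background())
	defer cancel()

	containerizer := turtlefinder.New(
		func() context.Context { return enginectx },
	)

	// Run a namespace discovery and let the turtlefinder containerizer
	// contribute the container workload it tracks.
	allns := discover.Namespaces(
		discover.WithStandardDiscovery(),
		discover.WithContainerizer(containerizer),
		discover.WithPIDMapper(), // recommended when containerizing
	)
	for _, cntr := range allns.Containers {
		fmt.Println(cntr.Name, cntr.Type)
	}
}
```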
For further options, please refer to the module documentation.
The "Edgeshark" project consist of several repositories:
- Edgeshark Hub repository
- G(h)ostwire discovery service
- Packetflix packet streaming service
- Containershark Extcap plugin for Wireshark
- support modules:
  - 🖝 Turtlefinder 🖜
  - csharg (CLI)
  - mobydig
  - ieddata
Finding container engines works in principle as follows:
- detect long-running engines (also commonly referred to as "demons"):
  - scan the process tree for processes with known names (such as `dockerd`, `containerd`, `cri-o`, et cetera). The well-known process names are supplied by a set of built-in "detectors" in form of sub-packages of the `github.com/siemens/turtlefinder/detector` package.
  - scan matching processes for file descriptors referencing listening unix domain sockets: we assume them to be potential container engine API endpoints (see the sketch after this list).
- detect socket-activated engines:
  - scan the process tree for socket-activating processes with known names, especially `systemd`. Again, the well-known process names are supplied by a set of built-in socket-activator detectors in form of sub-packages of the `github.com/siemens/turtlefinder/activator` package.
  - scan matching processes for file descriptors referencing listening unix domain sockets with well-known suffixes, such as `podman.sock`. While this is slightly less efficient compared with directly using well-known absolute socket API paths, our approach is much more powerful as it finds suffix-matching API endpoints even in containers. This scanning happens only for a newly found socket activator, or when we detect a change in the socket activator's socket configuration (such as after a configuration reload).
  - activate the services ("don't call them demons") behind the API sockets and wait for the service processes to appear, before proceeding with talking to these endpoints. (The rationale is that we need the engine PIDs for the turtlefinder hierarchy detection to work.)
- try to talk sense to the API endpoints found; this won't always succeed, such as when we trip over metrics endpoints and other strange endpoints. Where we succeed, we add the newly found engine to our list of engines to watch.
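The following sketch illustrates the socket-scanning technique from the steps above in simplified form: it cross-references a process' open socket file descriptors with the unix domain socket table as seen in that process' own network namespace. This is not turtlefinder's actual implementation, just an assumption-laden illustration of the general mechanism:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// listeningUnixSockets returns the paths of the listening unix domain
// sockets a particular process has open, by cross-referencing the process'
// open socket fds with the socket table in /proc/<pid>/net/unix.
func listeningUnixSockets(pid int) ([]string, error) {
	// First, map the inodes of all *listening* unix domain sockets in the
	// process' network namespace to their socket paths.
	f, err := os.Open(fmt.Sprintf("/proc/%d/net/unix", pid))
	if err != nil {
		return nil, err
	}
	defer f.Close()
	listening := map[uint64]string{}
	scanner := bufio.NewScanner(f)
	scanner.Scan() // skip the header line
	for scanner.Scan() {
		// fields: Num RefCount Protocol Flags Type St Inode Path
		fields := strings.Fields(scanner.Text())
		if len(fields) < 8 || fields[3] != "00010000" /* __SO_ACCEPTCON */ {
			continue // unnamed or not a listening socket
		}
		inode, err := strconv.ParseUint(fields[6], 10, 64)
		if err != nil {
			continue
		}
		listening[inode] = fields[7]
	}
	// Then scan the process' open fds for sockets with matching inodes.
	fdlinks, err := os.ReadDir(fmt.Sprintf("/proc/%d/fd", pid))
	if err != nil {
		return nil, err
	}
	paths := []string{}
	for _, fdlink := range fdlinks {
		link, err := os.Readlink(fmt.Sprintf("/proc/%d/fd/%s", pid, fdlink.Name()))
		if err != nil || !strings.HasPrefix(link, "socket:[") {
			continue
		}
		inode, err := strconv.ParseUint(
			strings.TrimSuffix(strings.TrimPrefix(link, "socket:["), "]"), 10, 64)
		if err != nil {
			continue
		}
		if path, ok := listening[inode]; ok {
			paths = append(paths, path)
		}
	}
	return paths, nil
}

func main() {
	fmt.Println(listeningUnixSockets(os.Getpid()))
}
```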
Additionally, we prune engines from this list whenever a more recent process tree scan no longer finds a particular engine process.
Furthermore, we do some fancy things during workload discovery in order to figure out how container engines might have been stuck into containers of another container engine: that is, the hierarchy of container engines. This is especially useful for such system configurations as KinD clusters and Docker Desktop on WSL2.
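A very rough idea of this hierarchy detection, with purely hypothetical types (turtlefinder's real implementation differs): an engine is the child of another engine if one of its ancestor processes is the initial process of a container managed by that other engine.

```go
package hierarchy

// Engine loosely describes a discovered container engine process, together
// with the PIDs of the initial processes of the containers it manages.
// These types are made up for illustration only.
type Engine struct {
	Name          string
	PID           int
	ContainerPIDs map[int]bool
}

// parentEngine walks an engine's process ancestry (ppids maps PIDs to
// their parent PIDs) and returns the engine managing the container the
// given engine lives in, or nil when the engine runs directly on the host.
func parentEngine(engine *Engine, engines []*Engine, ppids map[int]int) *Engine {
	for pid := engine.PID; pid > 1; pid = ppids[pid] {
		for _, candidate := range engines {
			if candidate != engine && candidate.ContainerPIDs[pid] {
				return candidate
			}
		}
	}
	return nil
}
```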
The included `turtlefinder.code-workspace` defines the following tasks:
- "View Go module documentation" task: installs `pkgsite`, if not already done, then starts `pkgsite` and opens VSCode's integrated ("simple") browser to show the turtlefinder documentation.
- "pkgsite service" task: auxiliary task to run `pkgsite` as a background service using `scripts/pkgsite.sh`. The script leverages browser-sync and nodemon to hot reload the Go module documentation on changes; many thanks to @mdaverde's "Build your Golang package docs locally" for paving the way. `scripts/pkgsite.sh` adds automatic installation of `pkgsite`, as well as of the `browser-sync` and `nodemon` npm packages for the local user.
- "view pkgsite" task: auxiliary task to open the VSCode-integrated "simple" browser and pass it the local URL to open, in order to show the module documentation rendered by `pkgsite`. This requires a detour via a task input with ID "pkgsite".
- `make`: lists all targets.
- `make test`: runs all tests – please note that this strictly requires a genuine Docker (moby) container demon to be present. Trying to substitute a `dockerd` with `podman` will make tests fail for good reason, as podman isn't Docker for our purposes.
- `make pkgsite`: installs `x/pkgsite`, as well as the `browser-sync` and `nodemon` npm packages first, if not already done. Then runs `pkgsite` and hot reloads it whenever the documentation changes.
- `make report`: installs `@gojp/goreportcard` if not yet done and then runs it on the code base.
- `make vuln`: installs (or updates) `govulncheck` and then checks the Go sources.
Please see CONTRIBUTING.md.
(c) Siemens AG 2023‒24