# Kubernetes Deployment Example
You can run Faktory in your Kubernetes cluster quite easily. A Helm chart is the easiest way to get up and running. However, if you'd like to write Kubernetes definitions yourself, here are some tips and samples.
Here we tell Kubernetes to deploy a single replica of the Faktory Server. A few things to note:
- A volume is mounted to store Faktory's Redis data so it persists across restarts.
- A ConfigMap stores Faktory's configuration files and is mounted as a volume inside Faktory's container.
- A sidecar container watches the configuration files and, when they change, sends a `SIGHUP` to the Faktory server process to hot-reload the configuration (thanks @jbielick).
- The deployment strategy is `Recreate` so only one instance of Faktory ever runs at a time.
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: faktory-server
  labels:
    app: faktory-server
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: faktory-server
  template:
    metadata:
      labels:
        app: faktory-server
    spec:
      shareProcessNamespace: true
      terminationGracePeriodSeconds: 10
      containers:
        - name: faktory-server-config-watcher
          image: busybox
          command:
            - sh
            - "-c"
            - |
              sum() {
                current=$(find /conf -type f -exec md5sum {} \; | sort -k 2 | md5sum)
              }
              sum
              last="$current"
              while true; do
                sum
                if [ "$current" != "$last" ]; then
                  pid=$(pidof faktory)
                  echo "$(date -Iseconds) [conf.d] changes detected - signaling Faktory with pid=$pid"
                  kill -HUP "$pid"
                  last="$current"
                fi
                sleep 1
              done
          volumeMounts:
            - name: faktory-server-configs-volume
              mountPath: "/conf"
        - name: faktory-server
          image: docker.contribsys.com/contribsys/faktory:1.2.0
          command:
            - "/faktory"
            - "-b"
            - ":7419"
            - "-w"
            - ":7420"
            - "-e"
            - "production"
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: production-config
          volumeMounts:
            - name: faktory-server-configs-volume
              mountPath: "/etc/faktory/conf.d"
            - name: faktory-server-storage-volume
              mountPath: "/var/lib/faktory/db"
      volumes:
        - name: faktory-server-configs-volume
          configMap:
            name: faktory-server-configmap
        - name: faktory-server-storage-volume
          persistentVolumeClaim:
            claimName: faktory-server-storage-pv-claim
```
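The sidecar's change detection boils down to hashing every file in the watched directory and comparing against the previous combined checksum. A minimal sketch of that technique, run against a temporary directory instead of `/conf` (the directory and file names here are illustrative, not part of the deployment):

```shell
# Create a scratch directory standing in for the mounted /conf volume.
dir=$(mktemp -d)
echo "concurrency = 1" > "$dir/throttles.toml"

sum() {
  # One hash over the sorted per-file hashes: a change to any file's
  # content (or to the set of files) changes the combined checksum.
  current=$(find "$dir" -type f -exec md5sum {} \; | sort -k 2 | md5sum)
}

sum
last="$current"

# Simulate a ConfigMap update by editing a file.
echo "concurrency = 2" > "$dir/throttles.toml"

sum
if [ "$current" != "$last" ]; then
  echo "changes detected"
fi

rm -rf "$dir"
```

Sorting by filename before the outer `md5sum` keeps the combined hash stable regardless of the order `find` emits files in.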
An example ConfigMap that is mounted into the Deployment above:
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: faktory-server-configmap
data:
  cron.toml: |2
    [[cron]]
    schedule = "*/1 * * * *"
    [cron.job]
    queue = "default"
    reserve_for = 60
    retry = -1
    type = "Cron::SomeRandomCron"
  throttles.toml: |2
    [throttles.default]
    concurrency = 1
    timeout = 60
  statsd.toml: |2
    [statsd]
    location = "datadog-agent-svc.default.svc.cluster.local:8125"
    namespace = "faktory"
    tags = ["env:production"]
```
The PersistentVolumeClaim that provides the storage for Faktory's data:
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: faktory-server-storage-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: [storage_class_here]
  resources:
    requests:
      storage: 5Gi
```
This Service exposes the Faktory server to the rest of your cluster. Clients can then connect using a host such as `tcp://faktory-server-svc.default.svc.cluster.local:7419`.
```yaml
---
kind: Service
apiVersion: v1
metadata:
  name: faktory-server-svc
spec:
  selector:
    app: faktory-server
  ports:
    - name: faktory
      protocol: TCP
      port: 7419
    - name: dashboard
      protocol: TCP
      port: 7420
```
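Faktory's client libraries typically read the server location from the `FAKTORY_URL` environment variable. A sketch of composing that URL from the Service coordinates above, assuming the Service lives in the `default` namespace:

```shell
# Build the in-cluster URL from the Service name, namespace, and the
# "faktory" port defined in the Service above.
SERVICE=faktory-server-svc
NAMESPACE=default
PORT=7419
FAKTORY_URL="tcp://${SERVICE}.${NAMESPACE}.svc.cluster.local:${PORT}"
echo "$FAKTORY_URL"
```

If the Service is deployed to a different namespace, substitute it for `default` in the DNS name.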