A FlexVolume driver for Kubernetes that allows you to mount Ploop volumes in your Kubernetes pods.
Kubernetes FlexVolumes are currently in the Alpha state, so this plugin is as well. Use it at your own risk.
mkdir -p src/github.com/virtuozzo
ln -s ../../../ src/github.com/virtuozzo/ploop-flexvol
export GOPATH=`pwd`
cd src/github.com/virtuozzo/ploop-flexvol
make
In order to use the FlexVolume driver, you'll need to install it into the kubelet volume plugin directory (volume-plugin-dir) on every node where you want to use Ploop. By default this is /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
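The vendor directory name is simply the driver name with "/" replaced by "~". A small sketch of deriving it (the driver name and plugin directory are the ones used throughout this README):

```shell
# Derive the FlexVolume vendor directory from the driver name:
# kubelet expects "/" in the driver name to become "~" on disk.
driver="virtuozzo/ploop"
plugin_dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec"
vendor_dir="$plugin_dir/$(echo "$driver" | tr '/' '~')"
echo "$vendor_dir"
# → /usr/libexec/kubernetes/kubelet-plugins/volume/exec/virtuozzo~ploop
```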
You need a directory for the volume driver vendor, so create it:
mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/virtuozzo~ploop
Then drop the binary in there:
mv ploop /usr/libexec/kubernetes/kubelet-plugins/volume/exec/virtuozzo~ploop/ploop
You can now use ploops as usual!
An example pod config would look like this:
apiVersion: v1
kind: Pod
metadata:
name: nginx-ploop
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: test
mountPath: /data
ports:
- containerPort: 80
nodeSelector:
os: parallels # make sure you label your nodes to be ploop compatible
volumes:
- name: test
flexVolume:
driver: "virtuozzo/ploop" # this must match your vendor dir
options:
volumeID: "golang-ploop-test"
size: "10G"
volumePath: "/vstorage/storage_pool/kubernetes"
This will mount a block device from a ploop image located in the /vstorage/storage_pool/kubernetes/golang-ploop-test directory.
You can verify that the ploop volume was mounted by finding the node where your pod was scheduled and running ploop list:
# ploop list
ploop18115 /vstorage/storage_pool/kubernetes/golang-ploop-test/golang-ploop-test
- volumePath
  a path to a Virtuozzo storage directory where the ploop image is located
- volumeID
  a unique name for a ploop image
- size=[0-9]*[KMG]
  size of the volume
- vzsReplicas=normal[:minimum][/X]
  Replication level specification:
    normal   The number of replicas to maintain.
    minimum  The minimum number of replicas required to write a file (optional).
    /X       Write tolerance (normal-minimum): the number of replicas allowed to go offline provided that the client is still allowed to write the file.
  The number of replicas must be in the range 1-64.
- vzsEncoding=M+N[/X]
  Encoding specification:
    M  The stripe-depth.
    N  The number of parity blocks.
    X  The write tolerance: the number of replicas allowed to go offline provided that the client is still allowed to write the file.
- vzsFailureDomain=disk|host|rack|row|room
  Failure domain for file replicas.
  This parameter controls how replicas are distributed across CSes in the cluster:
    disk - place no more than 1 replica per disk/CS
    host - place no more than 1 replica per host (default)
    rack - place no more than 1 replica per rack
    row  - place no more than 1 replica per row
    room - place no more than 1 replica per room
- vzsTier=0-3
  Storage tier for file replicas.
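The Virtuozzo Storage options above go into the flexVolume options map alongside volumeID, size, and volumePath. A sketch of a volume definition combining them (the names match the pod example earlier; the replication, failure-domain, and tier values are illustrative, not recommendations):

```yaml
volumes:
  - name: test
    flexVolume:
      driver: "virtuozzo/ploop"
      options:
        volumeID: "golang-ploop-test"
        size: "10G"
        volumePath: "/vstorage/storage_pool/kubernetes"
        vzsReplicas: "3:2"         # maintain 3 replicas, require 2 to write
        vzsFailureDomain: "host"   # at most 1 replica per host
        vzsTier: "1"               # storage tier for the replicas
```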
NOTE: high verbosity logging levels may include secret data, so it is strongly recommended to avoid using verbosity levels >= 4 on production systems.
By default, ploop-flexvol sends all logging data to the systemd-journald service. If you want to collect logging data another way, you can create a wrapper script. It has to redirect stdout to file descriptor 3 and execute the plugin binary according to the following rules:
./ploop wrapper [glog flags] -- ploop [plugin options]
Here is an example to save logging data into a file:
#!/bin/sh
exec 3>&1
"$(dirname "$0")"/ploop.bin wrapper -logtostderr -- ploop "$@" >> /var/log/ploop-flexvol.log 2>&1
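The file-descriptor-3 convention can be confusing, so here is a toy, self-contained demonstration of the mechanism (this is not the real plugin; the "plugin" here is just an inline command group): the plugin writes its JSON result to fd 3 and its log lines to stdout, while the wrapper maps fd 3 back to the real stdout and diverts everything else to a log file. This way the caller (kubelet) sees only the JSON, and the logs are captured separately.

```shell
# Simulate a plugin that logs to stdout and emits its JSON result on fd 3.
# The wrapper maps fd 3 to the real stdout (3>&1) and sends stdout to a file.
log=$(mktemp)
result=$( { echo "some log line"; echo '{"status":"Success"}' >&3; } 3>&1 >"$log" )
echo "result: $result"   # only the JSON reached the caller
cat "$log"               # the log line was captured in the file
```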