Operator v2: tracking features #334
@WanzenBug It would be a great chance to add a migration script to the new K8s backend and make it mandatory for the operator v2 :-) |
Consider using Kustomize as the default deployment tool. It allows for greater control, and from a maintainability point of view it's simpler to patch resources than to template them. A good example of where this is used is https://github.com/kubernetes-sigs/node-feature-discovery |
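As a sketch of that approach (the resource path, image names, and tag below are placeholders, not the operator's actual manifests), a kustomization can point at upstream manifests and override images without any templating:

```yaml
# kustomization.yaml - hypothetical example; the resource URL and image
# names are placeholders, not the actual piraeus-operator manifests.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://github.com/piraeusdatastore/piraeus-operator//config/default
# The images transformer rewrites matching image references in all
# rendered resources, replacing what templating would otherwise do.
images:
  - name: quay.io/piraeusdatastore/piraeus-operator
    newName: registry.example.com/piraeus/piraeus-operator
    newTag: v2.0.0
```

Downstream users can then layer their own patches (node selectors, resource limits, and so on) on top without forking the manifests.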
On that front, I can report that the … We are still thinking about adding some form of Helm chart, since a lot of users are still used to that. |
Along with the registry, could we configure the image pull secrets, pull policy and image tag? It makes it simpler for end users to automate upgrading the application. For instance:

```yaml
apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
  name: linstorcluster
spec:
  imageSource:
    repository: registry.example.com/piraeus
    tag: v1.10.0
    pullPolicy: IfNotPresent
    pullSecrets:
      - "SecretName"
``` |
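For context, a pull secret like the hypothetical `SecretName` above would typically be created ahead of time in the operator's namespace (namespace and credentials here are placeholders):

```shell
# Hypothetical: create the registry pull secret referenced by pullSecrets.
# Replace the namespace, server, and credentials with your own values.
kubectl create secret docker-registry SecretName \
  --namespace piraeus-system \
  --docker-server=registry.example.com \
  --docker-username="<user>" \
  --docker-password="<password>"
```

The operator would then only need to propagate the secret name into the generated pods' `imagePullSecrets`.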
Nice work on v2 so far! I've deployed the operator using kustomize from the …

```yaml
apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
  name: linstor-cluster
spec:
  nodeSelector:
    node-role.kubernetes.io/linstor: ""
---
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: all-satellites
spec:
  storagePools:
    - name: fs1
      filePool: {}
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: simple-fs
parameters:
  csi.storage.k8s.io/fstype: xfs
  # linstor.csi.linbit.com/autoPlace: "3" # not sure what this does = replica?
  linstor.csi.linbit.com/storagePool: fs1
provisioner: linstor.csi.linbit.com
volumeBindingMode: WaitForFirstConsumer
```

The …

After manually removing the init container and building my own image of the operator, I found we also can't mount hostPath …

Only the … Since I only want to use file-backed storage pools, I removed all LVM mounts from the …
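To sanity-check that a file-backed pool like `fs1` above was actually registered, one option (assuming the controller pod bundles the `linstor` client and lives in a `piraeus-datastore` namespace, which may differ in your deployment) is to exec the client directly:

```shell
# Hypothetical verification step; deployment name and namespace are
# assumptions and may differ in your operator v2 install.
kubectl exec -n piraeus-datastore deploy/linstor-controller -- \
  linstor storage-pool list
```

The pool should appear on every satellite matched by the `LinstorSatelliteConfiguration`.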
FYI: On Talos:

```shell
$ talosctl -n 100.64.6.90 list /usr/lib/
NODE          NAME
100.64.6.90   .
100.64.6.90   cryptsetup
100.64.6.90   engines-1.1
100.64.6.90   libaio.so
....many .so files
100.64.6.90   udev
100.64.6.90   xfsprogs

$ talosctl -n 100.64.6.90 list /lib/modules
NODE          NAME
100.64.6.90   .
100.64.6.90   5.15.86-talos
```

I can confirm I've installed the …

```shell
$ talosctl -n 100.64.6.90 read /proc/drbd
version: 9.2.0 (api:2/proto:86-121)
GIT-hash: 71e60591f3d7ea05034bccef8ae362c17e6aa4d1 build by @buildkitsandbox, 2023-01-11 12:22:06
Transports (api:18): tcp (9.2.0)
```

Afterthoughts (I'm not an expert): The …

In contrast, Talos uses a kernel with a modular design, which means it loads only the necessary modules at runtime. It does not have a …

Because the script is trying to mount the …

refs: …

@DJAlPee - Got this already working on Talos, but only the main-branch piraeus v1 version. |
Just a note: loading modules on Talos is disabled. Since Talos is a configuration-driven OS, module loading and its parameters are specified in the machine config, so I guess having an option to disable the init container makes more sense. |
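As a sketch of what that looks like on the Talos side (assuming the `machine.kernel.modules` machine-config field; the module list and parameters below are illustrative, not a verified piraeus setup), DRBD can be declared in the machine config instead of being loaded by an init container:

```yaml
# Hypothetical Talos machine-config fragment: load the drbd modules at
# boot via the machine config rather than an operator init container.
machine:
  kernel:
    modules:
      - name: drbd
        parameters:
          - usermode_helper=disabled
      - name: drbd_transport_tcp
```

With the modules declared here, the operator only needs a way to skip its own module-loading step on such nodes.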
This sounds like the exact use case we now have … This disables the init container on all nodes. |
This seems to be a pretty nice approach! |
Because you almost always want to use |
As @frezbo stated, module loading is disabled in Talos. So we have the "almost" case here 😉 |
Operator v2 is released. |
We've recently started work on Operator v2
This is intended as a list of features that need to be ported from v1, or features we want to add in v2:
Note: this list is not complete. If there is something to be added, please comment below