```
root@controller-node-1:/home/cyclinder/sriov# kubectl get sriovnetworknodepolicies.sriovnetwork.openshift.io -A -o wide
NAMESPACE     NAME      AGE
kube-system   policy1   43m
kube-system   policy2   7m44s
```
When use-cdi is enabled, a cdiSpec is created for each resourcePool on every ListWatch call to the gRPC server. The cdiSpec is then written to DefaultDynamicDir+cdiSpecPrefix+resourcePrefix, which expands to /var/run/cdi/sriov-dp-nvidia.com.yaml. The function that writes the cdiSpec performs an atomic write, i.e. it writes to a temporary file and then renames it to the target name. This conflicts with our desire to write all specs to the same file.
To fix this, we should either generate a unique file name for each resourcePool, e.g. using GenerateNameForTransientSpec, or create a shared in-memory cache that serializes writes to the local file.
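The first option can be sketched as follows. Note that `uniqueSpecPath` is a hypothetical helper written for this issue, not the plugin's code or the CDI library's GenerateNameForTransientSpec API; it only shows the idea of deriving a per-pool spec path so concurrent atomic writes no longer collide on one file:

```go
package main

import (
	"fmt"
	"strings"
)

// uniqueSpecPath derives a distinct CDI spec file name per resource
// pool. The directory and prefix values mirror the ones mentioned in
// the issue text (DefaultDynamicDir, cdiSpecPrefix, resourcePrefix).
func uniqueSpecPath(dynamicDir, specPrefix, resourcePrefix, poolName string) string {
	// Replace path separators so the pool name is safe in a file name.
	safe := strings.ReplaceAll(poolName, "/", "_")
	return fmt.Sprintf("%s/%s%s-%s.yaml", dynamicDir, specPrefix, resourcePrefix, safe)
}

func main() {
	// Each pool now gets its own target file, so the atomic
	// write-then-rename of one pool cannot clobber another's spec.
	fmt.Println(uniqueSpecPath("/var/run/cdi", "sriov-dp-", "nvidia.com", "pool1"))
	fmt.Println(uniqueSpecPath("/var/run/cdi", "sriov-dp-", "nvidia.com", "pool2"))
}
```

The trade-off versus a shared cache is cleanup: with per-pool files, stale specs must be removed when a pool disappears, whereas a single serialized writer keeps one file but reintroduces coordination between pools.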
What happened?
I have two sriovnodepolicy configs, see below:
What did you expect to happen?
What are the minimal steps needed to reproduce the bug?
Anything else we need to know?
k8snetworkplumbingwg/sriov-network-operator#735
Component Versions
Please fill in the table below with the version numbers of the components used.
Config Files
Config file locations may vary by deployment.
Device pool config file location (Try '/etc/pcidp/config.json')
Multus config (Try '/etc/cni/multus/net.d')
CNI config (Try '/etc/cni/net.d/')
Kubernetes deployment type ( Bare Metal, Kubeadm etc.)
Kubeconfig file
SR-IOV Network Custom Resource Definition
Logs
SR-IOV Network Device Plugin logs (use 'kubectl logs $PODNAME')
Multus logs (If enabled. Try '/var/log/multus.log')
Kubelet logs (journalctl -u kubelet)