Collect metadata in k8s cm #113
Conversation
LGTM, aside from the default metadata storage for k8s deployments I mentioned earlier. @rootfs I think this is ready to be merged once the last comment is addressed, if you don't object.
Cool, LGTM.
Thanks @mickymiek for getting this done and @gman0 for the review.
How do I configure KUBERNETES_CONFIG_PATH?
@zhucan according to the deployment documentation, it's an (optional) environment variable and may be set in the DaemonSet manifest of the plugin.
Yeah, I know, but I want to know what value KUBERNETES_CONFIG_PATH should take. Can you tell me how to set it?
@zhucan this is only needed if you're deploying the plugin out-of-cluster. In that case the variable should be set to the path of your kubeconfig file.
No, you are wrong; the value of KUBERNETES_CONFIG_PATH is "~/.kube/config".
KUBERNETES_CONFIG_PATH: if you use k8s_configmap as the metadata store, specify the path of your k8s config file (if not specified, the plugin will assume it's running inside a k8s cluster and find the config itself).
Are there any examples for csi-rbdplugin.yaml that set KUBERNETES_CONFIG_PATH?
Sure, if that works for you.
Unfortunately not ATM. You could try adding something like this to the csi-rbdplugin container in your RBD plugin deployment:

```yaml
- name: csi-rbdplugin
  ...
  env:
    - name: KUBERNETES_CONFIG_PATH
      value: "/path-to-your-k8s-config"
  ...
```
When KUBERNETES_CONFIG_PATH is not set, there is an error: `Failed to get cluster config with error: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined`
I deploy the plugin inside the k8s cluster.
@zhucan AFAIK if you are deploying in-cluster you don't need to set this env; just remove it from the container yaml file.
@zhucan could you please open up a new issue describing this problem, plus other info that might aid in debugging?
If I don't set it, it starts from `rest.InClusterConfig()`, and there is an error in `func NewK8sClient() *k8s.Clientset {`
OK, wait a moment; it may be a new issue.
Do you have these two files in your cluster?
No, no, no; the error is in this part: […]
Yeah. How can I get the "/var/run/secrets/kubernetes.io/serviceaccount/token" file?
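For reference, the in-cluster/out-of-cluster fallback being discussed might look roughly like the sketch below. This is not the plugin's actual code, only an illustration built on standard client-go calls; the error handling and log messages are assumptions.

```go
package main

import (
	"log"
	"os"

	k8s "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// NewK8sClient builds a Kubernetes clientset. If KUBERNETES_CONFIG_PATH is
// set, the kubeconfig file it points to is used (out-of-cluster mode).
// Otherwise rest.InClusterConfig() is used, which only works inside a
// cluster: KUBERNETES_SERVICE_HOST/KUBERNETES_SERVICE_PORT must be set and
// the service account token must be mounted under
// /var/run/secrets/kubernetes.io/serviceaccount/.
func NewK8sClient() *k8s.Clientset {
	var cfg *rest.Config
	var err error
	if path := os.Getenv("KUBERNETES_CONFIG_PATH"); path != "" {
		// Out-of-cluster: load the kubeconfig file pointed to by the env var.
		cfg, err = clientcmd.BuildConfigFromFlags("", path)
	} else {
		// In-cluster: read the service account token and API server address.
		cfg, err = rest.InClusterConfig()
	}
	if err != nil {
		log.Fatalf("Failed to get cluster config with error: %v", err)
	}
	client, err := k8s.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("Failed to create client with error: %v", err)
	}
	return client
}
```

This also explains the error quoted above: when the env variable is absent, `rest.InClusterConfig()` is the only path taken, and it fails outside a cluster.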
Sync devel branch with upstream (referencing this PR):
Kubernetes 1.13 has been released, so we can use that instead of a pre-1.13 master branch. csi-test 0.3.5 no longer depends on the post-0.3 CSI spec, so plain 0.3 is fine now. ceph/ceph-csi#92 has been merged, so we can use the upstream release again. Because the 0.3 image tag follows the master branch (ceph/ceph-csi#96), we get the latest features, including support for storing persistent data in a config map (ceph/ceph-csi#113). That mode worked, whereas storing on the node failed with an error about not being able to create the file (probably because the directory hadn't been created). Instead of trying to fix that, the new feature is used. Provisioning tests were failing because patching the driver name was (no longer?) done correctly.
This PR gives an option to persist RBD and CephFS metadata (currently stored on the node) in a k8s ConfigMap.
Related task: #66
(Copied from #102)
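As an illustration of the approach (not this PR's actual implementation), a minimal sketch of persisting per-volume metadata in a ConfigMap with client-go follows. The function name persistMetadata, the one-key-per-volume layout, and the namespace/cmName parameters are all hypothetical; the client-go signatures are the context-based ones (v0.18+).

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	k8s "k8s.io/client-go/kubernetes"
)

// persistMetadata (hypothetical name) stores a serialized metadata blob for
// one volume as a key in a ConfigMap, creating the ConfigMap on first use.
// namespace and cmName are illustrative, not the plugin's real identifiers.
func persistMetadata(client k8s.Interface, namespace, cmName, volID, blob string) error {
	ctx := context.TODO()
	cms := client.CoreV1().ConfigMaps(namespace)

	cm, err := cms.Get(ctx, cmName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// First write: create the ConfigMap with this volume's entry.
		cm = &v1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: cmName},
			Data:       map[string]string{volID: blob},
		}
		_, err = cms.Create(ctx, cm, metav1.CreateOptions{})
		return err
	}
	if err != nil {
		return err
	}

	// Subsequent writes: update the existing ConfigMap in place.
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data[volID] = blob
	_, err = cms.Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
```

Since the data lives in the API server rather than on a node's local filesystem, it survives node restarts and plugin pod rescheduling, which matches the motivation for moving the metadata off the node.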