
Collect metadata in k8s cm #113

Merged 8 commits into ceph:master on Jan 7, 2019

Conversation

@mickymiek

This PR adds an option to persist RBD and CephFS metadata (currently stored on the node) in a k8s ConfigMap.

Related task: #66

(Copied from #102)

@gman0 (Contributor) commented Dec 28, 2018

LGTM, apart from the default metadata storage for k8s deployments I mentioned earlier. @rootfs I think this is ready to be merged once the last comment is addressed, if you don't object.

@gman0 (Contributor) commented Jan 7, 2019

Cool, lgtm

@rootfs (Member) commented Jan 7, 2019

Thanks @mickymiek for getting this done and @gman0 for the review.

@rootfs rootfs merged commit 101b15e into ceph:master Jan 7, 2019
@zhucan commented Jan 29, 2019

> Cool, lgtm

How do I configure "KUBERNETES_CONFIG_PATH"?

@gman0 (Contributor) commented Jan 29, 2019

@zhucan according to the deployment documentation, it's an (optional) environment variable and may be set in the DaemonSet manifest of the plugin.

@zhucan commented Jan 29, 2019

> @zhucan according to the deployment documentation, it's an (optional) environment variable and may be set in the DaemonSet manifest of the plugin.

Yeah, I know, but I want to know the value of "KUBERNETES_CONFIG_PATH". Can you tell me how to set it?

@gman0 (Contributor) commented Jan 29, 2019

@zhucan this is only needed if you're deploying the plugin out-of-cluster. In that case the variable should be set to the path of your admin.conf.

@zhucan commented Jan 29, 2019

> @zhucan this is only needed if you're deploying the plugin out-of-cluster. In that case the variable should be set to the path of your admin.conf.

No, you are wrong; the value of KUBERNETES_CONFIG_PATH is "~/.kube/config".

@zhucan commented Jan 29, 2019

> @zhucan this is only needed if you're deploying the plugin out-of-cluster. In that case the variable should be set to the path of your admin.conf.

KUBERNETES_CONFIG_PATH: if you use k8s_configmap as metadata store, specify the path of your k8s config file (if not specified, the plugin will assume you're running it inside a k8s cluster and find the config itself).
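As a rough sketch of how that fits the node-plugin manifest (the --metadatastorage flag name and the surrounding container spec are assumptions here, so check the deployment docs for the exact option names in your version):

        - name: csi-rbdplugin
          ...
          args:
            # selects the ConfigMap-based metadata store described in the docs above
            - "--metadatastorage=k8s_configmap"
          ...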

@zhucan commented Jan 29, 2019

Are there no examples in csi-rbdplugin.yaml showing how to set KUBERNETES_CONFIG_PATH?

@gman0 (Contributor) commented Jan 29, 2019

@zhucan

> No, you are wrong; the value of KUBERNETES_CONFIG_PATH is "~/.kube/config".

Sure, if that works for you.

> Are there no examples in csi-rbdplugin.yaml showing how to set KUBERNETES_CONFIG_PATH?

Unfortunately not ATM. You could try adding something like this to your RBD plugin deployment, in the csi-rbdplugin container spec definition:

        - name: csi-rbdplugin
          ...
          env:
            - name: KUBERNETES_CONFIG_PATH
              value: "/path-to-your-k8s-config"
          ...
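One practical note on the sketch above: when the plugin runs as a pod, the kubeconfig file at that path has to exist inside the container, so it typically also needs to be mounted in, e.g. from the host. A minimal, hedged example (the volume name and host path below are made up for illustration):

        - name: csi-rbdplugin
          ...
          env:
            - name: KUBERNETES_CONFIG_PATH
              value: "/etc/kubeconfig/config"
          volumeMounts:
            - name: kubeconfig
              mountPath: /etc/kubeconfig
              readOnly: true
          ...
      volumes:
        - name: kubeconfig
          hostPath:
            path: /root/.kube   # node directory containing a kubeconfig file named "config"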

@zhucan commented Jan 29, 2019


When KUBERNETES_CONFIG_PATH is not set, there is an error: Failed to get cluster config with error: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

@zhucan commented Jan 29, 2019

> When KUBERNETES_CONFIG_PATH is not set, there is an error: Failed to get cluster config with error: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

I deploy the plugin inside the k8s cluster.

@Madhu-1 (Collaborator) commented Jan 29, 2019

@zhucan AFAIK, if you are deploying inside the k8s cluster you don't need to set this env var; just remove it from the container YAML file.
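A minimal sketch of what that looks like in the container spec, assuming the stock csi-rbdplugin DaemonSet, with no KUBERNETES_CONFIG_PATH entry so the plugin falls back to in-cluster config:

        - name: csi-rbdplugin
          ...
          # no KUBERNETES_CONFIG_PATH env entry: the plugin then calls rest.InClusterConfig(),
          # which uses the pod's service account token and the API server address
          # (KUBERNETES_SERVICE_HOST/PORT) injected by the kubelet
          env:
            ...
          ...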

@gman0 (Contributor) commented Jan 29, 2019

@zhucan could you please open up a new issue describing this problem + other info that might aid in debugging?

@zhucan commented Jan 29, 2019

> @zhucan AFAIK, if you are deploying inside the k8s cluster you don't need to set this env var; just remove it from the container YAML file.

I don't set it; it will start from "rest.InClusterConfig()", and there is an error:


func NewK8sClient() *k8s.Clientset {
	var cfg *rest.Config
	var err error
	cPath := os.Getenv("KUBERNETES_CONFIG_PATH")
	if cPath != "" {
		// out-of-cluster: build the config from the kubeconfig file at cPath
		cfg, err = clientcmd.BuildConfigFromFlags("", cPath)
		if err != nil {
			glog.Errorf("Failed to get cluster config with error: %v\n", err)
			os.Exit(1)
		}
	} else {
		// in-cluster: use the pod's service account and the injected API server address
		cfg, err = rest.InClusterConfig()
		if err != nil {
			glog.Errorf("Failed to get cluster config with error: %v\n", err)
			os.Exit(1)
		}
	}
	client, err := k8s.NewForConfig(cfg)
	if err != nil {
		glog.Errorf("Failed to create client with error: %v\n", err)
		os.Exit(1)
	}
	return client
}

@zhucan commented Jan 29, 2019

> @zhucan could you please open up a new issue describing this problem + other info that might aid in debugging?

OK, wait a moment; it may be a new issue.

@Madhu-1 (Collaborator) commented Jan 29, 2019

> I don't set it; it will start from "rest.InClusterConfig()", and there is an error.

Do you have these two files in your cluster?
"/var/run/secrets/kubernetes.io/serviceaccount/token"
"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

@zhucan commented Jan 29, 2019

> Do you have these two files in your cluster?
> "/var/run/secrets/kubernetes.io/serviceaccount/token"
> "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

No, no, no; the error comes from this part:

	host, port := os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT")
	if len(host) == 0 || len(port) == 0 {
		return nil, ErrNotInCluster
	}

@Madhu-1 (Collaborator) commented Jan 29, 2019

coreos/etcd-operator#731
kubernetes/kubernetes#40973

@zhucan commented Jan 30, 2019

> coreos/etcd-operator#731
> kubernetes/kubernetes#40973

Yeah. How can I get the "/var/run/secrets/kubernetes.io/serviceaccount/token" file?
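For reference, a hedged sketch of the pod-spec part that makes those files appear: in a standard cluster the kubelet mounts the service account token and ca.crt automatically once the pod runs under a service account (the csi-nodeplugin name below is an assumption; check the actual ceph-csi deploy manifests):

      # spec.template.spec of the plugin DaemonSet
      serviceAccountName: csi-nodeplugin
      # defaults to true; if set to false, token and ca.crt won't be mounted
      automountServiceAccountToken: true
      containers:
        - name: csi-rbdplugin
          ...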

avalluri pushed a commit to intel/oim that referenced this pull request Jan 30, 2019
Kubernetes 1.13 has been released, so we can use that instead of some
pre-1.13 master branch.

csi-test 0.3.5 no longer depends on the post-0.3 CSI spec, so plain
0.3 is fine now.

ceph/ceph-csi#92 has been merged. We can use
the upstream release again. Because the 0.3 image tag is following the
master branch (ceph/ceph-csi#96), we get the
latest features, which includes support for storing persistent data in
a config map (ceph/ceph-csi#113).

That mode worked whereas storing on the node failed with an error
about not being able to create the file (probably because the
directory hadn't been created). Instead of trying to fix that, the
new feature is used.

Provisioning tests were failing because patching the driver name was
(no longer?) done correctly.
wilmardo pushed a commit to wilmardo/ceph-csi that referenced this pull request Jul 29, 2019
pkalever pushed a commit to pkalever/ceph-csi that referenced this pull request Aug 30, 2022
Sync devel branch with upstream