
Kubernetes 1.7.0 - PersistentVolumeClaim is not bound: "es-persistent-storage-elasticsearch-logging-0" #199

Closed
nkhine opened this issue Jul 2, 2017 · 4 comments


nkhine (Contributor) commented Jul 2, 2017

I have tried to install v1.7.0 but I get:

Critical pod kube-system_elasticsearch-logging-0 doesn't fit on any node.

All the pods apart from elasticsearch-logging-0 have started, and in the dashboard I see: PersistentVolumeClaim is not bound: "es-persistent-storage-elasticsearch-logging-0" (repeated 6 times)

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                               READY     STATUS    RESTARTS   AGE
default       busybox                                                            1/1       Running   1          1h
kube-system   cluster-autoscaler-2018616338-4sndj                                1/1       Running   0          1h
kube-system   elasticsearch-logging-0                                            0/1       Pending   0          1h
kube-system   fluentd-6n248                                                      1/1       Running   0          1h
kube-system   fluentd-c0jw0                                                      1/1       Running   0          1h
kube-system   fluentd-srpf8                                                      1/1       Running   0          1h
kube-system   fluentd-wf52g                                                      1/1       Running   0          1h
kube-system   fluentd-z3m7w                                                      1/1       Running   0          1h
kube-system   heapster-v1.3.0-634771249-xz51g                                    2/2       Running   0          1h
kube-system   kibana-logging-3751581462-f8j2k                                    1/1       Running   0          1h
kube-system   kube-apiserver-ip-10-0-10-10.eu-west-2.compute.internal            1/1       Running   0          1h
kube-system   kube-apiserver-ip-10-0-10-11.eu-west-2.compute.internal            1/1       Running   0          1h
kube-system   kube-apiserver-ip-10-0-10-12.eu-west-2.compute.internal            1/1       Running   0          1h
kube-system   kube-controller-manager-ip-10-0-10-10.eu-west-2.compute.internal   1/1       Running   0          1h
kube-system   kube-controller-manager-ip-10-0-10-11.eu-west-2.compute.internal   1/1       Running   0          1h
kube-system   kube-controller-manager-ip-10-0-10-12.eu-west-2.compute.internal   1/1       Running   0          1h
kube-system   kube-dns-2255216023-2x13h                                          3/3       Running   0          1h
kube-system   kube-dns-2255216023-k1m0m                                          3/3       Running   0          52m
kube-system   kube-dns-autoscaler-3587138155-dkx79                               1/1       Running   0          1h
kube-system   kube-proxy-ip-10-0-10-10.eu-west-2.compute.internal                1/1       Running   0          1h
kube-system   kube-proxy-ip-10-0-10-11.eu-west-2.compute.internal                1/1       Running   0          1h
kube-system   kube-proxy-ip-10-0-10-12.eu-west-2.compute.internal                1/1       Running   0          1h
kube-system   kube-proxy-ip-10-0-10-20.eu-west-2.compute.internal                1/1       Running   0          1h
kube-system   kube-proxy-ip-10-0-10-247.eu-west-2.compute.internal               1/1       Running   0          1h
kube-system   kube-rescheduler-2136974456-ldn6z                                  1/1       Running   0          1h
kube-system   kube-scheduler-ip-10-0-10-10.eu-west-2.compute.internal            1/1       Running   0          1h
kube-system   kube-scheduler-ip-10-0-10-11.eu-west-2.compute.internal            1/1       Running   0          1h
kube-system   kube-scheduler-ip-10-0-10-12.eu-west-2.compute.internal            1/1       Running   0          1h
kube-system   kubernetes-dashboard-2227282072-xd27l                              1/1       Running   0          52m

Is it an issue with my cluster setup, or is there a change I have missed?

If I disable elasticsearch-logging, the cluster installs, but I am still not able to add volumes and bind PersistentVolumeClaims.

Everything works fine with 1.6.6.
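
For anyone debugging the same symptom: the claim's events usually say why it is stuck. A quick check, using the PVC name from the error above:

$ kubectl -n kube-system get pvc
$ kubectl -n kube-system describe pvc es-persistent-storage-elasticsearch-logging-0

If no provisioner acts on the claim it stays Pending, and the pod that mounts it stays unschedulable, which matches the output above.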

tomfotherby commented Jul 3, 2017

I noticed from your output that there are 5 fluentd pods. In Tack there are usually 6 instances, so I'm assuming one instance may have auto-scaled down. If the instance that scaled down was the one with Elasticsearch on it, you might have run into the same bug as me, described in issue #192. Maybe disable auto-scaling and see if you get the same issue.

Edit: I deleted the above because I tried v1.7 myself and found the same issue; elasticsearch-logging-0 doesn't start:

$ kubectl -n kube-system describe pod elasticsearch-logging-0
PersistentVolumeClaim is not bound: "es-persistent-storage-elasticsearch-logging-0" (repeated 6 times)

nkhine (Contributor, Author) commented Jul 3, 2017

Even if you disable elasticsearch-logging, you're not able to create PersistentVolumeClaims.
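
A minimal way to reproduce this outside of the logging addon, assuming a bare claim with no storage class set (the claim name here is made up for illustration):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim   # hypothetical name, for illustration only
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi   # illustrative size

With no matching PersistentVolume and no default StorageClass, this claim stays Pending.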

tomfotherby commented Jul 3, 2017

I worked around this problem with the following commands (warning: I'm a Kubernetes beginner, so I'm not sure whether I'm doing anything inappropriate):

First, clean up the broken elasticsearch pod:

$ kubectl delete -f addons/logging/elasticsearch-logging.yml
$ kubectl -n kube-system delete pvc es-persistent-storage-elasticsearch-logging-0

Create a storageclass.yaml file:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

(found from https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses)
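
If you would rather not edit every manifest, the class can also be marked as the cluster default so that claims without an explicit class use it. This is a sketch based on the upstream docs, not on tack; recent releases use the storageclass.kubernetes.io/is-default-class annotation (older ones used storageclass.beta.kubernetes.io/is-default-class):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # makes this the default class
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2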

Create the storage class:

$ kubectl create -f storageclass.yaml
$ kubectl get storageclass
NAME       TYPE
standard   kubernetes.io/aws-ebs

Change addons/logging/elasticsearch-logging.yml to use the new storage class:

      annotations:
        volume.beta.kubernetes.io/storage-class: "standard"

(was volume.alpha.kubernetes.io/storage-class: default)
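
For context, the claim template then ends up looking roughly like this. This is a sketch of a standard StatefulSet volumeClaimTemplates section; the access mode and size are illustrative, not copied from tack's file:

volumeClaimTemplates:
- metadata:
    name: es-persistent-storage
    annotations:
      volume.beta.kubernetes.io/storage-class: "standard"
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 10Gi   # illustrative size

(On Kubernetes 1.6+ you can alternatively set spec.storageClassName: standard instead of the annotation.)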

Re-create it:

$ kubectl create -f addons/logging/elasticsearch-logging.yml

wellsie (Member) commented Aug 1, 2017

Fixed in #202.

wellsie closed this as completed Aug 1, 2017