One of the main issues we have with volumes is the "FailedSync" errors and the mount timeouts:
```
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 4m 1 {default-scheduler } Normal Scheduled Successfully assigned postgres-1015904327-0cq7c to [...]-pool-4cpu-15gb-878b5c33-dfm5
3m 2m 8 {controller-manager } Warning FailedMount Failed to attach volume "pvc-3ed3f856-0e1c-11e7-824a-42010af0012c" on node "[..]-pool-4cpu-15gb-878b5c33-dfm5" with: googleapi: Error 400: The disk resource '[...]-a-pvc-3ed3f856-0e1c-11e7-824a-42010af0012c' is already being used by '[...]-pool-4cpu-15gb-878b5c33-dm7r'
2m 2m 1 {kubelet [...]-pool-4cpu-15gb-878b5c33-dfm5} Warning FailedMount Unable to mount volumes for pod "postgres-1015904327-0cq7c_[...](6785daae-0ef2-11e7-824a-42010af0012c)": timeout expired waiting for volumes to attach/mount for pod "[...]"/"postgres-1015904327-0cq7c". list of unattached/unmounted volumes=[database-volume]
2m 2m 1 {kubelet [...]-pool-4cpu-15gb-878b5c33-dfm5} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "[...]"/"postgres-1015904327-0cq7c". list of unattached/unmounted volumes=[database-volume]
```
It looks like a bug, and it is often tied to a particular node. It would be nice to identify such "non-running" pods together with the nodes they are scheduled on, so we could quickly see that one given node is misbehaving.
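For reference, here is a minimal sketch of how such a report could be pulled together with the official `kubernetes` Python client (assuming `pip install kubernetes` and a working kubeconfig; the function name and the output format are illustrative, not part of any existing tool). It lists every pod that is not in the Running phase along with its node and the Warning event reasons attached to it, then counts FailedMount/FailedSync occurrences per node so a single bad node stands out:

```python
# Illustrative sketch, not a finished tool: list non-running pods, their
# nodes, and their warning reasons, then tally mount/sync failures per node.
from collections import Counter

from kubernetes import client, config


def report_stuck_pods():
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    failures_per_node = Counter()

    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase == "Running":
            continue

        node = pod.spec.node_name or "<unscheduled>"

        # Pull only the Warning events for this pod, server-side, to surface
        # the reason (FailedMount, FailedSync, ...) next to the pod and node.
        events = v1.list_namespaced_event(
            pod.metadata.namespace,
            field_selector=f"involvedObject.name={pod.metadata.name},type=Warning",
        ).items

        reasons = sorted({e.reason for e in events})
        if any(r in ("FailedMount", "FailedSync") for r in reasons):
            failures_per_node[node] += 1

        print(f"{pod.metadata.namespace}/{pod.metadata.name} "
              f"phase={pod.status.phase} node={node} reasons={reasons}")

    # If one node dominates this count, it is likely the misbehaving one.
    for node, count in failures_per_node.most_common():
        print(f"{node}: {count} pod(s) with mount/sync failures")


if __name__ == "__main__":
    report_stuck_pods()
```

Filtering events with `involvedObject.name=...,type=Warning` keeps the query on the API server instead of fetching and scanning every event in the cluster.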