
Document VSphere reboot issue #44

Closed
mrIncompetent opened this issue Jul 19, 2018 · 0 comments
Labels
priority/high sig/cluster-management Denotes a PR or issue as being assigned to SIG Cluster Management.


mrIncompetent (Contributor) commented:

Document the reboot issues when using vSphere.

Result of kubermatic/kubermatic#1571

From the original issue:

> When using a vSphere seed and rebooting a node that runs one or more pods with attached PVs, that node cannot be started anymore because the cloud provider does not remove the volume binding from the old node, even though the pod will get rescheduled to another node.
>
> This means that the vSphere instance cannot be started again until an operator manually removes the binding to the node inside vSphere.
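For illustration only, here is a minimal sketch of what that manual detach could look like when scripted against vCenter with govmomi instead of clicking through the UI. The connection URL, the VM name `worker-node-1`, and the `kubevols` path match are placeholders, not values from this issue:

```go
// Hypothetical sketch (not part of the issue): detach a stale PV disk from a
// powered-off node VM so the VM can boot again. Placeholders must be adapted.
package main

import (
	"context"
	"log"
	"net/url"
	"os"
	"strings"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/find"
	"github.com/vmware/govmomi/vim25/types"
)

func main() {
	ctx := context.Background()

	// Placeholder connection details, e.g. https://user:pass@vcenter.example.com/sdk
	u, err := url.Parse(os.Getenv("VSPHERE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	c, err := govmomi.NewClient(ctx, u, true /* insecure */)
	if err != nil {
		log.Fatal(err)
	}

	finder := find.NewFinder(c.Client, true)
	dc, err := finder.DefaultDatacenter(ctx)
	if err != nil {
		log.Fatal(err)
	}
	finder.SetDatacenter(dc)

	// The node VM that refuses to start after the reboot (placeholder name).
	vm, err := finder.VirtualMachine(ctx, "worker-node-1")
	if err != nil {
		log.Fatal(err)
	}

	devices, err := vm.Device(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Find the disk backed by the PV's VMDK and detach it, keeping the file
	// on the datastore so the volume itself is not destroyed.
	for _, d := range devices.SelectByType((*types.VirtualDisk)(nil)) {
		disk := d.(*types.VirtualDisk)
		backing, ok := disk.Backing.(*types.VirtualDiskFlatVer2BackingInfo)
		if !ok {
			continue
		}
		// Placeholder match; in practice, match the exact VMDK of the affected PV.
		if strings.Contains(backing.FileName, "kubevols") {
			if err := vm.RemoveDevice(ctx, true /* keepFiles */, disk); err != nil {
				log.Fatal(err)
			}
			log.Printf("detached %s from %s", backing.FileName, vm.Name())
		}
	}
}
```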

Upstream issue: kubernetes/kubernetes#63577

This issue will be resolved with Kubernetes 1.12: kubernetes/kubernetes#63413 (comment)
