volume is in pending state in case of WaitForFirstConsumer #31
Comments
cc: @iyashu
@w3aman It may or may not be scheduled on node-2 in the above example. It is not guaranteed that the pod will land on the node with free storage space, because the kube-scheduler scores nodes on other parameters; it is not aware of the storage capacity accessible from each node at all. The important part here is that the pod keeps being retried for scheduling, and it may or may not land on a node with enough capacity to fit the PVC claim. Once we merge the storage capacity tracking pull request (#21), this problem will go away.
In the case of WaitForFirstConsumer, the pod is scheduled first based on the scheduler's own parameters and a node is selected, but LVM does not participate in Kubernetes scheduling. This means the node chosen by LVM and the one chosen by the kube-scheduler may not be the same.
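For context, this is roughly what the deferred-binding setup being discussed looks like. It is only a sketch: the provisioner name and parameter keys are assumptions based on the lvm-localpv documentation, not taken from this issue.

```yaml
# Sketch of a StorageClass that defers volume binding until a pod is scheduled.
# With WaitForFirstConsumer the kube-scheduler picks the node first, and the
# LVM driver must then provision on that node's volume group.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv          # assumed name, adjust to your setup
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"            # must exist on whichever node the scheduler selects
volumeBindingMode: WaitForFirstConsumer
```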
@w3aman As pointed out in #31 (comment), this problem should no longer exist since the linked PR was merged. Please check again and update/reopen if you still see any problem.
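For readers hitting this later: capacity tracking is switched on via the CSIDriver object, so the scheduler can skip nodes whose volume group cannot fit the claim. A minimal sketch follows, assuming the driver name registered by lvm-localpv (verify with `kubectl get csidriver` on your cluster).

```yaml
# Sketch of a CSIDriver object with storage capacity tracking enabled.
# When storageCapacity is true, the driver publishes CSIStorageCapacity
# objects and the scheduler avoids nodes without enough free space for
# the claim, instead of pinning the pod to a node that cannot fit it.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: local.csi.openebs.io   # assumed driver name
spec:
  attachRequired: false
  podInfoOnMount: true
  storageCapacity: true
```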
What steps did you take and what happened:
Volume group "lvmvg" has insufficient free space (7679 extents): 10240 required.
After this error, both the pod and the PVC stay in Pending. Will the driver not try to provision on node2 rather than node1?
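For reference, with LVM's default 4 MiB extent size the error above corresponds to roughly a 40Gi request against about 30Gi of free space in the volume group. A claim along these lines would produce that situation; the exact size and StorageClass name here are assumptions for illustration, not taken from the report.

```yaml
# Hypothetical PVC matching the numbers in the error: 10240 required extents
# at 4 MiB each is a 40Gi request, while 7679 free extents is roughly 30Gi
# available in "lvmvg" on node1.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-lvmpvc             # assumed name
spec:
  storageClassName: openebs-lvmpv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 40Gi
```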
pod describe:
controller log:
node1-agent: