What steps did you take and what happened:
The Kubernetes cluster has 3 nodes: node A has the volume group vg_hdd, node B has the volume group vg_hdd, and node C has the volume group vg_ssd.
All nodes have the same topology label.

PVC yaml
StorageClass yaml

After creating the PVC, the lvmvolume may be created on node A or node B; the scheduling algorithm does not take the volume group information into account when matching nodes, so the volume then fails to be created.

What did you expect to happen:
Not only topology information but also volume group information should be considered when creating lvmvolumes.
After scheduling, the volume group information should be checked.

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)

kubectl logs -f openebs-lvm-controller-0 -n kube-system -c openebs-lvm-plugin
kubectl logs -f openebs-lvm-node-[xxxx] -n kube-system -c openebs-lvm-plugin
kubectl get pods -n kube-system
kubectl get lvmvol -A -o yaml

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:
- Kubernetes version (kubectl version):
- OS (/etc/os-release):
> After creating the PVC, the lvmvolume may be created on node A or node B,
@zwForrest, if node A and node B don't have a volume group named vg_ssd*, a volume should never be created on those nodes. Can you check the volume groups available on those nodes? Also, note that a temporary vol object may be created with its Status as Pending/Failed, but it will be deleted automatically without ever transitioning into the Ready state.
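For reference, the volume groups each node exposes can be inspected with the standard LVM2 tools, and recent lvm-localpv releases also publish per-node LVMNode objects that report them; this is a small sketch, and the lvmnodes resource name and the kube-system namespace are assumptions taken from the commands quoted in the issue:

```sh
# On each node, list the LVM volume groups known to LVM2:
vgs

# Or, if the driver publishes LVMNode objects (recent lvm-localpv releases),
# inspect what the plugin itself has reported for every node:
kubectl get lvmnodes -n kube-system -o yaml
```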
I would recommend using delayed binding so that Kubernetes can leverage the storage info when making the scheduling decision. You can set volumeBindingMode: WaitForFirstConsumer in the StorageClass to enable that.
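For illustration, a minimal StorageClass sketch along the lines of that suggestion; the provisioner name local.csi.openebs.io and the storage/volgroup parameters follow the usual lvm-localpv examples, but the exact volume-group parameter (volgroup vs. vgpattern) depends on the driver version you run:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm-ssd
provisioner: local.csi.openebs.io
# Delay provisioning until a pod using the PVC is scheduled.
volumeBindingMode: WaitForFirstConsumer
parameters:
  storage: "lvm"
  volgroup: "vg_ssd"   # only nodes that actually carry vg_ssd can serve this class
```

With delayed binding the volume is provisioned only after the pod has been scheduled, so the node is chosen by the Kubernetes scheduler before the LVM volume is created rather than by the driver's own node selection.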