
lvmvolume is created without checking whether the lvmnode matches the vgpattern #174

Closed · zwForrest opened this issue Jan 17, 2022 · 4 comments
Labels: Backlog, project/community, question (Further information is requested)

zwForrest commented Jan 17, 2022

What steps did you take and what happened:
The Kubernetes cluster has 3 nodes: node A and node B each have the volume group vg_hdd, and node C has the volume group vg_ssd.
All nodes have the same topology label.

PVC yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-ssd
  namespace: openebs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: openebs-lvmpv-ssd

StorageClass yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv-ssd
parameters:
  storage: lvm
  vgpattern: vg_ssd*
provisioner: local.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: Immediate

After creating the PVC, the lvmvolume may be created on node A or node B; the scheduling algorithm does not use volume-group information for matching. The lvmvolume then fails to be created.

What did you expect to happen:

Not only topology information but also volume-group information needs to be considered when creating lvmvolumes.
After scheduling, the selected node's volume groups should be checked against the vgpattern, as illustrated below.
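
For reference, the per-node volume-group inventory the driver already records is exposed through the LVMNode custom resource, which the scheduler could consult. An abbreviated, illustrative object for node C (the field layout follows the lvm-localpv LVMNode CRD as I understand it; names and values are made up for this scenario):

apiVersion: local.openebs.io/v1alpha1
kind: LVMNode
metadata:
  name: node-c              # one LVMNode object per cluster node
  namespace: openebs
volumeGroups:               # volume groups discovered on this node
- name: vg_ssd              # the only VG in this cluster matching vgpattern vg_ssd*
  free: 100Gi               # illustrative capacity value

Matching the vgpattern from the StorageClass against the name entries here during (or right after) node selection would rule out node A and node B.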

The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)

  • kubectl logs -f openebs-lvm-controller-0 -n kube-system -c openebs-lvm-plugin
  • kubectl logs -f openebs-lvm-node-[xxxx] -n kube-system -c openebs-lvm-plugin
  • kubectl get pods -n kube-system
  • kubectl get lvmvol -A -o yaml

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • LVM Driver version
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
pawanpraka1 (Contributor) commented Jan 17, 2022

> After creating the PVC, the lvmvolume may be created on node A or node B

@zwForrest, if node A and node B don't have a volume group matching the name vg_ssd*, a volume should never be created on those nodes. Can you check the volume groups available on those nodes? Also, note that a temporary volume object may be created with the status Pending/Failed, but it will be deleted automatically without transitioning into the Ready state.

I would recommend using delayed binding so that Kubernetes can leverage storage information when making the scheduling decision. You can set volumeBindingMode: WaitForFirstConsumer in the StorageClass to do that; a sketch follows below.
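
A minimal sketch of that change, reusing the StorageClass from this report with only the binding mode swapped (all other fields unchanged):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv-ssd
parameters:
  storage: lvm
  vgpattern: vg_ssd*
provisioner: local.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer  # delay provisioning until a pod using the PVC is scheduled

With delayed binding, the volume is provisioned only after a consuming pod has been scheduled, so node selection happens before the lvmvolume is created.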

dsharma-dc added the question (Further information is requested) label on Jun 3, 2024
dsharma-dc (Contributor) commented:

@abhilashshetty04 Please take a look at this one as well.

abhilashshetty04 (Contributor) commented:

@zwForrest, we have created an enhancement ticket, #312, which also includes some additional suggestions. Please take a look.

pchandra19 commented:

This will be tracked further in #312.
