What steps did you take and what happened:
To reproduce (using microk8s for easy testing on a fresh EC2 instance, for example):
sudo snap install microk8s --classic --channel=1.31
truncate -s 5G /tmp/disk.img
# capture the loop device name that losetup prints (e.g. /dev/loopN)
LOOPDEV=$(sudo losetup -f /tmp/disk.img --show)
sudo pvcreate "$LOOPDEV"
sudo vgcreate lvmvg "$LOOPDEV" ## here lvmvg is the volume group name to be created
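At this point the volume group should exist. As an optional sanity check (not part of the original report), confirm LVM sees the loop device and the new group:

sudo pvs        # the loop device should be listed as a physical volume
sudo vgs lvmvg  # lvmvg should report roughly 5 GiB of free space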
microk8s helm repo add openebs https://openebs.github.io/openebs
microk8s helm install openebs --namespace openebs openebs/openebs --set engines.replicated.mayastor.enabled=false --set engines.local.zfs.enabled=false --create-namespace
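The helm install returns before the driver pods are ready; a quick check (not part of the original report) before applying the StorageClass below:

microk8s kubectl get pods -n openebs  # wait for the lvm-localpv controller and node pods to be Running

Then create a StorageClass that points the LVM CSI driver at the lvmvg volume group: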
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
volumeBindingMode: WaitForFirstConsumer ## note: when this is replaced with "Immediate" it creates a PV successfully, but the pod still can't use it for some reason
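The report doesn't include the PVC and pod manifests. A minimal consumer that would exercise this StorageClass might look like the following (names, image, and requested size are illustrative; see also the storageClassName question further down):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc
spec:
  storageClassName: openebs-lvm  # must match the StorageClass name above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: lvm-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: lvm-pvc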
In the pod events:
Warning FailedScheduling 99s default-scheduler 0/1 nodes are available: 1 node(s) did not have enough free storage. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
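When this event appears, two numbers are worth comparing (a suggested check, not from the original report): the free space LVM itself reports, and the capacity the driver advertised to the scheduler:

sudo vgs lvmvg                                   # free space as LVM sees it
microk8s kubectl get lvmnode -n openebs -o yaml  # capacity recorded by the LVM CSI node agent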
What did you expect to happen:
normal pod bootup
The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)
kubectl logs -f openebs-lvm-localpv-controller-7b6d6b4665-fk78q -n openebs -c openebs-lvm-plugin
kubectl logs -f openebs-lvm-localpv-node-[xxxx] -n openebs -c openebs-lvm-plugin
kubectl get pods -n openebs
kubectl get lvmvol -A -o yaml

Environment:
Kubernetes version (kubectl version): v1.31.2
OS (/etc/os-release): Ubuntu 24.04.1 LTS
The storageClassName in the PVC spec doesn't match the storageClass name you have shared. Is that a typo?
Is this a single-worker-node cluster? Can you share the output of kubectl get lvmnode -n openebs -o yaml?
We don't see any /CreateVolume calls in the csi-controller logs. Can you please share the complete log? Can you also share the csi-provisioner log after reproducing again?
What event do you see on the pod describe when volumeBindingMode is Immediate?
Following the docs here: https://openebs.io/docs/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation#prerequisites