BUG: lvmvolume is inconsistent with the actual allocated lv size #316
Comments
kubectl logs -n kube-system openebs-lvm-node-d8krs -c openebs-lvm-plugin
@bxy4543 Could you share the storage class and PVC YAML files here, or in a GitHub gist?
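For reference, a minimal sketch of the kind of StorageClass and PVC being asked for here. All names and sizes are illustrative placeholders (the volume group `lvmvg` and provisioner `local.csi.openebs.io` appear in the logs later in this thread); note that PVC resize requires `allowVolumeExpansion: true` on the storage class:

```yaml
# Hypothetical example; names and sizes are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm
provisioner: local.csi.openebs.io
allowVolumeExpansion: true        # required for PVC expansion to work
parameters:
  storage: "lvm"
  volgroup: "lvmvg"               # must match the VG present on the node
  fsType: "ext4"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: openebs-lvm
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 16Gi
```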
Controller version:
And I found this error:
The filesystem resize path seems to have run into an error while doing fsck, which left it in this state. I'll check this for some more clues.
Thank you for your support. After the underlying LV is expanded, the filesystem expansion step is reached. However, you can see that the LV on the test-node-003 node has not been successfully expanded.
I see that the storage class mentions
And I see that fsck comes from mount_linux.formatAndMountSensitive, which is referenced in NodePublishVolume.MountFilesystem, but my pod is still able to mount the PVC successfully.
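The fsck step in that path interprets e2fsck's exit code rather than failing outright. A minimal pure-shell sketch of e2fsck's documented exit-code convention (0 = clean, 1 = errors corrected, higher values = uncorrected or operational errors); the `classify_fsck` function is hypothetical, not from the OpenEBS code base:

```shell
#!/bin/sh
# Illustrative sketch of classifying an fsck exit code, mirroring
# e2fsck's documented convention. The function name is hypothetical.
classify_fsck() {
  case "$1" in
    0) echo "clean" ;;            # no errors found
    1) echo "corrected" ;;        # errors found and corrected
    *) echo "needs-attention" ;;  # 4 = uncorrected errors, 8 = operational error
  esac
}

classify_fsck 0   # -> clean
classify_fsck 1   # -> corrected
classify_fsck 8   # -> needs-attention (e.g. "is mounted", as in the log below)
```

An exit code of 8 (operational error) is what e2fsck returns when it refuses to check a mounted filesystem, matching the "Cannot continue, aborting." message in this thread.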
It was there for a while for backup, but it's gone now |
@bxy4543 If possible, please upload all the log files here as a zip.
The openebs-lvm-node-003 log is missing the logs from earlier timestamps for the volume.
Two questions:

And the same problem:
There are a few things observed here:
It would be good if you could share the versions of everything involved, i.e. Kubernetes, the OpenEBS plugin, host OS details, etc. NOTE: In the last comment, the PVC and LVMVolume look unrelated; they have different IDs.
Thanks, I'll share it later.
It would also be interesting to know how the edits are being made to the PVC, and whether a retry worked successfully or not. The error doesn't indicate any issue with this provisioner so far. The failure happened in Kubernetes API calls during resize because the PVC edit conflicted with some other concurrent operation on the PVC.
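The conflict described here is Kubernetes' optimistic-concurrency behavior: an update submitted with a stale resourceVersion is rejected with a conflict error, and the client is expected to re-read the object and retry. A minimal pure-shell sketch of such a retry loop, with a stubbed `update_pvc` standing in for the real API call (all names here are hypothetical):

```shell
#!/bin/sh
# Sketch of retry-on-conflict. `update_pvc` is a stub that fails twice
# (as a stale-resourceVersion update would) and then succeeds.
ATTEMPTS=0
update_pvc() {
  ATTEMPTS=$((ATTEMPTS + 1))
  [ "$ATTEMPTS" -ge 3 ]   # simulate: conflict, conflict, success
}

retry_on_conflict() {
  tries=0
  while [ "$tries" -lt 5 ]; do
    if update_pvc; then
      echo "updated after $ATTEMPTS attempt(s)"
      return 0
    fi
    tries=$((tries + 1))  # a real client would re-GET and reapply the change here
  done
  return 1
}

retry_on_conflict   # prints: updated after 3 attempt(s)
```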
Yes, but I wonder why, when the CSI controller failed and the retry succeeded, the LVMVolume was still inconsistent with the physical LV.
This was because the physical LV is expanded when
How can this situation be avoided? Why does patching the errored PVC not re-trigger the gRPC call?
I'd guess Kubernetes expects us to retry manually, as the message says.
pvc-c745ae87-6b56-4684-9cfb-9e21ca9517e5: 4Gi |
I think |
The one key difference in this LVM provisioner from others, e.g. openebs/Mayastor, is that
I haven't been able to reproduce the error yet using the script you provided, though.
Got it, looking forward to this being improved. I can always reproduce it in my environment.
Versions used:
@datacore-tilangovan, can you help reproduce this issue in your setup where you are benchmarking PostgreSQL?
Does this PR #260 help to address this issue? |
What steps did you take and what happened:
Due to insufficient PVC resources, the pod keeps restarting. After expanding the PVC, we found that the LV had not been successfully expanded, even though the node's remaining VG capacity is sufficient. Since the underlying LV was not expanded, the pod keeps restarting.
LV size: 1Gi, LVMVolume: 16Gi
What did you expect to happen:
LV size = LVMVolume CR size
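To confirm the mismatch, the LVMVolume CR capacity (stored in bytes) can be compared with the node's actual LV size from `lvs --units b --noheadings -o lv_size <lv>`. A pure-shell sketch of the unit conversion such a comparison needs; the `gi_to_bytes` helper is illustrative, not part of the plugin:

```shell
#!/bin/sh
# Convert a Kubernetes "Gi" quantity to bytes so it can be compared
# against `lvs --units b` output on the node.
gi_to_bytes() {
  echo $(( ${1%Gi} * 1024 * 1024 * 1024 ))
}

gi_to_bytes 16Gi   # -> 17179869184
gi_to_bytes 1Gi    # -> 1073741824
```

If the CR reports 16Gi but `lvs` reports 1073741824B, the LV was never expanded, which matches the symptom reported here.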
The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)
kubectl logs -f openebs-lvm-localpv-controller-7b6d6b4665-fk78q -n openebs -c openebs-lvm-plugin
(no errors)
kubectl logs -f openebs-lvm-localpv-node-[xxxx] -n openebs -c openebs-lvm-plugin
I0620 02:51:50.488174 1 grpc.go:72] GRPC call: /csi.v1.Node/NodePublishVolume requests {"target_path":"/var/lib/kubelet/pods/77cf54bb-be60-44e4-8ae0-bb3744b2f10e/volumes/kubernetes.io~csi/pvc-bcc852f8-7266-48c4-aedc-34adb523d224/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"xxx-mongodb-2","csi.storage.k8s.io/pod.namespace":"ns-gewclvtg","csi.storage.k8s.io/pod.uid":"77cf54bb-be60-44e4-8ae0-bb3744b2f10e","csi.storage.k8s.io/serviceAccount.name":"xxx","openebs.io/cas-type":"localpv-lvm","openebs.io/volgroup":"lvmvg","storage.kubernetes.io/csiProvisionerIdentity":"1718646815547-5148-local.csi.openebs.io"},"volume_id":"pvc-bcc852f8-7266-48c4-aedc-34adb523d224"}
I0620 02:51:50.534603 1 mount_linux.go:312] 'fsck' error: fsck from util-linux 2.38.1
/dev/mapper/lvmvg-pvc--bcc852f8--7266--48c4--aedc--34adb523d224 is mounted.
e2fsck: Cannot continue, aborting.
kubectl get pods -n openebs
(all pods healthy)
kubectl get lvmvol -A -o yaml
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
Kubernetes version (use kubectl version):
OS (e.g. from /etc/os-release):