LVM pvc infinitely unmounting #303
Hi @tarelda, I do see 1
But we see a delete event for the same volume later in the log.
A few questions:
I made a deployment for an app that requested a persistent volume through a PVC, with a StorageClass handled by the LVM plugin. Then I deleted it, because the mounts weren't being made in the pods. After that the pods were stuck in the Terminating state and the volumes were not deleted. Then I went to town and deleted everything manually (including PVCs). I did it a few times, hence I had multiple instances of volumes to be unmounted in the logs. As I recall, this behaviour persisted even through an OpenEBS redeployment with helm. Small clarification: by manual delete of the PVC I mean deleting through
What is strange: a few days later I finally figured out that when I was installing OpenEBS I hadn't corrected the kubelet dir paths in values.yml to match microK8s. Since then the logs finally cleaned up and volumes started to be mounted correctly. But I don't understand why the paths in the openebs-lvm-localpv-node pod logs were for the correct kubelet directory.
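For reference, on microk8s the kubelet root typically lives under /var/snap/microk8s/common/var/lib/kubelet, and the lvm-localpv chart exposes it as a value. A minimal sketch of the values.yml override, assuming the key name from the standalone lvm-localpv chart (it may be nested differently in the umbrella openebs chart, so verify against your chart version):

# values.yml override for microk8s (key name assumed from the standalone lvm-localpv chart)
lvmNode:
  kubeletDir: "/var/snap/microk8s/common/var/lib/kubelet/"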
@tarelda, happy to know that everything's fine now. Without setting the correct kubelet mount path for microk8s, the path never got mounted on the pod. I'm guessing that in the unmount workflow kubelet knows that it's a microk8s platform, so it supplies the correct path in the NodeUnpublishVolumeRequest when the pod was stuck in
Question:
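For context, the target_path that kubelet supplies in a CSI NodeUnpublishVolumeRequest is derived from the kubelet root directory; an illustrative example on microk8s (pod UID and PV name are placeholders) would look like:

/var/snap/microk8s/common/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount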
@abhilash, this can be closed, I'd think? If any further clarifying info is provided to report a similar issue, this or a new issue might be opened.
Closing this as per the above comment. |
What steps did you take and what happened:
I have created a simple registry deployment with one claim. Unfortunately, for some reason the volume is not getting mounted, and after I deleted the deployment and the PVC, the kubelet logs still show that it is trying to unmount it.
Also, pods from the deployment had to be manually deleted, because they were stuck in the Terminating state, probably because they wrote to the mountpoint (this was in the logs before, but I manually cleaned up the mountpoint directory).
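For anyone hitting the same state, a pod stuck in Terminating can usually be removed with a forced delete (pod name and namespace below are placeholders):

kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force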
What did you expect to happen:
I expected to have a clean environment to start over again. I don't know why it is still trying to unmount nonexistent volumes.
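One way to check whether stale CSI mounts are still present on the node is to grep the mount table for the kubernetes.io~csi path component that CSI volume mounts contain (adjust the kubelet dir for microk8s):

mount | grep kubernetes.io~csi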
The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other Pastebin is fine.)
kubectl logs -f openebs-lvm-localpv-controller-7b6d6b4665-fk78q -n openebs -c openebs-lvm-plugin
kubectl logs -f openebs-lvm-localpv-node-[xxxx] -n openebs -c openebs-lvm-plugin
I included only the repeating part here, but the full log is here.
kubectl get pods -n openebs
kubectl get lvmvol -A -o yaml
Anything else you would like to add:
I installed OpenEBS directly through helm to get version 4.0.1 instead of the microk8s default 3.10 that is installed through the addon.
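For reproducibility, the install would look roughly like this; the chart name matches the OpenEBS 4.x umbrella chart, but the value key for the kubelet dir is an assumption and should be verified against the chart's values.yaml:

helm repo add openebs https://openebs.github.io/openebs
helm repo update
helm install openebs openebs/openebs -n openebs --create-namespace --set lvm-localpv.lvmNode.kubeletDir="/var/snap/microk8s/common/var/lib/kubelet/"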
Environment:
Kubernetes version (kubectl version):
OS (e.g. from /etc/os-release): Ubuntu 22.04.4 LTS