Is there a way to mount a LV onto two pods? #281
@iPenx Do you want a volume to be used by two different pods? I don't think that is a recommended use case, since two applications writing to the same volume can cause data corruption. I will try it out and post the steps here if it's feasible.
Two pods on the same node, yes, that is supposed to be allowed by ReadWriteOnce, since RWO is enforced per node rather than per pod.
@ianroberts IIUC @iPenx wants to have two pods on the same node access the volume, which is indeed within what ReadWriteOnce allows.
No, it doesn't work for me either. I've got three pods, all forced onto the same node, each mounting the claim via:

```yaml
volumes:
  - name: shared-pv
    persistentVolumeClaim:
      claimName: coordination-service-data
```

The PVC is a RWO claim with my LVM Local PV storage class and it has been successfully bound to a PV, which has been mounted into the first of the three pods to start. But the other two are stuck pending and syslog on the relevant node is full of mount errors.
@ianroberts How about using the volume in Block mode? Would that help here?
Block mode would just give me a raw block device rather than a mounted filesystem. I'm essentially trying to use a single PV as a shared filesystem between containers in multiple pods.
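For context, a minimal sketch of what Block mode would look like (the PVC and pod names here are illustrative, not from this thread): the container is handed a raw device at `devicePath` with no filesystem mounted, which is why it doesn't address the shared-filesystem use case.

```yaml
# Sketch only: volumeMode: Block gives the pod a raw device, not a mounted filesystem.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: localpv-block-demo        # hypothetical name
spec:
  storageClassName: localpv-lvm-ephemeral
  volumeMode: Block               # raw block device instead of Filesystem (the default)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-demo                # hypothetical name
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:              # note: volumeDevices, not volumeMounts
        - name: data
          devicePath: /dev/xvda   # the raw device shows up here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: localpv-block-demo
```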
Ah, apparently there's a shared setting that can be set at the StorageClass level.
@ianroberts Let us know if that worked for you. There's a reason to keep that disabled by default, which is to ensure data safety. Although we might want to make it the default, since it's a valid use case.
Sadly it's not possible to edit the parameters of an existing StorageClass.
@ianroberts StorageClass parameters cannot be edited once created, but you can always create a new storage class, or recreate (delete and create) the existing one. A StorageClass is just a config used at volume creation time.
Ah, ok, this is a production cluster and I was worried that deleting the SC would affect the existing volumes that I very much don’t want to lose!
@ianroberts, the SC doesn't have any bearing on already provisioned PVCs. It is only read when a PVC referring to it is created.
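A minimal sketch of the delete-and-recreate approach, assuming the class name used later in this thread (existing PVs and bound PVCs are unaffected, since the class is only read at provisioning time):

```bash
# Save the current definition, then recreate it with the shared parameter enabled.
kubectl get sc localpv-lvm-ephemeral -o yaml > sc.yaml
# Edit sc.yaml: add `shared: "yes"` under .parameters and drop server-set fields
# (metadata.uid, metadata.resourceVersion, metadata.creationTimestamp).
kubectl delete sc localpv-lvm-ephemeral
kubectl apply -f sc.yaml
```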
Indeed - encouraged by the previous comments I deleted and re-created the SC with the shared setting enabled, and that has worked correctly. So I guess my original issue could be closed as “not planned”.
Yes, I have tried two pods and one PVC:

```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: localpv-data
spec:
  storageClassName: localpv-lvm-ephemeral
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  terminationGracePeriodSeconds: 1
  containers:
    - name: busybox
      image: busybox
      command:
        - cat
      args:
        - "-n"
      volumeMounts:
        - mountPath: /data
          name: data
      tty: true
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: localpv-data
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  terminationGracePeriodSeconds: 1
  containers:
    - name: busybox
      image: busybox
      command:
        - cat
      volumeMounts:
        - mountPath: /data
          name: data
      tty: true
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: localpv-data
```

Storage class:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
  name: localpv-lvm-ephemeral
parameters:
  storage: lvm
  volgroup: vg-localpv
provisioner: local.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
allowedTopologies:
  - matchLabelExpressions:
      - key: k8s.io/hostname
        values:
          - xxx
```

And the second pod gets an error event (`kubectl get event`).
I think that after the LV is mounted for the first running pod, the next pod gets a "device already" error because of ReadWriteOnce mode: in this mode the lvm-localpv agent mounts the LV on a pod's private path, like the one in the error above ("/var/lib/kubelet/pods/2eca2f78-bd5e-40c2-bfc8-75135e2e2585/volumes/kubernetes.io~csi/pvc-25dd8e5c-fd39-4735-b6c9-8f0bafc43d05/mount").
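If it helps to see this from the node side, one way to inspect where the LV ends up mounted (assuming shell access to the node; the PVC name is the one from the error above) is:

```bash
# Show the mount entries for this PVC; with shared disabled, only the first pod's
# private path under /var/lib/kubelet/pods/... is expected to appear.
findmnt -l | grep pvc-25dd8e5c-fd39-4735-b6c9-8f0bafc43d05
```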
@iPenx yes, we've established that you need to set `shared: "yes"` in the StorageClass parameters.
@ianroberts thank you for the quick reply. Let me try it.
It worked, thank you all. I added the `shared` parameter:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localpv-lvm-ephemeral
provisioner: local.csi.openebs.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  storage: lvm
  volgroup: vg-localpv
  shared: "yes"
allowVolumeExpansion: true
allowedTopologies:
  - matchLabelExpressions:
      - key: k8s.io/hostname
        values:
          - xxx
```

I closed this issue.
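To double-check the shared mount, one option (using the pod names from the manifests above; the probe file is just an illustration) is to write from one pod and read from the other:

```bash
# Both pods mount the same LV at /data, so a file written by pod-1 should be visible in pod-2.
kubectl exec pod-1 -- sh -c 'echo hello > /data/probe.txt'
kubectl exec pod-2 -- cat /data/probe.txt    # expect: hello
```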
In the big data ecosystem, it is very common for two pods in the same task to need to share a storage volume.
Is there a way for OpenEBS LVM LocalPV to mount an LV onto two pods on the same node?