Is it possible to reattach volume when kubernetes cluster recreated? #339

Closed
Ochita opened this issue Sep 26, 2024 · 1 comment
Ochita commented Sep 26, 2024

Hello. We are currently using this great product in our Kubernetes cluster without any problems. However, due to some hardware issues we had to reinstall the OS and lost the etcd data. We still have our data on other disks and recovered it easily with vgscan and related tools.
So: we have our manifests in a repo, and we have our data on the node.
How can we redeploy our software and reattach the existing volumes to Kubernetes while keeping the ability to manage them via lvm-localpv (resize, delete)? Is it possible to somehow manually fulfill a PVC with a desired volume?


Ochita commented Oct 16, 2024

So, it's possible to achieve this by recreating the PV and LVMVolume resources in the Kubernetes cluster. I used the following pipeline.

On each node, run:
- `lvs --units b` to get the size of each volume in bytes
- `lvs` to get the size of each volume in Gi
- `blkid /dev/openebs-lvmpv/*` to get the filesystem type of each volume (`openebs-lvmpv` is our volume group)
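The per-node collection step above can be scripted. A minimal sketch, assuming the `lvs` flags shown above plus `--noheadings -o lv_name,lv_size` for machine-readable output (the `parse_lvs_bytes` and `collect_volumes` helper names are my own):

```python
import subprocess

def parse_lvs_bytes(output: str) -> dict[str, int]:
    """Parse `lvs --units b --noheadings -o lv_name,lv_size` output
    into {lv_name: size_in_bytes}."""
    volumes = {}
    for line in output.strip().splitlines():
        name, size = line.split()
        # lvs prints byte sizes with a trailing "B" unit suffix
        volumes[name] = int(size.rstrip("B"))
    return volumes

def collect_volumes() -> dict[str, int]:
    # Run on each node; restrict the listing to the openebs-lvmpv VG
    out = subprocess.run(
        ["lvs", "--units", "b", "--noheadings", "-o", "lv_name,lv_size",
         "openebs-lvmpv"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_lvs_bytes(out)
```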

Then I created an LVMVolume resource for each volume using a template, filling in some values (`shared` and `thinProvision`) according to our StorageClass, which was in the manifests repo:

apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
  finalizers:
  - lvm.openebs.io/finalizer
  labels:
    kubernetes.io/nodename: {node_hostname}
  name: {lvm_name}
  namespace: openebs
spec:
  capacity: "{lvm_size}"
  ownerNodeID: {node_hostname}
  shared: "yes"
  thinProvision: "no"
  vgPattern: ^openebs-lvmpv$
  volGroup: openebs-lvmpv
status:
  state: Ready
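Note that the LVMVolume `capacity` above is in bytes, while the PV template below uses Gi. A small conversion helper (my own naming), assuming you want to round up so the PV never advertises less than the actual LV size:

```python
def bytes_to_gi(size_bytes: int) -> int:
    """Convert an LV size in bytes to whole GiB for the PV capacity field.

    Uses ceiling division so a size that is not an exact multiple of
    1 GiB rounds up rather than down.
    """
    gib = 1024 ** 3
    return -(-size_bytes // gib)  # ceiling division via negation
```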

Then I recreated all the PVs using a template:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: {lvm_name}
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: {lvm_size_g}Gi
  csi:
    driver: local.csi.openebs.io
    fsType: {lvm_fs_type}
    volumeHandle: {lvm_name}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: openebs.io/nodename
              operator: In
              values:
                - {node_hostname}
  persistentVolumeReclaimPolicy: Retain
  storageClassName: {storageclass}
  volumeMode: Filesystem
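Rendering the PV template can be done with plain string substitution. A minimal sketch using Python's `string.Template` with the placeholder names from the template above rewritten in `$name` form (`render_pv` is my own helper name):

```python
from string import Template

# Placeholders mirror the {name} fields used in the PV template above.
PV_TEMPLATE = Template("""\
apiVersion: v1
kind: PersistentVolume
metadata:
  name: $lvm_name
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: ${lvm_size_g}Gi
  csi:
    driver: local.csi.openebs.io
    fsType: $lvm_fs_type
    volumeHandle: $lvm_name
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: openebs.io/nodename
              operator: In
              values:
                - $node_hostname
  persistentVolumeReclaimPolicy: Retain
  storageClassName: $storageclass
  volumeMode: Filesystem
""")

def render_pv(lvm_name, lvm_size_g, lvm_fs_type, node_hostname, storageclass):
    """Fill the PV template with the values collected per node."""
    return PV_TEMPLATE.substitute(
        lvm_name=lvm_name, lvm_size_g=lvm_size_g, lvm_fs_type=lvm_fs_type,
        node_hostname=node_hostname, storageclass=storageclass,
    )
```

Each rendered manifest can then be applied with `kubectl apply -f -`.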

After that I deployed all the manifests of our apps, and it worked well. Note that we had previously configured node labels for our services, and all our services use volumes of different sizes, so it was easy for Kubernetes to bind the right volume to each service using just the PVC.

Ochita closed this as completed Oct 16, 2024