feat(lvm-driver): enable RAID support
Add support for LVM2 RAID types and parameters, with sane defaults
for backwards compatibility. lvm-driver now treats an unspecified
RAID type as the previous default of linear RAID, where data is
packed onto a disk until it runs out of space, continuing to the
next disk as necessary.

Tests have been added to cover the main supported RAID types (e.g.
raid0, raid1, raid5, raid6, and raid10), but technically any valid
LVM RAID type should work as well.

Fixes openebs#164

Signed-off-by: Nicholas Cioli <[email protected]>
nicholascioli committed Nov 4, 2023
1 parent 9e0ac5b commit 32dec67
Showing 15 changed files with 648 additions and 2 deletions.
3 changes: 2 additions & 1 deletion buildscripts/build.sh
@@ -96,12 +96,13 @@ output_name="bin/${PNAME}/"$GOOS"_"$GOARCH"/"$CTLNAME
if [ $GOOS = "windows" ]; then
output_name+='.exe'
fi
env GOOS=$GOOS GOARCH=$GOARCH go build -ldflags \
env GOOS=$GOOS GOARCH=$GOARCH CGO_ENABLED=0 go build -ldflags \
"-X github.com/openebs/lvm-localpv/pkg/version.GitCommit=${GIT_COMMIT} \
-X main.CtlName='${CTLNAME}' \
-X github.com/openebs/lvm-localpv/pkg/version.Version=${VERSION} \
-X github.com/openebs/lvm-localpv/pkg/version.VersionMeta=${VERSION_META}"\
-o $output_name\
-installsuffix cgo \
./cmd

echo ""
1 change: 1 addition & 0 deletions changelogs/unreleased/164-nicholascioli
@@ -0,0 +1 @@
add support for LVM raid options
51 changes: 51 additions & 0 deletions ci/ci-test.sh
@@ -33,6 +33,12 @@ fi
FOREIGN_LVM_SYSTEMID="openebs-ci-test-system"
FOREIGN_LVM_CONFIG="global{system_id_source=lvmlocal}local{system_id=${FOREIGN_LVM_SYSTEMID}}"

# RAID info for corresponding tests
RAID_COUNT=5

# Clean up generated resources for successive tests.
cleanup_loopdev() {
sudo losetup -l | grep '(deleted)' | awk '{print $1}' \
@@ -60,13 +66,28 @@ cleanup_foreign_lvmvg() {
cleanup_loopdev
}

cleanup_raidvg() {
sudo vgremove raidvg -y || true

for IMG in `seq ${RAID_COUNT}`
do
if [ -f /tmp/openebs_ci_raid_disk_${IMG}.img ]
then
rm /tmp/openebs_ci_raid_disk_${IMG}.img
fi
done

cleanup_loopdev
}

cleanup() {
set +e

echo "Cleaning up test resources"

cleanup_lvmvg
cleanup_foreign_lvmvg
cleanup_raidvg

kubectl delete pvc -n openebs lvmpv-pvc
kubectl delete -f "${SNAP_CLASS}"
@@ -93,10 +114,40 @@ foreign_disk="$(sudo losetup -f /tmp/openebs_ci_foreign_disk.img --show)"
sudo pvcreate "${foreign_disk}"
sudo vgcreate foreign_lvmvg "${foreign_disk}" --config="${FOREIGN_LVM_CONFIG}"

# setup a RAID volume group
cleanup_raidvg
raid_disks=()
for IMG in `seq ${RAID_COUNT}`
do
truncate -s 1024G /tmp/openebs_ci_raid_disk_${IMG}.img
raid_disk="$(sudo losetup -f /tmp/openebs_ci_raid_disk_${IMG}.img --show)"
sudo pvcreate "${raid_disk}"

raid_disks+=("${raid_disk}")
done
sudo vgcreate raidvg "${raid_disks[@]}"

# install snapshot and thin volume module for lvm
sudo modprobe dm-snapshot
sudo modprobe dm_thin_pool

# install RAID modules for lvm
sudo modprobe dm_raid
sudo modprobe dm_integrity

# Prepare env for running BDD tests
# Minikube is already running
kubectl apply -f "${LVM_OPERATOR}"
48 changes: 48 additions & 0 deletions deploy/lvm-operator.yaml
@@ -95,12 +95,47 @@ spec:
description: Capacity of the volume
minLength: 1
type: string
integrity:
description: Integrity specifies whether logical volumes should be
checked for integrity. If it is set to "yes", then the LVM LocalPV
Driver will enable DM integrity for the logical volume
enum:
- "yes"
- "no"
type: string
lvcreateoptions:
description: LvCreateOptions are extra options for creating a volume.
Options should be separated by ; e.g. "--vdo;--readahead;auto"
type: string
mirrors:
description: Mirrors specifies the mirror count for a RAID configuration.
minimum: 0
type: integer
nosync:
description: NoSync enables the `--nosync` option of a RAID volume.
If it is set to "yes", then LVM will skip drive sync when creating
the mirrors. Defaults to "no"
enum:
- "yes"
- "no"
type: string
ownerNodeID:
description: OwnerNodeID is the Node ID where the volume group is
present which is where the volume has been provisioned. OwnerNodeID
can not be edited after the volume has been provisioned.
minLength: 1
type: string
raidtype:
description: RaidType specifies the type of RAID for the logical volume.
Defaults to linear, if unspecified.
enum:
- linear
- raid0
- raid1
- raid5
- raid6
- raid10
type: string
shared:
description: Shared specifies whether the volume can be shared among
multiple pods. If it is not set to "yes", then the LVM LocalPV Driver
@@ -109,6 +144,18 @@ spec:
- "yes"
- "no"
type: string
stripecount:
description: StripeCount specifies the stripe count for a RAID configuration.
This is equal to the number of physical volumes to scatter the logical
volume
minimum: 0
type: integer
stripesize:
description: StripeSize specifies the size of a stripe for a RAID
configuration. Must be a power of 2 but must not exceed the physical
extent size
minimum: 0
type: integer
thinProvision:
description: ThinProvision specifies whether logical volumes can be
thinly provisioned. If it is set to "yes", then the LVM LocalPV
@@ -129,6 +176,7 @@ spec:
required:
- capacity
- ownerNodeID
- raidtype
- vgPattern
- volGroup
type: object
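
For reference, a hypothetical `LVMVolume` resource exercising the new fields might look
like the following sketch (the apiVersion, kind, and all values here are illustrative
assumptions based on this CRD, not output from the driver):

```yaml
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
  name: pvc-0a1b2c3d        # hypothetical volume name
  namespace: openebs
spec:
  capacity: "10737418240"   # 10 GiB in bytes, as a string
  ownerNodeID: node-1
  vgPattern: "^raidvg$"
  volGroup: raidvg
  raidtype: raid1
  mirrors: 1
  integrity: "yes"
```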
48 changes: 48 additions & 0 deletions deploy/yamls/lvmvolume-crd.yaml
@@ -74,12 +74,47 @@ spec:
description: Capacity of the volume
minLength: 1
type: string
integrity:
description: Integrity specifies whether logical volumes should be
checked for integrity. If it is set to "yes", then the LVM LocalPV
Driver will enable DM integrity for the logical volume
enum:
- "yes"
- "no"
type: string
lvcreateoptions:
description: LvCreateOptions are extra options for creating a volume.
Options should be separated by ; e.g. "--vdo;--readahead;auto"
type: string
mirrors:
description: Mirrors specifies the mirror count for a RAID configuration.
minimum: 0
type: integer
nosync:
description: NoSync enables the `--nosync` option of a RAID volume.
If it is set to "yes", then LVM will skip drive sync when creating
the mirrors. Defaults to "no"
enum:
- "yes"
- "no"
type: string
ownerNodeID:
description: OwnerNodeID is the Node ID where the volume group is
present which is where the volume has been provisioned. OwnerNodeID
can not be edited after the volume has been provisioned.
minLength: 1
type: string
raidtype:
description: RaidType specifies the type of RAID for the logical volume.
Defaults to linear, if unspecified.
enum:
- linear
- raid0
- raid1
- raid5
- raid6
- raid10
type: string
shared:
description: Shared specifies whether the volume can be shared among
multiple pods. If it is not set to "yes", then the LVM LocalPV Driver
@@ -88,6 +123,18 @@ spec:
- "yes"
- "no"
type: string
stripecount:
description: StripeCount specifies the stripe count for a RAID configuration.
This is equal to the number of physical volumes to scatter the logical
volume
minimum: 0
type: integer
stripesize:
description: StripeSize specifies the size of a stripe for a RAID
configuration. Must be a power of 2 but must not exceed the physical
extent size
minimum: 0
type: integer
thinProvision:
description: ThinProvision specifies whether logical volumes can be
thinly provisioned. If it is set to "yes", then the LVM LocalPV
Expand All @@ -108,6 +155,7 @@ spec:
required:
- capacity
- ownerNodeID
- raidtype
- vgPattern
- volGroup
type: object
129 changes: 129 additions & 0 deletions design/lvm/storageclass-parameters/raid.md
@@ -0,0 +1,129 @@
---
title: LVM-LocalPV RAID
authors:
- "@nicholascioli"
owners: []
creation-date: 2023-11-04
last-updated: 2023-11-04
status: Implemented
---

# LVM-LocalPV RAID

## Table of Contents
- [LVM-LocalPV RAID](#lvm-localpv-raid)
- [Table of Contents](#table-of-contents)
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non Goals](#non-goals)
- [Proposal](#proposal)
- [User Stories](#user-stories)
- [Implementation Details](#implementation-details)
- [Usage details](#usage-details)
- [Test Plan](#test-plan)
- [Graduation Criteria](#graduation-criteria)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)


## Summary

This proposal charts out the workflow details to support creation of RAID volumes.

## Motivation

### Goals

- Able to provision RAID volumes in a VolumeGroup.
- Able to specify VolumeGroup-specific RAID options for all sub volumes.
- Able to specify extra options for all volumes in a VolumeGroup.

### Non Goals

- Validating combinations of RAID types / options.

## Proposal

### User Stories

- RAIDed volumes provide data redundancy and can mitigate data loss due to individual drive failures.
- Ability to specify extra arguments for VolumeGroups allows for user customizations without needing
  to rework k8s schemas.

### Implementation Details

- User/Admin has to set RAID-specific options under storageclass parameters, which
  are used when creating volumes in the VolumeGroup.
- At volume provisioning time, the external-provisioner will read all key-value pairs
  that are specified under the referenced storageclass and pass the information to the CSI
  driver as the payload for the `CreateVolume` gRPC request.
- After receiving the `CreateVolume` request, the CSI driver will pick an appropriate node based
  on scheduling attributes (like topology information, matching VG name, and available capacity)
  and create an LVM volume resource, setting `Spec.RaidType` to a valid type along with other properties.
- Once the LVMVolume resource is created, the corresponding node's LVM volume controller reconciles
  the LVM volume resource in the following way:
  - The LVM controller will check the `Spec.RaidType` field; if it is set to anything other
    than `linear`, then the controller will perform the following operations:
    - Fetch information about the existence of a matching VolumeGroup.
    - If there is a VolumeGroup named `<vg_name>`, then the controller will create a volume.
      The command used to create the volume is `lvcreate --type <RAID_TYPE> --raidintegrity <INTEGRITY> --nosync ... <LVCREATEOPTIONS> -y`
      (a concrete sketch is shown after this list).
    - If volume creation is successful, then the controller will mark the LVM volume resource as `Ready`.
- After watching for the `Ready` status, the CSI driver will return a success response to the
  `CreateVolume` gRPC request.
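
A minimal sketch of the resulting `lvcreate` call for a `raid1` volume follows (the VG
name, LV name, and size are hypothetical placeholders; LVM itself validates whether a
given flag combination is allowed, per the [Drawbacks](#drawbacks) section):

```sh
# Hypothetical invocation for a raid1 LV with DM integrity on a VG named "lvmvg".
# --raidintegrity requires a reasonably recent LVM2 release.
sudo lvcreate -y \
  --type raid1 \
  --mirrors 1 \
  --raidintegrity y \
  --nosync \
  --size 10G \
  --name pvc-0a1b2c3d \
  lvmvg
```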

### Usage details

1. User/Admin can configure the following options under the storageclass parameters.

Option | Required | Valid Values | Description
-------|----------|--------------|-------------------
`type` | `true` | `raid0` / `stripe`, `raid` / `raid1` / `mirror`, `raid5`, `raid6`, `raid10` | The RAID type of the volume.
`integrity` | `false` | `true`, `false` | Whether or not to enable DM integrity for the volume. Defaults to `false`.
`mirrors` | depends | [0, ∞) | Mirror count. Certain RAID configurations require this to be set.
`nosync` | `false` | `true`, `false` | Whether or not to disable the initial sync. Defaults to false.
`stripecount` | depends | [0, ∞) | Stripe count. Certain RAID configurations require this to be set.
`stripesize` | `false` | [0, ∞) (but must be a power of 2) | The size of each stripe. If not specified, LVM will choose a sane default.
`lvcreateoptions` | `false` | String, delimited by `;` | Extra options to be passed to LVM when creating volumes.

An example is shown below:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: openebs-lvm
provisioner: local.csi.openebs.io
parameters:
storage: "lvm"
volgroup: "lvmvg"
raidType: "raid1"
lvcreateoptions: "--vdo;--readahead auto"
```
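
A striped configuration is specified analogously. The following is a sketch (the class
name and parameter values are hypothetical; StorageClass parameter values must be strings):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm-raid5
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
  raidType: "raid5"
  stripecount: "4"
  stripesize: "64"
```
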
### Test Plan
- Provision an application on various RAID configurations, verify volume accessibility from application,
and verify that `lvs` reports correct RAID information.
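
One way to spot-check the provisioned layout on the node is to ask `lvs` for the segment
type and stripe fields (a sketch; the VG name matches the CI script above):

```sh
# Show RAID segment type, stripe count, and stripe size for LVs in "raidvg".
sudo lvs -o lv_name,segtype,stripes,stripe_size raidvg
```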

## Graduation Criteria

All test cases mentioned in the [Test Plan](#test-plan) section need to be automated.

## Drawbacks

- Since the RAID options exist at the storageclass level, changes to the storageclass
  RAID options are not possible without custom logic per RAID type or manual
  operator interactions.
- Validation of the RAID options depends on the version of LVM2 installed as well as
  the type of RAID used and its options. This is outside the scope of these changes,
  so users will have to debug with a fine-toothed comb to see why certain
  options do not work together or on their specific machine.

## Alternatives

RAID can be done in either software or hardware, with many off-the-shelf products
including built-in hardware solutions. There are also other software RAID alternatives
that can be used below LVM, such as mdadm.

These alternatives unfortunately require operators to decouple the StorageClass from
the RAID configuration, but they do reduce the amount of code maintained by this project.