A Container Storage Interface (CSI) driver for cloudscale.ch volumes. The CSI plugin allows you to use cloudscale.ch volumes with your preferred Container Orchestrator.
The cloudscale.ch CSI plugin is mostly tested on Kubernetes. In theory, it should also work on other Container Orchestrators like Mesos or Cloud Foundry. Feel free to test it on other COs and give us feedback.
```console
# Add a cloudscale.ch API token as secret, replace the placeholder string starting with `a05...` with your own secret
$ kubectl -n kube-system create secret generic cloudscale --from-literal=access-token=a05dd2f26b9b9ac2asdas__REPLACE_ME____123cb5d1ec17513e06da

# Add the Helm repository
$ helm repo add csi-cloudscale https://cloudscale-ch.github.io/csi-cloudscale

# Install the driver
$ helm install -n kube-system -g csi-cloudscale/csi-cloudscale
```
This plugin supports the following volume parameters (in case of Kubernetes: parameters on the `StorageClass` object):

- `csi.cloudscale.ch/volume-type`: `ssd` or `bulk`; defaults to `ssd` if not set
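For illustration, a custom `StorageClass` that requests bulk volumes could be sketched as follows. The class name is hypothetical, and the bundled storage classes described further down already cover the common cases:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-bulk-class                    # hypothetical name, choose your own
provisioner: csi.cloudscale.ch
parameters:
  csi.cloudscale.ch/volume-type: bulk    # ssd (default) or bulk
```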
For LUKS encryption:

- `csi.cloudscale.ch/luks-encrypted`: set to the string `"true"` if the volume should be encrypted with LUKS
- `csi.cloudscale.ch/luks-cipher`: cipher to use; must be supported by the kernel and LUKS, we suggest `aes-xts-plain64`
- `csi.cloudscale.ch/luks-key-size`: key size to use; we suggest `512` for `aes-xts-plain64`
For LUKS encrypted volumes, a secret that contains the LUKS key needs to be referenced through the `csi.storage.k8s.io/node-stage-secret-name` and `csi.storage.k8s.io/node-stage-secret-namespace` parameters. See the included `StorageClass` definitions and the `examples/kubernetes/luks-encrypted-volumes` folder for examples.
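For orientation only, a LUKS-enabled `StorageClass` might look roughly like the sketch below. The class name and the exact secret templating are assumptions; treat the bundled definitions as the authoritative reference:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-ssd-luks-class                              # hypothetical name
provisioner: csi.cloudscale.ch
parameters:
  csi.cloudscale.ch/volume-type: ssd
  csi.cloudscale.ch/luks-encrypted: "true"
  csi.cloudscale.ch/luks-cipher: aes-xts-plain64
  csi.cloudscale.ch/luks-key-size: "512"
  # Assumed templating: the shipped classes expect a ${pvc.name}-luks-key secret
  csi.storage.k8s.io/node-stage-secret-name: ${pvc.name}-luks-key
  csi.storage.k8s.io/node-stage-secret-namespace: ${pvc.namespace}
```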
The default deployment bundled in the `deploy/kubernetes/releases` folder includes the following storage classes:

- `cloudscale-volume-ssd` - the default storage class; uses an ssd volume, no LUKS encryption
- `cloudscale-volume-bulk` - uses a bulk volume, no LUKS encryption
- `cloudscale-volume-ssd-luks` - uses an ssd volume that will be encrypted with LUKS; a LUKS key must be supplied
- `cloudscale-volume-bulk-luks` - uses a bulk volume that will be encrypted with LUKS; a LUKS key must be supplied
To use one of the shipped LUKS storage classes, you need to create a secret named `${pvc.name}-luks-key` in the same namespace as the persistent volume claim. The secret must contain an element called `luksKey` that will be used as the LUKS encryption key.

Example: If you create a persistent volume claim with the name `my-pvc`, you need to create a secret `my-pvc-luks-key`, as sketched below.
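A minimal sketch of such a secret; the namespace and the key value are placeholders you need to adapt:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-pvc-luks-key        # must match ${pvc.name}-luks-key for a PVC named my-pvc
  namespace: default           # same namespace as the persistent volume claim
stringData:
  luksKey: "REPLACE_WITH_YOUR_LUKS_KEY"   # placeholder encryption key
```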
The cloudscale.ch CSI plugin follows semantic versioning. The current version is: `v3.5.6`.

- Bug fixes will be released as a `PATCH` update.
- New features (such as CSI spec bumps) will be released as a `MINOR` update.
- Significant breaking changes will be released as a `MAJOR` update.
The following table describes the required cloudscale.ch driver version per Kubernetes release. We recommend using the latest cloudscale.ch CSI driver compatible with your Kubernetes release.
Kubernetes Release | Minimum cloudscale.ch CSI driver | Maximum cloudscale.ch CSI driver |
---|---|---|
<= 1.16 | v1.3.1 | |
1.17 | v1.3.1 | v3.0.0 |
1.18 | v1.3.1 | v3.3.0 |
1.19 | v1.3.1 | v3.3.0 |
1.20 | v2.0.0 | v3.5.2 |
1.21 | v2.0.0 | v3.5.2 |
1.22 | v3.1.0 | v3.5.2 |
1.23 | v3.1.0 | v3.5.2 |
1.24 | v3.1.0 | v3.5.6 |
1.25 | v3.3.0 | v3.5.6 |
1.26 | v3.3.0 | v3.5.6 |
1.27 | v3.3.0 | v3.5.6 |
1.28 | v3.3.0 | v3.5.6 |
1.29 | v3.3.0 | v3.5.6 |
1.30 | v3.3.0 | v3.5.6 |
1.31 | v3.3.0 | v3.5.6 |
Requirements:

- Nodes must be able to access the metadata service at `169.254.169.254` using HTTP. The required route is pushed by DHCP.
- The `--allow-privileged` flag must be set to true for both the API server and the kubelet.
- (If you use Docker) the Docker daemon of the cluster nodes must allow shared mounts.
- If you want to use LUKS encrypted volumes, the kernel on your nodes must have support for the device mapper infrastructure with the `crypt` target and the appropriate cryptographic APIs; a quick check is sketched below.
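As a rough sanity check on a node, assuming a modular kernel where `dm-crypt` is not built in, you can verify that the crypt target is available:

```console
# Load the dm-crypt module and confirm the crypt target is registered
$ sudo modprobe dm_crypt
$ sudo dmsetup targets | grep crypt
```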
Replace the placeholder string starting with `a05...` with your own secret and save it as `secret.yml`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloudscale
  namespace: kube-system
stringData:
  access-token: "a05dd2f26b9b9ac2asdas__REPLACE_ME____123cb5d1ec17513e06da"
```
and create the secret using kubectl:

```console
$ kubectl create -f ./secret.yml
secret "cloudscale" created
```

You should now see the cloudscale secret in the `kube-system` namespace along with other secrets:

```console
$ kubectl -n kube-system get secrets
NAME                  TYPE                                  DATA      AGE
default-token-jskxx   kubernetes.io/service-account-token   3         18h
cloudscale            Opaque                                1         18h
```
You can install the CSI plugin and sidecars using one of the following methods:
- Helm (requires a Helm installation)
- YAML Manifests (only kubectl required)
Before you can install the csi-cloudscale chart, you need to add the helm repository:

```console
$ helm repo add csi-cloudscale https://cloudscale-ch.github.io/csi-cloudscale
```

Then install the latest stable version:

```console
$ helm install -n kube-system -g csi-cloudscale/csi-cloudscale
```
Advanced users can customize the installation by specifying custom values. The following table summarizes the most frequently used parameters. For a complete list, please refer to `values.yaml`.
Parameter | Default | Description |
---|---|---|
attacher.resources | {} | Resource limits and requests for the attacher side-car. |
cloudscale.apiUrl | https://api.cloudscale.ch/ | URL of the cloudscale.ch API. You can almost certainly use the default. |
cloudscale.max_csi_volumes_per_node | 125 | Override the maximum number of CSI volumes per node. |
cloudscale.token.existingSecret | cloudscale | Name of the Kubernetes Secret which contains the cloudscale.ch API token. |
controller.resources | {} | Resource limits and requests for the controller container. |
controller.serviceAccountName | null | Override the controller service account name. |
driverRegistrar.resources | {} | Resource limits and requests for the driverRegistrar side-car. |
extraDeploy | [] | Extra objects to deploy together with the driver. |
nameOverride | null | Override the default {{ .Release.Name }}-csi-cloudscale name pattern with a custom name. |
node.resources | {} | Resource limits and requests for the node container. |
node.serviceAccountName | null | Override the node service account name. |
node.tolerations | [] | Set tolerations on the node DaemonSet. |
provisioner.resources | {} | Resource limits and requests for the provisioner side-car. |
resizer.resources | {} | Resource limits and requests for the resizer side-car. |
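Individual parameters from the table can be overridden with `--set`; the values shown here are purely illustrative:

```console
$ helm install -n kube-system -g \
    --set cloudscale.max_csi_volumes_per_node=25 \
    --set cloudscale.token.existingSecret=cloudscale \
    csi-cloudscale/csi-cloudscale
```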
Note: if you want to test a debug/dev release, you can use the following command:

```console
$ helm install -g -n kube-system --set controller.image.tag=dev --set node.image.tag=dev --set controller.image.pullPolicy=Always --set node.image.pullPolicy=Always ./charts/csi-cloudscale
```
Before you continue, be sure to check out a tagged release. Always use the latest stable version.

For example, to use the latest stable version (`v3.5.6`) you can execute the following command:

```console
$ kubectl apply -f https://raw.githubusercontent.com/cloudscale-ch/csi-cloudscale/master/deploy/kubernetes/releases/csi-cloudscale-v3.5.6.yaml
```
The storage classes `cloudscale-volume-ssd` and `cloudscale-volume-bulk` will be created. The storage class `cloudscale-volume-ssd` is set to "default" for dynamic provisioning. If you're using multiple storage classes, you might want to remove the annotation and re-deploy it; one possible way is sketched below. This is based on the recommended mechanism of deploying CSI drivers on Kubernetes.
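Assuming the default is marked with the standard `storageclass.kubernetes.io/is-default-class` annotation, one way to unset it is:

```console
$ kubectl patch storageclass cloudscale-volume-ssd \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```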
Create a PersistentVolumeClaim. This makes sure a volume is created and provisioned on your behalf:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: cloudscale-volume-ssd
```
Check that a new `PersistentVolume` is created based on your claim:

```console
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM             STORAGECLASS            REASON    AGE
pvc-0879b207-9558-11e8-b6b4-5218f75c62b9   5Gi        RWO            Delete           Bound     default/csi-pvc   cloudscale-volume-ssd             3m
```
The above output means that the CSI plugin successfully created (provisioned) a new volume on your behalf. You should be able to see the newly created volume in the server detail view in the cloudscale.ch UI.

The volume is not attached to any node yet. It will only be attached to a node once a workload (e.g. a Pod) is scheduled to that node. Now let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted into the specified container:
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: my-cloudscale-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-cloudscale-volume
      persistentVolumeClaim:
        claimName: csi-pvc
```
Check if the pod is running successfully:

```console
$ kubectl describe pods/my-csi-app
```

Write inside the app container:

```console
$ kubectl exec -ti my-csi-app /bin/sh
/ # touch /data/hello-world
/ # exit
$ kubectl exec -ti my-csi-app /bin/sh
/ # ls /data
hello-world
```
When updating from csi-cloudscale v1.x to v2.x please note the following:

- Ensure that all API objects of the existing v1.x installation are removed. The easiest way to achieve this is by running `kubectl delete -f <old version>` before installing the new driver version.
- Prior to the installation of v2.x, existing persistent volumes (PVs) must be annotated with `"pv.kubernetes.io/provisioned-by=csi.cloudscale.ch"`. You can use this script or any other means to set the annotation; one possibility is sketched after this list.
- If you are using self-defined storage classes: change the storage class provisioner names from `"ch.cloudscale.csi"` to `"csi.cloudscale.ch"`.
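One way to set the annotation on all existing PVs in a single command; adjust the selection if only some of your PVs were provisioned by this driver:

```console
$ kubectl annotate pv --all --overwrite "pv.kubernetes.io/provisioned-by=csi.cloudscale.ch"
```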
When updating from csi-cloudscale v2.x to v3.x please note the following:

- The node label `region` was renamed to `csi.cloudscale.ch/zone`.
- The new release adds the `csi.cloudscale.ch/zone` label to all nodes (existing ones as well as new ones added after the upgrade).
- The `region` label will stay in place for existing nodes and not be added to new nodes. It can be safely removed from all nodes from a `csi-cloudscale` driver perspective.
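If you want to clean up the old label, and assuming nothing else in your cluster relies on it, one way to remove it from all nodes is:

```console
$ kubectl label nodes --all region-
```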
Please use the following options with care.
In the `v1.3.0` release the default CSI volumes per node limit has been increased to 125 (previously 23). To take advantage of the higher CSI limit you must ensure that all your cluster nodes are using `virtio-scsi` devices (i.e. `/dev/sdX` devices are used). This is the default for servers created after October 1st, 2020.

If you want to use a different value, for example because one of your nodes does not use `virtio-scsi`, you can set the following environment variable for the `csi-cloudscale-plugin` container in the `csi-cloudscale-node` DaemonSet:
```yaml
env:
  - name: CLOUDSCALE_MAX_CSI_VOLUMES_PER_NODE
    value: '10'
```

Or use the `cloudscale.max_csi_volumes_per_node` value of the Helm chart.
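With Helm this could look like the following; the value is illustrative:

```console
$ helm install -n kube-system -g --set cloudscale.max_csi_volumes_per_node=10 csi-cloudscale/csi-cloudscale
```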
Note that there are currently the following hard limits per node:

- 26 volumes (including root) for `virtio-blk` (`/dev/vdX`).
- 128 volumes (including root) for `virtio-scsi` (`/dev/sdX`).
Requirements:

- Go: min `v1.10.x`
- Helm

Build out the `charts/` directory from the `Chart.lock` file:

```console
$ cd charts/csi-cloudscale/
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm dependency build charts/csi-cloudscale
```

Install the chart from local sources:

```console
$ helm install -n kube-system -g ./charts/csi-cloudscale
```
Useful commands to compare the generated helm chart to the static YAML manifests:
```console
$ helm template csi-cloudscale --dry-run -n kube-system --set nameOverride=csi-cloudscale charts/csi-cloudscale | kubectl-slice -f - -o deploy/kubernetes/releases/generated
$ kubectl-slice -f deploy/kubernetes/releases/csi-cloudscale-v6.0.0.yaml -o deploy/kubernetes/releases/v3
```
After making your changes, run the unit tests:

```console
$ make test
```

Note: If you want to run just a single test case from `csi-test`, find the corresponding `It` in the source code and temporarily replace it with `FIt`, for example:

```diff
- It("should work if node-expand is called after node-publish", func() {
+ FIt("should work if node-expand is called after node-publish", func() {
```
If you want to test your changes, create a new image with the version set to `dev`:

```console
$ apt install docker.io
# At this point you probably need to add your user to the docker group
$ docker login --username=cloudscalech [email protected]
$ VERSION=dev make publish
```

This will create a binary with version `dev` and a docker image pushed to `cloudscalech/cloudscale-csi-plugin:dev`.
To run the integration tests, run the following:

```console
$ export KUBECONFIG=$(pwd)/kubeconfig
$ TESTARGS='-run TestPod_Single_SSD_Volume' make test-integration
```
To release a new version, first bump the version:

```console
$ make NEW_VERSION=vX.Y.Z bump-version
$ make NEW_CHART_VERSION=vX.Y.Z bump-chart-version
```
Make sure everything looks good. Verify that the Kubernetes compatibility matrix is up-to-date. Create a new branch with all changes:
```console
$ git checkout -b new-release
$ git add .
$ git push origin
```
After it's merged to master, create a new GitHub release from master with the version `v3.5.6` and then publish a new docker build:

```console
$ git checkout master
$ make publish
```

This will create a binary with version `v3.5.6` and a docker image pushed to `cloudscalech/cloudscale-csi-plugin:v3.5.6`.
At cloudscale.ch we value and love our community! If you have any issues or would like to contribute, feel free to open an issue or PR.