As of Nov 13, 2020, charts in this repo will no longer be updated. For more information, see the Helm Charts Deprecation and Archive Notice, and Update.
This directory contains a Kubernetes chart to deploy a five node Patroni cluster using Spilo and a StatefulSet.
This chart is deprecated and no longer supported.
- Kubernetes 1.9+
- PV support on the underlying infrastructure
- TODO: Make namespace configurable
This chart will do the following:
- Implement an HA, scalable PostgreSQL 10 cluster using a Kubernetes StatefulSet.
To install the chart with the release name my-release:
$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
$ helm dependency update
$ helm install --name my-release incubator/patroni
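Once installed, you can watch the pods come up. A minimal check, assuming the chart labels its pods with the release name (the same release label used for the PVC cleanup further below):
$ kubectl get pods -l release=my-release -w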
To install the chart with randomly generated passwords:
$ helm install --name my-release incubator/patroni \
--set credentials.superuser="$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)",credentials.admin="$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)",credentials.standby="$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)"
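An equivalent sketch that keeps the generated passwords in shell variables so you can note them down for connecting later; the generation pipeline is the same as above:
$ superuser_pw="$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)"
$ admin_pw="$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)"
$ standby_pw="$(< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32)"
$ helm install --name my-release incubator/patroni \
    --set credentials.superuser="$superuser_pw",credentials.admin="$admin_pw",credentials.standby="$standby_pw"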
Your access point is a cluster IP. In order to access it, spin up another pod:
$ kubectl run -i --tty --rm psql --image=postgres --restart=Never -- bash -il
Then, from inside the pod, connect to PostgreSQL:
$ psql -U admin -h my-release-patroni.default.svc.cluster.local postgres
<admin password from values.yaml>
postgres=>
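Alternatively, a non-interactive sketch that runs a query in one step, passing the password through the standard PGPASSWORD environment variable:
$ kubectl run -i --tty --rm psql --image=postgres --restart=Never \
    --env="PGPASSWORD=<admin password from values.yaml>" -- \
    psql -U admin -h my-release-patroni.default.svc.cluster.local postgres -c 'SELECT version();'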
The following table lists the configurable parameters of the patroni chart and their default values.
Parameter | Description | Default |
---|---|---|
nameOverride | Override the name of the chart | nil |
fullnameOverride | Override the fullname of the chart | nil |
replicaCount | Number of pods to spawn | 5 |
image.repository | The image to pull | registry.opensource.zalan.do/acid/spilo-10 |
image.tag | The version of the image to pull | 1.5-p5 |
image.pullPolicy | The pull policy | IfNotPresent |
credentials.superuser | Password of the superuser | tea |
credentials.admin | Password of the admin user | cola |
credentials.standby | Password of the replication user | pinacolada |
kubernetes.dcs.enable | Use Kubernetes as the DCS | true |
kubernetes.configmaps.enable | Use Kubernetes ConfigMaps instead of Endpoints | false |
etcd.enable | Use etcd as the DCS | false |
etcd.deployChart | Deploy the etcd chart | false |
etcd.host | Host name of the etcd cluster | nil |
etcd.discovery | Domain name of the etcd cluster | nil |
zookeeper.enable | Use ZooKeeper as the DCS | false |
zookeeper.deployChart | Deploy the ZooKeeper chart | false |
zookeeper.hosts | List of ZooKeeper cluster members | host1:port1,host2:port2,... |
consul.enable | Use Consul as the DCS | false |
consul.deployChart | Deploy the Consul chart | false |
consul.host | Host name of the Consul cluster | nil |
env | Extra custom environment variables | {} |
walE.enable | Use the WAL-E tool for base backup/restore | false |
walE.scheduleCronJob | Schedule of WAL-E backups | 00 01 * * * |
walE.retainBackups | Number of base backups to retain | 2 |
walE.s3Bucket | Amazon S3 bucket used for WAL-E backups | nil |
walE.gcsBucket | GCS bucket used for WAL-E backups | nil |
walE.kubernetesSecret | Kubernetes secret name for the provider bucket | nil |
walE.backupThresholdMegabytes | Maximum size of WAL segments accumulated after the base backup before a WAL-E restore is considered instead of pg_basebackup | 1024 |
walE.backupThresholdPercentage | Maximum ratio (in percent) of accumulated WAL files to the base backup before a WAL-E restore is considered instead of pg_basebackup | 30 |
resources | Any resources you wish to assign to the pod | {} |
nodeSelector | Node labels to use for scheduling | {} |
tolerations | List of node taints to tolerate | [] |
affinityTemplate | A template string used to generate the affinity settings | Anti-affinity preferred on hostname |
affinity | Affinity settings; overrides affinityTemplate if set | {} |
schedulerName | Alternate scheduler name | nil |
persistentVolume.accessModes | Persistent Volume access modes | [ReadWriteOnce] |
persistentVolume.annotations | Annotations for the Persistent Volume Claim | {} |
persistentVolume.mountPath | Persistent Volume mount root path | /home/postgres/pgdata |
persistentVolume.size | Persistent Volume size | 2Gi |
persistentVolume.storageClass | Persistent Volume Storage Class | volume.alpha.kubernetes.io/storage-class: default |
persistentVolume.subPath | Subdirectory of the Persistent Volume to mount | "" |
rbac.create | Create the required role and rolebindings | true |
serviceAccount.create | If true, create a new service account | true |
serviceAccount.name | Service account to be used; if not set and serviceAccount.create is true, a name is generated using the fullname template | nil |
Specify each parameter using the --set key=value[,key=value] argument to helm install.
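For example, to override the cluster size and volume size from the table above (values are illustrative):
$ helm install --name my-release incubator/patroni \
    --set replicaCount=3,persistentVolume.size=10Gi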
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
$ helm install --name my-release -f values.yaml incubator/patroni
Tip: You can use the default values.yaml as a starting point.
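For illustration, a minimal override file built from keys in the table above might look like this (all values are placeholders):
replicaCount: 3
credentials:
  superuser: <superuser password>
  admin: <admin password>
  standby: <standby password>
persistentVolume:
  size: 10Gi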
To remove the spawned pods, run a simple helm delete <release-name>.
Helm will, however, preserve the persistent volume claims it created; to remove those as well, execute the commands below.
$ release=<release-name>
$ helm delete $release
$ kubectl delete pvc -l release=$release
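The same label selector can be reused to preview the claims before deletion, or to confirm afterwards that none remain:
$ kubectl get pvc -l release=$release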
Patroni is responsible for electing a PostgreSQL master pod by leveraging the DCS of your choice. After the election it adds a spilo-role=master label to the elected master and sets spilo-role=replica on all replicas. Simultaneously it updates the <release-name>-patroni endpoint so that the service routes traffic to the elected master.
$ kubectl get pods -l spilo-role -L spilo-role
NAME READY STATUS RESTARTS AGE SPILO-ROLE
my-release-patroni-0 1/1 Running 0 9m replica
my-release-patroni-1 1/1 Running 0 9m master
my-release-patroni-2 1/1 Running 0 8m replica
my-release-patroni-3 1/1 Running 0 8m replica
my-release-patroni-4 1/1 Running 0 8m replica
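To see which pod the service currently routes to, inspect the endpoint Patroni maintains (output is illustrative; the address will differ in your cluster):
$ kubectl get endpoints my-release-patroni
NAME                 ENDPOINTS         AGE
my-release-patroni   10.244.1.5:5432   9m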