From Shell Scripting to Kubernetes - Postgres Backups With pg_dump [Part 2]

In the first part of this tutorial, we explained how to create PostgreSQL backups using the pg_dump command. This second part focuses on Helm and on converting the Kubernetes YAMLs into a Helm chart.

Creating the Helm Chart

You can reuse those yml files in many situations, but you will need to make small changes for each case: the namespace, the resource names, the environment values and so on. One popular solution to this problem in the Kubernetes world is Helm, the 'Kubernetes package manager'. Other options are Kustomize, or using sed to do inline search-and-replace.
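As a quick illustration of the sed approach (the placeholder token and file names below are hypothetical):

```shell
# Write a template with a placeholder for the namespace
cat > deployment.tpl.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgdump
  namespace: __NAMESPACE__
EOF

# Substitute the placeholder to produce the final manifest
sed 's/__NAMESPACE__/backups/' deployment.tpl.yml > deployment.yml

grep 'namespace:' deployment.yml
```

This works, but every new parameter means another sed expression, which is exactly the bookkeeping Helm takes off your hands.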

Moving into the helm directory, you will find the file Chart.yaml, which is basically a description of the chart.
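A minimal Chart.yaml for this chart might look like the sketch below; the description text is an assumption, while the name and version match the pgdump-0.1.0.tgz package produced later by helm package:

```yaml
apiVersion: v2
name: pgdump
description: PostgreSQL backups with pg_dump   # assumed description
type: application
version: 0.1.0   # chart version, becomes part of the .tgz file name
```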

The file values.yaml holds the variables that can be changed at install time, together with their default values. The namespace is not among them; it and the release name will be set on the command line at install time.
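For illustration, such a values.yaml could look like the sketch below. The default values are assumptions, but pghost and secret_pgpass are the two variables you will most likely override later:

```yaml
# Default values; override with -f <file> or --set at install time.
pghost: postgres          # name of the svc pointing to the postgres database
secret_pgpass: pgpass     # name of the secret holding the postgres password
```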

Inside the directory templates you will find our three resource yml files, slightly changed, or shall I say parametrized. Everywhere you see {{ .Something }}, that is a placeholder which will be replaced during helm install: {{ .Values.something }} is taken from the values.yaml file, while {{ .Release.Name }} and {{ .Release.Namespace }} are set by the helm install command.
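As a sketch, the parametrized PVC template might look like this (storage_size is a hypothetical value name used only for illustration):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}            # release name from the helm command line
  namespace: {{ .Release.Namespace }}  # namespace from the helm command line
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.storage_size }}  # taken from values.yaml
```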

NOTES.txt is the text displayed at the end of helm install - some sort of usage information.

Publishing the Helm Chart

To use GitHub as a helm repository:

helm package .      # produces the file pgdump-0.1.0.tgz with the chart files
helm repo index .   # creates or updates index.yaml for the repo
git add index.yaml pgdump-0.1.0.tgz
git commit -m 'New chart version'
git push

To get the chart repository URL, just take the raw URL of the index.yaml file created above, then strip the filename. See below.
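For example, stripping the filename can be done with dirname; the user and repository in the URL below are hypothetical:

```shell
# Hypothetical raw URL of the index.yaml pushed above
RAW_INDEX_URL='https://raw.githubusercontent.com/example-user/pgdump/main/helm/index.yaml'

# Strip the filename to get the chart repository URL
REPO_URL=$(dirname "$RAW_INDEX_URL")
echo "$REPO_URL"

# The result is what you would pass to e.g.:
# helm repo add pgdump "$REPO_URL"
```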

Using the Helm Chart

Most probably you will want to override at least the values for pghost and secret_pgpass to match your PostgreSQL installation. The simplest way is to copy values.yaml under another name, such as values_override.yaml, and edit the copy. Then install the Helm chart with:

cd helm
helm upgrade --install -f values.yaml -f values_override.yaml -n <NAMESPACE> <RELEASE-NAME> .

We recommend a release name like <SOMETHING>-pgdump. All the resources created by the Helm chart will carry this name: a PVC, a deployment and a cronjob.

Make sure you have the correct values, especially for:

- pghost - the name of the service (svc) pointing to the postgres database
- secret_pgpass - the name of the secret holding the postgres password
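Putting the two together, a hypothetical values_override.yaml could be as small as:

```yaml
pghost: my-postgres             # svc name pointing to the postgres database
secret_pgpass: my-postgres-pass # secret holding the postgres password
```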

Multi-Attach Error for Volume

While testing the Helm chart I encountered this error in the pod created by the cronjob. The pod was stuck in the 'Init:0/1' state and the events showed:

kubectl -n cango-web describe pod [...]


Warning  FailedAttachVolume  67s   attachdetach-controller  Multi-Attach error for volume "..." Volume is already used by pod(s) [...]

Warning  FailedMount         3m44s  kubelet                  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-...]: timed out waiting for the condition

The problem is that we mount the same volume both in the pod created by the deployment and in the pod created by the cronjob, and the access mode is ReadWriteOnce. You will see this error only when those pods land on different nodes, since ReadWriteOnce means the volume can be mounted read-write by a single node.

One simple solution would be to use ReadWriteMany, but some storage providers, DigitalOcean for example, do not support it. Another would be to give up the deployment pod, which is really useful only when you want to do a database restore, or for testing, debugging and verifying.

Yet another solution is to force the cronjob pods to be scheduled on the same node as the deployment pod. This relies on the fact that RWO means once per node, not once per pod! Thus we can use inter-pod affinity, as shown in the affinity section of the file helm/templates/cronjob.yml.
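The inter-pod affinity mentioned above might be sketched as follows, assuming the deployment pod carries an app label equal to the release name (the label key is an assumption for illustration):

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: {{ .Release.Name }}       # label assumed on the deployment pod
        topologyKey: kubernetes.io/hostname  # "same node" = same hostname
```

With this in the cronjob pod spec, the scheduler will only place the backup pods on the node where the deployment pod already runs, so the RWO volume can be attached.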

Improvements TBD

- use a volumeMount to read the secret password in the deployment, to cope with postgres password changes (env variables are not re-read on the fly)
- make the storageClass configurable in the helm chart
- add resource limits

You can also find this tutorial and the files on GitHub:

About the author


Viorel Anghel has 20+ years of experience as an IT professional, taking on various roles such as Systems Architect, Sysadmin, Network Engineer, SRE, DevOps and Tech Lead. He has a background in Unix/Linux systems administration, high availability, scalability, change and config management. Viorel is also a Red Hat Certified Engineer and an AWS Certified Solutions Architect, working with Docker, Kubernetes, Xen, AWS, GCP, Cassandra, Kafka and many other technologies. He is the Head of Cloud and Infrastructure at eSolutions.