In the first part of this tutorial, we explained how to create PostgreSQL backups using the pg_dump command. This second part will focus on Helm and on converting Kubernetes YAML files into a Helm chart.
If you wish, you may use those YAML files as they are in many situations. But you will need to make small changes for every case: the namespace, the resource names, the environment values and so on. One of the popular solutions for this problem in the Kubernetes world is Helm, the 'Kubernetes package manager'. Other options are tools such as Kustomize, or using something like `sed` to do inline search-and-replace.
Moving into the `helm` directory, the file `Chart.yaml` is basically a description of the chart.
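As a sketch, a minimal `Chart.yaml` for such a chart might look like the following; the description text is illustrative, while the name and version match the pgdump-0.1.0.tgz package produced later:

```yaml
apiVersion: v2     # Helm 3 chart API version
name: pgdump       # chart name; matches the pgdump-0.1.0.tgz package below
description: Scheduled PostgreSQL backups with pg_dump   # illustrative text
type: application
version: 0.1.0     # chart version, bumped on every change
```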
In the file `values.yaml` are the variables which can be changed at install time, along with their default values. The namespace is not here; it and the release name will be defined at install time on the command line.
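For illustration only, a `values.yaml` for a chart like this could look as follows; the key names here (image, schedule, the Postgres host and secret keys) are assumptions, so check the real file in the repository:

```yaml
image: postgres:14            # image used to run pg_dump (assumed key name)
schedule: "0 3 * * *"         # cron expression for the backup job (assumed)
pvc_size: 2Gi                 # size of the backup volume (assumed)
pg_host: postgres             # name of the svc pointing to Postgres (assumed)
pg_secret: postgres-secret    # secret holding the Postgres password (assumed)
```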
Inside the `templates` directory you will find our three resource YAML files, but slightly changed, or shall I say, parametrized. Everywhere you see `{{ ... }}`, that is a placeholder which will be replaced during helm install. `{{ .Values.* }}` placeholders will be taken from the `values.yaml` file. Then, `{{ .Release.Name }}` and `{{ .Release.Namespace }}` are values which will be defined in the helm install command.
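As an example of this parametrization, a deployment template fragment might use placeholders like these (a sketch, not the exact file from the repository; the `image` key is an assumed value name):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}            # set from the helm install command line
  namespace: {{ .Release.Namespace }}  # set with -n on the command line
spec:
  template:
    spec:
      containers:
        - name: pgdump
          image: {{ .Values.image }}   # taken from values.yaml (assumed key)
```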
`NOTES.txt` is the text displayed at the end of helm install, some sort of usage information.
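Such a file can be as simple as a few templated lines, for example (illustrative, not the actual content from the repository):

```
Release {{ .Release.Name }} installed in namespace {{ .Release.Namespace }}.
A backup will run on the configured schedule via the cronjob.
For a manual backup or a restore, exec into the deployment pod.
```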
To use GitHub as a helm repository:
```
helm package .     # will produce the file pgdump-0.1.0.tgz with the chart files
helm repo index .  # create or update the index.yaml for the repo
git add index.yaml pgdump-0.1.0.tgz
git commit -m 'New chart version'
git push
```
To get the chart repository URL, just take the raw URL of the index.yaml file created above (https://raw.githubusercontent.com/viorel-anghel/pgdump-kubernetes/main/helm/index.yaml), then strip the filename. See below.
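The steps above can be sketched as follows; the repository alias `pgdump` is an arbitrary local name:

```shell
# Derive the chart repository URL by stripping the filename from the raw index.yaml URL
INDEX_URL="https://raw.githubusercontent.com/viorel-anghel/pgdump-kubernetes/main/helm/index.yaml"
REPO_URL="${INDEX_URL%/index.yaml}"   # shell suffix removal
echo "$REPO_URL"

# The repo can then be added and searched (requires helm to be installed):
#   helm repo add pgdump "$REPO_URL"
#   helm repo update
#   helm search repo pgdump
```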
Most probably you will want to override at least the values for the database service and the password secret to match your PostgreSQL installation. The simplest way is to copy values.yaml under another name, such as values_override.yaml, and edit the second file. Then use this to install the helm chart:
```
cd helm
helm upgrade --install -f values.yaml -f values_override.yaml -n <NAMESPACE> <RELEASE-NAME> .
```
We recommend you use a release name like <SOMETHING>-pgdump. All the resources created by the helm chart will have this name: a PVC, a deployment and a cronjob.
Make sure you have the correct values, especially for:
- the database host value -- this should be the name of the service (svc) pointing to the Postgres database, and
- the secret name value -- this should be the name of the secret holding the Postgres password.
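For example, assuming the chart exposes keys named `pg_host` and `pg_secret` (hypothetical names -- check values.yaml for the actual ones), a values_override.yaml could contain just:

```yaml
pg_host: my-postgres           # svc name of your Postgres instance (assumed key name)
pg_secret: my-postgres-secret  # secret holding the Postgres password (assumed key name)
```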
While testing the helm chart, I encountered this error in the pod created by the cronjob. The pod is stuck in the ContainerCreating state and the events show:
```
kubectl -n cango-web describe pod [...]
[...]
Warning  FailedAttachVolume  67s    attachdetach-controller  Multi-Attach error for volume "..."
                                                             Volume is already used by pod(s) [...]
Warning  FailedMount         3m44s  kubelet                  Unable to attach or mount volumes:
                                                             unmounted volumes=[data], unattached
                                                             volumes=[data kube-api-access-...]:
                                                             timed out waiting for the condition
```
The problem is that we are mounting the same volume in the pod created by the deployment and also in the pod created by the cronjob, and the access mode is ReadWriteOnce. You will see this only when those pods land on different nodes, since the ReadWriteOnce access mode means the volume can be mounted only once per node.
One simple solution would be to use ReadWriteMany, but this is not supported by some storage classes, for example on DigitalOcean. Another way would be to give up the deployment pod, which is really useful only when you want to do a database restore, or for testing, debugging and verifying.
Yet another solution is to force the cronjob pods to be created on the same node as the deployment pod. This exploits the fact that ReadWriteOnce means once per node, not once per pod! Thus we can use inter-pod affinity, as shown in the affinity section of the cronjob template file.
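As a sketch, the cronjob's pod template could pin its pods to the node running the deployment pod with inter-pod affinity like this; the label selector below is an assumption, as the chart's real labels may differ:

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: {{ .Release.Name }}       # assumed label set on the deployment pod
        topologyKey: kubernetes.io/hostname  # "same node" topology
```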
Find this tutorial and files also on Github: https://github.com/viorel-anghel/pgdump-kubernetes.git
Viorel Anghel has 20+ years of experience as an IT professional, taking on various roles such as Systems Architect, Sysadmin, Network Engineer, SRE, DevOps, and Tech Lead. He has a background in Unix/Linux systems administration, high availability, scalability, change, and config management. Also, Viorel is a Red Hat Certified Engineer and AWS Certified Solutions Architect, working with Docker, Kubernetes, Xen, AWS, GCP, Cassandra, Kafka, and many other technologies. He currently serves as Head of Cloud and Infrastructure at eSolutions.