Installation and upgrade v1
OpenShift
For instructions on how to install Cloud Native PostgreSQL on Red Hat OpenShift Container Platform, please refer to the "OpenShift" section.
Installing the Operator on Kubernetes
Obtaining an EDB subscription token
Important
You must obtain an EDB Repos 2.0 subscription token to install EDB Postgres Distributed for Kubernetes. The token grants access to the EDB private software repositories.
You can obtain the token by visiting your EDB Account Profile. You'll be asked to sign in if you aren't already logged in.
Your account profile page displays the token next to the label Repos 2.0 Token. By default, the token is obscured. Click the "Show" button (an eye icon) to reveal it.
Your token entitles you to access one of two repositories: `standard` or `enterprise`.

- `standard` - Includes the operators and the EDB Postgres Extended operand images.
- `enterprise` - Includes the operators, and the operand images for EDB Postgres Advanced and EDB Postgres Extended.
Set the relevant value, determined by your subscription, as the `EDB_SUBSCRIPTION_PLAN` environment variable:

```shell
EDB_SUBSCRIPTION_PLAN=enterprise
```

Then set the Repos 2.0 token as the `EDB_SUBSCRIPTION_TOKEN` environment variable:

```shell
EDB_SUBSCRIPTION_TOKEN=<your-token>
```
Warning
The token is sensitive information. Please ensure that you don't expose it to unauthorized users.
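To keep the token out of your shell history, you can read it interactively instead of pasting it on the command line. A minimal sketch, assuming a Bash-compatible shell (this step isn't part of the official procedure):

```shell
# Prompt for the token without echoing it, then export it for later commands
read -r -s -p "EDB Repos 2.0 token: " EDB_SUBSCRIPTION_TOKEN
export EDB_SUBSCRIPTION_TOKEN
```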
You can now proceed with the installation.
Using the Helm Chart
The operator can be installed using the provided Helm chart.
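As a sketch of a Helm-based installation: the repository URL, release name, and chart name below are assumptions based on EDB's public charts repository, so check the chart's own README for the authoritative commands.

```shell
# Assumed chart repository and chart name -- verify against the chart's README
helm repo add edb https://enterprisedb.github.io/edb-postgres-for-kubernetes-charts/
helm upgrade --install pgd-operator \
  --namespace pgd-operator-system --create-namespace \
  edb/edb-postgres-distributed-for-kubernetes
```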
Directly using the operator manifest
You can deploy the EDB Postgres Distributed for Kubernetes operator directly using its manifest. The manifest installs both the EDB Postgres Distributed for Kubernetes operator and the latest supported EDB Postgres for Kubernetes operator in the same namespace. To deploy the operators using the manifest, follow the steps below:
Install the cert-manager
EDB Postgres Distributed for Kubernetes requires cert-manager 1.10 or higher. You can follow the installation guide or use the following command to deploy cert-manager:

```shell
kubectl apply -f \
  https://github.com/cert-manager/cert-manager/releases/download/v1.16.2/cert-manager.yaml
```
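Before proceeding, you may want to wait until cert-manager's deployments are ready. One way to do this, using standard `kubectl wait` flags (this check isn't part of the original instructions):

```shell
# Block until all cert-manager deployments report Available (or time out)
kubectl wait --for=condition=Available \
  --timeout=120s -n cert-manager deployment --all
```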
Install the EDB pull secret
Before installing EDB Postgres Distributed for Kubernetes, you need to create a pull secret for the EDB container registry.
The pull secret needs to be saved in the namespace where the operator will reside (`pgd-operator-system` by default).

Create the `pgd-operator-system` namespace using this command:

```shell
kubectl create namespace pgd-operator-system
```

To create the pull secret, run the following command:

```shell
kubectl create secret -n pgd-operator-system docker-registry edb-pull-secret \
  --docker-server=docker.enterprisedb.com \
  --docker-username=k8s_${EDB_SUBSCRIPTION_PLAN}_pgd \
  --docker-password=${EDB_SUBSCRIPTION_TOKEN}
```
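To double-check that the secret was created with the expected registry credentials, you can decode it. This is a hypothetical verification step, not part of the official procedure:

```shell
# Inspect the generated .dockerconfigjson payload; the dot in the key must be escaped
kubectl get secret edb-pull-secret -n pgd-operator-system \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```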
Install the operator manifest
Now that the pull secret has been added to the namespace, the operator can be installed like any other resource in Kubernetes, through a YAML manifest applied via `kubectl`.
There are two different manifests available depending on your subscription plan:
- Standard: The latest standard operator manifest.
- Enterprise: The latest enterprise operator manifest.
You can install the manifest for the latest version of the operator by running:
```shell
kubectl apply --server-side -f \
  https://get.enterprisedb.io/pg4k-pgd/pg4k-pgd-${EDB_SUBSCRIPTION_PLAN}-1.1.1.yaml
```
You can check the operator deployment with the following command:
```shell
kubectl get deployment -n pgd-operator-system pgd-operator-controller-manager
```
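You can also wait for the rollout to complete, which is convenient in scripts (standard `kubectl rollout` usage, not from the original instructions):

```shell
# Returns once the operator deployment has finished rolling out
kubectl rollout status deployment \
  -n pgd-operator-system pgd-operator-controller-manager --timeout=120s
```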
Note
Because EDB Postgres Distributed for Kubernetes internally manages each PGD node using the `Cluster` resource defined by EDB Postgres for Kubernetes, the EDB Postgres for Kubernetes operator must also be installed as a dependency. The manifest used above contains a well-tested version of the EDB Postgres for Kubernetes operator, which is installed into the same namespace as the EDB Postgres Distributed for Kubernetes operator.
Details about the deployment
In Kubernetes, the operator is installed by default in the `pgd-operator-system` namespace as a Kubernetes `Deployment`. The name of this deployment depends on the installation method.

When installed through the manifest, by default, it's named `pgd-operator-controller-manager`.

When installed via Helm, by default, the deployment name is derived from the Helm release name, with the suffix `-edb-postgres-distributed-for-kubernetes` appended (that is, `<name>-edb-postgres-distributed-for-kubernetes`).
Note
With Helm you can customize the name of the deployment via the `fullnameOverride` field in the `values.yaml` file.
You can get more information using the `describe` command in `kubectl`:

```shell
$ kubectl get deployments -n pgd-operator-system
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
<deployment-name>   1/1     1            1           18m
```

```shell
kubectl describe deploy \
  -n pgd-operator-system \
  <deployment-name>
```
As with any Deployment, it sits on top of a ReplicaSet and supports rolling upgrades. The default configuration of the EDB Postgres Distributed for Kubernetes operator is a Deployment of a single replica, which is suitable for most installations. If the node where the pod is running is not reachable anymore, the pod will be rescheduled on another node.
If you require high availability at the operator level, it is possible to specify multiple replicas in the Deployment configuration, given that the operator supports leader election. In addition, you can take advantage of taints and tolerations to make sure that the operator does not run on the same nodes where the actual PostgreSQL clusters are running (this might even include the control plane for self-managed Kubernetes installations).
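As an illustration of the points above, here is a hedged sketch of scaling the operator and adding a toleration via `kubectl patch`. The toleration key and values are examples only, and re-applying the operator manifest may revert manual patches like this one:

```shell
# Example only: run 2 replicas and tolerate a hypothetical dedicated-node taint
kubectl patch deployment pgd-operator-controller-manager \
  -n pgd-operator-system --type merge -p \
  '{"spec":{"replicas":2,"template":{"spec":{"tolerations":[{"key":"dedicated","operator":"Equal","value":"operators","effect":"NoSchedule"}]}}}}'
```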
Operator configuration
You can change the default behavior of the operator by overriding some default options. For more information, please refer to the "Operator configuration" section.
Deploy PGD Clusters
Be sure to create a cert issuer before you start deploying PGD clusters. The Helm chart prompts you to do this, but in case you miss it, you can, for example, run:
```shell
kubectl apply -f \
  https://raw.githubusercontent.com/EnterpriseDB/edb-postgres-for-kubernetes-charts/main/hack/samples/issuer-selfsigned.yaml
```
With the operators and a self-signed cert issuer deployed, you can start creating PGD clusters. See the Quick start for an example.
Default Operand images
By default, each operator release binds a default version and flavor for PGD and PGD Proxy images.
If the image names are not specified in the `spec.imageName` and `spec.pgdProxy.imageName` fields of the PGDGroup YAML file, the default images are used.

Default images can be overwritten using the `pgd-operator-controller-manager-config` operator configuration map. For more details, see the section on EDB Postgres Distributed for Kubernetes operator configuration.

You can find out which images a PGD cluster is using by checking the PGDGroup status:

```shell
kubectl get pgdgroup <pgdgroup name> -o yaml | yq ".status.image"
```
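If you don't have `yq` available, the same status field can be read with a JSONPath expression using plain `kubectl` (an equivalent query, not from the original instructions):

```shell
# Same status field, without the yq dependency
kubectl get pgdgroup <pgdgroup name> -o jsonpath='{.status.image}'
```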
Specifying operand images
Here is an example of a PGD Cluster using explicit image names:
```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: group-example-customized
spec:
  instances: 2
  proxyInstances: 2
  witnessInstances: 1
  imageName: docker.enterprisedb.com/k8s_enterprise_pgd/edb-postgres-extended-pgd:17.4-5.7.0-1
  imagePullSecrets:
    - name: registry-pullsecret
  pgd:
    parentGroup:
      name: world
      create: true
  cnp:
    storage:
      size: 1Gi
  pgdProxy:
    imageName: docker.enterprisedb.com/k8s_enterprise_pgd/edb-pgd-proxy:5.7.1-1
```
Specifying operand images using ImageCatalog
Since the release of version 1.1.1, the PGD4K operator supports using `ImageCatalog` resources to specify operand images. Different `ImageCatalog` manifests are available based on your subscription plan and PGD version for each PostgreSQL flavor. Note that the images included in the `ImageCatalog` are UBI 9 based.
- EDB Postgres Advanced PGD: https://get.enterprisedb.io/pgd-k8s-image-catalogs/epas-k8s-<EDB_SUBSCRIPTION_PLAN>-pgd-<PGD_VERSION>.yaml
- EDB Postgres Extended PGD: https://get.enterprisedb.io/pgd-k8s-image-catalogs/pgextended-k8s-<EDB_SUBSCRIPTION_PLAN>-pgd-<PGD_VERSION>.yaml
- Postgres Community PGD: https://get.enterprisedb.io/pgd-k8s-image-catalogs/postgresql-k8s-<EDB_SUBSCRIPTION_PLAN>-pgd-<PGD_VERSION>.yaml
You can create an `ImageCatalog` in the PGD cluster namespace and reference it in your YAML file.

For example, create an EDB Postgres Advanced PGD 5.7 `ImageCatalog`:

```shell
kubectl create -f \
  https://get.enterprisedb.io/pgd-k8s-image-catalogs/epas-k8s-enterprise-pgd-5.7.yaml
```
The following example shows how to use the EDB Postgres Advanced PGD 5.7 `ImageCatalog` to specify a PostgreSQL major version of 17 for the PGD operand image. The PGD Proxy image is automatically sourced from the `ImageCatalog`.
```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: group-example-catalog
spec:
  instances: 2
  proxyInstances: 2
  witnessInstances: 1
  imageCatalogRef:
    apiGroup: pgd.k8s.enterprisedb.io
    kind: ImageCatalog
    major: 17
    name: epas-enterprise-pgd-5.7
  ...
```
Upgrade
Important
Please carefully read the release notes before performing an upgrade, as some versions might require extra steps.
The EDB Postgres Distributed for Kubernetes (PGD4K) operator relies on the EDB Postgres for Kubernetes (PG4K) operator to manage clusters. To upgrade the EDB PGD4K operator, the PG4K operator must also be upgraded to a supported version. It is recommended to keep the EDB PG4K operator on the Long Term Support (LTS) version, as this is the tested version compatible with the PGD4K operator.
To upgrade the EDB Postgres Distributed (PGD) for Kubernetes operator, follow these steps:
- Upgrade the EDB Postgres for Kubernetes operator as a dependency
- Upgrade the PGD4K controller and the related Kubernetes resources
Note that when you upgrade the EDB Postgres for Kubernetes operator, the instance manager on each PGD node will also be upgraded. For more details, please refer to the EDB Postgres for Kubernetes upgrades documentation.
Unless stated otherwise in the release notes, these steps are normally done by applying the manifest of the newer version for plain Kubernetes installations.
Compatibility among versions
EDB Postgres Distributed for Kubernetes (PGD4K) follows semantic versioning. Every release of the operator within the same API version is compatible with the previous one. The current API version is v1, corresponding to versions 1.x.y of the operator.
The minor version of the PGD4K operator tracks a specific PG4K LTS release.
For example:
- PGD4K operator 1.0.x is fully tested against PG4K LTS 1.22.
- PGD4K operator 1.1.x is fully tested against PG4K LTS 1.25.
A PGD4K operator release has the same support scope as the PG4K LTS release it is tracking.
In addition to new features, new versions of the operator contain bug fixes and stability enhancements. Each version is released in order to maintain the most secure and stable Postgres environment. Because of this, we strongly encourage users to upgrade to the latest version of the operator.
The release notes page contains a detailed list of the changes introduced in every released version of EDB Postgres Distributed for Kubernetes, and it must be read before upgrading to a newer version of the software.
Most versions are directly upgradable, and in that case, simply applying the newer manifest for plain Kubernetes installations will complete the upgrade.
When versions are not directly upgradable, the old version (of both PGD4K and PG4K) needs to be removed before installing the new one. This won't affect user data, only the operator itself.
Upgrading to 1.0.1 on Red Hat OpenShift
On the OpenShift platform, starting with version 1.0.1, the EDB Postgres Distributed for Kubernetes operator is required to reference the LTS releases of the PG4K operator. For example, PGD4K v1.0.1 specifically references the PG4K LTS version 1.22.x (any patch release of the 1.22 LTS branch is valid).
To upgrade PGD4K operator, ensure that the PG4K operator is upgraded to a supported version first. Only then can the PGD4K operator be upgraded.
Important
As PGD4K v1.0.0 does not require referencing an LTS release of the PG4K operator, it may have been installed with any PG4K version. If the installed PG4K version is less than 1.22, you can upgrade to PG4K version 1.22 by changing the subscription channel. For more information, see upgrading the operator. However, if the installed PG4K version is greater than 1.22, you will need to reinstall the operator from the stable-1.22 channel to upgrade PGD4K to v1.0.1.
Server-side apply of manifests
To ensure compatibility with Kubernetes 1.29 and upcoming versions, EDB Postgres Distributed for Kubernetes now requires the use of server-side apply when deploying the operator manifest.

While this installation method poses no challenges for new deployments, updating existing operator manifests using the `--server-side` option may result in errors like the following:

```
Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using..
```

If such errors arise, they can be resolved by explicitly specifying the `--force-conflicts` option to enforce conflict resolution:

```shell
kubectl apply --server-side --force-conflicts -f <OPERATOR_MANIFEST>
```
From then on, `kube-apiserver` will automatically be acknowledged as a recognized manager for the CRDs, eliminating the need for any further manual intervention on this matter.
Upgrading operands
The PGD Cluster supports in-place upgrades of the operand image's minor version, though the PostgreSQL service will be temporarily unavailable during the upgrade.
Checking Current PGD and Proxy Versions
Before upgrading, you can check the current PGD and PGD proxy versions with the following command:
```shell
kubectl get pgdgroup <pgdgroup name> -o yaml | yq ".status.image"
```
Upgrade Procedure
- Using a default or customized image name: To upgrade the operand to a new minor version, replace the image name in the `spec.imageName` and `spec.pgdProxy.imageName` fields of the PGD group YAML file with the new one. The images on each node are upgraded sequentially, and the nodes restart accordingly.
- Using an image catalog: If the PGD cluster manages image versions using an `ImageCatalog`, simply upgrade the image version specified in the referenced `ImageCatalog`. The PGD cluster automatically applies the new image version.
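For the image-name route, the change can be made declaratively by editing the PGDGroup manifest and re-applying it, or imperatively with a merge patch; the image references below are placeholders, not real tags:

```shell
# Placeholders: substitute the new operand and proxy image references
kubectl patch pgdgroup <pgdgroup name> --type merge -p \
  '{"spec":{"imageName":"<new-pgd-image>","pgdProxy":{"imageName":"<new-proxy-image>"}}}'
```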