Distributed high availability

Distributed high-availability clusters are powered by EDB Postgres Distributed. They use multi-master logical replication to deliver more advanced cluster management than a physical replication-based system. Distributed high-availability clusters let you deploy a cluster in a single region or across multiple regions. For use cases where high availability across regions is a major concern, a cluster deployment with distributed high availability enabled can provide two data groups with a witness group in a third region.

This configuration provides a true active-active solution, as each data group is configured to accept writes.

Distributed high-availability clusters support both EDB Postgres Advanced Server and EDB Postgres Extended Server database distributions.

Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. At any given time, one of these data nodes in each group is the leader and accepts writes, while the rest are referred to as shadow nodes. We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous commit scopes.
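
To see how the nodes in a cluster are laid out, you can query PGD's catalog views from any node. The following is a minimal sketch, assuming a PGD 5 cluster where the `bdr.node_summary` view exposes node metadata:

```sql
-- List each node, its kind (data or witness), and the group it belongs to.
-- Assumes PGD 5; column names may differ in other versions.
SELECT node_name,
       node_kind_name,
       node_group_name
FROM bdr.node_summary
ORDER BY node_group_name, node_name;
```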

PGD Connection Manager routes all application traffic to the write leader node. Each group has its own write leader, which reduces the chance of distributed data conflicts. PGD uses a distributed consensus model to determine the availability of the data nodes in the cluster. If the write leader fails or becomes unavailable, PGD elects a new write leader and redirects application traffic to it. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.
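
You can check the health of the consensus layer yourself. The following sketch assumes PGD 5, where the `bdr.monitor_group_raft()` monitoring function reports on Raft consensus:

```sql
-- Report the status of the Raft consensus layer for the cluster.
-- Returns a status (such as OK or WARNING) and a descriptive message.
SELECT * FROM bdr.monitor_group_raft();
```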

The witness node or witness group doesn't host data but exists for management purposes. It supports operations that require consensus, for example, in the case of a region failure.

Note

Operations against a distributed high-availability cluster leverage the EDB Postgres Distributed set-leader feature, which keeps interruptions during planned lifecycle operations to under a second.
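
As an illustration only, a planned leadership transfer might look like the following. It assumes PGD 5's `bdr.routing_leadership_transfer()` routing function; the group and node names are hypothetical:

```sql
-- Promote node 'p-node-2' to write leader of data group 'dg-one'.
-- Names are hypothetical; this sketches the set-leader mechanism
-- described above, not a prescribed procedure.
SELECT bdr.routing_leadership_transfer(
    node_group_name := 'dg-one',
    leader_name     := 'p-node-2'
);
```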

Single data location

A configuration with a single data location has one data group and either:

  • Two data nodes (one lead and one shadow) and a witness node, each in a separate availability zone

    [Diagram: region (2 data + 1 witness)]

  • Three data nodes (one lead and two shadows), each in a separate availability zone

    [Diagram: region (3 data)]

Multiple groups with a witness node

A configuration with multiple data locations has two data groups, each containing three data nodes, plus a witness group:

  • A lead node and two shadow nodes in one data group

  • The same configuration in the other data group

  • A witness node in a third group

    [Diagram: groups (2 x 3 data + 1 witness)]

Currently, all groups reside in the same region (the region the cluster resides in), but this may change in the future.

Advisory locks

Advisory locks aren't replicated between Postgres nodes, so an advisory lock taken on a shadow node doesn't conflict with one taken on the lead node. We recommend that applications that rely on advisory locking run those transactions against the write leader rather than as read-only workloads on shadow nodes.
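
For instance, because each node keeps its own advisory lock table, two sessions can hold the "same" lock on different nodes at the same time. This sketch uses standard Postgres advisory lock functions:

```sql
-- Session connected to the write leader:
SELECT pg_advisory_lock(42);      -- acquires lock 42 on the leader only

-- Session connected to a shadow node:
SELECT pg_try_advisory_lock(42);  -- returns true: the lock isn't replicated,
                                  -- so it doesn't conflict with the leader's lock

-- To make advisory locking effective, route these transactions through
-- the Connection Manager so they all land on the write leader.
```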

