How to Scale a Single-Server Database: A Guide to Distributed PostgreSQL

Published December 13, 2023

PostgreSQL is a fast-growing and popular open-source database known for its reliability, cost-effectiveness, advanced features, and extensive ecosystem of tools.

Born from the Berkeley POSTGRES project, the PostgreSQL database was designed for single-server deployments, meaning that a single primary instance stores a consistent copy of all application data and handles both read and write requests.

To scale read workloads, the primary instance is often supplemented with read replica instances, which help distribute the read requests and ease the load on the primary server. However, for high-load and globally distributed applications, a single primary and multiple replica configuration is not enough. A fully distributed PostgreSQL setup is essential.
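
For reference, this classic read-scaling topology is usually built with streaming replication. Below is a minimal sketch of the primary-side steps (the slot and replica names are illustrative; each replica additionally needs primary_conninfo and a standby.signal file on its own side):

```sql
-- On the primary: create a physical replication slot so the replica can
-- stream WAL without the primary recycling segments it still needs.
SELECT pg_create_physical_replication_slot('replica_1');

-- Once the replica is attached, confirm it is streaming and watch its lag.
SELECT client_addr, state, replay_lag
FROM pg_stat_replication;
```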

Defining Distributed PostgreSQL

At its core, a distributed database runs and stores data across multiple servers, usually on a shared-nothing architecture. This architecture reduces bottlenecks, enables horizontal scaling, and enhances availability by removing single points of failure.

Distributed databases are characterized by the presence of multiple nodes that communicate directly with each other to orchestrate data distribution, replicate changes, coordinate distributed transactions, and handle failovers.

Traditional PostgreSQL is a single-node relational database. It lacks native distributed database attributes. This means that a “distributed” PostgreSQL database usually has to take one of the following forms:

  • Multi-Master Asynchronous Replication: Involves multiple standalone PostgreSQL instances with asynchronous replication and conflict resolution mechanisms.
  • Multi-Master Sharded PostgreSQL with a Coordinator: Data is sharded across multiple standalone PostgreSQL instances. A coordinator node handles application connections and directs requests to the appropriate standalone PostgreSQL instances.
  • Multi-Master Shared-Nothing PostgreSQL: Utilizes a true distributed database which is feature- and runtime-compatible with PostgreSQL.

Feature compatibility with PostgreSQL means that a database supports PostgreSQL's advanced features, beyond the ANSI SQL standard, to help you get the most out of your data. Runtime compatibility is the highest level of PostgreSQL compatibility: it ensures that a database matches PostgreSQL's execution semantics at runtime. For a deeper look at these levels, see Understanding PostgreSQL Compatibility: From Syntax to Runtime Levels.

When Is Distributed PostgreSQL Necessary?

A distributed PostgreSQL deployment is necessary in three situations.

  • Scalability: The database needs to scale both write and read workloads and/or store the data volume beyond a single database server’s capacity.
  • Resiliency: The database must be resilient against zone/region/DR-level outages, with no data loss (RPO=0) and rapid failover (RTO measured in seconds).
  • Geographic Compliance: The application must store user data in specific geographies to comply with data regulatory requirements and/or ensure low latency for reads and writes globally.

To meet any (or all) of these three criteria, PostgreSQL should be set up in a distributed configuration with multiple primary instances.

Different Types of Distributed PostgreSQL

So now, let’s break down each deployment option in detail, highlighting the strengths, weaknesses, and potential challenges. This should help you determine which type would work best for your specific application needs.

Multi-Master Asynchronous Replication

In a multi-master setup with asynchronous active-active replication, you provision multiple primary PostgreSQL instances. Usually, each instance stores a full copy of the data set and handles both write and read requests. Because any record can be updated on any instance, the instances replicate changes to one another.

Replication is done asynchronously for two reasons.

  1. The databases might be located in distant regions or data centers.
  2. Any update on one instance could cause a conflict on another.

Synchronous replication in this context could negatively affect latency and throughput due to conflict resolution and the distance between the Postgres instances.

Before PostgreSQL 16, logical replication in active-active configurations wasn’t recommended because of a transaction loopback issue. This occurs when an update from PostgreSQL instance A is replicated to instance B and then sent back from B to A.

Although this loopback issue was addressed in PostgreSQL 16, it’s still not advised to rely on built-in logical replication for multi-master setups. Instead, you should consider third-party solutions for multi-master replication.
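
For illustration only, here is roughly what a two-node active-active setup built on PostgreSQL 16's built-in logical replication could look like (the host, publication, and subscription names are hypothetical). The origin option is what addresses the loopback issue, because each node applies only changes that originated on its peer:

```sql
-- On node A (node B mirrors this with the names swapped):
CREATE PUBLICATION pub_node_a FOR ALL TABLES;

-- Subscribe node A to node B's publication. With origin = none, node A skips
-- changes that node B merely received via replication, preventing updates
-- from looping back to the instance they originated on.
CREATE SUBSCRIPTION sub_from_node_b
  CONNECTION 'host=node-b dbname=app user=replicator'
  PUBLICATION pub_node_b
  WITH (origin = none, copy_data = false);
```

Even with loopback handled, this setup still leaves conflict detection and resolution entirely to you, which is why third-party multi-master solutions remain the practical choice.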

An Example of Multi-Master Asynchronous Replication

To better illustrate multi-master replication, let’s consider EDB Postgres Distributed (PGD) using a simple setup involving three primary PostgreSQL instances deployed across three availability zones:

Multi-master PostgreSQL configuration with EDB Postgres Distributed (PGD).
  • Primary instances: Every availability zone hosts a primary instance that stores a complete data set. Changes are first committed locally, then asynchronously replicated to other primary instances in different locations. Due to this, data across all instances isn’t always the same at any given moment. There’s the potential for data lag, even when using the synchronous_commit property.
  • PGD Proxy: Client connections are managed through a proxy that routes application requests to the appropriate primary instances. The proxy is particularly useful in setups with multiple primary instances in a single availability zone. The proxy can designate a specific primary instance within the availability zone to manage client requests, reducing replication conflicts.
  • Standby instances: Asynchronous replication between primary instances carries a risk of data loss, especially if a primary fails before its changes are received by the other primaries. To counter this, complement each primary instance with one or more standby instances. For zero data loss (RPO=0) in a particular availability zone, each primary should replicate changes synchronously to at least two standbys. Note: Using only one standby can cause the primary to “get stuck” if that standby becomes unavailable.
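
To make that standby recommendation concrete, here is a minimal sketch using stock PostgreSQL synchronous replication settings (the standby names are illustrative; PGD layers its own commit-scope controls on top of mechanisms like this):

```sql
-- Run on each primary. Listing two standbys but requiring an acknowledgement
-- from ANY 1 of them means every commit has a synchronous copy, yet the
-- failure of a single standby does not block writes on the primary.
ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (standby_a, standby_b)';

-- 'on' waits for a standby to flush the WAL to disk before the commit
-- returns, which is what provides RPO=0 within the availability zone.
ALTER SYSTEM SET synchronous_commit = 'on';

SELECT pg_reload_conf();
```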

Considerations for Multi-Master with Asynchronous Replication

By using a multi-master setup with asynchronous active-active replication, you can scale both read and write workloads and improve latency for users in various locations. However, asynchronous replication can lead to conflicts, so application developers should be well-versed in a wide range of durability and consistency options, as well as concepts like commit scopes, group commit, conflict-free replicated data types, and other specifics.

In terms of high availability, multi-master replication is a strong choice for disaster recovery strategies designed to withstand zone, region, or data center outages. The final choice ultimately comes down to your target availability SLAs and RPO/RTO. For example, achieving no data loss (RPO=0) during major disruptions (like a data center failure) means ensuring all changes are fully replicated, including those not yet replicated from the failed region, which can lengthen recovery times and conflict with established RTO goals.

Finally, if the storage capacity of a single primary PostgreSQL instance is enough for your application, then adopting the multi-master solution above could be a good option. But if your data starts to outgrow the limits of a single database server, you’ll need to upgrade to a larger server. This can lead to longer upgrade cycles, potential downtimes, and a reduction in high availability during the upgrade.

Multi-Master Sharded PostgreSQL with a Coordinator

When your application's read/write workload exceeds the resources of a single primary database instance, you will most likely face storage exhaustion or CPU constraints. The traditional remedy is to scale vertically: upgrading the database server to an instance with more storage and CPU cores. This usually requires downtime, which may not be acceptable for some applications.

Sharding offers a way to achieve horizontal scalability by distributing data, and, consequently, read/write workloads across multiple database servers. In a sharded database cluster, a coordinator node receives application requests and routes them to the appropriate database servers for execution.

While the PostgreSQL database doesn’t natively support sharding, its ecosystem includes extensions for setting up sharded PostgreSQL clusters.

An Example of Multi-Master Sharded PostgreSQL with a Coordinator

To illustrate multi-master sharded PostgreSQL with a coordinator, let’s consider CitusData, a PostgreSQL extension for sharding. In this example, CitusData is used to create a basic multi-master setup across three availability zones.

Multi-master sharded PostgreSQL configuration with a coordinator, using CitusData.
  • Primary Instances: Each availability zone has a primary database instance that is assigned a subset of the data to hold (in shards). For example, the primary in Zone A owns shards 1-5, while Zone C’s primary handles shards 11-15. This distributed setup is a multi-master PostgreSQL configuration that is horizontally scalable while evenly distributing read/write workloads among all primaries.
  • Coordinator: The primary instances operate as standalone PostgreSQL servers and are unaware of each other. The client connects to the coordinator node, which knows the data distribution details, such as which primary owns which shards. The coordinator processes client query requests and routes them to the appropriate primary instances for execution (see the sketch after this list).
  • Standby Instances: Since no single primary instance holds the entire data set, there’s a risk of partial data loss if a primary fails. To mitigate this, each primary should have one or more standby instances. To ensure zero data loss (RPO=0), synchronous replication with at least two standby instances per primary is recommended. For a detailed high-availability architecture, refer to our comprehensive guide on high availability.
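
As a minimal sketch of how data ends up distributed in such a cluster, here is the typical CitusData workflow on the coordinator (the table, column, and constraint choices are illustrative):

```sql
-- On the coordinator (once per database):
CREATE EXTENSION citus;

-- An ordinary PostgreSQL table.
CREATE TABLE orders (
  tenant_id bigint NOT NULL,
  order_id  bigint NOT NULL,
  total     numeric,
  PRIMARY KEY (tenant_id, order_id)
);

-- Shard the table across the worker (primary) instances by tenant_id.
-- The coordinator records which worker owns which shards and routes
-- queries accordingly.
SELECT create_distributed_table('orders', 'tenant_id');

-- Small lookup tables can instead be replicated in full to every worker.
CREATE TABLE countries (code text PRIMARY KEY, name text);
SELECT create_reference_table('countries');
```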

Considerations for Multi-Master Sharded PostgreSQL with a Coordinator

Multi-master sharded PostgreSQL deployments are well-suited for scenarios that require horizontal scalability by distributing data and the read/write workloads across multiple database instances. However, the scaling process isn’t fully automated. While it’s feasible to add new nodes to the cluster, manually rebalancing the shards is necessary. Reducing the cluster size later isn’t an option either.
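
To illustrate that manual step, adding capacity typically looks like the following on the coordinator (the worker host name is illustrative, and the function names follow recent Citus releases, so they may differ in older versions):

```sql
-- Register a newly provisioned worker with the coordinator.
SELECT citus_add_node('worker-4.internal', 5432);

-- Shards are not redistributed automatically; trigger a rebalance so the
-- new worker takes over a share of the existing shards.
SELECT rebalance_table_shards();
```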

In terms of high availability (HA), this sharded PostgreSQL deployment provides flexibility to design a configuration that meets targeted availability SLAs, RPO, and RTO objectives—but merely having a sharded cluster with multiple primaries and standbys does not constitute a comprehensive HA setup. You will need to incorporate additional components for failover, failback, and load balancing.

Finally, solutions like CitusData are recommended for multi-tenant applications and real-time analytics. If you choose to apply them to traditional OLTP workloads, be aware of specific limitations that apply to sharded tables, including a lack of support for foreign and unique keys and serializable transactions, as well as cross-shard transactions that are only eventually consistent.

NOTE: The considerations above apply to CitusData and Azure Cosmos DB for PostgreSQL, a managed service for sharded CitusData deployments. If you use other sharding options, these considerations might differ.

Multi-Master Shared-Nothing PostgreSQL

In a multi-master shared-nothing PostgreSQL architecture, there is no notion of a coordinator node. Instead, the database nodes communicate directly, determining data distribution, transaction coordination, and other aspects of distributed deployment.

While the multi-master shared-nothing architecture is typically associated with distributed databases, PostgreSQL does not inherently align with this model. However, it is possible to run PostgreSQL deployments in this configuration using YugabyteDB, a distributed database built on PostgreSQL.

An Example of Multi-Master Shared-Nothing PostgreSQL

To illustrate multi-master shared-nothing PostgreSQL, let’s explore a multi-node YugabyteDB cluster spanning three availability zones.

Multi-master shared-nothing PostgreSQL configuration with YugabyteDB.
  • YugabyteDB Node: This configuration example uses a three-node YugabyteDB database cluster that spans three availability zones. Data is automatically sharded across all nodes. The nodes also store redundant copies of data based on your replication factor configuration. The nodes communicate directly to coordinate distributed transactions, synchronously replicate changes using the Raft consensus protocol, and automatically handle various failure scenarios.
  • Client: The YugabyteDB cluster operates transparently for client applications; they do not need to be aware of its distributed nature. Clients can connect to the cluster using existing PostgreSQL drivers and tools and send requests via any node. For example, in our architecture (above), the client connects to and sends requests through nodes 1 and 2. If either node 1 or 2 determines that some requests should be handled by node 3, they will forward those requests to node 3 automatically.
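
As a small sketch of that transparency, a client can point any PostgreSQL driver or psql at any node (5433 is YSQL's default port; the host, table, and column names are illustrative) and use plain SQL, while YugabyteDB shards and replicates the table behind the scenes:

```sql
-- Connect to any node, e.g.:
--   psql "host=node-1 port=5433 user=yugabyte dbname=yugabyte"

-- Standard PostgreSQL DDL; YugabyteDB hash-shards the table across all
-- nodes automatically and keeps redundant copies per the replication factor.
CREATE TABLE users (
  id    bigint NOT NULL,
  email text NOT NULL,
  PRIMARY KEY (id HASH)
);

INSERT INTO users VALUES (1, 'alice@example.com');
SELECT * FROM users WHERE id = 1;
```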

Considerations for Multi-Master Shared-Nothing PostgreSQL

YugabyteDB enables a multi-master shared-nothing PostgreSQL deployment that can scale data and read/write workloads (vertically and horizontally). The scaling process is elastic and fully automated, whether you’re adding or removing nodes from the cluster.

In terms of high availability, YugabyteDB's cluster is inherently fault-tolerant; nodes autonomously manage potential outages. If one node fails, the remaining nodes, which hold redundant and consistent copies of the data, take over client requests. YugabyteDB ensures RPO=0 (no data loss) and an RTO ranging from 3 to 15 seconds, depending on the database and TCP/IP configuration defaults.

However, while YugabyteDB is feature and runtime-compatible with PostgreSQL, transitioning from PostgreSQL to YugabyteDB might require more than a simple lift-and-shift for some applications. Certain application workloads and queries may need optimization to achieve greater performance within distributed database clusters.

Finally, while YugabyteDB’s query layer is an enhanced version of the PostgreSQL query engine, it’s the storage layer that sets it apart from PostgreSQL. Many storage-specific features (and extensions) of Postgres, such as logical and physical replication, are unnecessary in YugabyteDB. Likewise, common PostgreSQL maintenance tasks, such as vacuuming and managing transaction ID wraparound, along with other nuances of the Postgres storage layer, are no longer concerns with YugabyteDB.

Conclusion

The demand for distributed PostgreSQL deployments is growing, driven by forward-thinking enterprises’ needs for databases that can scale read and write workloads effectively, ensure resilience against outages with no data loss (RPO=0), and guarantee rapid failover (seconds-level RTO).

Also, many applications now require a globally distributed database solution. This is essential for those who have to store specific user data in specific regions, whether to comply with data regulations or ensure low latency for operational efficiency.

Although originally designed for single-server environments, PostgreSQL has evolved significantly. Its rich ecosystem and strong user base now include custom extensions and PostgreSQL-compatible technologies that allow you to design a distributed PostgreSQL architecture tailored to the unique requirements of your specific applications.

Distributed PostgreSQL FAQ:

  • Is PostgreSQL obsolete?
    No, PostgreSQL isn’t obsolete. Its ecosystem supports custom extensions and technologies for creating distributed architectures, meeting diverse application demands.
  • When is distributed PostgreSQL necessary?
    Distributed PostgreSQL is necessary for scalability, resiliency, and geographic compliance. When these needs arise, PostgreSQL should be configured in a distributed manner with multiple primary instances.
  • What are the different types of distributed PostgreSQL?
    Distributed PostgreSQL types include Multi-Master Asynchronous Replication, Multi-Master Sharded PostgreSQL with Coordinator, and Multi-Master Shared-Nothing architectures.