This blog was originally published in September 2018 and was updated in February 2024.
Configuring replication between two databases is widely considered the best strategy for achieving high availability during disasters, and it provides fault tolerance against unexpected failures. PostgreSQL satisfies this requirement through streaming replication. We will cover another option, logical replication and logical decoding, in a future blog post.
PostgreSQL streaming replication is a process that replicates data from a primary PostgreSQL database to one or more standby databases in real-time, ensuring that these standby databases mirror the primary database accurately. This replication method ensures high availability and redundancy by keeping the standby servers synchronized with the primary server. It plays a vital role in disaster recovery, load balancing, and reducing downtime for maintenance activities. By continuously replicating data with minimal delay, streaming replication allows for a smooth transition to a standby server with virtually no interruption in case the primary server encounters a failure.
PostgreSQL streaming replication offers a framework for enhancing database resilience, performance, and data safety. This technology underpins critical capabilities businesses rely on for continuous operation and data management.
Streaming replication ensures that your PostgreSQL databases remain accessible, minimizing downtime and maintaining business continuity. Replicating data to standby servers guarantees that a backup is always ready to take over in case the primary server fails, thus enhancing system reliability. Learn more about deploying PostgreSQL for high availability in our blog post Setting Up and Deploying PostgreSQL for High Availability.
One of the key advantages of streaming replication is its ability to distribute read queries across multiple servers. This maximizes the utilization of your hardware resources and improves application response times, leading to a more efficient and scalable system architecture.
In the event of a catastrophic failure, streaming replication is your first line of defense to ensure data is not permanently lost by providing a mechanism for fast recovery, minimizing data loss and operational downtime. Dive deeper into strategies in our eBook PostgreSQL Disaster Recovery.
Streaming replication facilitates real-time data integration into data warehouses, enabling timely insights and decision-making. It supports the replication of live data to a warehouse, where it can be analyzed without impacting the performance of the primary database, thus serving both operational and analytical purposes seamlessly.
Streaming replication in PostgreSQL works on log shipping. Every transaction in Postgres is written to a transaction log called the WAL (write-ahead log) to achieve durability. A slave uses these WAL segments to continuously replicate changes from its master.
Three mandatory processes – wal sender, wal receiver, and startup – play a major role in achieving streaming replication in Postgres.
A wal sender process runs on the master, whereas the wal receiver and startup processes run on its slave. When you start replication, the wal receiver process sends the master the LSN (Log Sequence Number) up to which WAL data has been replayed on the slave. The wal sender process on the master then sends WAL data to the slave, starting from that LSN up to the latest LSN. The wal receiver writes the WAL data sent by the wal sender to WAL segments, and the startup process on the slave replays the data written to those WAL segments. From then on, streaming replication continues.
Note: Log Sequence Number, or LSN, is a pointer to a location in the WAL.
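You can inspect these LSNs yourself. The following queries are a small sketch to run on the master against a live cluster (pg_current_wal_lsn() and pg_wal_lsn_diff() exist in PostgreSQL 10 and later; older releases use pg_current_xlog_location() instead):

```sql
-- Current WAL write location on the master
SELECT pg_current_wal_lsn();

-- Distance in bytes between the current write location and an older LSN
-- (the example LSN here is illustrative)
SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), '0/50000D68');
```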
Create a user on the master that the slave will use to connect for streaming WALs. This user must have the REPLICATION role.
CREATE USER replicator
WITH REPLICATION
ENCRYPTED PASSWORD 'replicator';
The following parameters on the master are mandatory when setting up streaming replication: wal_level, archive_mode, max_wal_senders, wal_keep_segments, listen_addresses, hot_standby, and archive_command.
These parameters can be set on the master using the following commands, followed by a restart:
ALTER SYSTEM SET wal_level TO 'hot_standby';
ALTER SYSTEM SET archive_mode TO 'ON';
ALTER SYSTEM SET max_wal_senders TO '5';
ALTER SYSTEM SET wal_keep_segments TO '10';
ALTER SYSTEM SET listen_addresses TO '*';
ALTER SYSTEM SET hot_standby TO 'ON';
ALTER SYSTEM SET archive_command TO 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f';
$ pg_ctl -D $PGDATA restart -mf
Add an entry to pg_hba.conf on the master to allow replication connections from the slave. The default location of pg_hba.conf is the data directory, though you may modify its location in postgresql.conf. On Ubuntu/Debian, pg_hba.conf is located in the same directory as postgresql.conf by default. You can find the location of postgresql.conf on Ubuntu/Debian by running the OS command pg_lsclusters.
host replication replicator 192.168.0.28/32 md5
The IP address mentioned in this line must match the IP address of your slave server. Please change the IP accordingly.
To put the changes into effect, issue a SIGHUP:
$ pg_ctl -D $PGDATA reload
Or
$ psql -U postgres -p 5432 -c "select pg_reload_conf()"
pg_basebackup helps us stream the data directory through the wal sender process from the master to a slave to set up replication. You can also take a tar-format backup from the master and copy it to the slave server. You can read more about tar-format pg_basebackup here.
The following command can be used to stream the data directory from master to slave. This step is performed on the slave.
$ pg_basebackup -h 192.168.0.28 -U replicator -p 5432 -D $PGDATA -P -Xs -R
Please replace the IP address with your master’s IP address.
In the above command, you see an optional argument -R. When you pass -R, it automatically creates a recovery.conf file that contains the role of the DB instance and the connection details of its master. A recovery.conf file is mandatory on the slave in order to set up streaming replication. If you instead take a tar-format backup on the master and copy it to the slave, you must create the recovery.conf file manually. Here are the contents of the recovery.conf file:
$ cat $PGDATA/recovery.conf

standby_mode = 'on'
primary_conninfo = 'host=192.168.0.28 port=5432 user=replicator password=replicator'
restore_command = 'cp /path/to/archive/%f %p'
archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r'
In the above file, the role of the server is defined by standby_mode, which must be set to 'on' for slaves in Postgres. The details of the master server, needed to stream WAL data, are configured with the parameter primary_conninfo.
These two parameters, standby_mode and primary_conninfo, are created automatically when you use the optional argument -R while taking a pg_basebackup. The recovery.conf file must exist in the data directory ($PGDATA) of the slave.
Start your slave once the backup and restore are completed.
If you have configured a backup (remotely) using the streaming method mentioned in Step 4, it copies all the files and directories directly into the data directory of the slave. This means it is both a backup of the master data directory and the restore, in a single step.
If you have taken a tar backup from the master and shipped it to the slave, you must untar the backup into the slave's data directory, then create a recovery.conf file as mentioned in the previous step. Once done, start the PostgreSQL instance on the slave using the following command.
$ pg_ctl -D $PGDATA start
In a production environment, it is always advisable to have the parameter restore_command set appropriately. This parameter takes a shell command (or a script) that can be used to fetch the WAL needed by a slave if the WAL is not available on the master.
For example:
If a network issue has caused a slave to lag behind the master for a substantial time, the WALs required by the slave are unlikely to still be available in the master's pg_xlog or pg_wal directory. Hence, it is sensible to archive the WALs to a safe location and to set restore_command in the slave's recovery.conf to a command that restores WALs from there. To achieve that, add a line similar to the following example to the recovery.conf file on the slave. You may substitute the cp command with a shell command or script that fetches the appropriate WALs from the archive location.
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
Setting the above parameter requires a restart and cannot be done online.
As discussed earlier, a wal sender and a wal receiver process are started on the master and the slave after setting up replication. Check for these processes on both master and slave using the following commands.
On Master
==========
$ ps -eaf | grep sender

On Slave
==========
$ ps -eaf | grep receiver
$ ps -eaf | grep startup
You should see all three processes running on the master and slave, as in the following example log.
On Master
=========
$ ps -eaf | grep sender
postgres  1287  1268  0 10:40 ?  00:00:00 postgres: wal sender process replicator 192.168.0.28(36924) streaming 0/50000D68

On Slave
=========
$ ps -eaf | egrep "receiver|startup"
postgres  1251  1249  0 10:40 ?  00:00:00 postgres: startup process   recovering 000000010000000000000050
postgres  1255  1249  0 10:40 ?  00:00:04 postgres: wal receiver process   streaming 0/50000D68
You can see more details by querying the master’s pg_stat_replication view.
$ psql
postgres=# \x
Expanded display is on.
postgres=# select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 1287
usesysid         | 24615
usename          | replicator
application_name | walreceiver
client_addr      | 192.168.0.28
client_hostname  |
client_port      | 36924
backend_start    | 2018-09-07 10:40:48.074496-04
backend_xmin     |
state            | streaming
sent_lsn         | 0/50000D68
write_lsn        | 0/50000D68
flush_lsn        | 0/50000D68
replay_lsn       | 0/50000D68
write_lag        |
flush_lag        |
replay_lag       |
sync_priority    | 0
sync_state       | async
Reference: https://www.postgresql.org/docs/10/static/warm-standby.html#STANDBY-SERVER-SETUP
When implementing PostgreSQL streaming replication, it is critical to understand the performance consequences and optimization options. While this replication strategy improves availability and disaster recovery, it also adds challenges that must be addressed in order to maintain system efficiency.
Streaming replication's impact on the primary server's performance is low but not negligible. The primary server must handle the additional load of sending write-ahead logs (WAL) to standby servers. The process is efficient but does require network bandwidth and resources, which may affect the primary's speed, particularly under heavy write loads.
To address these challenges and enhance replication efficiency, it’s crucial to fine-tune the replication configuration. Making precise adjustments, including setting the WAL segment size, optimizing the WAL buffer, and configuring suitable replication timeouts, can greatly enhance the efficiency of replication and lighten the workload on the primary server. In addition, employing features such as delayed replication and replication slots significantly optimize resource consumption and maintain consistent data across servers.
Distributing read and write operations across servers is a fundamental strategy for managing a replicated PostgreSQL setup. By directing read queries to standby servers through streaming replication, the primary server’s load is reduced, enhancing the system’s performance. It necessitates meticulous planning and setup to route read operations to the most suitable server, taking into account variables such as replication delay and the capacity of each server. Achieving this balance is crucial for maximizing performance and scalability within a replicated PostgreSQL environment, enabling the system to accommodate increasing volumes of data and user requests efficiently.
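As a minimal sketch of such routing at the connection level, libpq (PostgreSQL 10 and later) accepts multiple hosts and a target_session_attrs parameter, so write traffic can be directed to whichever host currently accepts read-write sessions; the IP addresses below reuse this post's example topology and are placeholders for your own:

```shell
# Writes: connect only to the host that accepts read-write sessions
psql "host=192.168.0.28,192.168.0.29 port=5432 target_session_attrs=read-write"

# Reads: point read-only sessions at a standby
# (PostgreSQL 14+ also supports target_session_attrs=standby for this)
psql "host=192.168.0.29 port=5432"
```

More sophisticated routing, taking replication lag into account, is typically done at the application level or with an external load balancer.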
To ensure your PostgreSQL streaming replication runs smoothly and efficiently, incorporating specific optimization strategies is essential. These tips can help you enhance performance, reduce replication lag, and ensure the high availability and reliability of your database system.
Optimizing the Write-Ahead Logging (WAL) system, vital for ensuring data durability and facilitating replication in PostgreSQL, requires fine-tuning the configuration to balance performance efficiency and disk space utilization. By adjusting settings like wal_buffers, wal_writer_delay, and max_wal_size, you can enhance the efficiency of write operations and the speed of replication. Allocating a dedicated disk for WAL files can also decrease I/O contention, thereby boosting overall system performance.
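As a sketch, those settings can be adjusted with ALTER SYSTEM; the values below are illustrative starting points, not recommendations, and must be tuned against your own workload. Note that wal_buffers only takes effect after a server restart, while the others can be applied with a reload:

```sql
-- Illustrative values; tune against your workload
ALTER SYSTEM SET wal_buffers TO '16MB';        -- requires a restart
ALTER SYSTEM SET wal_writer_delay TO '200ms';  -- reload is enough
ALTER SYSTEM SET max_wal_size TO '2GB';        -- reload is enough
SELECT pg_reload_conf();
```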
For reduced latency and enhanced throughput, it’s crucial to configure and optimize your network specifically for replication traffic. Strategies may involve dedicating network interfaces to handle replication data, tweaking network configurations to support bulk data transfers efficiently, and maintaining dependable, high-speed connections between your primary and standby servers.
Replication slots are used to safeguard against the premature deletion of WAL segments before they have been applied by the standby server, thus preventing data loss. By keeping an eye on and fine-tuning these slots, you can better control disk space usage on the primary server and avert replication delays. It’s advisable to periodically assess and modify the quantity of replication slots and their configurations, tailoring them to your specific replication requirements and the capabilities of your standby servers.
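As a sketch, a physical replication slot can be created on the master and monitored as follows; the slot name standby_slot is a placeholder, and the slave points at it via primary_slot_name in its recovery configuration:

```sql
-- On the master: create a physical replication slot for the slave
SELECT pg_create_physical_replication_slot('standby_slot');

-- Check slot status and the oldest WAL position the slot retains
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
```

Keep in mind that an abandoned slot prevents the master from recycling WAL, so unused slots should be dropped with pg_drop_replication_slot().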
Replication delay can influence the timeliness of data on standby servers, potentially affecting read scalability and failover operations. To control replication delay, tracking delay indicators using resources like pg_stat_replication is critical. Implement strategies such as using faster hardware, optimizing queries, and adjusting WAL and network configurations to minimize lag. Consider using delayed replication to balance data freshness and availability in scenarios where lag is unavoidable.
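A simple lag check on the master might look like the following sketch; pg_wal_lsn_diff() reports lag in bytes, while the write_lag, flush_lag, and replay_lag columns (PostgreSQL 10+) report it as time intervals:

```sql
-- On the master: per-standby replication lag
SELECT client_addr, state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes,
       write_lag, flush_lag, replay_lag
FROM pg_stat_replication;
```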
Efficiently routing read queries to standby servers can decrease the workload on the primary server and elevate the system’s overall efficiency. By incorporating load balancing through modifications at the application level or utilizing external utilities, you can ensure that standby servers process read-only queries. This strategy boosts system performance and facilitates read scaling, making it possible to manage an increased volume of queries by expanding the number of standby servers.
Did you know that Percona now provides PostgreSQL support services? We’re here to help. Ensuring PostgreSQL is ready for enterprise deployment extends beyond mere software installation. Our comprehensive services range from daily support to specialized consulting for intricate performance issues and design obstacles, providing the professional know-how necessary for operating PostgreSQL in mission-critical and production settings.
PostgreSQL streaming replication is a method for copying and synchronizing data from a primary database to one or more secondary databases in real-time, allowing for continuous data replication and ensuring high availability and data redundancy.
Benefits include improved data availability and disaster recovery, load balancing for read operations across multiple servers, and minimal downtime during maintenance or unexpected failures.
Streaming replication requires setting up a primary PostgreSQL server along with one or more standby servers. Essential steps include modifying the primary server’s settings to permit connections from the standby servers, establishing authentication measures, preparing the standby servers with a base backup from the primary, and initiating the replication process.
In synchronous replication, transactions must be confirmed by both the primary and a specified number of standby servers before they are considered committed, ensuring data consistency but potentially impacting performance. Asynchronous replication allows transactions to be completed without waiting for standby servers, offering better performance but at the risk of data loss if the primary server fails before the standby is updated.
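As a sketch, synchronous replication is enabled on the master with synchronous_standby_names; the name standby1 below is a placeholder and must match the application_name the standby sets in its primary_conninfo:

```sql
-- On the master: require commit confirmation from one named standby
ALTER SYSTEM SET synchronous_standby_names TO 'standby1';
SELECT pg_reload_conf();
```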
Streaming replication performance can be monitored using various tools like Percona Monitoring and Management and PostgreSQL’s built-in functions. Key metrics include replication lag, transaction throughput, and resource usage on both primary and standby servers. Tools like pg_stat_replication provide real-time insights into the replication process.