Setting up Streaming Replication in PostgreSQL

Configuring replication between two database servers is a proven strategy for achieving high availability during disasters and provides fault tolerance against unexpected failures. PostgreSQL satisfies this requirement through streaming replication. We will talk about another option, logical replication and logical decoding, in a future blog post.

Streaming replication is based on log shipping. Every transaction in PostgreSQL is written to a transaction log called the WAL (write-ahead log) to achieve durability. A slave uses these WAL segments to continuously replicate changes from its master.

Three processes play a major role in achieving streaming replication in PostgreSQL: the wal sender, the wal receiver, and the startup process.

The wal sender process runs on the master, while the wal receiver and startup processes run on the slave. When replication starts, the wal receiver sends the master the LSN (Log Sequence Number) up to which WAL data has been replayed on the slave. The wal sender on the master then streams WAL data from that LSN up to the latest LSN. The wal receiver writes the WAL data sent by the wal sender to WAL segments on the slave, and the startup process replays the data written to those segments. From that point, streaming replication continues.

Note: Log Sequence Number, or LSN, is a pointer to a location in the WAL.

Steps to set up streaming replication between a master and one slave

Step 1:

Create a user on the master that the slave will use to connect for streaming the WALs. This user must have the REPLICATION role.
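A command along these lines does the job. The role name replicator and the password are placeholders you should substitute with your own:

```sql
-- Run as a superuser on the master.
-- CREATE USER implies the LOGIN attribute; REPLICATION allows
-- this role to open streaming replication connections.
CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'secret';
```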

Step 2:

The following parameters on the master are mandatory when setting up streaming replication.

  • archive_mode : Must be set to ON to enable archiving of WALs.
  • wal_level : Must be set to at least hot_standby until version 9.5, or replica in later versions.
  • max_wal_senders : Must be set to 3 if you are starting with one slave. For every slave, you may add 2 wal senders.
  • wal_keep_segments : Sets the WAL retention in pg_xlog (until PostgreSQL 9.x) or pg_wal (from PostgreSQL 10). Every WAL segment requires 16MB of space unless you have explicitly modified the WAL segment size. You may start with 100 or more depending on the available space and the amount of WAL that could be generated during a backup.
  • archive_command : This parameter takes a shell command or an external program. It can be a simple copy command that copies the WAL segments to another location, or a script that archives the WALs to S3 or a remote backup server.
  • listen_addresses : Set it to * or to the range of IP addresses that need to be whitelisted to connect to your master PostgreSQL server. Your slave's IP must be whitelisted too; otherwise, the slave cannot connect to the master to replicate/replay WALs.
  • hot_standby : Must be set to ON on the standby/replica and has no effect on the master. However, when you set up your replication, the parameters set on the master are automatically copied. This parameter is important to enable READS on the slave; otherwise, you cannot run SELECT queries against it.

The above parameters can be set on the master using these commands followed by a restart:
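For example, the postgresql.conf entries could look broadly like this; the archive directory path is an assumption for illustration, and the values should be tuned to your environment:

```
# postgresql.conf on the master (restart required for these to take effect)
archive_mode = on
wal_level = replica                  # hot_standby on 9.5 and earlier
max_wal_senders = 3
wal_keep_segments = 100
archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f'
listen_addresses = '*'
hot_standby = on
```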

Step 3:

Add an entry to pg_hba.conf on the master to allow replication connections from the slave. The default location of pg_hba.conf is the data directory, although you may change its location in postgresql.conf. On Ubuntu/Debian, pg_hba.conf is located in the same directory as postgresql.conf by default; you can find the location of postgresql.conf on those systems by running the OS command pg_lsclusters.
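The entry looks similar to the following; the slave address 192.168.0.28 and the user replicator are placeholders for your own values:

```
# TYPE  DATABASE      USER        ADDRESS           METHOD
host    replication   replicator  192.168.0.28/32   md5
```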

The IP address mentioned in this line must match the IP address of your slave server. Please change the IP accordingly.

In order to get the changes into effect, issue a SIGHUP:
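Either of the following reloads the configuration without a full restart; the data directory path passed to -D is an assumption:

```shell
# From psql, as a superuser:
$ psql -c "SELECT pg_reload_conf();"

# Or from the shell:
$ pg_ctl -D $PGDATA reload
```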

Step 4:

pg_basebackup helps us stream the data directory through the wal sender process from the master to a slave to set up replication. You can also take a tar format backup from the master and copy it to the slave server. You can read more about tar format pg_basebackup here.

The following command can be used to stream the data directory from master to slave. This step is performed on the slave.
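A sketch of the command, assuming the master is at 192.168.0.12 and the replication user is replicator; -Xs streams the WALs generated during the backup, -P shows progress, and -R is explained below:

```shell
$ pg_basebackup -h 192.168.0.12 -p 5432 -U replicator \
    -D $PGDATA -Fp -Xs -P -R
```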

Please replace the IP address with your master’s IP address.

In the above command, you see the optional argument -R. When you pass -R, pg_basebackup automatically creates a recovery.conf file containing the role of the DB instance and the connection details of its master. The recovery.conf file is mandatory on the slave in order to set up streaming replication. If you are not using the backup method mentioned above, and instead take a tar format backup on the master and copy it to the slave, you must create this recovery.conf file manually. Here are the contents of the recovery.conf file:
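A minimal recovery.conf looks like this; the host, user, and password are placeholders matching the earlier examples:

```
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.12 port=5432 user=replicator password=secret'
```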

In the above file, the role of the server is defined by standby_mode, which must be set to ON for slaves in PostgreSQL. To stream WAL data, the connection details of the master server are configured in the primary_conninfo parameter.

These two parameters are written automatically when you use the optional argument -R while taking a pg_basebackup. The recovery.conf file must exist in the data directory ($PGDATA) of the slave.

Step 5:

Start your slave once the backup and restore are completed.

If you have taken the backup (remotely) using the streaming method mentioned in Step 4, it copies all the files and directories straight into the data directory of the slave, which means it serves as both the backup of the master's data directory and the restore, in a single step.

If you have taken a tar backup from the master and shipped it to the slave, you must extract the backup into the slave's data directory, and then create the recovery.conf as mentioned in the previous step. Once done, you may proceed to start your PostgreSQL instance on the slave using the following command.
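For example, something along these lines; the data directory and log file path are assumptions, and on Debian/Ubuntu packaged installations you would use the service tooling instead:

```shell
$ pg_ctl -D $PGDATA -l /tmp/slave_startup.log start
```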

Step 6:

In a production environment, it is always advisable to set the restore_command parameter appropriately. This parameter takes a shell command (or a script) that fetches the WAL needed by a slave when that WAL is no longer available on the master.

For example:

If a network issue has caused a slave to lag behind the master for a substantial time, the WALs required by the slave are unlikely to still be available in the master's pg_xlog or pg_wal directory. Hence, it is sensible to archive the WALs to a safe location, and to set the commands needed to restore them in the restore_command parameter in the recovery.conf file of your slave. To achieve that, add a line similar to the next example to the recovery.conf file on the slave. You may substitute the cp command with a shell command/script or a copy command that helps the slave fetch the appropriate WALs from the archive location.
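For instance, assuming the WALs are archived to the same directory used in the earlier archive_command example (the path is a placeholder for your archive location):

```
restore_command = 'cp /var/lib/postgresql/wal_archive/%f %p'
```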

Setting the above parameter requires a restart and cannot be done online.

Final step: validate that replication is set up

As discussed earlier, a wal sender and a wal receiver process are started on the master and the slave after setting up replication. Check for these processes on both master and slave using the following commands.
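Something like the following works; the exact process names vary slightly between PostgreSQL versions:

```shell
# On the master: look for the wal sender
$ ps -ef | grep "wal sender"

# On the slave: look for the wal receiver and the startup process
$ ps -ef | egrep "wal receiver|startup"
```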

You should see all three processes running across the master and slave, as in the following example.
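The output will look broadly like this; PIDs, addresses, LSNs, and segment names are illustrative and will differ on your servers:

```
# On the master
postgres  5432  ...  postgres: wal sender process replicator 192.168.0.28(51342) streaming 0/5000140

# On the slave
postgres  6001  ...  postgres: startup process   recovering 000000010000000000000005
postgres  6002  ...  postgres: wal receiver process   streaming 0/5000140
```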

You can see more details by querying the master’s pg_stat_replication view.
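For example, run this on the master; note that on 9.x the LSN columns are named sent_location, write_location, flush_location, and replay_location instead:

```sql
-- One row per connected standby; replay_lsn shows how far it has replayed.
SELECT client_addr, state, sent_lsn, write_lsn, flush_lsn, replay_lsn
FROM pg_stat_replication;
```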





Comments (6)

  • Douglas Hunley

    You’re conflating two different, albeit related, things. WAL shipping is not strictly needed for streaming replication. Which means that ‘archive_mode’, ‘archive_command’, and ‘restore_command’ are not necessary. Yes, they are good practice and *should* be used, but they are not necessary for streaming replication. Your comments about needing 3 wal_sender processes are also misleading. You need two because you are using ‘-Xs’ in the pg_basebackup command. The third won’t be used during that process. If you used ‘-Xf’ (not recommended) you’d only need 1 wal_sender. And once the slave is up and running, it will only ever use 1.

    September 11, 2018 at 4:07 pm
    • avivallarapu

      Hi Douglas. Yes, that’s true. You may not need archive_mode, archive_command, or restore_command for streaming replication. But when you set up streaming replication and, due to a network lag or any other reason, the slave falls behind and the WALs in pg_xlog or pg_wal are recycled, the slave can never get back in sync with the master without archived WALs. Thus, we made sure to suggest best practices while building streaming replication, rather than just building it. In every production environment, these parameters are a must. Archiving the WALs also enables a better backup strategy. So, you consider all these factors while setting up a replica. Regarding setting wal senders to 3: while building a slave, if you are using ‘-Xs’ to stream WALs while streaming the data directory, you need 2 wal sender processes. If, at the same time, a backup job is running on the master for whatever reason, you need another wal sender process. It should not hurt to set additional wal sender processes. I usually recommend 2 wal sender processes per slave, and 1 dedicated to the master, for many obvious reasons.

      September 11, 2018 at 4:26 pm
  • cmennens

    When you create the replicator role, you MUST specify WITH LOGIN REPLICATION or you can’t do a pg_basebackup from the slave…

    October 23, 2018 at 12:36 pm
    • Avinash Vallarapu

      You need to specify “WITH LOGIN” only when you use CREATE ROLE. As we have used CREATE USER …, it automatically assigns the LOGIN attribute to the replication user -> replicator (here in this post).

      December 17, 2018 at 5:38 pm
  • Krishna

    Hi Avinash,

    Thanks for the detailed documentation, it helped me a lot in my academic project.

    I have tried the same on a Windows environment and I am struggling to start the SLAVE node (as I have moved the MASTER node data to the SLAVE). When I use the original data folder, I am able to start the SLAVE node, but after I do all the configurations, I face the error shown below.

    2018-12-24 11:17:31.235 IST [13104] FATAL: database system identifier differs between the primary and standby
    2018-12-24 11:17:31.235 IST [13104] DETAIL: The primary’s identifier is 6637748246234064208, the standby’s identifier is 6637788080756493616.

    December 24, 2018 at 7:57 am
  • Priit

    Hi, thanks for the great guide. But I have a question regarding the physical replication slot. Is it useful to create this slot as well, or what are your thoughts? I see in earlier versions it was used, and the definition itself says: “Replication slots are a crash-safe data structure which can be created on either a master or a standby to prevent premature removal of write-ahead log segments needed by a standby, as well as (with hot_standby_feedback=on) pruning of tuples whose removal would cause replication conflicts.”

    January 18, 2019 at 1:06 am
