Replication in PostgreSQL – Setting up Streaming

Configuring replication between two databases is widely considered the best strategy for achieving high availability during disasters, and it provides fault tolerance against unexpected failures. PostgreSQL satisfies this requirement through streaming replication. We will cover another option, logical replication and logical decoding, in a future blog post.

Understanding replication in PostgreSQL

Streaming replication in PostgreSQL is based on log shipping. Every transaction in PostgreSQL is written to a transaction log called the WAL (write-ahead log) to achieve durability. A slave uses these WAL segments to continuously replicate changes from its master.

Three processes play a major role in achieving streaming replication in PostgreSQL – the wal sender, the wal receiver, and the startup process.

A wal sender process runs on the master, whereas the wal receiver and startup processes run on the slave. When you start replication, the wal receiver process sends the master the LSN (Log Sequence Number) up to which WAL data has been replayed on the slave. The wal sender process on the master then sends the WAL data from that LSN up to the latest LSN to the slave. The wal receiver writes the WAL data sent by the wal sender to WAL segments, and the startup process on the slave replays that data. From there, streaming replication continues.

Note: Log Sequence Number, or LSN, is a pointer to a location in the WAL.

Streaming replication in PostgreSQL between a master and one slave

Step 1:

On the master, create the user that the slave will use to connect for streaming the WALs. This user must have the REPLICATION role.
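As a sketch of this step (the role name `replicator` and the password are placeholders, not taken from the original post), the user can be created from the shell on the master:

```shell
# Run on the master as a superuser. The role name and password are examples.
# CREATE USER implies LOGIN, so only REPLICATION needs to be granted explicitly.
psql -c "CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'secret'"
```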

Step 2:

The following parameters on the master are mandatory when setting up streaming replication.

  • archive_mode : Must be set to ON to enable archiving of WALs.
  • wal_level : Must be set to at least hot_standby up to version 9.5, or replica in later versions.
  • max_wal_senders : Must be set to 3 if you are starting with one slave. For every slave, you may add 2 wal senders.
  • wal_keep_segments : Sets the number of WAL segments retained in pg_xlog (until PostgreSQL 9.x) or pg_wal (from PostgreSQL 10). Every WAL segment requires 16MB of space unless you have explicitly modified the WAL segment size. You may start with 100 or more, depending on the available space and the amount of WAL that could be generated during a backup.
  • archive_command : This parameter takes a shell command or external programs. It can be a simple copy command to copy the WAL segments to another location or a script that has the logic to archive the WALs to S3 or a remote backup server.
  • listen_addresses : Specifies which IP interfaces accept connections. You can specify all the TCP/IP addresses on which the server should listen for client connections; ‘*’ means all available IP interfaces. The default, localhost, allows only local TCP/IP connections to the postgres server.
  • hot_standby : Must be set to ON on the standby/replica and has no effect on the master. However, when you set up your replication, parameters set on the master are automatically copied. This parameter is important to enable READS on the slave; otherwise, you cannot run SELECT queries against the slave.

The above parameters can be set on the master using these commands followed by a restart:
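The original commands were not preserved in this copy of the post; one possible way to set them, sketched here with illustrative values (the archive directory path is an assumption), is via ALTER SYSTEM:

```shell
# Set each parameter in postgresql.auto.conf via ALTER SYSTEM (values are examples).
psql -c "ALTER SYSTEM SET archive_mode TO 'ON'"
psql -c "ALTER SYSTEM SET wal_level TO 'replica'"
psql -c "ALTER SYSTEM SET max_wal_senders TO '3'"
psql -c "ALTER SYSTEM SET wal_keep_segments TO '100'"
psql -c "ALTER SYSTEM SET archive_command TO 'cp %p /var/lib/pgsql/archive/%f'"
psql -c "ALTER SYSTEM SET listen_addresses TO '*'"
psql -c "ALTER SYSTEM SET hot_standby TO 'ON'"
# archive_mode, wal_level, max_wal_senders and listen_addresses take effect
# only after a restart:
pg_ctl -D $PGDATA restart -mf
```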

Step 3:

Add an entry to pg_hba.conf on the master to allow replication connections from the slave. The default location of pg_hba.conf is the data directory; however, you may modify its location in postgresql.conf. On Ubuntu/Debian, pg_hba.conf may be located in the same directory as postgresql.conf by default. You can find the location of postgresql.conf on Ubuntu/Debian with the OS command pg_lsclusters.
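A typical entry looks like the following (the address and user name are placeholders; the original post's exact entry was not preserved in this copy):

```
# TYPE  DATABASE     USER        ADDRESS           METHOD
host    replication  replicator  192.168.0.28/32   md5
```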

The IP address mentioned in this line must match the IP address of your slave server. Please change the IP accordingly.

In order to get the changes into effect, issue a SIGHUP:
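Either of the following sends a SIGHUP, causing the server to re-read its configuration files without a restart:

```shell
# Reload configuration via pg_ctl (sends SIGHUP to the postmaster) ...
pg_ctl -D $PGDATA reload
# ... or from within psql:
psql -c "SELECT pg_reload_conf()"
```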

Step 4:

pg_basebackup helps us stream the data directory through the wal sender process from the master to a slave to set up replication. You can also take a tar format backup from the master and copy it to the slave server. You can read more about tar format pg_basebackup here.

The following step streams the data directory from the master to the slave, and is performed on the slave.

Please replace the IP address with your master’s IP address.
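The original command was not preserved in this copy of the post; a sketch of it, using a placeholder master IP and the replication user created earlier, would be:

```shell
# Run on the slave. 192.168.0.28 is an example master IP; replace it with yours.
# -R writes a recovery.conf automatically; -Xs streams WAL alongside the data.
pg_basebackup -h 192.168.0.28 -p 5432 -U replicator -D $PGDATA -Fp -Xs -P -R
```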

In the above command, you see the optional argument -R. When you pass -R, pg_basebackup automatically creates a recovery.conf file that contains the role of the DB instance and the connection details of its master. Creating the recovery.conf file on the slave is mandatory for setting up streaming replication. If you are not using the backup method mentioned above, and instead take a tar format backup on the master and copy it to the slave, you must create this recovery.conf file manually. Here are the contents of the recovery.conf file:
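The exact file contents were not preserved in this copy of the post; a minimal sketch, with placeholder host and credentials, is:

```
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.28 port=5432 user=replicator password=secret'
```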

In the above file, the role of the server is defined by standby_mode, which must be set to ON for slaves in postgres.
To stream WAL data, the details of the master server are configured using the parameter primary_conninfo.

The two parameters standby_mode and primary_conninfo are created automatically when you use the optional argument -R while taking a pg_basebackup. This recovery.conf file must exist in the data directory ($PGDATA) of the slave.

Step 5:

Start your slave once the backup and restore are completed.

If you have configured the backup (remotely) using the streaming method mentioned in Step 4, it copies all the files and directories directly into the data directory of the slave, which means it is both a backup of the master's data directory and a restore, in a single step.

If you have taken a tar backup from the master and shipped it to the slave, you must untar the backup into the slave's data directory, then create a recovery.conf as described in the previous step. Once done, you may start the PostgreSQL instance on the slave using the following command.
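The start command was not preserved in this copy of the post; a sketch, assuming $PGDATA points at the slave's data directory, is:

```shell
# Start the slave cluster; $PGDATA must be the slave's data directory.
pg_ctl -D $PGDATA start
```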

Step 6:

In a production environment, it is always advisable to have the parameter restore_command set appropriately. This parameter takes a shell command (or a script) that can be used to fetch the WAL needed by a slave if the WAL is not available on the master.

For example:

If a network issue has caused a slave to lag behind the master for a substantial time, the WALs required by the slave are less likely to still be available in the master's pg_xlog or pg_wal location. Hence, it is sensible to archive the WALs to a safe location, and to set the command needed to restore them as the restore_command parameter in the recovery.conf file of your slave. To achieve that, add a line similar to the next example to the recovery.conf file on the slave. You may substitute the cp command with a shell command/script or a copy command that helps the slave fetch the appropriate WALs from the archive location.
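A sketch of such a line (the archive path is a placeholder; %f and %p are expanded by PostgreSQL to the requested WAL file name and its destination path):

```
restore_command = 'cp /path/to/archive/%f %p'
```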

Setting the above parameter requires a restart and cannot be done online.

Final step: validate that PostgreSQL replication is set up

As discussed earlier, a wal sender and a wal receiver process are started on the master and the slave after replication is set up. Check for these processes on both master and slave using the following commands.
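The commands were not preserved in this copy of the post; a simple way to check is:

```shell
# On the master: look for the wal sender process.
ps -eaf | grep sender
# On the slave: look for the wal receiver and startup processes.
ps -eaf | egrep "receiver|startup"
```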

You should see all three processes running across the master and the slave, as in the following example log.

You can see more details by querying the master’s pg_stat_replication view.
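For example, from a shell on the master (-x prints one column per line, which is easier to read for this wide view):

```shell
psql -x -c "SELECT * FROM pg_stat_replication"
```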


If you found this post interesting…

Did you know that Percona now provides PostgreSQL support services? We’re here to help.


Comments (22)

  • Douglas Hunley

    You’re conflating two different, albeit related, things. WAL shipping is not strictly needed for streaming replication. Which means that ‘archive_mode’ , ‘archive_command’ , and ‘restore_command’ are not necessary. Yes, they are good practice and *should* be used, but they are not necessary for streaming replication. Your comments about needing 3 wal_sender processes is also misleading. You need two because you are using ‘-Xs’ in the pg_basebackup command. The third won’t be used during that process. If you used ‘-Xf’ (not recommended) you’d only need 1 wal_sender. And once the slave is up and running, it will only ever use 1.

    September 11, 2018 at 4:07 pm
    • avivallarapu

      Hi Douglas. Yes, that's true. You may not need archive_mode, archive_command, or restore_command for streaming replication. But when you set up streaming replication and, due to a network lag or any other reason, the slave falls behind while the WALs in pg_xlog or pg_wal are recycled, then without archiving the WALs the slave can never get back in sync with the master. Thus, we made sure to suggest best practices while building streaming replication, rather than just building it. In every production environment, these parameters are a must. Archiving the WALs also ensures a better backup strategy. So, consider all these factors while setting up a replica. Regarding max_wal_senders being set to 3: while building a slave, if you are using ‘-Xs’ to stream WALs while streaming the data directory, you need 2 wal sender processes. At the same time, if a backup job is running on the master for whatever reason, you need another wal sender process. It should not hurt to set additional wal sender processes. I usually recommend 2 wal sender processes per slave, and 1 dedicated to the master for many obvious reasons.

      September 11, 2018 at 4:26 pm
      • Chk Pcs

        Hi Avivallarapu
        I'm really a newbie to PostgreSQL, and I'm confused.
        Do we need to enable archive_mode = on for streaming replication?

        If you use streaming replication without file-based continuous archiving, the server might recycle old WAL segments before the standby has received them. If this occurs, the standby will need to be reinitialized from a new base backup. You can avoid this by setting wal_keep_segments to a value large enough to ensure that WAL segments are not recycled too early, or by configuring a replication slot for the standby. If you set up a WAL archive that’s accessible from the standby, these solutions are not required, since the standby can always use the archive to catch up provided it retains enough segments.

        March 7, 2019 at 4:31 am
  • cmennens

    When you create the replicator role, you MUST specify WITH LOGIN REPLICATION or you can’t do a pg_basebackup from the slave…

    October 23, 2018 at 12:36 pm
    • Avinash Vallarapu

      You need to specify “WITH LOGIN” only when you use CREATE ROLE. As we have used CREATE USER …, it automatically assigns the LOGIN attribute to the replication user -> replicator (here in this post).

      December 17, 2018 at 5:38 pm
  • Krishna

    HI Avinash,

    Thanks for the detailed documentation, it helped me a lot in my academic project.

    I have tried the same on a Windows environment and I am struggling to start the SLAVE node (as I have moved the MASTER node data to the SLAVE). When I use the original data folder, I am able to start the SLAVE node, but when I do all the configurations, I face the error shown below.

    2018-12-24 11:17:31.235 IST [13104] FATAL: database system identifier differs between the primary and standby
    2018-12-24 11:17:31.235 IST [13104] DETAIL: The primary’s identifier is 6637748246234064208, the standby’s identifier is 6637788080756493616.

    December 24, 2018 at 7:57 am
    • Avinash Vallarapu

      Hi Krishna, hope your issue is resolved. You may not have used the data directory that was copied from the master. initdb generates a system identifier and the same gets copied to the slave, so something went wrong with the copy.

      March 1, 2019 at 9:45 am
  • Priit

    Hi, thanks for the great guide. But I have a question regarding the physical replication slot. Is it useful to create this slot as well, or what are your thoughts? I see in earlier versions it was used, and the definition itself also says: “Replication slots are a crash-safe data structure which can be created on either a master or a standby to prevent premature removal of write-ahead log segments needed by a standby, as well as (with hot_standby_feedback=on) pruning of tuples whose removal would cause replication conflicts.”

    January 18, 2019 at 1:06 am
  • Sergey Gavrilov

    Thank you Avinash Vallarapu! Nice explanation! It really saved my day! Keep going that way!))

    March 1, 2019 at 1:23 am
    • Avinash Vallarapu

      Thank you Sergey. Good to know.

      March 1, 2019 at 9:43 am
  • Kavita

    Hello Avinash, have you setup streaming replication in postgreSQL running in docker environment?

    June 4, 2019 at 7:18 pm
    • avivallarapu

      It should not be much different, but I can surely think of a post on it, Kavita.

      October 30, 2019 at 4:44 pm
  • Balasubramanian M P

    Hi, I have configured data replication but I don't see the process in select * from pg_stat_replication; below is the output of the command:

    na=# select * from pg_stat_replication;
    pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | backend_xmin | state | sent_lsn | write_lsn | flush_lsn | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state
    (0 rows)

    My database name is na, but the recovery.conf file got created under a directory named -P, at /opt/postgres/data/-P

    [root@meylvhpnaap02 data]# pwd

    Please help on this

    October 18, 2019 at 10:18 am
    • avivallarapu

      Do you see any errors in the PG logs of both the master and standby?

      October 30, 2019 at 4:43 pm
  • geno

    “listen_addresses : Set it to * or the range of IP Addresses that need to be whitelisted to connect to your master PostgreSQL server. Your slave IP should be whitelisted too, else, the slave cannot connect to the master to replicate/replay WALs.”

    I don't think the word “whitelisted” is appropriate here. It implies something will be blocking the IP address unless specified.
    Instead, you're telling postgres which addresses to listen on for incoming connections.

    You are definitely NOT adding the slave's IP address to the ‘listen_addresses’ list on the master.
    Go read the documentation again.

    Otherwise, thanks for the article

    November 25, 2019 at 2:10 pm
  • Shubha

    Hi Avinash, This is an excellent write up!
    I have a question on creating a new user for replication. Why can't we use an existing user for the replication job as well?
    Every blog I have come across suggests creating a new user for Replication.
    I would really appreciate your insight on this.


    December 16, 2019 at 5:08 pm
    • avivallarapu

      You can definitely use an existing user as well. The user just needs to have the REPLICATION role. Sometimes we see users using postgres, the superuser, to set up replication. It works, but it is not recommended, as you do not really need a superuser that has privileges to perform any possible action on the database just to set up replication.

      January 15, 2020 at 1:37 pm
  • Tony Libbrecht

    A number of months ago, I set up streaming replication between 2 locations.
    Following your blog, setup went well and streaming worked well all the time (PostgreSQL 9.5).

    Now an issue happened on master, and streaming stopped.
    Therefore I want to take base backup on slave again and reinitiate streaming from scratch.
    It is not important that I might lose some data. Just want to start over.

    Question : on master, currently there are a number of wal files remaining in pg_xlog folder.
    Do I need to remove the pg_xlog files on master, before taking pg_basebackup on slave ?

    Thank you

    January 15, 2020 at 7:56 am
    • avivallarapu

      Hi Tony, Great to hear that the blog post helped you. You don’t have to remove those WAL segments from the pg_xlog of the Master.

      January 15, 2020 at 1:44 pm
  • Push

    Does it fail over in case the master goes down?

    February 20, 2020 at 4:16 am
  • zifnab

    Hi Balasubramanian M P,
    I guess $PGDATA is empty. You can check it with: echo $PGDATA.
    Your screenshot shows you ran the command as user root. Try user postgres.

    May 28, 2020 at 6:58 am
