Configuring replication between two databases is widely considered the best strategy for achieving high availability during disasters, and it provides fault tolerance against unexpected failures. PostgreSQL satisfies this requirement through streaming replication. We shall cover another option, logical replication and logical decoding, in a future blog post.
Understanding replication in PostgreSQL
Streaming replication in PostgreSQL is based on log shipping. Every transaction in PostgreSQL is written to a transaction log, called the WAL (write-ahead log), to achieve durability. A slave uses these WAL segments to continuously replicate changes from its master.
Three processes play a major role in achieving streaming replication in PostgreSQL: the wal sender, the wal receiver, and the startup process.
The wal sender process runs on the master, whereas the wal receiver and startup processes run on the slave. When replication starts, the wal receiver sends the master the LSN (Log Sequence Number) up to which WAL data has been replayed on the slave. The wal sender on the master then sends WAL data from that LSN up to the latest LSN. The wal receiver writes the WAL data it receives to WAL segments on the slave, and the startup process replays that data. Streaming replication then continues from there.
Note: Log Sequence Number, or LSN, is a pointer to a location in the WAL.
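An LSN is written as two 32-bit hex halves separated by a slash, so the absolute byte position it points to in the WAL stream is simply the high half times 2^32 plus the low half. A quick sketch (the helper name and the sample LSN are just illustrations):

```shell
# Convert an LSN of the form "hi/lo" (two hex halves) to an absolute
# byte position in the WAL stream: hi * 2^32 + lo.
lsn_to_bytes() {
  hi=${1%/*} lo=${1#*/}
  echo $(( (0x$hi << 32) + 0x$lo ))
}

lsn_to_bytes "0/50000D68"   # prints 1342180712
```

On the server side, PostgreSQL 10+ offers pg_wal_lsn_diff() to subtract two LSNs without doing this arithmetic by hand.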
Streaming replication in PostgreSQL between a master and one slave
Create a user on the master that the slave will use to connect for streaming WALs. This user must have the REPLICATION role.
CREATE USER replicator
WITH REPLICATION
ENCRYPTED PASSWORD 'replicator';
The following parameters on the master are mandatory when setting up streaming replication.
- archive_mode : Must be set to ON to enable archiving of WALs.
- wal_level : Must be set to at least hot_standby up to version 9.5, or replica in later versions.
- max_wal_senders : Must be set to 3 if you are starting with one slave. For every slave, you may add 2 wal senders.
- wal_keep_segments : Set the WAL retention in pg_xlog (until PostgreSQL 9.x) and pg_wal (from PostgreSQL 10). Every WAL requires 16MB of space unless you have explicitly modified the WAL segment size. You may start with 100 or more depending on the space and the amount of WAL that could be generated during a backup.
- archive_command : This parameter takes a shell command or external programs. It can be a simple copy command to copy the WAL segments to another location or a script that has the logic to archive the WALs to S3 or a remote backup server.
- listen_addresses : Set it to * or the range of IP Addresses that need to be whitelisted to connect to your master PostgreSQL server. Your slave IP should be whitelisted too, else, the slave cannot connect to the master to replicate/replay WALs.
- hot_standby : Must be set to ON on the standby/replica; it has no effect on the master. However, when you set up replication, the configuration set on the master is copied over to the slave along with the data directory. This parameter enables READS on the slave; without it, you cannot run SELECT queries against the slave.
The above parameters can be set on the master using these commands followed by a restart:
ALTER SYSTEM SET wal_level TO 'hot_standby';
ALTER SYSTEM SET archive_mode TO 'ON';
ALTER SYSTEM SET max_wal_senders TO '5';
ALTER SYSTEM SET wal_keep_segments TO '10';
ALTER SYSTEM SET listen_addresses TO '*';
ALTER SYSTEM SET hot_standby TO 'ON';
ALTER SYSTEM SET archive_command TO 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f';
$ pg_ctl -D $PGDATA restart -mf
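The archive_command can also point to a script instead of an inline cp. Here is a minimal sketch (the script name, function name, and paths are assumptions) that mirrors the test ! -f guard above by refusing to overwrite an already-archived segment:

```shell
#!/bin/sh
# archive_wal.sh: hypothetical archive_command helper.
# Args: <wal-path> <wal-filename> <archive-dir>
archive_wal() {
  wal_path=$1 wal_name=$2 archive_dir=$3
  if [ -f "$archive_dir/$wal_name" ]; then
    # Never overwrite an archived segment; a non-zero exit status
    # tells PostgreSQL that archiving this segment failed.
    echo "already archived: $wal_name" >&2
    return 1
  fi
  cp "$wal_path" "$archive_dir/$wal_name"
}
```

Wired into postgresql.conf, this would look like archive_command = '/path/to/archive_wal.sh %p %f /mnt/server/archivedir', where %p is substituted with the path of the WAL segment and %f with its file name.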
Add an entry to pg_hba.conf on the master to allow replication connections from the slave. The default location of pg_hba.conf is the data directory, though you may modify its location in postgresql.conf. On Ubuntu/Debian, pg_hba.conf may be located in the same directory as postgresql.conf by default; you can find the location of postgresql.conf by running the OS command pg_lsclusters.
host replication replicator 192.168.0.28/32 md5
The IP address mentioned in this line must match the IP address of your slave server. Please change the IP accordingly.
To bring the change into effect, issue a SIGHUP using either of the following:
$ pg_ctl -D $PGDATA reload
$ psql -U postgres -p 5432 -c "select pg_reload_conf()"
pg_basebackup streams the data directory from the master to the slave through the wal sender process, setting up replication in one step. Alternatively, you can take a tar format backup on the master and copy it to the slave server. You can read more about tar format pg_basebackup here.
The following command streams the data directory from the master to the slave. Run it on the slave.
$ pg_basebackup -h 192.168.0.28 -U replicator -p 5432 -D $PGDATA -P -Xs -R
Please replace the IP address with your master’s IP address.
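If you prefer the tar route mentioned above, a sketch of both ends follows (the IP address, backup path, and port are placeholders, and the file name base.tar.gz assumes the default name produced by compressed tar mode):

```shell
# On the master (or any host that can reach it): tar-format, gzip-compressed backup
$ pg_basebackup -h 192.168.0.28 -U replicator -p 5432 -D /backups/base -Ft -z -P

# On the slave: extract the backup into the data directory,
# then create recovery.conf manually as described in the next step
$ tar xzf /backups/base/base.tar.gz -C $PGDATA
```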
In the above command, you see an optional argument -R. When you pass -R, pg_basebackup automatically creates a recovery.conf file that records the role of the DB instance and the connection details of its master. A recovery.conf file on the slave is mandatory for setting up streaming replication. If, instead of the streaming backup above, you take a tar format backup on the master and copy it to the slave, you must create the recovery.conf file manually. Here are the contents of a recovery.conf file:
$ cat $PGDATA/recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.28 port=5432 user=replicator password=replicator'
restore_command = 'cp /path/to/archive/%f %p'
archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r'
In the above file, the role of the server is defined by standby_mode, which must be set to ON for slaves in PostgreSQL.
To stream WAL data, the connection details of the master server are configured using the parameter primary_conninfo.
The two parameters standby_mode and primary_conninfo are created automatically when you pass the optional -R argument to pg_basebackup. This recovery.conf file must exist in the data directory ($PGDATA) of the slave.
Start your slave once the backup and restore are completed.
If you configured the backup using the streaming method mentioned in the previous step, pg_basebackup copies all files and directories straight into the slave's data directory. In other words, it backs up the master's data directory and restores it on the slave in a single step.
If you instead took a tar backup on the master and shipped it to the slave, you must untar the backup into the slave's data directory and then create the recovery.conf file as described in the previous step. Once done, start the PostgreSQL instance on the slave using the following command.
$ pg_ctl -D $PGDATA start
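Once the slave has started, you can confirm that it is running in recovery mode before checking anything else; a quick check (assuming psql can reach the slave on its port):

```shell
$ psql -U postgres -p 5432 -c "select pg_is_in_recovery()"
```

On a healthy slave, pg_is_in_recovery() returns t; on the master it returns f.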
In a production environment, it is always advisable to have the parameter restore_command set appropriately. This parameter takes a shell command (or a script) that can be used to fetch the WAL needed by a slave if the WAL is not available on the master.
If a network issue causes a slave to lag behind the master for a substantial time, the WALs the slave needs are unlikely to still be available in the master's pg_xlog or pg_wal directory. Hence, it is sensible to archive the WALs to a safe location and to set the restore_command parameter in the slave's recovery.conf to a command that restores them. To achieve that, add a line similar to the next example to the recovery.conf file on the slave. You may substitute the cp command with any shell command or script that helps the slave fetch the appropriate WALs from the archive location.
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
Setting the above parameter requires a restart and cannot be done online.
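Like archive_command, restore_command can point to a script. A minimal sketch (the script name, function name, and archive path are assumptions) that fails cleanly when a segment is missing, which PostgreSQL interprets as "no more WAL available in the archive":

```shell
#!/bin/sh
# restore_wal.sh: hypothetical restore_command helper.
# Args: <wal-filename> <destination-path> <archive-dir>
restore_wal() {
  wal_name=$1 dest=$2 archive_dir=$3
  # A missing segment must produce a non-zero exit status, which tells
  # PostgreSQL there is nothing more to fetch from the archive.
  [ -f "$archive_dir/$wal_name" ] || return 1
  cp "$archive_dir/$wal_name" "$dest"
}
```

In recovery.conf this would be wired up as restore_command = '/path/to/restore_wal.sh %f %p /mnt/server/archivedir', where %f is substituted with the WAL file name requested and %p with the destination path.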
Final step: validate that PostgreSQL replication is set up
As discussed earlier, wal sender and wal receiver processes are started on the master and the slave once replication is set up. Check for these processes on both servers using the following commands.
$ ps -eaf | grep sender
$ ps -eaf | grep receiver
$ ps -eaf | grep startup
You should see all three processes running across the master and the slave, as in the following example output: the wal sender on the master, and the wal receiver and startup processes on the slave.
$ ps -eaf | grep sender
postgres 1287 1268 0 10:40 ? 00:00:00 postgres: wal sender process replicator 192.168.0.28(36924) streaming 0/50000D68
$ ps -eaf | egrep "receiver|startup"
postgres 1251 1249 0 10:40 ? 00:00:00 postgres: startup process recovering 000000010000000000000050
postgres 1255 1249 0 10:40 ? 00:00:04 postgres: wal receiver process streaming 0/50000D68
You can see more details by querying the master’s pg_stat_replication view.
postgres=# \x
Expanded display is on.
postgres=# select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid | 1287
usesysid | 24615
usename | replicator
application_name | walreceiver
client_addr | 192.168.0.28
client_port | 36924
backend_start | 2018-09-07 10:40:48.074496-04
state | streaming
sent_lsn | 0/50000D68
write_lsn | 0/50000D68
flush_lsn | 0/50000D68
replay_lsn | 0/50000D68
sync_priority | 0
sync_state | async
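Beyond confirming that streaming is in progress, this view lets you quantify replication lag in bytes directly on the master. A sketch for PostgreSQL 10+ (on 9.x, the equivalent functions are pg_current_xlog_location and pg_xlog_location_diff, and the column names use location instead of lsn):

```shell
$ psql -U postgres -p 5432 -c "select client_addr, state, \
pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) as replay_lag_bytes \
from pg_stat_replication"
```

A replay_lag_bytes value near zero, as in the example record above where sent_lsn and replay_lsn are identical, means the slave is fully caught up.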
If you found this post interesting…
Did you know that Percona now provides PostgreSQL support services? We’re here to help.