The latest and greatest version of PostgreSQL, version 13, was released last summer and continues the tradition of gains in performance, administration, application development, and security.
Of the many new features and enhancements that have been made, I'll demonstrate some of the more notable ones below.
TIP: More detailed information can be found in the PostgreSQL 13 Release Notes.
I learned long ago that keeping up to date with all this new material can be something of a challenge. I'd therefore like to cover a small subset of these advances by demonstrating simple use cases, which I hope you will find constructive, involving pgbench, logical replication, streaming replication failover, and re-provisioning a deprecated PRIMARY as a STANDBY using pg_rewind.
As you may recall, pgbench is a simple program for running benchmark tests on PostgreSQL: it executes the same sequence of SQL commands over and over in multiple concurrent database sessions. As the name implies, pgbench is most commonly used to benchmark hardware and configuration runtime parameters for the purposes of performance tuning.
The latest iteration of pgbench has a number of new capabilities, most notably the ability to partition the pgbench_accounts table during initialization.
The first step, of course, is to initialize the benchmarking environment; in this case, we start out using hash partitioning:
```
# hash partitioning table public.pgbench_accounts
pgbench -i --partition-method=hash --partitions=5
```
```
...
creating tables...
creating 5 partitions...
generating data (client-side)...
100000 of 100000 tuples (100%) done (elapsed 0.04 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done in 0.79 s (drop tables 0.02 s, create tables 0.17 s, client-side generate 0.13 s, vacuum 0.26 s, primary keys 0.22 s).
```
Here’s what it should look like:
```
 Schema |        Name        |       Type        |  Owner   | Persistence |  Size
--------+--------------------+-------------------+----------+-------------+---------
 public | pgbench_accounts   | partitioned table | postgres | permanent   | 0 bytes
 public | pgbench_accounts_1 | table             | postgres | permanent   | 2656 kB
 public | pgbench_accounts_2 | table             | postgres | permanent   | 2656 kB
 public | pgbench_accounts_3 | table             | postgres | permanent   | 2656 kB
 public | pgbench_accounts_4 | table             | postgres | permanent   | 2656 kB
 public | pgbench_accounts_5 | table             | postgres | permanent   | 2656 kB
 public | pgbench_branches   | table             | postgres | permanent   | 40 kB
 public | pgbench_history    | table             | postgres | permanent   | 0 bytes
 public | pgbench_tellers    | table             | postgres | permanent   | 40 kB
```
And here’s the partitioned pgbench_accounts table definition:
```
              Partitioned table "public.pgbench_accounts"
  Column  |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
----------+---------------+-----------+----------+---------+----------+--------------+-------------
 aid      | integer       |           | not null |         | plain    |              |
 bid      | integer       |           |          |         | plain    |              |
 abalance | integer       |           |          |         | plain    |              |
 filler   | character(84) |           |          |         | extended |              |
Partition key: HASH (aid)
Indexes:
    "pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Foreign-key constraints:
    "pgbench_accounts_bid_fkey" FOREIGN KEY (bid) REFERENCES pgbench_branches(bid)
Referenced by:
    TABLE "pgbench_history" CONSTRAINT "pgbench_history_aid_fkey" FOREIGN KEY (aid) REFERENCES pgbench_accounts(aid)
Partitions: pgbench_accounts_1 FOR VALUES WITH (modulus 5, remainder 0),
            pgbench_accounts_2 FOR VALUES WITH (modulus 5, remainder 1),
            pgbench_accounts_3 FOR VALUES WITH (modulus 5, remainder 2),
            pgbench_accounts_4 FOR VALUES WITH (modulus 5, remainder 3),
            pgbench_accounts_5 FOR VALUES WITH (modulus 5, remainder 4)
```
Alternatively, we can just as easily partition using range partitioning:
```
# range partitioning public.pgbench_accounts
pgbench -i --partition-method=range --partitions=5
```
```
              Partitioned table "public.pgbench_accounts"
  Column  |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
----------+---------------+-----------+----------+---------+----------+--------------+-------------
 aid      | integer       |           | not null |         | plain    |              |
 bid      | integer       |           |          |         | plain    |              |
 abalance | integer       |           |          |         | plain    |              |
 filler   | character(84) |           |          |         | extended |              |
Partition key: RANGE (aid)
Indexes:
    "pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Partitions: pgbench_accounts_1 FOR VALUES FROM (MINVALUE) TO (20001),
            pgbench_accounts_2 FOR VALUES FROM (20001) TO (40001),
            pgbench_accounts_3 FOR VALUES FROM (40001) TO (60001),
            pgbench_accounts_4 FOR VALUES FROM (60001) TO (80001),
            pgbench_accounts_5 FOR VALUES FROM (80001) TO (MAXVALUE)
```
For those curious to see the actual commands creating the partitions, just update the runtime parameter log_statement and look in your postgres log:
```
postgres=# alter system set log_statement = 'ddl';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t

postgres=# show log_statement;
 log_statement
---------------
 ddl
```
```
LOG:  statement: drop table if exists pgbench_accounts, pgbench_branches, pgbench_history, pgbench_tellers
LOG:  statement: create table pgbench_history(tid int,bid int,aid int,delta int,mtime timestamp,filler char(22))
LOG:  statement: create table pgbench_tellers(tid int not null,bid int,tbalance int,filler char(84)) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts(aid int not null,bid int,abalance int,filler char(84)) partition by range (aid)
LOG:  statement: create table pgbench_branches(bid int not null,bbalance int,filler char(88)) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts_1
        partition of pgbench_accounts
        for values from (minvalue) to (20001) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts_2
        partition of pgbench_accounts
        for values from (20001) to (40001) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts_3
        partition of pgbench_accounts
        for values from (40001) to (60001) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts_4
        partition of pgbench_accounts
        for values from (60001) to (80001) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts_5
        partition of pgbench_accounts
        for values from (80001) to (maxvalue) with (fillfactor=100)
LOG:  statement: alter table pgbench_branches add primary key (bid)
LOG:  statement: alter table pgbench_tellers add primary key (tid)
LOG:  statement: alter table pgbench_accounts add primary key (aid)
```
Let’s initialize a database with the following conditions:
Open file postgresql.conf and edit the following runtime parameters, then restart the server. The extension can then be added to any database with the SQL command CREATE EXTENSION pg_stat_statements.
```
# pg_conftool 13 main postgresql.conf edit
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
```
```
systemctl restart postgresql@13-main
```
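With the server restarted, the extension can be created in whichever database you wish to track. A minimal sketch, run via psql inside the target database:

```sql
-- register the extension in the current database
CREATE EXTENSION pg_stat_statements;

-- confirm the view is now queryable
SELECT count(*) FROM pg_stat_statements;
```

The shared_preload_libraries entry loads the module server-wide, but the view itself only becomes visible in databases where CREATE EXTENSION has been executed.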
Initialize an empty database:
```
export PGHOST=pg1-POC13 PGPORT=5432 PGDATABASE=db01 PGUSER=postgres PGPASSWORD=mypassword
createdb db01
pgbench -i -s 10 -I dtGvpf -F 90 --partition-method=hash --partitions=5
```
This query should return a nice summary of the commands thus far executed:
```
SELECT query, calls, total_exec_time::int, rows, (100.0 * shared_blks_hit /
       nullif(shared_blks_hit + shared_blks_read, 0))::int AS hit_percent
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```
```
-[ RECORD 1 ]---+------------------------------------------------------------------------------------
query           | SELECT abalance FROM pgbench_accounts WHERE aid = $1
calls           | 1560162
total_exec_time | 73958
rows            | 1560162
hit_percent     | 94
-[ RECORD 2 ]---+------------------------------------------------------------------------------------
query           | insert into pgbench_accounts(aid,bid,abalance,filler) select aid, (aid - $1) / $2 + $3, $4, $5 from generate_series($7, $8) as aid
calls           | 1
total_exec_time | 2250
rows            | 1000000
hit_percent     | 100
-[ RECORD 3 ]---+------------------------------------------------------------------------------------
query           | create database db03
calls           | 1
total_exec_time | 2092
rows            | 0
hit_percent     | 90
-[ RECORD 4 ]---+------------------------------------------------------------------------------------
query           | vacuum analyze pgbench_accounts
calls           | 1
total_exec_time | 1591
rows            | 0
hit_percent     | 92
-[ RECORD 5 ]---+------------------------------------------------------------------------------------
query           | alter table pgbench_accounts add primary key (aid)
calls           | 1
total_exec_time | 1086
rows            | 0
hit_percent     | 59
```
Execute the benchmarking for a duration of five minutes. To make the results clearer, you can first reset the stats in the view pg_stat_statements.
```
# clear the stats before starting the benchmarking
# SELECT pg_stat_statements_reset();
pgbench -c 40 -j 7 -T 300 -b tpcb-like db01 -P 60
```
New SQL statements representing the DML operations are now listed:
```
SELECT query, calls, total_exec_time::int, rows, (100.0 * shared_blks_hit /
       nullif(shared_blks_hit + shared_blks_read, 0))::int AS hit_percent
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```
```
-[ RECORD 1 ]---+-----------------------------------------------------------------------------------------------------------
query           | UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2
calls           | 42767
total_exec_time | 6203809
rows            | 42767
hit_percent     | 100
-[ RECORD 2 ]---+-----------------------------------------------------------------------------------------------------------
query           | UPDATE pgbench_tellers SET tbalance = tbalance + $1 WHERE tid = $2
calls           | 42767
total_exec_time | 3146175
rows            | 42767
hit_percent     | 100
-[ RECORD 3 ]---+-----------------------------------------------------------------------------------------------------------
query           | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2
calls           | 42767
total_exec_time | 28281
rows            | 42767
hit_percent     | 95
-[ RECORD 4 ]---+-----------------------------------------------------------------------------------------------------------
query           | INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)
calls           | 42767
total_exec_time | 22797
rows            | 42767
hit_percent     | 100
-[ RECORD 5 ]---+-----------------------------------------------------------------------------------------------------------
query           | SELECT $2 FROM ONLY "public"."pgbench_branches" x WHERE "bid" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x
calls           | 42767
total_exec_time | 2347
rows            | 42767
hit_percent     | 100
```
Notice how the child tables have grown in size:
```
                                  List of relations
 Schema |        Name        |       Type        |  Owner   | Persistence |  Size   | Description
--------+--------------------+-------------------+----------+-------------+---------+-------------
 public | pgbench_accounts   | partitioned table | postgres | permanent   | 0 bytes |
 public | pgbench_accounts_1 | table             | postgres | permanent   | 28 MB   |
 public | pgbench_accounts_2 | table             | postgres | permanent   | 29 MB   |
 public | pgbench_accounts_3 | table             | postgres | permanent   | 28 MB   |
 public | pgbench_accounts_4 | table             | postgres | permanent   | 28 MB   |
 public | pgbench_accounts_5 | table             | postgres | permanent   | 28 MB   |
 public | pgbench_branches   | table             | postgres | permanent   | 168 kB  |
 public | pgbench_history    | table             | postgres | permanent   | 2384 kB |
 public | pgbench_tellers    | table             | postgres | permanent   | 272 kB  |
```
We now explore another very useful addition to PostgreSQL version 13: the transparent replication of partitioned tables via logical decoding. It is no longer necessary to expend valuable time manually adding triggers and supplementary logic to replicate the child tables.
Using the aforementioned pgbench example, execute the following. Note that one must first alter table pgbench_history by adding a primary key; otherwise, replication of this table is not possible:
```
# UPDATE TABLE, ADD PK
alter table public.pgbench_history add primary key (tid,bid,aid,mtime);

# CREATE DATABASE ON SUBSCRIBER NODE
createdb -h pg4-POC13 -U postgres db01
```
```
# COPY DATABASE SCHEMA
pg_dump -h pg1-POC13 -U postgres -s db01 | psql 'host=pg4-POC13 user=postgres dbname=db01'

#
# PROVIDER pg1-POC13: DB01
#
psql 'host=pg1-POC13 user=postgres password=mypassword dbname=db01' <<_eof_
    \set ON_ERROR_STOP on
    create publication publication1 for all tables;
_eof_

#
# SUBSCRIBER pg4-POC13: DB01
#
psql 'host=pg4-POC13 user=postgres password=mypassword dbname=db01' <<_eof_
    \set ON_ERROR_STOP on
    create subscription subscript_set1
        connection 'host=pg1-POC13 dbname=db01 user=postgres password=mypassword'
        publication publication1
        with (copy_data=true, create_slot=true, enabled=true, slot_name=myslot1);
_eof_
```
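Once the subscription is created, its health can be verified on both nodes. A quick sanity check using the standard catalog and statistics views:

```sql
-- on the provider (pg1-POC13): confirm the publication and the replication slot
SELECT * FROM pg_publication;
SELECT slot_name, plugin, active FROM pg_replication_slots;

-- on the subscriber (pg4-POC13): confirm the subscription is streaming
SELECT subname, received_lsn, latest_end_lsn FROM pg_stat_subscription;
```

An active slot on the provider and an advancing received_lsn on the subscriber indicate that the initial copy has completed and changes are flowing.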
And here we see the child accounts tables have been correctly replicated:
```
                                  List of relations
 Schema |        Name        |       Type        |  Owner   | Persistence |    Size    | Description
--------+--------------------+-------------------+----------+-------------+------------+-------------
 public | pgbench_accounts   | partitioned table | postgres | permanent   | 0 bytes    |
 public | pgbench_accounts_1 | table             | postgres | permanent   | 28 MB      |
 public | pgbench_accounts_2 | table             | postgres | permanent   | 29 MB      |
 public | pgbench_accounts_3 | table             | postgres | permanent   | 28 MB      |
 public | pgbench_accounts_4 | table             | postgres | permanent   | 28 MB      |
 public | pgbench_accounts_5 | table             | postgres | permanent   | 28 MB      |
 public | pgbench_branches   | table             | postgres | permanent   | 8192 bytes |
 public | pgbench_history    | table             | postgres | permanent   | 2352 kB    |
 public | pgbench_tellers    | table             | postgres | permanent   | 8192 bytes |
```
Because of the large number of new features added to streaming replication, I'm limiting the focus to two enhancements: a STANDBY can now follow a newly promoted PRIMARY after a simple reload of primary_conninfo, and pg_rewind can write the recovery configuration itself via its new --write-recovery-conf switch.
Recall the three-node streaming replication cluster we’ve been using:
```
pg1-POC13: PRIMARY (read-write)
pg2-POC13: REPLICA (read-only, streaming)
pg3-POC13: REPLICA (read-only, streaming)
```
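A quick way to confirm this topology before testing failover is to query pg_stat_replication on the PRIMARY; a minimal sketch:

```sql
-- on pg1-POC13: both replicas should appear with state 'streaming'
SELECT client_addr, application_name, state, sync_state
FROM pg_stat_replication;
```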
Suppose a failover promotes pg2. This new feature adds robustness to the entire database cluster: REPLICA pg3 continues service without interruption as it is redirected from pg1 to pg2. Previous versions of postgres required host pg3 to be restarted in order to put new primary_conninfo parameters into effect.
pg3-POC13: run a benchmark of SELECTs on host pg3
```
pgbench -h pg3-POC13 -U postgres -c 40 -j 7 -T 300 -b select-only db01 -P 5
```
pg2-POC13: promote the host while the benchmarking on pg3 is active
```
-- create a new slot for pg3
select * from pg_create_physical_replication_slot('pg3');
-- confirm slot is inactive
select * from pg_get_replication_slots();
-- promote host
select pg_promote();
-- confirm read-write state
select pg_is_in_recovery();
```
TIP: it is assumed that the runtime parameter wal_keep_size is set to a non-zero value large enough to retain the WALs pg3 needs while it is redirected away from pg1 towards pg2.
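For example, one might reserve a generous amount of WAL on pg2 ahead of time; the 1GB figure here is an arbitrary illustration, not a recommendation:

```sql
-- wal_keep_size is new in version 13, replacing wal_keep_segments
alter system set wal_keep_size = '1GB';
select pg_reload_conf();
-- confirm the new setting
show wal_keep_size;
```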
pg3-POC13: point to pg2-POC13
```
-- make the updates
alter system set primary_conninfo = 'user=postgres host=10.231.38.112';
alter system set primary_slot_name = 'pg3';
select pg_reload_conf();
-- confirm replication is active
select * from pg_stat_wal_receiver;
```
pg2-POC13: validate replication from pg2 to pg3
```
-- confirm slot is active
select * from pg_get_replication_slots();
-- confirm replication is active
select * from pg_stat_replication;
```
When re-provisioning a failed PRIMARY, i.e. pg1, as a new STANDBY, it used to be necessary to edit the requisite recovery runtime configuration parameters after executing pg_rewind but before starting the host.
PostgreSQL version 13 now simplifies the exercise by providing the switch --write-recovery-conf, which updates postgresql.auto.conf with the correct runtime parameters.
TIP: pg_rewind requires the runtime parameter wal_log_hints=on before re-provisioning takes place on the failed PRIMARY.
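Note that wal_log_hints cannot be changed with a reload; if it was not already enabled before the divergence, the parameter must be set and the server restarted ahead of any failover scenario:

```sql
-- requires a server restart, not a reload
alter system set wal_log_hints = on;
-- verify after restarting
show wal_log_hints;
```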
pg2-POC13: add a slot on PRIMARY pg2 for the new STANDBY pg1
```
-- as postgres, create a new slot
select * from pg_create_physical_replication_slot('pg1');
select * from pg_get_replication_slots();
```
pg1-POC13: re-provision pg1
```
# as postgres, perform the rewind
/usr/lib/postgresql/13/bin/pg_rewind \
    --target-pgdata=/var/lib/postgresql/13/main \
    --source-server="host=pg2-POC13 port=5432 user=postgres dbname=postgres" \
    --write-recovery-conf \
    --progress
```
You should get messaging similar to the following:
```
pg_rewind: connected to server
pg_rewind: servers diverged at WAL location 0/6CDCEA88 on timeline 1
pg_rewind: rewinding from last common checkpoint at 0/6CDCE9D8 on timeline 1
pg_rewind: reading source file list
pg_rewind: reading target file list
pg_rewind: reading WAL in target
pg_rewind: need to copy 206 MB (total source directory size is 422 MB)
211594/211594 kB (100%) copied
pg_rewind: creating backup label and updating control file
pg_rewind: syncing target data directory
pg_rewind: Done!
```
Now you can restart host pg1-POC13 and bring it back into service:
```
# as root, restart the server
systemctl start postgresql@13-main
```
Log in to pg1 and confirm replication:
```
-- confirm replication is active
select * from pg_stat_wal_receiver;
```
In the case of failure, check the postgres logs on both hosts.
Although not germane to features and capabilities, I was a little curious to see the differences between PostgreSQL 13 and earlier versions. Here's a table of metrics for creating a data cluster, creating a database, and initializing pgbench. It's interesting to see how performance has improved over the previous versions:
```
 ver | initdb time | datacluster size(bytes) | binaries size(bytes) | createdb time | pgbench -i time
-----+-------------+-------------------------+----------------------+---------------+-----------------
 9.6 | 0m0.889s    | 38,877,134              | 38,844,934           | 0m0.311s      | 0m0.236s
 10  | 0m0.729s    | 39,598,542              | 42,054,339           | 0m0.725s      | 0m0.240s
 11  | 0m0.759s    | 40,844,747              | 41,336,566           | 0m0.683s      | 0m0.212s
 12  | 0m0.592s    | 41,560,196              | 43,853,282           | 0m0.179s      | 0m0.213s
 13  | 0m0.502s    | 41,266,877              | 65,652,665           | 0m0.188s      | 0m0.168s
```
Frankly, I find it amazing how compact both the binaries and the data cluster have remained over the years. I remember, way back in 2000, working with a proprietary RDBMS that created an empty instance 1GB in size upon initialization. I wonder how big it gets these days!
Happy Hacking!