There are quite a number of methods these days for installing Patroni. Here, I want to discuss the next step: how to start tuning a running system.

Updating a running Postgres server typically involves editing the standard Postgres configuration files:

  • postgresql.conf
  • postgresql.auto.conf
  • pg_hba.conf

Working with a Patroni HA cluster, one gains the added ability to tune individual hosts as well as the entire cluster simultaneously.

For the purposes of demonstration, I’m using Spilo, a Docker image of an HA PostgreSQL cluster, which consists of the following:

  • HAProxy
  • Patroni

The first step is, of course, to download and install Spilo:
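A minimal sketch of one way to do this, assuming the demo environment is built from the Patroni repository’s docker-compose file (the repository URL, image tag, and compose layout are assumptions):

```shell
# Clone the Patroni repository, which ships a demo docker-compose setup
git clone https://github.com/patroni/patroni.git
cd patroni

# Build the demo image and bring up the cluster: etcd, HAProxy, and three Patroni nodes
docker build -t patroni .
docker compose up -d
```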

Once installed, you can stop and restart with the following commands:
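A sketch, assuming Docker Compose is managing the containers:

```shell
docker compose stop    # stop the entire cluster
docker compose start   # bring it back up
```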

Next, query the system in order to identify the entry points:
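For example, listing the running containers reveals the cluster members and the HAProxy entry point (the container names you will see are an assumption based on the demo compose file):

```shell
# Show container names alongside their published ports
docker ps --format '{{.Names}}\t{{.Ports}}'
```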

Logging into any of the Postgres containers, or even the haproxy container itself, allows me to identify the PRIMARY node. Typically, patronictl requires a configuration-file argument, but not on Spilo.

In my case, my PRIMARY is currently set on member “patroni2”. Note that the name of this Patroni cluster is demo:
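For instance (the container name demo-haproxy is an assumption based on the demo compose file):

```shell
# List the members of cluster "demo"; the Role column identifies the PRIMARY
docker exec -it demo-haproxy patronictl list
```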

FYI, one can locate the Patroni configuration file thusly:
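One way to find it, assuming the configuration path appears on the patroni command line:

```shell
# The configuration file is passed as an argument to the patroni process
ps -ef | grep [p]atroni
```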

In this case, the configuration file is postgres0.yml:

In order to update runtime parameters across the entire cluster, one can use the Patroni REST API. In this case, though, the patronictl edit-config tool is demonstrated editing changes to the cluster’s Distributed Configuration Store (DCS):
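For reference, the REST API route works by issuing an HTTP PATCH against any member’s /config endpoint (the host, port, and parameter value below are assumptions):

```shell
# Update max_connections in the DCS via the Patroni REST API
curl -s -XPATCH -d \
  '{"postgresql": {"parameters": {"max_connections": 101}}}' \
  http://localhost:8008/config
```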

These are my default parameter settings. Notice that both the Postgres runtime parameters (postgresql.conf) and the host-based authentication rules (pg_hba.conf) are editable:

As you can see, there’s not much here by way of configuration: only max_connections has been set.
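A sketch of the sort of document edit-config typically shows (every value here is an assumption, not the actual output):

```yaml
loop_wait: 10
maximum_lag_on_failover: 1048576
postgresql:
  parameters:
    max_connections: 100
  pg_hba:
  - host all all 0.0.0.0/0 md5
  use_pg_rewind: true
retry_timeout: 10
ttl: 30
```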


A runtime parameter that requires only a reload (SIGHUP), not a server restart, is applied automatically across the cluster within 10 seconds, an interval controlled by loop_wait. Recall that the name of this Patroni cluster is demo:
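For example, updating a reload-only parameter cluster-wide (the parameter and value chosen here are assumptions):

```shell
# --pg sets a value under postgresql.parameters in the DCS;
# --force skips the interactive confirmation prompt
patronictl edit-config demo --pg work_mem=16MB --force
```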

Polling all three nodes demonstrates the updates have taken place across the cluster:
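Something along these lines (the container names and credentials are assumptions):

```shell
# Query each node for the new setting
for node in demo-patroni1 demo-patroni2 demo-patroni3; do
  docker exec "$node" psql -U postgres -tAc 'show work_mem'
done
```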

On the other hand, a runtime parameter requiring a restart needs administrative intervention:
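For example, raising max_connections, which cannot take effect without a restart (the new value is an assumption):

```shell
patronictl edit-config demo --pg max_connections=200 --force
```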

After waiting 10 seconds, one can see that the column Pending restart appears, thus indicating that a restart is required:
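Listing the cluster again shows the flag:

```shell
# Members needing a restart are marked in the Pending restart column
patronictl list demo
```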

The restart command can invoke a server restart across one or more nodes. The --force argument suppresses the interactive question/answer prompt:
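For instance (the member name patroni2 is taken from the earlier listing):

```shell
patronictl restart demo --force           # restart every member
patronictl restart demo patroni2 --force  # or just a single member
```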

As you can see, editing runtime parameters is trivial. The only caveat is to watch the YAML indentation. Referring to the documentation, you’ll find there’s an entire gamut of options that can further configure the cluster’s overall behavior:

EXAMPLE: Change replication from asynchronous to synchronous.

STEP 1: Query the PRIMARY:
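On the primary, pg_stat_replication shows the current, asynchronous state (the container name and credentials are assumptions):

```shell
docker exec demo-patroni2 psql -U postgres -c \
  "select application_name, state, sync_state from pg_stat_replication"
```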

STEP 2: Perform the update. Note that the number of synchronous nodes, the parameter synchronous_node_count, defaults to just 1:
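A sketch of the change via edit-config, where -s sets a top-level key in the DCS document:

```shell
patronictl edit-config demo -s synchronous_mode=true --force
```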

STEP 3: Confirm the update has taken place; this does not require a restart:
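Listing the members confirms no restart is pending:

```shell
# No Pending restart column should appear for this change
patronictl list demo
```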

STEP 4: Query the PRIMARY; replication is now synchronous.
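Re-running the earlier query should now report a sync standby (again, the container name and credentials are assumptions):

```shell
docker exec demo-patroni2 psql -U postgres -c \
  "select application_name, sync_state from pg_stat_replication"
```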

Multiple changes can also be made at once using your favorite editor which, in this case, defaults to vim:
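Invoked without flags, edit-config opens the full DCS document in $EDITOR:

```shell
EDITOR=vim patronictl edit-config demo
```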

Conclusion

Properly configured, the Patroni DCS manages not only the standard multi-node streaming-replication cluster but also disaster recovery procedures and hybrid configurations that include logical replication nodes and Citus clusters, and it can even administer entire STANDBY data centers.
