Recently we published a blog post about simple application failover using libpq features, which may be the simplest form of automatic application connection routing. In this post, we discuss how a proxy server built on HAProxy can be used for connection routing, a well-known technique with very wide deployment. There are multiple ways HAProxy can be configured with PostgreSQL, which we shall cover in upcoming posts; configuring a xinetd service to respond to HTTP requests on individual nodes of a database cluster is one of the most traditional approaches.
HAProxy is probably the most popular connection routing and load balancing software available, and it is used with PostgreSQL across different types of High Availability clusters. As the name indicates, HAProxy works as a proxy for TCP (layer 4) and HTTP (layer 7), with load balancing features on top. The TCP proxying feature allows us to use it for PostgreSQL database connections. Connection routing for a PostgreSQL cluster has three objectives:

1. Route read-write connections to the Primary.
2. Route read-only connections to Hot Standby nodes.
3. Stop routing connections to any node whose status cannot be verified.
HAProxy maintains an internal routing table. In this post, we look at the most traditional approach to configuring HAProxy with PostgreSQL. This approach is independent of the underlying clustering software and can be used even with PostgreSQL's built-in replication, without any clustering or automation solution.
In this generic configuration, we won't use any special software or capabilities offered by clustering frameworks. This requires us to have three components:

1. A health-check script on each database node that reports whether the local PostgreSQL instance is a Primary, a Hot Standby, or unreachable.
2. A xinetd service on each node that exposes the script's result over a TCP port.
3. HAProxy, which polls that port and routes client connections accordingly.
HAProxy has a built-in check for PostgreSQL, option pgsql-check (see the HAProxy documentation). It is good enough for basic Primary failover, but its inability to detect and differentiate Primary and Hot Standby nodes makes it less useful.
HAProxy with xinetd, on the other hand, lets us see which node is the Primary and which is a Hot Standby, so connections can be redirected appropriately. We will be writing about the built-in pgsql-check in upcoming blog posts and explaining how to make use of it effectively.
Xinetd (Extended Internet Service Daemon) is a super-server daemon. It can listen for requests on custom ports and respond to them by executing custom logic. In this case, we use a custom script to check the status of the database. The script writes an HTTP header whose status code represents the status of the database instance: 200 if the PostgreSQL instance is a Primary, 206 if it is a Hot Standby, and 503 if the status cannot be verified.
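The mapping between recovery state and HTTP status code can be sketched as a small shell function. This is an illustrative sketch only; the function name `status_line` is hypothetical, and its input is what `pg_is_in_recovery()` would return ("t", "f", or empty on connection failure):

```shell
#!/bin/bash
# Hypothetical sketch: turn the output of pg_is_in_recovery()
# into the HTTP status line the health-check script emits.
status_line() {
    case "$1" in
        f) printf 'HTTP/1.1 200 OK' ;;                  # not in recovery: Primary
        t) printf 'HTTP/1.1 206 Partial Content' ;;     # in recovery: Hot Standby
        *) printf 'HTTP/1.1 503 Service Unavailable' ;; # no answer: status unknown
    esac
}

status_line f   # -> HTTP/1.1 200 OK
```

HAProxy only inspects the numeric status code, not the reason phrase, so the exact wording after the code does not matter.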
Every database server needs a xinetd service running on a port for status checks of the PostgreSQL instance running on it. Generally, port 23267 is used for this purpose, but we can use any port of our choice. This service uses a custom shell script to report the three different statuses of the PostgreSQL instance.
Since the status check is available through a port exposed by xinetd, HAProxy can send a request to that port and understand the status from the response.
First, we need a script that can check the status of a PostgreSQL instance. It is quite simple: the shell script invokes the psql utility and executes the pg_is_in_recovery() function. Based on the result, it can tell whether the node is a Primary or a Hot Standby, or whether the connection failed.
A sample script is here:
```shell
#!/bin/bash
# This script checks whether the PostgreSQL server running on localhost is healthy. It returns:
# "HTTP/1.1 200 OK"                  if PostgreSQL is running as Primary
# "HTTP/1.1 206 OK"                  if PostgreSQL is running as Hot Standby
# "HTTP/1.1 503 Service Unavailable" if the status cannot be verified
# The purpose of this script is to make HAProxy capable of monitoring PostgreSQL properly.
# It is recommended to create a low-privileged postgres user for this script, e.g.:
#   CREATE USER healthchkusr LOGIN PASSWORD 'hc321';

PGBIN=/usr/pgsql-10/bin
PGSQL_HOST="localhost"
PGSQL_PORT="5432"
PGSQL_DATABASE="postgres"
PGSQL_USERNAME="postgres"
export PGPASSWORD="passwd"

# Ask PostgreSQL whether it is in recovery; trim the whitespace psql adds
VALUE=`$PGBIN/psql -t -h $PGSQL_HOST -p $PGSQL_PORT -U $PGSQL_USERNAME -d $PGSQL_DATABASE -c "select pg_is_in_recovery()" 2>/dev/null | xargs`

if [ "$VALUE" == "t" ]
then
    /bin/echo -e "HTTP/1.1 206 OK\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo "Standby"
    /bin/echo -e "\r\n"
elif [ "$VALUE" == "f" ]
then
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo "Primary"
    /bin/echo -e "\r\n"
else
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo "DB Down"
    /bin/echo -e "\r\n"
fi
```
Instead of password-based authentication, any password-less authentication method can be used.
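For example, libpq can read the credentials from a ~/.pgpass file instead of an exported PGPASSWORD, which keeps the password out of the script. The host, user, and password below are placeholders for your own values:

```shell
# Sketch: store the health-check credentials in ~/.pgpass
# (format: hostname:port:database:username:password).
# The values below are placeholders.
PGPASSFILE="$HOME/.pgpass"
echo "localhost:5432:postgres:healthchkusr:hc321" >> "$PGPASSFILE"
chmod 600 "$PGPASSFILE"   # libpq ignores the file unless permissions are 0600
```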
It is a good practice to keep the script in the /opt folder; save it as /opt/pgsqlchk and make sure it has execute permission:
```shell
$ sudo chmod 755 /opt/pgsqlchk
```
Now we can install xinetd on the server. Optionally, we can install a telnet client so that we can test the functionality.
```shell
$ sudo yum install -y xinetd telnet
```
Now let us create a xinetd definition/configuration.
```shell
$ sudo vi /etc/xinetd.d/pgsqlchk
```
Add a configuration specification to the same file as below:
```
service pgsqlchk
{
        flags           = REUSE
        socket_type     = stream
        port            = 23267
        wait            = no
        user            = nobody
        server          = /opt/pgsqlchk
        log_on_failure  += USERID
        disable         = no
        only_from       = 0.0.0.0/0
        per_source      = UNLIMITED
}
```
Add the pgsqlchk service to /etc/services.
```shell
$ sudo bash -c 'echo "pgsqlchk 23267/tcp # pgsqlchk" >> /etc/services'
```
Now the xinetd service can be started:
```shell
$ sudo systemctl start xinetd
```
We need to have HAProxy installed on the server:
```shell
$ sudo yum install -y haproxy
```
Create or modify the HAProxy configuration. Open /etc/haproxy/haproxy.cfg using a text editor.
```shell
$ sudo vi /etc/haproxy/haproxy.cfg
```
A sample HAProxy configuration file is given below:
```
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /

listen ReadWrite
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server pg0 pg0:5432 maxconn 100 check port 23267
    server pg1 pg1:5432 maxconn 100 check port 23267

listen ReadOnly
    bind *:5001
    option httpchk
    http-check expect status 206
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server pg0 pg0:5432 maxconn 100 check port 23267
    server pg1 pg1:5432 maxconn 100 check port 23267
```
As per the above configuration, the key points to note are:

- HAProxy runs in TCP mode, so it proxies PostgreSQL connections transparently.
- The ReadWrite listener on port 5000 routes only to the server whose health check returns status 200, i.e. the Primary.
- The ReadOnly listener on port 5001 routes only to servers returning status 206, i.e. Hot Standby nodes.
- Health checks are sent every 3 seconds (inter 3s) to port 23267, where xinetd runs the status script.
- on-marked-down shutdown-sessions terminates existing connections to a node as soon as it is marked down, forcing clients to reconnect to a healthy node.
Now everything is set for starting the HAProxy service.
```shell
$ sudo systemctl start haproxy
```
As per the HAProxy configuration, we should be able to access port 5000 for a read-write connection:
```
$ psql -h localhost -p 5000 -U postgres
Password for user postgres:
psql (9.6.5)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f
(1 row)
```
For read-only connection, we should be able to access the port 5001:
```
$ psql -h localhost -p 5001 -U postgres
Password for user postgres:
psql (9.6.5)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)
```
This is a very generic way of configuring HAProxy with a PostgreSQL cluster, and it is not tied to any particular cluster topology. The health check is done by a custom shell script, and its result is made available through the xinetd service. HAProxy uses this information to maintain its routing table and redirect connections to the appropriate node in the cluster.