Configuring ProxySQL Binlog Reader

In a previous post, MySQL High Availability: Stale Reads and How to Fix Them, I talked about the challenges of scaling out reads, where some types of applications cannot tolerate reading stale data. One way of fixing this is by using ProxySQL Binlog Reader.

Long story short, the binlog reader is a lightweight binary that keeps reading binlogs and informing ProxySQL, in real time, about what has been applied on a given server. Since MySQL 5.7, as part of the OK_PACKET, the server also sends clients information about the GTID generated by their statements. Knowing which GTIDs each server has applied and which GTID the client connection last received in the OK_PACKET, ProxySQL can route subsequent reads to a server that has already applied that GTID.

At the time of this writing, you will need to compile binlog reader yourself.

Compiling Binlog Reader:

To compile it on CentOS 7, you will need a few packages and repos pre-installed:
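The exact package list depends on your base image; a typical set of build dependencies might look like the following (package names are assumptions, adjust as needed):

```shell
# Illustrative only -- verify against the project's build requirements.
sudo yum -y install epel-release
sudo yum -y install git make cmake gcc-c++ boost-devel openssl-devel
```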

Add the hash header file into the include directory:
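The exact header and paths depend on your GCC version; on a stock CentOS 7 (GCC 4.8.2), the step might look like this (paths here are an assumption, adjust to your compiler):

```shell
# Hypothetical example -- substitute your installed GCC version.
sudo cp /usr/include/c++/4.8.2/backward/hash_fun.h /usr/include/c++/4.8.2/ext/
```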

Compile libslave and ProxySQL Binlog Reader:
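Assuming the upstream proxysql_mysqlbinlog repository, the build is roughly (repository URL and steps are a sketch, check the project README):

```shell
git clone https://github.com/sysown/proxysql_mysqlbinlog.git
cd proxysql_mysqlbinlog
make
```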


Running Binlog Reader:

In order to run the binlog reader, the host running your mysql server must have boost installed:
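On CentOS 7 this would be something like:

```shell
sudo yum -y install boost
```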

GTID must be enabled and session_track_gtids configured to OWN_GTID:
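A minimal my.cnf fragment covering these requirements (GTID-based replication plus GTID session tracking) would be:

```ini
[mysqld]
gtid_mode                = ON
enforce_gtid_consistency = ON
log_bin                  = binlog
log_slave_updates        = ON
session_track_gtids      = OWN_GTID
```

Note that session_track_gtids is dynamic, so on a running server you can also apply it with `SET GLOBAL session_track_gtids = OWN_GTID;`.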

To start the binary, you need to specify a port for it to bind to. This port will be later configured on ProxySQL:
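An invocation might look like the following; the flags shown (connection details, listening port, log file) are based on the project's README, so verify them against `--help` for your build:

```shell
# -h/-u/-p/-P: MySQL server to read binlogs from
# -l: port the binlog reader listens on (configured later in ProxySQL)
./proxysql_binlog_reader -h 127.0.0.1 -u binlog_user -ppassword -P 3306 \
    -l 3307 -L /var/log/proxysql_binlog_reader.log
```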

At this point, if you inspect the error log, you should see something like:

Please note, if you find the below error, it means you haven’t executed any event that generated a GTID on the server (#Issue7):


Configuring ProxySQL:

Now it’s time to inform proxysql that:

  1. The servers have proxysql binlog reader running on port X. Assuming you already have your mysql_servers table populated, all you have to do is update each entry to set the GTID port:

    Please note, if none of the slaves has received the GTID you are requesting, you want the master to serve this request. For this reason, it is important that your master is part of your read HG (it can have the lowest possible weight). You can achieve that by setting mysql-monitor_writer_is_also_reader to true (the default value).
  2. A particular query rule should enforce GTID from the writer HG. In most cases, this will be the rule that matches SELECT queries:
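Both steps are done on the ProxySQL admin interface. A sketch, assuming the binlog reader listens on port 3307, the writer hostgroup is 0, and an existing rule matches SELECTs (hostgroup numbers and the match pattern are assumptions, adapt to your setup):

```sql
-- Step 1: tell ProxySQL which port the binlog reader listens on.
UPDATE mysql_servers SET gtid_port = 3307;

-- Step 2: make SELECTs track GTIDs generated on the writer hostgroup (0).
UPDATE mysql_query_rules SET gtid_from_hostgroup = 0
WHERE match_digest = '^SELECT';

LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
SAVE MYSQL QUERY RULES TO DISK;
```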

Network Traffic:

One of the advantages of this feature is its minimal impact on your network: the proxysql binlog reader reads only the portion of each binlog event needed to extract the GTID, and only that is sent over the network. This is unlike a normal slave, where all of the binlog events are sent. A GTID event is classified into three categories, and its size varies depending on the state (the names below are not official):

  1. Full

    A completely new GTID (for a new server_uuid). In the above example, 48 bytes are sent, corresponding to the full GTID set of da1bb000-5c9c-11e9-9ee7-00163efabac0:1-55299. This is normally sent when proxysql does not have any track of GTIDs for this server.
  2. Delta

    Once ProxySQL has already received a full GTID event, only a delta event is sent, composed of the full server_uuid plus the incremental id. In the above example, 42 bytes are sent (instead of the initial 48) and the GTID event is da1bb000-5c9c-11e9-9ee7-00163efabac0:1-55300.
  3. Incremental

    All events after the Delta contain only the incremental part of the GTID (the server_uuid is omitted). In the above example, only 9 bytes are sent, carrying the GTID 55301 (the complete GTID is da1bb000-5c9c-11e9-9ee7-00163efabac0:1-55301).


To simulate stale reads, we will use a simple PHP script:

The script will:

  1. Populate a table with some random data.
  2. For each row, it will update a datetime column with the current time.
  3. Immediately after the update, it will try to query one of the slaves using the current time as a parameter. If the slave has not yet processed the update from step 2, it will return 0 rows; the script will report a dirty read, retry the same select after 0.5 seconds, and report the number of returned rows.
  4. If step 3 succeeded, it will report as ok.
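The original script is in PHP; the read-after-write check in steps 3 and 4 can be sketched in Python, with a `run_select` callable standing in for the query against a slave (the function and parameter names here are illustrative, not from the original script):

```python
import time

def check_read_after_write(run_select, ts, retry_delay=0.5):
    """Read-after-write check against a slave.

    run_select(ts) returns the number of rows the slave finds for the
    timestamp just written; 0 means the update has not replicated yet.
    """
    rows = run_select(ts)
    if rows > 0:
        return ("ok", rows)
    # Dirty read: retry once after a short delay and report the row count.
    time.sleep(retry_delay)
    return ("dirty", run_select(ts))

# Stubbed slave: the first read misses (replication lag), the retry succeeds.
responses = iter([0, 1])
result = check_read_after_write(lambda ts: next(responses),
                                "2020-01-01 00:00:00", retry_delay=0)
print(result)  # ('dirty', 1)
```

With GTID-consistent reads enabled, the first `run_select` should already see the row, so the "dirty" branch is never taken.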

Example run without GTID consistent reads:

Example run with GTID consistent reads:


To monitor whether a backend server is sending GTID events, you can query the stats_mysql_gtid_executed table:
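For example, on the ProxySQL admin interface:

```sql
SELECT * FROM stats_mysql_gtid_executed;
```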

To verify whether queries are being served using this feature, you can query the stats_mysql_connection_pool table and look at the Queries_GTID_sync column:
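A sketch of the query (the exact column set of stats_mysql_connection_pool may vary by ProxySQL version):

```sql
SELECT hostgroup, srv_host, srv_port, Queries, Queries_GTID_sync
FROM stats_mysql_connection_pool;
```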


There are a few caveats to consider if you want to adopt this feature:

  1. In order for this feature to work, the read query must be executed on the same connection as the write query. Alternatively, if you are using proxysql version 2.0.9 onwards, you can send a query comment specifying the minimum GTID a server must have applied in order to serve the read.
  2. Some write queries don’t generate a GTID. For example, an UPDATE that returns 0 affected rows (either because its condition matches no row in the table or because the updated columns already equal the target values) generates no GTID; in this case, proxysql will not use the feature and will route the following reads to any slave, without updating Queries_GTID_sync.
  3. This feature doesn’t work with Galera replication.
  4. If your master is not part of your read hostgroup (it is by default, via the proxysql variable mysql-monitor_writer_is_also_reader), ProxySQL will behave as if it were executing SELECT WAIT_FOR_EXECUTED_GTID_SET: the read query will stall until a slave has applied the requested GTID or mysql-connect_timeout_server_max has elapsed.

Moving from a centralized architecture to a distributed one can bring some challenges, and replication delay causing slaves to serve stale data is one of them. The ProxySQL binlog reader can help mitigate this issue.

Please note: this feature and binary (proxysql_binlog_reader) are relatively new and not yet considered GA. We highly advise extensively testing it before implementing it in production.
