Asynchronous Query Execution with MySQL 5.7 X Plugin

In this blog, we will discuss MySQL 5.7 asynchronous query execution using the X Plugin.


MySQL 5.7 supports the X Plugin / X Protocol, which allows (if the client library supports it) asynchronous query execution. In 2014, I published a blog post on how to improve slow query performance with parallel query execution. There, I created a prototype in the bash shell. Here, I’ve tried a similar idea with NodeJS and the mysqlx library (which uses the MySQL X Plugin).

TL;DR version: by using the MySQL X Plugin with NodeJS, I was able to increase query performance 10x (some query rewriting required).

X Protocol and NodeJS

Here are the steps required:

  1. First, we will need to enable X Plugin in MySQL 5.7.12+, which will use a different port (33060 by default).
  2. Second, download and install NodeJS (>4.2) and mysql-connector-nodejs-1.0.2.tar.gz (follow the Getting Started with Connector/Node.JS guide).

    Please note: on older systems, you will probably need to upgrade the Node.js version. Follow the Installing Node.js via package manager guide.
  3. All set! Now we can use the asynchronous queries feature.
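For step 1, enabling the plugin is a one-liner (the shared-library name below assumes a Linux build; the file name differs on other platforms):

```sql
-- Enable the X Plugin; it listens on its own port (33060 by default)
INSTALL PLUGIN mysqlx SONAME 'mysqlx.so';
SHOW GLOBAL VARIABLES LIKE 'mysqlx_port';
```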

Test data 

I’m using the same Wikipedia Page Counts dataset (wikistats) I used for my Apache Spark and MySQL example. Let’s imagine we want to compare the popularity of MySQL versus PostgreSQL in January 2008 (comparing the total page views). Here are the sample queries:
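For example (table and column names as used later in this post):

```sql
-- Total January 2008 page views for pages mentioning each product
SELECT SUM(tot_visits) FROM wikistats.wikistats_by_day_spark WHERE url LIKE '%mysql%';
SELECT SUM(tot_visits) FROM wikistats.wikistats_by_day_spark WHERE url LIKE '%postgresql%';
```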

The table only holds data for English Wikipedia for January 2008, but still has ~200M rows and is ~16G in size. Both queries run for ~5 minutes each, and utilize only one CPU core (one connection = one CPU core). The box has 24 CPU cores (Intel(R) Xeon(R) CPU L5639 @ 2.13GHz). Can we run the query in parallel, utilizing all cores?

That is now possible with NodeJS and the X Plugin, but it requires some preparation:

  1. Partition the table by hash into 24 partitions:
  2. Rewrite the query to run one connection (= one thread) per partition, with each thread reading its own partition:
  3. Wrap it all up inside NodeJS callback functions / promises.
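Steps 1 and 2 might look like the following sketch (the partitioned table name follows the one shown in the comments below; the hash key expression is an assumption — any reasonably uniform integer expression over the real schema works):

```sql
-- Step 1: hash-partition the table into 24 partitions (one per CPU core).
-- TO_DAYS(day) is an assumed partition key; substitute a column from the real schema.
ALTER TABLE wikistats.wikistats_by_day_spark_part
  PARTITION BY HASH (TO_DAYS(day)) PARTITIONS 24;

-- Step 2: the rewritten per-thread query reads one explicit partition:
SELECT SUM(tot_visits)
FROM wikistats.wikistats_by_day_spark_part PARTITION (p0)
WHERE url LIKE '%postgresql%';
```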

The code
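The original script is built on the mysqlx connector; below is a self-contained sketch of the orchestration. The helper name and the injectable runQuery parameter are mine, so the logic can be read (and run) without a MySQL server:

```javascript
// Sketch of the parallel, one-connection-per-partition approach.
// The real script opens one mysqlx connection per partition; here the query
// runner is injected, so this file has no external dependencies. Table,
// column and partition names are taken from this post.

function queryAllPartitions(partitions, runQuery) {
  // Fire one query per partition; all of them run concurrently.
  var pending = partitions.map(function (p) {
    var sql = "SELECT SUM(tot_visits) " +
              "FROM wikistats.wikistats_by_day_spark_part PARTITION (" + p + ") " +
              "WHERE url LIKE '%postgresql%'";
    return runQuery(sql); // resolves with this partition's partial sum
  });
  // When every query has called back, add up the partial sums.
  return Promise.all(pending).then(function (partials) {
    return partials.reduce(function (total, part) { return total + part; }, 0);
  });
}
```

In the real script, runQuery would open a session with mysqlx.getNodeSession(), execute the statement, save the row inside the callback, and close the session — one session per partition, as described below.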

The explanation

The idea here is rather simple:

  1. Find all the partitions for the table by using “select partition_name from information_schema.partitions”
  2. For each partition, run the query in parallel: create a connection, run the query with a specific partition name, define the callback function, then close the connection.
  3. Because a callback function is used, the code does not block; it proceeds to the next iteration. When a query finishes, its callback function is executed.
  4. Inside the callback function, I save the result into an array and also calculate the running total (actually, I only need the total in this example).
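Step 1 above maps to a query like this (schema and table names as used elsewhere in the post):

```sql
SELECT partition_name
FROM information_schema.partitions
WHERE table_schema = 'wikistats'
  AND table_name = 'wikistats_by_day_spark_part';
```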

Asynchronous Salad: tomacucumtoes,bersmayonn,aise *

This may blow your mind: because everything is running asynchronously, the callback functions will return when ready. Here is the result of the above script:

… here the script will wait for the async calls to return, and they will return when ready – the order is not defined.
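The out-of-order behavior is easy to reproduce without MySQL at all — a toy sketch (all names here are illustrative):

```javascript
// Toy demonstration of why the results arrive in an undefined order: each
// fake "query" completes after a random delay, and its callback fires on
// completion, while the loop has long since moved on.
var finished = [];
var partitions = ['p0', 'p1', 'p2', 'p3'];

partitions.forEach(function (p) {
  var delay = Math.random() * 50;      // stands in for a query's runtime
  setTimeout(function () {             // stands in for the query callback
    finished.push(p);                  // runs whenever this "query" completes
    if (finished.length === partitions.length) {
      console.log('All done! Order: ' + finished.join(', '));
    }
  }, delay);
});
// The loop itself returns immediately; `finished` fills up in arbitrary order.
```

Run it a few times: the completion order changes, but the “All done!” line always fires exactly once, after the last callback.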

Meanwhile, we can watch MySQL processlist:

And CPU utilization:

Now, here is our “salad”:

As we can see, the partitions finish in random order. If needed, we could even sort the result array (not necessary in this example, as we only care about the total). Finally, our result and timing:

Timing and Results

  • Original query, single thread: 5 minutes
  • Modified query, 24 threads in NodeJS: 30 seconds
  • Performance increase: 10x

If you are interested in the original question (MySQL versus PostgreSQL, Jan 2008):

  • MySQL, total visits: 88935
  • PostgreSQL, total visits: 17753

Further Reading:

PS: Original Asynchronous Salad Joke, by Vlad @Crazy_Owl (in Russian)


Comments (6)

  • Scott Lanning

    Just as a sort of side note: I think this is “pipelining” rather than, strictly speaking, “parallelized” (though obviously the queries themselves run in parallel across the partitions). You get results from NodeJS in random order, but AFAICT from playing with the X Protocol in Perl, the queries sent back over the socket are returned in the same order as you sent them. So if, for example, you sent the 1st query and it takes the server 5 minutes, while the remaining 23 queries take 30 seconds each, you’d still only get the results after 5 minutes. At least that’s my hypothesis. 🙂

    May 28, 2016 at 11:51 am
    • Johannes Schlüter

      The example is parallelizing; mind that he’s using different connections, leading to different server threads. But you are right – with pipelining the order is kept. The server will respond in the queried order.

      May 28, 2016 at 12:12 pm
  • Scott Lanning

    (Sorry if this is a dupe. Failed to post before.)
    AFAICT what happens is, although you get results from NodeJS randomly, the server will actually return the resultsets in the same order as the queries were sent. So (again, AFAICT), beware that if the 1st query was to take 5 minutes while the others took 30 seconds, you’d still have to wait 5 minutes before any results are sent back.

    May 28, 2016 at 12:02 pm
    • Alexander Rubin

      Scott, yes, I’m opening 24 connections to MySQL, similar how map/reduce works.

      May 30, 2016 at 1:05 pm
  • datacharmer

    Thanks for the example. It shows clearly how to manipulate queries and results in JS.
    However, I am puzzled by the method that you chose. The parallelization is possible because partitions in MySQL 5.7 don’t lock the whole table as they did before.
    The same result can be achieved by using a shell script that runs N queries (with the regular MySQL client) in the background and reports the results to a text file, which is then summarized.
    The benefit that I see in your solution is only the ability to run parallelized queries in a clean syntax, without being a wizard of parallel execution with background processes.
    For the purpose of understanding the technology better (I am still exploring its capabilities) could you show an example, even without code, that produces benefits without using partitions?

    May 28, 2016 at 3:27 pm
    • Alexander Rubin

      Giuseppe, yes, you are right, and this was confusing. The parallelization in NodeJS is just much easier, and very similar to what I did a year ago with a simple shell script.

      Unfortunately, pipelining with the X Plugin does not give much better performance, as it still runs all queries in 1 thread and only saves the round trips. (In the next blog post I’m going to show how it can be beneficial, though.)

      Here is the timing:
      1. Pipeline with NodeJS:
      $ time node async_wikistats_pipeline.js

      All done! Total: 17753

      real 5m39.666s
      user 0m0.212s
      sys 0m0.024s

      2. Direct query – partitioned table:
      mysql> select sum(tot_visits) from wikistats.wikistats_by_day_spark_part where url like '%postgresql%';
      | sum(tot_visits) |
      | 17753 |
      1 row in set (5 min 31.44 sec)

      3. Direct query – non partitioned table.
      mysql> select sum(tot_visits) from wikistats.wikistats_by_day_spark where url like '%postgresql%';
      | sum(tot_visits) |
      | 17753 |
      1 row in set (4 min 38.16 sec)

      With pipelining in NodeJS, I’m reusing the same connection (and do not open a new one for each thread).

      I wish that, with pipelining, the X Plugin would let me open a number of connections:
      For example:
      var conn = mysqlx.getNodeSession( cs, );

      Then the X Plugin would run queries in parallel across those connections.

      May 30, 2016 at 1:03 pm
