MySQL insert performance with iiBench Python client

Mark Callaghan recently developed and released an enhanced Python version of Tokutek’s iiBench benchmark (Thanks Mark!). We’re happy to see a Python version of the benchmark as it can now more easily be run by a broader group of people in more diverse environments. Going forward, we will continue building upon Mark’s work on the Python version. In addition to porting iiBench to Python, Mark added query capabilities to it, functionality that we were also planning to add. We will test and discuss query performance in a future post.

Given our focus on overall performance, we tested the insert performance of the Python version and found that the resulting numbers were about 30% lower than with the C++ version. The graph below shows the C++ version in blue and the Python version with a single insert thread in green.

We realized that the amount of time it took the Python code to generate the strings for the queries was much longer than the time required by the C++ version. Since the database had to wait while the client was generating the next set of queries, it extended the overall wall clock time for execution (figures 2 and 3) and lowered the final metrics that we were comparing – total execution time and inserts/second.

We chose to explore two solutions:
1) Expose both send_query / read_query_result methods in MySQLdb. Using these two functions instead of a single call to execute allows to generate the SQL query string for the next insertion call while the database is working on the current insertion (figure 4).

2) Implement the producer/consumer pattern, with a producer thread generating the SQL query strings and a consumer thread sending the queries to the database (figure 5). The time the database has to wait is reduced to the time required to copy the insert string from shared memory to MySQLdb and for MySQLdb to send it on to MySQL.

The first option would also require people to download a modified version of MySQLdb, so we implemented the second option, which is easier to distribute. We moved data between the producer and consumer processes via a shared queue.
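A minimal sketch of this producer/consumer structure, using the multiprocessing package's shared Queue, might look like the following. To keep the sketch self-contained, the consumer just counts statements where the real client would call MySQLdb; the table name in the placeholder SQL is hypothetical:

```python
from multiprocessing import Process, Queue

SENTINEL = None  # tells the consumer the producer is finished


def producer(queue, num_inserts):
    # Generate SQL strings independently of the database round trips.
    for i in range(num_inserts):
        queue.put("INSERT INTO purchases VALUES (%d)" % i)  # placeholder SQL
    queue.put(SENTINEL)


def consumer(queue, done):
    # In the real client this loop would hand each string to MySQLdb;
    # here it just counts statements so the sketch runs standalone.
    count = 0
    while True:
        sql = queue.get()
        if sql is SENTINEL:
            break
        count += 1
    done.put(count)


def run(num_inserts):
    queue, done = Queue(maxsize=1000), Queue()
    p = Process(target=producer, args=(queue, num_inserts))
    c = Process(target=consumer, args=(queue, done))
    p.start()
    c.start()
    result = done.get()
    p.join()
    c.join()
    return result
```

Because the queue is bounded, the producer stays a bounded number of batches ahead, so memory use is capped while the consumer almost never finds the queue empty.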

The insert performance with the modified Python client (red line in the graph) was slightly better than with the C++ client. These results were in line with what we expected. With the modified client, we reduced the time the database waited between insert statements to almost zero, while Python may have incurred a slight penalty from its memory management overhead.

Note that all of these tests were done with the new release of TokuDB, v1.1.2, which has about 30% better performance across the board than the previous release used for past benchmarks.

We will be contributing our changes back to Mark Callaghan’s Python iiBench via Launchpad.


Comments (3)

  • Mark Callaghan Reply

    This is great. I wondered about Python overhead, but I didn’t worry enough to spend time on it and improve it as you have. I like the second version (no changes to MySQLdb). I need to change the code to remove the dependency on the multiprocessing package so that people with Python versions less than 2.6 can run this.

    And the impact of the Python overhead is much greater for Tokutek than other storage engines, given that performance doesn’t degrade much as the table gets large.

    March 18, 2009 at 12:52 pm
  • Arjen Lentz Reply

    Bigger multi-row inserts.

    March 18, 2009 at 10:21 pm
  • TokuDB FAQ | Tokutek Reply

    […] is a popular open-source benchmark developed by Tokutek with various improvements from online users. It stresses storage engine performance for inserting data while maintaining secondary indexes. Use […]

    February 1, 2011 at 6:45 pm
