High-Rate Insertion with MySQL and InnoDB

I am again working with a system that needs a high insertion rate for data which generally fits in memory. The last time I worked with a similar system, it used MyISAM and was built on multiple tables. Using multiple key caches was a good solution at the time, and we could get over 200K inserts/sec.

This time I worked with InnoDB tables. It was a different system with a different table structure, not to mention different hardware, so the two can't be compared directly; still, it is nice to see that you can get numbers just as high with InnoDB too.

I will spare you all the experiments we went through and just share the final numbers. On an 8-core Opteron box we were able to achieve 275K inserts/sec, at which point the load started to become IO-bound because of log writes and flushing of dirty buffers. I'm confident you can get to 400K+ inserts/sec on faster hardware and disks (say, a better RAID or flash), which is a very cool number. Of course, mind you, these are in-memory insertions into a simple table; a table with long rows and a bunch of indexes will see lower numbers.

So what's the deal? First, MySQL 5.5 (frankly, I did not try Percona Server 5.1 in this case). With MySQL 5.1 and the InnoDB Plugin we could see 40%+ of CPU wasted on mutex spin locks (per oprofile), which went down to about 15% in MySQL 5.5.8 with 8 concurrent threads. This shows both that the gains are substantial and that there is room for further performance optimization. Dmitri has good suggestions on tuning MySQL 5.5, and this is what I used as a starting point. Using multiple buffer pools with innodb_buffer_pool_instances=8 was very important.
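For illustration only, a my.cnf fragment in the spirit of this kind of tuning might look like the sketch below. Every value here is an assumption that must be sized to your own hardware; the post does not list the actual settings used.

```ini
# Illustrative InnoDB write-heavy tuning fragment -- example values only,
# not the configuration from the benchmark above.
[mysqld]
innodb_buffer_pool_size        = 16G   # keep the working set in memory
innodb_buffer_pool_instances   = 8     # split buffer pool mutex contention
innodb_log_file_size           = 2G    # larger redo logs smooth out flushing
innodb_flush_log_at_trx_commit = 2     # relax durability to speed up commits
innodb_io_capacity             = 2000  # match background flushing to the disks
```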

Second: partitioning. Unfortunately, MySQL 5.5 leaves a huge bottleneck for write workloads in place: there is a per-index rw-lock, so only one thread can insert an index entry at a time, which can be a significant bottleneck. We got 2x+ better performance by hash-partitioning the table by one of the columns, and I would expect the gains to be higher with more cores. PARTITION BY HASH(col) PARTITIONS 8 is what we used. This looks like a good workaround, but remember that partitioning can impact the performance of your SELECT queries dramatically.
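A sketch of this workaround with a hypothetical schema (the post only names the clause, not the actual table or column):

```sql
-- Hypothetical table: hash-partitioning by a column spreads inserts
-- across partitions, so concurrent threads contend on different
-- per-partition index rw-locks instead of a single one.
CREATE TABLE metrics (
    id  BIGINT NOT NULL,
    src INT    NOT NULL,
    val DOUBLE NOT NULL,
    PRIMARY KEY (id, src)   -- partitioning column must be in every unique key
) ENGINE=InnoDB
PARTITION BY HASH(src) PARTITIONS 8;
```

Note that MySQL requires the partitioning column to be part of every unique key on the table, which constrains how you can choose it.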

The inserts in this case are, of course, bulk inserts; using single-value inserts you would get much lower numbers. In fact, we used LOAD DATA INFILE, which is one of the ways to get great performance (the competing approach is prepared multi-value INSERT statements).
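As a minimal sketch of the multi-value approach (table and column names here are made up, and real code should use parameterized queries rather than string formatting), the idea is to pay the per-statement overhead once per batch instead of once per row:

```python
def build_bulk_insert(table, columns, rows):
    """Build one multi-row INSERT statement covering all rows.

    One statement per batch means one parse and one round-trip,
    instead of one per row as with single-value inserts.
    """
    cols = ", ".join(columns)
    values = ", ".join(
        "(" + ", ".join(str(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({cols}) VALUES {values}"

stmt = build_bulk_insert("metrics", ["id", "val"], [(1, 10), (2, 20), (3, 30)])
print(stmt)
# INSERT INTO metrics (id, val) VALUES (1, 10), (2, 20), (3, 30)
```

LOAD DATA INFILE achieves the same amortization by streaming rows from a file, skipping per-statement parsing entirely.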

We need to try the new Percona Server 5.5 on our Cisco box to see if we can get to 500K inserts/sec; that would be a nice round number 🙂


Comments (22)

  • Raine

    I would bet that HandlerSocket would even go further 😉

    January 7, 2011 at 5:08 pm
  • Peter Zaitsev


    Actually, I would not expect significant gains from HandlerSocket in this case. The main gain from HandlerSocket is with lots of small selects, since there parsing, locking tables, etc. make up the majority of execution time; for LOAD DATA INFILE, and even multi-value inserts, the majority of time is spent actually doing the insert, not on this overhead.

    January 7, 2011 at 5:22 pm
  • Patrick Casey

    This is a legit question, I assure you, although it might smell a bit trollish, so apologies in advance, but:

    Is this actually a common workload in the MySQL community, i.e. a very high frequency of small inserts?

    More generally, most benchmarks folks run (here and elsewhere) are read/write-intensive transactional benchmarks, but most of the actual MySQL implementations that I personally have been involved with are very, very read-heavy web applications.

    I’ve always had this personal gut feeling that database optimization teams fixate on OLTP systems because that’s the “glory” side of the game, but the actual production community would be much happier with work on the query optimizer and/or making simple selects faster, since that’s the common use case.

    This is one of those cases, though, where I’m loath to draw a general conclusion about what the “community” is doing based on just what I’ve seen, so I’d be curious to hear other people’s opinions on the topic.

    January 8, 2011 at 9:57 am
  • Peter Zaitsev


    No, this is not a very frequent workload. When it comes to focusing on heavy read/write, I believe that is where a lot of the problems are. Reads are, first of all, easy to scale: you can “just” add more slaves in a lot of cases (OK, replication is async, etc., but still, scaling reads is a lot easier than scaling writes), and MySQL scales a lot better for reads these days with multiple CPUs.

    When you’re speaking about making simple selects faster in a direct way (not via HandlerSocket and such, which reinvents the rules of the game), there is typically not too much you can do with tuning; it would require a lot of work on the code to make them faster. I have not seen many people ready to pay for this work, because it is just easier to invest in hardware.

    The query optimizer: Monty is doing some work in MariaDB, and maybe Oracle will re-focus some day. There are tons of things which could be done to make MySQL work better with complicated queries, but I think most people just know MySQL is not good at this and do not expect it to get very good any time soon. Solutions range from using Infobright or InfiniDB to going to other databases, or even using Hadoop, Hive, and other solutions. Ask yourself how many people using stock MySQL for the OLTP part of their business use something else for analytics.

    January 8, 2011 at 10:40 am
  • Baron Schwartz

    It isn’t the most common workload, but there are still lots of people doing write-heavy workloads with MySQL. Any kind of stats collection tends to get this usage pattern. Phone-home systems, agents that send results to a central location, and sensor data are a few of the ones I’ve seen. Some of these are really write-heavy, like 95% or 99% writes.

    January 8, 2011 at 2:03 pm
  • Patrick Casey


    I’m with you, I suppose, in that tuning write workloads is usually a more solvable problem, i.e. there’s usually something you can do to improve the throughput of any given special case. As we’ve all agreed in earlier threads, it’s not always a good idea to go down this road, but that’s another question :).

    My point here, I guess, is that database experts (and I’m by no means picking on this community in particular; it’s endemic to virtually every vendor I know, not just MySQL and the MySQL community) tend to always be ready to solve “write throughput” problems because, well, they’re solvable.

    For whatever reason, the core database teams seem to fixate on the same thing too; look at the benchmarks major vendors put out (not just MySQL), for example, and they’re usually OLTP benchmarks unless they’re a specialized analytics vendor.

    It just seems to me that this focus on OLTP misses the mark for the majority of the real world use cases.

    Personally I’d be happy to sacrifice, say, 50% of my write throughput for a better optimizer and faster simple selects.

    That’s not a tuning thing; it’s more a back-end database vendor thing, but I see them as related in that both involve a focus on OLTP as a benchmark.

    Just my $0.02 of course, as usual 🙂

    January 8, 2011 at 2:51 pm
  • Martin

    What were you using as a client? The best I’ve ever been able to get for insert rates is around 10k/sec on any system with a single client. Also, what does the test table schema look like? Finally, what rates were you getting for LOAD DATA on the same table?

    January 9, 2011 at 8:16 pm
  • Caio Spadafora


    This workload seems to be the same one I face with my backup system, Bacula. Besides the write rate, I also have a lot of huge deletes and selects. So I have a lot of individual row writes at high rates, plus some deletes and selects working with a huge amount of data (my main table has at least 50GB of data).

    Today I am using Percona Server 5.1.53 with the MyISAM storage engine. Do you recommend any changes? Upgrading to MySQL 5.5? Changing to InnoDB? What about other RDBMSs like PostgreSQL or even Oracle?

    January 9, 2011 at 9:04 pm
  • Peter Zaitsev


    MyISAM is often out of the question because of table locks, as well as corruption in case of a crash. If you use multiple tables and multiple key caches (or the segmented key cache in MariaDB) you should be able to get a higher insertion rate than with InnoDB… but then querying this data is also an issue.

    If you have a mix of inserts, selects, etc., InnoDB is probably the better choice. Percona Server and 5.5 are both good; which is best depends on the workload. If you’re IO-bound, an SSD might help independently of all this.

    January 9, 2011 at 9:33 pm
  • Mark Callaghan

    Patrick – how are simple selects too slow?

    January 10, 2011 at 8:58 am
  • Patrick Casey


    Even on a switched network, it’s something like 700us for me to get a select by primary key out of MySQL. That sounds like a very fast number, but most of it is overhead (query parse, network overhead, etc.). There’s a lot of app-layer caching folks write (myself included) to avoid doing lots of lookups against the database, because doing, say, 1000 lookups like this adds 700ms to a page render … not acceptable.

    So instead folks write error-prone client-side caching and have to deal with more code, invalidations, etc.

    I’d dearly love (pie in the sky here) to drive simple select time down to close to the wire time, say 70-100us. At that latency I could delete a lot of client-side code and just read from the database whenever I wanted data.

    More stable, easier to maintain, performs acceptably. A trifecta win :).

    — Pat

    January 10, 2011 at 12:11 pm
  • Mark Callaghan

    Patrick — I won’t disagree with you on those terms. memcached and NDB provide that. HandlerSocket provides QPS that is closer to memcached, but I am not sure if it provides the response time. And SELECT/HANDLER definitely won’t. That isn’t a performance problem I need to fix today.

    January 10, 2011 at 3:12 pm
  • Michel Bartz

    Patrick – that’s a valid point; one of my site’s main problems was exactly what you describe. I found one solution: I dropped MySQL for Redis. I realized that most of the operations I do (like 90% of them), and most websites for that matter, are really simple, and SQL is just useless overhead. Combining the server-side data structure capabilities of Redis with its very good performance and its “safeness”, I can just write/read data from Redis without having to worry, which is what most developers are asking for.

    January 10, 2011 at 5:54 pm
  • Burhan


    I want to change the value of innodb_buffer_pool_instances to 8, but the variable is read-only. How do I do that?

    March 29, 2012 at 6:24 am
  • Peter Zaitsev

    Change it in my.cnf and restart the server.

    March 29, 2012 at 9:40 am
  • Burhan

    Thanks Peter, it worked! 🙂

    I am using Python and MySQL 5.5. I have an XML file which I parse; the total number of records is 1.9 million. The table has a parent-child relationship and the engine is InnoDB. The process is very slow: it first checks whether the parent record exists; if so, the child record is inserted; otherwise, it first inserts the parent and then the child. After setting innodb_buffer_pool_instances to 8, I don’t see any difference.

    I guess I am doing something wrong, could you please advise, how must I do that?

    March 29, 2012 at 12:03 pm
  • Szymon Stasik

    Nothing has worked for me.

    I tested many config tweaks with:

    sysbench --test=oltp --oltp-table-size=720000 --max-time=30 --max-requests=0 --mysql-table-engine=innodb --db-ps-mode=auto --mysql-engine-trx=yes --oltp-skip-trx=false --oltp-test-mode=complex --oltp-dist-type=uniform --oltp-range-size=1 --num-threads=16 --mysql-db=test --mysql-user=root --db-driver=mysql run

    I got no more than approximately 1 core fully loaded on an 8-core CPU…

    even with TokuDB (though its performance was quite a bit better).

    I just moved to Postgres and got ~100x better performance, utilising all cores.

    June 2, 2012 at 10:29 am
  • timir


    I have a table containing 2 crore records in MySQL InnoDB, but it gets locked every time multiple inserts happen.
    Please help me out with solutions.


    December 16, 2013 at 5:35 am
  • Bhargav


    I have a certain question and I could not find an answer to it no matter where I searched. I recently conducted a mysqlslap test to determine which type of insert is best from a performance standpoint. The files containing the queries were stored on a Linux system and read from it. I hope someone can shed some light on this. Following are the benchmarks obtained through the mysqlslap test:

    MySQL Version: 5.6.14
    Total Records inserted on empty table: 5000
    Time taken for Individual/Separate Insert: 5.24 seconds
    Time taken for Batch Insert: 2.77 seconds
    Time taken for Bulk Insert: 0.52 seconds

    Does it have anything to do with parsing, Created_tmp_tables, or Innodb_buffer_pool_pages_flushed? Also, the query cache is turned off.


    March 27, 2014 at 7:50 am
  • av

    Can the bandwidth of a select query be calculated, i.e. how much data in bytes was selected?

    October 28, 2015 at 8:34 am
  • Dnky

    I am not sure you have given any information that would be useful for someone who is looking into MySQL scalability.

    August 19, 2017 at 9:41 am

Comments are closed.

Use Percona's Technical Forum to ask any follow-up questions on this blog topic.