
READ-COMMITTED vs REPEATABLE-READ in a tpcc-like load

 | February 11, 2010 |  Posted In: Benchmarks, MySQL


The question of which isolation level is better pops up again and again. Recently it was discussed in InnoDB : Any real performance improvement when using READ COMMITED isolation level ? and in Repeatable read versus read committed for InnoDB .
Serge in his post explains why READ-COMMITTED is better for a TPC-C load, so
why don't we take the tpcc-mysql benchmark and check the results.

I took 3 different datasets: 1000w (100GB of data), 300w (30GB), and 10w (1GB), on a box with 32GB of RAM and a 26GB buffer pool. The last case, 10w, is interesting because I expect a lot of row-level contention on such a small dataset.
As usual, I used the tpcc-mysql benchmark, with 16 concurrent users (and 32 for 10w).

I also had the binary log enabled in ROW format (READ-COMMITTED does not support STATEMENT-based replication in 5.1), and everything was run under XtraDB-9 (based on InnoDB-plugin 1.0.6).
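For reference, a minimal my.cnf fragment for this kind of setup might look like the following. The buffer pool size and settings match what is described above; the exact layout is illustrative only:

```ini
[mysqld]
# 26GB buffer pool on the 32GB box described above
innodb_buffer_pool_size = 26G

# READ-COMMITTED requires row-based binary logging in 5.1
log-bin                 = mysql-bin
binlog-format           = ROW

# switch between the two isolation levels under test
transaction-isolation   = READ-COMMITTED
# transaction-isolation = REPEATABLE-READ
```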

Here are the results:

For 1000w:

1000w

Here READ-COMMITTED seems preferable, though the difference is very small.

For 300w:
300w

I would say both modes are even here; there is no winner for me.

For 10w:
10w

Almost identical results in this case.

However, however.
For the 10w run I got 60 deadlock errors ("1213, Deadlock found when trying to get lock; try restarting transaction") in READ-COMMITTED mode, and 0 deadlocks in REPEATABLE-READ mode.

I understand that 60 deadlocks out of 2704687 total transactions can be ignored, but it seems you have a better chance of getting a DEADLOCK in READ-COMMITTED than in REPEATABLE-READ.
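If you do hit error 1213, the most recent deadlock can be inspected from the server with standard InnoDB commands, for example:

```sql
-- show the LATEST DETECTED DEADLOCK section, including both
-- transactions involved and the locks they were waiting for
SHOW ENGINE INNODB STATUS\G

-- check which isolation level the current session is actually using
SELECT @@tx_isolation;
```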

So both modes look even to me, though there are some facts to consider:

  • READ-COMMITTED is used more rarely than REPEATABLE-READ (the default), so it is less tested
  • READ-COMMITTED does not work with statement-based replication in 5.1
  • with READ-COMMITTED you may get more DEADLOCKS
Vadim Tkachenko

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming help optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products. He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication 3rd Edition.

6 Comments

  • VJ Kumar, in one case I was just working on, we had two transactions deleting rows using a secondary key and then inserting new records with the same secondary key values. By switching to READ-COMMITTED we were able to avoid the deadlocks, because in REPEATABLE-READ InnoDB would lock the gap after the delete. In RC InnoDB almost never uses gap locking, which is a common cause of deadlocks in RR.

  • For some time we used READ-COMMITTED and encountered a lot of weird bugs with this isolation level; it seems
    that every time the MySQL team added a new feature, it wasn't tested as thoroughly with READ-COMMITTED as with the commonly used REPEATABLE-READ.

    We have now changed the isolation level back to the default (REPEATABLE-READ). This has resulted in noticeably more deadlocks than READ-COMMITTED, but so far
    fewer bugs.

    To me it seems strange that REPEATABLE-READ could give fewer deadlocks than READ-COMMITTED, as it is a higher isolation level and therefore must lock more.

  • very interesting results, never seen a benchmark quantify the cost of isolation levels. please also provide comparisons for all the other levels.

  • I thought this was old news, but good to see it proven in practice. The way I learnt about it long ago is that given InnoDB’s architecture, there’s no advantage to lowering the isolation level. The amount of ‘housekeeping’ internally (aka overhead) is virtually the same between REPEATABLE READ and READ COMMITTED.

  • We switched from REPEATABLE-READ to READ-COMMITTED because we were getting lots and lots of deadlocks all of the time. Deadlocks have pretty much gone away completely since switching to READ-COMMITTED, which seems to be in direct contradiction with your statement. Why would that be? We are using MySQL 5.0.

  • I am not sure why there should be any difference in locking or dead-locking between RC and RR at all.

    I am not familiar with InnoDB internals, but assuming it implements traditional multi-versioning of the Oracle flavour, there should not be any locking while providing read consistency. E.g. in Oracle, the only difference between the case when you have statement-level read consistency and the case when you want to ensure transaction level read consistency is how far back in time, or to what specific version of the row, you want to go: to the beginning of the statement or to the beginning of the entire transaction.

    What am I missing ? I am sure for an InnoDB expert it should be easy to create a deterministic test case and demonstrate why there is more locking (or any locking at all) with RR vs. RC.
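A deterministic interleaving along the lines the commenters describe can be sketched as follows. This is illustrative only: the table, index, and values are hypothetical, and the exact locks InnoDB takes depend on the version.

```sql
-- Hypothetical schema: a table with a non-unique secondary index on k
CREATE TABLE t (id INT PRIMARY KEY AUTO_INCREMENT, k INT, KEY (k));
INSERT INTO t (k) VALUES (5), (10), (20), (25);

-- Session A:
BEGIN;
DELETE FROM t WHERE k = 10;     -- under RR: next-key lock on k=10, plus a
                                -- gap lock up to the next index record (20)

-- Session B:
BEGIN;
DELETE FROM t WHERE k = 20;     -- under RR: next-key lock covering (10, 20]
                                -- (gap locks alone do not conflict, so
                                -- both DELETEs succeed)

-- Session A:
INSERT INTO t (k) VALUES (11);  -- insert intention lock in the gap (10, 20):
                                -- waits on B's next-key lock

-- Session B:
INSERT INTO t (k) VALUES (19);  -- waits on A's gap lock: cycle detected,
                                -- one session is rolled back with error 1213

-- Under READ-COMMITTED the DELETEs take no gap locks,
-- so both INSERTs can proceed without deadlock.
```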
