We all enjoyed Yoshinori's announcement of HandlerSocket, the MySQL plugin that opens a NoSQL way to access data stored in InnoDB.
The published results are impressive, but I wanted to understand a few things better, so I ran a couple more experiments.
In his blog post Yoshinori covered the case when all data fits into memory, and one question I had was: if we put the data on SSD (a FusionIO 320GB MLC card in this experiment), how will it affect throughput? The idea is to check whether HandlerSocket can be a good NoSQL solution with persistent storage.
I should give credit to the HandlerSocket developers: I was able to install it and get it working with Percona Server 5.1.50-12.1 without any issues.
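For reference, the setup needed only the standard steps from the HandlerSocket documentation; a sketch of the listener settings I mean (ports and thread counts here are the documented defaults, adjust to your workload):

```
# my.cnf, [mysqld] section
loose_handlersocket_port = 9998       # port for read requests
loose_handlersocket_port_wr = 9999    # port for write requests
loose_handlersocket_threads = 16      # reader worker threads
loose_handlersocket_threads_wr = 1    # writer worker thread
```

and then in the mysql client: `INSTALL PLUGIN handlersocket SONAME 'handlersocket.so';`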
For the experiment I used a Cisco UCS C250 as the server and a Dell PowerEdge R900 as the client, running a single-threaded Perl script with the HandlerSocket client library.
The table is a standard sysbench table with 300 mil rows, and I ran primary-key lookup queries through HandlerSocket.
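My script is in Perl, but the HandlerSocket wire protocol is a simple tab-separated text protocol, so a PK lookup is easy to sketch in any language. Below is a minimal Python sketch; the database/table names, column list, and the default read port 9998 are assumptions matching a typical sysbench setup, and `pk_lookup` is a hypothetical helper, not part of any client library:

```python
# Minimal sketch of a HandlerSocket primary-key lookup.
# The protocol is tab-separated fields in newline-terminated lines.
import socket

READ_PORT = 9998  # HandlerSocket's default read-only port

def open_index_request(index_id, db, table, index, columns):
    # 'P' opens an index and binds it to index_id for later requests
    return "\t".join(["P", str(index_id), db, table, index,
                      ",".join(columns)]) + "\n"

def find_request(index_id, key, limit=1, offset=0):
    # '<index_id> = 1 <key> <limit> <offset>' fetches rows where PK = key
    return "\t".join([str(index_id), "=", "1", str(key),
                      str(limit), str(offset)]) + "\n"

def pk_lookup(host, db, table, columns, key):
    # Hypothetical helper: one connection, one open_index, one find.
    with socket.create_connection((host, READ_PORT)) as s:
        s.sendall(open_index_request(0, db, table, "PRIMARY", columns).encode())
        s.recv(4096)  # expect "0\t1\n" on success
        s.sendall(find_request(0, key).encode())
        resp = s.recv(4096).decode().rstrip("\n").split("\t")
        # resp = [errcode, numcolumns, value1, value2, ...]
        return resp[2:] if resp[0] == "0" else None
```

In the benchmark loop only the `find` request is repeated; the index is opened once per connection, which is part of why HandlerSocket avoids so much per-query overhead.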
To measure how IO access affects throughput, I varied the number of rows the script is allowed to access: at 150 mil rows the table no longer fits into memory (and the more rows we access, the more IO we have to perform), and at 300 mil rows the data size is twice the available buffer pool.
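The working set is controlled simply by the range of ids the script may request; a sketch of the idea (the row counts match the post, the helper itself is illustrative):

```python
import random

TABLE_ROWS = 300_000_000  # total rows in the sysbench table

def next_key(accessed_rows):
    # Restricting lookups to ids in [1, accessed_rows] controls how much
    # of the table, and therefore how much IO, the benchmark touches.
    return random.randint(1, accessed_rows)

# e.g. touch only half the table: keys fall in [1, 150 mil]
key = next_key(TABLE_ROWS // 2)
```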
I compared results with the table located on FusionIO and on a regular RAID10 array (8 disks).
The raw results are on the Wiki page.
As you can see, with regular disks you can't expect good throughput once the data stops fitting into memory, while with FusionIO it is quite acceptable. With the data twice as big as memory we saw only about a 50% drop in throughput, which is a pretty decent result.
I am looking to run write benchmarks against HandlerSocket, and if the results are good we may include it in our Percona Server distribution.