My post back in April, https://www.percona.com/blog/2010/04/08/fast-ssd-or-more-memory/, generated quite a bit of interest, especially on the topic of SSD vs. memory.
That time I used a fairly small dataset, so it raised further questions, like: should we have more than 128GB of memory?
If we use a fast solid state drive, should we still look to increase memory, or does that configuration already provide the best possible performance?
To address this, I took a Cisco UCS C250 server in our lab, with 384GB of memory and a FusionIO 320GB MLC card. I generated 230GB of data for the sysbench benchmark
and ran read-only and read-write OLTP workloads, varying the buffer pool size from 50GB to 300GB (with the O_DIRECT setting, so
the OS cache is not used).
This allows us to see the effect of having more memory available.
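The exact server configuration was not published in full, but the relevant InnoDB settings would look roughly like this (a sketch; the buffer pool value is the per-run variable, and O_DIRECT is the only flush setting stated in the post):

```ini
# my.cnf fragment (sketch, not the exact config used in the benchmark)
[mysqld]
# varied per run: 50G, 120G, 250G, 300G, ...
innodb_buffer_pool_size = 250G
# bypass the OS page cache so only the buffer pool caches data
innodb_flush_method     = O_DIRECT
```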
The graph of the results:
and the raw numbers are on our Wiki benchmarks page.
So let’s take a detailed look at the numbers with a 120GB buffer pool (as if you had a system with 128GB of RAM) and with 250GB:
| Buffer pool | Read-only, tps | Read-write, tps |
|-------------|----------------|-----------------|
| 120GB | 1866.87 | 2547.69 |
| 250GB | 5656.62 (ratio 3x) | 7633.38 (ratio 2.99) |
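The ratios in the table can be checked directly from the reported tps numbers:

```python
# TPS numbers taken from the table above
readonly  = {120: 1866.87, 250: 5656.62}
readwrite = {120: 2547.69, 250: 7633.38}

# speedup from growing the buffer pool from 120GB to 250GB
ro_ratio = readonly[250] / readonly[120]
rw_ratio = readwrite[250] / readwrite[120]
print(f"read-only speedup:  {ro_ratio:.2f}x")   # ~3.03x
print(f"read-write speedup: {rw_ratio:.2f}x")   # ~3.00x
```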
So you see, doubling the memory gives a 3x performance improvement — and that is despite storing the data on one of the fastest storage devices available.
So to get the best possible performance, our advice is still the same: you should try to fit your active dataset into memory, and that is realistic nowadays, as systems with 300GB+ of RAM are already available.