
Is there a benefit from having more memory?

November 19, 2010 | Posted In: Benchmarks, Insight for DBAs


My post back in April, https://www.percona.com/blog/2010/04/08/fast-ssd-or-more-memory/, generated quite a bit of interest, especially on the topic of SSD vs. memory.

That time I used a fairly small dataset, so it raised more questions, such as: should we have more than 128GB of memory?
If we use a fast solid state drive, should we still look to increase memory, or does that configuration already provide the best possible performance?

To address this, I took a Cisco UCS C250 server in our lab, with 384GB of memory and a 320GB FusionIO MLC card. I generated 230GB of data for the sysbench benchmark
and ran read-only and read-write OLTP workloads with the buffer pool size varying from 50 to 300GB (with the O_DIRECT setting, so
the OS cache is not used).

This allows us to see the effect of having more memory available.
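The post does not give the exact server configuration or sysbench command line, but a minimal sketch of this kind of setup might look like the following. The table size, run time, and socket path are assumptions for illustration; the 24-thread count comes from the comments below.

    # Hypothetical my.cnf fragment (values are assumptions, not the exact settings used):
    #   innodb_buffer_pool_size = 50G .. 300G   # varied per run, 50-300GB
    #   innodb_flush_method     = O_DIRECT      # bypass the OS page cache

    # Prepare a sysbench OLTP table of roughly 230GB (row count is an assumption).
    sysbench --test=oltp --mysql-user=root --mysql-socket=/tmp/mysql.sock \
      --oltp-table-size=1000000000 prepare

    # Read-only OLTP run with 24 threads.
    sysbench --test=oltp --mysql-user=root --mysql-socket=/tmp/mysql.sock \
      --oltp-table-size=1000000000 --oltp-read-only=on \
      --num-threads=24 --max-time=1800 --max-requests=0 run

    # For the read-write run, drop --oltp-read-only=on.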

The graph result is:

and the raw numbers are on the Wiki benchmarks page.

So let’s take a detailed look at the numbers with a 120GB buffer pool (as if you had a system with 128GB of RAM) and with 250GB:

Buffer pool   Read-only, tps        Read-write, tps
120GB         1866.87               2547.69
250GB         5656.62 (ratio 3x)    7633.38 (ratio 2.99x)

So you can see that roughly doubling the memory gives a 3x performance improvement! And that is despite storing the data on one of the fastest storage devices available.

So to get the best possible performance, our advice is still the same: you should try to fit your active dataset into memory, and that is feasible, as systems with 300GB+ of RAM are already available nowadays.
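A rough way to check whether your active dataset fits in the buffer pool is to compare InnoDB's logical read requests against the reads that actually had to hit storage. The snippet below is a generic sketch using standard MySQL status counters, not something taken from the benchmark itself.

    # Logical reads served from the buffer pool vs. reads that went to disk.
    mysql -u root -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'"

    # Quick miss-ratio estimate (MySQL 5.1-era INFORMATION_SCHEMA tables);
    # as a rule of thumb, on a warmed-up server with a well-fitting working
    # set this stays at a small fraction of a percent.
    mysql -u root -N -e "
      SELECT ROUND(100 * d.VARIABLE_VALUE / r.VARIABLE_VALUE, 3) AS miss_pct
      FROM information_schema.GLOBAL_STATUS d, information_schema.GLOBAL_STATUS r
      WHERE d.VARIABLE_NAME = 'Innodb_buffer_pool_reads'
        AND r.VARIABLE_NAME = 'Innodb_buffer_pool_read_requests';"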

Vadim Tkachenko

Vadim Tkachenko co-founded Percona in 2006 and serves as its Chief Technology Officer. Vadim leads Percona Labs, which focuses on technology research and performance evaluations of Percona’s and third-party products. Percona Labs designs no-gimmick tests of hardware, filesystems, storage engines, and databases that surpass the standard performance and functionality scenario benchmarks. Vadim’s expertise in LAMP performance and multi-threaded programming help optimize MySQL and InnoDB internals to take full advantage of modern hardware. Oracle Corporation and its predecessors have incorporated Vadim’s source code patches into the mainstream MySQL and InnoDB products. He also co-authored the book High Performance MySQL: Optimization, Backups, and Replication 3rd Edition.

10 Comments

  • @Nils: The extended memory technology (EMT) is designed to make the solution cheaper, not more expensive. I could put 4GB DIMMs in the 48 DIMM slots and get 192GB of memory at a very low price; better yet, I can put in 24 8GB DIMMs and get 192GB. With a full set of 8GB DIMMs I can go all the way up to 384GB of memory. That’s the power of EMT.
    More info at: http://www.cisco.com/en/US/prod/collateral/ps10265/ps10280/ps10300/white_paper_c11-525300_ps10276_Products_White_Paper.html

  • Vadim,

    As always, I would mention that results for “memory fitting” benchmarks are going to be very workload specific. Please read these results as “memory is still faster than flash” rather than assuming the difference will be 3x for your workload – it can be a lot more or a lot less depending on a variety of factors.

  • That comes as no surprise. The Cisco Memory Extension technology (whatever it is really called, I don’t know) just pushes the point where adding RAM becomes exponentially more expensive further back, because you can throw in smaller, cheaper modules.

  • If you look at the growth curve on server memory, it seems like we’re still in Moore’s law territory here. Two years ago I bought some 64GB servers and that was on the border of unusual. These days you get 128GB in a standard chassis, and 300GB+ is a commodity high-memory build.

    Not sure what this means for the future of databases, except that IO performance and IO efficiency are getting less and less critical in my buildouts. If I can throw memory at the problem for a fraction of the cost of a super-fast disk array, why not solve the problem with memory, which is A) cheaper, B) draws less power, and C) faster than even a flash disk array.

  • I would be interested in a benchmark of XtraDB vs PBXT. Could you do something like that? Because PBXT is maybe the better choice if the database doesn’t fit in memory. But numbers are better than conjectures.

  • Glenn,

    It is a 24-thread benchmark; as my previous post on this Cisco box showed, we get maximum performance
    with 24 running threads.

