October 2, 2014

When is it time to upgrade memory?

Quite commonly I get a question similar to this: “My InnoDB buffer pool is already 90% full, should I be thinking about upgrading memory already?”
This is the wrong way to put the question. Unless you have a very small database (read: one smaller than innodb_buffer_pool_size), the whole buffer pool will be busy sooner or later.
So how do you figure out when it is time for an upgrade?

Look at the number of misses per second. Check the number of InnoDB file reads and writes per second and see how high they are. A decent drive can do some 150-200 IOs/sec, so this gives you a way to gauge the load. If you get just a couple of reads per second, your working set fits in memory very well; if it is hundreds, you are likely IO bound or becoming IO bound.
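The arithmetic above can be sketched as follows. The counter names are real SHOW GLOBAL STATUS variables, but the sample values are invented for illustration:

```python
# Sketch: estimate InnoDB I/O per second from two samples of
# SHOW GLOBAL STATUS taken a known interval apart.
# Counter names are real; the sample values below are invented.

def per_second(sample_a, sample_b, interval_s):
    """Per-second delta of each counter between two samples."""
    return {k: (sample_b[k] - sample_a[k]) / interval_s for k in sample_a}

# Two hypothetical samples, 10 seconds apart:
a = {"Innodb_data_reads": 1_000_000, "Innodb_data_writes": 2_500_000}
b = {"Innodb_data_reads": 1_002_000, "Innodb_data_writes": 2_501_500}

rates = per_second(a, b, 10)
print(rates)  # {'Innodb_data_reads': 200.0, 'Innodb_data_writes': 150.0}
```

At 200 reads/sec this hypothetical server is already at the ceiling of a single decent drive.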

Look at iowait. iostat -dx 10 will show disk utilization. Low utilization (less than 50%) means requests rarely have to wait for the disk to service them.
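A quick way to pick the busy devices out of iostat -dx output, assuming the classic layout where %util is the last column; the sample output below is made up:

```python
# Sketch: pull the %util column out of "iostat -dx" output to spot
# devices that are mostly busy. The sample output is invented.

SAMPLE = """\
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.10 2.30 55.00 12.00 880.00 192.00 16.00 0.90 13.40 7.10 47.50
sdb 0.00 0.10 180.00 40.00 2880.00 640.00 16.00 4.20 19.10 4.30 94.60
"""

def busy_devices(text, threshold=50.0):
    """Return (device, %util) pairs at or above the threshold."""
    out = []
    for line in text.strip().splitlines()[1:]:   # skip the header line
        cols = line.split()
        dev, util = cols[0], float(cols[-1])     # %util is the last column
        if util >= threshold:
            out.append((dev, util))
    return out

print(busy_devices(SAMPLE))  # [('sdb', 94.6)]
```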

Look at trends. Really, it is hard to give advice without trending data. Say you have 50 reads/sec: is that a problem waiting to happen? You can hardly tell unless you have trending. If it was 5 reads/sec a week ago, 20 reads/sec a couple of days ago, and 50 today, I would be worried. Trending database size, number of queries, etc. is also very helpful. For example, growth from 20 to 50 reads/sec may be because the load is getting more IO bound, or simply because the number of queries increased dramatically, or because queries changed their plans.
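A trivial sketch of such a trend check, using the invented 5 → 20 → 50 history from the text:

```python
# Sketch: a trivial trend check over historical reads/sec samples.
# The history below is the invented 5 -> 20 -> 50 example from the text.

history = [5, 20, 50]  # a week ago, a couple of days ago, today

def growth_factor(samples):
    """How many times the metric multiplied across the sampled period."""
    return samples[-1] / samples[0]

f = growth_factor(history)
if f > 2:
    print(f"reads/sec grew {f:.0f}x over the period - worth investigating")
```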

Do the sizing. This works especially well in a sharded environment with master-master replication, or any other setup which allows you to do light tests in production and which has relatively uniform database content. In cases like this you can often experiment with different innodb_buffer_pool_size values (while having innodb_flush_method=O_DIRECT) and see how performance depends on buffer size, so you get an understanding of what memory-to-disk ratio is optimal for your application, or at which point performance drops dramatically. It is even better if you have a benchmark relevant to your application setup (I do not put this first because few people have such a setup), so the matter becomes a benchmarking problem. Once you find out that your system starts to degrade quickly as database size reaches, say, 3x innodb_buffer_pool_size, you can use that as guidance for adding more memory or more servers.
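The last step, spotting where performance drops dramatically, might be sketched like this; the (database-size to buffer-pool-size ratio, queries-per-second) measurements are entirely hypothetical:

```python
# Sketch: find the data-to-buffer-pool ratio at which throughput collapses.
# Each pair is (database_size / innodb_buffer_pool_size, queries per second);
# the numbers are hypothetical measurements, not real benchmark results.

measurements = [(1, 4800), (2, 4500), (3, 4200), (4, 2000), (5, 900)]

def degradation_point(points, drop=0.5):
    """First ratio where qps falls below `drop` times the previous step's qps."""
    for (_, q0), (r1, q1) in zip(points, points[1:]):
        if q1 < q0 * drop:
            return r1
    return None

print(degradation_point(measurements))  # 4
```

In this made-up dataset throughput holds up to 3x the buffer pool size and collapses at 4x, which would be the guidance number for capacity planning.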

Of course this is all an oversimplification: you also have to look at CPU scalability (in particular if you have many cores), consider how much IO bandwidth you have, etc., but these factors should already allow you to make an informed decision.

About Peter Zaitsev

Peter managed the High Performance Group within MySQL until 2006, when he founded Percona. Peter has a Master's Degree in Computer Science and is an expert in database kernels, computer hardware, and application scaling.

Comments

  1. dim says:

    Honestly, only one thing can give the true answer: give more RAM to MySQL and see if it really helps :-)

    Rgds,
    -dim

  2. idont says:

    Investigating for one hour costs as much as x GB of RAM. So better to buy some RAM. :)

  3. Sheeri says:

    it’s not just the cost of the RAM to keep in mind: you need to turn the machine off, put more RAM in it, turn it back on and hope the RAM is seated properly, etc.

    Plus, if you’re not using memory properly, adding more will only help a little. If you use RAM better, that helps whether or not you add more RAM.

    As for the OP — I’d also recommend checking out the Innodb_buffer_pool_pages% variables. innodb_max_dirty_pages_pct is set to 90 by default, meaning that by default up to 90% of your buffer pool might be waiting for a flush to disk — if you have a lot of Innodb_buffer_pool_pages_dirty, it can be a sign that you need to flush to disk more, especially if there’s not a lot of disk writes.
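    For example, the dirty fraction can be computed straight from those counters; the counter names below are real status variables, while the sample values are invented:

```python
# Sketch: how much of the buffer pool is dirty, from the
# Innodb_buffer_pool_pages% counters. Counter names are real
# status variables; the sample values are invented.

status = {
    "Innodb_buffer_pool_pages_total": 262144,  # 4GB pool / 16KB pages
    "Innodb_buffer_pool_pages_dirty": 196608,
    "Innodb_buffer_pool_pages_free": 1024,
}

dirty_pct = (100 * status["Innodb_buffer_pool_pages_dirty"]
             / status["Innodb_buffer_pool_pages_total"])
print(f"{dirty_pct:.0f}% of the buffer pool is dirty")  # 75%
```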

  4. peter says:

    Dim,

    The trial-and-error approach is a whole different angle. More memory? Faster CPUs? Faster disks? Adding an index? Changing the table engine? You can use the “try and see” approach for everything, but that may waste a lot of time and money and cause unneeded downtime.

  5. peter says:

    idont,

    Few things

    1) The ratio between labor and parts cost differs across regions of the world, so sensitivity to each differs as well. However it is true that unless a small database is expected, it is often a good idea to buy boxes with as much “cheap” memory as they can fit. Past a certain size the price of memory tends to skyrocket.

    2) Rented servers: even though memory is cheap, many hosting providers will not only effectively charge you the cost of the memory for the upgrade, but will also add heavy monthly fees for hosting servers with large amounts of memory. This needs to be factored into the cost.

    3) Upgrading and testing take time and effort too, and cause downtime unless there is redundancy in the system, which also needs to be factored in.

    4) Many larger applications have many servers, so it is not a question of upgrading 1 server but of upgrading 10 or 100, which multiplies both the hardware cost and the labor and testing.

  6. peter says:

    Sheeri,

    Good point, which I indeed did not mention. Upgrading RAM should come together with MySQL settings adjustments to use it well.

    Regarding innodb_max_dirty_pages_pct: in most cases whether this value is high or low depends on how your load is structured and how large your logs are. If there is a working set which is both read and written, the value will likely be high, because few pages which do not need to be modified are brought into the buffer pool. Regarding flushes: if there is any considerable disk IO activity, InnoDB will be very slow to flush dirty pages unless there is checkpointing activity or the amount of dirty pages hits 90%.

  7. Sheeri says:

    Peter — you said:

    “Regarding flushes if there is any considerable disk IO activity Innodb will be very slow to flush dirty pages unless there is checkpointing activity or amount of dirty pages hits 90%”

    But you also start out with people saying that their InnoDB buffer pool is 90% full. One of my points was that it may be mostly full of dirty pages, and that can be checked by setting innodb_max_dirty_pages_pct lower and seeing if that helps, i.e. setting it to 20% or 50% and seeing if your InnoDB buffer pool is still “full”.

  8. peter says:

    Sheeri, “buffer full” meant there are few free pages. Generally there is little reason to have any significant number of free pages if you have a large database and the box is warmed up. I was referring to the case when the database was gradually growing and finally reached a size at which it no longer fits into the buffer pool completely.

    innodb_max_dirty_pages_pct will not affect the number of free pages, only the number of dirty pages. Even after a page is flushed from the buffer pool it just becomes “clean”, not free :)

  9. idont says:

    @Peter: Thanks for your really good points! :)

    I was thinking too much about situations I know:

    – startups: not so many servers + not enough time to investigate + not all the needed skills (that’s why we carefully read your blog daily! :) ) => a RAM upgrade is a cheap, fast and efficient solution…

    – Fortune 500: external consultants doing dev (like I was) are expensive. My colleagues in India were also not billed as low as we think.

    BTW, keep up the great work! I learned a lot from your great site (except for one big mystery about MySQL v5.1… its release date as a stable version… ;) )

  10. dim says:

    Peter,

    let’s not mix all things together, let’s just speak about RAM.

    1. Generally speaking, the same server will never work worse if you add some RAM to it (unless you use a buggy OS).

    2. Another point: will it really improve your database performance?

    And following questions coming in mind:

    Until what size can we consider the MySQL cache still effective?
    1GB? 10GB? 100GB? Also, given all the notes about cache locking, what
    about concurrency?

    Is the MySQL cache *always* working better than the OS filesystem cache?
    And perhaps we should first compare the ratio between ‘logical’ reads
    from MySQL and the real ‘physical’ reads processed by the OS?

    There may be other points (like monitoring global memory usage under the OS, etc.), but let’s cover these first..

    Rgds,
    -dim

  11. Maxime says:

    I have had some performance issues for at least 10 months on a MySQL server. I am not an expert; I do my tuning by reading articles like this one (by the way, this website is awesome).
    Well, I think my server can’t load all indexes into memory and this is slowing it down.
    I can tell because when I launch my application, which fills one table, it works quite well (even if it’s not perfect), but when I activate some other functionality of my application, the server starts to respond slowly and the first task, which was working quite well, now has a very poor insertion rate.
    The thing is that it’s difficult to tune this server because I have few read queries, but they hit large tables (~10 million rows), and there are a lot of insert queries on these same tables.

    I checked “iostat -dx 10” and %util is not really constant; sometimes it is lower than 5% and sometimes it is above 90%. I think this might be due to “innodb_flush_log_at_trx_commit=2” in the config file.

    “innodb_buffer_pool_size” is set to 4GB but the whole database size is almost 8GB (~60 million records).
    This database contains some real-time information, so if the database is slow it starts to be an issue.
    I have implemented some memcached servers to avoid making useless queries to the database, but the cache is not as up to date as we would like, due to the database latency.

    To give you more information, this is my.cnf:

    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    user=mysql

    max_connections=600
    connect_timeout=30
    wait_timeout=15
    max_allowed_packet=64M
    default-collation=utf8_unicode_ci
    default-character-set=utf8
    default-storage-engine=InnoDB
    bulk_insert_buffer_size=8M

    # InnoDB Tuning
    #innodb_force_recovery=2
    innodb_thread_concurrency=2
    innodb_file_per_table=1
    innodb_doublewrite=0

    # Set buffer pool size to 50-80% of your computer’s memory
    innodb_buffer_pool_size=4G
    innodb_additional_mem_pool_size=20M

    # Set the log file size to about 25% of the buffer pool size
    innodb_log_group_home_dir = /mnt/log_mysql
    innodb_log_file_size=1G
    innodb_log_buffer_size=64M
    innodb_flush_log_at_trx_commit=2

    [mysql.server]
    user=mysql
    basedir=/var/lib
    thread_concurrency = 8
    max_allowed_packet = 64M

    [mysqld_safe]
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid

    The server (on Amazon) has 7GB of memory:
    [root@dom*************** log]# free -m
                 total       used       free     shared    buffers     cached
    Mem:          7175       7158         17          0         47       1818
    -/+ buffers/cache:       5292       1882
    Swap:            0          0          0

    I am just wondering whether the latency of the server is due to a lack of memory or not?
    I hope you can help me; I know that this website is not a support platform for MySQL and that I am not a DBA.
    Please do not tell me “if you don’t know how to do it, hire a guy who knows how to do it…”

    Thanks a lot for helping.

    Maxime
